Canals or artificial waterways are waterways or engineered channels built for drainage management (e.g. flood control and irrigation) or for conveying water transport vehicles (e.g. water taxis). They carry free, calm surface flow under atmospheric pressure, and can be thought of as artificial rivers.
In most cases, a canal has a series of dams and locks that create reservoirs of slow-moving water. These reservoirs are referred to as slack water levels, often just called levels. A canal can be called a navigation canal when it parallels a natural river, shares part of that river's discharge and drainage basin, and leverages its resources by building dams and locks to increase and lengthen its stretches of slack water while staying in its valley.
A canal can cut across a drainage divide atop a ridge, generally requiring an external water source above the highest elevation. The best-known example of such a canal is the Panama Canal.
Many canals have been built at elevation, above valleys and other waterways. Canals with sources of water at a higher level can deliver water to a destination such as a city where water is needed. The Roman Empire's aqueducts were such water supply canals.
The term was once used to describe linear features seen on the surface of Mars, the so-called Martian canals, which proved to be an optical illusion.
A navigation is a series of channels that run roughly parallel to the valley and stream bed of an unimproved river. A navigation always shares the drainage basin of the river. A vessel uses the calm parts of the river itself as well as improvements, traversing the same changes in height.
A true canal is a channel that cuts across a drainage divide, making a navigable channel connecting two different drainage basins.
Both navigations and canals use engineered structures to improve navigation:
Since they cut across drainage divides, canals are more difficult to construct and often need additional improvements, like viaducts and aqueducts to bridge waters over streams and roads, and ways to keep water in the channel.
There are two broad types of canal:
Historically, canals were of immense importance to commerce and the development, growth and vitality of a civilization. In 1855 the Lehigh Canal carried over 1.2 million tons of anthracite coal; by the 1930s the company that had built and operated it for over a century had pulled the plug. The few canals still in operation in the modern age are a fraction of the number that once fueled and enabled economic growth; indeed, they were practically a prerequisite to further urbanization and industrialization, for the movement of bulk raw materials such as coal and ores is difficult and only marginally affordable without water transport. Such raw materials fueled the industrial developments and new metallurgy resulting from the spiral of increasing mechanization during the 17th–20th centuries, leading to new research disciplines, new industries, economies of scale, and a rising standard of living for any industrialized society.
Most ship canals today primarily serve the bulk cargo and large ship transportation industries, whereas the once-critical smaller inland waterways conceived and engineered as boat and barge canals have largely been supplanted: filled in, abandoned and left to deteriorate, or kept in service and staffed by state employees, with dams and locks maintained for flood control or pleasure boating. Their replacement was gradual, beginning in the United States in the mid-1850s, where canal shipping was first augmented by, then replaced by, railways, which were much faster, far less geographically constrained, and generally cheaper to maintain.
By the early 1880s, canals that had little ability to compete economically with rail transport were off the map. In the next couple of decades, oil increasingly displaced coal as the heating fuel of choice, and growth in coal shipments leveled off. Later, after World War I, when motor trucks came into their own, the last small U.S. barge canals saw a steady decline in cargo ton-miles, alongside many railways, as the flexibility and hill-climbing ability of lorries increasingly took over cargo hauling wherever road networks were improved; trucks could also make deliveries well away from rail lines and canals, which could not operate in the winter.
The longest extant canal today, the Grand Canal in northern China, remains in heavy use, especially the portion south of the Yellow River. It stretches 1,794 kilometres (1,115 miles) from Beijing to Hangzhou.
Canals are built in one of three ways, or a combination of the three, depending on available water and available path:
Smaller transportation canals can carry barges or narrowboats, while ship canals allow seagoing ships to travel to an inland port (e.g., Manchester Ship Canal), or from one sea or ocean to another (e.g., Caledonian Canal, Panama Canal).
At their simplest, canals consist of a trench filled with water. Depending on the stratum the canal passes through, it may be necessary to line the cut with some form of watertight material such as clay or concrete. When this is done with clay, it is known as puddling.
Canals need to be level, and while small irregularities in the lie of the land can be dealt with through cuttings and embankments, for larger deviations other approaches have been adopted. The most common is the pound lock, which consists of a chamber within which the water level can be raised or lowered, connecting either two stretches of canal at different levels or the canal with a river or the sea. When there is a hill to be climbed, flights of many locks in short succession may be used.
Prior to the development of the pound lock in 984 AD in China by Chhiao Wei-yo, and later in Europe in the 15th century, either flash locks consisting of a single gate, or ramps (sometimes equipped with rollers), were used to change the level. Flash locks were only practical where there was plenty of water available.
Locks use a lot of water, so builders have adopted other approaches for situations where little water is available. These include boat lifts, such as the Falkirk Wheel, which use a caisson of water in which boats float while being moved between two levels; and inclined planes where a caisson is hauled up a steep railway.
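To see why lock water consumption matters, note that each lockage sends roughly one chamber-full of water downhill: the chamber's surface area times the lift. A minimal sketch in Python, with purely illustrative dimensions loosely based on a British narrow lock:

    # Rough estimate of the water used per lockage: each cycle releases
    # about one chamber-full (surface area x lift) downstream.
    # All dimensions below are hypothetical, roughly narrow-lock sized.

    def water_per_lockage_m3(length_m: float, width_m: float, lift_m: float) -> float:
        """Cubic metres of water released downstream per lock cycle."""
        return length_m * width_m * lift_m

    # Assumed chamber: 22 m long, 2.2 m wide, 2.0 m lift.
    volume = water_per_lockage_m3(22.0, 2.2, 2.0)
    print(f"~{volume:.0f} cubic metres ({volume * 1000:,.0f} litres) per lockage")

At that scale, a busy flight of locks can pass many thousands of cubic metres downhill each day, which is why boat lifts and inclined planes, which move a sealed caisson rather than emptying a chamber, were attractive where water was scarce.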
To cross a stream, road or valley (where the delay caused by a flight of locks at either side would be unacceptable) the valley can be spanned by a navigable aqueduct – a famous example in Wales is the Pontcysyllte Aqueduct (now a UNESCO World Heritage Site) across the valley of the River Dee.
Another option for dealing with hills is to tunnel through them. An example of this approach is the Harecastle Tunnel on the Trent and Mersey Canal. Tunnels are only practical for smaller canals.
Some canals attempted to keep changes in level to a minimum. These canals, known as contour canals, would take longer, winding routes along which the land was at a uniform altitude. Other, generally later, canals took more direct routes, requiring the use of various methods to deal with the change in level.
Canals have various features to tackle the problem of water supply. In some cases, like the Suez Canal, the canal is open to the sea. Where the canal is not at sea level, a number of approaches have been adopted. Taking water from existing rivers or springs was an option in some cases, sometimes supplemented by other methods to deal with seasonal variations in flow. Where such sources were unavailable, reservoirs – either separate from the canal or built into its course – and back pumping were used to provide the required water. In other cases, water pumped from mines was used to feed the canal. In certain cases, extensive "feeder canals" were built to bring water from sources located far from the canal.
Where large amounts of goods are loaded or unloaded such as at the end of a canal, a canal basin may be built. This would normally be a section of water wider than the general canal. In some cases, the canal basins contain wharfs and cranes to assist with movement of goods.
When a section of the canal needs to be sealed off so it can be drained for maintenance, stop planks are frequently used. These consist of planks of wood placed across the canal to form a dam. They are generally placed in pre-existing grooves in the canal bank. On more modern canals, "guard locks" or gates were sometimes placed to allow a section of the canal to be quickly closed off, either for maintenance or to prevent a major loss of water due to a canal breach.
A canal fall, or canal drop, is a vertical drop in the canal bed. These are built when the natural ground slope is steeper than the desired canal gradient. They are constructed so the falling water's kinetic energy is dissipated in order to prevent it from scouring the bed and sides of the canal.
A canal fall is constructed by cut and fill. It may be combined with a regulator, bridge, or other structure to save costs.
There are various types of canal falls, classified by their shape. One type is the ogee fall, where the drop follows an S-shaped curve to create a smooth transition and reduce turbulence. However, this smooth transition does not dissipate the water's kinetic energy, which leads to heavy scouring. As a result, the canal needs to be reinforced with concrete or masonry to protect it from erosion.
Another type of canal fall is the vertical fall, which is "simple and economical". These feature a "cistern", a depressed area just downstream from the fall, which "cushions" the water by providing a deep pool in which its kinetic energy is diffused. Vertical falls work for drops of up to 1.5 m in height and for discharges of up to 15 cubic metres per second.
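As a rough illustration of how such limits might be applied, the following sketch selects a fall type using only the two thresholds quoted above; the function and its wording are hypothetical, not a design standard:

    # Illustrative choice of canal-fall type using the limits quoted above:
    # a vertical fall suits drops up to 1.5 m and discharges up to 15 m^3/s.

    def suggest_fall_type(drop_m: float, discharge_m3_s: float) -> str:
        if drop_m <= 1.5 and discharge_m3_s <= 15.0:
            return "vertical fall (cistern pool dissipates the energy)"
        return "another design, e.g. an ogee fall with concrete/masonry protection"

    print(suggest_fall_type(1.2, 10.0))   # within the vertical-fall limits
    print(suggest_fall_type(2.5, 30.0))   # exceeds them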
The transport capacity of pack animals and carts is limited. A mule can carry an eighth-ton [250 pounds (113 kg)] maximum load over a journey measured in days and weeks, though much more for shorter distances and periods with appropriate rest. Besides, carts need roads. Transport over water is much more efficient and cost-effective for large cargoes.
The oldest known canals were irrigation canals built in Mesopotamia circa 4000 BC, in what is now Iraq. The Indus Valley civilization of ancient India (circa 3000 BC) had developed sophisticated irrigation and storage systems, including the reservoirs built at Girnar in 3000 BC. This was the first time that such a planned civil engineering project had taken place in the ancient world. In Egypt, canals date back at least to the time of Pepi I Meryre (reigned 2332–2283 BC), who ordered a canal built to bypass the cataract on the Nile near Aswan.
In ancient China, large canals for river transport were established as far back as the Spring and Autumn Period (8th–5th centuries BC), the longest one of that period being the Hong Gou (Canal of the Wild Geese), which according to the ancient historian Sima Qian connected the old states of Song, Zhang, Chen, Cai, Cao, and Wei. The Caoyun System of canals was essential for imperial taxation, which was largely assessed in kind and involved enormous shipments of rice and other grains. By far the longest canal was the Grand Canal of China, still the longest canal in the world today and the oldest extant one. It is 1,794 kilometres (1,115 mi) long and was built to carry the Emperor Yang Guang between Zhuodu (Beijing) and Yuhang (Hangzhou). The project began in 605 and was completed in 609, although much of the work combined older canals, the oldest section of the canal existing since at least 486 BC. Even in its narrowest urban sections it is rarely less than 30 metres (98 ft) wide.
In the 5th century BC, Achaemenid king Xerxes I of Persia ordered the construction of the Xerxes Canal through the base of Mount Athos peninsula, Chalkidiki, northern Greece. It was constructed as part of his preparations for the Second Persian invasion of Greece, a part of the Greco-Persian Wars. It is one of the few monuments left by the Persian Empire in Europe.
Greek engineers were also among the first to use canal locks, by which they regulated the water flow in the Ancient Suez Canal as early as the 3rd century BC.
"There was little experience moving bulk loads by carts, while a pack-horse would [i.e. could] carry only an eighth of a ton. On a soft road a horse might be able to draw 5/8ths of a ton. But if the load were carried by a barge on a waterway, then up to 30 tons could be drawn by the same horse," wrote technology historian Ronald W. Clark of transport realities before the industrial revolution and the canal age.
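Clark's figures imply stark per-horse ratios, which the following quick, purely illustrative calculation makes explicit (the tonnages are those in the quote above):

    # Per-horse cargo ratios implied by Clark's figures.
    pack_horse_tons = 1 / 8   # carried on its back
    cart_tons = 5 / 8         # drawn on a soft road
    barge_tons = 30           # towed on a waterway

    print(f"cart vs pack horse:  {cart_tons / pack_horse_tons:.0f}x")   # 5x
    print(f"barge vs cart:       {barge_tons / cart_tons:.0f}x")        # 48x
    print(f"barge vs pack horse: {barge_tons / pack_horse_tons:.0f}x")  # 240x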
The Hohokam were a society in the North American Southwest, in what is now part of Arizona, United States, and Sonora, Mexico. Their irrigation systems supported the largest population in the Southwest by 1300 CE. Archaeologists working at a major excavation in the 1990s in the Tucson Basin, along the Santa Cruz River, identified a culture and people that may have been the ancestors of the Hohokam. This prehistoric group occupied southern Arizona as early as 2000 BCE, and in the Early Agricultural Period grew corn, lived year-round in sedentary villages, and developed sophisticated irrigation canals. The large-scale Hohokam irrigation network in the Phoenix metropolitan area was the most complex in ancient North America. A portion of the ancient canals has been renovated for the Salt River Project and now helps to supply the city's water.
In the Middle Ages, water transport was several times cheaper and faster than transport overland. Overland transport by animal-drawn conveyances was used around settled areas, but unimproved roads required pack-animal trains, usually of mules, to carry any volume of goods. While a mule could carry an eighth of a ton, it also needed teamsters to tend it, and one man could tend perhaps only five mules, so overland bulk transport was also expensive, as men expect compensation in the form of wages, room and board. This was because long-haul roads were unpaved, more often than not too narrow for carts, much less wagons, and in poor condition, wending their way through forests and marshy or muddy quagmires as often as over unimproved but dry footing. In that era, as today, greater cargoes, especially bulk goods and raw materials, could be transported by ship far more economically than by land; in the pre-railroad days of the industrial revolution, water transport was the gold standard of fast transportation. The first artificial canal in Western Europe was the Fossa Carolina, built at the end of the 8th century under the personal supervision of Charlemagne.
In Britain, the Glastonbury Canal is believed to be the first post-Roman canal; it was built in the middle of the 10th century to link the River Brue at Northover with Glastonbury Abbey, a distance of about 1.75 kilometres (1,900 yd). Its initial purpose is believed to have been the transport of building stone for the abbey, but later it was used for delivering produce, including grain, wine and fish, from the abbey's outlying properties. It remained in use until at least the 14th century, but possibly as late as the mid-16th century. More lasting and of greater economic impact were canals like the Naviglio Grande, built between 1127 and 1257 to connect Milan with the river Ticino. The Naviglio Grande is the most important of the Lombard "navigli" and the oldest functioning canal in Europe. Later, canals were built in the Netherlands and Flanders to drain the polders and assist the transportation of goods and people.
Canal building was revived in this age because of commercial expansion from the 12th century. River navigations were improved progressively by the use of single locks, or flash locks. Taking boats through these used large amounts of water, leading to conflicts with watermill owners; to correct this, the pound or chamber lock first appeared, in the 10th century in China and in Europe in 1373 in Vreeswijk, Netherlands. Another important development was the mitre gate, which was, it is presumed, introduced in Italy by Bertola da Novate in the 16th century. This allowed wider gates and also removed the height restriction of guillotine locks.
To break out of the limitations caused by river valleys, the first summit-level canals were developed, with the Grand Canal of China in 581–617 AD, whilst in Europe the first, also using single locks, was the Stecknitz Canal in Germany in 1398.
In the Songhai Empire of West Africa, several canals were constructed under Sunni Ali and Askia Muhammad I between Kabara and Timbuktu in the 15th century. These were used primarily for irrigation and transport. Sunni Ali also attempted to construct a canal from the Niger River to Walata to facilitate conquest of the city but his progress was halted when he went to war with the Mossi Kingdoms.
In the period around 1500–1800, the first summit-level canal to use pound locks in Europe was the Briare Canal, connecting the Loire and Seine (1642), followed by the more ambitious Canal du Midi (1683), connecting the Atlantic to the Mediterranean. The latter included a staircase of 8 locks at Béziers, a 157-metre (515 ft) tunnel, and three major aqueducts.
Canal building progressed steadily in Germany in the 17th and 18th centuries, with three great rivers, the Elbe, Oder and Weser, being linked by canals. In post-Roman Britain, the first canal built in the early modern period appears to have been the Exeter Canal, which was surveyed in 1563 and opened in 1566.
The oldest canal in the European settlements of North America, technically a mill race built for industrial purposes, is Mother Brook, which runs between Dedham, Massachusetts, and the Boston neighbourhood of Hyde Park, connecting the higher waters of the Charles River with the mouth of the Neponset River and the sea. It was constructed in 1639 to provide water power for mills.
In Russia, the Volga–Baltic Waterway, a nationwide canal system connecting the Baltic Sea and Caspian Sea via the Neva and Volga rivers, was opened in 1718.
The modern canal system was mainly a product of the 18th century and early 19th century. It came into being because the Industrial Revolution (which began in Britain during the mid-18th century) demanded an economic and reliable way to transport goods and commodities in large quantities.
By the early 18th century, river navigations such as the Aire and Calder Navigation were becoming quite sophisticated, with pound locks and longer and longer "cuts" (some with intermediate locks) to avoid circuitous or difficult stretches of river. Eventually, the experience of building long multi-level cuts with their own locks gave rise to the idea of building a "pure" canal, a waterway designed on the basis of where goods needed to go, not where a river happened to be.
The claim for the first pure canal in Great Britain is debated between "Sankey" and "Bridgewater" supporters. The first true canal in what is now the United Kingdom was the Newry Canal in Northern Ireland constructed by Thomas Steers in 1741.
The Sankey Brook Navigation, which connected St Helens with the River Mersey, is often claimed as the first modern "purely artificial" canal because although originally a scheme to make the Sankey Brook navigable, it included an entirely new artificial channel that was effectively a canal along the Sankey Brook valley. However, "Bridgewater" supporters point out that the last quarter-mile of the navigation is indeed a canalized stretch of the Brook, and that it was the Bridgewater Canal (less obviously associated with an existing river) that captured the popular imagination and inspired further canals.
In the mid-eighteenth century the 3rd Duke of Bridgewater, who owned a number of coal mines in northern England, wanted a reliable way to transport his coal to the rapidly industrializing city of Manchester. He commissioned the engineer James Brindley to build a canal for that purpose. Brindley's design included an aqueduct carrying the canal over the River Irwell. This was an engineering wonder which immediately attracted tourists. The construction of this canal was funded entirely by the Duke and was called the Bridgewater Canal. It opened in 1761 and was the first major British canal.
The new canals proved highly successful. The boats on the canal were horse-drawn with a towpath alongside the canal for the horse to walk along. This horse-drawn system proved to be highly economical and became standard across the British canal network. Commercial horse-drawn canal boats could be seen on the UK's canals until as late as the 1950s, although by then diesel-powered boats, often towing a second unpowered boat, had become standard.
The canal boats could carry thirty tons at a time with only one horse pulling – more than ten times the amount of cargo per horse that was possible with a cart. Because of this huge increase in supply, the Bridgewater Canal reduced the price of coal in Manchester by nearly two-thirds within just a year of its opening. The Bridgewater was also a huge financial success, earning back what had been spent on its construction within just a few years.
This success proved the viability of canal transport, and soon industrialists in many other parts of the country wanted canals. After the Bridgewater canal, early canals were built by groups of private individuals with an interest in improving communications. In Staffordshire the famous potter Josiah Wedgwood saw an opportunity to bring bulky cargoes of clay to his factory doors and to transport his fragile finished goods to market in Manchester, Birmingham or further away, by water, minimizing breakages. Within just a few years of the Bridgewater's opening, an embryonic national canal network came into being, with the construction of canals such as the Oxford Canal and the Trent & Mersey Canal.
The new canal system was both cause and effect of the rapid industrialization of the Midlands and the north of England. The period between the 1770s and the 1830s is often referred to as the "Golden Age" of British canals.
For each canal, an Act of Parliament was necessary to authorize construction, and as people saw the high incomes achieved from canal tolls, canal proposals came to be put forward by investors interested in profiting from dividends, at least as much as by people whose businesses would profit from cheaper transport of raw materials and finished goods.
In a further development, there was often out-and-out speculation, where people would try to buy shares in a newly floated company to sell them on for an immediate profit, regardless of whether the canal was ever profitable, or even built. During this period of "canal mania", huge sums were invested in canal building, and although many schemes came to nothing, the canal system rapidly expanded to nearly 4,000 miles (over 6,400 kilometres) in length.
Many rival canal companies were formed and competition was rampant. Perhaps the best example was Worcester Bar in Birmingham, a point where the Worcester and Birmingham Canal and the Birmingham Canal Navigations Main Line were only seven feet apart. For many years, a dispute about tolls meant that goods travelling through Birmingham had to be portaged from boats in one canal to boats in the other.
Canal companies were initially chartered by individual states in the United States. These early canals were constructed, owned, and operated by private joint-stock companies. Four were completed when the War of 1812 broke out: the South Hadley Canal (opened 1795) in Massachusetts, the Santee Canal (opened 1800) in South Carolina, the Middlesex Canal (opened 1802) also in Massachusetts, and the Dismal Swamp Canal (opened 1805) in Virginia. The Erie Canal (opened 1825) was chartered and owned by the state of New York and financed by bonds bought by private investors. The Erie Canal runs about 363 miles (584 km) from Albany, New York, on the Hudson River to Buffalo, New York, at Lake Erie. The Hudson River connects Albany to the Atlantic port of New York City, and the Erie Canal completed a navigable water route from the Atlantic Ocean to the Great Lakes. The canal contains 36 locks and encompasses a total elevation differential of around 565 feet (169 m).

The Erie Canal, with its easy connections to most of the U.S. Midwest and New York City, quickly paid back all its invested capital (US$7 million) and started turning a profit. By cutting transportation costs in half or more, it became a large profit center for Albany and New York City, as it allowed the cheap transportation of many of the agricultural products grown in the Midwest of the United States to the rest of the world. From New York City these agricultural products could easily be shipped to other U.S. states or overseas. Assured of a market for their farm products, settlers moved to the U.S. Midwest in great numbers, its settlement greatly accelerated by the Erie Canal. The profits generated by the Erie Canal project started a canal-building boom in the United States that lasted until about 1850, when railroads started becoming seriously competitive in price and convenience. The Blackstone Canal (finished in 1828) in Massachusetts and Rhode Island fulfilled a similar role in the early industrial revolution between 1828 and 1848. The Blackstone Valley was a major contributor to the American Industrial Revolution; it was there that Samuel Slater built his first textile mill.
A power canal is a canal used for hydraulic power generation rather than for transport. Nowadays power canals are built almost exclusively as parts of hydroelectric power stations. Parts of the United States, particularly in the Northeast, had enough fast-flowing rivers that water power was the primary means of powering factories (usually textile mills) until after the American Civil War. For example, Lowell, Massachusetts, considered to be "The Cradle of the American Industrial Revolution," has 6 miles (9.7 km) of canals, built from around 1790 to 1850, that provided water power and a means of transportation for the city. The output of the system is estimated at 10,000 horsepower. Other cities with extensive power canal systems include Lawrence, Massachusetts; Holyoke, Massachusetts; Manchester, New Hampshire; and Augusta, Georgia. The most notable power canal was built in 1862 for the Niagara Falls Hydraulic Power and Manufacturing Company.
Competition from railways from the 1830s, and from roads in the 20th century, made the smaller canals obsolete for most commercial transport, and many of the British canals fell into decay. Only the Manchester Ship Canal and the Aire and Calder Canal bucked this trend. Yet in other countries canals grew in size as construction techniques improved. During the 19th century in the US, the length of canals grew from 100 miles (161 km) to over 4,000 miles (6,400 km), with a complex network making the Great Lakes navigable, in conjunction with Canada, although some canals were later drained and used as railroad rights-of-way.
In the United States, navigable canals reached into isolated areas and brought them in touch with the world beyond. By 1825 the Erie Canal, 363 miles (584 km) long with 36 locks, opened up a connection from the populated Northeast to the Great Lakes. Settlers flooded into regions served by such canals, since access to markets was available. The Erie Canal (as well as other canals) was instrumental in lowering the differences in commodity prices between these various markets across America. The canals caused price convergence between different regions by reducing transportation costs, which allowed Americans to ship and buy goods from farther away much more cheaply. Ohio built many miles of canal, Indiana had working canals for a few decades, and the Illinois and Michigan Canal connected the Great Lakes to the Mississippi River system until it was replaced by a channelized river waterway.
Three major canals with very different purposes were built in what is now Canada. The first Welland Canal, which opened in 1829 between Lake Ontario and Lake Erie, bypassing Niagara Falls, and the Lachine Canal (1825), which allowed ships to skirt the nearly impassable rapids on the St. Lawrence River at Montreal, were built for commerce. The Rideau Canal, completed in 1832, connects Ottawa on the Ottawa River to Kingston, Ontario, on Lake Ontario. The Rideau Canal was built as a result of the War of 1812, to provide military transportation between the British colonies of Upper Canada and Lower Canada as an alternative to the stretch of the St. Lawrence River that was susceptible to blockade by the United States.
In France, a steady linking of all the river systems – Rhine, Rhône, Saône and Seine – and the North Sea was boosted in 1879 by the establishment of the Freycinet gauge, which specified the minimum size of locks. Canal traffic doubled in the first decades of the 20th century.
Many notable sea canals were completed in this period, starting with the Suez Canal (1869) – which carries tonnage many times that of most other canals – and the Kiel Canal (1897), though the Panama Canal was not opened until 1914.
In the 19th century, a number of canals were built in Japan including the Biwako canal and the Tone canal. These canals were partially built with the help of engineers from the Netherlands and other countries.
A major question was how to connect the Atlantic and the Pacific with a canal through narrow Central America. (The Panama Railroad opened in 1855.) The original proposal was for a sea-level canal through what is today Nicaragua, taking advantage of the relatively large Lake Nicaragua. This canal has never been built in part because of political instability, which scared off potential investors. It remains an active project (the geography has not changed), and in the 2010s Chinese involvement was developing.
The second choice for a Central American canal was a Panama canal. The de Lesseps company, which ran the Suez Canal, first attempted to build a Panama canal in the 1880s. The difficulty of the terrain and the weather (rain) caused the company to go bankrupt. High worker mortality from disease also discouraged further investment in the project. De Lesseps' abandoned excavating equipment sits where it was left, isolated decaying machines that are today tourist attractions.
Twenty years later, an expansionist United States, which had just acquired colonies after defeating Spain in the 1898 Spanish–American War and whose navy had grown in importance, decided to reactivate the project. The United States and Colombia did not reach agreement on the terms of a canal treaty (see Hay–Herrán Treaty). Panama, which did not have (and still does not have) a land connection with the rest of Colombia, was already thinking of independence. In 1903 the United States, with support from Panamanians who expected the canal to provide substantial wages, revenues, and markets for local goods and services, took the province of Panama away from Colombia and set up a puppet republic (Panama). Its currency, the balboa (a name that suggests the country began as a way to get from one hemisphere to the other), was a replica of the US dollar, and the US dollar was and remains legal tender. A U.S. military zone, the Canal Zone, 10 miles (16 km) wide, with U.S. military bases, two TV stations (channels 8 and 10), PXs, and a U.S.-style high school, split Panama in half. The canal, a major engineering project, was built. The U.S. did not feel that conditions were stable enough to withdraw until 1979, and the withdrawal from Panama contributed to President Jimmy Carter's defeat in 1980.
Large-scale ship canals such as the Panama Canal and Suez Canal continue to operate for cargo transportation, as do European barge canals. Due to globalization, they are becoming increasingly important, resulting in expansion projects such as the Panama Canal expansion project. The expanded canal began commercial operation on 26 June 2016; the new set of locks allows transit of larger Post-Panamax and New Panamax ships.
The narrow early industrial canals, however, have ceased to carry significant amounts of trade, and many have been closed to navigation, though some are still used to transport untreated water. In some cases railways have been built along the canal route, an example being the Croydon Canal.
A movement that began in Britain and France to use the early industrial canals for pleasure boats, such as hotel barges, has spurred rehabilitation of stretches of historic canals. In some cases, abandoned canals such as the Kennet and Avon Canal have been restored and are now used by pleasure boaters. In Britain, canalside housing has also proven popular in recent years.
The Seine–Nord Europe Canal is being developed into a major transportation waterway, linking France with Belgium, Germany, and the Netherlands.
Canals have found another use in the 21st century, as easements for the installation of fibre-optic telecommunications cabling, avoiding burying it in roadways while facilitating access and reducing the hazard of damage from digging equipment.
Canals are still used to provide water for agriculture. An extensive canal system exists within the Imperial Valley in the Southern California desert to provide irrigation to agriculture within the area.
Canals are so deeply identified with Venice that many canal cities have been nicknamed "the Venice of…". The city is built on marshy islands, with wooden piles supporting the buildings, so that the land is man-made rather than the waterways. The islands have a long history of settlement; by the 12th century, Venice was a powerful city state.
Amsterdam was built in a similar way, with buildings on wooden piles. It became a city around 1300. Many Amsterdam canals were built as part of fortifications. They became grachten when the city was enlarged and houses were built alongside the water. Its nickname as the "Venice of the North" is shared with Hamburg of Germany, St. Petersburg of Russia and Bruges of Belgium.
Suzhou was dubbed the "Venice of the East" by Marco Polo during his travels there in the 13th century, with its modern canalside Pingjiang Road and Shantang Street becoming major tourist attractions. Other nearby cities, including Nanjing, Shanghai, Wuxi, Jiaxing, Huzhou, Nantong, Taizhou, Yangzhou, and Changzhou, are located along the lower reaches of the Yangtze River and around Lake Tai, fed by numerous small rivers and creeks that have been canalized and developed for centuries.
Other cities with extensive canal networks include: Alkmaar, Amersfoort, Bolsward, Brielle, Delft, Den Bosch, Dokkum, Dordrecht, Enkhuizen, Franeker, Gouda, Haarlem, Harlingen, Leeuwarden, Leiden, Sneek and Utrecht in the Netherlands; Brugge and Gent in Flanders, Belgium; Birmingham in England; Saint Petersburg in Russia; Bydgoszcz, Gdańsk, Szczecin and Wrocław in Poland; Aveiro in Portugal; Hamburg and Berlin in Germany; Fort Lauderdale and Cape Coral in Florida, United States, Wenzhou in China, Cần Thơ in Vietnam, Bangkok in Thailand, and Lahore in Pakistan.
Liverpool Maritime Mercantile City was a UNESCO World Heritage Site near the centre of Liverpool, England, where a system of intertwining waterways and docks is now being developed for mainly residential and leisure use.
Canal estates (sometimes known as bayous in the United States) are a form of subdivision popular in cities like Miami, Florida, Texas City, Texas and the Gold Coast, Queensland; the Gold Coast has over 890 km of residential canals. Wetlands are difficult areas upon which to build housing estates, so dredging part of the wetland down to a navigable channel provides fill to build up another part of the wetland above the flood level for houses. Land is built up in a finger pattern that provides a suburban street layout of waterfront housing blocks.
Inland canals have often had boats specifically built for them. An example of this is the British narrowboat, which is up to 72 feet (21.95 m) long and 7 feet (2.13 m) wide and was primarily built for the British Midland canals. In this case the limiting factor was the size of the locks. This is also the limiting factor on the Panama Canal, where Panamax ships were limited to a length of 289.56 m (950 ft) and a beam of 32.31 m (106 ft) until 26 June 2016, when the opening of larger locks allowed for the passage of larger New Panamax ships. For the lockless Suez Canal the limiting factor for Suezmax ships is generally draft, which is limited to 16 m (52.5 ft). At the other end of the scale, tub-boat canals such as the Bude Canal were limited to boats of under 10 tons for much of their length due to the capacity of their inclined planes or boat lifts. Most canals have a limit on height imposed either by bridges or by tunnels.
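These gauge limits lend themselves to a simple dimensional check. Below is a minimal Python sketch encoding only the figures quoted in this paragraph; the class and field names are invented for illustration, and a field left as None simply means no limit is stated in the text:

    # A sketch of a canal gauge check, using the limits quoted above.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CanalGauge:
        name: str
        max_length_m: Optional[float] = None
        max_beam_m: Optional[float] = None
        max_draft_m: Optional[float] = None

        def fits(self, length_m: float, beam_m: float, draft_m: float) -> bool:
            checks = ((self.max_length_m, length_m),
                      (self.max_beam_m, beam_m),
                      (self.max_draft_m, draft_m))
            return all(limit is None or value <= limit for limit, value in checks)

    narrow_canal = CanalGauge("British narrow canal", 21.95, 2.13)
    panamax = CanalGauge("Panama Canal, pre-2016 locks", 289.56, 32.31)
    suezmax = CanalGauge("Suez Canal", max_draft_m=16.0)

    # A hypothetical vessel: 280 m long, 32 m beam, 15 m draft.
    for gauge in (narrow_canal, panamax, suezmax):
        print(gauge.name, gauge.fits(280.0, 32.0, 15.0))  # False, True, True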
"title": "History"
},
{
"paragraph_id": 54,
"text": "The canal boats could carry thirty tons at a time with only one horse pulling – more than ten times the amount of cargo per horse that was possible with a cart. Because of this huge increase in supply, the Bridgewater canal reduced the price of coal in Manchester by nearly two-thirds within just a year of its opening. The Bridgewater was also a huge financial success, with it earning what had been spent on its construction within just a few years.",
"title": "History"
},
{
"paragraph_id": 55,
"text": "This success proved the viability of canal transport, and soon industrialists in many other parts of the country wanted canals. After the Bridgewater canal, early canals were built by groups of private individuals with an interest in improving communications. In Staffordshire the famous potter Josiah Wedgwood saw an opportunity to bring bulky cargoes of clay to his factory doors and to transport his fragile finished goods to market in Manchester, Birmingham or further away, by water, minimizing breakages. Within just a few years of the Bridgewater's opening, an embryonic national canal network came into being, with the construction of canals such as the Oxford Canal and the Trent & Mersey Canal.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "The new canal system was both cause and effect of the rapid industrialization of The Midlands and the north. The period between the 1770s and the 1830s is often referred to as the \"Golden Age\" of British canals.",
"title": "History"
},
{
"paragraph_id": 57,
"text": "For each canal, an Act of Parliament was necessary to authorize construction, and as people saw the high incomes achieved from canal tolls, canal proposals came to be put forward by investors interested in profiting from dividends, at least as much as by people whose businesses would profit from cheaper transport of raw materials and finished goods.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "In a further development, there was often out-and-out speculation, where people would try to buy shares in a newly floated company to sell them on for an immediate profit, regardless of whether the canal was ever profitable, or even built. During this period of \"canal mania\", huge sums were invested in canal building, and although many schemes came to nothing, the canal system rapidly expanded to nearly 4,000 miles (over 6,400 kilometres) in length.",
"title": "History"
},
{
"paragraph_id": 59,
"text": "Many rival canal companies were formed and competition was rampant. Perhaps the best example was Worcester Bar in Birmingham, a point where the Worcester and Birmingham Canal and the Birmingham Canal Navigations Main Line were only seven feet apart. For many years, a dispute about tolls meant that goods travelling through Birmingham had to be portaged from boats in one canal to boats in the other.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "Canal companies were initially chartered by individual states in the United States. These early canals were constructed, owned, and operated by private joint-stock companies. Four were completed when the War of 1812 broke out; these were the South Hadley Canal (opened 1795) in Massachusetts, Santee Canal (opened 1800) in South Carolina, the Middlesex Canal (opened 1802) also in Massachusetts, and the Dismal Swamp Canal (opened 1805) in Virginia. The Erie Canal (opened 1825) was chartered and owned by the state of New York and financed by bonds bought by private investors. The Erie canal runs about 363 miles (584 km) from Albany, New York, on the Hudson River to Buffalo, New York, at Lake Erie. The Hudson River connects Albany to the Atlantic port of New York City and the Erie Canal completed a navigable water route from the Atlantic Ocean to the Great Lakes. The canal contains 36 locks and encompasses a total elevation differential of around 565 ft. (169 m). The Erie Canal with its easy connections to most of the U.S. mid-west and New York City soon quickly paid back all its invested capital (US$7 million) and started turning a profit. By cutting transportation costs in half or more it became a large profit center for Albany and New York City as it allowed the cheap transportation of many of the agricultural products grown in the mid west of the United States to the rest of the world. From New York City these agricultural products could easily be shipped to other U.S. states or overseas. Assured of a market for their farm products the settlement of the U.S. mid-west was greatly accelerated by the Erie Canal. The profits generated by the Erie Canal project started a canal building boom in the United States that lasted until about 1850 when railroads started becoming seriously competitive in price and convenience. The Blackstone Canal (finished in 1828) in Massachusetts and Rhode Island fulfilled a similar role in the early industrial revolution between 1828 and 1848. The Blackstone Valley was a major contributor of the American Industrial Revolution where Samuel Slater built his first textile mill.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "A power canal refers to a canal used for hydraulic power generation, rather than for transport. Nowadays power canals are built almost exclusively as parts of hydroelectric power stations. Parts of the United States, particularly in the Northeast, had enough fast-flowing rivers that water power was the primary means of powering factories (usually textile mills) until after the American Civil War. For example, Lowell, Massachusetts, considered to be \"The Cradle of the American Industrial Revolution,\" has 6 miles (9.7 km) of canals, built from around 1790 to 1850, that provided water power and a means of transportation for the city. The output of the system is estimated at 10,000 horsepower. Other cities with extensive power canal systems include Lawrence, Massachusetts, Holyoke, Massachusetts, Manchester, New Hampshire, and Augusta, Georgia. The most notable power canal was built in 1862 for the Niagara Falls Hydraulic Power and Manufacturing Company.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "Competition, from railways from the 1830s and roads in the 20th century, made the smaller canals obsolete for most commercial transport, and many of the British canals fell into decay. Only the Manchester Ship Canal and the Aire and Calder Canal bucked this trend. Yet in other countries canals grew in size as construction techniques improved. During the 19th century in the US, the length of canals grew from 100 miles (161 km) to over 4,000, with a complex network making the Great Lakes navigable, in conjunction with Canada, although some canals were later drained and used as railroad rights-of-way.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "In the United States, navigable canals reached into isolated areas and brought them in touch with the world beyond. By 1825 the Erie Canal, 363 miles (584 km) long with 36 locks, opened up a connection from the populated Northeast to the Great Lakes. Settlers flooded into regions serviced by such canals, since access to markets was available. The Erie Canal (as well as other canals) was instrumental in lowering the differences in commodity prices between these various markets across America. The canals caused price convergence between different regions because of their reduction in transportation costs, which allowed Americans to ship and buy goods from farther distances much cheaper. Ohio built many miles of canal, Indiana had working canals for a few decades, and the Illinois and Michigan Canal connected the Great Lakes to the Mississippi River system until replaced by a channelized river waterway.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "Three major canals with very different purposes were built in what is now Canada. The first Welland Canal, which opened in 1829 between Lake Ontario and Lake Erie, bypassing Niagara Falls and the Lachine Canal (1825), which allowed ships to skirt the nearly impassable rapids on the St. Lawrence River at Montreal, were built for commerce. The Rideau Canal, completed in 1832, connects Ottawa on the Ottawa River to Kingston, Ontario on Lake Ontario. The Rideau Canal was built as a result of the War of 1812 to provide military transportation between the British colonies of Upper Canada and Lower Canada as an alternative to part of the St. Lawrence River, which was susceptible to blockade by the United States.",
"title": "History"
},
{
"paragraph_id": 65,
"text": "In France, a steady linking of all the river systems – Rhine, Rhône, Saône and Seine – and the North Sea was boosted in 1879 by the establishment of the Freycinet gauge, which specified the minimum size of locks. Canal traffic doubled in the first decades of the 20th century.",
"title": "History"
},
{
"paragraph_id": 66,
"text": "Many notable sea canals were completed in this period, starting with the Suez Canal (1869) – which carries tonnage many times that of most other canals – and the Kiel Canal (1897), though the Panama Canal was not opened until 1914.",
"title": "History"
},
{
"paragraph_id": 67,
"text": "In the 19th century, a number of canals were built in Japan including the Biwako canal and the Tone canal. These canals were partially built with the help of engineers from the Netherlands and other countries.",
"title": "History"
},
{
"paragraph_id": 68,
"text": "A major question was how to connect the Atlantic and the Pacific with a canal through narrow Central America. (The Panama Railroad opened in 1855.) The original proposal was for a sea-level canal through what is today Nicaragua, taking advantage of the relatively large Lake Nicaragua. This canal has never been built in part because of political instability, which scared off potential investors. It remains an active project (the geography has not changed), and in the 2010s Chinese involvement was developing.",
"title": "History"
},
{
"paragraph_id": 69,
"text": "The second choice for a Central American canal was a Panama canal. The De Lessups company, which ran the Suez Canal, first attempted to build a Panama Canal in the 1880s. The difficulty of the terrain and weather (rain) encountered caused the company to go bankrupt. High worker mortality from disease also discouraged further investment in the project. DeLessup's abandoned excavating equipment sits, isolated decaying machines, today tourist attractions.",
"title": "History"
},
{
"paragraph_id": 70,
"text": "Twenty years later, an expansionist United States, that just acquired colonies after defeating Spain in the 1898 Spanish–American War, and whose Navy became more important, decided to reactivate the project. The United States and Colombia did not reach agreement on the terms of a canal treaty (see Hay–Herrán Treaty). Panama, which did not have (and still does not have) a land connection with the rest of Colombia, was already thinking of independence. In 1903 the United States, with support from Panamanians who expected the canal to provide substantial wages, revenues, and markets for local goods and services, took Panama province away from Colombia, and set up a puppet republic (Panama). Its currency, the Balboa – a name that suggests the country began as a way to get from one hemisphere to the other – was a replica of the US dollar. The US dollar was and remains legal tender (used as currency). A U.S. military zone, the Canal Zone, 10 miles (16 km) wide, with U.S. military stationed there (bases, 2 TV stations, channels 8 and 10, Pxs, a U.S.-style high school), split Panama in half. The Canal – a major engineering project – was built. The U.S. did not feel that conditions were stable enough to withdraw until 1979. The withdrawal from Panama contributed to President Jimmy Carter's defeat in 1980.",
"title": "History"
},
{
"paragraph_id": 71,
"text": "Large-scale ship canals such as the Panama Canal and Suez Canal continue to operate for cargo transportation, as do European barge canals. Due to globalization, they are becoming increasingly important, resulting in expansion projects such as the Panama Canal expansion project. The expanded canal began commercial operation on 26 June 2016. The new set of locks allow transit of larger, Post-Panamax and New Panamax ships.",
"title": "History"
},
{
"paragraph_id": 72,
"text": "The narrow early industrial canals, however, have ceased to carry significant amounts of trade and many have been abandoned to navigation, but may still be used as a system for transportation of untreated water. In some cases railways have been built along the canal route, an example being the Croydon Canal.",
"title": "History"
},
{
"paragraph_id": 73,
"text": "A movement that began in Britain and France to use the early industrial canals for pleasure boats, such as hotel barges, has spurred rehabilitation of stretches of historic canals. In some cases, abandoned canals such as the Kennet and Avon Canal have been restored and are now used by pleasure boaters. In Britain, canalside housing has also proven popular in recent years.",
"title": "History"
},
{
"paragraph_id": 74,
"text": "The Seine–Nord Europe Canal is being developed into a major transportation waterway, linking France with Belgium, Germany, and the Netherlands.",
"title": "History"
},
{
"paragraph_id": 75,
"text": "Canals have found another use in the 21st century, as easements for the installation of fibre optic telecommunications network cabling, avoiding having them buried in roadways while facilitating access and reducing the hazard of being damaged from digging equipment.",
"title": "History"
},
{
"paragraph_id": 76,
"text": "Canals are still used to provide water for agriculture. An extensive canal system exists within the Imperial Valley in the Southern California desert to provide irrigation to agriculture within the area.",
"title": "History"
},
{
"paragraph_id": 77,
"text": "Canals are so deeply identified with Venice that many canal cities have been nicknamed \"the Venice of…\". The city is built on marshy islands, with wooden piles supporting the buildings, so that the land is man-made rather than the waterways. The islands have a long history of settlement; by the 12th century, Venice was a powerful city state.",
"title": "Cities on water"
},
{
"paragraph_id": 78,
"text": "Amsterdam was built in a similar way, with buildings on wooden piles. It became a city around 1300. Many Amsterdam canals were built as part of fortifications. They became grachten when the city was enlarged and houses were built alongside the water. Its nickname as the \"Venice of the North\" is shared with Hamburg of Germany, St. Petersburg of Russia and Bruges of Belgium.",
"title": "Cities on water"
},
{
"paragraph_id": 79,
"text": "Suzhou was dubbed the \"Venice of the East\" by Marco Polo during his travels there in the 13th century, with its modern canalside Pingjiang Road and Shantang Street becoming major tourist attractions. Other nearby cities including Nanjing, Shanghai, Wuxi, Jiaxing, Huzhou, Nantong, Taizhou, Yangzhou, and Changzhou are located along the lower mouth of the Yangtze River and Lake Tai, yet another source of small rivers and creeks, which have been canalized and developed for centuries.",
"title": "Cities on water"
},
{
"paragraph_id": 80,
"text": "Other cities with extensive canal networks include: Alkmaar, Amersfoort, Bolsward, Brielle, Delft, Den Bosch, Dokkum, Dordrecht, Enkhuizen, Franeker, Gouda, Haarlem, Harlingen, Leeuwarden, Leiden, Sneek and Utrecht in the Netherlands; Brugge and Gent in Flanders, Belgium; Birmingham in England; Saint Petersburg in Russia; Bydgoszcz, Gdańsk, Szczecin and Wrocław in Poland; Aveiro in Portugal; Hamburg and Berlin in Germany; Fort Lauderdale and Cape Coral in Florida, United States, Wenzhou in China, Cần Thơ in Vietnam, Bangkok in Thailand, and Lahore in Pakistan.",
"title": "Cities on water"
},
{
"paragraph_id": 81,
"text": "Liverpool Maritime Mercantile City was a UNESCO World Heritage Site near the centre of Liverpool, England, where a system of intertwining waterways and docks is now being developed for mainly residential and leisure use.",
"title": "Cities on water"
},
{
"paragraph_id": 82,
"text": "Canal estates (sometimes known as bayous in the United States) are a form of subdivision popular in cities like Miami, Florida, Texas City, Texas and the Gold Coast, Queensland; the Gold Coast has over 890 km of residential canals. Wetlands are difficult areas upon which to build housing estates, so dredging part of the wetland down to a navigable channel provides fill to build up another part of the wetland above the flood level for houses. Land is built up in a finger pattern that provides a suburban street layout of waterfront housing blocks.",
"title": "Cities on water"
},
{
"paragraph_id": 83,
"text": "Inland canals have often had boats specifically built for them. An example of this is the British narrowboat, which is up to 72 feet (21.95 m) long and 7 feet (2.13 m) wide and was primarily built for British Midland canals. In this case the limiting factor was the size of the locks. This is also the limiting factor on the Panama canal where Panamax ships were limited to a length of 289.56 m (950 ft) and a beam of 32.31 m (106 ft) until 26 June 2016 when the opening of larger locks allowed for the passage of larger New Panamax ships. For the lockless Suez Canal the limiting factor for Suezmax ships is generally draft, which is limited to 16 m (52.5 ft). At the other end of the scale, tub-boat canals such as the Bude Canal were limited to boats of under 10 tons for much of their length due to the capacity of their inclined planes or boat lifts. Most canals have a limit on height imposed either by bridges or by tunnels.",
"title": "Boats"
}
] | Canals or artificial waterways are waterways or engineered channels built for drainage management or for conveyancing water transport vehicles. They carry free, calm surface flow under atmospheric pressure, and can be thought of as artificial rivers. In most cases, a canal has a series of dams and locks that create reservoirs of low speed current flow. These reservoirs are referred to as slack water levels, often just called levels. A canal can be called a navigation canal when it parallels a natural river and shares part of the latter's discharges and drainage basin, and leverages its resources by building dams and locks to increase and lengthen its stretches of slack water levels while staying in its valley. A canal can cut across a drainage divide atop a ridge, generally requiring an external water source above the highest elevation. The best-known example of such a canal is the Panama Canal. Many canals have been built at elevations, above valleys and other waterways. Canals with sources of water at a higher level can deliver water to a destination such as a city where water is needed. The Roman Empire's aqueducts were such water supply canals. The term was once used to describe linear features seen on the surface of Mars, Martian canals, an optical illusion. | 2001-11-06T10:10:43Z | 2023-12-27T21:19:33Z | [
"Template:Commons category",
"Template:More footnotes needed",
"Template:Webarchive",
"Template:Cite news",
"Template:Infrastructure",
"Template:Wikiquote",
"Template:Portal",
"Template:Div col end",
"Template:Cite web",
"Template:JSTOR",
"Template:Citation",
"Template:Cite book",
"Template:ISBN",
"Template:Coord",
"Template:Rivers, streams and springs",
"Template:Reflist",
"Template:Refend",
"Template:Authority control",
"Template:Convert",
"Template:See also",
"Template:Div col",
"Template:Cite NIE",
"Template:Harvnb",
"Template:EB1911 poster",
"Template:Short description",
"Template:Use Oxford spelling",
"Template:Use dmy dates",
"Template:Interlanguage link multi",
"Template:Rp",
"Template:Other uses",
"Template:Anchor",
"Template:Main",
"Template:Refbegin"
] | https://en.wikipedia.org/wiki/Canal |
5,626 | Cognitive science | Cognitive science is the interdisciplinary, scientific study of the mind and its processes with input from linguistics, psychology, neuroscience, philosophy, computer science/artificial intelligence, and anthropology. It examines the nature, the tasks, and the functions of cognition (in a broad sense). Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."
The goal of cognitive science is to understand and formulate the principles of intelligence with the hope that this will lead to a better comprehension of the mind and of learning. The cognitive sciences began as an intellectual movement in the 1950s often referred to as the cognitive revolution.
The cognitive sciences began as an intellectual movement in the 1950s, called the cognitive revolution. Cognitive science has a prehistory traceable back to ancient Greek philosophical texts (see Plato's Meno and Aristotle's De Anima); modern philosophers such as Descartes, David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre Cabanis, Leibniz and John Locke rejected scholasticism while mostly having never read Aristotle, and they were working with an entirely different set of tools and core concepts from those of the cognitive scientist.
The modern culture of cognitive science can be traced back to the early cyberneticists in the 1930s and 1940s, such as Warren McCulloch and Walter Pitts, who sought to understand the organizing principles of the mind. McCulloch and Pitts developed the first variants of what are now known as artificial neural networks, models of computation inspired by the structure of biological neural networks.
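To make the McCulloch–Pitts idea concrete, here is a minimal sketch of a single threshold unit in Python; the weights and threshold are illustrative choices for this example, not values taken from the original 1943 paper. The unit fires when the weighted sum of its binary inputs reaches a threshold, which is enough to compute simple logical functions such as AND.

    # Minimal sketch of a McCulloch-Pitts threshold unit.
    # Weights and threshold are illustrative, not from McCulloch and Pitts (1943).
    def threshold_unit(inputs, weights, threshold):
        # Fire (output 1) when the weighted sum of binary inputs
        # reaches the threshold; otherwise stay silent (output 0).
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0

    # With weights (1, 1) and threshold 2, the unit computes logical AND.
    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, "->", threshold_unit((x1, x2), (1, 1), 2))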
Another precursor was the early development of the theory of computation and the digital computer in the 1940s and 1950s. Kurt Gödel, Alonzo Church, Alan Turing, and John von Neumann were instrumental in these developments. The modern computer, or Von Neumann machine, would play a central role in cognitive science, both as a metaphor for the mind, and as a tool for investigation.
The first instance of cognitive science experiments being done at an academic institution took place at the MIT Sloan School of Management, established by J.C.R. Licklider, who worked within the psychology department and conducted experiments using computer memory as models for human cognition.
In 1959, Noam Chomsky published a scathing review of B. F. Skinner's book Verbal Behavior. At the time, Skinner's behaviorist paradigm dominated the field of psychology within the United States. Most psychologists focused on functional relations between stimulus and response, without positing internal representations. Chomsky argued that in order to explain language, we needed a theory like generative grammar, which not only attributed internal representations but characterized their underlying order.
The term cognitive science was coined by Christopher Longuet-Higgins in his 1973 commentary on the Lighthill report, which concerned the then-current state of artificial intelligence research. In the same decade, the journal Cognitive Science and the Cognitive Science Society were founded. The founding meeting of the Cognitive Science Society was held at the University of California, San Diego in 1979, which resulted in cognitive science becoming an internationally visible enterprise. In 1972, Hampshire College started the first undergraduate education program in Cognitive Science, led by Neil Stillings. In 1982, with assistance from Professor Stillings, Vassar College became the first institution in the world to grant an undergraduate degree in Cognitive Science. In 1986, the first Cognitive Science Department in the world was founded at the University of California, San Diego.
In the 1970s and early 1980s, as access to computers increased, artificial intelligence research expanded. Researchers such as Marvin Minsky would write computer programs in languages such as LISP to attempt to formally characterize the steps that human beings went through, for instance, in making decisions and solving problems, in the hope of better understanding human thought, and also in the hope of creating artificial minds. This approach is known as "symbolic AI".
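As a hedged illustration of the symbolic style (sketched here in Python rather than the LISP of the period; the facts and rules are invented purely for the example), a program of this kind represents knowledge as explicit symbols and if-then rules, and reasons by repeatedly applying any rule whose conditions are already known to hold:

    # Toy forward-chaining production system in the spirit of symbolic AI.
    # The facts and rules below are invented purely for illustration.
    facts = {"socrates_is_human"}
    rules = [
        ({"socrates_is_human"}, "socrates_is_mortal"),
        ({"socrates_is_mortal"}, "socrates_will_die"),
    ]

    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Apply a rule when all of its conditions are known facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))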
Eventually the limits of the symbolic AI research program became apparent. For instance, it seemed to be unrealistic to comprehensively list human knowledge in a form usable by a symbolic computer program. The late 1980s and 1990s saw the rise of neural networks and connectionism as a research paradigm. Under this point of view, often attributed to James McClelland and David Rumelhart, the mind could be characterized as a set of complex associations, represented as a layered network. Critics argue that there are some phenomena which are better captured by symbolic models, and that connectionist models are often so complex as to have little explanatory power. Recently symbolic and connectionist models have been combined, making it possible to take advantage of both forms of explanation. While both connectionism and symbolic approaches have proven useful for testing various hypotheses and exploring approaches to understanding aspects of cognition and lower-level brain functions, neither is biologically realistic, and therefore both suffer from a lack of neuroscientific plausibility. Connectionism has proven useful for exploring computationally how cognition emerges in development and occurs in the human brain, and has provided alternatives to strictly domain-specific / domain-general approaches. For example, scientists such as Jeff Elman, Liz Bates, and Annette Karmiloff-Smith have posited that networks in the brain emerge from the dynamic interaction between them and environmental input.
A central tenet of cognitive science is that a complete understanding of the mind/brain cannot be attained by studying only a single level. An example would be the problem of remembering a phone number and recalling it later. One approach to understanding this process would be to study behavior through direct observation, or naturalistic observation. A person could be presented with a phone number and be asked to recall it after some delay; then the accuracy of the response could be measured. Another approach to measure cognitive ability would be to study the firings of individual neurons while a person is trying to remember the phone number. Neither of these experiments on its own would fully explain how the process of remembering a phone number works. Even if the technology to map out every neuron in the brain in real time were available and it were known when each neuron fired, it would still be impossible to know how a particular firing of neurons translates into the observed behavior. Thus an understanding of how these two levels relate to each other is imperative. Francisco Varela, in The Embodied Mind: Cognitive Science and Human Experience, argues that "the new sciences of the mind need to enlarge their horizon to encompass both lived human experience and the possibilities for transformation inherent in human experience". On the classic cognitivist view, this can be provided by a functional level account of the process. Studying a particular phenomenon from multiple levels creates a better understanding of the processes that occur in the brain to give rise to a particular behavior. Marr gave a famous description of three levels of analysis: the computational level, which specifies what problem the system solves and why; the algorithmic (representational) level, which specifies the representations and processes used to solve it; and the implementational level, which describes how those processes are physically realized.
Cognitive science is an interdisciplinary field with contributors from various fields, including psychology, neuroscience, linguistics, philosophy of mind, computer science, anthropology and biology. Cognitive scientists work collectively in hope of understanding the mind and its interactions with the surrounding world much like other sciences do. The field regards itself as compatible with the physical sciences and uses the scientific method as well as simulation or modeling, often comparing the output of models with aspects of human cognition. Similarly to the field of psychology, there is some doubt whether there is a unified cognitive science, which has led some researchers to prefer 'cognitive sciences' in the plural.
Many, but not all, who consider themselves cognitive scientists hold a functionalist view of the mind—the view that mental states and processes should be explained by their function – what they do. According to the multiple realizability account of functionalism, even non-human systems such as robots and computers can be ascribed as having cognition.
The term "cognitive" in "cognitive science" is used for "any kind of mental operation or structure that can be studied in precise terms" (Lakoff and Johnson, 1999). This conceptualization is very broad, and should not be confused with how "cognitive" is used in some traditions of analytic philosophy, where "cognitive" has to do only with formal rules and truth-conditional semantics.
The earliest entries for the word "cognitive" in the OED take it to mean roughly "pertaining to the action or process of knowing". The first entry, from 1586, shows the word was at one time used in the context of discussions of Platonic theories of knowledge. Most in cognitive science, however, presumably do not believe their field is the study of anything as certain as the knowledge sought by Plato.
Cognitive science is a large field, and covers a wide array of topics on cognition. However, it should be recognized that cognitive science has not always been equally concerned with every topic that might bear relevance to the nature and operation of minds. Classical cognitivists have largely de-emphasized or avoided social and cultural factors, embodiment, emotion, consciousness, animal cognition, and comparative and evolutionary psychologies. However, with the decline of behaviorism, internal states such as affects and emotions, as well as awareness and covert attention became approachable again. For example, situated and embodied cognition theories take into account the current state of the environment as well as the role of the body in cognition. With the newfound emphasis on information processing, observable behavior was no longer the hallmark of psychological theory, but the modeling or recording of mental states.
Below are some of the main topics that cognitive science is concerned with. This is not an exhaustive list. See List of cognitive science topics for a list of various aspects of the field.
Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One of the practical goals of AI is to implement aspects of human intelligence in computers. Computers are also widely used as a tool with which to study cognitive phenomena. Computational modeling uses simulations to study how human intelligence may be structured. (See § Computational modeling.)
There is some debate in the field as to whether the mind is best viewed as a huge array of small but individually feeble elements (i.e. neurons), or as a collection of higher-level structures such as symbols, schemes, plans, and rules. The former view uses connectionism to study the mind, whereas the latter emphasizes symbolic artificial intelligence. One way to view the issue is whether it is possible to accurately simulate a human brain on a computer without accurately simulating the neurons that make up the human brain.
Attention is the selection of important information. The human mind is bombarded with millions of stimuli and it must have a way of deciding which of this information to process. Attention is sometimes seen as a spotlight, meaning one can only shine the light on a particular set of information. Experiments that support this metaphor include the dichotic listening task (Cherry, 1957) and studies of inattentional blindness (Mack and Rock, 1998). In the dichotic listening task, subjects are bombarded with two different messages, one in each ear, and told to focus on only one of the messages. At the end of the experiment, when asked about the content of the unattended message, subjects cannot report it.
Embodied cognition approaches to cognitive science emphasize the role of body and environment in cognition. This includes both neural and extra-neural bodily processes, and factors that range from affective and emotional processes, to posture, motor control, proprioception, and kinaesthesis, to autonomic processes that involve heartbeat and respiration, to the role of the enteric gut microbiome. It also includes accounts of how the body engages with or is coupled to social and physical environments. 4E (embodied, embedded, extended and enactive) cognition includes a broad range of views about brain-body-environment interaction, from causal embeddedness to stronger claims about how the mind extends to include tools and instruments, as well as the role of social interactions, action-oriented processes, and affordances. 4E theories range from those closer to classic cognitivism (so-called "weak" embodied cognition) to stronger extended and enactive versions that are sometimes referred to as radical embodied cognitive science.
The ability to learn and understand language is an extremely complex process. Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in the theoretical linguistic field is discovering the nature that language must have in the abstract in order to be learned in such a fashion. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned?, (2) Why is it more difficult for adults to acquire a second language than it is for infants to acquire their first language?, and (3) How are humans able to understand novel sentences?
The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences. Linguistics often divides language processing into orthography, phonetics, phonology, morphology, syntax, semantics, and pragmatics. Many aspects of language can be studied from each of these components and from their interaction.
The study of language processing in cognitive science is closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of. Linguists have found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech. Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration.
Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, and recognize people and objects. Research in learning and development aims to explain the mechanisms by which these processes might take place.
A major question in the study of cognitive development is the extent to which certain abilities are innate or learned. This is often framed in terms of the nature and nurture debate. The nativist view emphasizes that certain features are innate to an organism and are determined by its genetic endowment. The empiricist view, on the other hand, emphasizes that certain abilities are learned from the environment. Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains about how genetic information might guide cognitive development. In the area of language acquisition, for example, some (such as Steven Pinker) have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues in Rethinking Innateness) have argued that Pinker's claims are biologically unrealistic. They argue that genes determine the architecture of a learning system, but that specific "facts" about how grammar works can only be learned as a result of experience.
Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years). We do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes).
Memory is also often grouped into declarative and procedural forms. Declarative memory—grouped into subsets of semantic and episodic forms of memory—refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g. "Are apples food?", or "What did I eat for breakfast four days ago?"). Procedural memory allows us to remember actions and motor sequences (e.g. how to ride a bicycle) and is often dubbed implicit knowledge or memory.
Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and the interrelationship between cognition and memory. One example of this could be, what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates between the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) and recall (retrieving a memory, as in "fill-in-the-blank")?
Perception is the ability to take in information via the senses, and process it in some way. Vision and hearing are two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects?, (2) Why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? One tool for studying visual perception is by looking at how people process optical illusions. The Necker cube is an example of a bistable percept: the cube can be interpreted as being oriented in two different directions.
The study of haptic (tactile), olfactory, and gustatory stimuli also fall into the domain of perception.
Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action.
Consciousness is the awareness of experiences within oneself. It gives the mind the ability to experience or feel a sense of self.
Many different methodologies are used to study cognitive science. As the field is highly interdisciplinary, research often cuts across multiple areas of study, drawing on research methods from psychology, neuroscience, computer science and systems theory.
In order to have a description of what constitutes intelligent behavior, one must study behavior itself. This type of research is closely tied to that in cognitive psychology and psychophysics. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed. Lewandowski & Strohmetz (2009) reviewed a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice. Behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present (e.g., litter in a parking lot or readings on an electric meter). Behavioral observations involve the direct witnessing of the actor engaging in the behavior (e.g., watching how close a person sits next to another person). Behavioral choices involve a person selecting between two or more options (e.g., voting behavior, choice of a punishment for another participant).
Brain imaging involves analyzing activity within the brain while performing various tasks. This allows us to link behavior and brain function to help understand how information is processed. Different types of imaging techniques vary in their temporal (time-based) and spatial (location-based) resolution. Brain imaging is often used in cognitive neuroscience.
Computational models require a mathematically and logically formal representation of a problem. Computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. Computational modeling can help us understand the functional organization of a particular cognitive phenomenon. Approaches to cognitive modeling can be categorized as: (1) symbolic, on abstract mental functions of an intelligent mind by means of symbols; (2) subsymbolic, on the neural and associative properties of the human brain; and (3) across the symbolic–subsymbolic border, including hybrid.
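The following sketch (Python; the weights, features, and rule are invented for illustration, not drawn from any particular model in the literature) hints at how the symbolic and subsymbolic styles differ and can be combined in a hybrid: a subsymbolic component maps a graded feature vector to a discrete category by weighted summation, and a symbolic component then applies an explicit rule to the resulting symbol.

    # Hybrid sketch: a subsymbolic classifier feeding a symbolic rule.
    # All weights, inputs, and rules are invented for illustration.
    weights = [0.8, -0.3, 0.5]

    def subsymbolic_classify(features):
        # Graded, weight-based processing with no explicit rules.
        score = sum(w * f for w, f in zip(weights, features))
        return "animate" if score > 0.0 else "inanimate"

    def symbolic_rule(category):
        # Discrete, rule-based processing over a symbol.
        if category == "animate":
            return "can move on its own"
        return "cannot move on its own"

    category = subsymbolic_classify([1.0, 0.2, 0.4])
    print(category, "->", symbolic_rule(category))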
All the above approaches tend either to be generalized to the form of integrated computational models of a synthetic/abstract intelligence (i.e. cognitive architecture) in order to be applied to the explanation and improvement of individual and social/organizational decision-making and reasoning or to focus on single simulative programs (or microtheories/"middle-range" theories) modelling specific cognitive faculties (e.g. vision, language, categorization etc.).
Research methods borrowed directly from neuroscience and neuropsychology can also help us to understand aspects of intelligence. These methods allow us to understand how intelligent behavior is implemented in a physical system.
Cognitive science has given rise to models of human cognitive bias and risk perception, and has been influential in the development of behavioral finance, part of economics. It has also given rise to a new theory of the philosophy of mathematics (related to denotational mathematics), and many theories of artificial intelligence, persuasion and coercion. It has made its presence known in the philosophy of language and epistemology as well as constituting a substantial wing of modern linguistics. Fields of cognitive science have been influential in understanding the brain's particular functional systems (and functional deficits) ranging from speech production to auditory processing and visual perception. It has made progress in understanding how damage to particular areas of the brain affect cognition, and it has helped to uncover the root causes and results of specific dysfunction, such as dyslexia, anopia, and hemispatial neglect.
Some of the more recognized names in cognitive science are usually either the most controversial or the most cited. Within philosophy, some familiar names include Daniel Dennett, who writes from a computational systems perspective, John Searle, known for his controversial Chinese room argument, and Jerry Fodor, who advocates functionalism.
Others include David Chalmers, who advocates dualism and is also known for articulating the hard problem of consciousness, and Douglas Hofstadter, famous for writing Gödel, Escher, Bach, which questions the nature of words and thought.
In the realm of linguistics, Noam Chomsky and George Lakoff have been influential (both have also become notable as political commentators). In artificial intelligence, Marvin Minsky, Herbert A. Simon, and Allen Newell are prominent.
Popular names in the discipline of psychology include George A. Miller, James McClelland, Philip Johnson-Laird, Lawrence Barsalou, Vittorio Guidano, Howard Gardner and Steven Pinker. Anthropologists Dan Sperber, Edwin Hutchins, Bradd Shore, James Wertsch and Scott Atran, have been involved in collaborative projects with cognitive and social psychologists, political scientists and evolutionary biologists in attempts to develop general theories of culture formation, religion, and political association.
Computational theories (with models and simulations) have also been developed, by David Rumelhart, James McClelland and Philip Johnson-Laird.
Epistemics is a term coined in 1969 by the University of Edinburgh with the foundation of its School of Epistemics. Epistemics is to be distinguished from epistemology in that epistemology is the philosophical theory of knowledge, whereas epistemics signifies the scientific study of knowledge.
Christopher Longuet-Higgins has defined it as "the construction of formal models of the processes (perceptual, intellectual, and linguistic) by which knowledge and understanding are achieved and communicated." In his 1978 essay "Epistemics: The Regulative Theory of Cognition", Alvin I. Goldman claims to have coined the term "epistemics" to describe a reorientation of epistemology. Goldman maintains that his epistemics is continuous with traditional epistemology and the new term is only to avoid opposition. Epistemics, in Goldman's version, differs only slightly from traditional epistemology in its alliance with the psychology of cognition; epistemics stresses the detailed study of mental processes and information-processing mechanisms that lead to knowledge or beliefs.
In the mid-1980s, the School of Epistemics was renamed as The Centre for Cognitive Science (CCS). In 1998, CCS was incorporated into the University of Edinburgh's School of Informatics.
One of the core aims of cognitive science is to achieve an integrated theory of cognition. This requires integrative mechanisms explaining how the information processing that occurs simultaneously in spatially segregated (sub-)cortical areas in the brain is coordinated and bound together to give rise to coherent perceptual and symbolic representations. One approach is to solve this "Binding problem" (that is, the problem of dynamically representing conjunctions of informational elements, from the most basic perceptual representations ("feature binding") to the most complex cognitive representations, like symbol structures ("variable binding")), by means of integrative synchronization mechanisms. In other words, one of the coordinating mechanisms appears to be the temporal (phase) synchronization of neural activity based on dynamical self-organizing processes in neural networks, described by the Binding-by-synchrony (BBS) Hypothesis from neurophysiology. Connectionist cognitive neuroarchitectures have been developed that use integrative synchronization mechanisms to solve this binding problem in perceptual cognition and in language cognition. In perceptual cognition the problem is to explain how elementary object properties and object relations, like the object color or the object form, can be dynamically bound together or can be integrated to a representation of this perceptual object by means of a synchronization mechanism ("feature binding", "feature linking"). In language cognition the problem is to explain how semantic concepts and syntactic roles can be dynamically bound together or can be integrated to complex cognitive representations like systematic and compositional symbol structures and propositions by means of a synchronization mechanism ("variable binding") (see also the "Symbolism vs. connectionism debate" in connectionism). | [
{
"paragraph_id": 0,
"text": "Cognitive science is the interdisciplinary, scientific study of the mind and its processes with input from linguistics, psychology, neuroscience, philosophy, computer science/artificial intelligence, and anthropology. It examines the nature, the tasks, and the functions of cognition (in a broad sense). Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that \"thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures.\"",
"title": ""
},
{
"paragraph_id": 1,
"text": "The goal of cognitive science is to understand and formulate the principles of intelligence with the hope that this will lead to a better comprehension of the mind and of learning. The cognitive sciences began as an intellectual movement in the 1950s often referred to as the cognitive revolution.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The cognitive sciences began as an intellectual movement in the 1950s, called the cognitive revolution. Cognitive science has a prehistory traceable back to ancient Greek philosophical texts (see Plato's Meno and Aristotle's De Anima); Modern philosophers such as Descartes, David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre Cabanis, Leibniz and John Locke, rejected scholasticism while mostly having never read Aristotle, and they were working with an entirely different set of tools and core concepts than those of the cognitive scientist.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The modern culture of cognitive science can be traced back to the early cyberneticists in the 1930s and 1940s, such as Warren McCulloch and Walter Pitts, who sought to understand the organizing principles of the mind. McCulloch and Pitts developed the first variants of what are now known as artificial neural networks, models of computation inspired by the structure of biological neural networks.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Another precursor was the early development of the theory of computation and the digital computer in the 1940s and 1950s. Kurt Gödel, Alonzo Church, Alan Turing, and John von Neumann were instrumental in these developments. The modern computer, or Von Neumann machine, would play a central role in cognitive science, both as a metaphor for the mind, and as a tool for investigation.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The first instance of cognitive science experiments being done at an academic institution took place at MIT Sloan School of Management, established by J.C.R. Licklider working within the psychology department and conducting experiments using computer memory as models for human cognition.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In 1959, Noam Chomsky published a scathing review of B. F. Skinner's book Verbal Behavior. At the time, Skinner's behaviorist paradigm dominated the field of psychology within the United States. Most psychologists focused on functional relations between stimulus and response, without positing internal representations. Chomsky argued that in order to explain language, we needed a theory like generative grammar, which not only attributed internal representations but characterized their underlying order.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The term cognitive science was coined by Christopher Longuet-Higgins in his 1973 commentary on the Lighthill report, which concerned the then-current state of artificial intelligence research. In the same decade, the journal Cognitive Science and the Cognitive Science Society were founded. The founding meeting of the Cognitive Science Society was held at the University of California, San Diego in 1979, which resulted in cognitive science becoming an internationally visible enterprise. In 1972, Hampshire College started the first undergraduate education program in Cognitive Science, led by Neil Stillings. In 1982, with assistance from Professor Stillings, Vassar College became the first institution in the world to grant an undergraduate degree in Cognitive Science. In 1986, the first Cognitive Science Department in the world was founded at the University of California, San Diego.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In the 1970s and early 1980s, as access to computers increased, artificial intelligence research expanded. Researchers such as Marvin Minsky would write computer programs in languages such as LISP to attempt to formally characterize the steps that human beings went through, for instance, in making decisions and solving problems, in the hope of better understanding human thought, and also in the hope of creating artificial minds. This approach is known as \"symbolic AI\".",
"title": "History"
},
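As a hedged illustration of the symbolic approach (the historical programs were written in LISP; this sketch merely transposes the core idea into Python, with invented facts and rules), knowledge is stored as explicit symbols and inference proceeds by applying if-then rules until nothing new can be derived:

# Toy symbolic reasoner: explicit facts and rules, forward chaining.
# The knowledge base is invented for illustration.
def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"socrates is a man"}
rules = [("socrates is a man", "socrates is mortal")]
print(forward_chain(facts, rules))
# the derived set now also contains 'socrates is mortal'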
{
"paragraph_id": 9,
"text": "Eventually the limits of the symbolic AI research program became apparent. For instance, it seemed to be unrealistic to comprehensively list human knowledge in a form usable by a symbolic computer program. The late 80s and 90s saw the rise of neural networks and connectionism as a research paradigm. Under this point of view, often attributed to James McClelland and David Rumelhart, the mind could be characterized as a set of complex associations, represented as a layered network. Critics argue that there are some phenomena which are better captured by symbolic models, and that connectionist models are often so complex as to have little explanatory power. Recently symbolic and connectionist models have been combined, making it possible to take advantage of both forms of explanation. While both connectionism and symbolic approaches have proven useful for testing various hypotheses and exploring approaches to understanding aspects of cognition and lower level brain functions, neither are biologically realistic and therefore, both suffer from a lack of neuroscientific plausibility. Connectionism has proven useful for exploring computationally how cognition emerges in development and occurs in the human brain, and has provided alternatives to strictly domain-specific / domain general approaches. For example, scientists such as Jeff Elman, Liz Bates, and Annette Karmiloff-Smith have posited that networks in the brain emerge from the dynamic interaction between them and environmental input.",
"title": "History"
},
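For contrast with the symbolic sketch above, here is a hedged sketch of the connectionist picture (the weights are arbitrary placeholders, not a trained or published model): knowledge lives not in explicit rules but in graded weights across a layered network of simple units.

# Minimal layered ("connectionist") network: one hidden layer of
# logistic units. Weights and biases are arbitrary illustrations.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer of logistic units."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

x = [1.0, 0.0]                                   # input pattern
hidden = layer(x, [[0.5, -0.3], [0.8, 0.1]], [0.0, -0.2])
output = layer(hidden, [[1.2, -0.7]], [0.1])
print(output)  # graded activation of the single output unit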
{
"paragraph_id": 10,
"text": "A central tenet of cognitive science is that a complete understanding of the mind/brain cannot be attained by studying only a single level. An example would be the problem of remembering a phone number and recalling it later. One approach to understanding this process would be to study behavior through direct observation, or naturalistic observation. A person could be presented with a phone number and be asked to recall it after some delay of time; then the accuracy of the response could be measured. Another approach to measure cognitive ability would be to study the firings of individual neurons while a person is trying to remember the phone number. Neither of these experiments on its own would fully explain how the process of remembering a phone number works. Even if the technology to map out every neuron in the brain in real-time were available and it were known when each neuron fired it would still be impossible to know how a particular firing of neurons translates into the observed behavior. Thus an understanding of how these two levels relate to each other is imperative. Francisco Varela, in The Embodied Mind: Cognitive Science and Human Experience, argues that \"the new sciences of the mind need to enlarge their horizon to encompass both lived human experience and the possibilities for transformation inherent in human experience\". On the classic cognitivist view, this can be provided by a functional level account of the process. Studying a particular phenomenon from multiple levels creates a better understanding of the processes that occur in the brain to give rise to a particular behavior. Marr gave a famous description of three levels of analysis:",
"title": "Principles"
},
{
"paragraph_id": 11,
"text": "Cognitive science is an interdisciplinary field with contributors from various fields, including psychology, neuroscience, linguistics, philosophy of mind, computer science, anthropology and biology. Cognitive scientists work collectively in hope of understanding the mind and its interactions with the surrounding world much like other sciences do. The field regards itself as compatible with the physical sciences and uses the scientific method as well as simulation or modeling, often comparing the output of models with aspects of human cognition. Similarly to the field of psychology, there is some doubt whether there is a unified cognitive science, which have led some researchers to prefer 'cognitive sciences' in plural.",
"title": "Principles"
},
{
"paragraph_id": 12,
"text": "Many, but not all, who consider themselves cognitive scientists hold a functionalist view of the mind—the view that mental states and processes should be explained by their function – what they do. According to the multiple realizability account of functionalism, even non-human systems such as robots and computers can be ascribed as having cognition.",
"title": "Principles"
},
{
"paragraph_id": 13,
"text": "The term \"cognitive\" in \"cognitive science\" is used for \"any kind of mental operation or structure that can be studied in precise terms\" (Lakoff and Johnson, 1999). This conceptualization is very broad, and should not be confused with how \"cognitive\" is used in some traditions of analytic philosophy, where \"cognitive\" has to do only with formal rules and truth-conditional semantics.",
"title": "Principles"
},
{
"paragraph_id": 14,
"text": "The earliest entries for the word \"cognitive\" in the OED take it to mean roughly \"pertaining to the action or process of knowing\". The first entry, from 1586, shows the word was at one time used in the context of discussions of Platonic theories of knowledge. Most in cognitive science, however, presumably do not believe their field is the study of anything as certain as the knowledge sought by Plato.",
"title": "Principles"
},
{
"paragraph_id": 15,
"text": "Cognitive science is a large field, and covers a wide array of topics on cognition. However, it should be recognized that cognitive science has not always been equally concerned with every topic that might bear relevance to the nature and operation of minds. Classical cognitivists have largely de-emphasized or avoided social and cultural factors, embodiment, emotion, consciousness, animal cognition, and comparative and evolutionary psychologies. However, with the decline of behaviorism, internal states such as affects and emotions, as well as awareness and covert attention became approachable again. For example, situated and embodied cognition theories take into account the current state of the environment as well as the role of the body in cognition. With the newfound emphasis on information processing, observable behavior was no longer the hallmark of psychological theory, but the modeling or recording of mental states.",
"title": "Scope"
},
{
"paragraph_id": 16,
"text": "Below are some of the main topics that cognitive science is concerned with. This is not an exhaustive list. See List of cognitive science topics for a list of various aspects of the field.",
"title": "Scope"
},
{
"paragraph_id": 17,
"text": "Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One of the practical goals of AI is to implement aspects of human intelligence in computers. Computers are also widely used as a tool with which to study cognitive phenomena. Computational modeling uses simulations to study how human intelligence may be structured. (See § Computational modeling.)",
"title": "Scope"
},
{
"paragraph_id": 18,
"text": "There is some debate in the field as to whether the mind is best viewed as a huge array of small but individually feeble elements (i.e. neurons), or as a collection of higher-level structures such as symbols, schemes, plans, and rules. The former view uses connectionism to study the mind, whereas the latter emphasizes symbolic artificial intelligence. One way to view the issue is whether it is possible to accurately simulate a human brain on a computer without accurately simulating the neurons that make up the human brain.",
"title": "Scope"
},
{
"paragraph_id": 19,
"text": "Attention is the selection of important information. The human mind is bombarded with millions of stimuli and it must have a way of deciding which of this information to process. Attention is sometimes seen as a spotlight, meaning one can only shine the light on a particular set of information. Experiments that support this metaphor include the dichotic listening task (Cherry, 1957) and studies of inattentional blindness (Mack and Rock, 1998). In the dichotic listening task, subjects are bombarded with two different messages, one in each ear, and told to focus on only one of the messages. At the end of the experiment, when asked about the content of the unattended message, subjects cannot report it.",
"title": "Scope"
},
{
"paragraph_id": 20,
"text": "Embodied cognition approaches to cognitive science emphasize the role of body and environment in cognition. This includes both neural and extra-neural bodily processes, and factors that range from affective and emotional processes, to posture, motor control, proprioception, and kinaesthesis, to autonomic processes that involve heartbeat and respiration, to the role of the enteric gut microbiome. It also includes accounts of how the body engages with or is coupled to social and physical environments. 4E (embodied, embedded, extended and enactive) cognition includes a broad range of views about brain-body-environment interaction, from causal embeddedness to stronger claims about how the mind extends to include tools and instruments, as well as the role of social interactions, action-oriented processes, and affordances. 4E theories range from those closer to classic cognitivism (so-called \"weak\" embodied cognition) to stronger extended and enactive versions that are sometimes referred to as radical embodied cognitive science.",
"title": "Scope"
},
{
"paragraph_id": 21,
"text": "The ability to learn and understand language is an extremely complex process. Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in the theoretical linguistic field is discovering the nature that language must have in the abstract in order to be learned in such a fashion. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned?, (2) Why is it more difficult for adults to acquire a second-language than it is for infants to acquire their first-language?, and (3) How are humans able to understand novel sentences?",
"title": "Scope"
},
{
"paragraph_id": 22,
"text": "The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences. Linguistics often divides language processing into orthography, phonetics, phonology, morphology, syntax, semantics, and pragmatics. Many aspects of language can be studied from each of these components and from their interaction.",
"title": "Scope"
},
{
"paragraph_id": 23,
"text": "The study of language processing in cognitive science is closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of. Linguists have found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech. Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration.",
"title": "Scope"
},
{
"paragraph_id": 24,
"text": "Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, and recognize people and objects. Research in learning and development aims to explain the mechanisms by which these processes might take place.",
"title": "Scope"
},
{
"paragraph_id": 25,
"text": "A major question in the study of cognitive development is the extent to which certain abilities are innate or learned. This is often framed in terms of the nature and nurture debate. The nativist view emphasizes that certain features are innate to an organism and are determined by its genetic endowment. The empiricist view, on the other hand, emphasizes that certain abilities are learned from the environment. Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains about how genetic information might guide cognitive development. In the area of language acquisition, for example, some (such as Steven Pinker) have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues in Rethinking Innateness) have argued that Pinker's claims are biologically unrealistic. They argue that genes determine the architecture of a learning system, but that specific \"facts\" about how grammar works can only be learned as a result of experience.",
"title": "Scope"
},
{
"paragraph_id": 26,
"text": "Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years). We do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes).",
"title": "Scope"
},
{
"paragraph_id": 27,
"text": "Memory is also often grouped into declarative and procedural forms. Declarative memory—grouped into subsets of semantic and episodic forms of memory—refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g. \"Are apples food?\", or \"What did I eat for breakfast four days ago?\"). Procedural memory allows us to remember actions and motor sequences (e.g. how to ride a bicycle) and is often dubbed implicit knowledge or memory .",
"title": "Scope"
},
{
"paragraph_id": 28,
"text": "Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and the interrelationship between cognition and memory. One example of this could be, what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates between the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) and recall (retrieving a memory, as in \"fill-in-the-blank\")?",
"title": "Scope"
},
{
"paragraph_id": 29,
"text": "Perception is the ability to take in information via the senses, and process it in some way. Vision and hearing are two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects?, (2) Why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? One tool for studying visual perception is by looking at how people process optical illusions. The image on the right of a Necker cube is an example of a bistable percept, that is, the cube can be interpreted as being oriented in two different directions.",
"title": "Scope"
},
{
"paragraph_id": 30,
"text": "The study of haptic (tactile), olfactory, and gustatory stimuli also fall into the domain of perception.",
"title": "Scope"
},
{
"paragraph_id": 31,
"text": "Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action.",
"title": "Scope"
},
{
"paragraph_id": 32,
"text": "Consciousness is the awareness of experiences within oneself. This helps the mind with having the ability to experience or feel a sense of self.",
"title": "Scope"
},
{
"paragraph_id": 33,
"text": "Many different methodologies are used to study cognitive science. As the field is highly interdisciplinary, research often cuts across multiple areas of study, drawing on research methods from psychology, neuroscience, computer science and systems theory.",
"title": "Research methods"
},
{
"paragraph_id": 34,
"text": "In order to have a description of what constitutes intelligent behavior, one must study behavior itself. This type of research is closely tied to that in cognitive psychology and psychophysics. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed. Lewandowski & Strohmetz (2009) reviewed a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice. Behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present (e.g., litter in a parking lot or readings on an electric meter). Behavioral observations involve the direct witnessing of the actor engaging in the behavior (e.g., watching how close a person sits next to another person). Behavioral choices are when a person selects between two or more options (e.g., voting behavior, choice of a punishment for another participant).",
"title": "Research methods"
},
{
"paragraph_id": 35,
"text": "Brain imaging involves analyzing activity within the brain while performing various tasks. This allows us to link behavior and brain function to help understand how information is processed. Different types of imaging techniques vary in their temporal (time-based) and spatial (location-based) resolution. Brain imaging is often used in cognitive neuroscience.",
"title": "Research methods"
},
{
"paragraph_id": 36,
"text": "Computational models require a mathematically and logically formal representation of a problem. Computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. Computational modeling can help us understand the functional organization of a particular cognitive phenomenon. Approaches to cognitive modeling can be categorized as: (1) symbolic, on abstract mental functions of an intelligent mind by means of symbols; (2) subsymbolic, on the neural and associative properties of the human brain; and (3) across the symbolic–subsymbolic border, including hybrid.",
"title": "Research methods"
},
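As one hedged example of this modeling workflow (the recall data and the decay parameter below are invented purely for illustration), a researcher might formalize forgetting as exponential decay, generate predictions, and score them against behavioral observations:

# Sketch of the cognitive-modeling loop: formal model -> predictions
# -> comparison with behavioral data. All numbers are hypothetical.
import math

def predicted_recall(t, decay=0.3):
    """Exponential forgetting: probability of recall after delay t."""
    return math.exp(-decay * t)

delays = [0, 1, 2, 4, 8]                    # delay (e.g., hours)
observed = [1.00, 0.72, 0.55, 0.31, 0.09]   # invented recall rates

error = sum((predicted_recall(t) - o) ** 2
            for t, o in zip(delays, observed))
print(f"sum of squared errors: {error:.4f}")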
{
"paragraph_id": 37,
"text": "All the above approaches tend either to be generalized to the form of integrated computational models of a synthetic/abstract intelligence (i.e. cognitive architecture) in order to be applied to the explanation and improvement of individual and social/organizational decision-making and reasoning or to focus on single simulative programs (or microtheories/\"middle-range\" theories) modelling specific cognitive faculties (e.g. vision, language, categorization etc.).",
"title": "Research methods"
},
{
"paragraph_id": 38,
"text": "Research methods borrowed directly from neuroscience and neuropsychology can also help us to understand aspects of intelligence. These methods allow us to understand how intelligent behavior is implemented in a physical system.",
"title": "Research methods"
},
{
"paragraph_id": 39,
"text": "Cognitive science has given rise to models of human cognitive bias and risk perception, and has been influential in the development of behavioral finance, part of economics. It has also given rise to a new theory of the philosophy of mathematics (related to denotational mathematics), and many theories of artificial intelligence, persuasion and coercion. It has made its presence known in the philosophy of language and epistemology as well as constituting a substantial wing of modern linguistics. Fields of cognitive science have been influential in understanding the brain's particular functional systems (and functional deficits) ranging from speech production to auditory processing and visual perception. It has made progress in understanding how damage to particular areas of the brain affect cognition, and it has helped to uncover the root causes and results of specific dysfunction, such as dyslexia, anopia, and hemispatial neglect.",
"title": "Key findings"
},
{
"paragraph_id": 40,
"text": "Some of the more recognized names in cognitive science are usually either the most controversial or the most cited. Within philosophy, some familiar names include Daniel Dennett, who writes from a computational systems perspective, John Searle, known for his controversial Chinese room argument, and Jerry Fodor, who advocates functionalism.",
"title": "Notable researchers"
},
{
"paragraph_id": 41,
"text": "Others include David Chalmers, who advocates Dualism and is also known for articulating the hard problem of consciousness, and Douglas Hofstadter, famous for writing Gödel, Escher, Bach, which questions the nature of words and thought.",
"title": "Notable researchers"
},
{
"paragraph_id": 42,
"text": "In the realm of linguistics, Noam Chomsky and George Lakoff have been influential (both have also become notable as political commentators). In artificial intelligence, Marvin Minsky, Herbert A. Simon, and Allen Newell are prominent.",
"title": "Notable researchers"
},
{
"paragraph_id": 43,
"text": "Popular names in the discipline of psychology include George A. Miller, James McClelland, Philip Johnson-Laird, Lawrence Barsalou, Vittorio Guidano, Howard Gardner and Steven Pinker. Anthropologists Dan Sperber, Edwin Hutchins, Bradd Shore, James Wertsch and Scott Atran, have been involved in collaborative projects with cognitive and social psychologists, political scientists and evolutionary biologists in attempts to develop general theories of culture formation, religion, and political association.",
"title": "Notable researchers"
},
{
"paragraph_id": 44,
"text": "Computational theories (with models and simulations) have also been developed, by David Rumelhart, James McClelland and Philip Johnson-Laird.",
"title": "Notable researchers"
},
{
"paragraph_id": 45,
"text": "Epistemics is a term coined in 1969 by the University of Edinburgh with the foundation of its School of Epistemics. Epistemics is to be distinguished from epistemology in that epistemology is the philosophical theory of knowledge, whereas epistemics signifies the scientific study of knowledge.",
"title": "Epistemics"
},
{
"paragraph_id": 46,
"text": "Christopher Longuet-Higgins has defined it as \"the construction of formal models of the processes (perceptual, intellectual, and linguistic) by which knowledge and understanding are achieved and communicated.\" In his 1978 essay \"Epistemics: The Regulative Theory of Cognition\", Alvin I. Goldman claims to have coined the term \"epistemics\" to describe a reorientation of epistemology. Goldman maintains that his epistemics is continuous with traditional epistemology and the new term is only to avoid opposition. Epistemics, in Goldman's version, differs only slightly from traditional epistemology in its alliance with the psychology of cognition; epistemics stresses the detailed study of mental processes and information-processing mechanisms that lead to knowledge or beliefs.",
"title": "Epistemics"
},
{
"paragraph_id": 47,
"text": "In the mid-1980s, the School of Epistemics was renamed as The Centre for Cognitive Science (CCS). In 1998, CCS was incorporated into the University of Edinburgh's School of Informatics.",
"title": "Epistemics"
},
{
"paragraph_id": 48,
"text": "One of the core aims of cognitive science is to achieve an integrated theory of cognition. This requires integrative mechanisms explaining how the information processing that occurs simultaneously in spatially segregated (sub-)cortical areas in the brain is coordinated and bound together to give rise to coherent perceptual and symbolic representations. One approach is to solve this \"Binding problem\" (that is, the problem of dynamically representing conjunctions of informational elements, from the most basic perceptual representations (\"feature binding\") to the most complex cognitive representations, like symbol structures (\"variable binding\")), by means of integrative synchronization mechanisms. In other words, one of the coordinating mechanisms appears to be the temporal (phase) synchronization of neural activity based on dynamical self-organizing processes in neural networks, described by the Binding-by-synchrony (BBS) Hypothesis from neurophysiology. Connectionist cognitive neuroarchitectures have been developed that use integrative synchronization mechanisms to solve this binding problem in perceptual cognition and in language cognition. In perceptual cognition the problem is to explain how elementary object properties and object relations, like the object color or the object form, can be dynamically bound together or can be integrated to a representation of this perceptual object by means of a synchronization mechanism (\"feature binding\", \"feature linking\"). In language cognition the problem is to explain how semantic concepts and syntactic roles can be dynamically bound together or can be integrated to complex cognitive representations like systematic and compositional symbol structures and propositions by means of a synchronization mechanism (\"variable binding\") (see also the \"Symbolism vs. connectionism debate\" in connectionism).",
"title": "Binding problem in cognitive science"
}
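To make the synchronization idea concrete, here is a toy sketch (a generic Kuramoto-style phase model, offered only as an illustration and not as any published BBS neuroarchitecture): coupled oscillators standing in for feature-coding populations pull one another's phases together, so that units coding features of the same object end up firing in phase.

# Toy Kuramoto-style synchronization as an illustration of
# binding-by-synchrony. Parameters are arbitrary.
import math, random

n, coupling, dt = 5, 1.5, 0.01
phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
freqs = [1.0 + random.uniform(-0.1, 0.1) for _ in range(n)]  # natural rates

for _ in range(2000):
    new_phases = []
    for i in range(n):
        pull = sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n
        new_phases.append(phases[i] + dt * (freqs[i] + coupling * pull))
    phases = new_phases

# Order parameter r near 1 means the group oscillates in phase ("bound").
r = abs(sum(math.e ** (1j * p) for p in phases)) / n
print(f"synchrony r = {r:.2f}")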
] | Cognitive science is the interdisciplinary, scientific study of the mind and its processes with input from linguistics, psychology, neuroscience, philosophy, computer science/artificial intelligence, and anthropology. It examines the nature, the tasks, and the functions of cognition. Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures." The goal of cognitive science is to understand and formulate the principles of intelligence with the hope that this will lead to a better comprehension of the mind and of learning.
The cognitive sciences began as an intellectual movement in the 1950s often referred to as the cognitive revolution. | 2001-11-04T12:02:55Z | 2023-11-13T23:58:24Z | [
"Template:Wikiversity-inline",
"Template:See also",
"Template:Better source needed",
"Template:More citations needed section",
"Template:Div col",
"Template:Cite news",
"Template:Cite encyclopedia",
"Template:Authority control",
"Template:For",
"Template:Page needed",
"Template:Cite report",
"Template:Commons category-inline",
"Template:Use dmy dates",
"Template:Annotated link",
"Template:Div col end",
"Template:Evolutionary psychology",
"Template:Portal",
"Template:Reflist",
"Template:Wikiquote-inline",
"Template:Computer science",
"Template:Main",
"Template:Section link",
"Template:Cite web",
"Template:Webarchive",
"Template:Citation",
"Template:Short description",
"Template:Anchor",
"Template:Cite journal",
"Template:Cite book",
"Template:Social sciences"
] | https://en.wikipedia.org/wiki/Cognitive_science |
5,630 | Copula (linguistics) | In linguistics, a copula (plural: copulas or copulae; abbreviated cop) is a word or phrase that links the subject of a sentence to a subject complement, such as the word is in the sentence "The sky is blue" or the phrase was not being in the sentence "It was not being co-operative." The word copula derives from the Latin noun for a "link" or "tie" that connects two different things.
A copula is often a verb or a verb-like word, though this is not universally the case. A verb that is a copula is sometimes called a copulative or copular verb. In English primary education grammar courses, a copula is often called a linking verb. In other languages, copulas show more resemblances to pronouns, as in Classical Chinese and Guarani, or may take the form of suffixes attached to a noun, as in Korean, Beja, and Inuit languages.
Most languages have one main copula (in English, the verb "to be"), although some (like Spanish, Portuguese and Thai) have more than one, while others have none. While the term copula is generally used to refer to such principal verbs, it may also be used for a wider group of verbs with similar potential functions (like become, get, feel and seem in English); alternatively, these might be distinguished as "semi-copulas" or "pseudo-copulas".
The principal use of a copula is to link the subject of a clause to a subject complement. A copular verb is often considered to be part of the predicate, the remainder being called a predicative expression. A simple clause containing a copula is illustrated below:
The book is on the table.
In that sentence, the noun phrase the book is the subject, the verb is serves as the copula, and the prepositional phrase on the table is the predicative expression. The whole expression is on the table may (in some theories of grammar) be called a predicate or a verb phrase.
The predicative expression accompanying the copula, also known as the complement of the copula, may take any of several possible forms: it may be a noun or noun phrase, an adjective or adjective phrase, a prepositional phrase (as above) or an adverb or another adverbial phrase expressing time or location. Examples are given below (with the copula in bold and the predicative expression in italics):
Mary and John are my friends. The sky was blue. I am taller than most people. The birds and the beasts were there.
The three components (subject, copula and predicative expression) do not necessarily appear in that order: their positioning depends on the rules for word order applicable to the language in question. In English (an SVO language), the ordering given above is the normal one, but certain variation is possible:
It is also possible, in certain circumstances, for one (or even two) of the three components to be absent:
Inverse copular constructions, in which the positions of the predicative expression and the subject are reversed, are found in various languages. They have been the subject of much theoretical analysis, particularly in regard to the difficulty of maintaining, in the case of such sentences, the usual division into a subject noun phrase and a predicate verb phrase.
Another issue is verb agreement when both subject and predicative expression are noun phrases (and differ in number or person): in English, the copula typically agrees with the syntactical subject even if it is not logically (i.e. semantically) the subject, as in the cause of the riot is (not are) these pictures of the wall. Compare Italian la causa della rivolta sono queste foto del muro; notice the use of the plural sono to agree with plural queste foto "these photos" rather than with singular la causa "the cause". In instances where an English syntactical subject comprises a prepositional object that is pluralized, however, the prepositional object agrees with the predicative expression, e.g. "What kind of birds are those?"
The definition and scope of the concept of a copula is not necessarily precise in any language. As noted above, though the concept of the copula in English is most strongly associated with the verb to be, there are many other verbs that can be used in a copular sense as well.
And more tenuously
A copular verb may also have other uses supplementary to or distinct from its uses as a copula. Some co-occurrences are common.
The English verb to be is also used as an auxiliary verb, especially for expressing passive voice (together with the past participle) or expressing progressive aspect (together with the present participle):
The man was killed. (passive) It is raining. (progressive)
Other languages' copulas have additional uses as auxiliaries. For example, French être can be used to express passive voice similarly to English be; both French être and German sein are used to express the perfect forms of certain verbs (formerly English be was also):
Je suis arrivé(e) French for 'I have arrived,' literally 'I am arrived.'
The auxiliary functions of these verbs derived from their copular function, and could be interpreted as special cases of the copular function (with the verbal forms it precedes being considered adjectival).
Another auxiliary usage in English is to denote an obligatory action or expected occurrence: "I am to serve you;" "The manager is to resign." This can be put also into past tense: "We were to leave at 9." For forms like "if I was/were to come", see English conditional sentences. (By certain criteria, the English copula be may always be considered an auxiliary verb; see Diagnostics for identifying auxiliary verbs in English.)
The English to be and its equivalents in certain other languages also have a non-copular use as an existential verb, meaning "to exist." This use is illustrated in the following sentences: I want only to be, and that is enough; I think therefore I am; To be or not to be, that is the question. In these cases, the verb itself expresses a predicate (that of existence), rather than linking to a predicative expression as it does when used as a copula. In ontology it is sometimes suggested that the "is" of existence is reducible to the "is" of property attribution or class membership; to be, Aristotle held, is to be something. However, Abelard in his Dialectica made a reductio ad absurdum argument against the idea that the copula can express existence.
Similar examples can be found in many other languages; for example, the French and Latin equivalents of I think therefore I am are Je pense, donc je suis and Cogito ergo sum, where suis and sum are the equivalents of English "am", normally used as copulas. However, other languages prefer a different verb for existential use, as in the Spanish version Pienso, luego existo (where the verb existir "to exist" is used rather than the copula ser or estar ‘to be’).
Another type of existential usage is in clauses of the there is… or there are… type. Languages differ in the way they express such meanings; some of them use the copular verb, possibly with an expletive pronoun like the English there, while other languages use different verbs and constructions, like the French il y a (which uses parts of the verb avoir ‘to have,’ not the copula) or the Swedish finns (the passive voice of the verb for "to find"). For details, see existential clause.
Relying on a unified theory of copular sentences, it has been proposed that the English there-sentences are subtypes of inverse copular constructions.
Predicates formed using a copula may express identity: that the two noun phrases (subject and complement) have the same referent or express an identical concept:
I want only to be myself. The Morning Star is the Evening Star.
They may also express membership of a class or a subset relationship:
She was a nurse. Cats are carnivorous mammals.
Similarly they may express some property, relation or position, permanent or temporary:
The trees are green. I am your boss. The hen is next to the cockerel. The children are confused.
Some languages use different copulas, or different syntax, to denote a permanent, essential characteristic of something versus a temporary state. For examples, see the sections on the Romance languages, Slavic languages and Irish.
In many languages the principal copula is a verb, like English (to) be, German sein, Mixtec kuu, Touareg emous, etc. It may inflect for grammatical categories like tense, aspect and mood, like other verbs in the language. Being a very commonly used verb, it is likely that the copula has irregular inflected forms; in English, the verb be has a number of highly irregular (suppletive) forms and has more different inflected forms than any other English verb (am, is, are, was, were, etc.; see English verbs for details).
Other copulas show more resemblances to pronouns. That is the case for Classical Chinese and Guarani, for instance. In highly synthetic languages, copulas are often suffixes, attached to a noun, but they may still behave otherwise like ordinary verbs: -u- in Inuit languages.
In some other languages, like Beja and Ket, the copula takes the form of suffixes that attach to a noun but are distinct from the person agreement markers used on predicative verbs. This phenomenon is known as nonverbal person agreement (or nonverbal subject agreement), and the relevant markers are always established as deriving from cliticized independent pronouns.
In some languages, copula omission occurs within a particular grammatical context. For example, speakers of Russian, Indonesian, Turkish, Hungarian, Arabic, Hebrew, Geʽez and Quechuan languages consistently drop the copula in present tense: Russian: я человек, ya chelovek ‘I (am a) human;’ Indonesian: saya seorang manusia ‘I (am) a human;’ Turkish: o insan ‘s/he (is a) human;’ Hungarian: ő ember ‘s/he (is) a human;’ Arabic: أنا إنسان, ʾana ʾinsān ‘I (am a) human;’ Hebrew: אני אדם, ʔani ʔadam "I (am a) human;" Geʽez: አነ ብእሲ/ብእሲ አነ ʔana bəʔəsi / bəʔəsi ʔana "I (am a) man" / "(a) man I (am)"; Southern Quechua: payqa runam "s/he (is) a human." The usage is known generically as the zero copula. In other tenses (sometimes in forms other than third person singular), the copula usually reappears.
Some languages drop the copula in poetic or aphorismic contexts. Examples in English include
Such poetic copula dropping is more pronounced in some languages other than English, like the Romance languages.
In informal speech of English, the copula may also be dropped in general sentences, as in "She a nurse." It is a feature of African-American Vernacular English, but is also used by a variety of other English speakers. An example is the sentence "I saw twelve men, each a soldier."
In Ancient Greek, when an adjective precedes a noun with an article, the copula is understood: ὁ οἴκος ἐστὶ μακρός, "the house is large", can be written μακρός ὁ οἴκος, "large the house (is)."
In Quechua (Southern Quechua used for the examples), zero copula is restricted to present tense in third person singular (kan): Payqa runam — "s/he (is) a human;" but: (paykuna) runakunam kanku "(they) are human."
In Māori, the zero copula can be used in predicative expressions and with continuous verbs (many of which take a copulative verb in many Indo-European languages) — He nui te whare, literally "a big the house", "the house (is) big;" I te tēpu te pukapuka, literally "at (past locative particle) the table the book", "the book (was) on the table;" Nō Ingarangi ia, literally "from England (s)he", "(s)he (is) from England", Kei te kai au, literally "at the (act of) eating I", "I (am) eating."
Alternatively, in many cases, the particle ko can be used as a copulative (though not all instances of ko are used thus; like all other Māori particles, ko has multiple purposes): Ko nui te whare "The house is big;" Ko te pukapuka kei te tēpu "It is the book (that is) on the table;" Ko au kei te kai "It is me eating."
However, when expressing identity or class membership, ko must be used: Ko tēnei tāku pukapuka "This is my book;" Ko Ōtautahi he tāone i Te Waipounamu "Christchurch is a city in the South Island (of New Zealand);" Ko koe tōku hoa "You are my friend."
When expressing identity, ko can be placed on either object in the clause without changing the meaning (ko tēnei tāku pukapuka is the same as ko tāku pukapuka tēnei) but not on both (ko tēnei ko tāku pukapuka would be equivalent to saying "it is this, it is my book" in English).
In Hungarian, zero copula is restricted to present tense in third person singular and plural: Ő ember/Ők emberek — "s/he is a human"/"they are humans;" but: (én) ember vagyok "I am a human", (te) ember vagy "you are a human", mi emberek vagyunk "we are humans", (ti) emberek vagytok "you (all) are humans." The copula also reappears for stating locations: az emberek a házban vannak, "the people are in the house", and for stating time: hat óra van, "it is six o'clock." However, the copula may be omitted in colloquial language: hat óra (van), "it is six o'clock."
Hungarian uses the copula lenni for expressing location: Itt van Róbert "Bob is here", but it is omitted in the third person present tense for attribution or identity statements: Róbert öreg "Bob is old;" ők éhesek "They are hungry;" Kati nyelvtudós "Cathy is a linguist" (but Róbert öreg volt "Bob was old", éhesek voltak "They were hungry", Kati nyelvtudós volt "Cathy was a linguist").
In Turkish, both the third person singular and the third person plural copulas are omittable. Ali burada and Ali buradadır both mean "Ali is here", and Onlar aç and Onlar açlar both mean "They are hungry." Both of the sentences are acceptable and grammatically correct, but sentences with the copula are more formal.
The Turkish first person singular copula suffix is omitted when introducing oneself. Bora ben (I am Bora) is grammatically correct, but Bora benim (the same sentence with the copula) is not used for an introduction (though it is grammatically correct in other contexts).
Further restrictions may apply before omission is permitted. For example, in the Irish language, is, the present tense of the copula, may be omitted when the predicate is a noun. Ba, the past/conditional, cannot be deleted. If the present copula is omitted, the pronoun (e.g., é, í, iad) preceding the noun is omitted as well.
Sometimes, the term copula is taken to include not only a language's equivalent(s) to the verb be but also other verbs or forms that serve to link a subject to a predicative expression (while adding semantic content of their own). For example, English verbs like become, get, feel, look, taste, smell, and seem can have this function, as in the following sentences (the predicative expression, the complement of the verb, is in italics):
She became a student. They look tired. The milk tastes bad. That bread smells good. I feel bad that she can't come with us. London stands (is) on the river Thames. How is Mary? She seems (is) well (fine).
(This usage should be distinguished from the use of some of these verbs as "action" verbs, as in They look at the wall, in which look denotes an action and cannot be replaced by the basic copula are.)
Some verbs have rarer, secondary uses as copular verbs, like the verb fall in sentences like The zebra fell victim to the lion.
These extra copulas are sometimes called "semi-copulas" or "pseudo-copulas." For a list of common verbs of this type in English, see List of English copulae.
In Indo-European languages, the words meaning to be are sometimes similar to each other. Due to the high frequency of their use, their inflection retains a considerable degree of similarity in some cases. Thus, for example, the English form is is a cognate of German ist, Latin est, Persian ast and Russian jest', even though the Germanic, Italic, Iranian and Slavic language groups split at least 3000 years ago. The origins of the copulas of most Indo-European languages can be traced back to four Proto-Indo-European stems: *es- (*h1es-), *sta- (*steh2-), *wes- and *bhu- (*bʰuH-).
The English copular verb be has eight basic forms (be, am, is, are, being, was, were, been) and five negative forms (ain't (in some dialects), isn't, aren't, wasn't, weren't). No other English verb has more than five forms. Additional archaic forms include art, wast, wert, and occasionally beest (as a subjunctive). For more details see English verbs. For the etymology of the various forms, see Indo-European copula.
The main uses of the copula in English are described in the above sections. The possibility of copula omission is mentioned under § Zero copula.
A particular construction found in English (particularly in speech) is the use of two successive copulas when only one appears necessary, as in My point is, is that.... The acceptability of this construction is a disputed matter in English prescriptive grammar.
The simple English copula "be" may on occasion be replaced by other verbs with nearly identical meanings.
In Persian, the verb to be can either take the form of ast (cognate to English is) or budan (cognate to be).
In Hindustani (Hindi and Urdu), the copula होना ɦonɑ ہونا can be put into four grammatical aspects (simple, habitual, perfective, and progressive) and each of those four aspects can be put into five grammatical moods (indicative, presumptive, subjunctive, contrafactual, and imperative). Some example sentences using the simple aspect are shown below:
Besides the verb होना honā ہونا (to be), there are three other verbs which can also be used as the copula, they are रहना rêhnā رہنا (to stay), जाना jānā جانا (to go), and आना ānā آنا (to come). The following table shows the conjugations of the copula होना honā ہونا in the five grammatical moods in the simple aspect. The transliteration scheme used is ISO 15919.
Copulas in the Romance languages usually consist of two different verbs that can be translated as "to be", the main one from the Latin esse (via Vulgar Latin essere; esse deriving from *es-), often referenced as sum (another of the Latin verb's principal parts) and a secondary one from stare (from *sta-), often referenced as sto. The resulting distinction in the modern forms is found in all the Iberian Romance languages, and to a lesser extent Italian, but not in French or Romanian. The difference is that the first usually refers to essential characteristics, while the second refers to states and situations, e.g., "Bob is old" versus "Bob is well." A similar division is found in the non-Romance Basque language (viz. egon and izan). (The English words just used, "essential" and "state", are also cognate with the Latin infinitives esse and stare. The word "stay" also comes from Latin stare, through Middle French estai, stem of Old French ester.) In Spanish and Portuguese, the high degree of verbal inflection, plus the existence of two copulas (ser and estar), means that there are 105 (Spanish) and 110 (Portuguese) separate forms to express the copula, compared to eight in English and one in Chinese.
In some cases, the verb itself changes the meaning of the adjective/sentence. The following examples are from Portuguese:
Some Slavic languages make a distinction between essence and state (similar to that discussed in the above section on the Romance languages), by putting a predicative expression denoting a state into the instrumental case, and essential characteristics are in the nominative. This can apply with other copula verbs as well: the verbs for "become" are normally used with the instrumental case.
As noted above under § Zero copula, Russian and other North Slavic languages generally or often omit the copula in the present tense.
In Irish and Scottish Gaelic, there are two copulas, and the syntax is also changed when one is distinguishing between states or situations and essential characteristics.
Describing the subject's state or situation typically uses the normal VSO ordering with the verb bí. The copula is is used to state essential characteristics or equivalences.
The word is is the copula (rhymes with the English word "miss").
The pronoun used with the copula is different from the normal pronoun. For a masculine singular noun, é is used (for "he" or "it"), as opposed to the normal pronoun sé; for a feminine singular noun, í is used (for "she" or "it"), as opposed to normal pronoun sí; for plural nouns, iad is used (for "they" or "those"), as opposed to the normal pronoun siad.
To describe being in a state, condition, place, or act, the verb "to be" is used: Tá mé ag rith. "I am running."
The North Levantine Arabic dialect, spoken in Syria and Lebanon, has a negative copula formed by ما mā / ma and a suffixed pronoun.
In Chichewa, a Bantu language spoken mainly in Malawi, a very similar distinction exists between permanent and temporary states as in Spanish and Portuguese, but only in the present tense. For a permanent state, in the 3rd person, the copula used in the present tense is ndi (negative sí):
For the 1st and 2nd persons the particle ndi is combined with pronouns, e.g. ine "I":
For temporary states and location, the copula is the appropriate form of the defective verb -li:
For the 1st and 2nd persons the person is shown, as normally with Chichewa verbs, by the appropriate pronominal prefix:
In the past tenses, -li is used for both types of copula:
In the future, subjunctive, or conditional tenses, a form of the verb khala ("sit/dwell") is used as a copula:
Uniquely, the existence of the copulative verbalizer suffix in the Southern Peruvian Aymaran language variety, Muylaq' Aymara, is evident only in the surfacing of a vowel that would otherwise have been deleted because of the presence of a following suffix, lexically prespecified to suppress it. As the copulative verbalizer has no independent phonetic structure, it is represented by the Greek letter ʋ in the examples used in this entry.
Accordingly, unlike in most other Aymaran variants, whose copulative verbalizer is expressed with a vowel-lengthening component, -:, the presence of the copulative verbalizer in Muylaq' Aymara is often not apparent on the surface at all and is analyzed as existing only meta-linguistically. However, in a verb phrase like "It is old", the noun thantha meaning "old" does not require the copulative verbalizer, thantha-wa "It is old."
It is now pertinent to make some observations about the distribution of the copulative verbalizer. The best place to start is with words in which its presence or absence is obvious. When the vowel-suppressing first person simple tense suffix attaches to a verb, the vowel of the immediately preceding suffix is suppressed (in the examples in this subsection, the subscript "c" appears prior to vowel-suppressing suffixes in the interlinear gloss to better distinguish instances of deletion that arise from the presence of a lexically pre-specified suffix from those that arise from other (e.g. phonotactic) motivations). Consider the verb sara- which is inflected for the first person simple tense and so, predictably, loses its final root vowel: sar(a)-ct-wa "I go."
However, prior to the suffixation of the first person simple suffix -ct to the same root nominalized with the agentive nominalizer -iri, the word must be verbalized. The fact that the final vowel of -iri below is not suppressed indicates the presence of an intervening segment, the copulative verbalizer: sar(a)-iri-ʋ-t-wa "I usually go."
It is worthwhile to compare the copulative verbalizer in Muylaq' Aymara with that of La Paz Aymara, a variant which represents this suffix with vowel lengthening. Consider the near-identical sentences below, both translations of "I have a small house" in which the nominal root uta-ni "house-attributive" is verbalized with the copulative verbalizer, but the correspondence between the copulative verbalizer in these two variants is not always a strict one-to-one relation.
As in English, the verb "to be" (qopna) is irregular in Georgian (a Kartvelian language); different verb roots are employed in different tenses. The roots -ar-, -kn-, -qav-, and -qop- (past participle) are used in the present tense, future tense, past tense and the perfective tenses respectively. Examples:
In the last two examples (perfective and pluperfect), two roots are used in one verb compound. In the perfective tense, the root qop (which is the expected root for the perfective tense) is followed by the root ar, which is the root for the present tense. In the pluperfective tense, again, the root qop is followed by the past tense root qav. This formation is very similar to German (an Indo-European language), where the perfect and the pluperfect are expressed in the following way:
Here, gewesen is the past participle of sein ("to be") in German. In both examples, as in Georgian, this participle is used together with the present and the past forms of the verb in order to conjugate for the perfect and the pluperfect aspects.
Haitian Creole, a French-based creole language, has three forms of the copula: se, ye, and the zero copula, no word at all (the position of which will be indicated with Ø, just for purposes of illustration).
Although no textual record exists of Haitian Creole at its earliest stages of development from French, se is derived from French [se] (written c'est), which is the normal French contraction of [sə] (that, written ce) and the copula [e] (is, written est) (a form of the verb être).
The derivation of ye is less obvious; but we can assume that the French source was [ile] ("he/it is", written il est), which, in rapidly spoken French, is very commonly pronounced as [je] (typically written y est).
The use of a zero copula is unknown in French, and it is thought to be an innovation from the early days when Haitian Creole was first developing as a Romance-based pidgin. Latin also sometimes used a zero copula.
Which of se / ye / Ø is used in any given copula clause depends on complex syntactic factors that we can superficially summarize in the following four rules:
1. Use Ø (i.e., no word at all) in declarative sentences where the complement is an adjective phrase, prepositional phrase, or adverb phrase:
2. Use se when the complement is a noun phrase. But, whereas other verbs come after any tense/mood/aspect particles (like pa to mark negation, or te to explicitly mark past tense, or ap to mark progressive aspect), se comes before any such particles:
3. Use se where French and English have a dummy "it" subject:
4. Finally, use the other copula form ye in situations where the sentence's syntax leaves the copula at the end of a phrase:
The above is, however, only a simplified analysis.
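The four rules above lend themselves to a procedural restatement. The sketch below encodes the simplified analysis as a small decision function (the coarse category labels are invented for illustration, and the function deliberately ignores the finer syntactic detail the rules gloss over):

# Toy restatement of the four simplified copula rules for Haitian Creole.
# Category labels are hypothetical, for illustration only.
def choose_copula(complement, clause_final=False, dummy_subject=False):
    if clause_final:
        return "ye"         # rule 4: copula stranded at the end of a phrase
    if dummy_subject:
        return "se"         # rule 3: dummy-"it" subjects
    if complement == "noun phrase":
        return "se"         # rule 2: noun-phrase complements
    return None             # rule 1: zero copula elsewhere

print(choose_copula("adjective phrase"))               # None -> zero copula
print(choose_copula("noun phrase"))                    # se
print(choose_copula("noun phrase", clause_final=True)) # ye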
The Japanese copula (most often translated into English as an inflected form of "to be") has many forms. For example, the form da is used predicatively; na, attributively; de, adverbially or as a connector; and desu, predicatively or as a politeness indicator.
Examples:
Desu is the polite form of the copula. Thus, many sentences like the ones below are almost identical in meaning and differ only in the speaker's politeness to the addressee and in nuance of how assured the person is of their statement.
A predicate in Japanese is expressed by the predicative form of a verb, the predicative form of an adjective or noun + the predicative form of a copula.
Other forms of copula:
である de aru and であります de arimasu (used in writing and formal speaking); でございます de gozaimasu (used in public announcements, notices, etc.)
The copula is subject to dialectal variation throughout Japan, resulting in forms like や ya in Kansai and じゃ ja in Hiroshima.
Japanese also has two verbs corresponding to English "to be": aru and iru. They are not copulas but existential verbs. Aru is used for inanimate objects, including plants, whereas iru is used for animate things like people, animals, and robots, though there are exceptions to this generalization.
Japanese speakers, when learning English, often drop the auxiliary verbs "be" and "do", incorrectly believing that "be" is a semantically empty copula equivalent to desu and da.
For sentences with predicate nominatives, the copula "이" (i-) is added to the predicate nominative (with no space in between).
Some adjectives (usually colour adjectives) are nominalized and used with the copula "이"(i-).
1. Without the copula "이"(i-):
2. With the copula "이"(i-):
Some Korean adjectives are derived using the copula. Separating these forms and nominalizing the former part will often result in a sentence with a related, but different, meaning. Using the separated sentence in a situation where the un-separated sentence is appropriate is usually acceptable, as the listener can decide what the speaker is trying to say using the context.
In Chinese, both states and qualities are, in general, expressed with stative verbs (SV) with no need for a copula, e.g., in Chinese, "to be tired" (累 lèi), "to be hungry" (饿 è), "to be located at" (在 zài), "to be stupid" (笨 bèn) and so forth. A sentence can consist simply of a pronoun and such a verb: for example, 我饿 wǒ è ("I am hungry"). Usually, however, verbs expressing qualities are qualified by an adverb (meaning "very", "not", "quite", etc.); when not otherwise qualified, they are often preceded by 很 hěn, which in other contexts means "very", but in this use often has no particular meaning.
Only sentences with a noun as the complement (e.g., "This is my sister") use the copular verb "to be": 是; shì. This is used frequently; for example, instead of having a verb meaning "to be Chinese", the usual expression is "to be a Chinese person" (我是中国人; 我是中國人; wǒ shì Zhōngguórén; lit. "I am a Chinese person;" "I am Chinese"). This 是 is sometimes called an equative verb. Another possibility is for the complement to be just a noun modifier (ending in 的; de), the noun being omitted: 我的汽车是红色的; wǒ de qìchē shì hóngsè de; 'My car is red. (noun phrase indicator)'
Before the Han dynasty, the character 是 served as a demonstrative pronoun meaning "this." (This usage survives in some idioms and proverbs.) Some linguists believe that 是 developed into a copula because it often appeared, as a repetitive subject, after the subject of a sentence (in classical Chinese one could say, for example, "George W. Bush, this president of the United States", meaning "George W. Bush is the president of the United States"). The character 是 appears to be formed as a compound of characters with the meanings of "early" and "straight."
Another use of 是 in modern Chinese is in combination with the modifier 的 de to mean "yes" or to show agreement. For example:
Question: 你的汽车是不是红色的? nǐ de qìchē shì bú shì hóngsè de? "Is your car red or not?"
Response: 是的 shì de "Is", meaning "Yes", or 不是 bú shì "Not is", meaning "No."
(A more common way of showing that the person asking the question is correct is by simply saying "right" or "correct", 对 duì; the corresponding negative answer is 不对 bú duì, "not right.")
Yet another use of 是 is in the shì...(de) construction, which is used to emphasize a particular element of the sentence; see Chinese grammar § Cleft sentences.
In Hokkien 是 sī acts as the copula, and 是 /z/ is the equivalent in Wu Chinese. Cantonese uses 係 (Jyutping: hai6) instead of 是; similarly, Hakka uses 係 he.
In Siouan languages like Lakota, almost all words are, by their structure, verbs. So not only transitive, intransitive, and so-called "stative" verbs, but even nouns, often behave like verbs and do not need copulas.
For example, the word wičháša refers to a man, and the verb "to be a man" is expressed as wimáčhaša/winíčhaša/wičháša (I am/you are/he is a man). Yet there is also a copula héčha ("to be a ...") that is used in most cases: wičháša hemáčha/heníčha/héčha (I am/you are/he is a man).
To express the statement "I am a doctor by profession", one has to say pežúta wičháša hemáčha. But to express that that person is THE doctor (say, the one who had been phoned to help), one must use another copula, iyé ("to be the one"): pežúta wičháša (kiŋ) miyé yeló (medicine-man DEF ART I-am-the-one MALE ASSERT).
To refer to location (e.g., Robert is in the house), various verbs are used, e.g., yaŋkÁ (lit., "to sit") for humans, or háŋ/hé ("to stand upright") for inanimate objects of a certain shape. "Robert is in the house" could be translated as Robert thimáhel yaŋké (yeló), whereas "There's one restaurant next to the gas station" translates as Owótethipi wígli-oínažiŋ kiŋ hél isákhib waŋ hé.
The constructed language Lojban has two words that act similarly to a copula in natural languages. The clause me ... me'u turns whatever follows it into a predicate that means to be (among) what it follows. For example, me la .bob. (me'u) means "to be Bob", and me le ci mensi (me'u) means "to be one of the three sisters." The other is du, which is itself a predicate meaning that all its arguments are the same thing (equal). One word that is often mistaken for a copula in Lojban, but is not one, is cu. It merely indicates that the word that follows is the main predicate of the sentence. For example, lo pendo be mi cu zgipre means "my friend is a musician", but the word cu does not correspond to English is; instead, the word zgipre, which is a predicate, corresponds to the entire phrase "is a musician". The word cu is used to prevent lo pendo be mi zgipre, which would mean "the friend-of-me type of musician". |
2001-05-07T17:51:18Z | 2023-11-26T19:51:43Z | https://en.wikipedia.org/wiki/Copula_(linguistics)
5,635 | Christopher Columbus | Christopher Columbus (/kəˈlʌmbəs/; between 25 August and 31 October 1451 – 20 May 1506) was an Italian explorer and navigator from the Republic of Genoa who completed four Spanish-based voyages across the Atlantic Ocean sponsored by the Catholic Monarchs, opening the way for the widespread European exploration and European colonization of the Americas. His expeditions were the first known European contact with the Caribbean and Central and South America.
The name Christopher Columbus is the anglicisation of the Latin Christophorus Columbus. Growing up on the coast of Liguria, he went to sea at a young age and travelled widely, as far north as the British Isles and as far south as what is now Ghana. He married Portuguese noblewoman Filipa Moniz Perestrelo, who bore a son, Diego, and was based in Lisbon for several years. He later took a Castilian mistress, Beatriz Enríquez de Arana, who bore a son, Ferdinand.
Largely self-educated, Columbus was knowledgeable in geography, astronomy, and history. He developed a plan to seek a western sea passage to the East Indies, hoping to profit from the lucrative spice trade. After the Granada War, and Columbus's persistent lobbying in multiple kingdoms, the Catholic Monarchs, Queen Isabella I and King Ferdinand II, agreed to sponsor a journey west. Columbus left Castile in August 1492 with three ships and made landfall in the Americas on 12 October, ending the period of human habitation in the Americas now referred to as the pre-Columbian era. His landing place was an island in the Bahamas, known by its native inhabitants as Guanahani. He then visited the islands now known as Cuba and Hispaniola, establishing a colony in what is now Haiti. Columbus returned to Castile in early 1493, with captured natives. Word of his voyage soon spread throughout Europe.
Columbus made three further voyages to the Americas, exploring the Lesser Antilles in 1493, Trinidad and the northern coast of South America in 1498, and the east coast of Central America in 1502. Many names he gave to geographical features, particularly islands, are still in use. He gave the name indios ("Indians") to the indigenous peoples he encountered. The extent to which he was aware the Americas were a wholly separate landmass is uncertain; he never clearly renounced his belief he had reached the Far East. As a colonial governor, Columbus was accused by some of his contemporaries of significant brutality and removed from the post. Columbus's strained relationship with the Crown of Castile and its colonial administrators in America led to his arrest and removal from Hispaniola in 1500, and later to protracted litigation over the privileges he and his heirs claimed were owed to them by the crown.
Columbus's expeditions inaugurated a period of exploration, conquest, and colonization that lasted for centuries, thus bringing the Americas into the European sphere of influence. The transfer of plants, animals, precious metals, culture, human populations, technology, diseases, and ideas between the Old World and New World that followed his first voyage are known as the Columbian exchange. These events and the effects which persist to the present are often cited as the beginning of the modern era. Columbus was widely celebrated in the centuries after his death, but public perception fractured in the 21st century due to greater attention to the harms committed under his governance, particularly the beginning of the depopulation of Hispaniola's indigenous Taínos, caused by Old World diseases and mistreatment, including slavery. Many places in the Western Hemisphere bear his name, including the South American country of Colombia, the Canadian province of British Columbia, the American city Columbus, Ohio, and the U.S. capital, the District of Columbia.
Columbus's early life is obscure, but scholars believe he was born in the Republic of Genoa between 25 August and 31 October 1451. His father was Domenico Colombo, a wool weaver who worked in Genoa and Savona and owned a cheese stand at which young Christopher worked. His mother was Susanna Fontanarossa. He had three brothers—Bartholomew, Giovanni Pellegrino, and Giacomo (also called Diego)—as well as a sister, Bianchinetta. Bartholomew ran a cartography workshop in Lisbon for at least part of his adulthood.
His native language is presumed to have been a Genoese dialect (Ligurian), though Columbus probably never wrote in it. His name in 16th-century Genoese was Cristoffa Corombo, in Italian Cristoforo Colombo, and in Spanish Cristóbal Colón.
In one of his writings, he says he went to sea at 14. In 1470, the family moved to Savona, where Domenico took over a tavern. Some modern authors have argued that he was not from Genoa, but from the Aragon region of Spain or from Portugal. These competing hypotheses have been discounted by most scholars.
In 1473, Columbus began his apprenticeship as business agent for the wealthy Spinola, Centurione, and Di Negro families of Genoa. Later, he made a trip to the Greek island Chios in the Aegean Sea, then ruled by Genoa. In May 1476, he took part in an armed convoy sent by Genoa to carry valuable cargo to northern Europe. He probably visited Bristol, England, and Galway, Ireland, where he may have visited St. Nicholas' Collegiate Church. It has been speculated he went to Iceland in 1477, though many scholars doubt this. It is known that in the autumn of 1477, he sailed on a Portuguese ship from Galway to Lisbon, where he found his brother Bartholomew, and they continued trading for the Centurione family. Columbus based himself in Lisbon from 1477 to 1485. In 1478, the Centuriones sent Columbus on a sugar-buying trip to Madeira. He married Filipa Moniz Perestrelo, daughter of Bartolomeu Perestrello, a Portuguese nobleman of Lombard origin, who had been the donatary captain of Porto Santo.
In 1479 or 1480, Columbus's son Diego was born. Between 1482 and 1485, Columbus traded along the coasts of West Africa, reaching the Portuguese trading post of Elmina at the Guinea coast in present-day Ghana. Before 1484, Columbus returned to Porto Santo to find that his wife had died. He returned to Portugal to settle her estate and take Diego with him.
He left Portugal for Castile in 1485, where he took a mistress in 1487, a 20-year-old orphan named Beatriz Enríquez de Arana. It is likely that Beatriz met Columbus when he was in Córdoba, a gathering place for Genoese merchants and where the court of the Catholic Monarchs was located at intervals. Beatriz, unmarried at the time, gave birth to Columbus's second son, Fernando Columbus, in July 1488, named for the monarch of Aragon. Columbus recognized the boy as his offspring. Columbus entrusted his older, legitimate son Diego to take care of Beatriz and pay the pension set aside for her following his death, but Diego was negligent in his duties.
Columbus learned Latin, Portuguese, and Castilian. He read widely about astronomy, geography, and history, including the works of Ptolemy, Pierre d'Ailly's Imago Mundi, the travels of Marco Polo and Sir John Mandeville, Pliny's Natural History, and Pope Pius II's Historia rerum ubique gestarum. According to historian Edmund Morgan,
Columbus was not a scholarly man. Yet he studied these books, made hundreds of marginal notations in them and came out with ideas about the world that were characteristically simple and strong and sometimes wrong ...
Under the Mongol Empire's hegemony over Asia and the Pax Mongolica, Europeans had long enjoyed a safe land passage on the Silk Road to India and parts of East Asia, including China, and to Maritime Southeast Asia, which were sources of valuable goods. With the fall of Constantinople to the Ottoman Empire in 1453, the Silk Road was closed to Christian traders.
In 1474, the Florentine astronomer Paolo dal Pozzo Toscanelli suggested to King Afonso V of Portugal that sailing west across the Atlantic would be a quicker way to reach the Maluku (Spice) Islands, China, Japan and India than the route around Africa, but Afonso rejected his proposal. In the 1480s, Columbus and his brother proposed a plan to reach the East Indies by sailing west. Columbus supposedly wrote to Toscanelli in 1481 and received encouragement, along with a copy of a map the astronomer had sent Afonso implying that a westward route to Asia was possible. Columbus's plans were complicated by Bartolomeu Dias's rounding of the Cape of Good Hope in 1488, which suggested the Cape Route around Africa to Asia.
Carol Delaney and other commentators have argued that Columbus was a Christian millennialist and apocalypticist and that these beliefs motivated his quest for Asia in a variety of ways. Columbus often wrote about seeking gold in the log books of his voyages, and about acquiring it "in such quantity that the sovereigns... will undertake and prepare to go conquer the Holy Sepulcher" in fulfillment of Biblical prophecy. Columbus often wrote about converting all races to Christianity. Abbas Hamdani argues that Columbus was motivated by the hope of "[delivering] Jerusalem from Muslim hands" by "using the resources of newly discovered lands".
Despite a popular misconception to the contrary, nearly all educated Westerners of Columbus's time knew that the Earth is spherical, a concept that had been understood since antiquity. The techniques of celestial navigation, which uses the position of the Sun and the stars in the sky, had long been in use by astronomers and were beginning to be implemented by mariners.
As far back as the 3rd century BC, Eratosthenes had correctly computed the circumference of the Earth by using simple geometry and studying the shadows cast by objects at two remote locations. In the 1st century BC, Posidonius confirmed Eratosthenes's results by comparing stellar observations at two separate locations. These measurements were widely known among scholars, but Ptolemy's use of the smaller, old-fashioned units of distance led Columbus to underestimate the size of the Earth by about a third.
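The geometry behind Eratosthenes's method can be stated in one line: if the Sun's rays arrive at angles differing by $\theta$ at two sites separated by a north–south distance $d$, the full circumference $C$ follows by simple proportion. A minimal worked form, using the round figures traditionally attributed to Eratosthenes (a $7.2^\circ$ difference in shadow angle between Syene and Alexandria, taken to be 5,000 stadia apart; the exact length of his stadion is debated):

$$C = \frac{360^\circ}{\theta}\, d = \frac{360^\circ}{7.2^\circ} \times 5{,}000 \text{ stadia} = 250{,}000 \text{ stadia}.$$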
Three cosmographical parameters determined the bounds of Columbus's enterprise: the distance across the ocean between Europe and Asia, which depended on the extent of the oikumene, i.e., the Eurasian landmass stretching east–west between Spain and China; the circumference of the Earth; and the number of miles or leagues in a degree of longitude, which could be deduced from theories, held by medieval followers of Aristotle, about the relative extent of water and land on the globe.
From Pierre d'Ailly's Imago Mundi (1410), Columbus learned of Alfraganus's estimate that a degree of latitude (equal to approximately a degree of longitude along the equator) spanned 56.67 Arabic miles (equivalent to 66.2 nautical miles, 122.6 kilometers or 76.2 mi), but he did not realize that this was expressed in the Arabic mile (about 1,830 meters or 1.14 mi) rather than the shorter Roman mile (about 1,480 m) with which he was familiar. Columbus therefore estimated the size of the Earth to be about 75% of Eratosthenes's calculation, and the distance westward from the Canary Islands to the Indies as only 68 degrees, equivalent to 3,080 nmi (5,700 km; 3,540 mi) (a 58% error).
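The effect of the unit confusion can be made explicit from the figures just given; this is a back-of-the-envelope reconstruction, not a calculation Columbus is known to have recorded. Reading Alfraganus's 56.67 miles per degree as Roman miles of about 1.48 km implies a circumference of

$$C \approx 360 \times 56.67 \times 1.48 \text{ km} \approx 30{,}200 \text{ km},$$

roughly 75% of the true value of about 40,000 km. On the same reading, Columbus's 68 degrees from the Canary Islands to the Indies works out to $68 \times 56.67 \times 1.48 \approx 5{,}700$ km, or roughly 3,080 nautical miles, matching the figures above. (The true length of the Arabic mile, and hence of Alfraganus's degree, remains a matter of scholarly debate.)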
Most scholars of the time accepted Ptolemy's estimate that Eurasia spanned 180° longitude, rather than the actual 130° (to the Chinese mainland) or 150° (to Japan at the latitude of Spain). Columbus believed an even higher estimate, leaving a smaller percentage for water. In d'Ailly's Imago Mundi, Columbus read Marinus of Tyre's estimate that the longitudinal span of Eurasia was 225° at the latitude of Rhodes. Some historians, such as Samuel Morison, have suggested that he followed the statement in the apocryphal book 2 Esdras (6:42) that "six parts [of the globe] are habitable and the seventh is covered with water." He was also aware of Marco Polo's claim that Japan (which he called "Cipangu") was some 2,414 km (1,500 mi) to the east of China ("Cathay"), and closer to the equator than it is. He was influenced by Toscanelli's idea that there were inhabited islands even farther to the east than Japan, including the mythical Antillia, which he thought might lie not much farther to the west than the Azores.
Based on his sources, Columbus estimated a distance of 2,400 nmi (4,400 km; 2,800 mi) from the Canary Islands west to Japan; the actual distance is 10,600 nmi (19,600 km; 12,200 mi). No ship in the 15th century could have carried enough food and fresh water for such a long voyage, and the dangers involved in navigating through the uncharted ocean would have been formidable. Most European navigators reasonably concluded that a westward voyage from Europe to Asia was unfeasible. The Catholic Monarchs, however, having completed the Reconquista, an expensive war against the Moors in the Iberian Peninsula, were eager to obtain a competitive edge over other European countries in the quest for trade with the Indies. Columbus's project, though far-fetched, held the promise of such an advantage.
Though Columbus was wrong about the number of degrees of longitude that separated Europe from the Far East and about the distance that each degree represented, he did take advantage of the trade winds, which would prove to be the key to his successful navigation of the Atlantic Ocean. He planned to first sail to the Canary Islands before continuing west with the northeast trade wind. Part of the return to Spain would require traveling against the wind using an arduous sailing technique called beating, during which progress is made very slowly. To effectively make the return voyage, Columbus would need to follow the curving trade winds northeastward to the middle latitudes of the North Atlantic, where he would be able to catch the "westerlies" that blow eastward to the coast of Western Europe.
The navigational technique for travel in the Atlantic appears to have been exploited first by the Portuguese, who referred to it as the volta do mar ('turn of the sea'). Through his marriage to his first wife, Felipa Perestrello, Columbus had access to the nautical charts and logs that had belonged to her deceased father, Bartolomeu Perestrello, who had served as a captain in the Portuguese navy under Prince Henry the Navigator. In the mapmaking shop where he worked with his brother Bartholomew, Columbus also had ample opportunity to hear the stories of old seamen about their voyages to the western seas, but his knowledge of the Atlantic wind patterns was still imperfect at the time of his first voyage. By sailing due west from the Canary Islands during hurricane season, skirting the so-called horse latitudes of the mid-Atlantic, he risked being becalmed and running into a tropical cyclone, both of which he avoided by chance.
By about 1484, Columbus proposed his planned voyage to King John II of Portugal. The king submitted Columbus's proposal to his advisors, who rejected it, correctly, on the grounds that Columbus's estimate for a voyage of 2,400 nmi was only a quarter of what it should have been. In 1488, Columbus again appealed to the court of Portugal, and John II again granted him an audience. That meeting also proved unsuccessful, in part because not long afterwards Bartolomeu Dias returned to Portugal with news of his successful rounding of the southern tip of Africa (near the Cape of Good Hope).
Columbus sought an audience with the monarchs Ferdinand II of Aragon and Isabella I of Castile, who had united several kingdoms in the Iberian Peninsula by marrying and now ruled together. On 1 May 1486, permission having been granted, Columbus presented his plans to Queen Isabella, who, in turn, referred it to a committee. The learned men of Spain, like their counterparts in Portugal, replied that Columbus had grossly underestimated the distance to Asia. They pronounced the idea impractical and advised the Catholic Monarchs to pass on the proposed venture. To keep Columbus from taking his ideas elsewhere, and perhaps to keep their options open, the sovereigns gave him an allowance, totaling about 14,000 maravedis for the year, or about the annual salary of a sailor. In May 1489, the queen sent him another 10,000 maravedis, and the same year the monarchs furnished him with a letter ordering all cities and towns under their dominion to provide him food and lodging at no cost.
Columbus also dispatched his brother Bartholomew to the court of Henry VII of England to inquire whether the English crown might sponsor his expedition, but Bartholomew was captured by pirates en route and only arrived in early 1491. By that time, Columbus had retreated to La Rábida Friary, where the Spanish crown sent him 20,000 maravedis to buy new clothes and instructions to return to the Spanish court for renewed discussions.
Columbus waited at King Ferdinand's camp until Ferdinand and Isabella conquered Granada, the last Muslim stronghold on the Iberian Peninsula, in January 1492. A council led by Isabella's confessor, Hernando de Talavera, found Columbus's proposal to reach the Indies implausible. Columbus had left for France when Ferdinand intervened, first sending Talavera and Bishop Diego Deza to appeal to the queen. Isabella was finally convinced by the king's clerk Luis de Santángel, who argued that Columbus would take his ideas elsewhere, and offered to help arrange the funding. Isabella then sent a royal guard to fetch Columbus, who had traveled 2 leagues (over 10 km) toward Córdoba.
In the April 1492 "Capitulations of Santa Fe", King Ferdinand and Queen Isabella promised Columbus that if he succeeded he would be given the rank of Admiral of the Ocean Sea and appointed Viceroy and Governor of all the new lands he might claim for Spain. He had the right to nominate three persons, from whom the sovereigns would choose one, for any office in the new lands. He would be entitled to 10% (diezmo) of all the revenues from the new lands in perpetuity. He also would have the option of buying one-eighth interest in any commercial venture in the new lands, and receive one-eighth (ochavo) of the profits.
In 1500, during his third voyage to the Americas, Columbus was arrested and dismissed from his posts. He and his sons, Diego and Fernando, then conducted a lengthy series of court cases against the Castilian crown, known as the pleitos colombinos, alleging that the Crown had illegally reneged on its contractual obligations to Columbus and his heirs. The Columbus family had some success in their first litigation, as a judgment of 1511 confirmed Diego's position as viceroy but reduced his powers. Diego resumed litigation in 1512, which lasted until 1536, and further disputes initiated by heirs continued until 1790.
Between 1492 and 1504, Columbus completed four round-trip voyages between Spain and the Americas, each voyage being sponsored by the Crown of Castile. On his first voyage he reached the Americas, initiating the European exploration and colonization of the continent, as well as the Columbian exchange. His role in history is thus important to the Age of Discovery, Western history, and human history writ large.
In Columbus's letter on the first voyage, published following his first return to Spain, he claimed that he had reached Asia, as previously described by Marco Polo and other Europeans. Over his subsequent voyages, Columbus refused to acknowledge that the lands he visited and claimed for Spain were not part of Asia, in the face of mounting evidence to the contrary. This might explain, in part, why the American continent was named after the Florentine explorer Amerigo Vespucci—who received credit for recognizing it as a "New World"—and not after Columbus.
On the evening of 3 August 1492, Columbus departed from Palos de la Frontera with three ships. The largest was a carrack, the Santa María, owned and captained by Juan de la Cosa, and under Columbus's direct command. The other two were smaller caravels, the Pinta and the Niña, piloted by the Pinzón brothers. Columbus first sailed to the Canary Islands, where he restocked provisions and made repairs, then departed from San Sebastián de La Gomera on 6 September for what turned out to be a five-week voyage across the ocean.
On 7 October, the crew spotted "[i]mmense flocks of birds". On 11 October, Columbus changed the fleet's course to due west, and sailed through the night, believing land was soon to be found. At around 02:00 the following morning, a lookout on the Pinta, Rodrigo de Triana, spotted land. The captain of the Pinta, Martín Alonso Pinzón, verified the sight of land and alerted Columbus. Columbus later maintained that he had already seen a light on the land a few hours earlier, thereby claiming for himself the lifetime pension promised by Ferdinand and Isabella to the first person to sight land. Columbus called this island (in what is now the Bahamas) San Salvador (meaning "Holy Savior"); the natives called it Guanahani. Christopher Columbus's journal entry of 12 October 1492 states:
I saw some who had marks of wounds on their bodies and I made signs to them asking what they were; and they showed me how people from other islands nearby came there and tried to take them, and how they defended themselves; and I believed and believe that they come here from tierra firme to take them captive. They should be good and intelligent servants, for I see that they say very quickly everything that is said to them; and I believe they would become Christians very easily, for it seemed to me that they had no religion. Our Lord pleasing, at the time of my departure I will take six of them from here to Your Highnesses in order that they may learn to speak.
Columbus called the inhabitants of the lands that he visited Los Indios (Spanish for "Indians"). He initially encountered the Lucayan, Taíno, and Arawak peoples. Noting their gold ear ornaments, Columbus took some of the Arawaks prisoner and insisted that they guide him to the source of the gold. Columbus did not believe he needed to create a fortified outpost, writing, "the people here are simple in war-like matters ... I could conquer the whole of them with fifty men, and govern them as I pleased." The Taínos told Columbus that another indigenous tribe, the Caribs, were fierce warriors and cannibals, who made frequent raids on the Taínos, often capturing their women, although this may have been a belief perpetuated by the Spaniards to justify enslaving them.
Columbus also explored the northeast coast of Cuba, where he landed on 28 October. On the night of 26 November, Martín Alonso Pinzón took the Pinta on an unauthorized expedition in search of an island called "Babeque" or "Baneque", which the natives had told him was rich in gold. Columbus, for his part, continued to the northern coast of Hispaniola, where he landed on 6 December. There, the Santa María ran aground on 25 December 1492 and had to be abandoned. The wreck was used as a target for cannon fire to impress the native peoples. Columbus was received by the native cacique Guacanagari, who gave him permission to leave some of his men behind. Columbus left 39 men, including the interpreter Luis de Torres, and founded the settlement of La Navidad, in present-day Haiti. Columbus took more natives prisoner and continued his exploration. He kept sailing along the northern coast of Hispaniola with a single ship until he encountered Pinzón and the Pinta on 6 January.
On 13 January 1493, Columbus made his last stop of this voyage in the Americas, in the Bay of Rincón in northeast Hispaniola. There he encountered the Ciguayos, the only natives who offered violent resistance during this voyage. The Ciguayos refused to trade the number of bows and arrows that Columbus desired; in the ensuing clash one Ciguayo was stabbed in the buttocks and another wounded with an arrow in his chest. Because of these events, Columbus called the inlet the Golfo de Las Flechas (Bay of Arrows).
Columbus headed for Spain on the Niña, but a storm separated him from the Pinta, and forced the Niña to stop at the island of Santa Maria in the Azores. Half of his crew went ashore to say prayers of thanksgiving in a chapel for having survived the storm. But while praying, they were imprisoned by the governor of the island, ostensibly on suspicion of being pirates. After a two-day standoff, the prisoners were released, and Columbus again set sail for Spain.
Another storm forced Columbus into the port at Lisbon. From there he went to Vale do Paraíso north of Lisbon to meet King John II of Portugal, who told Columbus that he believed the voyage to be in violation of the 1479 Treaty of Alcáçovas. After spending more than a week in Portugal, Columbus set sail for Spain. Returning to Palos on 15 March 1493, he was given a hero's welcome and soon afterward received by Isabella and Ferdinand in Barcelona.
Columbus's letter on the first voyage, dispatched to the Spanish court, was instrumental in spreading the news throughout Europe about his voyage. Almost immediately after his arrival in Spain, printed versions began to appear, and word of his voyage spread rapidly. Most people initially believed that he had reached Asia. The Bulls of Donation, three papal bulls of Pope Alexander VI delivered in 1493, purported to grant overseas territories to Portugal and the Catholic Monarchs of Spain. They were replaced by the Treaty of Tordesillas of 1494.
The two earliest published copies of Columbus's letter on the first voyage aboard the Niña were donated in 2017 by the Jay I. Kislak Foundation to the University of Miami library in Coral Gables, Florida, where they are housed.
On 24 September 1493, Columbus sailed from Cádiz with 17 ships and supplies to establish permanent colonies in the Americas. He sailed with nearly 1,500 men, including sailors, soldiers, priests, carpenters, stonemasons, metalworkers, and farmers. Among the expedition members were Alvarez Chanca, a physician who wrote a detailed account of the second voyage; Juan Ponce de León, the first governor of Puerto Rico and Florida; the father of Bartolomé de las Casas; Juan de la Cosa, a cartographer who is credited with making the first world map depicting the New World; and Columbus's youngest brother Diego. The fleet stopped at the Canary Islands to take on more supplies, and set sail again on 7 October, deliberately taking a more southerly course than on the first voyage.
On 3 November, they arrived in the Windward Islands; the first island they encountered was named Dominica by Columbus, but not finding a good harbor there, they anchored off a nearby smaller island, which he named Mariagalante, now a part of Guadeloupe and called Marie-Galante. Among the other islands Columbus named on this voyage were Montserrat, Antigua, Saint Martin, and the Virgin Islands.
On 22 November, Columbus returned to Hispaniola to visit La Navidad, where 39 Spaniards had been left during the first voyage. Columbus found the fort in ruins, destroyed by the Taínos after some of the Spaniards reportedly antagonized their hosts with their unrestrained lust for gold and women. Columbus then established a poorly located and short-lived settlement to the east, La Isabela, in the present-day Dominican Republic.
From April to August 1494, Columbus explored Cuba and Jamaica, then returned to Hispaniola. By the end of 1494, disease and famine had killed two-thirds of the Spanish settlers. Columbus implemented encomienda, a Spanish labor system that rewarded conquerors with the labor of conquered non-Christian people. Columbus executed Spanish colonists for minor crimes, and used dismemberment as punishment. Columbus and the colonists enslaved the indigenous people, including children. Natives were beaten, raped, and tortured for the location of imagined gold. Thousands committed suicide rather than face the oppression.
In February 1495, Columbus rounded up about 1,500 Arawaks, some of whom had rebelled, in a great slave raid. About 500 of the strongest were shipped to Spain as slaves, with about two hundred of those dying en route.
In June 1495, the Spanish crown sent ships and supplies to Hispaniola. In October, Florentine merchant Gianotto Berardi, who had won the contract to provision the fleet of Columbus's second voyage and to supply the colony on Hispaniola, received almost 40,000 maravedís worth of enslaved Indians. He renewed his effort to get supplies to Columbus, and was working to organize a fleet when he suddenly died in December. On 10 March 1496, having been away about 30 months, the fleet departed La Isabela. On 8 June the crew sighted land somewhere between Lisbon and Cape St. Vincent, and disembarked in Cádiz on 11 June.
On 30 May 1498, Columbus left with six ships from Sanlúcar, Spain. The fleet called at Madeira and the Canary Islands, where it divided in two, with three ships heading for Hispaniola and the other three vessels, commanded by Columbus, sailing south to the Cape Verde Islands and then westward across the Atlantic. It is probable that this expedition was intended at least partly to confirm rumors of a large continent south of the Caribbean Sea, that is, South America.
On 31 July they sighted Trinidad, the most southerly of the Caribbean islands. On 5 August, Columbus sent several small boats ashore on the southern side of the Paria Peninsula in what is now Venezuela, near the mouth of the Orinoco river. This was the first recorded landing of Europeans on the mainland of South America, which Columbus realized must be a continent. The fleet then sailed to the islands of Chacachacare and Margarita, reaching the latter on 14 August, and sighted Tobago and Grenada from afar, according to some scholars.
On 19 August, Columbus returned to Hispaniola. There he found settlers in rebellion against his rule, provoked in part by his unfulfilled promises of riches. Columbus had some of the Europeans tried for their disobedience; at least one rebel leader was hanged.
In October 1499, Columbus sent two ships to Spain, asking the Court of Spain to appoint a royal commissioner to help him govern. By this time, accusations of tyranny and incompetence on the part of Columbus had also reached the Court. The sovereigns sent Francisco de Bobadilla, a relative of Marquesa Beatriz de Bobadilla, a patron of Columbus and a close friend of Queen Isabella, to investigate the accusations of brutality made against the Admiral. Arriving in Santo Domingo while Columbus was away, Bobadilla was immediately met with complaints about all three Columbus brothers. He moved into Columbus's house and seized his property, took depositions from the Admiral's enemies, and declared himself governor.
Bobadilla reported to Spain that Columbus once punished a man found guilty of stealing corn by having his ears and nose cut off and then selling him into slavery. He claimed that Columbus regularly used torture and mutilation to govern Hispaniola. Testimony recorded in the report stated that Columbus congratulated his brother Bartholomew on "defending the family" when the latter ordered a woman paraded naked through the streets and then had her tongue cut out because she had "spoken ill of the admiral and his brothers". The document also describes how Columbus put down native unrest and revolt: he first ordered a brutal suppression of the uprising in which many natives were killed, and then paraded their dismembered bodies through the streets in an attempt to discourage further rebellion. Columbus vehemently denied the charges. The neutrality and accuracy of the accusations and investigations of Bobadilla toward Columbus and his brothers have been disputed by historians, given the anti-Italian sentiment of the Spaniards and Bobadilla's desire to take over Columbus's position.
In early October 1500, Columbus and Diego presented themselves to Bobadilla, and were put in chains aboard La Gorda, the caravel on which Bobadilla had arrived at Santo Domingo. They were returned to Spain, and languished in jail for six weeks before King Ferdinand ordered their release. Not long after, the king and queen summoned the Columbus brothers to the Alhambra palace in Granada. The sovereigns expressed indignation at the actions of Bobadilla, who was then recalled and ordered to make restitution of the property he had confiscated from Columbus. The royal couple heard the brothers' pleas; restored their freedom and wealth; and, after much persuasion, agreed to fund Columbus's fourth voyage. However, Nicolás de Ovando was to replace Bobadilla and be the new governor of the West Indies.
New light was shed on the seizure of Columbus and his brother Bartholomew, the Adelantado, with the discovery by archivist Isabel Aguirre of an incomplete copy of the testimonies against them gathered by Francisco de Bobadilla at Santo Domingo in 1500. She found a manuscript copy of this pesquisa (inquiry) in the Archive of Simancas, Spain, uncatalogued until she and Consuelo Varela published their book, La caída de Cristóbal Colón: el juicio de Bobadilla (The fall of Christopher Colón: the judgement of Bobadilla) in 2006.
On 9 May 1502, Columbus left Cádiz with his flagship Santa María and three other vessels. The ships were crewed by 140 men, including his brother Bartholomew as second in command and his son Fernando. He sailed to Asilah on the Moroccan coast to rescue Portuguese soldiers said to be besieged by the Moors. The siege had been lifted by the time they arrived, so the Spaniards stayed only a day and continued on to the Canary Islands.
On 15 June, the fleet arrived at Martinique, where it lingered for several days. A hurricane was forming, so Columbus continued westward, hoping to find shelter on Hispaniola. He arrived at Santo Domingo on 29 June, but was denied port, and the new governor, Nicolás de Ovando, refused to listen to his warning that a hurricane was approaching. Instead, while Columbus's ships sheltered at the mouth of the Rio Jaina, the first Spanish treasure fleet sailed into the hurricane. Columbus's ships survived with only minor damage, while 20 of the 30 ships in the governor's fleet were lost along with 500 lives (including that of Francisco de Bobadilla). Although a few surviving ships managed to straggle back to Santo Domingo, Aguja, the fragile ship carrying Columbus's personal belongings and his 4,000 pesos in gold, was the sole vessel to reach Spain. The gold was his tenth (décimo) of the profits from Hispaniola, equal to 240,000 maravedis, guaranteed by the Catholic Monarchs in 1492.
After a brief stop at Jamaica, Columbus sailed to Central America, arriving at the coast of Honduras on 30 July. Here Bartholomew found native merchants and a large canoe. On 14 August, Columbus landed on the continental mainland at Punta Caxinas, now Puerto Castilla, Honduras. He spent two months exploring the coasts of Honduras, Nicaragua, and Costa Rica, seeking a strait in the western Caribbean through which he could sail to the Indian Ocean. Sailing south along the Nicaraguan coast, he found a channel that led into Almirante Bay in Panama on 5 October.
As soon as his ships anchored in Almirante Bay, Columbus encountered Ngäbe people in canoes who were wearing gold ornaments. In January 1503, he established a garrison at the mouth of the Belén River. Columbus left for Hispaniola on 16 April. On 10 May he sighted the Cayman Islands, naming them "Las Tortugas" after the numerous sea turtles there. His ships sustained damage in a storm off the coast of Cuba. Unable to travel farther, on 25 June 1503 they were beached in Saint Ann Parish, Jamaica.
For a year, Columbus and 230 of his men remained stranded on Jamaica. Diego Méndez de Segura, who had shipped out as a personal secretary to Columbus, and a Spanish shipmate called Bartolomé Flisco, along with six natives, paddled a canoe to get help from Hispaniola. The governor, Nicolás de Ovando y Cáceres, detested Columbus and obstructed all efforts to rescue him and his men. In the meantime Columbus, in a desperate effort to induce the natives to continue provisioning him and his hungry men, won their favor by predicting a lunar eclipse for 29 February 1504, using Abraham Zacuto's astronomical charts. Despite the governor's obstruction, Christopher Columbus and his men were rescued on 28 June 1504, and arrived in Sanlúcar, Spain, on 7 November.
Columbus had always claimed that the conversion of non-believers was one reason for his explorations, and he grew increasingly religious in his later years. Probably with the assistance of his son Diego and his friend the Carthusian monk Gaspar Gorricio, Columbus produced two books during his later years: a Book of Privileges (1502), detailing and documenting the rewards from the Spanish Crown to which he believed he and his heirs were entitled, and a Book of Prophecies (1505), in which passages from the Bible were used to place his achievements as an explorer in the context of Christian eschatology.
In his later years, Columbus demanded that the Crown of Castile give him his tenth of all the riches and trade goods yielded by the new lands, as stipulated in the Capitulations of Santa Fe. Because he had been relieved of his duties as governor, the Crown did not feel bound by that contract and his demands were rejected. After his death, his heirs sued the Crown for a part of the profits from trade with America, as well as other rewards. This led to a protracted series of legal disputes known as the pleitos colombinos ("Columbian lawsuits").
During a violent storm on his first return voyage, Columbus, then 41, had suffered an attack of what was believed at the time to be gout. In subsequent years, he was plagued with what was thought to be influenza and other fevers, bleeding from the eyes, temporary blindness and prolonged attacks of gout. The attacks increased in duration and severity, sometimes leaving Columbus bedridden for months at a time, and culminated in his death 14 years later.
Based on Columbus's lifestyle and the described symptoms, some modern commentators suspect that he suffered from reactive arthritis, rather than gout. Reactive arthritis is a joint inflammation caused by intestinal bacterial infections or after acquiring certain sexually transmitted diseases (primarily chlamydia or gonorrhea). In 2006, Frank C. Arnett, a medical doctor, and historian Charles Merrill published their paper in The American Journal of the Medical Sciences proposing that Columbus had a form of reactive arthritis; Merrill made the case in the same paper that Columbus was the son of Catalans and his mother possibly a member of a prominent converso (converted Jew) family. "It seems likely that [Columbus] acquired reactive arthritis from food poisoning on one of his ocean voyages because of poor sanitation and improper food preparation", says Arnett, a rheumatologist and professor of internal medicine, pathology and laboratory medicine at the University of Texas Medical School at Houston.
Some historians such as H. Micheal Tarver and Emily Slape, as well as medical doctors such as Arnett and Antonio Rodríguez Cuartero, believe that Columbus had such a form of reactive arthritis, but according to other authorities, this is "speculative", or "very speculative".
After his arrival at Sanlúcar from his fourth voyage, and following Queen Isabella's death, an ill Columbus settled in Seville in April 1505. He stubbornly continued to make pleas to the Crown to defend his own personal privileges and his family's. He moved to Segovia (where the court was at the time) on a mule by early 1506, and, on the occasion of the wedding of King Ferdinand with Germaine of Foix in Valladolid, Spain, in March 1506, Columbus moved to that city to persist with his demands. On 20 May 1506, aged 54, Columbus died in Valladolid.
Columbus's remains were first buried at a convent in Valladolid, then moved to the monastery of La Cartuja in Seville (southern Spain) by the will of his son Diego. They may have been exhumed in 1513 and interred at the Seville Cathedral. In about 1536, the remains of both Columbus and his son Diego were moved to a cathedral in Colonial Santo Domingo, in the present-day Dominican Republic; Columbus had requested to be buried on the island. By some accounts, in 1793, when France took over the entire island of Hispaniola, Columbus's remains were moved to Havana, Cuba. After Cuba became independent following the Spanish–American War in 1898, at least some of these remains were moved back to the Seville Cathedral, where they were placed on an elaborate catafalque.
In June 2003, DNA samples were taken from these remains as well as those of Columbus's brother Diego and younger son Fernando. Initial observations suggested that the bones did not appear to match Columbus's physique or age at death. DNA extraction proved difficult; only short fragments of mitochondrial DNA could be isolated. These matched corresponding DNA from Columbus's brother, supporting the conclusion that both individuals shared the same mother. Such evidence, together with anthropologic and historic analyses, led the researchers to conclude that the remains belonged to Christopher Columbus.
In 1877, a priest discovered a lead box at Santo Domingo inscribed: "Discoverer of America, First Admiral". Inscriptions found the next year read "Last of the remains of the first admiral, Sire Christopher Columbus, discoverer." The box contained bones of an arm and a leg, as well as a bullet. These remains were considered legitimate by physician and U.S. Assistant Secretary of State John Eugene Osborne, who suggested in 1913 that they travel through the Panama Canal as a part of its opening ceremony. These remains were kept at the Basilica Cathedral of Santa María la Menor (in the Colonial City of Santo Domingo) before being moved to the Columbus Lighthouse (Santo Domingo Este, inaugurated in 1992). The authorities in Santo Domingo have never allowed these remains to be DNA-tested, so it is unconfirmed whether they are from Columbus's body as well.
The figure of Columbus was not ignored in the British colonies during the colonial era: Columbus became a unifying symbol early in the history of the colonies that became the United States when Puritan preachers began to use his life story as a model for a "developing American spirit". In the spring of 1692, Puritan preacher Cotton Mather described Columbus's voyage as one of three shaping events of the modern age, connecting Columbus's voyage and the Puritans' migration to North America, seeing them together as the key to a grand design.
The use of Columbus as a founding figure of New World nations spread rapidly after the American Revolution. This was out of a desire to develop a national history and founding myth with fewer ties to Britain. His name was the basis for the female national personification of the United States, Columbia, in use since the 1730s with reference to the original Thirteen Colonies, and also as a historical name applied to the Americas and to the New World. Columbia, South Carolina, and Columbia Rediviva, the ship for which the Columbia River was named, are named for Columbus.
Columbus's name was given to the newly born Republic of Colombia in the early 19th century, inspired by the political project of "Colombeia" developed by revolutionary Francisco de Miranda, which was put at the service of the emancipation of continental Hispanic America.
To commemorate the 400th anniversary of the landing of Columbus, the 1893 World's Fair in Chicago was named the World's Columbian Exposition. The U.S. Postal Service issued the first U.S. commemorative stamps, the Columbian Issue, depicting Columbus, Queen Isabella and others in various stages of his several voyages. The policies related to the celebration of the Spanish colonial empire as the vehicle of a nationalist project undertaken in Spain during the Restoration in the late 19th century took form with the commemoration of the 4th centenary on 12 October 1892 (in which the figure of Columbus was extolled by the Conservative government), with 12 October eventually becoming Spain's national day. Several monuments commemorating the "discovery" were erected in cities such as Palos, Barcelona, Granada, Madrid, Salamanca, Valladolid and Seville in the years around the 400th anniversary.
For the Columbus Quincentenary in 1992, a second Columbian issue was released jointly with Italy, Portugal, and Spain. Columbus was celebrated at Seville Expo '92, and Genoa Expo '92.
The Boal Mansion Museum, founded in 1951, contains a collection of materials concerning later descendants of Columbus and collateral branches of the family. It features a 16th-century chapel from a Spanish castle reputedly owned by Diego Colón which became the residence of Columbus's descendants. The chapel interior was dismantled and moved from Spain in 1909 and re-erected on the Boal estate at Boalsburg, Pennsylvania. Inside it are numerous religious paintings and other objects including a reliquary with fragments of wood supposedly from the True Cross. The museum also holds a collection of documents mostly relating to Columbus descendants of the late 18th and early 19th centuries.
In many countries of the Americas, as well as Spain and Italy, Columbus Day celebrates the anniversary of Columbus's arrival in the Americas on 12 October 1492.
The voyages of Columbus are considered a turning point in human history, marking the beginning of globalization and accompanying demographic, commercial, economic, social, and political changes.
His explorations resulted in permanent contact between the two hemispheres, and the term "pre-Columbian" is used to refer to the cultures of the Americas before the arrival of Columbus and his European successors. The ensuing Columbian exchange saw the massive exchange of animals, plants, fungi, diseases, technologies, mineral wealth and ideas.
In the first century after his endeavors, Columbus's figure largely languished in the backwaters of history, and his reputation was beset by his failures as a colonial administrator. His legacy was somewhat rescued from oblivion when he began to appear as a character in Italian and Spanish plays and poems from the late 16th century onward.
Columbus was subsumed into the Western narrative of colonization and empire building, which invoked notions of translatio imperii and translatio studii to underline who was considered "civilized" and who was not.
The Americanization of the figure of Columbus began in the latter decades of the 18th century, after the revolutionary period of the United States, elevating the status of his reputation to a national myth, homo americanus. His landing became a powerful icon as an "image of American genesis". The Discovery of America sculpture, depicting Columbus and a cowering Indian maiden, was commissioned on 3 April 1837, when U.S. President Martin Van Buren sanctioned the engineering of Luigi Persico's design. This representation of Columbus's triumph and the Indian's recoil is a demonstration of white superiority over savage, naive Indians. As recorded during its unveiling in 1844, the sculpture extends to "represent the meeting of the two races", as Persico captures their first interaction, highlighting the "moral and intellectual inferiority" of Indians. Placed outside the U.S. Capitol building, where it remained until its removal in the mid-20th century, the sculpture reflected the contemporary view of whites in the U.S. toward the Natives; they are labeled "merciless Indian savages" in the United States Declaration of Independence. In 1836, Pennsylvania senator and future U.S. President James Buchanan, who proposed the sculpture, described it as representing "the great discoverer when he first bounded with ecstasy upon the shore, all his toils past, presenting a hemisphere to the astonished world, with the name America inscribed upon it. Whilst he is thus standing upon the shore, a female savage, with awe and wonder depicted in her countenance, is gazing upon him."
The American Columbus myth was reconfigured later in the century when he was enlisted as an ethnic hero by immigrants to the United States who were not of Anglo-Saxon stock, such as Jewish, Italian, and Irish people, who claimed Columbus as a sort of ethnic founding father. Catholics unsuccessfully tried to promote him for canonization in the 19th century.
From the 1990s onward, a narrative of Columbus being responsible for the genocide of indigenous peoples and environmental destruction began to compete with the then predominant discourse of Columbus as Christ-bearer, scientist, or father of America. This narrative features the negative effects of Columbus' conquests on native populations. Exposed to Old World diseases, the indigenous populations of the New World collapsed, and were largely replaced by Europeans and Africans, who brought with them new methods of farming, business, governance, and religious worship.
Though Christopher Columbus came to be considered the European discoverer of America in Western popular culture, his historical legacy is more nuanced. After settling Iceland, the Norse settled the uninhabited southern part of Greenland beginning in the 10th century. Norsemen are believed to have then set sail from Greenland and Iceland to become the first known Europeans to reach the North American mainland, nearly 500 years before Columbus reached the Caribbean. The 1960s discovery of a Norse settlement dating to c. 1000 AD at L'Anse aux Meadows, Newfoundland, partially corroborates accounts within the Icelandic sagas of Erik the Red's colonization of Greenland and his son Leif Erikson's subsequent exploration of a place he called Vinland.
In the 19th century, amid a revival of interest in Norse culture, Carl Christian Rafn and Benjamin Franklin DeCosta wrote works establishing that the Norse had preceded Columbus in colonizing the Americas. Following this, in 1874 Rasmus Bjørn Anderson argued that Columbus must have known of the North American continent before he started his voyage of discovery. Most modern scholars doubt Columbus had knowledge of the Norse settlements in America, with his arrival to the continent being most likely an independent discovery.
Europeans devised explanations for the origins of the Native Americans and their geographical distribution with narratives that often served to reinforce their own preconceptions built on ancient intellectual foundations. In modern Latin America, the non-Native populations of some countries often demonstrate an ambiguous attitude toward the perspectives of indigenous peoples regarding the so-called "discovery" by Columbus and the era of colonialism that followed. In his 1960 monograph, Mexican philosopher and historian Edmundo O'Gorman explicitly rejects the Columbus discovery myth, arguing that the idea that Columbus discovered America was a misleading legend fixed in the public mind through the works of American author Washington Irving during the 19th century. O'Gorman argues that to assert Columbus "discovered America" is to shape the facts concerning the events of 1492 to make them conform to an interpretation that arose many years later. For him, the Eurocentric view of the discovery of America sustains systems of domination in ways that favor Europeans. In a 1992 article for The UNESCO Courier, Félix Fernández-Shaw argues that the word "discovery" prioritizes European explorers as the "heroes" of the contact between the Old and New World. He suggests that the word "encounter" is more appropriate, being a more universal term which includes Native Americans in the narrative.
Historians have traditionally argued that Columbus remained convinced until his death that his journeys had been along the east coast of Asia as he originally intended (excluding arguments such as Anderson's). On his third voyage he briefly referred to South America as a "hitherto unknown" continent, while also rationalizing that it was the "Earthly Paradise" located "at the end of the Orient". Columbus continued to claim in his later writings that he had reached Asia; in a 1502 letter to Pope Alexander VI, he asserts that Cuba is the east coast of Asia. On the other hand, in a document in the Book of Privileges (1502), Columbus refers to the New World as the Indias Occidentales ('West Indies'), which he says "were unknown to all the world".
Washington Irving's 1828 biography of Columbus popularized the idea that Columbus had difficulty obtaining support for his plan because many Catholic theologians insisted that the Earth was flat, but this is a popular misconception which can be traced back to 17th-century Protestants campaigning against Catholicism. In fact, the spherical shape of the Earth had been known to scholars since antiquity, and was common knowledge among sailors, including Columbus. Coincidentally, the oldest surviving globe of the Earth, the Erdapfel, was made in 1492, just before Columbus's return to Europe from his first voyage. As such it contains no sign of the Americas and yet demonstrates the common belief in a spherical Earth.
Making observations with a quadrant on his third voyage, Columbus inaccurately measured the polar radius of the North Star's diurnal motion to be five degrees, which was double the value of another erroneous reading he had made from further north. This led him to describe the figure of the Earth as pear-shaped, with the "stalk" portion ascending towards Heaven. In fact, the Earth is ever so slightly pear-shaped, with its "stalk" pointing north.
Columbus has been criticized both for his brutality and for initiating the depopulation of the indigenous peoples of the Caribbean, whether by imported diseases or intentional violence. According to scholars of Native American history George Tinker and Mark Freedman, Columbus was responsible for creating a cycle of "murder, violence, and slavery" to maximize exploitation of the Caribbean islands' resources, and Native deaths on the scale at which they occurred would not have been caused by new diseases alone. Further, they describe the proposition that disease and not genocide caused these deaths as "American holocaust denial". Historian Kris Lane disputes whether it is appropriate to use the term "genocide" when the atrocities were not Columbus's intent, but resulted from his decrees, family business goals, and negligence. Other scholars defend Columbus's actions or allege that the worst accusations against him are not based in fact, while others claim that "he has been blamed for events far beyond his own reach or knowledge".
As a result of the protests and riots that followed the murder of George Floyd in 2020, many public monuments of Christopher Columbus have been removed.
Some historians have criticized Columbus for initiating the widespread colonization of the Americas and for abusing its native population. On St. Croix, Columbus's friend Michele da Cuneo—according to his own account—kept an indigenous woman he had captured, whom Columbus "gave to [him]", and brutally raped her.
According to some historians, the punishment for an indigenous person, aged 14 and older, failing to pay a hawk's bell, or cascabela, worth of gold dust every six months (based on Bartolomé de las Casas's account) was cutting off the hands of those without tokens, often leaving them to bleed to death. Other historians dispute such accounts. For example, a study of Spanish archival sources showed that the cascabela quotas were imposed by Guarionex, not Columbus, and that there is no mention, in the primary sources, of punishment by cutting off hands for failing to pay. Columbus had an economic interest in the enslavement of the Hispaniola natives and for that reason was not eager to baptize them, which attracted criticism from some churchmen. Consuelo Varela, a Spanish historian, stated that "Columbus's government was characterized by a form of tyranny. Even those who loved him had to admit the atrocities that had taken place." Other historians have argued that some of the accounts of the brutality of Columbus and his brothers have been exaggerated as part of the Black Legend, a historical tendency towards anti-Spanish and anti-Catholic sentiment in historical sources dating as far back as the 16th century, which they speculate may continue to taint scholarship into the present day.
According to historian Emily Berquist Soule, the immense Portuguese profits from the maritime trade in African slaves along the West African coast served as an inspiration for Columbus to create a counterpart of this apparatus in the New World using indigenous American slaves. Historian William J. Connell has argued that while Columbus "brought the entrepreneurial form of slavery to the New World", this "was a phenomenon of the times", further arguing that "we have to be very careful about applying 20th-century understandings of morality to the morality of the 15th century." In a less popular defense of colonization, Spanish ambassador María Jesús Figa López-Palop has argued, "Normally we melded with the cultures in America, we stayed there, we spread our language and culture and religion."
British historian Basil Davidson has dubbed Columbus the "father of the slave trade", citing the fact that the first license to ship enslaved Africans to the Caribbean was issued by the Catholic Monarchs in 1501 to the first royal governor of Hispaniola, Nicolás de Ovando.
Around the turn of the 21st century, estimates for the pre-Columbian population of Hispaniola ranged between 250,000 and two million, but genetic analysis published in late 2020 suggests that smaller figures are more likely, perhaps as low as 10,000–50,000 for Hispaniola and Puerto Rico combined. Based on the previous figures of a few hundred thousand, some have estimated that a third or more of the natives in Haiti were dead within the first two years of Columbus's governorship. Contributors to depopulation included disease, warfare, and harsh enslavement. Indirect evidence suggests that some serious illness may have arrived with the 1,500 colonists who accompanied Columbus' second expedition in 1493. Charles C. Mann writes that "It was as if the suffering these diseases had caused in Eurasia over the past millennia were concentrated into the span of decades." A third of the natives forced to work in gold and silver mines died every six months. Within three to six decades, the surviving Arawak population numbered only in the hundreds. The indigenous population of the Americas overall is thought to have been reduced by about 90% in the century after Columbus's arrival. Among indigenous peoples, Columbus is often viewed as a key agent of genocide. Samuel Eliot Morison, a Harvard historian and author of a multivolume biography on Columbus, writes, "The cruel policy initiated by Columbus and pursued by his successors resulted in complete genocide."
According to Noble David Cook, "There were too few Spaniards to have killed the millions who were reported to have died in the first century after Old and New World contact." He instead estimates that the death toll was caused by smallpox, which may have caused a pandemic only after the arrival of Hernán Cortés in 1519. According to some estimates, smallpox had an 80–90% fatality rate in Native American populations. The natives had no acquired immunity to these new diseases and suffered high fatalities. There is also evidence that they had poor diets and were overworked. Historian Andrés Reséndez of the University of California, Davis, says the available evidence suggests "slavery has emerged as major killer" of the indigenous populations of the Caribbean between 1492 and 1550, more so than diseases such as smallpox, influenza and malaria. He says that indigenous populations did not experience a rebound like European populations did following the Black Death because unlike the latter, a large portion of the former were subjected to deadly forced labor in the mines.
The diseases that devastated the Native Americans came in multiple waves at different times, sometimes as much as centuries apart, which would mean that survivors of one disease may have been killed by others, preventing the population from recovering. Historian David Stannard describes the depopulation of the indigenous Americans as "neither inadvertent nor inevitable", saying it was the result of both disease and intentional genocide.
Biographers and historians have a wide range of opinions about Columbus's expertise and experience navigating and captaining ships. One scholar lists some European works ranging from the 1890s to 1980s that support Columbus's experience and skill as among the best in Genoa, while listing some American works over a similar timeframe that portray the explorer as an untrained entrepreneur, having only minor crew or passenger experience prior to his noted journeys. According to Morison, Columbus's success in utilizing the trade winds might owe significantly to luck.
Contemporary descriptions of Columbus, including those by his son Fernando and Bartolomé de las Casas, describe him as taller than average, with light skin (often sunburnt), blue or hazel eyes, high cheekbones and freckled face, an aquiline nose, and blond to reddish hair and beard (until about the age of 30, when it began to whiten). One Spanish commentator described his eyes using the word garzos, now usually translated as "light blue", but it seems to have indicated light grey-green or hazel eyes to Columbus's contemporaries. The word rubios can mean "blond", "fair", or "ruddy". Although an abundance of artwork depicts Columbus, no authentic contemporary portrait is known.
A well-known image of Columbus is a portrait by Sebastiano del Piombo, which has been reproduced in many textbooks. It agrees with descriptions of Columbus in that it shows a large man with auburn hair, but the painting dates from 1519 so cannot have been painted from life. Furthermore, the inscription identifying the subject as Columbus was probably added later, and the face shown differs from that of other images.
Sometime between 1531 and 1536, Alejo Fernández painted an altarpiece, The Virgin of the Navigators, that includes a depiction of Columbus. The painting was commissioned for a chapel in Seville's Casa de Contratación (House of Trade) in the Alcázar of Seville and remains there.
At the World's Columbian Exposition in 1893, 71 alleged portraits of Columbus were displayed; most of them did not match contemporary descriptions.
"title": "Early life"
},
{
"paragraph_id": 12,
"text": "Columbus was not a scholarly man. Yet he studied these books, made hundreds of marginal notations in them and came out with ideas about the world that were characteristically simple and strong and sometimes wrong ...",
"title": "Early life"
},
{
"paragraph_id": 13,
"text": "Under the Mongol Empire's hegemony over Asia and the Pax Mongolica, Europeans had long enjoyed a safe land passage on the Silk Road to India, parts of East Asia, including China and Maritime Southeast Asia, which were sources of valuable goods. With the fall of Constantinople to the Ottoman Empire in 1453, the Silk Road was closed to Christian traders.",
"title": "Quest for Asia"
},
{
"paragraph_id": 14,
"text": "In 1474, the Florentine astronomer Paolo dal Pozzo Toscanelli suggested to King Afonso V of Portugal that sailing west across the Atlantic would be a quicker way to reach the Maluku (Spice) Islands, China, Japan and India than the route around Africa, but Afonso rejected his proposal. In the 1480s, Columbus and his brother proposed a plan to reach the East Indies by sailing west. Columbus supposedly wrote Toscanelli in 1481 and received encouragement, along with a copy of a map the astronomer had sent Afonso implying that a westward route to Asia was possible. Columbus's plans were complicated by Bartolomeu Dias's rounding of the Cape of Good Hope in 1488, which suggested the Cape Route around Africa to Asia.",
"title": "Quest for Asia"
},
{
"paragraph_id": 15,
"text": "Carol Delaney and other commentators have argued that Columbus was a Christian millennialist and apocalypticist and that these beliefs motivated his quest for Asia in a variety of ways. Columbus often wrote about seeking gold in the log books of his voyages and writes about acquiring it \"in such quantity that the sovereigns... will undertake and prepare to go conquer the Holy Sepulcher\" in a fulfillment of Biblical prophecy. Columbus often wrote about converting all races to Christianity. Abbas Hamandi argues that Columbus was motivated by the hope of \"[delivering] Jerusalem from Muslim hands\" by \"using the resources of newly discovered lands\".",
"title": "Quest for Asia"
},
{
"paragraph_id": 16,
"text": "Despite a popular misconception to the contrary, nearly all educated Westerners of Columbus's time knew that the Earth is spherical, a concept that had been understood since antiquity. The techniques of celestial navigation, which uses the position of the Sun and the stars in the sky, had long been in use by astronomers and were beginning to be implemented by mariners.",
"title": "Quest for Asia"
},
{
"paragraph_id": 17,
"text": "As far back as the 3rd century BC, Eratosthenes had correctly computed the circumference of the Earth by using simple geometry and studying the shadows cast by objects at two remote locations. In the 1st century BC, Posidonius confirmed Eratosthenes's results by comparing stellar observations at two separate locations. These measurements were widely known among scholars, but Ptolemy's use of the smaller, old-fashioned units of distance led Columbus to underestimate the size of the Earth by about a third.",
"title": "Quest for Asia"
},
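The shadow method described above is easy to make concrete. Below is a minimal sketch in Python; the Syene–Alexandria figures (a 7.2° difference in sun angle, a separation of about 5,000 stadia) are the traditionally reported values, assumed here rather than taken from this article:

```python
# Eratosthenes's method: two sites on the same meridian, a known distance
# apart, cast noon shadows whose angles differ by theta degrees. That arc
# covers theta/360 of the full circle, so the circumference follows directly.

distance_stadia = 5_000   # Syene to Alexandria (traditionally reported value)
theta_deg = 7.2           # difference in shadow angle (assumed, as above)

circumference = distance_stadia * 360 / theta_deg
print(circumference)      # 250000.0 stadia, the classical result
```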
{
"paragraph_id": 18,
"text": "Three cosmographical parameters determined the bounds of Columbus's enterprise: the distance across the ocean between Europe and Asia, which depended on the extent of the oikumene, i.e., the Eurasian land-mass stretching east–west between Spain and China; the circumference of the Earth; and the number of miles or leagues in a degree of longitude, which was possible to deduce from the theory of the relationship between the size of the surfaces of water and the land as held by the followers of Aristotle in medieval times.",
"title": "Quest for Asia"
},
{
"paragraph_id": 19,
"text": "From Pierre d'Ailly's Imago Mundi (1410), Columbus learned of Alfraganus's estimate that a degree of latitude (equal to approximately a degree of longitude along the equator) spanned 56.67 Arabic miles (equivalent to 66.2 nautical miles, 122.6 kilometers or 76.2 mi), but he did not realize that this was expressed in the Arabic mile (about 1,830 meters or 1.14 mi) rather than the shorter Roman mile (about 1,480 m) with which he was familiar. Columbus therefore estimated the size of the Earth to be about 75% of Eratosthenes's calculation, and the distance westward from the Canary Islands to the Indies as only 68 degrees, equivalent to 3,080 nmi (5,700 km; 3,540 mi) (a 58% error).",
"title": "Quest for Asia"
},
{
"paragraph_id": 20,
"text": "Most scholars of the time accepted Ptolemy's estimate that Eurasia spanned 180° longitude, rather than the actual 130° (to the Chinese mainland) or 150° (to Japan at the latitude of Spain). Columbus believed an even higher estimate, leaving a smaller percentage for water. In d'Ailly's Imago Mundi, Columbus read Marinus of Tyre's estimate that the longitudinal span of Eurasia was 225° at the latitude of Rhodes. Some historians, such as Samuel Morison, have suggested that he followed the statement in the apocryphal book 2 Esdras (6:42) that \"six parts [of the globe] are habitable and the seventh is covered with water.\" He was also aware of Marco Polo's claim that Japan (which he called \"Cipangu\") was some 2,414 km (1,500 mi) to the east of China (\"Cathay\"), and closer to the equator than it is. He was influenced by Toscanelli's idea that there were inhabited islands even farther to the east than Japan, including the mythical Antillia, which he thought might lie not much farther to the west than the Azores.",
"title": "Quest for Asia"
},
{
"paragraph_id": 21,
"text": "Based on his sources, Columbus estimated a distance of 2,400 nmi (4,400 km; 2,800 mi) from the Canary Islands west to Japan; the actual distance is 10,600 nmi (19,600 km; 12,200 mi). No ship in the 15th century could have carried enough food and fresh water for such a long voyage, and the dangers involved in navigating through the uncharted ocean would have been formidable. Most European navigators reasonably concluded that a westward voyage from Europe to Asia was unfeasible. The Catholic Monarchs, however, having completed the Reconquista, an expensive war against the Moors in the Iberian Peninsula, were eager to obtain a competitive edge over other European countries in the quest for trade with the Indies. Columbus's project, though far-fetched, held the promise of such an advantage.",
"title": "Quest for Asia"
},
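The unit confusion and its consequences can be checked with a few lines of arithmetic. A minimal sketch using the figures quoted in the preceding paragraphs (the modern circumference of about 40,075 km is assumed, not from the article):

```python
# Columbus read Alfraganus's 56.67 miles per degree as the shorter Roman
# mile (~1,480 m) rather than the Arabic mile, shrinking his globe.

MILES_PER_DEGREE = 56.67
ROMAN_MILE_KM = 1.48               # Roman mile, as quoted above
ACTUAL_CIRCUMFERENCE_KM = 40_075   # modern value (assumed)

degree_km = MILES_PER_DEGREE * ROMAN_MILE_KM       # ~83.9 km per degree
columbus_circumference_km = 360 * degree_km        # ~30,200 km
print(columbus_circumference_km / ACTUAL_CIRCUMFERENCE_KM)  # ~0.75, i.e. ~75%

# His 68-degree span from the Canary Islands to the Indies then implies:
westward_km = 68 * degree_km                       # ~5,700 km
print(westward_km / 1.852)                         # ~3,080 nautical miles
```

Both outputs reproduce the article's figures: a globe about 75% of its true size, and a westward run of roughly 3,080 nmi (5,700 km).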
{
"paragraph_id": 22,
"text": "Though Columbus was wrong about the number of degrees of longitude that separated Europe from the Far East and about the distance that each degree represented, he did take advantage of the trade winds, which would prove to be the key to his successful navigation of the Atlantic Ocean. He planned to first sail to the Canary Islands before continuing west with the northeast trade wind. Part of the return to Spain would require traveling against the wind using an arduous sailing technique called beating, during which progress is made very slowly. To effectively make the return voyage, Columbus would need to follow the curving trade winds northeastward to the middle latitudes of the North Atlantic, where he would be able to catch the \"westerlies\" that blow eastward to the coast of Western Europe.",
"title": "Quest for Asia"
},
{
"paragraph_id": 23,
"text": "The navigational technique for travel in the Atlantic appears to have been exploited first by the Portuguese, who referred to it as the volta do mar ('turn of the sea'). Through his marriage to his first wife, Felipa Perestrello, Columbus had access to the nautical charts and logs that had belonged to her deceased father, Bartolomeu Perestrello, who had served as a captain in the Portuguese navy under Prince Henry the Navigator. In the mapmaking shop where he worked with his brother Bartholomew, Columbus also had ample opportunity to hear the stories of old seamen about their voyages to the western seas, but his knowledge of the Atlantic wind patterns was still imperfect at the time of his first voyage. By sailing due west from the Canary Islands during hurricane season, skirting the so-called horse latitudes of the mid-Atlantic, he risked being becalmed and running into a tropical cyclone, both of which he avoided by chance.",
"title": "Quest for Asia"
},
{
"paragraph_id": 24,
"text": "By about 1484, Columbus proposed his planned voyage to King John II of Portugal. The king submitted Columbus's proposal to his advisors, who rejected it, correctly, on the grounds that Columbus's estimate for a voyage of 2,400 nmi was only a quarter of what it should have been. In 1488, Columbus again appealed to the court of Portugal, and John II again granted him an audience. That meeting also proved unsuccessful, in part because not long afterwards Bartolomeu Dias returned to Portugal with news of his successful rounding of the southern tip of Africa (near the Cape of Good Hope).",
"title": "Quest for Asia"
},
{
"paragraph_id": 25,
"text": "Columbus sought an audience with the monarchs Ferdinand II of Aragon and Isabella I of Castile, who had united several kingdoms in the Iberian Peninsula by marrying and now ruled together. On 1 May 1486, permission having been granted, Columbus presented his plans to Queen Isabella, who, in turn, referred it to a committee. The learned men of Spain, like their counterparts in Portugal, replied that Columbus had grossly underestimated the distance to Asia. They pronounced the idea impractical and advised the Catholic Monarchs to pass on the proposed venture. To keep Columbus from taking his ideas elsewhere, and perhaps to keep their options open, the sovereigns gave him an allowance, totaling about 14,000 maravedis for the year, or about the annual salary of a sailor. In May 1489, the queen sent him another 10,000 maravedis, and the same year the monarchs furnished him with a letter ordering all cities and towns under their dominion to provide him food and lodging at no cost.",
"title": "Quest for Asia"
},
{
"paragraph_id": 26,
"text": "Columbus also dispatched his brother Bartholomew to the court of Henry VII of England to inquire whether the English crown might sponsor his expedition, but he was captured by pirates en route, and only arrived in early 1491. By that time, Columbus had retreated to La Rábida Friary, where the Spanish crown sent him 20,000 maravedis to buy new clothes and instructions to return to the Spanish court for renewed discussions.",
"title": "Quest for Asia"
},
{
"paragraph_id": 27,
"text": "Columbus waited at King Ferdinand's camp until Ferdinand and Isabella conquered Granada, the last Muslim stronghold on the Iberian Peninsula, in January 1492. A council led by Isabella's confessor, Hernando de Talavera, found Columbus's proposal to reach the Indies implausible. Columbus had left for France when Ferdinand intervened, first sending Talavera and Bishop Diego Deza to appeal to the queen. Isabella was finally convinced by the king's clerk Luis de Santángel, who argued that Columbus would take his ideas elsewhere, and offered to help arrange the funding. Isabella then sent a royal guard to fetch Columbus, who had traveled 2 leagues (over 10 km) toward Córdoba.",
"title": "Quest for Asia"
},
{
"paragraph_id": 28,
"text": "In the April 1492 \"Capitulations of Santa Fe\", King Ferdinand and Queen Isabella promised Columbus that if he succeeded he would be given the rank of Admiral of the Ocean Sea and appointed Viceroy and Governor of all the new lands he might claim for Spain. He had the right to nominate three persons, from whom the sovereigns would choose one, for any office in the new lands. He would be entitled to 10% (diezmo) of all the revenues from the new lands in perpetuity. He also would have the option of buying one-eighth interest in any commercial venture in the new lands, and receive one-eighth (ochavo) of the profits.",
"title": "Quest for Asia"
},
{
"paragraph_id": 29,
"text": "In 1500, during his third voyage to the Americas, Columbus was arrested and dismissed from his posts. He and his sons, Diego and Fernando, then conducted a lengthy series of court cases against the Castilian crown, known as the pleitos colombinos, alleging that the Crown had illegally reneged on its contractual obligations to Columbus and his heirs. The Columbus family had some success in their first litigation, as a judgment of 1511 confirmed Diego's position as viceroy but reduced his powers. Diego resumed litigation in 1512, which lasted until 1536, and further disputes initiated by heirs continued until 1790.",
"title": "Quest for Asia"
},
{
"paragraph_id": 30,
"text": "Between 1492 and 1504, Columbus completed four round-trip voyages between Spain and the Americas, each voyage being sponsored by the Crown of Castile. On his first voyage he reached the Americas, initiating the European exploration and colonization of the continent, as well as the Columbian exchange. His role in history is thus important to the Age of Discovery, Western history, and human history writ large.",
"title": "Voyages"
},
{
"paragraph_id": 31,
"text": "In Columbus's letter on the first voyage, published following his first return to Spain, he claimed that he had reached Asia, as previously described by Marco Polo and other Europeans. Over his subsequent voyages, Columbus refused to acknowledge that the lands he visited and claimed for Spain were not part of Asia, in the face of mounting evidence to the contrary. This might explain, in part, why the American continent was named after the Florentine explorer Amerigo Vespucci—who received credit for recognizing it as a \"New World\"—and not after Columbus.",
"title": "Voyages"
},
{
"paragraph_id": 32,
"text": "On the evening of 3 August 1492, Columbus departed from Palos de la Frontera with three ships. The largest was a carrack, the Santa María, owned and captained by Juan de la Cosa, and under Columbus's direct command. The other two were smaller caravels, the Pinta and the Niña, piloted by the Pinzón brothers. Columbus first sailed to the Canary Islands. There he restocked provisions and made repairs then departed from San Sebastián de La Gomera on 6 September, for what turned out to be a five-week voyage across the ocean.",
"title": "Voyages"
},
{
"paragraph_id": 33,
"text": "On 7 October, the crew spotted \"[i]mmense flocks of birds\". On 11 October, Columbus changed the fleet's course to due west, and sailed through the night, believing land was soon to be found. At around 02:00 the following morning, a lookout on the Pinta, Rodrigo de Triana, spotted land. The captain of the Pinta, Martín Alonso Pinzón, verified the sight of land and alerted Columbus. Columbus later maintained that he had already seen a light on the land a few hours earlier, thereby claiming for himself the lifetime pension promised by Ferdinand and Isabella to the first person to sight land. Columbus called this island (in what is now the Bahamas) San Salvador (meaning \"Holy Savior\"); the natives called it Guanahani. Christopher Columbus's journal entry of 12 October 1492 states:",
"title": "Voyages"
},
{
"paragraph_id": 34,
"text": "I saw some who had marks of wounds on their bodies and I made signs to them asking what they were; and they showed me how people from other islands nearby came there and tried to take them, and how they defended themselves; and I believed and believe that they come here from tierra firme to take them captive. They should be good and intelligent servants, for I see that they say very quickly everything that is said to them; and I believe they would become Christians very easily, for it seemed to me that they had no religion. Our Lord pleasing, at the time of my departure I will take six of them from here to Your Highnesses in order that they may learn to speak.",
"title": "Voyages"
},
{
"paragraph_id": 35,
"text": "Columbus called the inhabitants of the lands that he visited Los Indios (Spanish for \"Indians\"). He initially encountered the Lucayan, Taíno, and Arawak peoples. Noting their gold ear ornaments, Columbus took some of the Arawaks prisoner and insisted that they guide him to the source of the gold. Columbus did not believe he needed to create a fortified outpost, writing, \"the people here are simple in war-like matters ... I could conquer the whole of them with fifty men, and govern them as I pleased.\" The Taínos told Columbus that another indigenous tribe, the Caribs, were fierce warriors and cannibals, who made frequent raids on the Taínos, often capturing their women, although this may have been a belief perpetuated by the Spaniards to justify enslaving them.",
"title": "Voyages"
},
{
"paragraph_id": 36,
"text": "Columbus also explored the northeast coast of Cuba, where he landed on 28 October. On the night of 26 November, Martín Alonso Pinzón took the Pinta on an unauthorized expedition in search of an island called \"Babeque\" or \"Baneque\", which the natives had told him was rich in gold. Columbus, for his part, continued to the northern coast of Hispaniola, where he landed on 6 December. There, the Santa María ran aground on 25 December 1492 and had to be abandoned. The wreck was used as a target for cannon fire to impress the native peoples. Columbus was received by the native cacique Guacanagari, who gave him permission to leave some of his men behind. Columbus left 39 men, including the interpreter Luis de Torres, and founded the settlement of La Navidad, in present-day Haiti. Columbus took more natives prisoner and continued his exploration. He kept sailing along the northern coast of Hispaniola with a single ship until he encountered Pinzón and the Pinta on 6 January.",
"title": "Voyages"
},
{
"paragraph_id": 37,
"text": "On 13 January 1493, Columbus made his last stop of this voyage in the Americas, in the Bay of Rincón in northeast Hispaniola. There he encountered the Ciguayos, the only natives who offered violent resistance during this voyage. The Ciguayos refused to trade the amount of bows and arrows that Columbus desired; in the ensuing clash one Ciguayo was stabbed in the buttocks and another wounded with an arrow in his chest. Because of these events, Columbus called the inlet the Golfo de Las Flechas (Bay of Arrows).",
"title": "Voyages"
},
{
"paragraph_id": 38,
"text": "Columbus headed for Spain on the Niña, but a storm separated him from the Pinta, and forced the Niña to stop at the island of Santa Maria in the Azores. Half of his crew went ashore to say prayers of thanksgiving in a chapel for having survived the storm. But while praying, they were imprisoned by the governor of the island, ostensibly on suspicion of being pirates. After a two-day standoff, the prisoners were released, and Columbus again set sail for Spain.",
"title": "Voyages"
},
{
"paragraph_id": 39,
"text": "Another storm forced Columbus into the port at Lisbon. From there he went to Vale do Paraíso north of Lisbon to meet King John II of Portugal, who told Columbus that he believed the voyage to be in violation of the 1479 Treaty of Alcáçovas. After spending more than a week in Portugal, Columbus set sail for Spain. Returning to Palos on 15 March 1493, he was given a hero's welcome and soon afterward received by Isabella and Ferdinand in Barcelona.",
"title": "Voyages"
},
{
"paragraph_id": 40,
"text": "Columbus's letter on the first voyage, dispatched to the Spanish court, was instrumental in spreading the news throughout Europe about his voyage. Almost immediately after his arrival in Spain, printed versions began to appear, and word of his voyage spread rapidly. Most people initially believed that he had reached Asia. The Bulls of Donation, three papal bulls of Pope Alexander VI delivered in 1493, purported to grant overseas territories to Portugal and the Catholic Monarchs of Spain. They were replaced by the Treaty of Tordesillas of 1494.",
"title": "Voyages"
},
{
"paragraph_id": 41,
"text": "The two earliest published copies of Columbus's letter on the first voyage aboard the Niña were donated in 2017 by the Jay I. Kislak Foundation to the University of Miami library in Coral Gables, Florida, where they are housed.",
"title": "Voyages"
},
{
"paragraph_id": 42,
"text": "On 24 September 1493, Columbus sailed from Cádiz with 17 ships, and supplies to establish permanent colonies in the Americas. He sailed with nearly 1,500 men, including sailors, soldiers, priests, carpenters, stonemasons, metalworkers, and farmers. Among the expedition members were Alvarez Chanca, a physician who wrote a detailed account of the second voyage; Juan Ponce de León, the first governor of Puerto Rico and Florida; the father of Bartolomé de las Casas; Juan de la Cosa, a cartographer who is credited with making the first world map depicting the New World; and Columbus's youngest brother Diego. The fleet stopped at the Canary Islands to take on more supplies, and set sail again on 7 October, deliberately taking a more southerly course than on the first voyage.",
"title": "Voyages"
},
{
"paragraph_id": 43,
"text": "On 3 November, they arrived in the Windward Islands; the first island they encountered was named Dominica by Columbus, but not finding a good harbor there, they anchored off a nearby smaller island, which he named Mariagalante, now a part of Guadeloupe and called Marie-Galante. Other islands named by Columbus on this voyage were Montserrat, Antigua, Saint Martin, the Virgin Islands, as well as many others.",
"title": "Voyages"
},
{
"paragraph_id": 44,
"text": "On 22 November, Columbus returned to Hispaniola to visit La Navidad, where 39 Spaniards had been left during the first voyage. Columbus found the fort in ruins, destroyed by the Taínos after some of the Spaniards reportedly antagonized their hosts with their unrestrained lust for gold and women. Columbus then established a poorly located and short-lived settlement to the east, La Isabela, in the present-day Dominican Republic.",
"title": "Voyages"
},
{
"paragraph_id": 45,
"text": "From April to August 1494, Columbus explored Cuba and Jamaica, then returned to Hispaniola. By the end of 1494, disease and famine had killed two-thirds of the Spanish settlers. Columbus implemented encomienda, a Spanish labor system that rewarded conquerors with the labor of conquered non-Christian people. Columbus executed Spanish colonists for minor crimes, and used dismemberment as punishment. Columbus and the colonists enslaved the indigenous people, including children. Natives were beaten, raped, and tortured for the location of imagined gold. Thousands committed suicide rather than face the oppression.",
"title": "Voyages"
},
{
"paragraph_id": 46,
"text": "In February 1495, Columbus rounded up about 1,500 Arawaks, some of whom had rebelled, in a great slave raid. About 500 of the strongest were shipped to Spain as slaves, with about two hundred of those dying en route.",
"title": "Voyages"
},
{
"paragraph_id": 47,
"text": "In June 1495, the Spanish crown sent ships and supplies to Hispaniola. In October, Florentine merchant Gianotto Berardi, who had won the contract to provision the fleet of Columbus's second voyage and to supply the colony on Hispaniola, received almost 40,000 maravedís worth of enslaved Indians. He renewed his effort to get supplies to Columbus, and was working to organize a fleet when he suddenly died in December. On 10 March 1496, having been away about 30 months, the fleet departed La Isabela. On 8 June the crew sighted land somewhere between Lisbon and Cape St. Vincent, and disembarked in Cádiz on 11 June.",
"title": "Voyages"
},
{
"paragraph_id": 48,
"text": "On 30 May 1498, Columbus left with six ships from Sanlúcar, Spain. The fleet called at Madeira and the Canary Islands, where it divided in two, with three ships heading for Hispaniola and the other three vessels, commanded by Columbus, sailing south to the Cape Verde Islands and then westward across the Atlantic. It is probable that this expedition was intended at least partly to confirm rumors of a large continent south of the Caribbean Sea, that is, South America.",
"title": "Voyages"
},
{
"paragraph_id": 49,
"text": "On 31 July they sighted Trinidad, the most southerly of the Caribbean islands. On 5 August, Columbus sent several small boats ashore on the southern side of the Paria Peninsula in what is now Venezuela, near the mouth of the Orinoco river. This was the first recorded landing of Europeans on the mainland of South America, which Columbus realized must be a continent. The fleet then sailed to the islands of Chacachacare and Margarita, reaching the latter on 14 August, and sighted Tobago and Grenada from afar, according to some scholars.",
"title": "Voyages"
},
{
"paragraph_id": 50,
"text": "On 19 August, Columbus returned to Hispaniola. There he found settlers in rebellion against his rule, and his unfulfilled promises of riches. Columbus had some of the Europeans tried for their disobedience; at least one rebel leader was hanged.",
"title": "Voyages"
},
{
"paragraph_id": 51,
"text": "In October 1499, Columbus sent two ships to Spain, asking the Court of Spain to appoint a royal commissioner to help him govern. By this time, accusations of tyranny and incompetence on the part of Columbus had also reached the Court. The sovereigns sent Francisco de Bobadilla, a relative of Marquesa Beatriz de Bobadilla, a patron of Columbus and a close friend of Queen Isabella, to investigate the accusations of brutality made against the Admiral. Arriving in Santo Domingo while Columbus was away, Bobadilla was immediately met with complaints about all three Columbus brothers. He moved into Columbus's house and seized his property, took depositions from the Admiral's enemies, and declared himself governor.",
"title": "Voyages"
},
{
"paragraph_id": 52,
"text": "Bobadilla reported to Spain that Columbus once punished a man found guilty of stealing corn by having his ears and nose cut off and then selling him into slavery. He claimed that Columbus regularly used torture and mutilation to govern Hispaniola. Testimony recorded in the report stated that Columbus congratulated his brother Bartholomew on \"defending the family\" when the latter ordered a woman paraded naked through the streets and then had her tongue cut because she had \"spoken ill of the admiral and his brothers\". The document also describes how Columbus put down native unrest and revolt: he first ordered a brutal suppression of the uprising in which many natives were killed, and then paraded their dismembered bodies through the streets in an attempt to discourage further rebellion. Columbus vehemently denied the charges. The neutrality and accuracy of the accusations and investigations of Bobadilla toward Columbus and his brothers have been disputed by historians, given the anti-Italian sentiment of the Spaniards and Bobadilla's desire to take over Columbus's position.",
"title": "Voyages"
},
{
"paragraph_id": 53,
"text": "In early October 1500, Columbus and Diego presented themselves to Bobadilla, and were put in chains aboard La Gorda, the caravel on which Bobadilla had arrived at Santo Domingo. They were returned to Spain, and languished in jail for six weeks before King Ferdinand ordered their release. Not long after, the king and queen summoned the Columbus brothers to the Alhambra palace in Granada. The sovereigns expressed indignation at the actions of Bobadilla, who was then recalled and ordered to make restitutions of the property he had confiscated from Columbus. The royal couple heard the brothers' pleas; restored their freedom and wealth; and, after much persuasion, agreed to fund Columbus's fourth voyage. However, Nicolás de Ovando was to replace Bobadilla and be the new governor of the West Indies.",
"title": "Voyages"
},
{
"paragraph_id": 54,
"text": "New light was shed on the seizure of Columbus and his brother Bartholomew, the Adelantado, with the discovery by archivist Isabel Aguirre of an incomplete copy of the testimonies against them gathered by Francisco de Bobadilla at Santo Domingo in 1500. She found a manuscript copy of this pesquisa (inquiry) in the Archive of Simancas, Spain, uncatalogued until she and Consuelo Varela published their book, La caída de Cristóbal Colón: el juicio de Bobadilla (The fall of Christopher Colón: the judgement of Bobadilla) in 2006.",
"title": "Voyages"
},
{
"paragraph_id": 55,
"text": "On 9 May 1502, Columbus left Cádiz with his flagship Santa María and three other vessels. The ships were crewed by 140 men, including his brother Bartholomew as second in command and his son Fernando. He sailed to Asilah on the Moroccan coast to rescue Portuguese soldiers said to be besieged by the Moors. The siege had been lifted by the time they arrived, so the Spaniards stayed only a day and continued on to the Canary Islands.",
"title": "Voyages"
},
{
"paragraph_id": 56,
"text": "On 15 June, the fleet arrived at Martinique, where it lingered for several days. A hurricane was forming, so Columbus continued westward, hoping to find shelter on Hispaniola. He arrived at Santo Domingo on 29 June, but was denied port, and the new governor Francisco de Bobadilla refused to listen to his warning that a hurricane was approaching. Instead, while Columbus's ships sheltered at the mouth of the Rio Jaina, the first Spanish treasure fleet sailed into the hurricane. Columbus's ships survived with only minor damage, while 20 of the 30 ships in the governor's fleet were lost along with 500 lives (including that of Francisco de Bobadilla). Although a few surviving ships managed to straggle back to Santo Domingo, Aguja, the fragile ship carrying Columbus's personal belongings and his 4,000 pesos in gold was the sole vessel to reach Spain. The gold was his tenth (décimo) of the profits from Hispaniola, equal to 240,000 maravedis, guaranteed by the Catholic Monarchs in 1492.",
"title": "Voyages"
},
{
"paragraph_id": 57,
"text": "After a brief stop at Jamaica, Columbus sailed to Central America, arriving at the coast of Honduras on 30 July. Here Bartholomew found native merchants and a large canoe. On 14 August, Columbus landed on the continental mainland at Punta Caxinas, now Puerto Castilla, Honduras. He spent two months exploring the coasts of Honduras, Nicaragua, and Costa Rica, seeking a strait in the western Caribbean through which he could sail to the Indian Ocean. Sailing south along the Nicaraguan coast, he found a channel that led into Almirante Bay in Panama on 5 October.",
"title": "Voyages"
},
{
"paragraph_id": 58,
"text": "As soon as his ships anchored in Almirante Bay, Columbus encountered Ngäbe people in canoes who were wearing gold ornaments. In January 1503, he established a garrison at the mouth of the Belén River. Columbus left for Hispaniola on 16 April. On 10 May he sighted the Cayman Islands, naming them \"Las Tortugas\" after the numerous sea turtles there. His ships sustained damage in a storm off the coast of Cuba. Unable to travel farther, on 25 June 1503 they were beached in Saint Ann Parish, Jamaica.",
"title": "Voyages"
},
{
"paragraph_id": 59,
"text": "For six months Columbus and 230 of his men remained stranded on Jamaica. Diego Méndez de Segura, who had shipped out as a personal secretary to Columbus, and a Spanish shipmate called Bartolomé Flisco, along with six natives, paddled a canoe to get help from Hispaniola. The governor, Nicolás de Ovando y Cáceres, detested Columbus and obstructed all efforts to rescue him and his men. In the meantime Columbus, in a desperate effort to induce the natives to continue provisioning him and his hungry men, won their favor by predicting a lunar eclipse for 29 February 1504, using Abraham Zacuto's astronomical charts. Despite the governor's obstruction, Christopher Columbus and his men were rescued on 28 June 1504, and arrived in Sanlúcar, Spain, on 7 November.",
"title": "Voyages"
},
{
"paragraph_id": 60,
"text": "Columbus had always claimed that the conversion of non-believers was one reason for his explorations, and he grew increasingly religious in his later years. Probably with the assistance of his son Diego and his friend the Carthusian monk Gaspar Gorricio, Columbus produced two books during his later years: a Book of Privileges (1502), detailing and documenting the rewards from the Spanish Crown to which he believed he and his heirs were entitled, and a Book of Prophecies (1505), in which passages from the Bible were used to place his achievements as an explorer in the context of Christian eschatology.",
"title": "Later life, illness, and death"
},
{
"paragraph_id": 61,
"text": "In his later years, Columbus demanded that the Crown of Castile give him his tenth of all the riches and trade goods yielded by the new lands, as stipulated in the Capitulations of Santa Fe. Because he had been relieved of his duties as governor, the Crown did not feel bound by that contract and his demands were rejected. After his death, his heirs sued the Crown for a part of the profits from trade with America, as well as other rewards. This led to a protracted series of legal disputes known as the pleitos colombinos (\"Columbian lawsuits\").",
"title": "Later life, illness, and death"
},
{
"paragraph_id": 62,
"text": "During a violent storm on his first return voyage, Columbus, then 41, had suffered an attack of what was believed at the time to be gout. In subsequent years, he was plagued with what was thought to be influenza and other fevers, bleeding from the eyes, temporary blindness and prolonged attacks of gout. The attacks increased in duration and severity, sometimes leaving Columbus bedridden for months at a time, and culminated in his death 14 years later.",
"title": "Later life, illness, and death"
},
{
"paragraph_id": 63,
"text": "Based on Columbus's lifestyle and the described symptoms, some modern commentators suspect that he suffered from reactive arthritis, rather than gout. Reactive arthritis is a joint inflammation caused by intestinal bacterial infections or after acquiring certain sexually transmitted diseases (primarily chlamydia or gonorrhea). In 2006, Frank C. Arnett, a medical doctor, and historian Charles Merrill, published their paper in The American Journal of the Medical Sciences proposing that Columbus had a form of reactive arthritis; Merrill made the case in that same paper that Columbus was the son of Catalans and his mother possibly a member of a prominent converso (converted Jew) family. \"It seems likely that [Columbus] acquired reactive arthritis from food poisoning on one of his ocean voyages because of poor sanitation and improper food preparation\", says Arnett, a rheumatologist and professor of internal medicine, pathology and laboratory medicine at the University of Texas Medical School at Houston.",
"title": "Later life, illness, and death"
},
{
"paragraph_id": 64,
"text": "Some historians such as H. Micheal Tarver and Emily Slape, as well as medical doctors such as Arnett and Antonio Rodríguez Cuartero, believe that Columbus had such a form of reactive arthritis, but according to other authorities, this is \"speculative\", or \"very speculative\".",
"title": "Later life, illness, and death"
},
{
"paragraph_id": 65,
"text": "After his arrival to Sanlúcar from his fourth voyage (and Queen Isabella's death), an ill Columbus settled in Seville in April 1505. He stubbornly continued to make pleas to the Crown to defend his own personal privileges and his family's. He moved to Segovia (where the court was at the time) on a mule by early 1506, and, on the occasion of the wedding of King Ferdinand with Germaine of Foix in Valladolid, Spain, in March 1506, Columbus moved to that city to persist with his demands. On 20 May 1506, aged 54, Columbus died in Valladolid.",
"title": "Later life, illness, and death"
},
{
"paragraph_id": 66,
"text": "Columbus's remains were first buried at a convent in Valladolid, then moved to the monastery of La Cartuja in Seville (southern Spain) by the will of his son Diego. They may have been exhumed in 1513 and interred at the Seville Cathedral. In about 1536, the remains of both Columbus and his son Diego were moved to a cathedral in Colonial Santo Domingo, in the present-day Dominican Republic; Columbus had requested to be buried on the island. By some accounts, in 1793, when France took over the entire island of Hispaniola, Columbus's remains were moved to Havana, Cuba. After Cuba became independent following the Spanish–American War in 1898, at least some of these remains were moved back to the Seville Cathedral, where they were placed on an elaborate catafalque.",
"title": "Location of remains"
},
{
"paragraph_id": 67,
"text": "In June 2003, DNA samples were taken from these remains as well as those of Columbus's brother Diego and younger son Fernando. Initial observations suggested that the bones did not appear to match Columbus's physique or age at death. DNA extraction proved difficult; only short fragments of mitochondrial DNA could be isolated. These matched corresponding DNA from Columbus's brother, supporting that both individuals had shared the same mother. Such evidence, together with anthropologic and historic analyses, led the researchers to conclude that the remains belonged to Christopher Columbus.",
"title": "Location of remains"
},
{
"paragraph_id": 68,
"text": "In 1877, a priest discovered a lead box at Santo Domingo inscribed: \"Discoverer of America, First Admiral\". Inscriptions found the next year read \"Last of the remains of the first admiral, Sire Christopher Columbus, discoverer.\" The box contained bones of an arm and a leg, as well as a bullet. These remains were considered legitimate by physician and U.S. Assistant Secretary of State John Eugene Osborne, who suggested in 1913 that they travel through the Panama Canal as a part of its opening ceremony. These remains were kept at the Basilica Cathedral of Santa María la Menor (in the Colonial City of Santo Domingo) before being moved to the Columbus Lighthouse (Santo Domingo Este, inaugurated in 1992). The authorities in Santo Domingo have never allowed these remains to be DNA-tested, so it is unconfirmed whether they are from Columbus's body as well.",
"title": "Location of remains"
},
{
"paragraph_id": 69,
"text": "The figure of Columbus was not ignored in the British colonies during the colonial era: Columbus became a unifying symbol early in the history of the colonies that became the United States when Puritan preachers began to use his life story as a model for a \"developing American spirit\". In the spring of 1692, Puritan preacher Cotton Mather described Columbus's voyage as one of three shaping events of the modern age, connecting Columbus's voyage and the Puritans' migration to North America, seeing them together as the key to a grand design.",
"title": "Commemoration"
},
{
"paragraph_id": 70,
"text": "The use of Columbus as a founding figure of New World nations spread rapidly after the American Revolution. This was out of a desire to develop a national history and founding myth with fewer ties to Britain. His name was the basis for the female national personification of the United States, Columbia, in use since the 1730s with reference to the original Thirteen Colonies, and also a historical name applied to the Americas and to the New World. Columbia, South Carolina and Columbia Rediviva, the ship for which the Columbia River was named, are named for Columbus.",
"title": "Commemoration"
},
{
"paragraph_id": 71,
"text": "Columbus's name was given to the newly born Republic of Colombia in the early 19th century, inspired by the political project of \"Colombeia\" developed by revolutionary Francisco de Miranda, which was put at the service of the emancipation of continental Hispanic America.",
"title": "Commemoration"
},
{
"paragraph_id": 72,
"text": "To commemorate the 400th anniversary of the landing of Columbus, the 1893 World's Fair in Chicago was named the World's Columbian Exposition. The U.S. Postal Service issued the first U.S. commemorative stamps, the Columbian Issue, depicting Columbus, Queen Isabella and others in various stages of his several voyages. The policies related to the celebration of the Spanish colonial empire as the vehicle of a nationalist project undertaken in Spain during the Restoration in the late 19th century took form with the commemoration of the 4th centenary on 12 October 1892 (in which the figure of Columbus was extolled by the Conservative government), eventually becoming the very same national day. Several monuments commemorating the \"discovery\" were erected in cities such as Palos, Barcelona, Granada, Madrid, Salamanca, Valladolid and Seville in the years around the 400th anniversary.",
"title": "Commemoration"
},
{
"paragraph_id": 73,
"text": "For the Columbus Quincentenary in 1992, a second Columbian issue was released jointly with Italy, Portugal, and Spain. Columbus was celebrated at Seville Expo '92, and Genoa Expo '92.",
"title": "Commemoration"
},
{
"paragraph_id": 74,
"text": "The Boal Mansion Museum, founded in 1951, contains a collection of materials concerning later descendants of Columbus and collateral branches of the family. It features a 16th-century chapel from a Spanish castle reputedly owned by Diego Colón which became the residence of Columbus's descendants. The chapel interior was dismantled and moved from Spain in 1909 and re-erected on the Boal estate at Boalsburg, Pennsylvania. Inside it are numerous religious paintings and other objects including a reliquary with fragments of wood supposedly from the True Cross. The museum also holds a collection of documents mostly relating to Columbus descendants of the late 18th and early 19th centuries.",
"title": "Commemoration"
},
{
"paragraph_id": 75,
"text": "In many countries of the Americas, as well as Spain and Italy, Columbus Day celebrates the anniversary of Columbus's arrival in the Americas on 12 October 1492.",
"title": "Commemoration"
},
{
"paragraph_id": 76,
"text": "The voyages of Columbus are considered a turning point in human history, marking the beginning of globalization and accompanying demographic, commercial, economic, social, and political changes.",
"title": "Legacy"
},
{
"paragraph_id": 77,
"text": "His explorations resulted in permanent contact between the two hemispheres, and the term \"pre-Columbian\" is used to refer to the cultures of the Americas before the arrival of Columbus and his European successors. The ensuing Columbian exchange saw the massive exchange of animals, plants, fungi, diseases, technologies, mineral wealth and ideas.",
"title": "Legacy"
},
{
"paragraph_id": 78,
"text": "In the first century after his endeavors, Columbus's figure largely languished in the backwaters of history, and his reputation was beset by his failures as a colonial administrator. His legacy was somewhat rescued from oblivion when he began to appear as a character in Italian and Spanish plays and poems from the late 16th century onward.",
"title": "Legacy"
},
{
"paragraph_id": 79,
"text": "Columbus was subsumed into the Western narrative of colonization and empire building, which invoked notions of translatio imperii and translatio studii to underline who was considered \"civilized\" and who was not.",
"title": "Legacy"
},
{
"paragraph_id": 80,
"text": "The Americanization of the figure of Columbus began in the latter decades of the 18th century, after the revolutionary period of the United States, elevating the status of his reputation to a national myth, homo americanus. His landing became a powerful icon as an \"image of American genesis\". The Discovery of America sculpture, depicting Columbus and a cowering Indian maiden, was commissioned on 3 April 1837, when U.S. President Martin Van Buren sanctioned the engineering of Luigi Persico's design. This representation of Columbus's triumph and the Indian's recoil is a demonstration of white superiority over savage, naive Indians. As recorded during its unveiling in 1844, the sculpture extends to \"represent the meeting of the two races\", as Persico captures their first interaction, highlighting the \"moral and intellectual inferiority\" of Indians. Placed outside the U.S. Capitol building where it remained until its removal in the mid-20th century, the sculpture reflected the contemporary view of whites in the U.S. toward the Natives; they are labeled \"merciless Indian savages\" in the United States Declaration of Independence. In 1836, Pennsylvania senator and future U.S. President James Buchanan, who proposed the sculpture, described it as representing \"the great discoverer when he first bounded with ecstasy upon the shore, ail his toils past, presenting a hemisphere to the astonished world, with the name America inscribed upon it. Whilst he is thus standing upon the shore, a female savage, with awe and wonder depicted in her countenance, is gazing upon him.\"",
"title": "Legacy"
},
{
"paragraph_id": 81,
"text": "The American Columbus myth was reconfigured later in the century when he was enlisted as an ethnic hero by immigrants to the United States who were not of Anglo-Saxon stock, such as Jewish, Italian, and Irish people, who claimed Columbus as a sort of ethnic founding father. Catholics unsuccessfully tried to promote him for canonization in the 19th century.",
"title": "Legacy"
},
{
"paragraph_id": 82,
"text": "From the 1990s onward, a narrative of Columbus being responsible for the genocide of indigenous peoples and environmental destruction began to compete with the then predominant discourse of Columbus as Christ-bearer, scientist, or father of America. This narrative features the negative effects of Columbus' conquests on native populations. Exposed to Old World diseases, the indigenous populations of the New World collapsed, and were largely replaced by Europeans and Africans, who brought with them new methods of farming, business, governance, and religious worship.",
"title": "Legacy"
},
{
"paragraph_id": 83,
"text": "Though Christopher Columbus came to be considered the European discoverer of America in Western popular culture, his historical legacy is more nuanced. After settling Iceland, the Norse settled the uninhabited southern part of Greenland beginning in the 10th century. Norsemen are believed to have then set sail from Greenland and Iceland to become the first known Europeans to reach the North American mainland, nearly 500 years before Columbus reached the Caribbean. The 1960s discovery of a Norse settlement dating to c. 1000 AD at L'Anse aux Meadows, Newfoundland, partially corroborates accounts within the Icelandic sagas of Erik the Red's colonization of Greenland and his son Leif Erikson's subsequent exploration of a place he called Vinland.",
"title": "Legacy"
},
{
"paragraph_id": 84,
"text": "In the 19th century, amid a revival of interest in Norse culture, Carl Christian Rafn and Benjamin Franklin DeCosta wrote works establishing that the Norse had preceded Columbus in colonizing the Americas. Following this, in 1874 Rasmus Bjørn Anderson argued that Columbus must have known of the North American continent before he started his voyage of discovery. Most modern scholars doubt Columbus had knowledge of the Norse settlements in America, with his arrival to the continent being most likely an independent discovery.",
"title": "Legacy"
},
{
"paragraph_id": 85,
"text": "Europeans devised explanations for the origins of the Native Americans and their geographical distribution with narratives that often served to reinforce their own preconceptions built on ancient intellectual foundations. In modern Latin America, the non-Native populations of some countries often demonstrate an ambiguous attitude toward the perspectives of indigenous peoples regarding the so-called \"discovery\" by Columbus and the era of colonialism that followed. In his 1960 monograph, Mexican philosopher and historian Edmundo O'Gorman explicitly rejects the Columbus discovery myth, arguing that the idea that Columbus discovered America was a misleading legend fixed in the public mind through the works of American author Washington Irving during the 19th century. O'Gorman argues that to assert Columbus \"discovered America\" is to shape the facts concerning the events of 1492 to make them conform to an interpretation that arose many years later. For him, the Eurocentric view of the discovery of America sustains systems of domination in ways that favor Europeans. In a 1992 article for The UNESCO Courier, Félix Fernández-Shaw argues that the word \"discovery\" prioritizes European explorers as the \"heroes\" of the contact between the Old and New World. He suggests that the word \"encounter\" is more appropriate, being a more universal term which includes Native Americans in the narrative.",
"title": "Legacy"
},
{
"paragraph_id": 86,
"text": "Historians have traditionally argued that Columbus remained convinced until his death that his journeys had been along the east coast of Asia as he originally intended (excluding arguments such as Anderson's). On his third voyage he briefly referred to South America as a \"hitherto unknown\" continent, while also rationalizing that it was the \"Earthly Paradise\" located \"at the end of the Orient\". Columbus continued to claim in his later writings that he had reached Asia; in a 1502 letter to Pope Alexander VI, he asserts that Cuba is the east coast of Asia. On the other hand, in a document in the Book of Privileges (1502), Columbus refers to the New World as the Indias Occidentales ('West Indies'), which he says \"were unknown to all the world\".",
"title": "Legacy"
},
{
"paragraph_id": 87,
"text": "Washington Irving's 1828 biography of Columbus popularized the idea that Columbus had difficulty obtaining support for his plan because many Catholic theologians insisted that the Earth was flat, but this is a popular misconception which can be traced back to 17th-century Protestants campaigning against Catholicism. In fact, the spherical shape of the Earth had been known to scholars since antiquity, and was common knowledge among sailors, including Columbus. Coincidentally, the oldest surviving globe of the Earth, the Erdapfel, was made in 1492, just before Columbus's return to Europe from his first voyage. As such it contains no sign of the Americas and yet demonstrates the common belief in a spherical Earth.",
"title": "Legacy"
},
{
"paragraph_id": 88,
"text": "Making observations with a quadrant on his third voyage, Columbus inaccurately measured the polar radius of the North Star's diurnal motion to be five degrees, which was double the value of another erroneous reading he had made from further north. This led him to describe the figure of the Earth as pear-shaped, with the \"stalk\" portion ascending towards Heaven. In fact, the Earth is ever so slightly pear-shaped, with its \"stalk\" pointing north.",
"title": "Legacy"
},
{
"paragraph_id": 89,
"text": "Columbus has been criticized both for his brutality and for initiating the depopulation of the indigenous peoples of the Caribbean, whether by imported diseases or intentional violence. According to scholars of Native American history, George Tinker and Mark Freedman, Columbus was responsible for creating a cycle of \"murder, violence, and slavery\" to maximize exploitation of the Caribbean islands' resources, and that Native deaths on the scale at which they occurred would not have been caused by new diseases alone. Further, they describe the proposition that disease and not genocide caused these deaths as \"American holocaust denial\". Historian Kris Lane disputes whether it is appropriate to use the term \"genocide\" when the atrocities were not Columbus's intent, but resulted from his decrees, family business goals, and negligence. Other scholars defend Columbus's actions or allege that the worst accusations against him are not based in fact while others claim that \"he has been blamed for events far beyond his own reach or knowledge\".",
"title": "Legacy"
},
{
"paragraph_id": 90,
"text": "As a result of the protests and riots that followed the murder of George Floyd in 2020, many public monuments of Christopher Columbus have been removed.",
"title": "Legacy"
},
{
"paragraph_id": 91,
"text": "Some historians have criticized Columbus for initiating the widespread colonization of the Americas and for abusing its native population. On St. Croix, Columbus's friend Michele da Cuneo—according to his own account—kept an indigenous woman he captured, whom Columbus \"gave to [him]\", then brutally raped her.",
"title": "Legacy"
},
{
"paragraph_id": 92,
"text": "According to some historians, the punishment for an indigenous person, aged 14 and older, failing to pay a hawk's bell, or cascabela, worth of gold dust every six months (based on Bartolomé de las Casas's account) was cutting off the hands of those without tokens, often leaving them to bleed to death. Other historians dispute such accounts. For example, a study of Spanish archival sources showed that the cascabela quotas were imposed by Guarionex, not Columbus, and that there is no mention, in the primary sources, of punishment by cutting off hands for failing to pay. Columbus had an economic interest in the enslavement of the Hispaniola natives and for that reason was not eager to baptize them, which attracted criticism from some churchmen. Consuelo Varela, a Spanish historian, stated that \"Columbus's government was characterized by a form of tyranny. Even those who loved him had to admit the atrocities that had taken place.\" Other historians have argued that some of the accounts of the brutality of Columbus and his brothers have been exaggerated as part of the Black Legend, a historical tendency towards anti-Spanish and anti-Catholic sentiment in historical sources dating as far back as the 16th century, which they speculate may continue to taint scholarship into the present day.",
"title": "Legacy"
},
{
"paragraph_id": 93,
"text": "According to historian Emily Berquist Soule, the immense Portuguese profits from the maritime trade in African slaves along the West African coast served as an inspiration for Columbus to create a counterpart of this apparatus in the New World using indigenous American slaves. Historian William J. Connell has argued that while Columbus \"brought the entrepreneurial form of slavery to the New World\", this \"was a phenomenon of the times\", further arguing that \"we have to be very careful about applying 20th-century understandings of morality to the morality of the 15th century.\" In a less popular defense of colonization, Spanish ambassador María Jesús Figa López-Palop has argued, \"Normally we melded with the cultures in America, we stayed there, we spread our language and culture and religion.\"",
"title": "Legacy"
},
{
"paragraph_id": 94,
"text": "British historian Basil Davidson has dubbed Columbus the \"father of the slave trade\", citing the fact that the first license to ship enslaved Africans to the Caribbean was issued by the Catholic Monarchs in 1501 to the first royal governor of Hispaniola, Nicolás de Ovando.",
"title": "Legacy"
},
{
"paragraph_id": 95,
"text": "Around the turn of the 21st century, estimates for the pre-Columbian population of Hispaniola ranged between 250,000 and two million, but genetic analysis published in late 2020 suggests that smaller figures are more likely, perhaps as low as 10,000–50,000 for Hispaniola and Puerto Rico combined. Based on the previous figures of a few hundred thousand, some have estimated that a third or more of the natives in Haiti were dead within the first two years of Columbus's governorship. Contributors to depopulation included disease, warfare, and harsh enslavement. Indirect evidence suggests that some serious illness may have arrived with the 1,500 colonists who accompanied Columbus' second expedition in 1493. Charles C. Mann writes that \"It was as if the suffering these diseases had caused in Eurasia over the past millennia were concentrated into the span of decades.\" A third of the natives forced to work in gold and silver mines died every six months. Within three to six decades, the surviving Arawak population numbered only in the hundreds. The indigenous population of the Americas overall is thought to have been reduced by about 90% in the century after Columbus's arrival. Among indigenous peoples, Columbus is often viewed as a key agent of genocide. Samuel Eliot Morison, a Harvard historian and author of a multivolume biography on Columbus, writes, \"The cruel policy initiated by Columbus and pursued by his successors resulted in complete genocide.\"",
"title": "Legacy"
},
{
"paragraph_id": 96,
"text": "According to Noble David Cook, \"There were too few Spaniards to have killed the millions who were reported to have died in the first century after Old and New World contact.\" He instead estimates that the death toll was caused by smallpox, which may have caused a pandemic only after the arrival of Hernán Cortés in 1519. According to some estimates, smallpox had an 80–90% fatality rate in Native American populations. The natives had no acquired immunity to these new diseases and suffered high fatalities. There is also evidence that they had poor diets and were overworked. Historian Andrés Reséndez of University of California, Davis, says the available evidence suggests \"slavery has emerged as major killer\" of the indigenous populations of the Caribbean between 1492 and 1550 more so than diseases such as smallpox, influenza and malaria. He says that indigenous populations did not experience a rebound like European populations did following the Black Death because unlike the latter, a large portion of the former were subjected to deadly forced labor in the mines.",
"title": "Legacy"
},
{
"paragraph_id": 97,
"text": "The diseases that devastated the Native Americans came in multiple waves at different times, sometimes as much as centuries apart, which would mean that survivors of one disease may have been killed by others, preventing the population from recovering. Historian David Stannard describes the depopulation of the indigenous Americans as \"neither inadvertent nor inevitable\", saying it was the result of both disease and intentional genocide.",
"title": "Legacy"
},
{
"paragraph_id": 98,
"text": "Biographers and historians have a wide range of opinions about Columbus's expertise and experience navigating and captaining ships. One scholar lists some European works ranging from the 1890s to 1980s that support Columbus's experience and skill as among the best in Genoa, while listing some American works over a similar timeframe that portray the explorer as an untrained entrepreneur, having only minor crew or passenger experience prior to his noted journeys. According to Morison, Columbus's success in utilizing the trade winds might owe significantly to luck.",
"title": "Legacy"
},
{
"paragraph_id": 99,
"text": "Contemporary descriptions of Columbus, including those by his son Fernando and Bartolomé de las Casas, describe him as taller than average, with light skin (often sunburnt), blue or hazel eyes, high cheekbones and freckled face, an aquiline nose, and blond to reddish hair and beard (until about the age of 30, when it began to whiten). One Spanish commentator described his eyes using the word garzos, now usually translated as \"light blue\", but it seems to have indicated light grey-green or hazel eyes to Columbus's contemporaries. The word rubios can mean \"blond\", \"fair\", or \"ruddy\". Although an abundance of artwork depicts Columbus, no authentic contemporary portrait is known.",
"title": "Physical appearance"
},
{
"paragraph_id": 100,
"text": "A well-known image of Columbus is a portrait by Sebastiano del Piombo, which has been reproduced in many textbooks. It agrees with descriptions of Columbus in that it shows a large man with auburn hair, but the painting dates from 1519 so cannot have been painted from life. Furthermore, the inscription identifying the subject as Columbus was probably added later, and the face shown differs from that of other images.",
"title": "Physical appearance"
},
{
"paragraph_id": 101,
"text": "Sometime between 1531 and 1536, Alejo Fernández painted an altarpiece, The Virgin of the Navigators, that includes a depiction of Columbus. The painting was commissioned for a chapel in Seville's Casa de Contratación (House of Trade) in the Alcázar of Seville and remains there.",
"title": "Physical appearance"
},
{
"paragraph_id": 102,
"text": "At the World's Columbian Exposition in 1893, 71 alleged portraits of Columbus were displayed; most of them did not match contemporary descriptions.",
"title": "Physical appearance"
}
] | Christopher Columbus was an Italian explorer and navigator from the Republic of Genoa who completed four Spanish-based voyages across the Atlantic Ocean sponsored by the Catholic Monarchs, opening the way for the widespread European exploration and European colonization of the Americas. His expeditions were the first known European contact with the Caribbean and Central and South America. The name Christopher Columbus is the anglicisation of the Latin Christophorus Columbus. Growing up on the coast of Liguria, he went to sea at a young age and travelled widely, as far north as the British Isles and as far south as what is now Ghana. He married Portuguese noblewoman Filipa Moniz Perestrelo, who bore a son Diego, and was based in Lisbon for several years. He later took a Castilian mistress, Beatriz Enríquez de Arana, who bore a son, Ferdinand. Largely self-educated, Columbus was knowledgeable in geography, astronomy, and history. He developed a plan to seek a western sea passage to the East Indies, hoping to profit from the lucrative spice trade. After the Granada War, and Columbus's persistent lobbying in multiple kingdoms, the Catholic Monarchs, Queen Isabella I and King Ferdinand II, agreed to sponsor a journey west. Columbus left Castile in August 1492 with three ships and made landfall in the Americas on 12 October, ending the period of human habitation in the Americas now referred to as the pre-Columbian era. His landing place was an island in the Bahamas, known by its native inhabitants as Guanahani. He then visited the islands now known as Cuba and Hispaniola, establishing a colony in what is now Haiti. Columbus returned to Castile in early 1493, with captured natives. Word of his voyage soon spread throughout Europe. Columbus made three further voyages to the Americas, exploring the Lesser Antilles in 1493, Trinidad and the northern coast of South America in 1498, and the east coast of Central America in 1502. Many names he gave to geographical features, particularly islands, are still in use. He gave the name indios ("Indians") to the indigenous peoples he encountered. The extent to which he was aware the Americas were a wholly separate landmass is uncertain; he never clearly renounced his belief he had reached the Far East. As a colonial governor, Columbus was accused by some of his contemporaries of significant brutality and removed from the post. Columbus's strained relationship with the Crown of Castile and its colonial administrators in America led to his arrest and removal from Hispaniola in 1500, and later to protracted litigation over the privileges he and his heirs claimed were owed to them by the crown. Columbus's expeditions inaugurated a period of exploration, conquest, and colonization that lasted for centuries, thus bringing the Americas into the European sphere of influence. The transfer of plants, animals, precious metals, culture, human populations, technology, diseases, and ideas between the Old World and New World that followed his first voyage are known as the Columbian exchange. These events and the effects which persist to the present are often cited as the beginning of the modern era. Columbus was widely celebrated in the centuries after his death, but public perception fractured in the 21st century due to greater attention to the harms committed under his governance, particularly the beginning of the depopulation of Hispaniola's indigenous Taínos, caused by Old World diseases and mistreatment, including slavery. 
Many places in the Western Hemisphere bear his name, including the South American country of Colombia, the Canadian province of British Columbia, the American city Columbus, Ohio, and the U.S. capital, the District of Columbia. | 2001-05-09T21:04:10Z | 2023-12-18T21:39:20Z | [
"Template:Lang",
"Template:Multiple image",
"Template:In lang",
"Template:Cite press release",
"Template:Redirect2",
"Template:Refn",
"Template:Nowrap",
"Template:Cite web",
"Template:Refend",
"Template:Wikiquote",
"Template:Gutenberg author",
"Template:Further",
"Template:IPA-lij",
"Template:Cite journal",
"Template:Harvnb",
"Template:Spanish Empire",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Infobox officeholder",
"Template:Reflist",
"Template:Cite book",
"Template:Cite magazine",
"Template:Cite encyclopedia",
"Template:Citation",
"Template:Sfn",
"Template:Notelist",
"Template:ISBN",
"Template:Commons",
"Template:Pp-vandalism",
"Template:IPAc-en",
"Template:See also",
"Template:Verify source",
"Template:Wikisource author",
"Template:Internet Archive author",
"Template:Librivox author",
"Template:Efn",
"Template:Library resources box",
"Template:History of the Americas",
"Template:Ship",
"Template:Short description",
"Template:Pp-move",
"Template:Convert",
"Template:Main",
"Template:Circa",
"Template:Cite NIE",
"Template:Cite news",
"Template:Refbegin",
"Template:Cite EB1911"
] | https://en.wikipedia.org/wiki/Christopher_Columbus |
5,636 | Chemist | A chemist (from Greek chēm(ía) alchemy; replacing chymist from Medieval Latin alchemist) is a scientist trained in the study of chemistry. Chemists study the composition of matter and its properties. Chemists carefully describe the properties they study in terms of quantities, with detail on the level of molecules and their component atoms. Chemists carefully measure substance proportions, chemical reaction rates, and other chemical properties. In Commonwealth English, pharmacists are often called chemists.
Chemists use their knowledge to learn the composition and properties of unfamiliar substances, as well as to reproduce and synthesize large quantities of useful naturally occurring substances and create new artificial substances and useful processes. Chemists may specialize in any number of subdisciplines of chemistry. Materials scientists and metallurgists share much of the same education and skills with chemists. The work of chemists is often related to the work of chemical engineers, who are primarily concerned with the proper design, construction and evaluation of the most cost-effective large-scale chemical plants and work closely with industrial chemists on the development of new processes and methods for the commercial-scale manufacture of chemicals and related products.
The roots of chemistry can be traced to the phenomenon of burning. Fire was a mystical force that transformed one substance into another and thus was of primary interest to mankind. It was fire that led to the discovery of iron and glass. After gold was discovered and became a precious metal, many people were interested in finding a method that could convert other substances into gold. This led to the protoscience called alchemy. The word chemist is derived from the Neo-Latin noun chimista, an abbreviation of alchimista (alchemist). Alchemists discovered many chemical processes that led to the development of modern chemistry. Chemistry, as we know it today, was invented by Antoine Lavoisier with his law of conservation of mass in 1783. The discovery of the chemical elements has a long history, culminating in the creation of the periodic table by Dmitri Mendeleev. The Nobel Prize in Chemistry, created in 1901, gives an excellent overview of chemical discovery since the start of the 20th century.
Jobs for chemists generally require at least a bachelor's degree in chemistry, but many positions, especially those in research, require a Master of Science or a Doctor of Philosophy (PhD). Most undergraduate programs emphasize mathematics and physics as well as chemistry, partly because chemistry is also known as "the central science"; thus, chemists ought to have a well-rounded knowledge of science. At the master's level and higher, students tend to specialize in a particular field. Fields of specialization include biochemistry, nuclear chemistry, organic chemistry, inorganic chemistry, polymer chemistry, analytical chemistry, physical chemistry, theoretical chemistry, quantum chemistry, environmental chemistry, and thermochemistry. Postdoctoral experience may be required for certain positions.
Workers whose work involves chemistry, but not at a complexity requiring an education with a chemistry degree, are commonly referred to as chemical technicians. Such technicians, typically holding an associate degree, commonly perform simpler, routine analyses for quality control or in clinical laboratories. A chemical technologist has more education or experience than a chemical technician but less than a chemist, often having a bachelor's degree in a different field of science along with an associate degree in chemistry (or many credits related to chemistry), or having the same education as a chemical technician but more experience. There are also degrees specific to becoming a chemical technologist, which are somewhat distinct from those required when a student is interested in becoming a professional chemist. A chemical technologist is more involved than a chemical technician in the management and operation of the equipment and instrumentation necessary to perform chemical analyses. They are part of the team of a chemical laboratory in which the quality of the raw material, intermediate products and finished products is analyzed. They also perform functions in the areas of environmental quality control and the operational phase of a chemical plant.
In addition to all the training usually given to chemical technologists in their respective degree (or one given via an associate degree), a chemist is also trained to understand more details related to chemical phenomena so that the chemist can be capable of more planning on the steps to achieve a distinct goal via a chemistry-related endeavor. The higher the competency level achieved in the field of chemistry (as assessed via a combination of education, experience and personal achievements), the higher the responsibility given to that chemist and the more complicated the task might be. Chemistry, as a field, has so many applications that different tasks and objectives can be given to workers or scientists with these different levels of education or experience. The specific title of each job varies from position to position, depending on factors such as the kind of industry, the routine level of the task, the current needs of a particular enterprise, the size of the enterprise or hiring firm, the philosophy and management principles of the hiring firm, the visibility of the competency and individual achievements of the one seeking employment, and economic factors such as recession or economic depression, among others. This makes it difficult to categorize the exact roles of these chemistry-related workers as standard for a given level of education. Because of these factors affecting exact job titles with distinct responsibilities, some chemists might begin doing technician tasks while other chemists might begin doing more complicated tasks than those of a technician, such as tasks that also involve formal applied research, management, or supervision included within the responsibilities of that same job title. The level of supervision given to a chemist also varies in a similar manner, depending on factors similar to those that affect the tasks demanded of a particular chemist.
It is important that those interested in a chemistry degree understand the variety of roles available to them (on average), which vary depending on education and job experience. Chemists who hold a bachelor's degree are most commonly involved in positions related to research assistance (working under the guidance of senior chemists in a research-oriented activity); alternatively, they may work on distinct (chemistry-related) aspects of a business, organization or enterprise, including aspects that involve quality control, quality assurance, manufacturing, production, formulation, inspection, method validation, visitation for troubleshooting of chemistry-related instruments, regulatory affairs, "on-demand" technical services, and chemical analysis for non-research purposes (e.g., as a legal request, for testing purposes, or for government or non-profit agencies); chemists may also work in environmental evaluation and assessment. Other jobs or roles may include sales and marketing of chemical products and chemistry-related instruments, or technical writing. The more experience obtained, the more independence and leadership or management roles these chemists may perform in those organizations. Some chemists with relatively more experience might change jobs or job position to become a manager of a chemistry-related enterprise, a supervisor, an entrepreneur or a chemistry consultant. Other chemists choose to combine their education and experience as a chemist with a distinct credential to provide different services (e.g., forensic chemists, chemistry-related software development, patent law specialists, environmental law firm staff, scientific news reporting staff, engineering design staff, etc.).
In comparison, chemists who have obtained a Master of Science (M.S.) in chemistry or in a closely related discipline may find chemist roles that allow them to enjoy more independence, leadership and responsibility earlier in their careers, with fewer years of experience than those with a bachelor's degree as their highest degree. Sometimes, M.S. chemists receive more complex tasks than chemists with a bachelor's degree as their highest academic degree and the same or nearly the same years of job experience. There are positions that are open only to those that have at least a degree related to chemistry at the master's level. Although good chemists without a Ph.D. but with relatively many years of experience may be allowed some applied research positions, the general rule is that Ph.D. chemists are preferred for research positions and are typically the preferred choice for the highest administrative positions in big enterprises involved in chemistry-related duties. Some positions, especially research-oriented ones, are open only to chemists who are Ph.D. holders. Jobs that involve intensive research and actively seek to lead the discovery of completely new chemical compounds under specifically assigned monetary funds and resources, or jobs that seek to develop new scientific theories, require a Ph.D. more often than not. Chemists with a Ph.D. as their highest academic degree are typically found in the research-and-development department of an enterprise and can also hold university positions as professors. Professors at research universities or at big universities usually have a Ph.D., and some research-oriented institutions might require post-doctoral training. Some smaller colleges (including some smaller four-year colleges or smaller non-research universities for undergraduates) as well as community colleges usually hire chemists with an M.S. as professors too (and rarely, some big universities that need part-time or temporary instructors, or temporary staff), but when positions are scarce and applicants are many, they might prefer Ph.D. holders instead.
The three major employers of chemists are academic institutions, industry, especially the chemical industry and the pharmaceutical industry, and government laboratories.
Chemistry typically is divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry. There is a great deal of overlap between different branches of chemistry, as well as with other scientific fields such as biology, medicine, physics, radiology, and several engineering disciplines.
All the above major areas of chemistry employ chemists. Other fields where chemical degrees are useful include astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemo-informatics, electrochemistry, environmental science, forensic science, geochemistry, green chemistry, history of chemistry, materials science, medical science, molecular biology, molecular genetics, nanotechnology, nuclear chemistry, oenology, organometallic chemistry, petrochemistry, pharmacology, photochemistry, phytochemistry, polymer chemistry, supramolecular chemistry and surface chemistry.
Chemists may belong to professional societies specifically for professionals and researchers within the field of chemistry, such as the Royal Society of Chemistry in the United Kingdom, the American Chemical Society (ACS) in the United States, or the Institution of Chemists in India.
The highest honor awarded to chemists is the Nobel Prize in Chemistry, awarded since 1901, by the Royal Swedish Academy of Sciences. | [
{
"paragraph_id": 0,
"text": "A chemist (from Greek chēm(ía) alchemy; replacing chymist from Medieval Latin alchemist) is a scientist trained in the study of chemistry. Chemists study the composition of matter and its properties. Chemists carefully describe the properties they study in terms of quantities, with detail on the level of molecules and their component atoms. Chemists carefully measure substance proportions, chemical reaction rates, and other chemical properties. In Commonwealth English, pharmacists are often called chemists.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Chemists use their knowledge to learn the composition and properties of unfamiliar substances, as well as to reproduce and synthesize large quantities of useful naturally occurring substances and create new artificial substances and useful processes. Chemists may specialize in any number of subdisciplines of chemistry. Materials scientists and metallurgists share much of the same education and skills with chemists. The work of chemists is often related to the work of chemical engineers, who are primarily concerned with the proper design, construction and evaluation of the most cost-effective large-scale chemical plants and work closely with industrial chemists on the development of new processes and methods for the commercial-scale manufacture of chemicals and related products.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The roots of chemistry can be traced to the phenomenon of burning. Fire was a mystical force that transformed one substance into another and thus was of primary interest to mankind. It was fire that led to the discovery of iron and glasses. After gold was discovered and became a precious metal, many people were interested to find a method that could convert other substances into gold. This led to the protoscience called alchemy. The word chemist is derived from the Neo-Latin noun chimista, an abbreviation of alchimista (alchemist). Alchemists discovered many chemical processes that led to the development of modern chemistry. Chemistry as we know it today, was invented by Antoine Lavoisier with his law of conservation of mass in 1783. The discoveries of the chemical elements has a long history culminating in the creation of the periodic table by Dmitri Mendeleev. The Nobel Prize in Chemistry created in 1901 gives an excellent overview of chemical discovery since the start of the 20th century.",
"title": "History of chemistry"
},
{
"paragraph_id": 3,
"text": "Jobs for chemists generally require at least a bachelor's degree in chemistry, but many positions, especially those in research, require a Master of Science or a Doctor of Philosophy (PhD.). Most undergraduate programs emphasize mathematics and physics as well as chemistry, partly because chemistry is also known as \"the central science\", thus chemists ought to have a well-rounded knowledge about science. At the Master's level and higher, students tend to specialize in a particular field. Fields of specialization include biochemistry, nuclear chemistry, organic chemistry, inorganic chemistry, polymer chemistry, analytical chemistry, physical chemistry, theoretical chemistry, quantum chemistry, environmental chemistry, and thermochemistry. Postdoctoral experience may be required for certain positions.",
"title": "Education"
},
{
"paragraph_id": 4,
"text": "Workers whose work involves chemistry, but not at a complexity requiring an education with a chemistry degree, are commonly referred to as chemical technicians. Such technicians commonly do such work as simpler, routine analyses for quality control or in clinical laboratories, having an associate degree. A chemical technologist has more education or experience than a chemical technician but less than a chemist, often having a bachelor's degree in a different field of science with also an associate degree in chemistry (or many credits related to chemistry) or having the same education as a chemical technician but more experience. There are also degrees specific to become a chemical technologist, which are somewhat distinct from those required when a student is interested in becoming a professional chemist. A Chemical technologist is more involved in the management and operation of the equipment and instrumentation necessary to perform chemical analyzes than a chemical technician. They are part of the team of a chemical laboratory in which the quality of the raw material, intermediate products and finished products is analyzed. They also perform functions in the areas of environmental quality control and the operational phase of a chemical plant.",
"title": "Education"
},
{
"paragraph_id": 5,
"text": "In addition to all the training usually given to chemical technologists in their respective degree (or one given via an associate degree), a chemist is also trained to understand more details related to chemical phenomena so that the chemist can be capable of more planning on the steps to achieve a distinct goal via a chemistry-related endeavor. The higher the competency level achieved in the field of chemistry (as assessed via a combination of education, experience and personal achievements), the higher the responsibility given to that chemist and the more complicated the task might be. Chemistry, as a field, have so many applications that different tasks and objectives can be given to workers or scientists with these different levels of education or experience. The specific title of each job varies from position to position, depending on factors such as the kind of industry, the routine level of the task, the current needs of a particular enterprise, the size of the enterprise or hiring firm, the philosophy and management principles of the hiring firm, the visibility of the competency and individual achievements of the one seeking employment, economic factors such as recession or economic depression, among other factors, so this makes it difficult to categorize the exact roles of these chemistry-related workers as standard for that given level of education. Because of these factors affecting exact job titles with distinct responsibilities, some chemists might begin doing technician tasks while other chemists might begin doing more complicated tasks than those of a technician, such as tasks that also involve formal applied research, management, or supervision included within the responsibilities of that same job title. The level of supervision given to that chemist also varies in a similar manner, with factors similar to those that affect the tasks demanded for a particular chemist.",
"title": "Education"
},
{
"paragraph_id": 6,
"text": "It is important that those interested in a Chemistry degree understand the variety of roles available to them (on average), which vary depending on education and job experience. Those Chemists who hold a bachelor's degree are most commonly involved in positions related to either research assistance (working under the guidance of senior chemists in a research-oriented activity), or, alternatively, they may work on distinct (chemistry-related) aspects of a business, organization or enterprise including aspects that involve quality control, quality assurance, manufacturing, production, formulation, inspection, method validation, visitation for troubleshooting of chemistry-related instruments, regulatory affairs, \"on-demand\" technical services, chemical analysis for non-research purposes (e.g., as a legal request, for testing purposes, or for government or non-profit agencies); chemists may also work in environmental evaluation and assessment. Other jobs or roles may include sales and marketing of chemical products and chemistry-related instruments or technical writing. The more experience obtained, the more independence and leadership or management roles these chemists may perform in those organizations. Some chemists with relatively higher experience might change jobs or job position to become a manager of a chemistry-related enterprise, a supervisor, an entrepreneur or a chemistry consultant. Other chemists choose to combine their education and experience as a chemist with a distinct credential to provide different services (e.g., forensic chemists, chemistry-related software development, patent law specialists, environmental law firm staff, scientific news reporting staff, engineering design staff, etc.).",
"title": "Education"
},
{
"paragraph_id": 7,
"text": "In comparison, chemists who have obtained a Master of Science (M.S.) in chemistry or in a very related discipline may find chemist roles that allow them to enjoy more independence, leadership and responsibility earlier in their careers with less years of experience than those with a bachelor's degree as highest degree. Sometimes, M.S. chemists receive more complex tasks duties in comparison with the roles and positions found by chemists with a bachelor's degree as their highest academic degree and with the same or close-to-same years of job experience. There are positions that are open only to those that at least have a degree related to chemistry at the master's level. Although good chemists without a Ph. D. degree but with relatively many years of experience may be allowed some applied research positions, the general rule is that Ph. D. chemists are preferred for research positions and are typically the preferred choice for the highest administrative positions on big enterprises involved in chemistry-related duties. Some positions, especially research oriented, will only allow those chemists who are Ph. D. holders. Jobs that involve intensive research and actively seek to lead the discovery of completely new chemical compounds under specifically assigned monetary funds and resources or jobs that seek to develop new scientific theories require a Ph. D. more often than not. Chemists with a Ph. D. as the highest academic degree are found typically on the research-and-development department of an enterprise and can also hold university positions as professors. Professors for research universities or for big universities usually have a Ph. D., and some research-oriented institutions might require post-doctoral training. Some smaller colleges (including some smaller four-year colleges or smaller non-research universities for undergraduates) as well as community colleges usually hire chemists with a M.S. as professors too (and rarely, some big universities who need part-time or temporary instructors, or temporary staff), but when the positions are scarce and the applicants are many, they might prefer Ph. D. holders instead.",
"title": "Education"
},
{
"paragraph_id": 8,
"text": "The three major employers of chemists are academic institutions, industry, especially the chemical industry and the pharmaceutical industry, and government laboratories.",
"title": "Employment"
},
{
"paragraph_id": 9,
"text": "Chemistry typically is divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry. There is a great deal of overlap between different branches of chemistry, as well as with other scientific fields such as biology, medicine, physics, radiology, and several engineering disciplines.",
"title": "Employment"
},
{
"paragraph_id": 10,
"text": "All the above major areas of chemistry employ chemists. Other fields where chemical degrees are useful include astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemo-informatics, electrochemistry, environmental science, forensic science, geochemistry, green chemistry, history of chemistry, materials science, medical science, molecular biology, molecular genetics, nanotechnology, nuclear chemistry, oenology, organometallic chemistry, petrochemistry, pharmacology, photochemistry, phytochemistry, polymer chemistry, supramolecular chemistry and surface chemistry.",
"title": "Employment"
},
{
"paragraph_id": 11,
"text": "Chemists may belong to professional societies specifically for professionals and researchers within the field of chemistry, such as the Royal Society of Chemistry in the United Kingdom, the American Chemical Society (ACS) in the United States, or the Institution of Chemists in India.",
"title": "Professional societies"
},
{
"paragraph_id": 12,
"text": "The highest honor awarded to chemists is the Nobel Prize in Chemistry, awarded since 1901, by the Royal Swedish Academy of Sciences.",
"title": "Honors and awards"
}
] | A chemist is a scientist trained in the study of chemistry. Chemists study the composition of matter and its properties. Chemists carefully describe the properties they study in terms of quantities, with detail on the level of molecules and their component atoms. Chemists carefully measure substance proportions, chemical reaction rates, and other chemical properties. In Commonwealth English, pharmacists are often called chemists. Chemists use their knowledge to learn the composition and properties of unfamiliar substances, as well as to reproduce and synthesize large quantities of useful naturally occurring substances and create new artificial substances and useful processes. Chemists may specialize in any number of subdisciplines of chemistry. Materials scientists and metallurgists share much of the same education and skills with chemists. The work of chemists is often related to the work of chemical engineers, who are primarily concerned with the proper design, construction and evaluation of the most cost-effective large-scale chemical plants and work closely with industrial chemists on the development of new processes and methods for the commercial-scale manufacture of chemicals and related products. | 2001-09-22T21:10:59Z | 2023-12-28T16:49:14Z | [
"Template:About",
"Template:TopicTOC-Chemistry",
"Template:Main",
"Template:Reflist",
"Template:BranchesofChemistry",
"Template:Short description",
"Template:Refimprove",
"Template:Nobel Prize in Chemistry",
"Template:Cite web",
"Template:Webarchive",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Chemist |
5,637 | Cypress Hill | Cypress Hill is an American hip hop group from South Gate, California, formed in 1988. They have sold over 20 million albums worldwide, and they have obtained multi-platinum and platinum certifications. The group has been critically acclaimed for their first five albums. They are considered to be among the main progenitors of West Coast hip hop and 1990s hip hop. All of the group members advocate for medical and recreational use of cannabis in the United States. In 2019, Cypress Hill became the first hip hop group to have a star on the Hollywood Walk of Fame.
Senen Reyes (also known as Sen Dog) and Ulpiano Sergio Reyes (also known as Mellow Man Ace) are brothers born in Pinar del Río, Cuba. In 1971, their family immigrated to the United States and initially lived in South Gate, California. In 1988, the two brothers teamed up with New York City native Lawrence Muggerud (also known as DJ Muggs, previously in a rap group named 7A3) and Louis Freese (also known as B-Real) to form a hip-hop group named DVX (Devastating Vocal Excellence). The band soon lost Mellow Man Ace to a solo career, and changed their name to Cypress Hill, after a street in South Gate.
After recording a demo in 1989, Cypress Hill signed a record deal with Ruffhouse Records. Their self-titled first album was released in August 1991. The lead single was the double A-side "The Phuncky Feel One"/"How I Could Just Kill a Man" which received heavy airplay on urban and college radio, most notably peaking at No. 1 on Billboard's Hot Rap Tracks chart and at No. 77 on the Billboard Hot 100. The other two singles released from the album were "Hand on the Pump" and "Latin Lingo", the latter of which combined English and Spanish lyrics, a trait that was continued throughout their career. The success of these singles led Cypress Hill to sell two million copies in the U.S. alone, and it peaked at No. 31 on the Billboard 200 and was certified double platinum by the RIAA. In 1992, Cypress Hill's first contribution to a soundtrack was the song "Shoot 'Em Up" for the movie Juice. The group made their first appearance at Lollapalooza on the side stage in 1992. It was the festival's second year of touring, and featured a diverse lineup of acts such as Red Hot Chili Peppers, Ice Cube, Lush, Tool, Stone Temple Pilots, among others. The trio also supported the Cypress Hill album by touring with the Beastie Boys, who were touring behind their third album Check Your Head.
Black Sunday, the group's second album, debuted at No. 1 on the Billboard 200 in 1993, recording the highest SoundScan sales for a rap group up until that time. "Insane in the Brain" became a crossover hit, peaking at No. 19 on the Billboard Hot 100, at No. 16 on the Dance Club Songs chart, and at No. 1 on the Hot Rap Tracks chart. "Insane in the Brain" also garnered the group their first Grammy nomination. Black Sunday went triple platinum in the U.S. and sold about 3.26 million copies. Cypress Hill headlined the Soul Assassins tour with House of Pain and Funkdoobiest as support, then performed on a college tour with Rage Against the Machine and Seven Year Bitch. Also in 1993, Cypress Hill had two tracks on the Judgment Night soundtrack, teaming up with Pearl Jam (without vocalist Eddie Vedder) on the track "Real Thing" and Sonic Youth on "I Love You Mary Jane". The soundtrack was notable for intentionally creating collaborations between the rap/hip-hop and rock/metal genres, and as a result the soundtrack peaked at No. 17 on the Billboard 200 and was certified gold by the RIAA. On October 2, 1993, Cypress Hill performed on the comedy show Saturday Night Live, broadcast by NBC. Prior to their performances, studio executives, label representatives, and the group's own associates repeatedly asked the trio not to smoke marijuana on-stage. DJ Muggs became irritated by the constant requests, and he subsequently lit a joint during the group's second song. Up until that point, it was extremely uncommon to see marijuana usage on a live televised broadcast. The incident prompted NBC to ban the group from returning to the show, a distinction shared by only six other artists.
The group later played at Woodstock 94, officially making percussionist Eric Bobo a member of the group during the performance. Eric Bobo was known as the son of Willie Bobo and as a touring member of the Beastie Boys, who Cypress Hill previously toured with in 1992. That same year, Rolling Stone named the group as the Best Rap Group in their music awards voted by critics and readers. Cypress Hill then played at Lollapalooza for two successive years, topping the bill in 1995. They also appeared on the "Homerpalooza" episode of The Simpsons. The group received their second Grammy nomination in 1995 for "I Ain't Goin' Out Like That".
Cypress Hill's third album, III: Temples of Boom, was released in 1995 and peaked at No. 3 on both the Billboard 200 and the Canadian Albums Chart. The album was certified platinum by the RIAA. "Throw Your Set in the Air" was the most successful single off the album, peaking at No. 45 on the Billboard Hot 100 and at No. 11 on the Hot Rap Tracks chart. The single also earned Cypress Hill's third Grammy nomination. Shortly after the release of III: Temples of Boom, Sen Dog became frustrated with the rigorous touring schedule. Just prior to an overseas tour, he departed from the group unexpectedly. Cypress Hill continued their tours throughout 1995 and 1996, with Eric Bobo remaining and various guest vocalists covering Sen Dog's verses. Sen Dog later formed the rock band SX-10 to explore other musical genres. Later on in 1996, Cypress Hill appeared on the first Smokin' Grooves tour, featuring Ziggy Marley, The Fugees, Busta Rhymes, and A Tribe Called Quest. The group also released Unreleased and Revamped, a nine-track EP of rare mixes.
In 1997, the members focused on their solo careers. DJ Muggs released Soul Assassins: Chapter 1, with features from Dr. Dre, KRS-One, Wyclef Jean, and Mobb Deep. B-Real appeared with Busta Rhymes, Coolio, LL Cool J, and Method Man on "Hit 'Em High" from the multi-platinum Space Jam Soundtrack. He also appeared with RBX, Nas, and KRS-One on "East Coast Killer, West Coast Killer" from Dr. Dre's Dr. Dre Presents the Aftermath album, and contributed to an album entitled The Psycho Realm with the group of the same name. Sen Dog also released the Get Wood sampler as part of SX-10 on the label Flip Records. In addition, Eric Bobo contributed drums to albums by various rock bands, such as 311 and Soulfly.
In early 1998, Sen Dog returned to Cypress Hill. He cited his therapist and his creative collaborations with the band SX-10 as catalysts for his rejoining. The quartet then embarked on the third annual Smokin' Grooves tour with Public Enemy, Wyclef Jean, Busta Rhymes, and Gang Starr. Cypress Hill released IV in October 1998; it went gold in the U.S. and peaked at No. 11 on the Billboard 200. The lead single off the album was "Dr. Greenthumb", which peaked at No. 11 on the Hot Rap Tracks chart. It also peaked at No. 70 on the Billboard Hot 100, their last appearance on the chart to date. In 1999, Cypress Hill helped with the PC first-person shooter video game Kingpin: Life of Crime. Three of the band's songs from IV were in the game: "16 Men Till There's No Men Left", "Checkmate", and "Lightning Strikes". The group also did voice work for some of the game's characters. Also in 1999, the band released a greatest hits album in Spanish, Los Grandes Éxitos en Español.
In 2000, Cypress Hill fused genres with their fifth album, Skull & Bones, which consisted of two discs. The first disc, Skull, was composed of rap tracks, while Bones further explored the group's forays into rock. The album peaked at No. 5 on the Billboard 200 and at No. 3 on the Canadian Albums Chart, and it was eventually certified platinum by the RIAA. The first two singles were "(Rock) Superstar" for rock radio and "(Rap) Superstar" for urban radio. Both singles received heavy airplay on both rock and urban radio, enabling Cypress Hill to cross over again. "(Rock) Superstar" peaked at No. 18 on the Modern Rock Tracks chart and "(Rap) Superstar" peaked at No. 43 on the Hot Rap Tracks chart.
Due to the rock genre's prominent appearance on Skull & Bones, Cypress Hill employed the members of Sen Dog's band SX-10 as backing musicians for the live shows. Cypress Hill supported Skull & Bones by initially playing a summer tour with Limp Bizkit and Cold called the Back 2 Basics Tour. The tour was controversial as it was sponsored by the file sharing service Napster. In addition, Napster enabled each show of the tour to be free to the fans, and no security guards were employed during the performances. After the tour's conclusion, the acts reported no disturbances. Towards the end of 2000, Cypress Hill and MxPx landed a slot opening for The Offspring on the Conspiracy of One Tour. The group also released Live at the Fillmore, a concert disc recorded at San Francisco's The Fillmore in 2000. Cypress Hill continued their experimentation with rock on the Stoned Raiders album in 2001; however, its sales were a disappointment. The album peaked at No. 64 on the Billboard 200, the group's lowest position to that point. Also in 2001, the group made a cameo appearance as themselves in the film How High. Cypress Hill then recorded the track "Just Another Victim" for WWF as a theme song for Tazz, borrowing elements from the 2000 single "(Rock) Superstar". The song was later featured on the compilation WWF Forceable Entry in March 2002, which peaked at No. 3 on the Billboard 200 and was certified gold by the RIAA.
Cypress Hill released Till Death Do Us Part in March 2004; it peaked at No. 21 on the Billboard 200. It featured appearances by Bob Marley's son Damian Marley, Prodigy of Mobb Deep, and producers The Alchemist and Fredwreck. The album represented a further departure from the group's signature sound. Reggae was a strong influence on its sound, especially on the lead single "What's Your Number?". The track featured Tim Armstrong of Rancid on guitar and backup vocals. It was based on the classic song "The Guns of Brixton" from The Clash's album London Calling. "What's Your Number?" saw Cypress Hill cross over into the rock charts again, as the single peaked at No. 23 on the Modern Rock Tracks chart.
Afterwards, DJ Muggs took a hiatus from the group to focus on other projects, such as Soul Assassins and his DJ Muggs vs. collaboration albums. In December 2005 another compilation album titled Greatest Hits From the Bong was released. It included nine hits from previous albums and two new tracks. In the summer of 2006, B-Real appeared on Snoop Dogg's single "Vato", which was produced by Pharrell Williams. The group's next album was tentatively scheduled for an early 2007 release, but it was pushed back numerous times. In 2007 Cypress Hill toured as a part of the Rock the Bells tour. They headlined with Public Enemy, Wu-Tang Clan, Nas, and a reunited Rage Against the Machine.
On July 25, 2008, Cypress Hill performed at a benefit concert at the House of Blues Chicago, where a majority of the proceeds went to the Chicago Alliance to End Homelessness. In August 2009, a new song by Cypress Hill titled "Get 'Em Up" was made available on iTunes. The song was also featured in the Madden NFL 2010 video game. It was the first sampling of the group's then-upcoming album.
Cypress Hill's eighth studio album Rise Up featured contributions from Everlast, Tom Morello, Daron Malakian, Pitbull, Marc Anthony, and Mike Shinoda. Previously, the vast majority of the group's albums were produced by DJ Muggs; however, Rise Up instead featured a large array of guest artists and producers, with DJ Muggs appearing on only two tracks. The album was released on Priority Records/EMI Entertainment, as the group was signed to the label by new creative chairman Snoop Dogg. Rise Up was released on April 20, 2010, and peaked at No. 19 on the Billboard 200. The single "Rise Up" was featured at WWE's pay-per-view Elimination Chamber as the official theme song for the event. It also appeared in the trailer for the movie The Green Hornet. "Rise Up" managed to peak at No. 20 on both the Modern Rock Tracks and Mainstream Rock Tracks charts. "Armada Latina", which featured Pitbull and Marc Anthony, was Cypress Hill's last song to chart in the U.S. to date, peaking at No. 25 on the Hot Rap Tracks chart.
Cypress Hill commenced its Rise Up tour in Philadelphia on April 10, 2010. In one particular instance, the group was supposed to stop in Tucson, Arizona, but canceled the show in protest of the recent immigration legislation. At the Rock en Seine festival in Paris on August 27, 2010, they said in an interview that they would await the outcome of the legislation before returning. Also in 2010, Cypress Hill performed at the Reading and Leeds Festivals, on August 28 at Leeds and August 29 at Reading. On June 5, 2012, Cypress Hill and dubstep artist Rusko released a collaborative EP entitled Cypress X Rusko. DJ Muggs, who was still on a hiatus, and Eric Bobo were absent on the release. Also in 2012, Cypress Hill collaborated with Deadmau5 on his sixth studio album, Album Title Goes Here, lending vocals on "Failbait".
During the interval between Cypress Hill albums, the four members commenced work on various projects. B-Real formed the band Prophets of Rage alongside three members of Rage Against the Machine and two members of Public Enemy. He also released The Prescription EP under his Dr. Greenthumb persona. Sen Dog formed the band Powerflo alongside members of Fear Factory, downset., and Biohazard. DJ Muggs revived his Soul Assassins project as its main producer. Eric Bobo formed a duo named Ritmo Machine. He also contributed to an unreleased album by his father Willie Bobo.
On September 28, 2018, Cypress Hill released the album Elephants on Acid, which saw the return of DJ Muggs as main composer and producer. It peaked at No. 120 on the Billboard 200 and at No. 6 on the Top Independent Albums chart. Overall, four different singles were released to promote the album. In April 2019, Cypress Hill received a star on the Hollywood Walk of Fame. Although various solo hip hop artists had received stars, Cypress Hill became the first collective hip hop group to receive one. The entire lineup of B-Real, Sen Dog, Eric Bobo, and DJ Muggs attended the ceremony.
In January 2022, the group announced their 10th studio album entitled Back in Black. In addition, Cypress Hill planned to support the album by joining Slipknot alongside Ho99o9 for the second half of the 2022 Knotfest Roadshow. They had previously invited Slipknot to join their Great Smoke-Out festival back in 2009. Back in Black was released on March 18, 2022. It was the group's first album to not feature DJ Muggs on any of the tracks, as producing duties were handled by Black Milk. Back in Black was the lowest charting album of the group's career, and the first to not reach the Billboard 200 chart; however, it peaked at No. 69 on the Top Current Album Sales chart.
A documentary about the group, entitled Cypress Hill: Insane in the Brain, was released on the Showtime service in April 2022. Estevan Oriol, Cypress Hill's former tour manager and close associate, directed the film. It mainly chronicled the group's formation and their first decade of existence. In connection with the documentary, Cypress Hill digitally released the single "Crossroads" in September 2022. The single featured the return of DJ Muggs on production.
In an interview, Sen Dog claimed that the group would fully reunite with DJ Muggs for an 11th album; however, he stated that it would be the final album of their career.
One of the band's most striking aspects is B-Real's exaggeratedly high-pitched nasal vocals. In the book Check the Technique, B-Real described his nasal style, saying his rapping voice is "high and annoying...the nasal style I have was just something that I developed...my more natural style wasn't so pleasing to DJ Muggs and Sen Dog's ears" and talking about the nasal style in the book How to Rap, B-Real said "you want to stand out from the others and just be distinct...when you got something that can separate you from everybody else, you gotta use it to your advantage." In the film Art of Rap, B-Real credited the Beastie Boys as an influence when developing his rapping style. Sen Dog's voice is deeper, more violent, and often shouted alongside the rapping; his vocals are often emphasized by adding another background/choir voice to say them. Sen Dog's style is in contrast to B-Real's, who said "Sen's voice is so strong" and "it all blends together" when they are both on the same track.
Both B-Real and Sen Dog started writing lyrics in both Spanish and English. Initially, B-Real was inspired to start writing raps by watching Sen Dog and Mellow Man Ace write their lyrics, and originally B-Real was going to be just the writer for the group rather than a rapper. Peter Shapiro and Allmusic have noted their lyrics for bringing a "cartoonish" approach to violence.
The sound and groove of their music, mostly produced by DJ Muggs, features spooky sounds and a stoned aesthetic; with its bass-heavy rhythms and odd sample loops ("Insane in the Brain" has a pitched blues guitar loop in its chorus), it carries a psychedelic quality, which is lessened in their rock-oriented albums. The double album Skull & Bones consists of a pure rap disc (Skull) and a separate rock disc (Bones). In the live album Live at the Fillmore, some of the old classics were played in a rock/metal version, with Eric Bobo playing the drums and Sen Dog's band SX-10 as the other instrumentalists. 2010's Rise Up was the most radically different album in regard to production. DJ Muggs had produced the majority of each prior Cypress Hill album, but he only appeared on Rise Up twice. The remaining songs were handled by various other guests. 2018's Elephants on Acid marked the return of DJ Muggs, and the album featured a more psychedelic and hip-hop approach.
Cypress Hill are often credited with being one of the few Latin American hip hop groups to break through with their own stylistic impact on rap music. Cypress Hill have been cited as an influence by artists such as Eminem, Baby Bash, Paul Wall, Post Malone, Luniz, and Fat Joe. Cypress Hill have also been cited as a strong influence on nu metal bands such as Deftones, Limp Bizkit, System of a Down, Linkin Park, and Korn. Famously, the bassline during the outro of Korn's 1994 single "Blind" was a direct tribute to Cypress Hill's 1993 track "Lick a Shot".
Billboard Music Awards
Grammy Awards
MTV Video Music Awards
Hollywood Walk of Fame
5,638 | Combustion | Combustion, or burning, is a high-temperature exothermic redox chemical reaction between a fuel (the reductant) and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed smoke. Combustion does not always result in fire, because a flame is only visible when substances undergoing combustion vaporize, but when it does, a flame is a characteristic indicator of the reaction. While activation energy must be supplied to initiate combustion (e.g., using a lit match to light a fire), the heat from a flame may provide enough energy to make the reaction self-sustaining.
Combustion is often a complicated sequence of elementary radical reactions. Solid fuels, such as wood and coal, first undergo endothermic pyrolysis to produce gaseous fuels whose combustion then supplies the heat required to produce more of them. Combustion is often hot enough that incandescent light in the form of either glowing or a flame is produced. A simple example can be seen in the combustion of hydrogen and oxygen into water vapor, a reaction which is commonly used to fuel rocket engines. This reaction releases 242 kJ/mol of heat and reduces the enthalpy accordingly (at constant temperature and pressure):

2 H2(g) + O2(g) → 2 H2O(g)
Uncatalyzed combustion in air requires relatively high temperatures. Complete combustion is stoichiometric with respect to the fuel: there is no remaining fuel and, ideally, no residual oxidant. Thermodynamically, the chemical equilibrium of combustion in air is overwhelmingly on the side of the products. However, complete combustion is almost impossible to achieve, since chemical equilibrium is not necessarily reached and the products may contain unburnt species such as carbon monoxide, hydrogen and even carbon (soot or ash). Thus, the produced smoke is usually toxic and contains unburned or partially oxidized products. Any combustion at high temperatures in atmospheric air, which is 78 percent nitrogen, will also create small amounts of several nitrogen oxides, commonly referred to as NOx, since the combustion of nitrogen is thermodynamically favored at high, but not low, temperatures. Since burning is rarely clean, flue gas cleaning or catalytic converters may be required by law.
Fires occur naturally, ignited by lightning strikes or by volcanic products. Combustion (fire) was the first controlled chemical reaction discovered by humans, in the form of campfires and bonfires, and continues to be the main method to produce energy for humanity. Usually, the fuel is carbon, hydrocarbons, or more complicated mixtures such as wood that contain partially oxidized hydrocarbons. The thermal energy produced from the combustion of either fossil fuels such as coal or oil, or from renewable fuels such as firewood, is harvested for diverse uses such as cooking, production of electricity or industrial or domestic heating. Combustion is also currently the only reaction used to power rockets. Combustion is also used to destroy (incinerate) waste, both nonhazardous and hazardous.
Oxidants for combustion have high oxidation potential and include atmospheric or pure oxygen, chlorine, fluorine, chlorine trifluoride, nitrous oxide and nitric acid. For instance, hydrogen burns in chlorine to form hydrogen chloride with the liberation of heat and light characteristic of combustion. Although usually not catalyzed, combustion can be catalyzed by platinum or vanadium, as in the contact process.
In complete combustion, the reactant burns in oxygen and produces a limited number of products. When a hydrocarbon burns in oxygen, the reaction will primarily yield carbon dioxide and water. When elements are burned, the products are primarily the most common oxides. Carbon will yield carbon dioxide, sulfur will yield sulfur dioxide, and iron will yield iron(III) oxide. Nitrogen is not considered to be a combustible substance when oxygen is the oxidant. Still, small amounts of various nitrogen oxides (commonly designated NOx species) form when air is the oxidant.
Combustion is not necessarily favorable to the maximum degree of oxidation, and it can be temperature-dependent. For example, sulfur trioxide is not produced quantitatively by the combustion of sulfur. NOx species appear in significant amounts above about 2,800 °F (1,540 °C), and more is produced at higher temperatures. The amount of NOx is also a function of oxygen excess.
In most industrial applications and in fires, air is the source of oxygen (O2). In air, each mole of oxygen is mixed with approximately 3.71 mol of nitrogen. Nitrogen does not take part in combustion, but at high temperatures, some nitrogen will be converted to NOx (mostly NO, with much smaller amounts of NO2). On the other hand, when there is insufficient oxygen to combust the fuel completely, some fuel carbon is converted to carbon monoxide, and some of the hydrogen remains unreacted. A complete set of equations for the combustion of a hydrocarbon in air therefore requires an additional calculation for the distribution of oxygen between the carbon and hydrogen in the fuel.
The amount of air required for complete combustion is known as the "theoretical air" or "stoichiometric air". The amount of air above this value actually needed for optimal combustion is known as the "excess air", and can vary from 5% for a natural gas boiler, to 40% for anthracite coal, to 300% for a gas turbine.
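To make the theoretical-air bookkeeping concrete, here is a minimal sketch in Python. Function names are illustrative, and the 3.77 mol of "nitrogen" per mole of O2 follows the convention used in the stoichiometric equations later in this article.

```python
# Minimal sketch: theoretical ("stoichiometric") air and excess air for a
# generic hydrocarbon CxHy, counting 3.77 mol of 'nitrogen' per mol of O2.

def theoretical_air_moles(x: int, y: int) -> float:
    """Moles of air needed per mole of CxHy for complete combustion."""
    z = x + y / 4.0           # moles of O2 required per mole of fuel
    return z * (1 + 3.77)     # O2 plus the accompanying 'nitrogen'

def total_air_moles(x: int, y: int, excess_fraction: float) -> float:
    """Theoretical air scaled up by the excess-air fraction (0.05 = 5%)."""
    return theoretical_air_moles(x, y) * (1 + excess_fraction)

# Propane (C3H8) fired with 5% excess air, boiler-style:
print(theoretical_air_moles(3, 8))       # ~23.85 mol air per mol fuel
print(total_air_moles(3, 8, 0.05))       # ~25.04 mol air per mol fuel
```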
Incomplete combustion will occur when there is not enough oxygen to allow the fuel to react completely to produce carbon dioxide and water. It also happens when the combustion is quenched by a heat sink, such as a solid surface or flame trap. As is the case with complete combustion, water is produced by incomplete combustion; however, carbon and carbon monoxide are produced instead of carbon dioxide.
For most fuels, such as diesel oil, coal, or wood, pyrolysis occurs before combustion. In incomplete combustion, products of pyrolysis remain unburnt and contaminate the smoke with noxious particulate matter and gases. Partially oxidized compounds are also a concern; partial oxidation of ethanol can produce harmful acetaldehyde, and carbon can produce toxic carbon monoxide.
The design of combustion devices, such as burners and internal combustion engines, can improve the quality of combustion. Further improvements are achievable by catalytic after-burning devices (such as catalytic converters) or by the simple partial return of the exhaust gases into the combustion process. Such devices are required by environmental legislation for cars in most countries. They may be necessary to enable large combustion devices, such as thermal power stations, to reach legal emission standards.
The degree of combustion can be measured and analyzed with test equipment. HVAC contractors, firefighters and engineers use combustion analyzers to test the efficiency of a burner during the combustion process. Also, the efficiency of an internal combustion engine can be measured in this way, and some U.S. states and local municipalities use combustion analysis to define and rate the efficiency of vehicles on the road today.
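As a rough illustration of what such an analyzer reports, the sketch below estimates the flue-gas (stack) loss with the empirical Siegert formula. The constant k is fuel-dependent; the value 0.66 used here is only an assumed, illustrative figure.

```python
# Sketch: flue-gas sensible-heat loss via the empirical Siegert formula,
# loss% = k * (T_flue - T_air) / CO2%, with a fuel-dependent constant k.
# The k value below is assumed for illustration only.

def stack_loss_percent(t_flue_c: float, t_air_c: float, co2_percent: float,
                       k: float = 0.66) -> float:
    """Sensible-heat loss in the flue gas, as a percent of fuel input."""
    return k * (t_flue_c - t_air_c) / co2_percent

loss = stack_loss_percent(t_flue_c=180.0, t_air_c=20.0, co2_percent=10.0)
print(f"stack loss ~{loss:.1f}% -> combustion efficiency ~{100 - loss:.1f}%")
```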
Carbon monoxide is one of the products from incomplete combustion. The formation of carbon monoxide produces less heat than the formation of carbon dioxide, so complete combustion is greatly preferred, especially as carbon monoxide is a poisonous gas. When breathed, carbon monoxide takes the place of oxygen and combines with some of the hemoglobin in the blood, rendering it unable to transport oxygen.
Nitrogen and sulfur oxides combine with water and oxygen in the atmosphere, creating nitric and sulfuric acids, which return to Earth's surface as acid deposition, or "acid rain." Acid deposition harms aquatic organisms and kills trees. By leaching nutrients such as calcium and phosphorus from soils and making them less available to plants, it reduces the productivity of ecosystems and farms. An additional problem associated with nitrogen oxides is that they, along with hydrocarbon pollutants, contribute to the formation of ground-level ozone, a major component of smog.
Breathing carbon monoxide causes headache, dizziness, vomiting, and nausea. If carbon monoxide levels are high enough, humans become unconscious or die. Exposure to moderate and high levels of carbon monoxide over long periods is positively correlated with the risk of heart disease. People who survive severe carbon monoxide poisoning may suffer long-term health problems. Carbon monoxide from the air is absorbed in the lungs and binds with hemoglobin in red blood cells, reducing their capacity to carry oxygen throughout the body.
Smoldering is the slow, low-temperature, flameless form of combustion, sustained by the heat evolved when oxygen directly attacks the surface of a condensed-phase fuel. It is a typically incomplete combustion reaction. Solid materials that can sustain a smoldering reaction include coal, cellulose, wood, cotton, tobacco, peat, duff, humus, synthetic foams, charring polymers (including polyurethane foam) and dust. Common examples of smoldering phenomena are the initiation of residential fires on upholstered furniture by weak heat sources (e.g., a cigarette, a short-circuited wire) and the persistent combustion of biomass behind the flaming fronts of wildfires.
Spontaneous combustion is a type of combustion that occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self-heating which rapidly accelerates to high temperatures) and finally, ignition. For example, phosphorus self-ignites at room temperature without the application of heat. Organic materials undergoing bacterial composting can generate enough heat to reach the point of combustion.
Combustion resulting in a turbulent flame is the type most used for industrial applications (e.g. gas turbines, gasoline engines, etc.) because the turbulence helps the mixing process between the fuel and oxidizer.
The term 'micro' gravity refers to a gravitational state that is 'low' (i.e., 'micro' in the sense of 'small' and not necessarily a millionth of Earth's normal gravity) such that the influence of buoyancy on physical processes may be considered small relative to other flow processes that would be present at normal gravity. In such an environment, the thermal and flow transport dynamics can behave quite differently than in normal gravity conditions (e.g., a candle's flame takes the shape of a sphere). Microgravity combustion research contributes to the understanding of a wide variety of aspects that are relevant to both the environment of a spacecraft (e.g., fire dynamics relevant to crew safety on the International Space Station) and terrestrial (Earth-based) conditions (e.g., droplet combustion dynamics to assist developing new fuel blends for improved combustion, materials fabrication processes, thermal management of electronic systems, multiphase flow boiling dynamics, and many others).
Combustion processes that happen in very small volumes are considered micro-combustion. The high surface-to-volume ratio increases specific heat loss. Quenching distance plays a vital role in stabilizing the flame in such combustion chambers.
Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is:

CxHy + z O2 → x CO2 + (y/2) H2O

where z = x + y/4.
For example, the stoichiometric burning of propane in oxygen is:

C3H8 + 5 O2 → 3 CO2 + 4 H2O
If the stoichiometric combustion takes place using air as the oxygen source, the nitrogen present in the air (Atmosphere of Earth) can be added to the equation (although it does not react) to show the stoichiometric composition of the fuel in air and the composition of the resultant flue gas. Treating all non-oxygen components in air as nitrogen gives a 'nitrogen' to oxygen ratio of 3.77, i.e. (100% - O2%) / O2% where O2% is 20.95% vol:

CxHy + z O2 + 3.77z N2 → x CO2 + (y/2) H2O + 3.77z N2

where z = x + y/4.
For example, the stoichiometric combustion of propane (C3H8) in air is:

C3H8 + 5 O2 + 18.87 N2 → 3 CO2 + 4 H2O + 18.87 N2
The stoichiometric composition of propane in air is 1 / (1 + 5 + 18.87) = 4.02% vol.
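The arithmetic behind this composition figure generalizes to any CxHy. A small sketch, assuming the same 3.77 nitrogen-to-oxygen ratio as the text:

```python
# Sketch: stoichiometric fuel fraction (% vol) of a hydrocarbon CxHy in air,
# reproducing the propane figure above.

def stoich_fuel_vol_percent(x: int, y: int, n2_per_o2: float = 3.77) -> float:
    z = x + y / 4.0                   # mol O2 per mol fuel
    total = 1 + z + n2_per_o2 * z     # fuel + O2 + 'nitrogen'
    return 100.0 / total

print(round(stoich_fuel_vol_percent(3, 8), 2))   # ~4.02 (propane)
print(round(stoich_fuel_vol_percent(1, 4), 2))   # ~9.49 (methane)
```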
The stoichiometric combustion reaction for CαHβOγ in air:

CαHβOγ + (α + β/4 − γ/2) (O2 + 3.77 N2) → α CO2 + (β/2) H2O + 3.77 (α + β/4 − γ/2) N2
The stoichiometric combustion reaction for CαHβOγSδ:

CαHβOγSδ + (α + β/4 − γ/2 + δ) O2 → α CO2 + (β/2) H2O + δ SO2
The stoichiometric combustion reaction for CαHβOγNδSε:

CαHβOγNδSε + (α + β/4 − γ/2 + ε) O2 → α CO2 + (β/2) H2O + (δ/2) N2 + ε SO2
The stoichiometric combustion reaction for CαHβOγFδ:

CαHβOγFδ + (α + (β − δ)/4 − γ/2) O2 → α CO2 + ((β − δ)/2) H2O + δ HF
Various other substances begin to appear in significant amounts in combustion products when the flame temperature is above about 1600 K. When excess air is used, nitrogen may oxidize to NO and, to a much lesser extent, to NO2. CO forms by dissociation of CO2, and H2 and OH form by dissociation of H2O.
For example, when 1 mol of propane is burned with 28.6 mol of air (120% of the stoichiometric amount), the combustion products contain 3.3% O2. At 1400 K, the equilibrium combustion products contain 0.03% NO and 0.002% OH. At 1800 K, the combustion products contain 0.17% NO, 0.05% OH, 0.01% CO, and 0.004% H2.
Diesel engines are run with an excess of oxygen to combust small particles that tend to form with only a stoichiometric amount of oxygen, necessarily producing nitrogen oxide emissions. Both the United States and European Union enforce limits to vehicle nitrogen oxide emissions, which necessitate the use of special catalytic converters or treatment of the exhaust with urea (see Diesel exhaust fluid).
The incomplete (partial) combustion of a hydrocarbon with oxygen produces a gas mixture containing mainly CO2, CO, H2O, and H2. Such gas mixtures are commonly prepared for use as protective atmospheres for the heat-treatment of metals and for gas carburizing. The general reaction equation for incomplete combustion of one mole of a hydrocarbon in oxygen is:

CxHy + z O2 → a CO2 + b CO + c H2O + d H2 (with z below the stoichiometric value x + y/4)
When z falls below roughly 50% of the stoichiometric value, CH4 can become an important combustion product; when z falls below roughly 35% of the stoichiometric value, elemental carbon may become stable.
The products of incomplete combustion can be calculated with the aid of a material balance, together with the assumption that the combustion products reach equilibrium. For example, in the combustion of one mole of propane (C3H8) with four moles of O2, seven moles of combustion gas are formed, and z is 80% of the stoichiometric value. The three elemental balance equations are:

Carbon: nCO2 + nCO = 3
Hydrogen: 2 nH2O + 2 nH2 = 8
Oxygen: 2 nCO2 + nCO + nH2O = 8
These three equations are insufficient in themselves to calculate the combustion gas composition. However, at the equilibrium position, the water-gas shift reaction gives another equation:

CO + H2O ⇌ CO2 + H2, with Keq = (nCO2 × nH2) / (nCO × nH2O)
For example, at 1200 K the value of Keq is 0.728. Solving, the combustion gas consists of 42.4% H2O, 29.0% CO2, 14.7% H2, and 13.9% CO. Carbon becomes a stable phase at 1200 K and 1 atm pressure when z is less than 30% of the stoichiometric value, at which point the combustion products contain more than 98% H2 and CO and about 0.5% CH4.
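This worked example can be reproduced numerically. Writing a for the moles of CO2, the element balances above reduce the system to a single quadratic in a; the sketch below (variable names are illustrative) recovers the quoted composition to within rounding.

```python
import math

# Sketch: 1 mol C3H8 burned with 4 mol O2, products CO2/CO/H2O/H2 at 1200 K,
# water-gas-shift Keq = 0.728. With a = mol CO2, the element balances give
# CO = 3 - a, H2O = 5 - a, H2 = a - 1, so Keq = (CO2*H2)/(CO*H2O) becomes
# (1 - Keq)*a^2 + (8*Keq - 1)*a - 15*Keq = 0.

KEQ = 0.728
A = 1 - KEQ
B = 8 * KEQ - 1
C = -15 * KEQ
a = (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)

moles = {"H2O": 5 - a, "CO2": a, "H2": a - 1, "CO": 3 - a}
total = sum(moles.values())            # 7 mol of combustion gas in all
for gas, n in moles.items():
    # ~42.4, ~29.0, ~14.7, ~13.8 percent (the text's 13.9 is rounding)
    print(f"{gas}: {100 * n / total:.1f}%")
```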
Substances or materials which undergo combustion are called fuels. The most common examples are natural gas, propane, kerosene, diesel, petrol, charcoal, coal, wood, etc.
Combustion of a liquid fuel in an oxidizing atmosphere actually happens in the gas phase. It is the vapor that burns, not the liquid. Therefore, a liquid will normally catch fire only above a certain temperature: its flash point. The flash point of liquid fuel is the lowest temperature at which it can form an ignitable mix with air. It is the minimum temperature at which there is enough evaporated fuel in the air to start combustion.
Combustion of gaseous fuels may occur through one of four distinctive types of burning: diffusion flame, premixed flame, autoignitive reaction front, or as a detonation. The type of burning that actually occurs depends on the degree to which the fuel and oxidizer are mixed prior to heating: for example, a diffusion flame is formed if the fuel and oxidizer are separated initially, whereas a premixed flame is formed otherwise. Similarly, the type of burning also depends on the pressure: a detonation, for example, is an autoignitive reaction front coupled to a strong shock wave giving it its characteristic high-pressure peak and high detonation velocity.
The act of combustion consists of three relatively distinct but overlapping phases: a preheating phase, in which the unburned fuel is heated up to its flash point and then its fire point while flammable gases start to evolve; a distillation or gaseous phase, in which the evolved flammable gases burn with a flame; and a charcoal or solid phase, in which the output of flammable gases is too low to sustain a flame and the charred fuel glows and smolders.
Efficient process heating requires recovery of the largest possible part of a fuel's heat of combustion into the material being processed. There are many avenues of loss in the operation of a heating process. Typically, the dominant loss is sensible heat leaving with the offgas (i.e., the flue gas). The temperature and quantity of offgas indicate its heat content (enthalpy), so keeping its quantity low minimizes heat loss.
In a perfect furnace, the combustion air flow would be matched to the fuel flow to give each fuel molecule the exact amount of oxygen needed to cause complete combustion. However, in the real world, combustion does not proceed in a perfect manner. Unburned fuel (usually CO and H2) discharged from the system represents a heating value loss (as well as a safety hazard). Since combustibles are undesirable in the offgas, while the presence of unreacted oxygen there presents minimal safety and environmental concerns, the first principle of combustion management is to provide more oxygen than is theoretically needed to ensure that all the fuel burns. For methane (CH4) combustion, for example, slightly more than two molecules of oxygen are required.
The second principle of combustion management, however, is to not use too much oxygen. The correct amount of oxygen requires three types of measurement: first, active control of air and fuel flow; second, offgas oxygen measurement; and third, measurement of offgas combustibles. For each heating process, there exists an optimum condition of minimal offgas heat loss with acceptable levels of combustibles concentration. Minimizing excess oxygen pays an additional benefit: for a given offgas temperature, the NOx level is lowest when excess oxygen is kept lowest.
Adherence to these two principles is furthered by making material and heat balances on the combustion process. The material balance directly relates the air/fuel ratio to the percentage of O2 in the combustion gas. The heat balance relates the heat available for the charge to the overall net heat produced by fuel combustion. Additional material and heat balances can be made to quantify the thermal advantage from preheating the combustion air, or enriching it in oxygen.
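As a sketch of the material-balance side, the following computes the flue-gas O2 percentage implied by a given excess-air fraction under complete combustion of CxHy. It is illustrative only, using the same 3.77 nitrogen convention as earlier.

```python
# Sketch: material balance relating excess air to flue-gas O2 percentage
# for complete combustion of a hydrocarbon CxHy.

def flue_gas_o2_percent(x: int, y: int, excess: float, dry: bool = True) -> float:
    z = x + y / 4.0                       # stoichiometric O2, mol per mol fuel
    co2, h2o = x, y / 2.0
    o2 = z * excess                       # unreacted (excess) oxygen
    n2 = 3.77 * z * (1 + excess)          # 'nitrogen' carried in with the air
    total = co2 + o2 + n2 + (0 if dry else h2o)
    return 100.0 * o2 / total

# Methane with 10% excess air -> roughly 2% O2 in the dry flue gas:
print(round(flue_gas_o2_percent(1, 4, 0.10), 2))
```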
Combustion in oxygen is a chain reaction in which many distinct radical intermediates participate. The high energy required for initiation is explained by the unusual structure of the dioxygen molecule. The lowest-energy configuration of the dioxygen molecule is a stable, relatively unreactive diradical in a triplet spin state. Bonding can be described with three bonding electron pairs and two antibonding electrons, with spins aligned, such that the molecule has nonzero total angular momentum. Most fuels, on the other hand, are in a singlet state, with paired spins and zero total angular momentum. Interaction between the two is quantum mechanically a "forbidden transition", i.e. possible with a very low probability. To initiate combustion, energy is required to force dioxygen into a spin-paired state, or singlet oxygen. This intermediate is extremely reactive. The energy is supplied as heat, and the reaction then produces additional heat, which allows it to continue.
Combustion of hydrocarbons is thought to be initiated by hydrogen atom abstraction (not proton abstraction) from the fuel to oxygen, to give a hydroperoxide radical (HOO). This reacts further to give hydroperoxides, which break up to give hydroxyl radicals. There are a great variety of these processes that produce fuel radicals and oxidizing radicals. Oxidizing species include singlet oxygen, hydroxyl, monatomic oxygen, and hydroperoxyl. Such intermediates are short-lived and cannot be isolated. However, non-radical intermediates are stable and are produced in incomplete combustion. An example is acetaldehyde produced in the combustion of ethanol. An intermediate in the combustion of carbon and hydrocarbons, carbon monoxide, is of special importance because it is a poisonous gas, but also economically useful for the production of syngas.
Solid and heavy liquid fuels also undergo a great number of pyrolysis reactions that give more easily oxidized, gaseous fuels. These reactions are endothermic and require constant energy input from the ongoing combustion reactions. A lack of oxygen or other improperly designed conditions result in these noxious and carcinogenic pyrolysis products being emitted as thick, black smoke.
The rate of combustion is the amount of a material that undergoes combustion over a period of time. It can be expressed in grams per second (g/s) or kilograms per second (kg/s).
Detailed descriptions of combustion processes, from the chemical kinetics perspective, require the formulation of large and intricate webs of elementary reactions. For instance, the combustion of hydrocarbon fuels typically involves hundreds of chemical species reacting according to thousands of reactions.
The inclusion of such mechanisms within computational flow solvers still represents a challenging task, mainly in two respects. First, the number of degrees of freedom (proportional to the number of chemical species) can be very large; second, the source term due to reactions introduces a wide range of time scales which makes the whole dynamical system stiff. As a result, the direct numerical simulation of turbulent reactive flows with heavy fuels soon becomes intractable even for modern supercomputers.
Therefore, a plethora of methodologies have been devised for reducing the complexity of combustion mechanisms without resorting to high detail levels. Examples are provided by:
Kinetic modelling may be explored for insight into the reaction mechanisms of thermal decomposition in the combustion of different materials, using, for instance, thermogravimetric analysis.
Assuming perfect combustion conditions, such as complete combustion under adiabatic conditions (i.e., no heat loss or gain), the adiabatic combustion temperature can be determined. The formula that yields this temperature is based on the first law of thermodynamics and takes note of the fact that the heat of combustion is used entirely for heating the fuel, the combustion air or oxygen, and the combustion product gases (commonly referred to as the flue gas).
In the case of fossil fuels burnt in air, the combustion temperature depends on all of the following: the heating value of the fuel, the stoichiometric air ratio λ, and the temperatures of the inlet combustion air and the fuel.
The adiabatic combustion temperature (also known as the adiabatic flame temperature) increases for higher heating values and inlet air and fuel temperatures and for stoichiometric air ratios approaching one.
Most commonly, the adiabatic combustion temperatures for coals are around 2,200 °C (3,992 °F) (for inlet air and fuel at ambient temperatures and for λ = 1.0), around 2,150 °C (3,902 °F) for oil and 2,000 °C (3,632 °F) for natural gas.
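These figures can be approximated with a crude constant-heat-capacity energy balance. The sketch below does this for methane; the averaged product heat capacity of 38 J/(mol·K) is an assumed round number, and the estimate ignores dissociation, so it only roughly matches the quoted values.

```python
# Crude constant-cp estimate of the adiabatic flame temperature, assuming
# complete combustion, no dissociation, and one averaged molar heat capacity
# for the product gas. All numbers are illustrative.

LHV_CH4 = 802_000        # J per mol CH4 (lower heating value)
CP_AVG = 38.0            # J/(mol*K), assumed mean cp of the hot products
T_IN = 298.0             # K, inlet air and fuel temperature

# Stoichiometric products per mol CH4 in air: 1 CO2 + 2 H2O + 7.54 N2
n_products = 1 + 2 + 2 * 3.77

t_ad = T_IN + LHV_CH4 / (n_products * CP_AVG)
print(f"{t_ad:.0f} K ~= {t_ad - 273.15:.0f} C")
# ~2300 K (~2030 C), near the ~2000 C quoted for natural gas; real values
# are lower once dissociation and temperature-dependent cp are included.
```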
In industrial fired heaters, power station steam generators, and large gas-fired turbines, the more common way of expressing the usage of more than the stoichiometric combustion air is percent excess combustion air. For example, excess combustion air of 15 percent means that 15 percent more than the required stoichiometric air is being used.
Combustion instabilities are typically violent pressure oscillations in a combustion chamber. These pressure oscillations can be as high as 180 dB, and long-term exposure to these cyclic pressure and thermal loads reduces the life of engine components. In rockets, such as the F-1 engine used in the Saturn V program, instabilities led to massive damage to the combustion chamber and surrounding components. This problem was solved by re-designing the fuel injector. In liquid jet engines, the droplet size and distribution can be used to attenuate the instabilities. Combustion instabilities are a major concern in ground-based gas turbine engines because of NOx emissions. The tendency is to run lean, an equivalence ratio less than 1, to reduce the combustion temperature and thus reduce the NOx emissions; however, running the combustion lean makes it very susceptible to combustion instability.
The Rayleigh Criterion is the basis for analysis of thermoacoustic combustion instability and is evaluated using the Rayleigh Index over one cycle of instability:

G(x) = (1/T) ∫ q'(x,t) p'(x,t) dt, integrated over one period T
where q' is the heat release rate perturbation and p' is the pressure fluctuation. When the heat release oscillations are in phase with the pressure oscillations, the Rayleigh Index is positive and the magnitude of the thermoacoustic instability is maximised. On the other hand, if the Rayleigh Index is negative, then thermoacoustic damping occurs. The Rayleigh Criterion implies that thermoacoustic instability can be optimally controlled by having heat release oscillations 180 degrees out of phase with pressure oscillations at the same frequency. This minimizes the Rayleigh Index.
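A numerical sketch of the Rayleigh Index for sampled fluctuation signals follows; the signals are synthetic and purely illustrative, with the phase angle between q' and p' controlling the sign of the index.

```python
import numpy as np

# Sketch: Rayleigh index G = (1/T) * integral of q'(t) p'(t) dt over one
# cycle, evaluated from sampled signals by trapezoidal integration.

def rayleigh_index(q_prime: np.ndarray, p_prime: np.ndarray, dt: float) -> float:
    period = dt * len(q_prime)
    return np.trapz(q_prime * p_prime, dx=dt) / period

f = 200.0                                   # Hz, instability frequency
t = np.arange(0, 1 / f, 1e-6)               # one cycle, 1 us sampling
p = np.sin(2 * np.pi * f * t)               # pressure fluctuation p'
q_in_phase = np.sin(2 * np.pi * f * t)          # in phase with p'
q_anti = np.sin(2 * np.pi * f * t + np.pi)      # 180 degrees out of phase

print(rayleigh_index(q_in_phase, p, 1e-6))  # positive -> instability grows
print(rayleigh_index(q_anti, p, 1e-6))      # negative -> thermoacoustic damping
```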
{
"paragraph_id": 0,
"text": "Combustion, or burning, is a high-temperature exothermic redox chemical reaction between a fuel (the reductant) and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed as smoke. Combustion does not always result in fire, because a flame is only visible when substances undergoing combustion vaporize, but when it does, a flame is a characteristic indicator of the reaction. While activation energy must be supplied to initiate combustion (e.g., using a lit match to light a fire), the heat from a flame may provide enough energy to make the reaction self-sustaining.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Combustion is often a complicated sequence of elementary radical reactions. Solid fuels, such as wood and coal, first undergo endothermic pyrolysis to produce gaseous fuels whose combustion then supplies the heat required to produce more of them. Combustion is often hot enough that incandescent light in the form of either glowing or a flame is produced. A simple example can be seen in the combustion of hydrogen and oxygen into water vapor, a reaction which is commonly used to fuel rocket engines. This reaction releases 242 kJ/mol of heat and reduces the enthalpy accordingly (at constant temperature and pressure):",
"title": ""
},
{
"paragraph_id": 2,
"text": "Uncatalyzed combustion in air requires relatively high temperatures. Complete combustion is stoichiometric concerning the fuel, where there is no remaining fuel, and ideally, no residual oxidant. Thermodynamically, the chemical equilibrium of combustion in air is overwhelmingly on the side of the products. However, complete combustion is almost impossible to achieve, since the chemical equilibrium is not necessarily reached, or may contain unburnt products such as carbon monoxide, hydrogen and even carbon (soot or ash). Thus, the produced smoke is usually toxic and contains unburned or partially oxidized products. Any combustion at high temperatures in atmospheric air, which is 78 percent nitrogen, will also create small amounts of several nitrogen oxides, commonly referred to as NOx, since the combustion of nitrogen is thermodynamically favored at high, but not low temperatures. Since burning is rarely clean, fuel gas cleaning or catalytic converters may be required by law.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Fires occur naturally, ignited by lightning strikes or by volcanic products. Combustion (fire) was the first controlled chemical reaction discovered by humans, in the form of campfires and bonfires, and continues to be the main method to produce energy for humanity. Usually, the fuel is carbon, hydrocarbons, or more complicated mixtures such as wood that contain partially oxidized hydrocarbons. The thermal energy produced from the combustion of either fossil fuels such as coal or oil, or from renewable fuels such as firewood, is harvested for diverse uses such as cooking, production of electricity or industrial or domestic heating. Combustion is also currently the only reaction used to power rockets. Combustion is also used to destroy (incinerate) waste, both nonhazardous and hazardous.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Oxidants for combustion have high oxidation potential and include atmospheric or pure oxygen, chlorine, fluorine, chlorine trifluoride, nitrous oxide and nitric acid. For instance, hydrogen burns in chlorine to form hydrogen chloride with the liberation of heat and light characteristic of combustion. Although usually not catalyzed, combustion can be catalyzed by platinum or vanadium, as in the contact process.",
"title": ""
},
{
"paragraph_id": 5,
"text": "In complete combustion, the reactant burns in oxygen and produces a limited number of products. When a hydrocarbon burns in oxygen, the reaction will primarily yield carbon dioxide and water. When elements are burned, the products are primarily the most common oxides. Carbon will yield carbon dioxide, sulfur will yield sulfur dioxide, and iron will yield iron(III) oxide. Nitrogen is not considered to be a combustible substance when oxygen is the oxidant. Still, small amounts of various nitrogen oxides (commonly designated NOx species) form when the air is the oxidative.",
"title": "Types"
},
{
"paragraph_id": 6,
"text": "Combustion is not necessarily favorable to the maximum degree of oxidation, and it can be temperature-dependent. For example, sulfur trioxide is not produced quantitatively by the combustion of sulfur. NOx species appear in significant amounts above about 2,800 °F (1,540 °C), and more is produced at higher temperatures. The amount of NOx is also a function of oxygen excess.",
"title": "Types"
},
{
"paragraph_id": 7,
"text": "In most industrial applications and in fires, air is the source of oxygen (O2). In the air, each mole of oxygen is mixed with approximately 3.71 mol of nitrogen. Nitrogen does not take part in combustion, but at high temperatures, some nitrogen will be converted to NOx (mostly NO, with much smaller amounts of NO2). On the other hand, when there is insufficient oxygen to combust the fuel completely, some fuel carbon is converted to carbon monoxide, and some of the hydrogens remain unreacted. A complete set of equations for the combustion of a hydrocarbon in the air, therefore, requires an additional calculation for the distribution of oxygen between the carbon and hydrogen in the fuel.",
"title": "Types"
},
{
"paragraph_id": 8,
"text": "The amount of air required for complete combustion is known as the \"theoretical air\" or \"stoichiometric air\". The amount of air above this value actually needed for optimal combustion is known as the \"excess air\", and can vary from 5% for a natural gas boiler, to 40% for anthracite coal, to 300% for a gas turbine.",
"title": "Types"
},
{
"paragraph_id": 9,
"text": "Incomplete combustion will occur when there is not enough oxygen to allow the fuel to react completely to produce carbon dioxide and water. It also happens when the combustion is quenched by a heat sink, such as a solid surface or flame trap. As is the case with complete combustion, water is produced by incomplete combustion; however, carbon and carbon monoxide are produced instead of carbon dioxide.",
"title": "Types"
},
{
"paragraph_id": 10,
"text": "For most fuels, such as diesel oil, coal, or wood, pyrolysis occurs before combustion. In incomplete combustion, products of pyrolysis remain unburnt and contaminate the smoke with noxious particulate matter and gases. Partially oxidized compounds are also a concern; partial oxidation of ethanol can produce harmful acetaldehyde, and carbon can produce toxic carbon monoxide.",
"title": "Types"
},
{
"paragraph_id": 11,
"text": "The designs of combustion devices can improve the quality of combustion, such as burners and internal combustion engines. Further improvements are achievable by catalytic after-burning devices (such as catalytic converters) or by the simple partial return of the exhaust gases into the combustion process. Such devices are required by environmental legislation for cars in most countries. They may be necessary to enable large combustion devices, such as thermal power stations, to reach legal emission standards.",
"title": "Types"
},
{
"paragraph_id": 12,
"text": "The degree of combustion can be measured and analyzed with test equipment. HVAC contractors, firefighters and engineers use combustion analyzers to test the efficiency of a burner during the combustion process. Also, the efficiency of an internal combustion engine can be measured in this way, and some U.S. states and local municipalities use combustion analysis to define and rate the efficiency of vehicles on the road today.",
"title": "Types"
},
{
"paragraph_id": 13,
"text": "Carbon monoxide is one of the products from incomplete combustion. The formation of carbon monoxide produces less heat than formation of carbon dioxide so complete combustion is greatly preferred especially as carbon monoxide is a poisonous gas. When breathed, carbon monoxide takes the place of oxygen and combines with some of the hemoglobin in the blood, rendering it unable to transport oxygen.",
"title": "Types"
},
{
"paragraph_id": 14,
"text": "These oxides combine with water and oxygen in the atmosphere, creating nitric acid and sulfuric acids, which return to Earth's surface as acid deposition, or \"acid rain.\" Acid deposition harms aquatic organisms and kills trees. Due to its formation of certain nutrients that are less available to plants such as calcium and phosphorus, it reduces the productivity of the ecosystem and farms. An additional problem associated with nitrogen oxides is that they, along with hydrocarbon pollutants, contribute to the formation of ground level ozone, a major component of smog.",
"title": "Types"
},
{
"paragraph_id": 15,
"text": "Breathing carbon monoxide causes headache, dizziness, vomiting, and nausea. If carbon monoxide levels are high enough, humans become unconscious or die. Exposure to moderate and high levels of carbon monoxide over long periods is positively correlated with the risk of heart disease. People who survive severe carbon monoxide poisoning may suffer long-term health problems. Carbon monoxide from the air is absorbed in the lungs which then binds with hemoglobin in human's red blood cells. This reduces the capacity of red blood cells that carry oxygen throughout the body.",
"title": "Types"
},
{
"paragraph_id": 16,
"text": "Smoldering is the slow, low-temperature, flameless form of combustion, sustained by the heat evolved when oxygen directly attacks the surface of a condensed-phase fuel. It is a typically incomplete combustion reaction. Solid materials that can sustain a smoldering reaction include coal, cellulose, wood, cotton, tobacco, peat, duff, humus, synthetic foams, charring polymers (including polyurethane foam) and dust. Common examples of smoldering phenomena are the initiation of residential fires on upholstered furniture by weak heat sources (e.g., a cigarette, a short-circuited wire) and the persistent combustion of biomass behind the flaming fronts of wildfires.",
"title": "Types"
},
{
"paragraph_id": 17,
"text": "Spontaneous combustion is a type of combustion that occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self-heating which rapidly accelerates to high temperatures) and finally, ignition. For example, phosphorus self-ignites at room temperature without the application of heat. Organic materials undergoing bacterial composting can generate enough heat to reach the point of combustion.",
"title": "Types"
},
{
"paragraph_id": 18,
"text": "Combustion resulting in a turbulent flame is the most used for industrial applications (e.g. gas turbines, gasoline engines, etc.) because the turbulence helps the mixing process between the fuel and oxidizer.",
"title": "Types"
},
{
"paragraph_id": 19,
"text": "The term 'micro' gravity refers to a gravitational state that is 'low' (i.e., 'micro' in the sense of 'small' and not necessarily a millionth of Earth's normal gravity) such that the influence of buoyancy on physical processes may be considered small relative to other flow processes that would be present at normal gravity. In such an environment, the thermal and flow transport dynamics can behave quite differently than in normal gravity conditions (e.g., a candle's flame takes the shape of a sphere.). Microgravity combustion research contributes to the understanding of a wide variety of aspects that are relevant to both the environment of a spacecraft (e.g., fire dynamics relevant to crew safety on the International Space Station) and terrestrial (Earth-based) conditions (e.g., droplet combustion dynamics to assist developing new fuel blends for improved combustion, materials fabrication processes, thermal management of electronic systems, multiphase flow boiling dynamics, and many others).",
"title": "Types"
},
{
"paragraph_id": 20,
"text": "Combustion processes that happen in very small volumes are considered micro-combustion. The high surface-to-volume ratio increases specific heat loss. Quenching distance plays a vital role in stabilizing the flame in such combustion chambers.",
"title": "Types"
},
{
"paragraph_id": 21,
"text": "Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is:",
"title": "Chemical equations"
},
{
"paragraph_id": 22,
"text": "where z = x + y 4 {\\displaystyle z=x+{\\frac {y}{4}}} .",
"title": "Chemical equations"
},
{
"paragraph_id": 23,
"text": "For example, the stoichiometric burning of propane in oxygen is:",
"title": "Chemical equations"
},
{
"paragraph_id": 24,
"text": "If the stoichiometric combustion takes place using air as the oxygen source, the nitrogen present in the air (Atmosphere of Earth) can be added to the equation (although it does not react) to show the stoichiometric composition of the fuel in air and the composition of the resultant flue gas. Treating all non-oxygen components in air as nitrogen gives a 'nitrogen' to oxygen ratio of 3.77, i.e. (100% - O2%) / O2% where O2% is 20.95% vol:",
"title": "Chemical equations"
},
{
"paragraph_id": 25,
"text": "where z = x + 1 4 y {\\displaystyle z=x+{\\frac {1}{4}}y} .",
"title": "Chemical equations"
},
{
"paragraph_id": 26,
"text": "For example, the stoichiometric combustion of propane ( C 3 H 8 {\\displaystyle {\\ce {C3H8}}} ) in air is:",
"title": "Chemical equations"
},
{
"paragraph_id": 27,
"text": "The stoichiometric composition of propane in air is 1 / (1 + 5 + 18.87) = 4.02% vol.",
"title": "Chemical equations"
},
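The following Python sketch is an editorial illustration, not part of the source article: it reproduces the stoichiometric bookkeeping of the preceding paragraphs, assuming the text's rounded nitrogen-to-oxygen ratio of 3.77; the function name stoichiometric_mix is invented for the example.

```python
# Hedged sketch: stoichiometric O2 demand z = x + y/4 for a hydrocarbon
# CxHy, the nitrogen carried along with the air (3.77 mol N2 per mol O2,
# as assumed in the text), and the fuel's stoichiometric volume fraction.

def stoichiometric_mix(x: int, y: int, n2_per_o2: float = 3.77):
    """Return (z, n2, fuel volume fraction) for CxHy burned in air."""
    z = x + y / 4                      # moles of O2 per mole of fuel
    n2 = n2_per_o2 * z                 # nitrogen accompanying that oxygen
    fuel_fraction = 1 / (1 + z + n2)   # 1 mol fuel out of (1 + z + n2) total
    return z, n2, fuel_fraction

z, n2, frac = stoichiometric_mix(3, 8)   # propane, C3H8
print(f"O2: {z:.2f} mol, N2: {n2:.2f} mol, fuel: {frac:.2%} vol")
# -> O2: 5.00 mol, N2: 18.85 mol, fuel: 4.02% vol, matching the text
```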
{
"paragraph_id": 28,
"text": "The stoichiometric combustion reaction for CαHβOγ in air:",
"title": "Chemical equations"
},
{
"paragraph_id": 29,
"text": "The stoichiometric combustion reaction for CαHβOγSδ:",
"title": "Chemical equations"
},
{
"paragraph_id": 30,
"text": "The stoichiometric combustion reaction for CαHβOγNδSε:",
"title": "Chemical equations"
},
{
"paragraph_id": 31,
"text": "The stoichiometric combustion reaction for CαHβOγFδ:",
"title": "Chemical equations"
},
{
"paragraph_id": 32,
"text": "Various other substances begin to appear in significant amounts in combustion products when the flame temperature is above about 1600 K. When excess air is used, nitrogen may oxidize to NO and, to a much lesser extent, to NO2. CO forms by disproportionation of CO2, and H2 and OH form by disproportionation of H2O.",
"title": "Chemical equations"
},
{
"paragraph_id": 33,
"text": "For example, when 1 mol of propane is burned with 28.6 mol of air (120% of the stoichiometric amount), the combustion products contain 3.3% O2. At 1400 K, the equilibrium combustion products contain 0.03% NO and 0.002% OH. At 1800 K, the combustion products contain 0.17% NO, 0.05% OH, 0.01% CO, and 0.004% H2.",
"title": "Chemical equations"
},
{
"paragraph_id": 34,
"text": "Diesel engines are run with an excess of oxygen to combust small particles that tend to form with only a stoichiometric amount of oxygen, necessarily producing nitrogen oxide emissions. Both the United States and European Union enforce limits to vehicle nitrogen oxide emissions, which necessitate the use of special catalytic converters or treatment of the exhaust with urea (see Diesel exhaust fluid).",
"title": "Chemical equations"
},
{
"paragraph_id": 35,
"text": "The incomplete (partial) combustion of a hydrocarbon with oxygen produces a gas mixture containing mainly CO2, CO, H2O, and H2. Such gas mixtures are commonly prepared for use as protective atmospheres for the heat-treatment of metals and for gas carburizing. The general reaction equation for incomplete combustion of one mole of a hydrocarbon in oxygen is:",
"title": "Chemical equations"
},
{
"paragraph_id": 36,
"text": "When z falls below roughly 50% of the stoichiometric value, CH4 can become an important combustion product; when z falls below roughly 35% of the stoichiometric value, elemental carbon may become stable.",
"title": "Chemical equations"
},
{
"paragraph_id": 37,
"text": "The products of incomplete combustion can be calculated with the aid of a material balance, together with the assumption that the combustion products reach equilibrium. For example, in the combustion of one mole of propane (C3H8) with four moles of O2, seven moles of combustion gas are formed, and z is 80% of the stoichiometric value. The three elemental balance equations are:",
"title": "Chemical equations"
},
{
"paragraph_id": 38,
"text": "These three equations are insufficient in themselves to calculate the combustion gas composition. However, at the equilibrium position, the water-gas shift reaction gives another equation:",
"title": "Chemical equations"
},
{
"paragraph_id": 39,
"text": "For example, at 1200 K the value of Keq is 0.728. Solving, the combustion gas consists of 42.4% H2O, 29.0% CO2, 14.7% H2, and 13.9% CO. Carbon becomes a stable phase at 1200 K and 1 atm pressure when z is less than 30% of the stoichiometric value, at which point the combustion products contain more than 98% H2 and CO and about 0.5% CH4.",
"title": "Chemical equations"
},
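As an editorial illustration (not from the article), the propane + 4 O2 example above can be solved with a few lines of Python. The element balances leave one unknown, taken here as a = mol CO2; then CO = 3 - a, H2O = 5 - a, H2 = a - 1, and the water-gas shift equilibrium fixes a via the Keq = 0.728 quoted for 1200 K.

```python
# Hedged sketch: solve a*(a-1) = Keq*(3-a)*(5-a) for a (mol CO2) by
# bisection on the physically admissible interval 1 < a < 3, where
# all four product mole numbers stay non-negative.

def solve_products(keq: float = 0.728) -> dict:
    f = lambda a: a * (a - 1) - keq * (3 - a) * (5 - a)   # increasing on (1, 3)
    lo, hi = 1.0, 3.0
    for _ in range(60):                # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    a = 0.5 * (lo + hi)
    return {"CO2": a, "CO": 3 - a, "H2O": 5 - a, "H2": a - 1}

moles = solve_products()
total = sum(moles.values())            # 7 mol of combustion gas
for gas, n in moles.items():
    print(f"{gas}: {100 * n / total:.1f}%")
# -> CO2 29.0%, CO 13.8%, H2O 42.4%, H2 14.7%
#    (text: 29.0 / 13.9 / 42.4 / 14.7, within rounding)
```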
{
"paragraph_id": 40,
"text": "Substances or materials which undergo combustion are called fuels. The most common examples are natural gas, propane, kerosene, diesel, petrol, charcoal, coal, wood, etc.",
"title": "Chemical equations"
},
{
"paragraph_id": 41,
"text": "Combustion of a liquid fuel in an oxidizing atmosphere actually happens in the gas phase. It is the vapor that burns, not the liquid. Therefore, a liquid will normally catch fire only above a certain temperature: its flash point. The flash point of liquid fuel is the lowest temperature at which it can form an ignitable mix with air. It is the minimum temperature at which there is enough evaporated fuel in the air to start combustion.",
"title": "Chemical equations"
},
{
"paragraph_id": 42,
"text": "Combustion of gaseous fuels may occur through one of four distinctive types of burning: diffusion flame, premixed flame, autoignitive reaction front, or as a detonation. The type of burning that actually occurs depends on the degree to which the fuel and oxidizer are mixed prior to heating: for example, a diffusion flame is formed if the fuel and oxidizer are separated initially, whereas a premixed flame is formed otherwise. Similarly, the type of burning also depends on the pressure: a detonation, for example, is an autoignitive reaction front coupled to a strong shock wave giving it its characteristic high-pressure peak and high detonation velocity.",
"title": "Chemical equations"
},
{
"paragraph_id": 43,
"text": "The act of combustion consists of three relatively distinct but overlapping phases:",
"title": "Chemical equations"
},
{
"paragraph_id": 44,
"text": "Efficient process heating requires recovery of the largest possible part of a fuel's heat of combustion into the material being processed. There are many avenues of loss in the operation of a heating process. Typically, the dominant loss is sensible heat leaving with the offgas (i.e., the flue gas). The temperature and quantity of offgas indicates its heat content (enthalpy), so keeping its quantity low minimizes heat loss.",
"title": "Combustion management"
},
{
"paragraph_id": 45,
"text": "In a perfect furnace, the combustion air flow would be matched to the fuel flow to give each fuel molecule the exact amount of oxygen needed to cause complete combustion. However, in the real world, combustion does not proceed in a perfect manner. Unburned fuel (usually CO and H2) discharged from the system represents a heating value loss (as well as a safety hazard). Since combustibles are undesirable in the offgas, while the presence of unreacted oxygen there presents minimal safety and environmental concerns, the first principle of combustion management is to provide more oxygen than is theoretically needed to ensure that all the fuel burns. For methane (CH4) combustion, for example, slightly more than two molecules of oxygen are required.",
"title": "Combustion management"
},
{
"paragraph_id": 46,
"text": "The second principle of combustion management, however, is to not use too much oxygen. The correct amount of oxygen requires three types of measurement: first, active control of air and fuel flow; second, offgas oxygen measurement; and third, measurement of offgas combustibles. For each heating process, there exists an optimum condition of minimal offgas heat loss with acceptable levels of combustibles concentration. Minimizing excess oxygen pays an additional benefit: for a given offgas temperature, the NOx level is lowest when excess oxygen is kept lowest.",
"title": "Combustion management"
},
{
"paragraph_id": 47,
"text": "Adherence to these two principles is furthered by making material and heat balances on the combustion process. The material balance directly relates the air/fuel ratio to the percentage of O2 in the combustion gas. The heat balance relates the heat available for the charge to the overall net heat produced by fuel combustion. Additional material and heat balances can be made to quantify the thermal advantage from preheating the combustion air, or enriching it in oxygen.",
"title": "Combustion management"
},
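A minimal sketch of the material-balance idea above, added editorially: a common field approximation (not a formula given in the article) infers percent excess combustion air from the measured dry flue-gas O2, assuming complete combustion and treating air as 20.95% O2.

```python
# Hedged approximation: % excess air ~ O2 / (20.95 - O2) * 100,
# with O2 the measured dry flue-gas oxygen in vol%.

def excess_air_percent(o2_flue_percent: float) -> float:
    """Approximate % excess combustion air from dry flue-gas O2 (vol%)."""
    return 100.0 * o2_flue_percent / (20.95 - o2_flue_percent)

print(f"{excess_air_percent(3.3):.0f}% excess air")
# -> 19% excess air, roughly consistent with the earlier propane example
#    (120% of stoichiometric air yielding 3.3% O2 in the products)
```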
{
"paragraph_id": 48,
"text": "Combustion in oxygen is a chain reaction in which many distinct radical intermediates participate. The high energy required for initiation is explained by the unusual structure of the dioxygen molecule. The lowest-energy configuration of the dioxygen molecule is a stable, relatively unreactive diradical in a triplet spin state. Bonding can be described with three bonding electron pairs and two antibonding electrons, with spins aligned, such that the molecule has nonzero total angular momentum. Most fuels, on the other hand, are in a singlet state, with paired spins and zero total angular momentum. Interaction between the two is quantum mechanically a \"forbidden transition\", i.e. possible with a very low probability. To initiate combustion, energy is required to force dioxygen into a spin-paired state, or singlet oxygen. This intermediate is extremely reactive. The energy is supplied as heat, and the reaction then produces additional heat, which allows it to continue.",
"title": "Reaction mechanism"
},
{
"paragraph_id": 49,
"text": "Combustion of hydrocarbons is thought to be initiated by hydrogen atom abstraction (not proton abstraction) from the fuel to oxygen, to give a hydroperoxide radical (HOO). This reacts further to give hydroperoxides, which break up to give hydroxyl radicals. There are a great variety of these processes that produce fuel radicals and oxidizing radicals. Oxidizing species include singlet oxygen, hydroxyl, monatomic oxygen, and hydroperoxyl. Such intermediates are short-lived and cannot be isolated. However, non-radical intermediates are stable and are produced in incomplete combustion. An example is acetaldehyde produced in the combustion of ethanol. An intermediate in the combustion of carbon and hydrocarbons, carbon monoxide, is of special importance because it is a poisonous gas, but also economically useful for the production of syngas.",
"title": "Reaction mechanism"
},
{
"paragraph_id": 50,
"text": "Solid and heavy liquid fuels also undergo a great number of pyrolysis reactions that give more easily oxidized, gaseous fuels. These reactions are endothermic and require constant energy input from the ongoing combustion reactions. A lack of oxygen or other improperly designed conditions result in these noxious and carcinogenic pyrolysis products being emitted as thick, black smoke.",
"title": "Reaction mechanism"
},
{
"paragraph_id": 51,
"text": "The rate of combustion is the amount of a material that undergoes combustion over a period of time. It can be expressed in grams per second (g/s) or kilograms per second (kg/s).",
"title": "Reaction mechanism"
},
{
"paragraph_id": 52,
"text": "Detailed descriptions of combustion processes, from the chemical kinetics perspective, require the formulation of large and intricate webs of elementary reactions. For instance, combustion of hydrocarbon fuels typically involve hundreds of chemical species reacting according to thousands of reactions.",
"title": "Reaction mechanism"
},
{
"paragraph_id": 53,
"text": "The inclusion of such mechanisms within computational flow solvers still represents a pretty challenging task mainly in two aspects. First, the number of degrees of freedom (proportional to the number of chemical species) can be dramatically large; second, the source term due to reactions introduces a disparate number of time scales which makes the whole dynamical system stiff. As a result, the direct numerical simulation of turbulent reactive flows with heavy fuels soon becomes intractable even for modern supercomputers.",
"title": "Reaction mechanism"
},
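A toy editorial illustration of the stiffness just mentioned: one fast-decaying species forces an explicit integrator to take steps below its stability limit, even when nothing interesting happens on that time scale. Real combustion mechanisms couple many such disparate scales at once; the rate constant and step size here are arbitrary assumptions.

```python
# Hedged sketch: explicit Euler on dy/dt = -k*y is stable only for
# dt < 2/k, while implicit (backward) Euler is unconditionally stable.

import math

k, dt = 1.0e6, 1.0e-5        # dt is 5x the explicit stability limit 2/k

y = 1.0
for _ in range(10):
    y = y + dt * (-k * y)    # explicit Euler step
print(f"explicit Euler: {y:.3e}")     # blows up: (1 - k*dt)^10 = (-9)^10

y = 1.0
for _ in range(10):
    y = y / (1 + k * dt)     # implicit Euler step, stable for any dt
print(f"implicit Euler: {y:.3e}")     # decays toward 0, like the true solution
print(f"exact         : {math.exp(-k * 10 * dt):.3e}")
```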
{
"paragraph_id": 54,
"text": "Therefore, a plethora of methodologies have been devised for reducing the complexity of combustion mechanisms without resorting to high detail levels. Examples are provided by:",
"title": "Reaction mechanism"
},
{
"paragraph_id": 55,
"text": "The kinetic modelling may be explored for insight into the reaction mechanisms of thermal decomposition in the combustion of different materials by using for instance Thermogravimetric analysis.",
"title": "Reaction mechanism"
},
{
"paragraph_id": 56,
"text": "Assuming perfect combustion conditions, such as complete combustion under adiabatic conditions (i.e., no heat loss or gain), the adiabatic combustion temperature can be determined. The formula that yields this temperature is based on the first law of thermodynamics and takes note of the fact that the heat of combustion is used entirely for heating the fuel, the combustion air or oxygen, and the combustion product gases (commonly referred to as the flue gas).",
"title": "Temperature"
},
{
"paragraph_id": 57,
"text": "In the case of fossil fuels burnt in air, the combustion temperature depends on all of the following:",
"title": "Temperature"
},
{
"paragraph_id": 58,
"text": "The adiabatic combustion temperature (also known as the adiabatic flame temperature) increases for higher heating values and inlet air and fuel temperatures and for stoichiometric air ratios approaching one.",
"title": "Temperature"
},
{
"paragraph_id": 59,
"text": "Most commonly, the adiabatic combustion temperatures for coals are around 2,200 °C (3,992 °F) (for inlet air and fuel at ambient temperatures and for λ = 1.0 {\\displaystyle \\lambda =1.0} ), around 2,150 °C (3,902 °F) for oil and 2,000 °C (3,632 °F) for natural gas.",
"title": "Temperature"
},
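A rough editorial estimate (not from the article) of the adiabatic flame temperature for methane in air, via a single energy balance LHV = Σ(n_i · cp_i) · (T_ad - T_in). The mean heat capacities below are coarse high-temperature assumptions, so the result is only indicative of the ~2,000 °C quoted above for natural gas.

```python
# Hedged sketch: energy balance for CH4 + 2 O2 + 7.54 N2 -> CO2 + 2 H2O
# + 7.54 N2 (using the text's 3.77 mol N2 per mol O2).  The cp values
# are assumed mean high-temperature heat capacities in J/(mol*K).

LHV_CH4 = 802_300.0          # J per mol CH4 (lower heating value)
T_in = 298.0                 # K, ambient inlet air and fuel

products = {"CO2": (1.0, 57.0), "H2O": (2.0, 45.0), "N2": (7.54, 33.0)}
heat_capacity = sum(n * cp for n, cp in products.values())   # J/K per mol fuel

T_ad = T_in + LHV_CH4 / heat_capacity
print(f"T_ad ~ {T_ad:.0f} K ({T_ad - 273:.0f} C)")   # ~2325 K (~2050 C)
```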
{
"paragraph_id": 60,
"text": "In industrial fired heaters, power station steam generators, and large gas-fired turbines, the more common way of expressing the usage of more than the stoichiometric combustion air is percent excess combustion air. For example, excess combustion air of 15 percent means that 15 percent more than the required stoichiometric air is being used.",
"title": "Temperature"
},
{
"paragraph_id": 61,
"text": "Combustion instabilities are typically violent pressure oscillations in a combustion chamber. These pressure oscillations can be as high as 180 dB, and long-term exposure to these cyclic pressure and thermal loads reduces the life of engine components. In rockets, such as the F1 used in the Saturn V program, instabilities led to massive damage to the combustion chamber and surrounding components. This problem was solved by re-designing the fuel injector. In liquid jet engines, the droplet size and distribution can be used to attenuate the instabilities. Combustion instabilities are a major concern in ground-based gas turbine engines because of NOx emissions. The tendency is to run lean, an equivalence ratio less than 1, to reduce the combustion temperature and thus reduce the NOx emissions; however, running the combustion lean makes it very susceptible to combustion instability.",
"title": "Instabilities"
},
{
"paragraph_id": 62,
"text": "The Rayleigh Criterion is the basis for analysis of thermoacoustic combustion instability and is evaluated using the Rayleigh Index over one cycle of instability",
"title": "Instabilities"
},
{
"paragraph_id": 63,
"text": "where q' is the heat release rate perturbation and p' is the pressure fluctuation. When the heat release oscillations are in phase with the pressure oscillations, the Rayleigh Index is positive and the magnitude of the thermoacoustic instability is maximised. On the other hand, if the Rayleigh Index is negative, then thermoacoustic damping occurs. The Rayleigh Criterion implies that thermoacoustic instability can be optimally controlled by having heat release oscillations 180 degrees out of phase with pressure oscillations at the same frequency. This minimizes the Rayleigh Index.",
"title": "Instabilities"
}
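An editorial sketch of the Rayleigh Index for sinusoidal heat-release and pressure fluctuations with a phase offset phi: the index is positive (amplification) when they are in phase and negative (damping) at 180 degrees, as the paragraph above states. The waveforms are illustrative assumptions.

```python
# Hedged sketch: R = (1/T) * integral over one cycle of q'(t) * p'(t),
# evaluated numerically; analytically this equals cos(phi) / 2.

import math

def rayleigh_index(phi: float, n: int = 10_000) -> float:
    T = 2 * math.pi                   # one cycle at unit angular frequency
    dt = T / n
    total = 0.0
    for i in range(n):
        t = i * dt
        p = math.sin(t)               # pressure fluctuation p'
        q = math.sin(t + phi)         # heat-release fluctuation q'
        total += p * q * dt
    return total / T

for phi_deg in (0, 90, 180):
    print(f"phi = {phi_deg:3d} deg -> R = {rayleigh_index(math.radians(phi_deg)):+.3f}")
# -> +0.500 (amplified), ~0.000, -0.500 (damped)
```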
] | Combustion, or burning, is a high-temperature exothermic redox chemical reaction between a fuel and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed smoke. Combustion does not always result in fire, because a flame is only visible when substances undergoing combustion vaporize, but when it does, a flame is a characteristic indicator of the reaction. While activation energy must be supplied to initiate combustion, the heat from a flame may provide enough energy to make the reaction self-sustaining. Combustion is often a complicated sequence of elementary radical reactions. Solid fuels, such as wood and coal, first undergo endothermic pyrolysis to produce gaseous fuels whose combustion then supplies the heat required to produce more of them. Combustion is often hot enough that incandescent light in the form of either glowing or a flame is produced. A simple example can be seen in the combustion of hydrogen and oxygen into water vapor, a reaction which is commonly used to fuel rocket engines. This reaction releases 242 kJ/mol of heat and reduces the enthalpy accordingly. Uncatalyzed combustion in air requires relatively high temperatures. Complete combustion is stoichiometric with respect to the fuel, where there is no remaining fuel, and ideally, no residual oxidant. Thermodynamically, the chemical equilibrium of combustion in air is overwhelmingly on the side of the products. However, complete combustion is almost impossible to achieve, since the chemical equilibrium is not necessarily reached, and the products may contain unburnt species such as carbon monoxide, hydrogen and even carbon. Thus, the produced smoke is usually toxic and contains unburned or partially oxidized products. Any combustion at high temperatures in atmospheric air, which is 78 percent nitrogen, will also create small amounts of several nitrogen oxides, commonly referred to as NOx, since the combustion of nitrogen is thermodynamically favored at high, but not low temperatures. Since burning is rarely clean, fuel gas cleaning or catalytic converters may be required by law. Fires occur naturally, ignited by lightning strikes or by volcanic products. Combustion (fire) was the first controlled chemical reaction discovered by humans, in the form of campfires and bonfires, and continues to be the main method to produce energy for humanity. Usually, the fuel is carbon, hydrocarbons, or more complicated mixtures such as wood that contain partially oxidized hydrocarbons. The thermal energy produced from the combustion of either fossil fuels such as coal or oil, or from renewable fuels such as firewood, is harvested for diverse uses such as cooking, production of electricity or industrial or domestic heating. Combustion is also currently the only reaction used to power rockets. Combustion is also used to destroy (incinerate) waste, both nonhazardous and hazardous. Oxidants for combustion have high oxidation potential and include atmospheric or pure oxygen, chlorine, fluorine, chlorine trifluoride, nitrous oxide and nitric acid. For instance, hydrogen burns in chlorine to form hydrogen chloride with the liberation of heat and light characteristic of combustion. Although usually not catalyzed, combustion can be catalyzed by platinum or vanadium, as in the contact process. | 2001-05-10T13:07:34Z | 2023-12-24T09:14:24Z | [
"Template:Redirect",
"Template:See also",
"Template:NOx",
"Template:Firelighting",
"Template:Authority control",
"Template:Col-end",
"Template:Reflist",
"Template:Nbsp",
"Template:Chem",
"Template:CO2",
"Template:More citations needed section",
"Template:Col-begin",
"Template:Col-break",
"Template:Cite report",
"Template:Cite journal",
"Template:Short description",
"Template:Val",
"Template:Sub",
"Template:Wiktionary",
"Template:Convert",
"Template:H2O",
"Template:Cite web",
"Template:Cite book",
"Template:Fire"
] | https://en.wikipedia.org/wiki/Combustion |
5,639 | Cyrillic script | The Cyrillic script (/sɪˈrɪlɪk/ sih-RIL-ik), Slavonic script or simply Slavic script is a writing system used for various languages across Eurasia. It is the designated national script in various Slavic, Turkic, Mongolic, Uralic, Caucasian and Iranic-speaking countries in Southeastern Europe, Eastern Europe, the Caucasus, Central Asia, North Asia, and East Asia, and used by many other minority languages.
As of 2019, around 250 million people in Eurasia use Cyrillic as the official script for their national languages, with Russia accounting for about half of them. With the accession of Bulgaria to the European Union on 1 January 2007, Cyrillic became the third official script of the European Union, following the Latin and Greek alphabets.
The Early Cyrillic alphabet was developed during the 9th century AD at the Preslav Literary School in the First Bulgarian Empire during the reign of Tsar Simeon I the Great, probably by the disciples of the two Byzantine brothers Cyril and Methodius, who had previously created the Glagolitic script. Among them were Clement of Ohrid, Naum of Preslav, Angelar, Sava and other scholars. The script is named in honor of Saint Cyril.
Since the script was conceived and popularised by the Slavic followers of Cyril and Methodius, rather than by Cyril and Methodius themselves, its name denotes homage rather than authorship. The name "Cyrillic" often confuses people who are not familiar with the script's history, because it does not identify the country of origin – Bulgaria (in contrast to the "Greek alphabet").
In Bulgarian, Macedonian, Russian, Serbian, Czech and Slovak, the Cyrillic alphabet is also known as azbuka, derived from the old names of the first two letters of most Cyrillic alphabets (just as the term alphabet came from the first two Greek letters alpha and beta). In Czech and Slovak, which have never used Cyrillic, "azbuka" refers to Cyrillic and contrasts with "abeceda", which refers to the local Latin script and is composed of the names of the first letters (A, B, C, and D). In Russian, syllabaries, especially the Japanese kana, are commonly referred to as 'syllabic azbukas' rather than 'syllabic scripts'.
The Cyrillic script was created during the First Bulgarian Empire. Modern scholars believe that the Early Cyrillic alphabet was created at the Preslav Literary School, the most important early literary and cultural center of the First Bulgarian Empire and of all Slavs:
Unlike the Churchmen in Ohrid, Preslav scholars were much more dependent upon Greek models and quickly abandoned the Glagolitic scripts in favor of an adaptation of the Greek uncial to the needs of Slavic, which is now known as the Cyrillic alphabet.
A number of prominent Bulgarian writers and scholars worked at the school, including Naum of Preslav until 893; Constantine of Preslav; Joan Ekzarh (also transcr. John the Exarch); and Chernorizets Hrabar, among others. The school was also a center of translation, mostly of Byzantine authors. The Cyrillic script is derived from the Greek uncial script letters, augmented by ligatures and consonants from the older Glagolitic alphabet for sounds not found in Greek. Glagolitic and Cyrillic were formalized by the Byzantine Saints Cyril and Methodius and their disciples, such as Saints Naum, Clement, Angelar, and Sava. They spread and taught Christianity in the whole of Bulgaria. Paul Cubberley posits that although Cyril may have codified and expanded Glagolitic, it was his students in the First Bulgarian Empire under Tsar Simeon the Great that developed Cyrillic from the Greek letters in the 890s as a more suitable script for church books.
Cyrillic spread among other Slavic peoples, as well as among non-Slavic Romanians. The earliest datable Cyrillic inscriptions have been found in the area of Preslav, in the medieval city itself and at nearby Patleina Monastery, both in present-day Shumen Province, as well as in the Ravna Monastery and in the Varna Monastery. The new script became the basis of alphabets used in various languages in Orthodox Church-dominated Eastern Europe, both Slavic and non-Slavic languages (such as Romanian, until the 1860s). For centuries, Cyrillic was also used by Catholic and Muslim Slavs (see Bosnian Cyrillic).
Cyrillic and Glagolitic were used for the Church Slavonic language, especially the Old Church Slavonic variant. Hence expressions such as "И is the tenth Cyrillic letter" typically refer to the order of the Church Slavonic alphabet; not every Cyrillic alphabet uses every letter available in the script. The Cyrillic script came to dominate Glagolitic in the 12th century.
The literature produced in Old Church Slavonic soon spread north from Bulgaria and became the lingua franca of the Balkans and Eastern Europe.
Bosnian Cyrillic, widely known as Bosančica, is an extinct variant of the Cyrillic alphabet that originated in medieval Bosnia. Paleographers consider that the earliest features of the Bosnian Cyrillic script likely began to appear between the 10th and 11th centuries, and regard the Humac tablet (a tablet written in Bosnian Cyrillic) as the first document of this type of script, believed to date from this period. Bosnian Cyrillic was used continuously until the 18th century, with sporadic usage even taking place in the 20th century.
With the orthographic reform of Saint Evtimiy of Tarnovo and other prominent representatives of the Tarnovo Literary School of the 14th and 15th centuries, such as Gregory Tsamblak and Constantine of Kostenets, the school influenced Russian, Serbian, Wallachian and Moldavian medieval culture. This is known in Russia as the second South-Slavic influence.
In the early 18th century, the Cyrillic script used in Russia was heavily reformed by Peter the Great, who had recently returned from his Grand Embassy in Western Europe. The new letterforms, called the Civil script, became closer to those of the Latin alphabet; several archaic letters were abolished and several new letters, designed by Peter himself, were introduced. Letters became distinguished between upper and lower case. West European typography culture was also adopted. The pre-reform letterforms, called 'Полуустав', were notably retained in Church Slavonic and are sometimes used in Russian even today, especially if one wants to give a text a 'Slavic' or 'archaic' feel.
The alphabet used for the modern Church Slavonic language in Eastern Orthodox and Eastern Catholic rites still resembles early Cyrillic. However, over the course of the following millennium, Cyrillic adapted to changes in spoken language, developed regional variations to suit the features of national languages, and was subjected to academic reform and political decrees. A notable example of such linguistic reform can be attributed to Vuk Stefanović Karadžić, who updated the Serbian Cyrillic alphabet by removing certain graphemes no longer represented in the vernacular and introducing graphemes specific to Serbian (i.e. Љ Њ Ђ Ћ Џ Ј), distancing it from the Church Slavonic alphabet in use prior to the reform. Today, many languages in the Balkans, Eastern Europe, and northern Eurasia are written in Cyrillic alphabets.
Cyrillic script spread throughout the East Slavic and some South Slavic territories, being adopted for writing local languages, such as Old East Slavic. Its adaptation to local languages produced a number of Cyrillic alphabets, discussed below.
Capital and lowercase letters were not distinguished in old manuscripts.
Yeri (Ы) was originally a ligature of Yer and I (Ъ + І = Ы). Iotation was indicated by ligatures formed with the letter І: Ꙗ (not an ancestor of modern Ya, Я, which is derived from Ѧ), Ѥ, Ю (ligature of І and ОУ), Ѩ, Ѭ. Sometimes different letters were used interchangeably, for example И = І = Ї, as were typographical variants like О = Ѻ. There were also commonly used ligatures like ѠТ = Ѿ.
The letters also had numeric values, based not on Cyrillic alphabetical order, but inherited from the letters' Greek ancestors.
The early Cyrillic alphabet is difficult to represent on computers. Many of the letterforms differed from those of modern Cyrillic, varied a great deal in manuscripts, and changed over time. Few fonts include glyphs sufficient to reproduce the alphabet. In accordance with Unicode policy, the standard does not include letterform variations or ligatures found in manuscript sources unless they can be shown to conform to the Unicode definition of a character.
The Unicode 5.1 standard, released on 4 April 2008, greatly improved computer support for the early Cyrillic and the modern Church Slavonic language. In Microsoft Windows, the Segoe UI user interface font is notable for having complete support for the archaic Cyrillic letters since Windows 8.
Some currency signs have derived from Cyrillic letters:
The development of Cyrillic typography passed directly from the medieval stage to the late Baroque, without a Renaissance phase as in Western Europe. Late Medieval Cyrillic letters (categorized as vyaz' and still found on many icon inscriptions today) show a marked tendency to be very tall and narrow, with strokes often shared between adjacent letters.
Peter the Great, Tsar of Russia, mandated the use of westernized letter forms (ru) in the early 18th century. Over time, these were largely adopted in the other languages that use the script. Thus, unlike the majority of modern Greek fonts that retained their own set of design principles for lower-case letters (such as the placement of serifs, the shapes of stroke ends, and stroke-thickness rules, although Greek capital letters do use Latin design principles), modern Cyrillic fonts are much the same as modern Latin fonts of the same font family. The development of some Cyrillic computer typefaces from Latin ones has also contributed to the visual Latinization of Cyrillic type.
Cyrillic uppercase and lowercase letter forms are not as differentiated as in Latin typography. Upright Cyrillic lowercase letters are essentially small capitals (with exceptions: Cyrillic ⟨а⟩, ⟨е⟩, ⟨і⟩, ⟨ј⟩, ⟨р⟩, and ⟨у⟩ adopted Western lowercase shapes, lowercase ⟨ф⟩ is typically designed under the influence of Latin ⟨p⟩, lowercase ⟨б⟩, ⟨ђ⟩ and ⟨ћ⟩ are traditional handwritten forms), although a good-quality Cyrillic typeface will still include separate small-caps glyphs.
Cyrillic fonts, as well as Latin ones, have roman and italic types (practically all popular modern fonts include parallel sets of Latin and Cyrillic letters, where many glyphs, uppercase as well as lowercase, are shared by both). However, the native font terminology in most Slavic languages (for example, in Russian) does not use the words "roman" and "italic" in this sense. Instead, the nomenclature follows German naming patterns:
Similarly to Latin fonts, italic and cursive types of many Cyrillic letters (typically lowercase; uppercase only for handwritten or stylish types) are very different from their upright roman types. In certain cases, the correspondence between uppercase and lowercase glyphs does not coincide in Latin and Cyrillic fonts: for example, italic Cyrillic ⟨т⟩ is the lowercase counterpart of ⟨Т⟩ not of ⟨М⟩.
Note: in some fonts or styles, the lowercase italic Cyrillic ⟨д⟩ may look like the Latin ⟨g⟩, and the lowercase italic Cyrillic ⟨т⟩ may look like a small-capital italic ⟨T⟩.
In Standard Serbian, as well as in Macedonian, some italic and cursive letters are allowed to be different, to more closely resemble the handwritten letters. The regular (upright) shapes are generally standardized in small caps form.
Notes: Depending on fonts available, the Serbian row may appear identical to the Russian row. Unicode approximations are used in the faux row to ensure it can be rendered properly across all systems.
In Bulgarian typography, many lowercase letterforms may more closely resemble the cursive forms on the one hand and Latin glyphs on the other hand, e.g. by having an ascender or descender or by using rounded arcs instead of sharp corners. Sometimes, uppercase letters may have a different shape as well, e.g. more triangular, Д and Л, like Greek delta Δ and lambda Λ.
Notes: Depending on fonts available, the Bulgarian row may appear identical to the Russian row. Unicode approximations are used in the faux row to ensure it can be rendered properly across all systems; in some cases, such as ж with k-like ascender, no such approximation exists.
Computer fonts typically default to the Central/Eastern, Russian letterforms, and require the use of OpenType Layout (OTL) features to display the Western, Bulgarian or Southern, Serbian/Macedonian forms. Depending on the choices of the font manufacturer, they may either be automatically activated by the local variant locl feature for text tagged with an appropriate language code, or the author needs to opt-in by activating a stylistic set ss## or character variant cv## feature. These solutions only enjoy partial support and may render with default glyphs in certain software configurations.
Among others, Cyrillic is the standard script for writing the following languages:
The Cyrillic script has also been used for languages of Alaska, Slavic Europe (except for Western Slavic and some Southern Slavic), the Caucasus, the languages of Idel-Ural, Siberia, and the Russian Far East.
The first alphabet derived from Cyrillic was Abur, used for the Komi language. Other Cyrillic alphabets include the Molodtsov alphabet for the Komi language and various alphabets for Caucasian languages.
A number of languages written in a Cyrillic alphabet have also been written in a Latin alphabet, such as Azerbaijani, Uzbek, Serbian, and Romanian (in the Republic of Moldova until 1989 and in the Danubian Principalities throughout the 19th century). After the disintegration of the Soviet Union in 1991, some of the former republics officially shifted from Cyrillic to Latin. The transition is complete in most of Moldova (except the breakaway region of Transnistria, where Moldovan Cyrillic is official), Turkmenistan, and Azerbaijan. Uzbekistan still uses both systems, and Kazakhstan has officially begun a transition from Cyrillic to Latin (scheduled to be complete by 2025). The Russian government has mandated that Cyrillic must be used for all public communications in all federal subjects of Russia, to promote closer ties across the federation. This act was controversial for speakers of many Slavic languages; for others, such as Chechen and Ingush speakers, the law had political ramifications. For example, the separatist Chechen government mandated a Latin script which is still used by many Chechens.
Standard Serbian uses both the Cyrillic and Latin scripts. Cyrillic is nominally the official script of Serbia's administration according to the Serbian constitution; however, the law does not regulate scripts in standard language, or standard language itself by any means. In practice the scripts are equal, with Latin being used more often in a less official capacity.
The Zhuang alphabet, used between the 1950s and 1980s in portions of the People's Republic of China, used a mixture of Latin, phonetic, numeral-based, and Cyrillic letters. The non-Latin letters, including Cyrillic, were removed from the alphabet in 1982 and replaced with Latin letters that closely resembled the letters they replaced.
There are various systems for romanization of Cyrillic text, including transliteration to convey Cyrillic spelling in Latin letters, and transcription to convey pronunciation.
Standard Cyrillic-to-Latin transliteration systems include:
See also Romanization of Belarusian, Bulgarian, Kyrgyz, Russian, Macedonian and Ukrainian.
Representing other writing systems with Cyrillic letters is called Cyrillization.
As of Unicode version 15.1, Cyrillic letters, including national and historical alphabets, are encoded across several blocks:
The characters in the range U+0400 to U+045F are essentially the characters from ISO 8859-5 moved upward by 864 positions. The characters in the range U+0460 to U+0489 are historic letters, not used now. The characters in the range U+048A to U+052F are additional letters for various languages that are written with Cyrillic script.
Unicode as a general rule does not include accented Cyrillic letters. A few exceptions include:
To indicate stressed or long vowels, combining diacritical marks can be used after the respective letter (for example, U+0301 ◌́ COMBINING ACUTE ACCENT: е́ у́ э́ etc.).
Some languages, including Church Slavonic, are still not fully supported.
Unicode 5.1, released on 4 April 2008, introduces major changes to the Cyrillic blocks. Revisions to the existing Cyrillic blocks, and the addition of Cyrillic Extended-A (U+2DE0–U+2DFF) and Cyrillic Extended-B (U+A640–U+A69F), significantly improve support for the early Cyrillic alphabet, Abkhaz, Aleut, Chuvash, Kurdish, and Moksha.
Other character encoding systems for Cyrillic:
Each language has its own standard keyboard layout, adopted from typewriters. With the flexibility of computer input methods, there are also transliterating or phonetic/homophonic keyboard layouts made for typists who are more familiar with other layouts, like the common English QWERTY keyboard. When practical Cyrillic keyboard layouts or fonts are unavailable, computer users sometimes use transliteration or look-alike "volapuk" encoding to type in languages that are normally written with the Cyrillic alphabet. | [
{
"paragraph_id": 0,
"text": "The Cyrillic script (/sɪˈrɪlɪk/ sih-RIL-ik), Slavonic script or simply Slavic script is a writing system used for various languages across Eurasia. It is the designated national script in various Slavic, Turkic, Mongolic, Uralic, Caucasian and Iranic-speaking countries in Southeastern Europe, Eastern Europe, the Caucasus, Central Asia, North Asia, and East Asia, and used by many other minority languages.",
"title": ""
},
{
"paragraph_id": 1,
"text": "As of 2019, around 250 million people in Eurasia use Cyrillic as the official script for their national languages, with Russia accounting for about half of them. With the accession of Bulgaria to the European Union on 1 January 2007, Cyrillic became the third official script of the European Union, following the Latin and Greek alphabets.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Early Cyrillic alphabet was developed during the 9th century AD at the Preslav Literary School in the First Bulgarian Empire during the reign of Tsar Simeon I the Great, probably by the disciples of the two Byzantine brothers Cyril and Methodius, who had previously created the Glagolitic script. Among them were Clement of Ohrid, Naum of Preslav, Angelar, Sava and other scholars. The script is named in honor of Saint Cyril.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Since the script was conceived and popularised by the Slavic followers of Cyril and Methodius, rather than by Cyril and Methodius themselves, its name denotes homage rather than authorship. The name \"Cyrillic\" often confuses people who are not familiar with the script's history, because it does not identify the country of origin – Bulgaria (in contrast to the \"Greek alphabet\").",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "In Bulgarian, Macedonian, Russian, Serbian, Czech and Slovak, the Cyrillic alphabet is also known as azbuka, derived from the old names of the first two letters of most Cyrillic alphabets (just as the term alphabet came from the first two Greek letters alpha and beta). In Czech and Slovak, which have never used Cyrillic, \"azbuka\" refers to Cyrillic and contrasts with \"abeceda\", which refers to the local Latin script and is composed of the names of the first letters (A, B, C, and D). In Russian, syllabaries, especially the Japanese kana, are commonly referred to as 'syllabic azbukas' rather than 'syllabic scripts'.",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "The Cyrillic script was created during the First Bulgarian Empire. Modern scholars believe that the Early Cyrillic alphabet was created at the Preslav Literary School, the most important early literary and cultural center of the First Bulgarian Empire and of all Slavs:",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Unlike the Churchmen in Ohrid, Preslav scholars were much more dependent upon Greek models and quickly abandoned the Glagolitic scripts in favor of an adaptation of the Greek uncial to the needs of Slavic, which is now known as the Cyrillic alphabet.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "A number of prominent Bulgarian writers and scholars worked at the school, including Naum of Preslav until 893; Constantine of Preslav; Joan Ekzarh (also transcr. John the Exarch); and Chernorizets Hrabar, among others. The school was also a center of translation, mostly of Byzantine authors. The Cyrillic script is derived from the Greek uncial script letters, augmented by ligatures and consonants from the older Glagolitic alphabet for sounds not found in Greek. Glagolitic and Cyrillic were formalized by the Byzantine Saints Cyril and Methodius and their disciples, such as Saints Naum, Clement, Angelar, and Sava. They spread and taught Christianity in the whole of Bulgaria. Paul Cubberley posits that although Cyril may have codified and expanded Glagolitic, it was his students in the First Bulgarian Empire under Tsar Simeon the Great that developed Cyrillic from the Greek letters in the 890s as a more suitable script for church books.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Cyrillic spread among other Slavic peoples, as well as among non-Slavic Romanians. The earliest datable Cyrillic inscriptions have been found in the area of Preslav, in the medieval city itself and at nearby Patleina Monastery, both in present-day Shumen Province, as well as in the Ravna Monastery and in the Varna Monastery. The new script became the basis of alphabets used in various languages in Orthodox Church-dominated Eastern Europe, both Slavic and non-Slavic languages (such as Romanian, until the 1860s). For centuries, Cyrillic was also used by Catholic and Muslim Slavs (see Bosnian Cyrillic).",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Cyrillic and Glagolitic were used for the Church Slavonic language, especially the Old Church Slavonic variant. Hence expressions such as \"И is the tenth Cyrillic letter\" typically refer to the order of the Church Slavonic alphabet; not every Cyrillic alphabet uses every letter available in the script. The Cyrillic script came to dominate Glagolitic in the 12th century.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The literature produced in Old Church Slavonic soon spread north from Bulgaria and became the lingua franca of the Balkans and Eastern Europe.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Bosnian Cyrillic, widely known as Bosančica is an extinct variant of the Cyrillic alphabet that originated in medieval Bosnia. Paleographers consider the earliest features of Bosnian Cyrillic script had likely begun to appear between the 10th or 11th century, with the Humac tablet (a tablet written in Bosnian Cyrillic) to be the first such document using this type of script and is believed to date from this period. Bosnian Cyrillic was used continuously until the 18th century, with sporadic usage even taking place in the 20th century.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "With the orthographic reform of Saint Evtimiy of Tarnovo and other prominent representatives of the Tarnovo Literary School of the 14th and 15th centuries, such as Gregory Tsamblak and Constantine of Kostenets, the school influenced Russian, Serbian, Wallachian and Moldavian medieval culture. This is known in Russia as the second South-Slavic influence.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In the early 18th century, the Cyrillic script used in Russia was heavily reformed by Peter the Great, who had recently returned from his Grand Embassy in Western Europe. The new letterforms, called the Civil script, became closer to those of the Latin alphabet; several archaic letters were abolished and several new letters were introduced designed by Peter himself. Letters became distinguished between upper and lower case. West European typography culture was also adopted. The pre-reform letterforms, called 'Полуустав', were notably retained in Church Slavonic and are sometimes used in Russian even today, especially if one wants to give a text a 'Slavic' or 'archaic' feel.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The alphabet used for the modern Church Slavonic language in Eastern Orthodox and Eastern Catholic rites still resembles early Cyrillic. However, over the course of the following millennium, Cyrillic adapted to changes in spoken language, developed regional variations to suit the features of national languages, and was subjected to academic reform and political decrees. A notable example of such linguistic reform can be attributed to Vuk Stefanović Karadžić, who updated the Serbian Cyrillic alphabet by removing certain graphemes no longer represented in the vernacular and introducing graphemes specific to Serbian (i.e. Љ Њ Ђ Ћ Џ Ј), distancing it from the Church Slavonic alphabet in use prior to the reform. Today, many languages in the Balkans, Eastern Europe, and northern Eurasia are written in Cyrillic alphabets.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Cyrillic script spread throughout the East Slavic and some South Slavic territories, being adopted for writing local languages, such as Old East Slavic. Its adaptation to local languages produced a number of Cyrillic alphabets, discussed below.",
"title": "Letters"
},
{
"paragraph_id": 16,
"text": "Capital and lowercase letters were not distinguished in old manuscripts.",
"title": "Letters"
},
{
"paragraph_id": 17,
"text": "Yeri (Ы) was originally a ligature of Yer and I (Ъ + І = Ы). Iotation was indicated by ligatures formed with the letter І: Ꙗ (not an ancestor of modern Ya, Я, which is derived from Ѧ), Ѥ, Ю (ligature of І and ОУ), Ѩ, Ѭ. Sometimes different letters were used interchangeably, for example И = І = Ї, as were typographical variants like О = Ѻ. There were also commonly used ligatures like ѠТ = Ѿ.",
"title": "Letters"
},
{
"paragraph_id": 18,
"text": "The letters also had numeric values, based not on Cyrillic alphabetical order, but inherited from the letters' Greek ancestors.",
"title": "Letters"
},
{
"paragraph_id": 19,
"text": "The early Cyrillic alphabet is difficult to represent on computers. Many of the letterforms differed from those of modern Cyrillic, varied a great deal in manuscripts, and changed over time. Few fonts include glyphs sufficient to reproduce the alphabet. In accordance with Unicode policy, the standard does not include letterform variations or ligatures found in manuscript sources unless they can be shown to conform to the Unicode definition of a character.",
"title": "Letters"
},
{
"paragraph_id": 20,
"text": "The Unicode 5.1 standard, released on 4 April 2008, greatly improved computer support for the early Cyrillic and the modern Church Slavonic language. In Microsoft Windows, the Segoe UI user interface font is notable for having complete support for the archaic Cyrillic letters since Windows 8.",
"title": "Letters"
},
{
"paragraph_id": 21,
"text": "Some currency signs have derived from Cyrillic letters:",
"title": "Letters"
},
{
"paragraph_id": 22,
"text": "The development of Cyrillic typography passed directly from the medieval stage to the late Baroque, without a Renaissance phase as in Western Europe. Late Medieval Cyrillic letters (categorized as vyaz' and still found on many icon inscriptions today) show a marked tendency to be very tall and narrow, with strokes often shared between adjacent letters.",
"title": "Letterforms and typography"
},
{
"paragraph_id": 23,
"text": "Peter the Great, Tsar of Russia, mandated the use of westernized letter forms (ru) in the early 18th century. Over time, these were largely adopted in the other languages that use the script. Thus, unlike the majority of modern Greek fonts that retained their own set of design principles for lower-case letters (such as the placement of serifs, the shapes of stroke ends, and stroke-thickness rules, although Greek capital letters do use Latin design principles), modern Cyrillic fonts are much the same as modern Latin fonts of the same font family. The development of some Cyrillic computer typefaces from Latin ones has also contributed to the visual Latinization of Cyrillic type.",
"title": "Letterforms and typography"
},
{
"paragraph_id": 24,
"text": "Cyrillic uppercase and lowercase letter forms are not as differentiated as in Latin typography. Upright Cyrillic lowercase letters are essentially small capitals (with exceptions: Cyrillic ⟨а⟩, ⟨е⟩, ⟨і⟩, ⟨ј⟩, ⟨р⟩, and ⟨у⟩ adopted Western lowercase shapes, lowercase ⟨ф⟩ is typically designed under the influence of Latin ⟨p⟩, lowercase ⟨б⟩, ⟨ђ⟩ and ⟨ћ⟩ are traditional handwritten forms), although a good-quality Cyrillic typeface will still include separate small-caps glyphs.",
"title": "Letterforms and typography"
},
{
"paragraph_id": 25,
"text": "Cyrillic fonts, as well as Latin ones, have roman and italic types (practically all popular modern fonts include parallel sets of Latin and Cyrillic letters, where many glyphs, uppercase as well as lowercase, are shared by both). However, the native font terminology in most Slavic languages (for example, in Russian) does not use the words \"roman\" and \"italic\" in this sense. Instead, the nomenclature follows German naming patterns:",
"title": "Letterforms and typography"
},
{
"paragraph_id": 26,
"text": "Similarly to Latin fonts, italic and cursive types of many Cyrillic letters (typically lowercase; uppercase only for handwritten or stylish types) are very different from their upright roman types. In certain cases, the correspondence between uppercase and lowercase glyphs does not coincide in Latin and Cyrillic fonts: for example, italic Cyrillic ⟨т⟩ is the lowercase counterpart of ⟨Т⟩ not of ⟨М⟩.",
"title": "Letterforms and typography"
},
{
"paragraph_id": 27,
"text": "Note: in some fonts or styles, ⟨д⟩, i.e. the lowercase italic Cyrillic ⟨д⟩, may look like Latin ⟨g⟩, and ⟨т⟩, i.e. lowercase italic Cyrillic ⟨т⟩, may look like small-capital italic ⟨T⟩.",
"title": "Letterforms and typography"
},
{
"paragraph_id": 28,
"text": "In Standard Serbian, as well as in Macedonian, some italic and cursive letters are allowed to be different, to more closely resemble the handwritten letters. The regular (upright) shapes are generally standardized in small caps form.",
"title": "Letterforms and typography"
},
{
"paragraph_id": 29,
"text": "Notes: Depending on fonts available, the Serbian row may appear identical to the Russian row. Unicode approximations are used in the faux row to ensure it can be rendered properly across all systems.",
"title": "Letterforms and typography"
},
{
"paragraph_id": 30,
"text": "In Bulgarian typography, many lowercase letterforms may more closely resemble the cursive forms on the one hand and Latin glyphs on the other hand, e.g. by having an ascender or descender or by using rounded arcs instead of sharp corners. Sometimes, uppercase letters may have a different shape as well, e.g. more triangular, Д and Л, like Greek delta Δ and lambda Λ.",
"title": "Letterforms and typography"
},
{
"paragraph_id": 31,
"text": "Notes: Depending on fonts available, the Bulgarian row may appear identical to the Russian row. Unicode approximations are used in the faux row to ensure it can be rendered properly across all systems; in some cases, such as ж with k-like ascender, no such approximation exists.",
"title": "Letterforms and typography"
},
{
"paragraph_id": 32,
"text": "Computer fonts typically default to the Central/Eastern, Russian letterforms, and require the use of OpenType Layout (OTL) features to display the Western, Bulgarian or Southern, Serbian/Macedonian forms. Depending on the choices of the font manufacturer, they may either be automatically activated by the local variant locl feature for text tagged with an appropriate language code, or the author needs to opt-in by activating a stylistic set ss## or character variant cv## feature. These solutions only enjoy partial support and may render with default glyphs in certain software configurations.",
"title": "Letterforms and typography"
},
{
"paragraph_id": 33,
"text": "Among others, Cyrillic is the standard script for writing the following languages:",
"title": "Cyrillic alphabets"
},
{
"paragraph_id": 34,
"text": "The Cyrillic script has also been used for languages of Alaska, Slavic Europe (except for Western Slavic and some Southern Slavic), the Caucasus, the languages of Idel-Ural, Siberia, and the Russian Far East.",
"title": "Cyrillic alphabets"
},
{
"paragraph_id": 35,
"text": "The first alphabet derived from Cyrillic was Abur, used for the Komi language. Other Cyrillic alphabets include the Molodtsov alphabet for the Komi language and various alphabets for Caucasian languages.",
"title": "Cyrillic alphabets"
},
{
"paragraph_id": 36,
"text": "A number of languages written in a Cyrillic alphabet have also been written in a Latin alphabet, such as Azerbaijani, Uzbek, Serbian, and Romanian (in the Republic of Moldova until 1989 and in the Danubian Principalities throughout the 19th century). After the disintegration of the Soviet Union in 1991, some of the former republics officially shifted from Cyrillic to Latin. The transition is complete in most of Moldova (except the breakaway region of Transnistria, where Moldovan Cyrillic is official), Turkmenistan, and Azerbaijan. Uzbekistan still uses both systems, and Kazakhstan has officially begun a transition from Cyrillic to Latin (scheduled to be complete by 2025). The Russian government has mandated that Cyrillic must be used for all public communications in all federal subjects of Russia, to promote closer ties across the federation. This act was controversial for speakers of many Slavic languages; for others, such as Chechen and Ingush speakers, the law had political ramifications. For example, the separatist Chechen government mandated a Latin script which is still used by many Chechens.",
"title": "Usage of Cyrillic versus other scripts"
},
{
"paragraph_id": 37,
"text": "Standard Serbian uses both the Cyrillic and Latin scripts. Cyrillic is nominally the official script of Serbia's administration according to the Serbian constitution; however, the law does not regulate scripts in standard language, or standard language itself by any means. In practice the scripts are equal, with Latin being used more often in a less official capacity.",
"title": "Usage of Cyrillic versus other scripts"
},
{
"paragraph_id": 38,
"text": "The Zhuang alphabet, used between the 1950s and 1980s in portions of the People's Republic of China, used a mixture of Latin, phonetic, numeral-based, and Cyrillic letters. The non-Latin letters, including Cyrillic, were removed from the alphabet in 1982 and replaced with Latin letters that closely resembled the letters they replaced.",
"title": "Usage of Cyrillic versus other scripts"
},
{
"paragraph_id": 39,
"text": "There are various systems for romanization of Cyrillic text, including transliteration to convey Cyrillic spelling in Latin letters, and transcription to convey pronunciation.",
"title": "Usage of Cyrillic versus other scripts"
},
{
"paragraph_id": 40,
"text": "Standard Cyrillic-to-Latin transliteration systems include:",
"title": "Usage of Cyrillic versus other scripts"
},
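As a concrete illustration of how character-level transliteration works, here is a toy Python sketch. The mapping covers only a handful of Russian Cyrillic letters and loosely follows the scientific-transliteration convention; it is an illustrative assumption, not a faithful implementation of ISO 9 or any other particular standard.

# Toy character-level Cyrillic-to-Latin transliteration. The table below is
# a small illustrative subset loosely following scientific transliteration;
# real standards (ISO 9, BGN/PCGN, etc.) differ in details and coverage.
TRANSLIT = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d",
    "е": "e", "ж": "ž", "з": "z", "и": "i", "й": "j",
    "к": "k", "л": "l", "м": "m", "н": "n", "о": "o",
    "п": "p", "р": "r", "с": "s", "т": "t", "у": "u",
    "ф": "f", "х": "x", "ц": "c", "ч": "č", "ш": "š",
}

def transliterate(text):
    # Unmapped characters pass through unchanged; uppercase input is
    # lowercased for lookup and the case is restored afterwards.
    out = []
    for ch in text:
        latin = TRANSLIT.get(ch.lower(), ch)
        out.append(latin.upper() if ch.isupper() else latin)
    return "".join(out)

print(transliterate("Москва"))  # prints "Moskva"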
{
"paragraph_id": 41,
"text": "See also Romanization of Belarusian, Bulgarian, Kyrgyz, Russian, Macedonian and Ukrainian.",
"title": "Usage of Cyrillic versus other scripts"
},
{
"paragraph_id": 42,
"text": "Representing other writing systems with Cyrillic letters is called Cyrillization.",
"title": "Usage of Cyrillic versus other scripts"
},
{
"paragraph_id": 43,
"text": "As of Unicode version 15.1, Cyrillic letters, including national and historical alphabets, are encoded across several blocks:",
"title": "Computer encoding"
},
{
"paragraph_id": 44,
"text": "The characters in the range U+0400 to U+045F are essentially the characters from ISO 8859-5 moved upward by 864 positions. The characters in the range U+0460 to U+0489 are historic letters, not used now. The characters in the range U+048A to U+052F are additional letters for various languages that are written with Cyrillic script.",
"title": "Computer encoding"
},
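The 864-position offset noted above is easy to verify with plain Python (no third-party packages): decoding a byte in ISO 8859-5's contiguous Cyrillic letter range yields a code point exactly 864 higher. ("Essentially" matters here: a few positions, such as the soft hyphen at 0xAD and the numero sign at 0xF0, break the pattern.)

# Verify the offset between ISO 8859-5 and the Unicode Cyrillic block:
# bytes in the contiguous letter range decode to code points 864 higher.
for byte in (0xB0, 0xC0, 0xD0, 0xE0):
    ch = bytes([byte]).decode("iso8859_5")
    assert ord(ch) == byte + 864
    print(f"ISO 8859-5 0x{byte:02X} -> U+{ord(ch):04X} ({ch})")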
{
"paragraph_id": 45,
"text": "Unicode as a general rule does not include accented Cyrillic letters. A few exceptions include:",
"title": "Computer encoding"
},
{
"paragraph_id": 46,
"text": "To indicate stressed or long vowels, combining diacritical marks can be used after the respective letter (for example, U+0301 ◌́ COMBINING ACUTE ACCENT: е́ у́ э́ etc.).",
"title": "Computer encoding"
},
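A short plain-Python demonstration of this convention: the combining accent is simply appended after the base vowel, and because Unicode defines no precomposed "Cyrillic small letter ie with acute", normalization leaves the sequence as two code points.

import unicodedata

# Mark stress on Cyrillic е by appending U+0301 COMBINING ACUTE ACCENT.
stressed = "е" + "\u0301"
print(stressed)                 # renders as е́ (one glyph, two code points)
print(len(stressed))            # 2

# NFC composition cannot merge the pair, since no precomposed
# "Cyrillic small letter ie with acute" exists in Unicode.
print(len(unicodedata.normalize("NFC", stressed)))  # still 2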
{
"paragraph_id": 47,
"text": "Some languages, including Church Slavonic, are still not fully supported.",
"title": "Computer encoding"
},
{
"paragraph_id": 48,
"text": "Unicode 5.1, released on 4 April 2008, introduces major changes to the Cyrillic blocks. Revisions to the existing Cyrillic blocks, and the addition of Cyrillic Extended A (2DE0 ... 2DFF) and Cyrillic Extended B (A640 ... A69F), significantly improve support for the early Cyrillic alphabet, Abkhaz, Aleut, Chuvash, Kurdish, and Moksha.",
"title": "Computer encoding"
},
{
"paragraph_id": 49,
"text": "Other character encoding systems for Cyrillic:",
"title": "Computer encoding"
},
{
"paragraph_id": 50,
"text": "Each language has its own standard keyboard layout, adopted from typewriters. With the flexibility of computer input methods, there are also transliterating or phonetic/homophonic keyboard layouts made for typists who are more familiar with other layouts, like the common English QWERTY keyboard. When practical Cyrillic keyboard layouts or fonts are unavailable, computer users sometimes use transliteration or look-alike \"volapuk\" encoding to type in languages that are normally written with the Cyrillic alphabet.",
"title": "Computer encoding"
}
] | The Cyrillic script, Slavonic script or simply Slavic script is a writing system used for various languages across Eurasia. It is the designated national script in various Slavic, Turkic, Mongolic, Uralic, Caucasian and Iranic-speaking countries in Southeastern Europe, Eastern Europe, the Caucasus, Central Asia, North Asia, and East Asia, and used by many other minority languages. As of 2019, around 250 million people in Eurasia use Cyrillic as the official script for their national languages, with Russia accounting for about half of them. With the accession of Bulgaria to the European Union on 1 January 2007, Cyrillic became the third official script of the European Union, following the Latin and Greek alphabets. The Early Cyrillic alphabet was developed during the 9th century AD at the Preslav Literary School in the First Bulgarian Empire during the reign of Tsar Simeon I the Great, probably by the disciples of the two Byzantine brothers Cyril and Methodius, who had previously created the Glagolitic script. Among them were Clement of Ohrid, Naum of Preslav, Angelar, Sava and other scholars. The script is named in honor of Saint Cyril. | 2001-09-20T19:26:26Z | 2023-12-16T16:33:20Z | [
"Template:Unicode version",
"Template:Div col",
"Template:Respell",
"Template:As of",
"Template:Not a typo",
"Template:CSS image crop",
"Template:Cyrillic alphabet navbox",
"Template:Small",
"Template:Refbegin",
"Template:ISSN",
"Template:Navboxes",
"Template:Wiktionary",
"Template:Redirect2",
"Template:See also",
"Template:Use dmy dates",
"Template:Cite book",
"Template:Cite news",
"Template:Commons category",
"Template:IPAc-en",
"Template:ODB",
"Template:Cyrillization",
"Template:Infobox writing system",
"Template:Angle bracket",
"Template:Unichar",
"Template:Citation",
"Template:Southeastern Europe in the Middle Ages, 500–1250",
"Template:Refend",
"Template:Cite podcast",
"Template:More citation needed",
"Template:Main",
"Template:Script",
"Template:Lang",
"Template:Reflist",
"Template:ISBN",
"Template:Sfn",
"Template:Alphabet",
"Template:Cite web",
"Template:Snd",
"Template:Anchor",
"Template:Cite journal",
"Template:Authority control",
"Template:Webarchive",
"Template:List of writing systems",
"Template:Short description",
"Template:Citation needed",
"Template:Legend",
"Template:Portal",
"Template:Div col end",
"Template:Notelist"
] | https://en.wikipedia.org/wiki/Cyrillic_script |
5,641 | Consonant | In articulatory phonetics, a consonant is a speech sound that is articulated with complete or partial closure of the vocal tract, except for the h, which is pronounced without any stricture in the vocal tract. Examples are [p] and [b], pronounced with the lips; [t] and [d], pronounced with the front of the tongue; [k] and [g], pronounced with the back of the tongue; [h], pronounced in the throat; [f], [v], and [s], pronounced by forcing air through a narrow channel (fricatives); and [m] and [n], which have air flowing through the nose (nasals). Contrasting with consonants are vowels.
Since the number of speech sounds in the world's languages is much greater than the number of letters in any one alphabet, linguists have devised systems such as the International Phonetic Alphabet (IPA) to assign a unique and unambiguous symbol to each attested consonant. The English alphabet has fewer consonant letters than the English language has consonant sounds, so digraphs like ⟨ch⟩, ⟨sh⟩, ⟨th⟩, and ⟨ng⟩ are used to extend the alphabet, though some letters and digraphs represent more than one consonant. For example, the sound spelled ⟨th⟩ in "this" is a different consonant from the ⟨th⟩ sound in "thin". (In the IPA, these are [ð] and [θ], respectively.)
The word consonant comes from Latin oblique stem cōnsonant-, from cōnsonāns 'sounding-together', a calque of Greek σύμφωνον sýmphōnon (plural sýmphōna, σύμφωνα).
Dionysius Thrax calls consonants sýmphōna (σύμφωνα 'sounded with') because in Greek they can only be pronounced with a vowel. He divides them into two subcategories: hēmíphōna (ἡμίφωνα 'half-sounded'), which are the continuants, and áphōna (ἄφωνα 'unsounded'), which correspond to plosives.
This description does not apply to some languages, such as the Salishan languages, in which plosives may occur without vowels (see Nuxalk), and the modern concept of "consonant" does not require co-occurrence with a vowel.
The word consonant may be used ambiguously for both speech sounds and the letters of the alphabet used to write them. In English, these letters are B, C, D, F, G, J, K, L, M, N, P, Q, S, T, V, X, Z and often H, R, W, Y.
In English orthography, the letters H, R, W, Y and the digraph GH are used for both consonants and vowels. For instance, the letter Y stands for the consonant/semi-vowel /j/ in yoke, the vowel /ɪ/ in myth, the vowel /i/ in funny, the diphthong /aɪ/ in sky, and forms several digraphs for other diphthongs, such as say, boy, key. Similarly, R commonly indicates or modifies a vowel in non-rhotic accents.
This article is concerned with consonant sounds, however they are written.
Consonants and vowels correspond to distinct parts of a syllable: The most sonorous part of the syllable (that is, the part that is easiest to sing), called the syllabic peak or nucleus, is typically a vowel, while the less sonorous margins (called the onset and coda) are typically consonants. Such syllables may be abbreviated CV, V, and CVC, where C stands for consonant and V stands for vowel. This can be argued to be the only pattern found in most of the world's languages, and perhaps the primary pattern in all of them. However, the distinction between consonant and vowel is not always clear cut: there are syllabic consonants and non-syllabic vowels in many of the world's languages.
One blurry area is in segments variously called semivowels, semiconsonants, or glides. On one side, there are vowel-like segments that are not in themselves syllabic, but form diphthongs as part of the syllable nucleus, as the i in English boil [ˈbɔɪ̯l]. On the other, there are approximants that behave like consonants in forming onsets, but are articulated very much like vowels, as the y in English yes [ˈjɛs]. Some phonologists model these as both being the underlying vowel /i/, so that the English word bit would phonemically be /bit/, beet would be /bii̯t/, and yield would be phonemically /i̯ii̯ld/. Likewise, foot would be /fut/, food would be /fuu̯d/, wood would be /u̯ud/, and wooed would be /u̯uu̯d/. However, there is a (perhaps allophonic) difference in articulation between these segments, with the [j] in [ˈjɛs] yes and [ˈjiʲld] yield and the [w] of [ˈwuʷd] wooed having more constriction and a more definite place of articulation than the [ɪ] in [ˈbɔɪ̯l] boil or [ˈbɪt] bit or the [ʊ] of [ˈfʊt] foot.
The other problematic area is that of syllabic consonants, segments articulated as consonants but occupying the nucleus of a syllable. This may be the case for words such as church in rhotic dialects of English, although phoneticians differ in whether they consider this to be a syllabic consonant, /ˈtʃɹ̩tʃ/, or a rhotic vowel, /ˈtʃɝtʃ/: Some distinguish an approximant /ɹ/ that corresponds to a vowel /ɝ/, for rural as /ˈɹɝl/ or [ˈɹʷɝːl̩]; others see these as a single phoneme, /ˈɹɹ̩l/.
Other languages use fricative and often trilled segments as syllabic nuclei, as in Czech and in several languages of the Democratic Republic of the Congo and of China, including Mandarin Chinese. In Mandarin, they are historically allophones of /i/, and spelled that way in Pinyin. Ladefoged and Maddieson call these "fricative vowels" and say that "they can usually be thought of as syllabic fricatives that are allophones of vowels". That is, phonetically they are consonants, but phonemically they behave as vowels.
Many Slavic languages allow the trill [r̩] and the lateral [l̩] as syllabic nuclei (see Words without vowels). In languages like Nuxalk, it is difficult to know what the nucleus of a syllable is, or if all syllables even have nuclei. If the concept of 'syllable' applies in Nuxalk, there are syllabic consonants in words like /sx̩s/ (/s̩xs̩/?) 'seal fat'. Miyako in Japan is similar, with /f̩ks̩/ 'to build' and /ps̩ks̩/ 'to pull'.
Each spoken consonant can be distinguished by several phonetic features:
All English consonants can be classified by a combination of these features, such as "voiceless alveolar stop" [t]. In this case, the airstream mechanism is omitted because the default pulmonic egressive mechanism is assumed.
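To make the classification scheme concrete, here is a small Python sketch representing consonants as (voicing, place, manner) feature bundles. The inventory is a tiny illustrative subset of English consonants, not a complete IPA chart.

# Represent each consonant as a bundle of phonetic features, as described
# above. Only a handful of English consonants are included for illustration.
CONSONANTS = {
    "p": ("voiceless", "bilabial", "stop"),
    "b": ("voiced",    "bilabial", "stop"),
    "t": ("voiceless", "alveolar", "stop"),
    "d": ("voiced",    "alveolar", "stop"),
    "s": ("voiceless", "alveolar", "fricative"),
    "z": ("voiced",    "alveolar", "fricative"),
    "m": ("voiced",    "bilabial", "nasal"),
    "n": ("voiced",    "alveolar", "nasal"),
}

def describe(symbol):
    voicing, place, manner = CONSONANTS[symbol]
    return f"[{symbol}] is a {voicing} {place} {manner}"

print(describe("t"))  # prints "[t] is a voiceless alveolar stop"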
Some pairs of consonants like p::b, t::d are sometimes called fortis and lenis, but this is a phonological rather than phonetic distinction.
Consonants are organized by their features in a number of IPA charts:
The recently extinct Ubykh language had only 2 or 3 vowels but 84 consonants; the Taa language has 87 consonants under one analysis, 164 under another, plus some 30 vowels and tone. The types of consonants used in various languages are by no means universal. For instance, nearly all Australian languages lack fricatives; a large percentage of the world's languages lack voiced stops such as /b/, /d/, /ɡ/ as phonemes, though they may appear phonetically. Most languages, however, do include one or more fricatives, with /s/ being the most common, and a liquid consonant or two, with /l/ the most common. The approximant /w/ is also widespread, and virtually all languages have one or more nasals, though a very few, such as the Central dialect of Rotokas, lack even these. This last language has the smallest number of consonants in the world, with just six.
In rhotic American English, the consonants spoken most frequently are /n, ɹ, t/. (/ɹ/ is less common in non-rhotic accents.) The most frequent consonant in many other languages is /p/.
The most universal consonants around the world (that is, the ones appearing in nearly all languages) are the three voiceless stops /p/, /t/, /k/, and the two nasals /m/, /n/. However, even these common five are not completely universal. Several languages in the vicinity of the Sahara Desert, including Arabic, lack /p/. Several languages of North America, such as Mohawk, lack both of the labials /p/ and /m/. The Wichita language of Oklahoma and some West African languages, such as Ijo, lack the consonant /n/ on a phonemic level, but do use it phonetically, as an allophone of another consonant (of /l/ in the case of Ijo, and of /ɾ/ in Wichita). A few languages on Bougainville Island and around Puget Sound, such as Makah, lack both of the nasals [m] and [n] altogether, except in special speech registers such as baby-talk. The 'click language' Nǁng lacks /t/, and colloquial Samoan lacks both alveolars, /t/ and /n/. Despite the 80-odd consonants of Ubykh, it lacks the plain velar /k/ in native words, as do the related Adyghe and Kabardian languages. But with a few striking exceptions, such as Xavante and Tahitian—which have no dorsal consonants whatsoever—nearly all other languages have at least one velar consonant: most of the few languages that do not have a simple /k/ (that is, a sound that is generally pronounced [k]) have a consonant that is very similar. For instance, an areal feature of the Pacific Northwest coast is that historical *k has become palatalized in many languages, so that Saanich for example has /tʃ/ and /kʷ/ but no plain /k/; similarly, historical *k in the Northwest Caucasian languages became palatalized to /kʲ/ in extinct Ubykh and to /tʃ/ in most Circassian dialects. | [
{
"paragraph_id": 0,
"text": "In articulatory phonetics, a consonant is a speech sound that is articulated with complete or partial closure of the vocal tract, except for the h, which is pronounced without any stricture in the vocal tract. Examples are [p] and [b], pronounced with the lips; [t] and [d], pronounced with the front of the tongue; [k] and [g], pronounced with the back of the tongue; [h], pronounced in the throat; [f], [v], and [s], pronounced by forcing air through a narrow channel (fricatives); and [m] and [n], which have air flowing through the nose (nasals). Contrasting with consonants are vowels.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Since the number of speech sounds in the world's languages is much greater than the number of letters in any one alphabet, linguists have devised systems such as the International Phonetic Alphabet (IPA) to assign a unique and unambiguous symbol to each attested consonant. The English alphabet has fewer consonant letters than the English language has consonant sounds, so digraphs like ⟨ch⟩, ⟨sh⟩, ⟨th⟩, and ⟨ng⟩ are used to extend the alphabet, though some letters and digraphs represent more than one consonant. For example, the sound spelled ⟨th⟩ in \"this\" is a different consonant from the ⟨th⟩ sound in \"thin\". (In the IPA, these are [ð] and [θ], respectively.)",
"title": ""
},
{
"paragraph_id": 2,
"text": "The word consonant comes from Latin oblique stem cōnsonant-, from cōnsonāns 'sounding-together', a calque of Greek σύμφωνον sýmphōnon (plural sýmphōna, σύμφωνα).",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "Dionysius Thrax calls consonants sýmphōna (σύμφωνα 'sounded with') because in Greek they can only be pronounced with a vowel. He divides them into two subcategories: hēmíphōna (ἡμίφωνα 'half-sounded'), which are the continuants, and áphōna (ἄφωνος 'unsounded'), which correspond to plosives.",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "This description does not apply to some languages, such as the Salishan languages, in which plosives may occur without vowels (see Nuxalk), and the modern concept of \"consonant\" does not require co-occurrence with a vowel.",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "The word consonant may be used ambiguously for both speech sounds and the letters of the alphabet used to write them. In English, these letters are B, C, D, F, G, J, K, L, M, N, P, Q, S, T, V, X, Z and often H, R, W, Y.",
"title": "Consonant sounds and consonant letters"
},
{
"paragraph_id": 6,
"text": "In English orthography, the letters H, R, W, Y and the digraph GH are used for both consonants and vowels. For instance, the letter Y stands for the consonant/semi-vowel /j/ in yoke, the vowel /ɪ/ in myth, the vowel /i/ in funny, the diphthong /aɪ/ in sky, and forms several digraphs for other diphthongs, such as say, boy, key. Similarly, R commonly indicates or modifies a vowel in non-rhotic accents.",
"title": "Consonant sounds and consonant letters"
},
{
"paragraph_id": 7,
"text": "This article is concerned with consonant sounds, however they are written.",
"title": "Consonant sounds and consonant letters"
},
{
"paragraph_id": 8,
"text": "Consonants and vowels correspond to distinct parts of a syllable: The most sonorous part of the syllable (that is, the part that is easiest to sing), called the syllabic peak or nucleus, is typically a vowel, while the less sonorous margins (called the onset and coda) are typically consonants. Such syllables may be abbreviated CV, V, and CVC, where C stands for consonant and V stands for vowel. This can be argued to be the only pattern found in most of the world's languages, and perhaps the primary pattern in all of them. However, the distinction between consonant and vowel is not always clear cut: there are syllabic consonants and non-syllabic vowels in many of the world's languages.",
"title": "Consonants versus vowels"
},
{
"paragraph_id": 9,
"text": "One blurry area is in segments variously called semivowels, semiconsonants, or glides. On one side, there are vowel-like segments that are not in themselves syllabic, but form diphthongs as part of the syllable nucleus, as the i in English boil [ˈbɔɪ̯l]. On the other, there are approximants that behave like consonants in forming onsets, but are articulated very much like vowels, as the y in English yes [ˈjɛs]. Some phonologists model these as both being the underlying vowel /i/, so that the English word bit would phonemically be /bit/, beet would be /bii̯t/, and yield would be phonemically /i̯ii̯ld/. Likewise, foot would be /fut/, food would be /fuu̯d/, wood would be /u̯ud/, and wooed would be /u̯uu̯d/. However, there is a (perhaps allophonic) difference in articulation between these segments, with the [j] in [ˈjɛs] yes and [ˈjiʲld] yield and the [w] of [ˈwuʷd] wooed having more constriction and a more definite place of articulation than the [ɪ] in [ˈbɔɪ̯l] boil or [ˈbɪt] bit or the [ʊ] of [ˈfʊt] foot.",
"title": "Consonants versus vowels"
},
{
"paragraph_id": 10,
"text": "The other problematic area is that of syllabic consonants, segments articulated as consonants but occupying the nucleus of a syllable. This may be the case for words such as church in rhotic dialects of English, although phoneticians differ in whether they consider this to be a syllabic consonant, /ˈtʃɹ̩tʃ/, or a rhotic vowel, /ˈtʃɝtʃ/: Some distinguish an approximant /ɹ/ that corresponds to a vowel /ɝ/, for rural as /ˈɹɝl/ or [ˈɹʷɝːl̩]; others see these as a single phoneme, /ˈɹɹ̩l/.",
"title": "Consonants versus vowels"
},
{
"paragraph_id": 11,
"text": "Other languages use fricative and often trilled segments as syllabic nuclei, as in Czech and several languages in Democratic Republic of the Congo, and China, including Mandarin Chinese. In Mandarin, they are historically allophones of /i/, and spelled that way in Pinyin. Ladefoged and Maddieson call these \"fricative vowels\" and say that \"they can usually be thought of as syllabic fricatives that are allophones of vowels\". That is, phonetically they are consonants, but phonemically they behave as vowels.",
"title": "Consonants versus vowels"
},
{
"paragraph_id": 12,
"text": "Many Slavic languages allow the trill [r̩] and the lateral [l̩] as syllabic nuclei (see Words without vowels). In languages like Nuxalk, it is difficult to know what the nucleus of a syllable is, or if all syllables even have nuclei. If the concept of 'syllable' applies in Nuxalk, there are syllabic consonants in words like /sx̩s/ (/s̩xs̩/?) 'seal fat'. Miyako in Japan is similar, with /f̩ks̩/ 'to build' and /ps̩ks̩/ 'to pull'.",
"title": "Consonants versus vowels"
},
{
"paragraph_id": 13,
"text": "Each spoken consonant can be distinguished by several phonetic features:",
"title": "Consonants versus vowels"
},
{
"paragraph_id": 14,
"text": "All English consonants can be classified by a combination of these features, such as \"voiceless alveolar stop\" [t]. In this case, the airstream mechanism is omitted.",
"title": "Consonants versus vowels"
},
{
"paragraph_id": 15,
"text": "Some pairs of consonants like p::b, t::d are sometimes called fortis and lenis, but this is a phonological rather than phonetic distinction.",
"title": "Consonants versus vowels"
},
{
"paragraph_id": 16,
"text": "Consonants are scheduled by their features in a number of IPA charts:",
"title": "Consonants versus vowels"
},
{
"paragraph_id": 17,
"text": "The recently extinct Ubykh language had only 2 or 3 vowels but 84 consonants; the Taa language has 87 consonants under one analysis, 164 under another, plus some 30 vowels and tone. The types of consonants used in various languages are by no means universal. For instance, nearly all Australian languages lack fricatives; a large percentage of the world's languages lack voiced stops such as /b/, /d/, /ɡ/ as phonemes, though they may appear phonetically. Most languages, however, do include one or more fricatives, with /s/ being the most common, and a liquid consonant or two, with /l/ the most common. The approximant /w/ is also widespread, and virtually all languages have one or more nasals, though a very few, such as the Central dialect of Rotokas, lack even these. This last language has the smallest number of consonants in the world, with just six.",
"title": "Examples"
},
{
"paragraph_id": 18,
"text": "In rhotic American English, the consonants spoken most frequently are /n, ɹ, t/. (/ɹ/ is less common in non-rhotic accents.) The most frequent consonant in many other languages is /p/.",
"title": "Examples"
},
{
"paragraph_id": 19,
"text": "The most universal consonants around the world (that is, the ones appearing in nearly all languages) are the three voiceless stops /p/, /t/, /k/, and the two nasals /m/, /n/. However, even these common five are not completely universal. Several languages in the vicinity of the Sahara Desert, including Arabic, lack /p/. Several languages of North America, such as Mohawk, lack both of the labials /p/ and /m/. The Wichita language of Oklahoma and some West African languages, such as Ijo, lack the consonant /n/ on a phonemic level, but do use it phonetically, as an allophone of another consonant (of /l/ in the case of Ijo, and of /ɾ/ in Wichita). A few languages on Bougainville Island and around Puget Sound, such as Makah, lack both of the nasals [m] and [n] altogether, except in special speech registers such as baby-talk. The 'click language' Nǁng lacks /t/, and colloquial Samoan lacks both alveolars, /t/ and /n/. Despite the 80-odd consonants of Ubykh, it lacks the plain velar /k/ in native words, as do the related Adyghe and Kabardian languages. But with a few striking exceptions, such as Xavante and Tahitian—which have no dorsal consonants whatsoever—nearly all other languages have at least one velar consonant: most of the few languages that do not have a simple /k/ (that is, a sound that is generally pronounced [k]) have a consonant that is very similar. For instance, an areal feature of the Pacific Northwest coast is that historical *k has become palatalized in many languages, so that Saanich for example has /tʃ/ and /kʷ/ but no plain /k/; similarly, historical *k in the Northwest Caucasian languages became palatalized to /kʲ/ in extinct Ubykh and to /tʃ/ in most Circassian dialects.",
"title": "Examples"
}
] | In articulatory phonetics, a consonant is a speech sound that is articulated with complete or partial closure of the vocal tract, except for the h, which is pronounced without any stricture in the vocal tract. Examples are [p] and [b], pronounced with the lips; [t] and [d], pronounced with the front of the tongue; [k] and [g], pronounced with the back of the tongue; [h], pronounced in the throat; [f], [v], and [s], pronounced by forcing air through a narrow channel (fricatives); and [m] and [n], which have air flowing through the nose (nasals). Contrasting with consonants are vowels. Since the number of speech sounds in the world's languages is much greater than the number of letters in any one alphabet, linguists have devised systems such as the International Phonetic Alphabet (IPA) to assign a unique and unambiguous symbol to each attested consonant. The English alphabet has fewer consonant letters than the English language has consonant sounds, so digraphs like ⟨ch⟩, ⟨sh⟩, ⟨th⟩, and ⟨ng⟩ are used to extend the alphabet, though some letters and digraphs represent more than one consonant. For example, the sound spelled ⟨th⟩ in "this" is a different consonant from the ⟨th⟩ sound in "thin". | 2001-09-05T21:02:52Z | 2023-12-30T15:06:59Z | [
"Template:More footnotes needed",
"Template:Angbr",
"Template:Unreferenced section",
"Template:IPA pulmonic consonants",
"Template:Reflist",
"Template:Cite web",
"Template:ISBN",
"Template:About",
"Template:Authority control",
"Template:IPA navigation",
"Template:LSJ",
"Template:Spoken Wikipedia",
"Template:Articulation navbox",
"Template:Angle bracket",
"Template:IPA",
"Template:Efn",
"Template:IPA non-pulmonic consonants",
"Template:IPA co-articulated consonants",
"Template:Notelist",
"Template:SOWL",
"Template:Short description",
"Template:Lang",
"Template:More citations needed section",
"Template:Page needed",
"Template:Commons category-inline",
"Template:IPA notice"
] | https://en.wikipedia.org/wiki/Consonant |
5,642 | Costume jewelry | Costume or fashion jewelry includes a range of decorative items worn for personal adornment that are manufactured as less expensive ornamentation to complement a particular fashionable outfit or garment as opposed to "real" (fine) jewelry, which is more costly and which may be regarded primarily as collectibles, keepsakes, or investments. From the outset, costume jewelry — also known as fashion jewelry — paralleled the styles of its more precious fine counterparts.
It is also known as artificial jewellery, imitation jewellery, imitated jewelry, trinkets, fashion jewelry, junk jewelry, fake jewelry, or fallalery.
The term costume jewelry dates back to the early 20th century. It reflects the use of the word "costume" to refer to what is now called an "outfit".
Originally, costume or fashion jewelry was made of inexpensive simulated gemstones, such as rhinestones or lucite, set in pewter, silver, nickel, or brass. During the Depression years, some manufacturers even downgraded the quality of their rhinestones to keep production costs down.
During the World War II era, sterling silver was often incorporated into costume jewelry designs primarily because:
This resulted in a number of years during which sterling silver costume jewelry was produced and some can still be found in today's vintage jewelry marketplace.
Modern costume jewelry incorporates a wide range of materials. High-end crystals, cubic zirconia simulated diamonds, and some semi-precious stones are used in place of precious stones. Metals include gold- or silver-plated brass, and sometimes vermeil or sterling silver. Lower-priced jewelry may still use gold plating over pewter, nickel, or other metals; items made in countries outside the United States may contain lead. Some pieces incorporate plastic, acrylic, leather, or wood.
Costume jewelry can be characterized by the period in history in which it was made.
The Art Deco movement was an attempt to combine the harshness of mass production with the sensitivity of art and design. It was during this period that Coco Chanel introduced costume jewelry to complete the costume. The Art Deco movement died with the onset of the Great Depression and the outbreak of World War II.
According to Schiffer, some of the characteristics of the costume jewelry in the Art Deco period were:
In the Retro period, designers struggled with the art versus mass production dilemma. Natural materials merged with plastics. The retro period primarily included American-made jewelry, which had a distinctly American look. With the war in Europe, many European jewelry firms were forced to shut down. Many European designers emigrated to the U.S. since the economy was recovering.
According to Schiffer, some of the characteristics of costume jewelry in the Retro period were:
In the Art Modern period following World War II, jewelry designs became more traditional and understated. The big, bold styles of the Retro period went out of style and were replaced by the more tailored styles of the 1950s and 1960s.
According to Schiffer, some of the characteristics of costume jewelry in the Art Modern period were:
With the advent of the Mod period came "Body Jewelry". Carl Schimel of Kim Craftsmen Jewelry was at the forefront of this style. While Kim Craftsmen closed in the early 1990s, many collectors still forage for their items at antique shows and flea markets.
Costume jewelry has been part of the culture for almost 300 years. During the 18th century, jewelers began making pieces with inexpensive glass. In the 19th century, costume jewelry made of semi-precious material came into the market. Jewels made of semi-precious material were more affordable, and this affordability gave common people the chance to own costume jewelry.
But the real golden era for costume jewelry began in the middle of the 20th century. The new middle class wanted beautiful, but affordable jewelry. The demand for jewelry of this type coincided with the machine age and the industrial revolution. The revolution made the production of carefully executed replicas of admired heirloom pieces possible.
As the class structure in America changed, so did measures of real wealth. Women in all social stations, even the working-class woman, could own a small piece of costume jewelry. The average town and countrywoman could acquire and wear a considerable amount of this mass-produced jewelry that was both affordable and stylish.
Costume jewelry was also made popular by various designers in the mid-20th century. Some of the most remembered names in costume jewelry include both the high and low priced brands: Crown Trifari, Dior, Chanel, Miriam Haskell, Monet, Napier, Corocraft, Coventry, and Kim Craftsmen.
A significant factor in the popularization of costume jewelry was Hollywood movies. The leading female stars of the 1940s and 1950s often wore and then endorsed the pieces produced by a range of designers. A filmgoer who admired a necklace worn by Bette Davis in The Private Lives of Elizabeth and Essex could buy a copy from Joseff of Hollywood, who had made the original. Stars such as Vivien Leigh, Elizabeth Taylor, and Jane Russell appeared in adverts for the pieces, and the availability of the collections in shops such as Woolworth made it possible for ordinary women to own and wear such jewelry.
Coco Chanel greatly popularized the use of faux jewelry in her years as a fashion designer, bringing costume jewelry to life with gold and faux pearls. Kenneth Jay Lane has since the 1960s been known for creating unique pieces for Jackie Onassis, Elizabeth Taylor, Diana Vreeland, and Audrey Hepburn. He is probably best known for his three-strand faux pearl necklace worn by Barbara Bush to her husband's inaugural ball.
In many instances, high-end fashion jewelry has achieved a "collectible" status and increased value over time. Today, there is a substantial secondary market for vintage fashion jewelry. The main collecting market is for 'signed pieces', that is pieces that have the maker's mark, usually stamped on the reverse. Amongst the most sought after are Miriam Haskell, Coro, Butler and Wilson, Crown Trifari, and Sphinx. However, there is also demand for good quality 'unsigned' pieces, especially if they are of an unusual design.
Costume jewelry is considered a discrete category of fashion accessory and displays many characteristics of a self-contained industry. Costume jewelry manufacturers are located throughout the world, with a particular concentration in parts of China and India, where entire citywide and region-wide economies are dominated by the trade of these goods. There has been considerable controversy in the United States and elsewhere about the lack of regulations in the manufacture of such jewelry; these concerns range from human rights issues surrounding the treatment of labor, to the use of manufacturing processes in which small, but potentially harmful, amounts of toxic metals are added during production. In 2010, the Associated Press reported that toxic levels of the metal cadmium had been found in children's jewelry. An Associated Press investigation found that some pieces contained more than 80 percent cadmium. The wider issues surrounding imports, exports, trade laws, and globalization also apply to the costume jewelry trade.
As part of the supply chain, wholesalers in the United States and other nations purchase costume jewelry from manufacturers and typically import or export it to wholesale distributors and suppliers who deal directly with retailers. Wholesale costume jewelry merchants will traditionally seek out new suppliers at trade shows. As the Internet has become increasingly important in global trade, the trade-show model has changed. Retailers can now select from a large number of wholesalers with sites on the World Wide Web. The wholesalers, in turn, buy in bulk from international suppliers that also operate on the Web, such as Chinese, Korean, Indonesian, Thai, and Indian jewelry companies offering a wide range of products. Some of these sites also market directly to consumers, who can purchase costume jewelry at greatly reduced prices. Some of these websites categorize fashion jewelry separately, while others use this term in place of costume jewelry. The trend of jewelry-making at home by hobbyists for personal enjoyment or for sale on sites like Etsy has resulted in the common practice of buying wholesale costume jewelry in bulk and using it for parts.
According to a 2011 report, demand for artificial or imitation jewelry rose by 85% due to the increase in gold prices.
{
"paragraph_id": 0,
"text": "Costume or fashion jewelry includes a range of decorative items worn for personal adornment that are manufactured as less expensive ornamentation to complement a particular fashionable outfit or garment as opposed to \"real\" (fine) jewelry, which is more costly and which may be regarded primarily as collectibles, keepsakes, or investments. From the outset, costume jewelry — also known as fashion jewelry — paralleled the styles of its more precious fine counterparts.",
"title": ""
},
{
"paragraph_id": 1,
"text": "It is also known as artificial jewellery, imitation jewellery, imitated jewelry, trinkets, fashion jewelry, junk jewelry, fake jewelry, or fallalery.",
"title": "Terminology"
},
{
"paragraph_id": 2,
"text": "The term costume jewelry dates back to the early 20th century. It reflects the use of the word \"costume\" to refer to what is now called an \"outfit\".",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "Originally, costume or fashion jewelry was made of inexpensive simulated gemstones, such as rhinestones or lucite, set in pewter, silver, nickel, or brass. During the depression years, rhinestones were even down-graded by some manufacturers to meet the cost of production.",
"title": "Components"
},
{
"paragraph_id": 4,
"text": "During the World War II era, sterling silver was often incorporated into costume jewelry designs primarily because:",
"title": "Components"
},
{
"paragraph_id": 5,
"text": "This resulted in a number of years during which sterling silver costume jewelry was produced and some can still be found in today's vintage jewelry marketplace.",
"title": "Components"
},
{
"paragraph_id": 6,
"text": "Modern costume jewelry incorporates a wide range of materials. High-end crystals, cubic zirconia simulated diamonds, and some semi-precious stones are used in place of precious stones. Metals include gold- or silver-plated brass, and sometimes vermeil or sterling silver. Lower-priced jewelry may still use gold plating over pewter, nickel, or other metals; items made in countries outside the United States may contain lead. Some pieces incorporate plastic, acrylic, leather, or wood.",
"title": "Components"
},
{
"paragraph_id": 7,
"text": "Costume jewelry can be characterized by the period in history in which it was made.",
"title": "Historical expression"
},
{
"paragraph_id": 8,
"text": "The Art Deco movement was an attempt to combine the harshness of mass production with the sensitivity of art and design. It was during this period that Coco Chanel introduced costume jewelry to complete the costume. The Art Deco movement died with the onset of the Great Depression and the outbreak of World War II.",
"title": "Historical expression"
},
{
"paragraph_id": 9,
"text": "According to Schiffer, some of the characteristics of the costume jewelry in the Art Deco period were:",
"title": "Historical expression"
},
{
"paragraph_id": 10,
"text": "In the Retro period, designers struggled with the art versus mass production dilemma. Natural materials merged with plastics. The retro period primarily included American-made jewelry, which had a distinctly American look. With the war in Europe, many European jewelry firms were forced to shut down. Many European designers emigrated to the U.S. since the economy was recovering.",
"title": "Historical expression"
},
{
"paragraph_id": 11,
"text": "According to Schiffer, some of the characteristics of costume jewelry in the Retro period were:",
"title": "Historical expression"
},
{
"paragraph_id": 12,
"text": "In the Art Modern period following World War II, jewelry designs became more traditional and understated. The big, bold styles of the Retro period went out of style and were replaced by the more tailored styles of the 1950s and 1960s.",
"title": "Historical expression"
},
{
"paragraph_id": 13,
"text": "According to Schiffer, some of the characteristics of costume jewelry in the Art Modern period were:",
"title": "Historical expression"
},
{
"paragraph_id": 14,
"text": "With the advent of the Mod period came \"Body Jewelry\". Carl Schimel of Kim Craftsmen Jewelry was at the forefront of this style. While Kim Craftsmen closed in the early 1990s, many collectors still forage for their items at antique shows and flea markets.",
"title": "Historical expression"
},
{
"paragraph_id": 15,
"text": "Costume jewelry has been part of the culture for almost 300 years. During the 18th century, jewelers began making pieces with inexpensive glass. In the 19th century, costume jewelry made of semi-precious material came into the market. Jewels made of semi-precious material were more affordable, and this affordability gave common people the chance to own costume jewelry.",
"title": "General history"
},
{
"paragraph_id": 16,
"text": "But the real golden era for costume jewelry began in the middle of the 20th century. The new middle class wanted beautiful, but affordable jewelry. The demand for jewelry of this type coincided with the machine age and the industrial revolution. The revolution made the production of carefully executed replicas of admired heirloom pieces possible.",
"title": "General history"
},
{
"paragraph_id": 17,
"text": "As the class structure in America changed, so did measures of real wealth. Women in all social stations, even the working-class woman, could own a small piece of costume jewelry. The average town and countrywoman could acquire and wear a considerable amount of this mass-produced jewelry that was both affordable and stylish.",
"title": "General history"
},
{
"paragraph_id": 18,
"text": "Costume jewelry was also made popular by various designers in the mid-20th century. Some of the most remembered names in costume jewelry include both the high and low priced brands: Crown Trifari, Dior, Chanel, Miriam Haskell, Monet, Napier, Corocraft, Coventry, and Kim Craftsmen.",
"title": "General history"
},
{
"paragraph_id": 19,
"text": "A significant factor in the popularization of costume jewelry was Hollywood movies. The leading female stars of the 1940s and 1950s often wore and then endorsed the pieces produced by a range of designers. If you admired a necklace worn by Bette Davis in The Private Lives of Elizabeth and Essex, you could buy a copy from Joseff of Hollywood, who made the original. Stars such as Vivien Leigh, Elizabeth Taylor, and Jane Russell appeared in adverts for the pieces and the availability of the collections in shops such as Woolworth made it possible for ordinary women to own and wear such jewelry.",
"title": "General history"
},
{
"paragraph_id": 20,
"text": "Coco Chanel greatly popularized the use of faux jewelry in her years as a fashion designer, bringing costume jewelry to life with gold and faux pearls. Kenneth Jay Lane has since the 1960s been known for creating unique pieces for Jackie Onassis, Elizabeth Taylor, Diana Vreeland, and Audrey Hepburn. He is probably best known for his three-strand faux pearl necklace worn by Barbara Bush to her husband's inaugural ball.",
"title": "General history"
},
{
"paragraph_id": 21,
"text": "In many instances, high-end fashion jewelry has achieved a \"collectible\" status and increased value over time. Today, there is a substantial secondary market for vintage fashion jewelry. The main collecting market is for 'signed pieces', that is pieces that have the maker's mark, usually stamped on the reverse. Amongst the most sought after are Miriam Haskell, Coro, Butler and Wilson, Crown Trifari, and Sphinx. However, there is also demand for good quality 'unsigned' pieces, especially if they are of an unusual design.",
"title": "General history"
},
{
"paragraph_id": 22,
"text": "Costume jewelry is considered a discrete category of fashion accessory and displays many characteristics of a self-contained industry. Costume jewelry manufacturers are located throughout the world, with a particular concentration in parts of China and India, where entire citywide and region-wide economies are dominated by the trade of these goods. There has been considerable controversy in the United States and elsewhere about the lack of regulations in the manufacture of such jewelry—these range from human rights issues surrounding the treatment of labor, to the use of manufacturing processes in which small, but potentially harmful, amounts of toxic metals are added during production. In 2010, the Associated Press released the story that toxic levels of the metal cadmium were found in children's jewelry. An Associated Press investigation found some pieces contained more than 80 percent of cadmium. The wider issues surrounding imports, exports, trade laws, and globalization also apply to the costume jewelry trade.",
"title": "Business and industry"
},
{
"paragraph_id": 23,
"text": "As part of the supply chain, wholesalers in the United States and other nations purchase costume jewelry from manufacturers and typically import or export it to wholesale distributors and suppliers who deal directly with retailers. Wholesale costume jewelry merchants will traditionally seek out new suppliers at trade shows. As the Internet has become increasingly important in global trade, the trade-show model has changed. Retailers can now select from a large number of wholesalers with sites on the World Wide Web. The wholesalers purchase from international suppliers who are also available on the Web from different parts of the world like Chinese, Korean, Indonesian, Thai, and Indian jewelry companies, with their wide range of products in bulk quantities. Some of these sites also market directly to consumers who can purchase costume jewelry at greatly reduced prices. Some of these websites categorize fashion jewelry separately, while others use this term in place of costume jewelry. The trend of jewelry-making at home by hobbyists for personal enjoyment or for sale on sites like Etsy has resulted in the common practice of buying wholesale costume jewelry in bulk and using it for parts.",
"title": "Business and industry"
},
{
"paragraph_id": 24,
"text": "There is a rise in demand for artificial or imitation jewelry by 85% due to the increase in gold prices, according to a 2011 report.",
"title": "Business and industry"
}
] | Costume or fashion jewelry includes a range of decorative items worn for personal adornment that are manufactured as less expensive ornamentation to complement a particular fashionable outfit or garment as opposed to "real" (fine) jewelry, which is more costly and which may be regarded primarily as collectibles, keepsakes, or investments. From the outset, costume jewelry — also known as fashion jewelry — paralleled the styles of its more precious fine counterparts. | 2023-06-13T23:57:02Z | [
"Template:Dubious",
"Template:Reflist",
"Template:Wiktionary",
"Template:Commons category",
"Template:Short description",
"Template:More citations needed",
"Template:Cite web",
"Template:Webarchive",
"Template:ISBN",
"Template:Cite news"
] | https://en.wikipedia.org/wiki/Costume_jewelry |
5,643 | Channel Islands | The Channel Islands are an archipelago in the English Channel, off the French coast of Normandy. They are divided into two Crown Dependencies: the Bailiwick of Jersey, which is the largest of the islands; and the Bailiwick of Guernsey, consisting of Guernsey, Alderney, Sark, Herm and some smaller islands. Historically, they are the remnants of the Duchy of Normandy. Although they are not part of the United Kingdom, the UK is currently responsible for the defence and international relations of the islands. The Crown Dependencies are neither members of the Commonwealth of Nations, nor part of the European Union. They have a total population of about 171,916, and the bailiwicks' capitals, Saint Helier and Saint Peter Port, have populations of 33,500 and 18,207 respectively.
"Channel Islands" is a geographical term, not a political unit. The two bailiwicks have been administered separately since the late 13th century. Each has its own independent laws, elections, and representative bodies (although in modern times, politicians from the islands' legislatures are in regular contact). Any institution common to both is the exception rather than the rule.
The Bailiwick of Guernsey is divided into three jurisdictions – Guernsey, Alderney and Sark – each with its own legislature. Although there are a few pan-island institutions (such as the Channel Islands Brussels Office, the Director of Civil Aviation and the Channel Islands Financial Ombudsman, which are actually joint ventures between the bailiwicks), these tend to be established structurally as equal projects between Guernsey and Jersey. Otherwise, entities whose names imply membership of both Guernsey and Jersey might in fact be from one bailiwick only. For instance, The International Stock Exchange is in Saint Peter Port and therefore is in Guernsey.
The term "Channel Islands" began to be used around 1830, possibly first by the Royal Navy as a collective name for the islands. The term refers only to the archipelago to the west of the Cotentin Peninsula. Other populated islands located in the English Channel, and close to the coast of Britain, such as the Isle of Wight, Hayling Island and Portsea Island, are not regarded as "Channel Islands".
The two major islands are Jersey and Guernsey. They make up 99% of the population and 92% of the area.
The names of the larger islands in the archipelago in general have the -ey suffix, whilst those of the smaller ones have the -hou suffix. These are believed to be from the Old Norse ey (island) and holmr (islet).
The Chausey Islands south of Jersey are not generally included in the geographical definition of the Channel Islands but are occasionally described in English as 'French Channel Islands' in view of their French jurisdiction. They were historically linked to the Duchy of Normandy, but they are part of the French territory along with continental Normandy, and not part of the British Isles or of the Channel Islands in a political sense. They are an incorporated part of the commune of Granville (Manche). While they are popular with visitors from France, Channel Islanders can only visit them by private or charter boats as there are no direct transport links from the other islands.
In official Jersey Standard French, the Channel Islands are called 'Îles de la Manche', while in France, the term 'Îles Anglo-normandes' (Anglo-Norman Isles) is used to refer to the British 'Channel Islands' in contrast to other islands in the Channel. Chausey is referred to as an 'Île normande' (as opposed to anglo-normande). 'Îles Normandes' and 'Archipel Normand' have also, historically, been used in Channel Island French to refer to the islands as a whole.
The very large tidal variation provides an environmentally rich inter-tidal zone around the islands, and some islands such as Burhou, the Écréhous, and the Minquiers have been designated Ramsar sites.
The waters around the islands include the following:
The highest point in the islands is Les Platons in Jersey at 143 metres (469 ft) above sea level. The lowest point is the English Channel (sea level).
The earliest evidence of human occupation of the Channel Islands has been dated to 250,000 years ago when they were attached to the landmass of continental Europe. The islands became detached by rising sea levels in the Mesolithic period. The numerous dolmens and other archaeological sites extant and recorded in history demonstrate the existence of a population large enough and organised enough to undertake constructions of considerable size and sophistication, such as the burial mound at La Hougue Bie in Jersey or the statue menhirs of Guernsey.
Hoards of Armorican coins have been excavated, providing evidence of trade and contact in the Iron Age period. Evidence for Roman settlement is sparse, although evidently the islands were visited by Roman officials and traders. The Roman name for the Channel Islands was I. Lenuri (Lenur Islands) and is included in the Peutinger Table. The traditional Latin names used for the islands (Caesarea for Jersey, Sarnia for Guernsey, Riduna for Alderney) derive (possibly mistakenly) from the Antonine Itinerary. Gallo-Roman culture was adopted to an unknown extent in the islands.
In the sixth century, Christian missionaries visited the islands. Samson of Dol, Helier, Marculf and Magloire are among saints associated with the islands. In the sixth century, they were already included in the diocese of Coutances where they remained until the Reformation.
There were probably some Celtic Britons who settled on the Islands in the 5th and 6th centuries AD (the indigenous Celts of Great Britain, and the ancestors of the modern Welsh, Cornish, and Bretons) who had emigrated from Great Britain in the face of invading Anglo-Saxons. But there were not enough of them to leave any trace, and the islands continued to be ruled by the king of the Franks and its church remained part of the diocese of Coutances.
From the beginning of the ninth century, Norse raiders appeared on the coasts. Norse settlement eventually succeeded initial attacks, and it is from this period that many place names of Norse origin appear, including the modern names of the islands.
In 933, the islands were granted to William I Longsword by Raoul, the King of Western Francia, and annexed to the Duchy of Normandy. In 1066, William II of Normandy invaded and conquered England, becoming William I of England, also known as William the Conqueror. In the period 1204–1214, King John lost the Angevin lands in northern France, including mainland Normandy, to King Philip II of France, but managed to retain control of the Channel Islands. In 1259, his successor, Henry III of England, by the Treaty of Paris, officially surrendered his claim and title to the Duchy of Normandy, while retaining the Channel Islands, as peer of France and feudal vassal of the King of France. Since then, the Channel Islands have been governed as two separate bailiwicks and were never absorbed into the Kingdom of England nor its successor kingdoms of Great Britain and the United Kingdom. During the Hundred Years' War, the Channel Islands were part of the French territory recognizing the claims of the English kings to the French throne.
The islands were invaded by the French in 1338, who held some territory until 1345. Edward III of England granted a Charter in July 1341 to Jersey, Guernsey, Sark and Alderney, confirming their customs and laws to secure allegiance to the English Crown. Owain Lawgoch, a mercenary leader of a Free Company in the service of the French Crown, attacked Jersey and Guernsey in 1372, and in 1373 Bertrand du Guesclin besieged Mont Orgueil. The young King Richard II of England reconfirmed in 1378 the Charter rights granted by his grandfather, followed in 1394 with a second Charter granting, because of great loyalty shown to the Crown, exemption for ever, from English tolls, customs and duties. Jersey was occupied by the French in 1461 as part of an exchange for helping the Lancastrians fight against the Yorkists during The War of the Roses. It was retaken by the Yorkists in 1468. In 1483 a Papal bull decreed that the islands would be neutral during time of war. This privilege of neutrality enabled islanders to trade with both France and England and was respected until 1689 when it was abolished by Order in Council following the Glorious Revolution in Great Britain.
Various attempts to transfer the islands from the diocese of Coutances (to Nantes (1400), Salisbury (1496), and Winchester (1499)) had little effect until an Order in Council of 1569 brought the islands formally into the diocese of Winchester. Control by the bishop of Winchester was ineffectual as the islands had turned overwhelmingly Calvinist and the episcopacy was not restored until 1620 in Jersey and 1663 in Guernsey.
After the loss of Calais in 1558, the Channel Islands were the last remaining English holdings in France and the only French territory that was controlled by the English kings as Kings of France. This situation lasted until the English kings dropped their title and claims to the French throne in 1801, leaving the Channel Islands as crown dependencies under the sovereignty of neither Great Britain nor France but of the British Crown directly.
Sark was uninhabited in the early 16th century, until it was colonised from Jersey in the 1560s. The grant of seigneurship from Elizabeth I of England in 1565 forms the basis of Sark's constitution today.
During the Wars of the Three Kingdoms, Jersey held out strongly for the Royalist cause, providing refuge for Charles, Prince of Wales in 1646 and 1649–1650, while the more strongly Presbyterian Guernsey generally favoured the parliamentary cause (although Castle Cornet was held by Royalists and did not surrender until October 1651).
The islands acquired commercial and political interests in the North American colonies. Islanders became involved with the Newfoundland fisheries in the 17th century. In recognition for all the help given to him during his exile in Jersey in the 1640s, Charles II gave George Carteret, Bailiff and governor, a large grant of land in the American colonies, which he promptly named New Jersey, now part of the United States of America. Sir Edmund Andros, bailiff of Guernsey, was an early colonial governor in North America, and head of the short-lived Dominion of New England.
In the late 18th century, the islands were dubbed "the French Isles". Wealthy French émigrés fleeing the French Revolution sought residency in the islands. Many of the town domiciles existing today were built at that time. In Saint Peter Port, a large part of the harbour had been built by 1865.
The islands were occupied by the German Army during World War II.
The British Government demilitarised the islands in June 1940, and the lieutenant-governors were withdrawn on 21 June, leaving the insular administrations to continue government as best they could under impending military occupation.
Evacuations took place before German troops landed between 30 June and 4 July 1940. Many young men had already left to join the Allied armed forces as volunteers. Some 6,600 out of 50,000 left Jersey, while 17,000 out of 42,000 left Guernsey. Thousands of children were evacuated with their schools to England and Scotland.
The population of Sark largely remained where it was, but in Alderney all but six people left. In Alderney, the occupying Germans built four prison camps which housed approximately 6,000 people, of whom over 700 died. Due to the destruction of documents, it is impossible to state how many forced workers died in the other islands. Alderney had the only Nazi concentration camps on British soil.
The Royal Navy blockaded the islands from time to time, particularly following the Invasion of Normandy in June 1944. There was considerable hunger and privation during the five years of German occupation, particularly in the final months when the population was close to starvation. Intense negotiations resulted in some humanitarian aid being sent via the Red Cross, leading to the arrival of Red Cross parcels in the supply ship SS Vega in December 1944.
The German occupation of 1940–45 was harsh: over 2,000 islanders were deported by the Germans, and some Jews were sent to concentration camps; partisan resistance and retribution, accusations of collaboration, and slave labour also occurred. Many Spaniards, initially refugees from the Spanish Civil War, were brought to the islands to build fortifications. Later, Russians and Central Europeans continued the work. Many land mines were laid, with 65,718 land mines laid in Jersey alone.
There was no resistance movement in the Channel Islands on the scale of that in mainland France. This has been ascribed to a range of factors including the physical separation of the islands, the density of troops (up to one German for every two Islanders), the small size of the islands precluding any hiding places for resistance groups, and the absence of the Gestapo from the occupying forces. Moreover, much of the population of military age had already joined the British Army.
The end of the occupation came after VE-Day on 8 May 1945, with Jersey and Guernsey being liberated on 9 May. The German garrison in Alderney did not surrender until 16 May, making it one of the last Nazi German garrisons to capitulate. The first evacuees returned on the first sailing from Great Britain on 23 June, but the people of Alderney were unable to start returning until December 1945. Many of the evacuees who returned home had difficulty reconnecting with their families after five years of separation.
Following the liberation of 1945, reconstruction led to a transformation of the economies of the islands, attracting immigration and developing tourism. The legislatures were reformed and non-party governments embarked on social programmes, aided by the incomes from offshore finance, which grew rapidly from the 1960s. The islands decided not to join the European Economic Community when the UK joined. Since the 1990s, declining profitability of agriculture and tourism has challenged the governments of the islands.
The Channel Islands fall into two separate self-governing bailiwicks, the Bailiwick of Guernsey and the Bailiwick of Jersey. Each of these is a British Crown Dependency, and neither is a part of the United Kingdom. They have been part of the Duchy of Normandy since the 10th century, and Queen Elizabeth II was often referred to by her traditional and conventional title of Duke of Normandy. However, pursuant to the Treaty of Paris (1259), she governed in her right as The Queen (the "Crown in right of Jersey", and the "Crown in right of the république of the Bailiwick of Guernsey"), and not as the Duke. This notwithstanding, it is a matter of local pride for monarchists to treat the situation otherwise: the Loyal toast at formal dinners was to 'The Queen, our Duke', rather than to 'Her Majesty, The Queen' as in the UK. Queen Elizabeth II died in 2022 and was succeeded by her son, Charles III.
A bailiwick is a territory administered by a bailiff. Although the words derive from a common root ('bail' = 'to give charge of'), there is a vast difference between the meanings of the word 'bailiff' in Great Britain and in the Channel Islands: a bailiff in Britain is a court-appointed private debt-collector authorised to collect judgment debts, whereas in the Channel Islands the Bailiff in each bailiwick is the civil head, the presiding officer of the States, and also head of the judiciary, and thus the most important citizen in the bailiwick.
In the early 21st century, the existence of governmental offices such as that of the bailiffs, with multiple roles straddling the different branches of government, came under increased scrutiny for apparent contravention of the doctrine of separation of powers, most notably in the Guernsey case of McGonnell -v- United Kingdom (2000) 30 EHRR 289. That case, following final judgement at the European Court of Human Rights, became part of the impetus for much recent constitutional change, particularly the Constitutional Reform Act 2005 (2005 c.4) in the UK, including the separation of the roles of the Lord Chancellor, the abolition of the House of Lords' judicial role, and its replacement by the UK Supreme Court. The islands' bailiffs, however, still retain their historic roles.
The systems of government in the islands date from Norman times, which accounts for the names of the legislatures, the States, derived from the Norman 'États' or 'estates' (i.e. the Crown, the Church, and the people). The States have evolved over the centuries into democratic parliaments.
The UK Parliament has power to legislate for the islands, but Acts of Parliament do not extend to the islands automatically. Usually, an Act gives power to extend its application to the islands by an Order in Council, after consultation. For the most part the islands legislate for themselves. Each island has its own primary legislature, known as the States of Guernsey and the States of Jersey, with Chief Pleas in Sark and the States of Alderney. The Channel Islands are not represented in the UK Parliament. Laws passed by the States are given royal assent by The King in Council, to whom the islands' governments are responsible.
The islands have never been part of the European Union, and thus did not take part in the 2016 referendum on EU membership, but were part of the Customs Territory of the European Community by virtue of Protocol Three to the Treaty on European Union. In September 2010, a Channel Islands Brussels Office was set up jointly by the two Bailiwicks to develop the Channel Islands' influence with the EU, to advise the Channel Islands' governments on European matters, and to promote economic links with the EU.
Both bailiwicks are members of the British–Irish Council, and Jèrriais and Guernésiais are recognised regional languages of the islands.
The legal courts are separate; separate courts of appeal have been in place since 1961. Among the legal heritage from Norman law is the Clameur de haro. The basis of the legal systems of both Bailiwicks is Norman customary law (Coutume) rather than the English Common Law, although elements of the latter have become established over time.
Islanders are full British citizens, but were not classed as European citizens unless they were descended from a UK national. Any British citizen who applies for a passport in Jersey or Guernsey receives a passport bearing the words "British Islands, Bailiwick of Jersey" or "British Islands, Bailiwick of Guernsey". Under the provisions of Protocol Three, Channel Islanders who did not have a close connection with the UK (no parent or grandparent from the UK, and never resident in the UK for any five-year period) did not automatically benefit from the EU provisions on free movement within the EU, and their passports received an endorsement to that effect. This affected only a minority of islanders.
Under the UK Interpretation Act 1978, the Channel Islands are deemed to be part of the British Islands, not to be confused with the British Isles. For the purposes of the British Nationality Act 1981, the "British Islands" include the United Kingdom (Great Britain and Northern Ireland), the Channel Islands and the Isle of Man, taken together, unless the context otherwise requires.
Tourism is still important. However, Jersey and Guernsey have, since the 1960s, become major offshore financial centres. Historically Guernsey's horticultural and greenhouse activities have been more significant than in Jersey, and Guernsey has maintained light industry as a higher proportion of its economy than Jersey. In Jersey, potatoes are an important export crop, shipped mostly to the UK.
Jersey is heavily reliant on financial services, which contributed 39.4% of Gross Value Added (GVA) in 2018. Rental income comes second at 15.1%, with other business activities at 11.2%. Tourism contributes 4.5%, agriculture just 1.2%, and manufacturing lower still at 1.1%. GVA has fluctuated between £4.5 billion and £5 billion for 20 years.
Jersey's population has risen steadily, from below 90,000 in 2000 to over 105,000 in 2018, which, combined with a flat GVA, has seen GVA per head of population fall from £57,000 to £44,000. Guernsey had a GDP of £3.2 billion in 2018 and, with a stable population of around 66,000, has had a steadily rising GDP and a GVA per head of population that surpassed £52,000 in 2018.
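These per-head figures are simply total GVA divided by population. A minimal sketch of that arithmetic in Python; the populations are taken from the paragraph above, while the exact annual GVA values are illustrative assumptions chosen to sit within the quoted £4.5–5 billion range:

```python
def gva_per_head(gva_gbp: float, population: int) -> float:
    """Gross Value Added per head: total output divided by population."""
    return gva_gbp / population

# Illustrative inputs: populations are from the paragraph above; the exact
# annual GVA figures are assumptions within the quoted £4.5-5 billion band.
print(f"{gva_per_head(5.1e9, 90_000):,.0f}")   # ~57,000 (Jersey, c. 2000)
print(f"{gva_per_head(4.6e9, 105_000):,.0f}")  # ~44,000 (Jersey, 2018)
```

Run as-is, this prints roughly 56,667 and 43,810, consistent with the rounded £57,000 and £44,000 figures cited above.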
Both bailiwicks issue their own banknotes and coins, which circulate freely in all the islands alongside UK coinage and Bank of England and Scottish banknotes.
Since 1969, Jersey and Guernsey have operated postal administrations independently of the UK's Royal Mail, with their own postage stamps, which can be used for postage only in their respective Bailiwicks. UK stamps are no longer valid, but mail to the islands, and to the Isle of Man, is charged at UK inland rates. It was not until the early 1990s that the islands joined the UK's postcode system, Jersey postcodes using the initials JE and Guernsey GY.
Each of the three largest islands has a distinct vehicle registration scheme:
In Sark, where most motor traffic is prohibited, the few vehicles – nearly all tractors – do not display plates. Bicycles display tax discs.
In the 1960s, names used for the cross-Channel ferries plying the mail route between the islands and Weymouth, Dorset, were taken from the popular Latin names for the islands: Caesarea (Jersey), Sarnia (Guernsey) and Riduna (Alderney). Fifty years later, the ferry route between the Channel Islands and the UK is operated by Condor Ferries from both St Helier, Jersey and St Peter Port, Guernsey, using high-speed catamarans to Poole in the UK. A regular passenger ferry service on the Commodore Clipper goes from both Channel Island ports to Portsmouth daily, carrying both passengers and freight.
Ferry services to Normandy are operated by Manche Îles Express, and services between Jersey and Saint-Malo are operated by Compagnie Corsaire and Condor Ferries. The Isle of Sark Shipping Company operates small ferries to Sark. Normandy Trader operates an ex-military tank landing craft for transporting freight between the islands and France.
On 20 August 2013, Huelin-Renouf, which had operated a "lift-on lift-off" container service for 80 years between the Port of Southampton and the Port of Jersey, ceased trading. Senator Alan Maclean, a Jersey politician, had previously tried, to no avail, to save the 90-odd jobs provided by the company. On 20 September, it was announced that Channel Island Lines would continue this service, and would purchase the MV Huelin Dispatch from Associated British Ports, which in turn had purchased the vessel from the receiver in the bankruptcy. The new operator was to be funded by Rockayne Limited, a closely held association of Jersey businesspeople.
There are three airports in the Channel Islands: Alderney Airport, Guernsey Airport and Jersey Airport. They are directly connected to each other by services operated by Blue Islands and Aurigny.
Historically, there have been railway networks on Jersey, Guernsey, and Alderney, but all of the lines on Jersey and Guernsey have been closed and dismantled. Today there are three working railways in the Channel Islands, of which the Alderney Railway is the only one providing a regular timetabled passenger service. The other two are a 7¼ in (184 mm) gauge miniature railway, also on Alderney, and the heritage steam railway operated on Jersey as part of the Pallot Heritage Steam Museum.
The Channel Islands are served by a number of local radio services – BBC Radio Jersey and BBC Radio Guernsey, Channel 103 and Island FM – as well as regional television news opt-outs from BBC Channel Islands and ITV Channel Television.
On 1 August 2021, DAB+ digital radio became available for the first time, introducing new stations like the local Bailiwick Radio and Soleil Radio, and UK-wide services like Capital, Heart, and Times Radio.
There are two broadcast transmitters serving Jersey – at Frémont Point and Les Platons – as well as one at Les Touillets in Guernsey and a relay in Alderney.
There are several local newspapers, including the Guernsey Press and the Jersey Evening Post, as well as magazines.
Jersey has always operated its own telephone services independently of Britain's national system, while Guernsey established its own telephone service in 1968. Both islands still form part of the British telephone numbering plan, but the mainland regulator Ofcom does not have responsibility for telecommunications regulatory and licensing issues on the islands. It is responsible for wireless telegraphy licensing throughout the islands and, by agreement, for broadcasting regulation in the two large islands only. Submarine cables connect the various islands and provide connectivity with England and France.
Modern broadband speeds are available in all the islands, including full-fibre (FTTH) in Jersey, which offers speeds of up to 1 Gbps on all broadband connections, and VDSL in Guernsey, where some businesses and homes also have fibre connectivity. Providers include Sure and JT.
The two Bailiwicks each have their own internet domain, .GG (Guernsey, Alderney, Sark) and .JE (Jersey), which are managed by channelisles.net.
The Norman language predominated in the islands until the nineteenth century, when increasing influence from English-speaking settlers and easier transport links led to Anglicisation. There are four main dialects/languages of Norman in the islands, Auregnais (Alderney, extinct in late twentieth century), Dgèrnésiais (Guernsey), Jèrriais (Jersey) and Sercquiais (Sark, an offshoot of Jèrriais).
Victor Hugo spent many years in exile, first in Jersey and then in Guernsey, where he finished Les Misérables. Guernsey is the setting of Hugo's later novel Les Travailleurs de la Mer (Toilers of the Sea). A "Guernsey-man" also makes an appearance in chapter 91 of Herman Melville's Moby-Dick.
The annual "Muratti", the inter-island football match, is considered the sporting event of the year, although, due to broadcast coverage, it no longer attracts the crowds of spectators, travelling between the islands, that it did during the twentieth century.
Cricket is popular in the Channel Islands. The Jersey cricket team and the Guernsey cricket team are both associate members of the International Cricket Council. The teams have played each other in the inter-insular match since 1957. In 2001 and 2002, the Channel Islands entered a team into the MCCA Knockout Trophy, the one-day tournament of the minor counties of English and Welsh cricket.
Channel Island sportsmen and women compete in the Commonwealth Games for their respective islands and the islands have also been enthusiastic supporters of the Island Games. Shooting is a popular sport, in which islanders have won Commonwealth medals.
Guernsey's traditional colour for sporting and other purposes is green and Jersey's is red.
The main islanders have traditional animal nicknames:
Christianity was brought to the islands around the sixth century; according to tradition, Jersey was evangelised by St Helier, Guernsey by St Samson of Dol, and the smaller islands were occupied at various times by monastic communities representing strands of Celtic Christianity. At the Reformation, the previously Catholic islands converted to Calvinism under the influence of an influx of French-language pamphlets published in Geneva. Anglicanism was imposed in the seventeenth century, but the Non-Conformist local tendency returned with a strong adoption of Methodism. In the late twentieth century, a strong Catholic presence re-emerged with the arrival of numerous Portuguese workers (both from mainland Portugal and the island of Madeira). Their numbers have been reinforced by recent migrants from Poland and elsewhere in Eastern Europe. Today, Evangelical churches have been established. Services are held in a number of languages.
According to 2015 statistics, 39% of the population was non-religious.
A number of islands in the English Channel are part of France. Among these are Bréhat, Île de Batz, Chausey, Tatihou and the Îles Saint-Marcouf.
The Isle of Wight, which is part of England, lies just off the coast of Great Britain, between the Channel and the Solent.
Hayling and Portsea islands, both near or part of Portsmouth, are also part of England (and thus part of the United Kingdom). |
2001-11-14T22:20:04Z | 2023-11-17T23:32:13Z | https://en.wikipedia.org/wiki/Channel_Islands
5,644 | Comedy film | A comedy film is a category of film which emphasizes humor. These films are designed to amuse audiences and make them laugh. Films in this genre typically have a happy ending, with dark comedy being an exception to this rule. Comedy is one of the oldest genres in film, and is derived from classical comedy in theatre. Some of the earliest silent films were comedies such as slapstick comedy, which often relies on visual depictions, such as sight gags and pratfalls, so they can be enjoyed without requiring sound. To provide drama and excitement to silent movies, live music was played in sync with the action on the screen, on pianos, organs, and other instruments. When sound films became more prevalent during the 1920s, comedy films grew in popularity, as laughter could result from burlesque situations as well as humorous dialogue.
Comedy, compared with other film genres, places more focus on individual star actors, with many former stand-up comics transitioning to the film industry due to their popularity.
In The Screenwriters Taxonomy (2017), Eric R. Williams contends that film genres are fundamentally based upon a film's atmosphere, character, and story, and that the labels "drama" and "comedy" are therefore too broad to be considered genres. Instead, his comedy taxonomy argues that comedy is a type of film containing at least a dozen different sub-types. A number of hybrid genres use comedy, such as the action comedy and the romantic comedy. Beyond film, comedy takes many forms, including stand-up comedy, sketch comedy, and sitcoms, and often uses humor and satire to comment on social and political issues as well as everyday life. Many comedians rely on observational humor, drawing on their own experiences and the world around them to create comedic material, while physical comedy creates humor through gestures, facial expressions, and body language. The genre is known for its ability to make people laugh while also making them think, reflecting society and its issues.
The first comedy film was L'Arroseur Arrosé (1895), directed and produced by film pioneer Louis Lumière. Less than 60 seconds long, it shows a boy playing a prank on a gardener. The most noted comedy actors of the silent film era (1895–1927) were Charlie Chaplin, Harold Lloyd, and Buster Keaton.
In a 2023 article in Collider, Lisa Laman states that "modern-day [film] comedies tend to suffer from so many visual problems" and use "frustratingly inert images" and "overly-lit" sets, making them "look like sitcoms, not movies." She says modern comedy movies are filmed with "little imagination in…staging", poor production values, "awkward editing and flat camerawork", and few "visual gags".
The anarchic comedy film, as the name suggests, uses a random or stream-of-consciousness style of humor that often lampoons a form of authority. The genre dates from the silent era. Notable examples of this type of film are those produced by Monty Python. Other examples include Duck Soup (1933) and Caddyshack (1980).
Gross-out films are aimed at the young adult market (ages 18–24) and rely heavily on vulgar, sexual, or "toilet" humor. They often contain a large amount of profanity and nudity. Examples include Animal House (1978) and Freddy Got Fingered (2001).
This sub-type uses comedy to explore serious ideas such as religion, sex, or politics. Often, the characters represent particular divergent world views and are forced to interact for comedic effect and social commentary. Examples include Ferris Bueller's Day Off (1986) and Swing Vote (2008).
A comedy of manners satirizes the mores and affectations of a social class. The plot of a comedy of manners is often concerned with an illicit love affair or other scandals. Generally, the plot is less important for its comedic effect than its witty dialogue. This form of comedy has a long ancestry, dating back at least as far as William Shakespeare's Much Ado About Nothing, first published in 1600. Examples of comedy-of-manners films include Breakfast at Tiffany's (1961) and Under the Tuscan Sun (2003).
The black comedy film deals with taboo subjects—including death, murder, crime, suicide, and war—in a satirical manner. An example is Dr. Strangelove (1964).
Farcical films exaggerate situations beyond the realm of possibility, thereby making them entertaining. Sleeper (1973) is one film example.
Mockumentary comedies are fictional but use a documentary style that includes interviews and "documentary" footage alongside regular scenes. Examples include This Is Spinal Tap (1984) and Reboot Camp (2020).
Musical comedy as a film genre has its roots in the 1920s, with Disney's Steamboat Willie (1928) being the most recognized of these early films. The subgenre resurged in popularity in the 1970s, with movies such as Bugsy Malone (1976) and Grease (1978) gaining status as cult classics.
Observational humor films find humor in the common practices of everyday life. Some film examples of observational humor include Knocked Up (2007) and The Intern (2015).
A parody or spoof film satirizes other film genres or classic films. Such films employ sarcasm, stereotyping, mockery of scenes from other films, and the obviousness of meaning in a character's actions. Examples of this form include Blazing Saddles (1974) and Spaceballs (1987).
The humor in sex comedy is primarily derived from sexual situations and desire, as in Bachelor Party (1984) and The Inbetweeners Movie (2011).
Situational comedy films' humor comes from knowing a stock group of characters (or character types) and then exposing them to different situations to create humorous and ironic juxtapositions. Examples include Planes, Trains and Automobiles (1987) and The Hangover (2009).
This broad sub-type applies to films that do not attempt a specific approach to comedy but, rather, use comedy for its own sake. Chasing Amy (1997) and The Shaggy Dog (2006) are examples of straight comedy films.
Slapstick films involve exaggerated, boisterous physical action to create impossible and humorous situations. Because it relies predominantly on visual depictions of events, it does not require sound. Accordingly, the subgenre was ideal for silent movies and was prevalent during that era. Popular stars of the slapstick genre include Harold Lloyd, Roscoe Arbuckle, Charlie Chaplin, Peter Sellers and Norman Wisdom. Some of these stars, as well as acts such as Laurel and Hardy and the Three Stooges, also found success incorporating slapstick comedy into sound films. Modern examples of slapstick comedy include Mr. Bean's Holiday (2007) and Get Smart (2008).
Although not specifically linked to the history of surrealism, surreal comedies include behavior and storytelling techniques that are illogical, including bizarre juxtapositions, absurd situations, and unpredictable reactions to normal situations. Some examples are It's a Mad, Mad, Mad, Mad World (1963) and Everything Everywhere All at Once (2022).
According to Williams' taxonomy, all film descriptions should contain their type (comedy or drama) combined with one (or more) subgenres. This combination does not create a separate genre, but rather, provides a better understanding of the film.
Films of this type blend comic antics and action, with stars who combine one-liners with a thrilling plot and daring stunts. The genre became a specific draw in North America in the 1980s, when comedians such as Eddie Murphy started taking more action-oriented roles, such as in 48 Hrs. (1982) and Beverly Hills Cop (1984).
Sub-genres of the action comedy (labeled macro-genres by Williams) include:
Slapstick martial arts films became a mainstay of Hong Kong action cinema through the work of Jackie Chan and others, in films such as Who Am I? (1998). Kung Fu Panda is an action comedy that focuses on the martial art of kung fu.
Some action comedies focus on superheroes; examples include The Incredibles, Hancock, Kick-Ass, and Mystery Men.
Other categories of the action comedy include:
Films starring mismatched partners for comedic effect, such as in Midnight Run, Rush Hour, 21 Jump Street, Bad Boys, Starsky and Hutch, Booksmart, The Odd Couple, and Ted.
Comedy thriller is a type that combines elements of humor and suspense. Examples include Silver Streak, Charade, Kiss Kiss Bang Bang, In Bruges, Mr. and Mrs. Smith, Grosse Pointe Blank, The Thin Man, The Big Fix, and The Lady Vanishes.
Comedy mystery is a film genre combining elements of comedy and mystery fiction. Though the genre arguably peaked in the 1930s and 1940s, comedy-mystery films have been continually produced since. Examples include the Pink Panther series, the Scooby-Doo films, Clue (1985) and Knives Out (2019).
A hybrid mix of crime and comedy films; examples include Inspector Palmu's Mistake (1960), Take the Money and Run (1969), Who Framed Roger Rabbit (1988) and O Brother, Where Art Thou? (2000).
Fantasy comedy films use magic, supernatural or mythological figures for comedic purposes. Some fantasy comedy includes an element of parody, or satire, turning fantasy conventions on their head, such as the hero becoming a cowardly fool or the princess being a klutz. Examples of these films include Big, Being John Malkovich, Ernest Saves Christmas, Ernest Scared Stupid, Night at the Museum, Groundhog Day, Click, and Shrek.
Comedy horror is a genre/type in which the usual dark themes and "scare tactics" attributed to horror films are treated with a humorous approach. These films often parody goofy horror clichés, as in Scream, Young Frankenstein, The Rocky Horror Picture Show, Little Shop of Horrors, The Haunted Mansion, and Scary Movie, where campy styles are favored. Some are much more subtle and do not parody horror, such as An American Werewolf in London. Another style of comedy horror relies on over-the-top violence and gore, as in The Evil Dead (1981), The Return of the Living Dead (1985), Braindead (1992), and Club Dread (2004) – such films are sometimes known as splatstick, a portmanteau of the words splatter and slapstick. Ghostbusters can also reasonably be placed in this category.
Day-in-the-life films take small events in a person's life and raise their level of importance. The "small things in life" feel as important to the protagonist (and the audience) as the climactic battle in an action film, or the final shootout in a western. Often, the protagonists deal with multiple, overlapping issues in the course of the film. The day-in-the-life comedy often finds humor in commenting upon the absurdity or irony of daily life; for example, The Terminal (2004) or Waitress (2007). Character humor is also used extensively in day-in-the-life comedies, as can be seen in American Splendor (2003).
Romantic comedies are humorous films with central themes that reinforce societal beliefs about love (e.g., themes such as "love at first sight", "love conquers all", or "there is someone out there for everyone"); the story typically revolves around characters falling into (and out of, and back into) love. Amélie (2001), Annie Hall (1977), Charade (1963), City Lights (1931), Four Weddings and a Funeral (1994), It (1927), The Lobster (2015), My Wife, the Director General (1966), My Favorite Wife (1940), Pretty Woman (1990), Some Like It Hot (1959), There's Something About Mary (1998) and When Harry Met Sally... (1989) are examples of romantic comedies.
A subgenre of the romantic comedy, screwball comedies appear to focus on the story of a central male character until a strong female character takes center stage; at this point, the man's story becomes secondary to a new issue typically introduced by the woman; this story grows in significance and, as it does, the man's masculinity is challenged by the sharp-witted woman, who is often his love interest. Typically, a screwball comedy includes a romantic element, an interplay between people of different economic strata, quick and witty repartee, some form of role reversal, and a happy ending. Some examples of screwball comedy during its heyday include It Happened One Night (1934), Bringing Up Baby (1938), The Philadelphia Story (1940), His Girl Friday (1940), and Mr. & Mrs. Smith (1941); more recent examples include What's Up, Doc? (1972), Rat Race (2001), and Our Idiot Brother (2011).
Science fiction comedy films often exaggerate the elements of traditional science fiction films to comic effect. Examples include Spaceballs, Ghostbusters, Galaxy Quest, Mars Attacks!, Men in Black, and many more.
Sports comedy combines comedy with the sports film genre. Thematically, the story is often one of "Our Team" versus "Their Team"; their team will always try to win, and our team will show the world that they deserve recognition or redemption; the story does not always have to involve a team. The story could also be about an individual athlete, or it could focus on an individual playing on a team. The comedic aspect of this super-genre often comes from physical humor (Happy Gilmore, 1996), character humor (Caddyshack, 1980), or the juxtaposition of bad athletes succeeding against the odds (The Bad News Bears, 1976).
War films typically tell the story of a small group of isolated individuals who – one by one – get killed (literally or metaphorically) by an outside force until there is a final fight to the death; the idea of the protagonists facing death is a central expectation in a war film. War comedies infuse this idea of confronting death with a morbid sense of humor. In a war film, even though the enemy may outnumber or outpower the hero, we assume that the enemy can be defeated if only the hero can figure out how. Often, this strategic sensibility provides humorous opportunities in a war comedy. Examples include Good Morning, Vietnam; M*A*S*H; the Francis the Talking Mule series; and others.
Films in the western super-genre often take place in the American Southwest or in Mexico, with a large number of scenes occurring outdoors so the audience can soak in nature's rugged beauty. Visceral expectations for the audience include fistfights, gunplay, and chase scenes. There is also the expectation of spectacular panoramic images of the countryside, including sunsets, wide open landscapes, and endless deserts and sky. Western comedies often find their humor in specific characters (Three Amigos, 1986), in interpersonal relationships (The Lone Ranger, 2013), or in creating a parody of the western (Rango, 2011). | [
] | A comedy film is a category of film which emphasizes humor. These films are designed to amuse audiences and make them laugh. Films in this genre typically have a happy ending, with dark comedy being an exception to this rule. Comedy is one of the oldest genres in film, and is derived from classical comedy in theatre. Some of the earliest silent films were comedies; slapstick comedy, for example, relies on visual depictions such as sight gags and pratfalls, so it can be enjoyed without sound. To provide drama and excitement to silent movies, live music was played in sync with the action on the screen, on pianos, organs, and other instruments. When sound films became more prevalent during the 1920s, comedy films grew in popularity, as laughter could result both from burlesque situations and from humorous dialogue. Comedy, compared with other film genres, places more focus on individual star actors, with many former stand-up comics transitioning to the film industry due to their popularity. In The Screenwriters Taxonomy (2017), Eric R. Williams contends that film genres are fundamentally based upon a film's atmosphere, character, and story, and therefore the labels "drama" and "comedy" are too broad to be considered genres. Instead, his taxonomy argues that comedy is a type of film that contains at least a dozen different sub-types. A number of hybrid genres use comedy, such as action comedy and romantic comedy. Comedy can take many forms, including stand-up comedy, sketch comedy, sitcoms, and comedic films, and it often uses humor and satire to comment on social and political issues as well as everyday life. Many comedians use observational humor, drawing on their own experiences and the world around them to create comedic material. Physical comedy, which uses gestures, facial expressions, and body language to create humor, is also popular. The genre is known for its ability to make people laugh but also to make them think, and it can be a reflection of society and its issues. | 2001-11-05T00:49:23Z | 2023-12-12T13:41:29Z | [
"Template:Short description",
"Template:More citations needed",
"Template:--",
"Template:Authority control",
"Template:Tone",
"Template:See also",
"Template:Reflist",
"Template:Cite book",
"Template:ISBN",
"Template:Use dmy dates",
"Template:Flag",
"Template:Filmsbygenre",
"Template:Cite web",
"Template:Film genres",
"Template:Comedy footer"
] | https://en.wikipedia.org/wiki/Comedy_film |
5,645 | Cult film | A cult film or cult movie, also commonly referred to as a cult classic, is a film that has acquired a cult following. Cult films are known for their dedicated, passionate fanbase, which forms an elaborate subculture whose members engage in repeated viewings, dialogue-quoting, and audience participation. Inclusive definitions allow for major studio productions, especially box-office bombs, while exclusive definitions focus more on obscure, transgressive films shunned by the mainstream. The difficulty in defining the term and the subjectivity of what qualifies as a cult film mirror classificatory disputes about art. The term cult film itself was first used in the 1970s to describe the culture that surrounded underground films and midnight movies, though cult was in common use in film analysis for decades prior to that.
Cult films trace their origin back to controversial and suppressed films kept alive by dedicated fans. In some cases, reclaimed or rediscovered films have acquired cult followings decades after their original release, occasionally for their camp value. Other cult films have since become well-respected or reassessed as classics; there is debate as to whether these popular and accepted films are still cult films. After failing at the cinema, some cult films have become regular fixtures on cable television or profitable sellers on home video. Others have inspired their own film festivals. Cult films can both appeal to specific subcultures and form their own subcultures. Other media that reference cult films can easily identify which demographics they desire to attract and offer savvy fans an opportunity to demonstrate their knowledge.
Cult films frequently break cultural taboos, and many feature excessive displays of violence, gore, sexuality, profanity, or combinations thereof. This can lead to controversy, censorship, and outright bans; less transgressive films may attract similar amounts of controversy when critics call them frivolous or incompetent. Films that fail to attract requisite amounts of controversy may face resistance when labeled as cult films. Mainstream films and big budget blockbusters have attracted cult followings similar to more underground and lesser known films; fans of these films often emphasize the films' niche appeal and reject the more popular aspects. Fans who like the films for the wrong reasons, such as perceived elements that represent mainstream appeal and marketing, will often be ostracized or ridiculed. Likewise, fans who stray from accepted subcultural scripts may experience similar rejection.
Since the late 1970s, cult films have become increasingly popular. Films that once would have been limited to obscure cult followings are now capable of breaking into the mainstream, and showings of cult films have proved to be a profitable business venture. Overbroad usage of the term has resulted in controversy, as purists state it has become a meaningless descriptor applied to any film that is the slightest bit weird or unconventional; others accuse Hollywood studios of trying to artificially create cult films or use the term as a marketing tactic. Films are now frequently declared an "instant cult classic", occasionally before they are even released. Some films have acquired massive, quick cult followings, owing to advertisements and posts made by fans spreading virally through social media. Easy access to cult films via video on demand and peer-to-peer file sharing has led some critics to pronounce the death of cult films.
What is a cult film? A cult film is one that has a passionate following, but does not appeal to everybody. James Bond movies are not cult films, but chainsaw movies are. Just because a movie is a cult film does not automatically guarantee quality: some cult movies are very bad; others are very, very good. Some make an awful lot of money at the box office; others make no money at all. Some are considered quality films; others are exploitation. —Alex Cox in his introduction to The Wicker Man on Moviedrome, 1988
A cult film is any film that has a cult following, although the term is not easily defined and can be applied to a wide variety of films. Some definitions exclude films that have been released by major studios or have big budgets, that try specifically to become cult films, or become accepted by mainstream audiences and critics. Cult films are defined by audience reaction as much as by their content. This may take the form of elaborate and ritualized audience participation, film festivals, or cosplay. Over time, the definition has become more vague and inclusive as it drifts away from earlier, stricter views. Increasing use of the term by mainstream publications has resulted in controversy, as cinephiles argue that the term has become meaningless or "elastic, a catchall for anything slightly maverick or strange". Academic Mark Shiel has criticized the term itself as being a weak concept, reliant on subjectivity; different groups can interpret films in their own terms. According to feminist scholar Joanne Hollows, this subjectivity causes films with large female cult followings to be perceived as too mainstream and not transgressive enough to qualify as a cult film. Academic Mike Chopra‑Gant says that cult films become decontextualized when studied as a group, and Shiel criticizes this recontextualization as cultural commodification.
In 2008, Cineaste asked a range of academics for their definition of a cult film. Several people defined cult films primarily in terms of their opposition to mainstream films and conformism, explicitly requiring a transgressive element, though others disputed the transgressive potential, given the demographic appeal to conventional moviegoers and mainstreaming of cult films. Jeffrey Andrew Weinstock instead called them mainstream films with transgressive elements. Most definitions also required a strong community aspect, such as obsessed fans or ritualistic behavior. Citing misuse of the term, Mikel J. Koven took a self-described hard-line stance that rejected definitions that use any other criteria. Matt Hills instead stressed the need for an open-ended definition rooted in structuration, where the film and the audience reaction are interrelated and neither is prioritized. Ernest Mathijs focused on the accidental nature of cult followings, arguing that cult film fans consider themselves too savvy to be marketed to, while Jonathan Rosenbaum rejected the continued existence of cult films and called the term a marketing buzzword. Mathijs suggests that cult films help audiences understand ambiguity and incompleteness in life, given the difficulty in even defining the term. That cult films can have opposing qualities – such as good and bad, failure and success, innovative and retro – helps to illustrate that art is subjective and never self-evident. This ambiguity leads critics of postmodernism to accuse cult films of being beyond criticism, as the emphasis is now on personal interpretation rather than critical analysis or metanarratives. These inherent dichotomies can lead audiences to be split between ironic and earnest fans.
Writing in Defining Cult Movies, Jancovich et al. quote academic Jeffrey Sconce, who defines cult films in terms of paracinema, marginal films that exist outside critical and cultural acceptance: everything from exploitation to beach party musicals to softcore pornography. However, they reject cult films as having a single unifying feature; instead, they state that cult films are united in their "subcultural ideology" and opposition to mainstream tastes, itself a vague and undefinable term. Cult followings themselves can range from adoration to contempt, and they have little in common except for their celebration of nonconformity – even the bad films ridiculed by fans are artistically nonconformist, albeit unintentionally. At the same time, they state that bourgeois, masculine tastes are frequently reinforced, which makes cult films more of an internal conflict within the bourgeoisie, rather than a rebellion against it. This results in an anti-academic bias despite the use of formal methodologies, such as defamiliarization. This contradiction exists in many subcultures, especially those dependent on defining themselves in terms of opposition to the mainstream. This nonconformity is eventually co-opted by the dominant forces, such as Hollywood, and marketed to the mainstream. Academic Xavier Mendik also defines cult films as opposing the mainstream and further proposes that films can become cult by virtue of their genre or content, especially if it is transgressive. Due to their rejection of mainstream appeal, Mendik says cult films can be more creative and political; times of relative political instability produce more interesting films.
Cult films have existed since the early days of cinema. Film critic Harry Allan Potamkin traces them back to 1910s France and the reception of Pearl White, William S. Hart, and Charlie Chaplin, which he described as "a dissent from the popular ritual". Nosferatu (1922) was an unauthorized adaptation of Bram Stoker's Dracula. Stoker's widow sued the production company and drove it to bankruptcy. All known copies of the film were destroyed, and Nosferatu became an early cult film, kept alive by a cult following that circulated illegal bootlegs. Academic Chuck Kleinhans identifies the Marx Brothers as making other early cult films. On their original release, some highly regarded classics from the Golden Age of Hollywood were panned by critics and audiences and relegated to cult status. The Night of the Hunter (1955) was a cult film for years, quoted often and championed by fans, before it was reassessed as an important and influential classic. During this time, American exploitation films and imported European art films were marketed similarly. Although critics Pauline Kael and Arthur Knight argued against arbitrary divisions into high and low culture, American films settled into rigid genres; European art films continued to push the boundaries of simple definitions, and these exploitative art films and artistic exploitation films would go on to influence American cult films. Much like later cult films, these early exploitation films encouraged audience participation, influenced by live theater and vaudeville.
Modern cult films grew from 1960s counterculture and underground films, popular among those who rejected mainstream Hollywood films. These underground film festivals led to the creation of midnight movies, which attracted cult followings. The term cult film itself was an outgrowth of this movement and was first used in the 1970s, though cult had been in use for decades in film analysis with both positive and negative connotations. These films were more concerned with cultural significance than the social justice sought by earlier avant-garde films. Midnight movies became more popular and mainstream, peaking with the release of The Rocky Horror Picture Show (1975), which finally found its audience several years after its release. Eventually, the rise of home video would marginalize midnight movies once again, after which many directors joined the burgeoning independent film scene or went back underground. Home video would give a second life to box-office flops, as positive word-of-mouth or excessive replay on cable television led these films to develop an appreciative audience, as well as obsessive replay and study. For example, The Beastmaster (1982), despite its failure at the box office, became one of the most played movies on American cable television and developed into a cult film. Home video and television broadcasts of cult films were initially greeted with hostility. Joanne Hollows states that they were seen as turning cult films mainstream – in effect, feminizing them by opening them to distracted, passive audiences.
Releases from major studios – such as The Big Lebowski (1998), which was distributed by Universal Studios – can become cult films when they fail at the box office and develop a cult following through reissues, such as midnight movies, festivals, and home video. Hollywood films, due to their nature, are more likely to attract this kind of attention, which leads to a mainstreaming effect of cult culture. With major studios behind them, even financially unsuccessful films can be re-released multiple times, which plays into a trend to capture audiences through repetitious reissues. The constant use of profanity and drugs in otherwise mainstream, Hollywood films, such as The Big Lebowski, can alienate critics and audiences yet lead to a large cult following among more open-minded demographics not often associated with cult films, such as Wall Street bankers and professional soldiers. Thus, even comparatively mainstream films can satisfy the traditional demands of a cult film, perceived by fans as transgressive, niche, and uncommercial. Discussing his reputation for making cult films, Bollywood director Anurag Kashyap said, "I didn't set out to make cult films. I wanted to make box-office hits." Writing in Cult Cinema, academics Ernest Mathijs and Jamie Sexton state that this acceptance of mainstream culture and commercialism is not out of character, as cult audiences have a more complex relationship to these concepts: they are more opposed to mainstream values and excessive commercialism than they are anything else.
In a global context, popularity can vary widely by territory, especially with regard to limited releases. Mad Max (1979) was an international hit – except in America, where it became an obscure cult favorite, ignored by critics and available for years only in a dubbed version, even though it earned over $100 million internationally. Foreign cinema can put a different spin on popular genres, such as Japanese horror, which was initially a cult favorite in America. Asian imports to the West are often marketed as exotic cult films and of interchangeable national identity, which academic Chi-Yun Shin criticizes as reductive. Foreign influence can affect fan response, especially on genres tied to a national identity; when they become more global in scope, questions of authenticity may arise. Filmmakers and films ignored in their own country can become the objects of cult adoration in another, producing perplexed reactions in their native country. Cult films can also establish an early viability for more mainstream films both for filmmakers and national cinema. The early cult horror films of Peter Jackson were so strongly associated with his homeland that they affected the international reputation of New Zealand and its cinema. As more artistic films emerged, New Zealand was perceived as a legitimate competitor to Hollywood, which mirrored Jackson's career trajectory. Heavenly Creatures (1994) acquired its own cult following, became a part of New Zealand's national identity, and paved the way for big-budget, Hollywood-style epics, such as Jackson's The Lord of the Rings trilogy.
Mathijs states that cult films and fandom frequently involve nontraditional elements of time and time management. Fans will often watch films obsessively, an activity that is viewed by the mainstream as wasting time yet can be seen as resisting the commodification of leisure time. They may also watch films idiosyncratically: sped up, slowed down, frequently paused, or at odd hours. Cult films themselves subvert traditional views of time – time travel, non-linear narratives, and ambiguous establishments of time are all popular. Mathijs also identifies specific cult film viewing habits, such as viewing horror films on Halloween, sentimental melodrama on Christmas, and romantic films on Valentine's Day. These films are often viewed in marathons, where fans can gorge themselves on their favorites. Mathijs states that cult films broadcast on Christmas have a nostalgic factor. These films, ritually watched every season, give a sense of community and shared nostalgia to viewers. New films often have trouble making inroads against the institutions of It's a Wonderful Life (1946) and Miracle on 34th Street (1947). These films provide mild criticism of consumerism while encouraging family values. Halloween, on the other hand, allows flaunting society's taboos and testing one's fears. Horror films have appropriated the holiday, and many horror films debut on Halloween. Mathijs criticizes the over-cultified, commercialized nature of Halloween and horror films, which feed into each other so much that Halloween has turned into an image or product with no real community. Mathijs states that Halloween horror conventions can provide the missing community aspect.
Despite their oppositional nature, cult films can produce celebrities. Like cult films themselves, authenticity is an important aspect of their popularity. Actors can become typecast as they become strongly associated with such iconic roles. Tim Curry, despite his acknowledged range as an actor, found casting difficult after he achieved fame in The Rocky Horror Picture Show. Even when discussing unrelated projects, interviewers frequently bring up the role, which causes him to tire of discussing it. Mary Woronov, known for her transgressive roles in cult films, eventually transitioned to mainstream films. She was expected to recreate the transgressive elements of her cult films within the confines of mainstream cinema. Instead of the complex gender deconstructions of her Andy Warhol films, she became typecast as a lesbian or domineering woman. Sylvia Kristel, after starring in Emmanuelle (1974), found herself highly associated with the film and the sexual liberation of the 1970s. Caught between the transgressive elements of her cult film and the mainstream appeal of soft-core pornography, she was unable to work in anything but exploitation films and Emmanuelle sequels. Despite her immense popularity and cult following, she would rate only a footnote in most histories of European cinema if she was even mentioned. Similarly, Chloë Sevigny has struggled with her reputation as a cult independent film star famous for her daring roles in transgressive films. Cult films can also trap directors. Leonard Kastle, who directed The Honeymoon Killers (1969), never directed another film again. Despite his cult following, which included François Truffaut, he was unable to find financing for any of his other screenplays. Qualities that bring cult films to prominence – such as an uncompromising, unorthodox vision – caused Alejandro Jodorowsky to languish in obscurity for years.
Transgressive films as a distinct artistic movement began in the 1970s. Unconcerned with genre distinctions, they drew inspiration equally from the nonconformity of European art cinema and experimental film, the gritty subject matter of Italian neorealism, and the shocking images of 1960s exploitation. Some used hardcore pornography and horror, occasionally at the same time. In the 1980s, filmmaker Nick Zedd identified this movement as the Cinema of Transgression and later wrote a manifesto. Popular in midnight showings, they were mainly limited to large urban areas, which led academic Joan Hawkins to label them as "downtown culture". These films acquired a legendary reputation as they were discussed and debated in alternative weeklies, such as The Village Voice. Home video would finally allow general audiences to see them, which gave many people their first taste of underground film. Ernest Mathijs says that cult films often disrupt viewer expectations, such as giving characters transgressive motivations or focusing attention on elements outside the film. Cult films can also transgress national stereotypes and genre conventions, such as Battle Royale (2000), which broke many rules of teenage slasher films. The reverse – when films based on cult properties lose their transgressive edge – can result in derision and rejection by fans. Audience participation itself can be transgressive, such as breaking long-standing taboos against talking during films and throwing things at the screen.
According to Mathijs, critical reception is important to a film's perception as cult, through topicality and controversy. Topicality, which can be regional (such as objection to government funding of the film) or critical (such as philosophical objections to the themes), enables attention and a contextual response. Cultural topics make the film relevant and can lead to controversy, such as a moral panic, which provides opposition. Cultural values transgressed in the film, such as sexual promiscuity, can be attacked by proxy, through attacks on the film. These concerns can vary from culture to culture, and they need not be at all similar. However, Mathijs says the film must invoke metacommentary for it to be more than simply culturally important. While referencing previous arguments, critics may attack its choice of genre or its very right to exist. Taking stances on these varied issues, critics assure their own relevance while helping to elevate the film to cult status. Perceived racist and reductive remarks by critics can rally fans and raise the profile of cult films, an example of which would be Rex Reed's comments about Korean culture in his review of Oldboy (2003). Critics can also polarize audiences and lead debates, such as how Joe Bob Briggs and Roger Ebert dueled over I Spit on Your Grave (1978). Briggs would later contribute a commentary track to the DVD release in which he describes it as a feminist film. Films which do not attract enough controversy may be ridiculed and rejected when suggested as cult films.
Academic Peter Hutchings, noting the many definitions of a cult film that require transgressive elements, states that cult films are known in part for their excesses. Both subject matter and its depiction are portrayed in extreme ways that break taboos of good taste and aesthetic norms. Violence, gore, sexual perversity, and even the music can be pushed to stylistic excess far beyond that allowed by mainstream cinema. Film censorship can make these films obscure and difficult to find, common criteria used to define cult films. Despite this, these films remain well-known and prized among collectors. Fans will occasionally express frustration with dismissive critics and conventional analysis, which they believe marginalizes and misinterprets paracinema. In marketing these films, young men are predominantly targeted. Horror films in particular can draw fans who seek the most extreme films. Audiences can also ironically latch on to offensive themes, such as misogyny, using these films as catharsis for the things that they hate most in life. Exploitative, transgressive elements can be pushed to excessive extremes for both humor and satire. Frank Henenlotter faced censorship and ridicule, but he found acceptance among audiences receptive to themes that Hollywood was reluctant to touch, such as violence, drug addiction, and misogyny. Lloyd Kaufman sees his films' political statements as more populist and authentic than the hypocrisy of mainstream films and celebrities. Despite featuring an abundance of fake blood, vomit, and diarrhea, Kaufman's films have attracted positive attention from critics and academics. Excess can also exist as camp, such as films that highlight the excesses of 1980s fashion and commercialism.
Films that are influenced by unpopular styles or genres can become cult films. Director Jean Rollin worked within cinéma fantastique, an unpopular genre in modern France. Influenced by American films and early French fantasists, he drifted between art, exploitation, and pornography. His films were reviled by critics, but he retained a cult following drawn by the nudity and eroticism. Similarly, Jess Franco chafed under fascist censorship in Spain but became influential in Spain's horror boom of the 1960s. These transgressive films that straddle the line between art and horror may have overlapping cult followings, each with their own interpretation and reasons for appreciating it. The films that followed Jess Franco were unique in their rejection of mainstream art. Popular among fans of European horror for their subversiveness and obscurity, these later Spanish films allowed political dissidents to criticize the fascist regime within the cloak of exploitation and horror. Unlike most exploitation directors, they were not trying to establish a reputation. They were already established in the art-house world and intentionally chose to work within paracinema as a reaction against the New Spanish Cinema, an artistic revival supported by the fascists. As late as the 1980s, critics still cited Pedro Almodóvar's anti-macho iconoclasm as a rebellion against fascist mores, as he grew from countercultural rebel to mainstream respectability. Transgressive elements that limit a director's appeal in one country can be celebrated or highlighted in another. Takashi Miike has been marketed in the West as a shocking and avant-garde filmmaker despite his many family-friendly comedies, which have not been imported.
The transgressive nature of cult films can lead to their censorship. During the 1970s and early 1980s, a wave of explicit, graphic exploitation films caused controversy. Called "video nasties" within the UK, they ignited calls for censorship and stricter laws on home video releases, which were largely unregulated. Consequently, the British Board of Film Classification banned many popular cult films due to issues of sex, violence, and incitement to crime. Released during the cannibal boom, Cannibal Holocaust (1980) was banned in dozens of countries and caused the director to be briefly jailed over fears that it was a real snuff film. Although opposed to censorship, director Ruggero Deodato would later agree with cuts made by the BBFC that removed unsimulated animal killings, which limited the film's distribution. Frequently banned films may introduce questions of authenticity as fans question whether they have seen a truly uncensored cut. Cult films have been falsely claimed to have been banned to increase their transgressive reputation and explain their lack of mainstream penetration. Marketing campaigns have also used such claims to raise interest among curious audiences. Home video has allowed cult film fans to import rare or banned films, finally giving them a chance to complete their collection with imports and bootlegs. Previously banned cult films are sometimes released with much fanfare, with fans assumed to be already familiar with the controversy. Personal responsibility is often highlighted, and a strong anti-censorship message may be present. Previously lost scenes cut by studios can be re-added and restore a director's original vision, which draws similar fanfare and acclaim from fans. Imports are sometimes censored to remove elements that would be controversial, such as references to Islamic spirituality in Indonesian cult films.
Academics have written of how transgressive themes in cult films can be regressive. David Church and Chuck Kleinhans describe an uncritical celebration of transgressive themes in cult films, including misogyny and racism. Church has also criticized gendered descriptions of transgressive content that celebrate masculinity. Joanne Hollows further identifies a gendered component to the celebration of transgressive themes in cult films, where male terms are used to describe films outside the mainstream while female terms are used to describe mainstream, conformist cinema. Expanding on this, Jacinda Read states that cult films, despite their potential for empowerment of the marginalized, are more often used by politically incorrect males. Knowledgeable about feminism and multiculturalism, they seek a refuge from the academic acceptance of these progressive ideals. Their playful and ironic embrace of regressive lad culture invites, and even dares, condemnation from academics and the uncool. Thus, cult films become a tool to reinforce mainstream values through transgressive content; Rebecca Feasey states that cultural hierarchies can also be reaffirmed through mockery of films perceived to be lacking in masculinity. However, the sexploitation films of Doris Wishman took a feminist approach that avoided and subverted the male gaze and traditional goal-oriented methods. Wishman's subject matter, though exploitative and transgressive, was always framed in terms of female empowerment and the feminine spectator. Her use of common cult film motifs – female nudity and ambiguous gender – was repurposed to comment on feminist topics. Similarly, the films of Russ Meyer were a complicated combination of transgressive, mainstream, progressive, and regressive elements, attracting both acclaim and denouncement from critics and progressives. Transgressive films imported from cultures that are recognizably different yet still relatable can be used to progressively examine issues in another culture.
Cult films can be used to help define or create groups as a form of subcultural capital; knowledge of cult films proves that one is "authentic" or "non-mainstream". They can be used to provoke an outraged response from the mainstream, which further defines the subculture, as only members could possibly tolerate such deviant entertainment. More accessible films have less subcultural capital; among extremists, banned films will have the most. By referencing cult films, media can identify desired demographics, strengthen bonds with specific subcultures, and stand out among those who understand the intertextuality. Popular films from previous eras may be reclaimed by genre fans long after they have been forgotten by the original audiences. This can be done for authenticity, such as horror fans who seek out now-obscure titles from the 1950s instead of the modern, well-known remakes. Authenticity may also drive fans to deny genre categorization to films perceived as too mainstream or accessible. Authenticity in performance and expertise can drive fan acclaim, and it can also drive fans to decry the mainstream, embodied by hostile critics and censors. Especially when promoted by enthusiastic and knowledgeable programmers, choice of venue can be an important part of expressing individuality. Besides creating new communities, cult films can link formerly disparate groups, such as fans and critics. As these groups intermix, they can influence each other, though this may be resisted by older fans unfamiliar with the new references. In extreme cases, cult films can lead to the creation of religions, such as Dudeism. For their avoidance of mainstream culture and audiences, enjoyment of irony, and celebration of obscure subcultures, academic Martin Roberts compares cult film fans to hipsters.
A film can become the object of a cult following within a particular region or culture if it has unusual significance. For example, Norman Wisdom's films, friendly to Marxist interpretation, amassed a cult following in Albania, as they were among the few Western films allowed by the country's Communist rulers. The Wizard of Oz (1939) and its star, Judy Garland, hold special significance for American and British gay culture, although it is a widely viewed and historically important film in greater American culture. Similarly, James Dean and his brief film career have become icons of alienated youth. Cult films can have such niche appeal that they are only popular within certain subcultures, such as Reefer Madness (1936) and Hemp for Victory (1942) among the stoner subculture. Beach party musicals, popular among American surfers, failed to find an equivalent audience when imported to the United Kingdom. When films target subcultures like this, they may seem unintelligible without the proper cultural capital. Films which appeal to teenagers may offer subcultural identities that are easily recognized and differentiate various subcultural groups. Films built around stereotypically male activities, such as sports, can easily gain strong male cult followings. Sports metaphors are often used in the marketing of cult films to males, such as emphasizing the "extreme" nature of the film, which increases the appeal to youth subcultures fond of extreme sports.
Matt Hills' concept of the "cult blockbuster" involves cult followings inside larger, mainstream films. Although these are big-budget, mainstream films, they still attract cult followings. The cult fans differentiate themselves from ordinary fans in several ways: longstanding devotion to the film, distinctive interpretations, and fan works. Hills identifies three different cult followings for The Lord of the Rings, each with its own fandom separate from the mainstream. Academic Emma Pett identifies Back to the Future (1985) as another example of a cult blockbuster. Although the film was an instant hit when released, it has also developed a nostalgic cult following over the years. The hammy acting by Christopher Lloyd and the quotable dialogue have drawn a cult following, as they mimic traditional cult films. Blockbuster science fiction films that include philosophical subtexts, such as The Matrix, allow cult film fans to enjoy them on a higher level than the mainstream. Star Wars, with its large cult following in geek subculture, has been cited as both a cult blockbuster and a cult film. Although a mainstream epic, Star Wars has provided its fans with a spirituality and culture outside of the mainstream.
Fans, in response to the popularity of these blockbusters, will claim elements for themselves while rejecting others. For example, in the Star Wars film series, mainstream criticism of Jar Jar Binks focused on racial stereotyping; although cult film fans will use that criticism to bolster their arguments, the character is rejected because he represents mainstream appeal and marketing. Also, instead of valuing textual rarity, fans of cult blockbusters will value repeat viewings. They may also engage in behaviors more traditional for fans of cult television and other serial media, as cult blockbusters are often franchised, preconceived as a film series, or both. To reduce mainstream accessibility, a film series can be self-reflexive and full of in-jokes that only longtime fans can understand. Mainstream critics may ridicule commercially successful directors of cult blockbusters, such as James Cameron, Michael Bay, and Luc Besson, whose films have been called simplistic. This critical backlash may serve to burnish the filmmakers' reputations as cult auteurs. In the same way, critics may ridicule fans of cult blockbusters as immature or shallow.
Cult films can create their own subculture. Rocky Horror, originally made to exploit the popularity of glam subculture, became what academic Gina Marchetti called a "sub-subculture", a variant that outlived its parent subculture. Although cult film fandom is often described as primarily composed of obsessed fans, it can include many newer, less experienced members. Familiar with the film's reputation and having watched clips on YouTube, these fans may take the next step and enter the film's fandom. If they are the majority, they may alter or ignore long-standing traditions, such as audience participation rituals; rituals which lack perceived authenticity may be criticized, but accepted rituals bring subcultural capital to veteran fans who introduce them to the newer members. Fans who flaunt their knowledge receive negative reactions. Newer fans may cite the film itself as their reason for attending a showing, but longtime fans often cite the community. Organized fandoms may spread and become popular as a way of introducing new people to the film, and theatrical screenings are privileged by the media and the fandom itself. Fandom can also be used as a process of legitimation. Fans of cult films, as in media fandom, are frequently producers instead of mere consumers. Unconcerned with traditional views on intellectual property, these fan works are often unsanctioned, transformative, and ignore fictional canon.
Like cult films themselves, magazines and websites dedicated to cult films revel in their self-conscious offensiveness. They maintain a sense of exclusivity by offending mainstream audiences with misogyny, gore, and racism. Obsessive trivia can be used to bore mainstream audiences while building up subcultural capital. Specialist stores on the fringes of society (or websites which prominently partner with hardcore pornographic sites) can be used to reinforce the outsider nature of cult film fandom, especially when they use erotic or gory imagery. By assuming a preexisting knowledge of trivia, non-fans can be excluded. Previous articles and controversies can also be alluded to without explanation. Casual readers and non-fans will thus be left out of discussions and debates, as they lack enough information to meaningfully contribute. When fans like a cult film for the wrong reasons, such as casting or characters aimed at mainstream appeal, they may be ridiculed. Thus, fans can keep the mainstream at bay while defining themselves in terms of the "Other", a philosophical construct divergent from social norms. Commercial aspects of fandom (such as magazines or books) can also be defined in terms of "otherness" and thus remain valid to consume: those who purchase independent or niche publications are discerning consumers, while the mainstream is denigrated. Irony or self-deprecating humor can also be used. In online communities, different subcultures attracted to transgressive films can clash over values and criteria for subcultural capital. Even within subcultures, fans who break subcultural scripts, such as denying the affectivity of a disturbing film, will be ridiculed for their lack of authenticity.
The critic Michael Medved characterized examples of the "so bad it's good" class of low-budget cult film through books such as The Golden Turkey Awards. These include commercially unsuccessful, critically scorned films that have become inadvertent comedies to film buffs, such as Plan 9 from Outer Space (1959), Mommie Dearest (1981), The Room (2003), and the Ugandan action comedy film Who Killed Captain Alex? (2010). Similarly, Paul Verhoeven's Showgirls (1995) bombed in theaters but developed a cult following on video. Catching on, Metro-Goldwyn-Mayer capitalized on the film's ironic appeal and marketed it as a cult film. Sometimes, fans will impose their own interpretation on films which have attracted derision, such as reinterpreting an earnest melodrama as a comedy. Jacob deNobel of the Carroll County Times states that films can be perceived as nonsensical or inept when audiences misunderstand avant-garde filmmaking or misinterpret parody. Films such as Rocky Horror can be misinterpreted as "weird for weirdness' sake" by people unfamiliar with the cult films that it parodies. deNobel ultimately rejects the use of the label "so bad it's good" as mean-spirited and often misapplied. Alamo Drafthouse programmer Zack Carlson has further said that any film which succeeds in entertaining an audience is good, regardless of irony. In francophone culture, "so bad it's good" films, known as nanars, have given rise to a subculture with dedicated websites such as Nanarland, film festivals, and viewings in theaters, as well as various books analyzing the phenomenon. The rise of the Internet and on-demand viewing has led critics to question whether "so bad it's good" films have a future now that people have such diverse options in both availability and catalog, though fans eager to experience the worst films ever made can still provide local theaters and merchandisers with lucrative showings.
Chuck Kleinhans states that the difference between a guilty pleasure and a cult film can be as simple as the number of fans; David Church raises the question of how many people it takes to form a cult following, especially now that home video makes fans difficult to count. As these cult films become more popular, they can bring varied responses from fans that depend on different interpretations, such as camp, irony, genuine affection, or combinations thereof. Earnest fans, who recognize and accept the film's faults, can make minor celebrities of the film's cast, though the benefits are not always clear. Cult film stars known for their camp can inject subtle parody or signal when films should not be taken seriously. Campy actors can also provide comic book supervillains for serious, artistically minded films. This can draw fan acclaim and obsession more readily than subtle, method-inspired acting. Mark Chalon Smith of the Los Angeles Times says technical faults may be forgiven if a film makes up for them in other areas, such as camp or transgressive content. Smith states that the early films of John Waters are amateurish and less influential than claimed, but Waters' outrageous vision cements his place in cult cinema. Films such as Myra Breckinridge (1970) and Beyond the Valley of the Dolls (1970) can experience critical reappraisal later, once their camp excess and avant-garde filmmaking are better accepted, and films that are initially dismissed as frivolous are often reassessed as campy. Films that intentionally try to appeal to fans of camp may end up alienating them, as the films become perceived as trying too hard or not authentic.
According to academic Brigid Cherry, nostalgia "is a strong element of certain kinds of cult appeal." When Veoh added many cult films to their site, they cited nostalgia as a factor in their popularity. Academic I. Q. Hunter describes cult films as "New Hollywood in extremis" and a form of nostalgia for that period. Ernest Mathijs instead states that cult films use nostalgia as a form of resistance against progress and capitalistic ideas of a time-based economy. By virtue of its time travel plot, Back to the Future permits nostalgia for both the 1950s and 1980s. Many members of its nostalgic cult following are too young to have been alive during those periods, which Emma Pett interprets as fondness for retro aesthetics, nostalgia for when they first saw the film rather than when it was released, and looking to the past to find a better time period. Similarly, films directed by John Hughes have taken hold in midnight movie venues, trading on nostalgia for the 1980s and an ironic appreciation for their optimism. Mathijs and Sexton describe Grease (1978) as a film nostalgic about an imagined past that has acquired a nostalgic cult following of its own. Other cult films, such as Streets of Fire (1984), create a new fictional world based on nostalgic views of the past. Cult films may also subvert nostalgia, such as The Big Lebowski, which introduces many nostalgic elements and then reveals them as fake and hollow. Scott Pilgrim vs. the World is a more recent example, containing extensive nostalgia for the music and video gaming culture of the 2000s. Nathan Lee of the New York Sun identifies the retro aesthetic and nostalgic pastiche in films such as Donnie Darko as factors in its popularity among midnight movie crowds.
Author Tomas Crowder-Taraborrelli describes midnight movies as a reaction against the political and cultural conservatism in America, and Joan Hawkins identifies the movement as running the gamut from anarchist to libertarian, united in an anti-establishment attitude and punk aesthetic. These films are resistant to simple categorization and are defined by the fanaticism and ritualistic behaviors of their audiences. Midnight movies require a nightlife and an audience willing to invest themselves actively. Hawkins states that these films took a rather bleak point of view due to the living conditions of the artists and the economic prospects of the 1970s. Like the surrealists and dadaists, they satirically attacked not only society but also the very structure of film – a counter-cinema that deconstructs narrative and traditional processes. In the late 1980s and 1990s, midnight movies transitioned from underground showings to home video viewings; eventually, a desire for community brought a resurgence, and The Big Lebowski kick-started a new generation. Demographics shifted, and more hip and mainstream audiences were drawn to them. Although studios expressed skepticism, large audiences were drawn to box-office flops, such as Donnie Darko (2001), The Warriors (1979) and Office Space (1999). Modern midnight movies have retained their popularity while increasingly diverging from the mainstream films shown at midnight. Mainstream cinemas, eager to disassociate themselves from negative associations and increase profits, have begun abandoning midnight screenings. Although classic midnight movies have dropped off in popularity, they still draw reliable crowds.
Although seemingly at odds with each other, art and exploitation films are frequently treated as equal and interchangeable in cult fandom, listed alongside each other and described in similar terms: both are valued for their ability to provoke a response. The most exploitative aspects of art films are thus played up and their academic recognition ignored. This flattening of culture follows the popularity of post-structuralism, which rejects a hierarchy of artistic merit and equates exploitation and art. Mathijs and Sexton state that although cult films are not synonymous with exploitation, as is occasionally assumed, exploitation is a key component; they write that exploitation, which exists on the fringes of the mainstream and deals with taboo subjects, is well-suited for cult followings. Academic David Andrews writes that cult softcore films are "the most masculinized, youth-oriented, populist, and openly pornographic softcore area." The sexploitation films of Russ Meyer were among the first to abandon all hypocritical pretenses of morality and were technically proficient enough to gain a cult following. His persistent vision earned him recognition as an auteur worthy of academic study; director John Waters attributes this to Meyer's ability to create complicated, sexually charged films without resorting to explicit sex. Myrna Oliver described Doris Wishman's exploitation films as "crass, coarse, and camp ... perfect fodder for a cult following." "Sick films", the most disturbing and graphically transgressive films, have their own distinct cult following; these films transcend their roots in exploitation, horror, and art films. In 1960s and 1970s America, exploitation and art films shared audiences and marketing, especially in New York City's grindhouse cinemas.
Mathijs and Sexton state that genre is an important part of cult films; cult films will often mix, mock, or exaggerate the tropes associated with traditional genres. Science fiction, fantasy, and horror are known for their large and dedicated cult followings; as science fiction films become more popular, fans emphasize non-mainstream and less commercial aspects of the genre. B films, which are often conflated with exploitation, are as important to cult films as exploitation. Teodor Reljic of Malta Today states that cult B films are a realistic goal for Malta's burgeoning film industry. Genre films, B films that strictly adhere to genre limitations, can appeal to cult film fans: given their transgressive excesses, horror films are likely to become cult films; films like Galaxy Quest (1999) highlight the importance of cult followings and fandom to science fiction; and authentic martial arts skills in Hong Kong action films can drive them to become cult favorites. Cult musicals can range from the traditional, such as Singin' in the Rain (1952), which appeals to cult audiences through nostalgia, camp, and spectacle, to the more non-traditional, such as Cry-Baby (1990), which parodies musicals, and Rocky Horror, which uses a rock soundtrack. Romantic fairy tale The Princess Bride (1987) failed to attract audiences in its original release, as the studio did not know how to market it. The freedom and excitement associated with cars can be an important part of drawing cult film fans to genre films, and cars can signify action and danger with more ambiguity than a gun. Ad Week writes that cult B films, when released on home video, market themselves and need only enough advertising to raise curiosity or nostalgia.
Animation can provide wide-open vistas for stories. The French film Fantastic Planet (1973) explored ideas beyond the limits of traditional, live-action science fiction films. Ralph Bakshi's career has been marked by controversy: Fritz the Cat (1972), the first animated film to be rated "X" by the MPAA, provoked outrage for its racial caricatures and graphic depictions of sex, and Coonskin (1975) was decried as racist. Bakshi recalls that older animators had tired of "kid stuff" and desired edgier work, whereas younger animators hated his work for "destroying the Disney images". Eventually, his work was reassessed, and cult followings, which include Quentin Tarantino and Robert Rodriguez, developed around several of his films. Heavy Metal (1981) faced similar denunciations from critics. Donald Liebenson of the Los Angeles Times cites the violence and sexual imagery as alienating critics, who did not know what to make of the film. It went on to become a popular midnight movie and was frequently bootlegged by fans, as licensing issues kept it from being released on video for many years.
Phil Hoad of The Guardian identifies Akira (1988) as introducing violent, adult Japanese animation (known as anime) to the West and paving the way for later works. According to academic Brian Ruh, anime is not a cult genre, but the lack of individual fandoms inside anime fandom itself allows cult attention to bleed over and can help spread works internationally. Anime, which is frequently presented as a series (with movies either rising from existing series or spinning off series based on the film), provides its fans with alternative fictional canons and points of view that can drive fan activity. The Ghost in the Shell films, for example, provided Japanese fans with enough bonus material and spinoffs that it encouraged cult tendencies. Markets that did not support the sale of these materials saw less cult activity. The claymation film Gumby: The Movie (1995), which made only $57,100 at the box office against its $2.8 million budget but sold a million copies on VHS alone, was subsequently released on DVD and remastered in high definition for Blu-ray due to its strong cult following. As with many cult films, RiffTrax made its own humorous audio commentary for Gumby: The Movie in 2021.
Sensationalistic documentaries called mondo films replicate the most shocking and transgressive elements of exploitation films. They are usually modeled after "sick films" and cover similar subject matter. In The Cult Film Reader, academics Mathijs and Mendik write that these documentaries often present non-Western societies as "stereotypically mysterious, seductive, immoral, deceptive, barbaric or savage". Though they can be interpreted as racist, Mathijs and Mendik state that they also "exhibit a liberal attitude towards the breaking of cultural taboos". Mondo films like Faces of Death mix real and fake footage freely, and they gain their cult following through the outrage and debate over authenticity that results. Like "so bad it's good" cult films, old propaganda and government hygiene films may be enjoyed ironically by more modern audiences for the camp value of the outdated themes and outlandish claims made about perceived social threats, such as drug use. Academic Barry K. Grant states that Frank Capra's Why We Fight World War II propaganda films are explicitly not cult, because they are "slickly made and have proven their ability to persuade an audience." The sponsored film Mr. B Natural became a cult hit when it was broadcast on the satirical television show Mystery Science Theater 3000; cast member Trace Beaulieu cited these educational shorts as his favorite material to mock on the show. Mark Jancovich states that cult audiences are drawn to these films because of the "very banality or incoherence of their political positions", unlike traditional cult films, which achieve popularity through auteurist radicalism.
Mark Shiel explains the rising popularity of cult films as an attempt by cinephiles and scholars to escape the oppressive conformity and mainstream appeal of even independent film, as well as a lack of condescension in both critics and the films; academic Donna de Ville says it is a chance to subvert the dominance of academics and cinephiles. According to Xavier Mendik, "academics have been really interested in cult movies for quite a while now." Mendik has sought to bring together academic interest and fandom through Cine-Excess, a film festival. I. Q. Hunter states that "it's much easier to be a cultist now, but it is also rather more inconsequential." Citing the mainstream availability of Cannibal Holocaust, Jeffrey Sconce rejects definitions of cult films based on controversy and excess, as they have now become meaningless. Cult films have influenced such diverse industries as cosmetics, music videos, and fashion. Cult films have also shown up in less expected places: as a sign of his popularity, a bronze statue of Ed Wood has been proposed in his hometown, and L'Osservatore Romano, the official newspaper of the Holy See, has courted controversy for its endorsement of cult films and pop culture. When cities attempt to renovate neighborhoods, fans have called attempts to demolish iconic settings from cult films "cultural vandalism". Cult films can also drive tourism, even when it is unwanted. From Latin America, Alejandro Jodorowsky's film El Topo (1970) attracted the attention of rock musicians such as John Lennon, Mick Jagger, and Bob Dylan.
As far back as the 1970s, Attack of the Killer Tomatoes (1978) was designed specifically to be a cult film, and The Rocky Horror Picture Show was produced by 20th Century Fox, a major Hollywood studio. Over its decades-long release, Rocky Horror became the seventh-highest-grossing R-rated film when adjusted for inflation; journalist Matt Singer has questioned whether Rocky Horror's popularity invalidates its cult status. Founded in 1974, Troma Entertainment, an independent studio, became known for both its cult following and its cult films. In the 1980s, Danny Peary's Cult Movies (1981) influenced director Edgar Wright and film critic Scott Tobias of The A.V. Club. The rise of home video had a mainstreaming effect on cult films and cultish behavior, though some collectors would be unlikely to self-identify as cult film fans. Film critic Joe Bob Briggs began reviewing drive-in theater and cult films, though he faced much criticism as an early advocate of exploitation and cult films. Briggs highlights the mainstreaming of cult films by pointing out the respectful obituaries that cult directors have received from formerly hostile publications and the acceptance of politically incorrect films at mainstream film festivals. This acceptance is not universal, though, and some critics have resisted this mainstreaming of paracinema. Beginning in the 1990s, director Quentin Tarantino would have the greatest success in turning cult films mainstream. Tarantino later used his fame to champion obscure cult films that had influenced him and set up the short-lived Rolling Thunder Pictures, which distributed several of his favorite cult films. Tarantino's clout led Phil Hoad of The Guardian to call him the world's most influential director.
As major Hollywood studios and audiences alike have become savvy to cult films, productions once limited to cult appeal have become popular hits, and cult directors have become hot properties known for more mainstream, accessible films. Remarking on the popular trend of remaking cult films, Claude Brodesser-Akner of New York magazine states that Hollywood studios have been superstitiously hoping to recreate past successes rather than trading on nostalgia. Their popularity led some critics to proclaim the death of cult films now that they have finally become successful and mainstream, are too slick to attract a proper cult following, lack context, or are too easily found online. In response, David Church says that cult film fans have retreated to more obscure and difficult-to-find films, often using illegal distribution methods, which preserves the outlaw status of cult films. Virtual spaces, such as online forums and fan sites, replace the traditional fanzines and newsletters. Cult film fans consider themselves collectors, rather than consumers, as they associate consumers with mainstream, Hollywood audiences. This collecting can take the place of the fetishization of a single film. Addressing concerns that DVDs have revoked the cult status of films like Rocky Horror, academic Mikel J. Koven states that small-scale screenings with friends and family can replace midnight showings. Koven also identifies television shows, such as Twin Peaks, as retaining more traditional cult activities inside popular culture. David Lynch himself has not ruled out another television series, though studios have become reluctant to take chances on non-mainstream ideas. Despite this, the Alamo Drafthouse has capitalized on cult films and the surrounding culture through inspiration drawn from Rocky Horror and retro promotional gimmickry. They sell out their shows regularly and have acquired a cult following of their own.
Academic Bob Batchelor, writing in Cult Pop Culture, states that the internet has democratized cult culture and destroyed the line between cult and mainstream. Fans of even the most obscure films can communicate online with each other in vibrant communities. Although known for their big-budget blockbusters, Steven Spielberg and George Lucas have criticized the current Hollywood system of gambling everything on the opening weekend of these productions. Geoffrey Macnab of The Independent instead suggests that Hollywood look to capitalize on cult films, which have exploded in popularity on the internet. The rise of social media has been a boon to cult films. Sites such as Twitter have displaced traditional venues for fandom and courted controversy from cultural critics who are unamused by campy cult films. After a clip from one of his films went viral, director-producer Roger Corman made a distribution deal with YouTube. Found footage originally distributed in cult VHS collections eventually went viral on YouTube, opening it to new generations of fans. Films such as Birdemic (2008) and The Room (2003) gained quick, massive popularity as prominent members of social networking sites discussed them. Their rise as "instant cult classics" bypasses the years of obscurity that most cult films labor under. In response, critics have described the use of viral marketing as astroturfing and an attempt to manufacture cult films.
I. Q. Hunter identifies a prefabricated cult film style which includes "deliberately, insultingly bad films", "slick exercises in dysfunction and alienation", and mainstream films "that sell themselves as worth obsessing over". Writing for NPR, Scott Tobias states that Don Coscarelli, whose previous films effortlessly attracted cult followings, has drifted into this realm. Tobias criticizes Coscarelli as trying too hard to appeal to cult audiences and sacrificing internal consistency for calculated quirkiness. Influenced by the successful online hype of The Blair Witch Project (1999), other films have attempted to draw online cult fandom with the use of prefabricated cult appeal. Snakes on a Plane (2006) is an example that attracted massive attention from curious fans. Uniquely, its cult following preceded the film's release and included speculative parodies of what fans imagined the film might be. This reached the point of convergence culture when fan speculation began to affect the film's production. Although it was proclaimed a cult film and major game-changer before it was released, it failed either to win mainstream audiences or to maintain its cult following. In retrospect, critic Spencer Kornhaber would call it a serendipitous novelty and a footnote to a "more naive era of the Internet". However, it became influential in both marketing and titling. This trend of "instant cult classics" that are hailed yet fail to attain a lasting following is described by Matt Singer, who states that the phrase is an oxymoron.
Cult films are often approached in terms of auteur theory, which states that the director's creative vision drives a film. This has fallen out of favor in academia, creating a disconnect between cult film fans and critics. Matt Hills states that auteur theory can help to create cult films; fans who see a film as continuing a director's creative vision are likely to accept it as cult. According to academic Greg Taylor, auteur theory also helped to popularize cult films when middlebrow audiences found an accessible way to approach avant-garde film criticism. Auteur theory provided an alternative culture for cult film fans while carrying the weight of scholarship. By requiring repeated viewings and extensive knowledge of details, auteur theory naturally appealed to cult film fans. Taylor further states that this was instrumental in allowing cult films to break through to the mainstream. Academic Joe Tompkins states that this auteurism is often highlighted when mainstream success occurs. This may take the place of – and even ignore – political readings of the director. Cult films and directors may be celebrated for their transgressive content, daring, and independence, but Tompkins argues that mainstream recognition requires that they be palatable to corporate interests who stand to gain much from the mainstreaming of cult film culture. While critics may champion revolutionary aspects of filmmaking and political interpretation, Hollywood studios and other corporate interests will instead highlight only the aspects that they wish to legitimize in their own films, such as sensational exploitation. A director like George Romero, whose films are both transgressive and subversive, will have the transgressive aspects highlighted while the subversive aspects are ignored.
{
"paragraph_id": 0,
"text": "A cult film or cult movie, also commonly referred to as a cult classic, is a film that has acquired a cult following. Cult films are known for their dedicated, passionate fanbase which forms an elaborate subculture, members of which engage in repeated viewings, dialogue-quoting, and audience participation. Inclusive definitions allow for major studio productions, especially box-office bombs, while exclusive definitions focus more on obscure, transgressive films shunned by the mainstream. The difficulty in defining the term and subjectivity of what qualifies as a cult film mirror classificatory disputes about art. The term cult film itself was first used in the 1970s to describe the culture that surrounded underground films and midnight movies, though cult was in common use in film analysis for decades prior to that.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cult films trace their origin back to controversial and suppressed films kept alive by dedicated fans. In some cases, reclaimed or rediscovered films have acquired cult followings decades after their original release, occasionally for their camp value. Other cult films have since become well-respected or reassessed as classics; there is debate as to whether these popular and accepted films are still cult films. After failing at the cinema, some cult films have become regular fixtures on cable television or profitable sellers on home video. Others have inspired their own film festivals. Cult films can both appeal to specific subcultures and form their own subcultures. Other media that reference cult films can easily identify which demographics they desire to attract and offer savvy fans an opportunity to demonstrate their knowledge.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Cult films frequently break cultural taboos, and many feature excessive displays of violence, gore, sexuality, profanity, or combinations thereof. This can lead to controversy, censorship, and outright bans; less transgressive films may attract similar amounts of controversy when critics call them frivolous or incompetent. Films that fail to attract requisite amounts of controversy may face resistance when labeled as cult films. Mainstream films and big budget blockbusters have attracted cult followings similar to more underground and lesser known films; fans of these films often emphasize the films' niche appeal and reject the more popular aspects. Fans who like the films for the wrong reasons, such as perceived elements that represent mainstream appeal and marketing, will often be ostracized or ridiculed. Likewise, fans who stray from accepted subcultural scripts may experience similar rejection.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Since the late 1970s, cult films have become increasingly popular. Films that once would have been limited to obscure cult followings are now capable of breaking into the mainstream, and showings of cult films have proved to be a profitable business venture. Overbroad usage of the term has resulted in controversy, as purists state it has become a meaningless descriptor applied to any film that is the slightest bit weird or unconventional; others accuse Hollywood studios of trying to artificially create cult films or use the term as a marketing tactic. Films are frequently stated to be an \"instant cult classic\" now, occasionally before they are released. Some films have acquired massive, quick cult followings, owing to advertisements and posts made by fans spreading virally through social media. Easy access to cult films via video on demand and peer-to-peer file sharing has led some critics to pronounce the death of cult films.",
"title": ""
},
{
"paragraph_id": 4,
"text": "What is a cult film? A cult film is one that has a passionate following, but does not appeal to everybody. James Bond movies are not cult films, but chainsaw movies are. Just because a movie is a cult film does not automatically guarantee quality: some cult movies are very bad; others are very, very good. Some make an awful lot of money at the box office; others make no money at all. Some are considered quality films; others are exploitation. —Alex Cox in his introduction to The Wicker Man on Moviedrome, 1988",
"title": "Definition"
},
{
"paragraph_id": 5,
"text": "A cult film is any film that has a cult following, although the term is not easily defined and can be applied to a wide variety of films. Some definitions exclude films that have been released by major studios or have big budgets, that try specifically to become cult films, or become accepted by mainstream audiences and critics. Cult films are defined by audience reaction as much as by their content. This may take the form of elaborate and ritualized audience participation, film festivals, or cosplay. Over time, the definition has become more vague and inclusive as it drifts away from earlier, stricter views. Increasing use of the term by mainstream publications has resulted in controversy, as cinephiles argue that the term has become meaningless or \"elastic, a catchall for anything slightly maverick or strange\". Academic Mark Shiel has criticized the term itself as being a weak concept, reliant on subjectivity; different groups can interpret films in their own terms. According to feminist scholar Joanne Hollows, this subjectivity causes films with large female cult followings to be perceived as too mainstream and not transgressive enough to qualify as a cult film. Academic Mike Chopra‑Gant says that cult films become decontextualized when studied as a group, and Shiel criticizes this recontextualization as cultural commodification.",
"title": "Definition"
},
{
"paragraph_id": 6,
"text": "In 2008, Cineaste asked a range of academics for their definition of a cult film. Several people defined cult films primarily in terms of their opposition to mainstream films and conformism, explicitly requiring a transgressive element, though others disputed the transgressive potential, given the demographic appeal to conventional moviegoers and mainstreaming of cult films. Jeffrey Andrew Weinstock instead called them mainstream films with transgressive elements. Most definitions also required a strong community aspect, such as obsessed fans or ritualistic behavior. Citing misuse of the term, Mikel J. Koven took a self-described hard-line stance that rejected definitions that use any other criteria. Matt Hills instead stressed the need for an open-ended definition rooted in structuration, where the film and the audience reaction are interrelated and neither is prioritized. Ernest Mathijs focused on the accidental nature of cult followings, arguing that cult film fans consider themselves too savvy to be marketed to, while Jonathan Rosenbaum rejected the continued existence of cult films and called the term a marketing buzzword. Mathijs suggests that cult films help to understand ambiguity and incompleteness in life given the difficulty in even defining the term. That cult films can have opposing qualities – such as good and bad, failure and success, innovative and retro – helps to illustrate that art is subjective and never self-evident. This ambiguity leads critics of postmodernism to accuse cult films of being beyond criticism, as the emphasis is now on personal interpretation rather than critical analysis or metanarratives. These inherent dichotomies can lead audiences to be split between ironic and earnest fans.",
"title": "Definition"
},
{
"paragraph_id": 7,
"text": "Writing in Defining Cult Movies, Jancovich et al. quote academic Jeffrey Sconce, who defines cult films in terms of paracinema, marginal films that exist outside critical and cultural acceptance: everything from exploitation to beach party musicals to softcore pornography. However, they reject cult films as having a single unifying feature; instead, they state that cult films are united in their \"subcultural ideology\" and opposition to mainstream tastes, itself a vague and undefinable term. Cult followings themselves can range from adoration to contempt, and they have little in common except for their celebration of nonconformity – even the bad films ridiculed by fans are artistically nonconformist, albeit unintentionally. At the same time, they state that bourgeois, masculine tastes are frequently reinforced, which makes cult films more of an internal conflict within the bourgeoisie, rather than a rebellion against it. This results in an anti-academic bias despite the use of formal methodologies, such as defamiliarization. This contradiction exists in many subcultures, especially those dependent on defining themselves in terms of opposition to the mainstream. This nonconformity is eventually co-opted by the dominant forces, such as Hollywood, and marketed to the mainstream. Academic Xavier Mendik also defines cult films as opposing the mainstream and further proposes that films can become cult by virtue of their genre or content, especially if it is transgressive. Due to their rejection of mainstream appeal, Mendik says cult films can be more creative and political; times of relative political instability produce more interesting films.",
"title": "Definition"
},
{
"paragraph_id": 8,
"text": "Cult films have existed since the early days of cinema. Film critic Harry Allan Potamkin traces them back to 1910s France and the reception of Pearl White, William S. Hart, and Charlie Chaplin, which he described as \"a dissent from the popular ritual\". Nosferatu (1922) was an unauthorized adaptation of Bram Stoker's Dracula. Stoker's widow sued the production company and drove it to bankruptcy. All known copies of the film were destroyed, and Nosferatu become an early cult film, kept alive by a cult following that circulated illegal bootlegs. Academic Chuck Kleinhans identifies the Marx Brothers as making other early cult films. On their original release, some highly regarded classics from the Golden Age of Hollywood were panned by critics and audiences, relegated to cult status. The Night of the Hunter (1955) was a cult film for years, quoted often and championed by fans, before it was reassessed as an important and influential classic. During this time, American exploitation films and imported European art films were marketed similarly. Although critics Pauline Kael and Arthur Knight argued against arbitrary divisions into high and low culture, American films settled into rigid genres; European art films continued to push the boundaries of simple definitions, and these exploitative art films and artistic exploitation films would go on to influence American cult films. Much like later cult films, these early exploitation films encouraged audience participation, influenced by live theater and vaudeville.",
"title": "General overview"
},
{
"paragraph_id": 9,
"text": "Modern cult films grew from 1960s counterculture and underground films, popular among those who rejected mainstream Hollywood films. These underground film festivals led to the creation of midnight movies, which attracted cult followings. The term cult film itself was an outgrowth of this movement and was first used in the 1970s, though cult had been in use for decades in film analysis with both positive and negative connotations. These films were more concerned with cultural significance than the social justice sought by earlier avant-garde films. Midnight movies became more popular and mainstream, peaking with the release of The Rocky Horror Picture Show (1975), which finally found its audience several years after its release. Eventually, the rise of home video would marginalize midnight movies once again, after which many directors joined the burgeoning independent film scene or went back underground. Home video would give a second life to box-office flops, as positive word-of-mouth or excessive replay on cable television led these films to develop an appreciative audience, as well as obsessive replay and study. For example, The Beastmaster (1982), despite its failure at the box office, became one of the most played movies on American cable television and developed into a cult film. Home video and television broadcasts of cult films were initially greeted with hostility. Joanne Hollows states that they were seen as turning cult films mainstream – in effect, feminizing them by opening them to distracted, passive audiences.",
"title": "General overview"
},
{
"paragraph_id": 10,
"text": "Releases from major studios – such as The Big Lebowski (1998), which was distributed by Universal Studios – can become cult films when they fail at the box office and develop a cult following through reissues, such as midnight movies, festivals, and home video. Hollywood films, due to their nature, are more likely to attract this kind of attention, which leads to a mainstreaming effect of cult culture. With major studios behind them, even financially unsuccessful films can be re-released multiple times, which plays into a trend to capture audiences through repetitious reissues. The constant use of profanity and drugs in otherwise mainstream, Hollywood films, such as The Big Lebowski, can alienate critics and audiences yet lead to a large cult following among more open-minded demographics not often associated with cult films, such as Wall Street bankers and professional soldiers. Thus, even comparatively mainstream films can satisfy the traditional demands of a cult film, perceived by fans as transgressive, niche, and uncommercial. Discussing his reputation for making cult films, Bollywood director Anurag Kashyap said, \"I didn't set out to make cult films. I wanted to make box-office hits.\" Writing in Cult Cinema, academics Ernest Mathijs and Jamie Sexton state that this acceptance of mainstream culture and commercialism is not out of character, as cult audiences have a more complex relationship to these concepts: they are more opposed to mainstream values and excessive commercialism than they are anything else.",
"title": "General overview"
},
{
"paragraph_id": 11,
"text": "In a global context, popularity can vary widely by territory, especially with regard to limited releases. Mad Max (1979) was an international hit – except in America where it became an obscure cult favorite, ignored by critics and available for years only in a dubbed version though it earned over $100M internationally. Foreign cinema can put a different spin on popular genres, such as Japanese horror, which was initially a cult favorite in America. Asian imports to the West are often marketed as exotic cult films and of interchangeable national identity, which academic Chi-Yun Shin criticizes as reductive. Foreign influence can affect fan response, especially on genres tied to a national identity; when they become more global in scope, questions of authenticity may arise. Filmmakers and films ignored in their own country can become the objects of cult adoration in another, producing perplexed reactions in their native country. Cult films can also establish an early viability for more mainstream films both for filmmakers and national cinema. The early cult horror films of Peter Jackson were so strongly associated with his homeland that they affected the international reputation of New Zealand and its cinema. As more artistic films emerged, New Zealand was perceived as a legitimate competitor to Hollywood, which mirrored Jackson's career trajectory. Heavenly Creatures (1994) acquired its own cult following, became a part of New Zealand's national identity, and paved the way for big-budget, Hollywood-style epics, such as Jackson's The Lord of the Rings trilogy.",
"title": "General overview"
},
{
"paragraph_id": 12,
"text": "Mathijs states that cult films and fandom frequently involve nontraditional elements of time and time management. Fans will often watch films obsessively, an activity that is viewed by the mainstream as wasting time yet can be seen as resisting the commodification of leisure time. They may also watch films idiosyncratically: sped up, slowed down, frequently paused, or at odd hours. Cult films themselves subvert traditional views of time – time travel, non-linear narratives, and ambiguous establishments of time are all popular. Mathijs also identifies specific cult film viewing habits, such as viewing horror films on Halloween, sentimental melodrama on Christmas, and romantic films on Valentine's Day. These films are often viewed as marathons where fans can gorge themselves on their favorites. Mathijs states that cult films broadcast on Christmas have a nostalgic factor. These films, ritually watched every season, give a sense of community and shared nostalgia to viewers. New films often have trouble making inroads against the institutions of It's A Wonderful Life (1946) and Miracle on 34th Street (1947). These films provide mild criticism of consumerism while encouraging family values. Halloween, on the other hand, allows flaunting society's taboos and testing one's fears. Horror films have appropriated the holiday, and many horror films debut on Halloween. Mathijs criticizes the over-cultified, commercialized nature of Halloween and horror films, which feed into each other so much that Halloween has turned into an image or product with no real community. Mathijs states that Halloween horror conventions can provide the missing community aspect.",
"title": "General overview"
},
{
"paragraph_id": 13,
"text": "Despite their oppositional nature, cult films can produce celebrities. Like cult films themselves, authenticity is an important aspect of their popularity. Actors can become typecast as they become strongly associated with such iconic roles. Tim Curry, despite his acknowledged range as an actor, found casting difficult after he achieved fame in The Rocky Horror Picture Show. Even when discussing unrelated projects, interviewers frequently bring up the role, which causes him to tire of discussing it. Mary Woronov, known for her transgressive roles in cult films, eventually transitioned to mainstream films. She was expected to recreate the transgressive elements of her cult films within the confines of mainstream cinema. Instead of the complex gender deconstructions of her Andy Warhol films, she became typecast as a lesbian or domineering woman. Sylvia Kristel, after starring in Emmanuelle (1974), found herself highly associated with the film and the sexual liberation of the 1970s. Caught between the transgressive elements of her cult film and the mainstream appeal of soft-core pornography, she was unable to work in anything but exploitation films and Emmanuelle sequels. Despite her immense popularity and cult following, she would rate only a footnote in most histories of European cinema if she was even mentioned. Similarly, Chloë Sevigny has struggled with her reputation as a cult independent film star famous for her daring roles in transgressive films. Cult films can also trap directors. Leonard Kastle, who directed The Honeymoon Killers (1969), never directed another film again. Despite his cult following, which included François Truffaut, he was unable to find financing for any of his other screenplays. Qualities that bring cult films to prominence – such as an uncompromising, unorthodox vision – caused Alejandro Jodorowsky to languish in obscurity for years.",
"title": "General overview"
},
{
"paragraph_id": 14,
"text": "Transgressive films as a distinct artistic movement began in the 1970s. Unconcerned with genre distinctions, they drew inspiration equally from the nonconformity of European art cinema and experimental film, the gritty subject matter of Italian neorealism, and the shocking images of 1960s exploitation. Some used hardcore pornography and horror, occasionally at the same time. In the 1980s, filmmaker Nick Zedd identified this movement as the Cinema of Transgression and later wrote a manifesto. Popular in midnight showings, they were mainly limited to large urban areas, which led academic Joan Hawkins to label them as \"downtown culture\". These films acquired a legendary reputation as they were discussed and debated in alternative weeklies, such as The Village Voice. Home video would finally allow general audiences to see them, which gave many people their first taste of underground film. Ernest Mathijs says that cult films often disrupt viewer expectations, such as giving characters transgressive motivations or focusing attention on elements outside the film. Cult films can also transgress national stereotypes and genre conventions, such as Battle Royale (2000), which broke many rules of teenage slasher films. The reverse – when films based on cult properties lose their transgressive edge – can result in derision and rejection by fans. Audience participation itself can be transgressive, such as breaking long-standing taboos against talking during films and throwing things at the screen.",
"title": "Transgression and censorship"
},
{
"paragraph_id": 15,
"text": "According to Mathijs, critical reception is important to a film's perception as cult, through topicality and controversy. Topicality, which can be regional (such as objection to government funding of the film) or critical (such as philosophical objections to the themes), enables attention and a contextual response. Cultural topics make the film relevant and can lead to controversy, such as a moral panic, which provides opposition. Cultural values transgressed in the film, such as sexual promiscuity, can be attacked by proxy, through attacks on the film. These concerns can vary from culture to culture, and they need not be at all similar. However, Mathijs says the film must invoke metacommentary for it to be more than simply culturally important. While referencing previous arguments, critics may attack its choice of genre or its very right to exist. Taking stances on these varied issues, critics assure their own relevance while helping to elevate the film to cult status. Perceived racist and reductive remarks by critics can rally fans and raise the profile of cult films, an example of which would be Rex Reed's comments about Korean culture in his review of Oldboy (2003). Critics can also polarize audiences and lead debates, such as how Joe Bob Briggs and Roger Ebert dueled over I Spit On Your Grave (1978). Briggs would later contribute a commentary track to the DVD release in which he describes it as a feminist film. Films which do not attract enough controversy may be ridiculed and rejected when suggested as cult films.",
"title": "Transgression and censorship"
},
{
"paragraph_id": 16,
"text": "Academic Peter Hutchings, noting the many definitions of a cult film that require transgressive elements, states that cult films are known in part for their excesses. Both subject matter and its depiction are portrayed in extreme ways that break taboos of good taste and aesthetic norms. Violence, gore, sexual perversity, and even the music can be pushed to stylistic excess far beyond that allowed by mainstream cinema. Film censorship can make these films obscure and difficult to find, common criteria used to define cult films. Despite this, these films remain well-known and prized among collectors. Fans will occasionally express frustration with dismissive critics and conventional analysis, which they believe marginalizes and misinterprets paracinema. In marketing these films, young men are predominantly targeted. Horror films in particular can draw fans who seek the most extreme films. Audiences can also ironically latch on to offensive themes, such as misogyny, using these films as catharsis for the things that they hate most in life. Exploitative, transgressive elements can be pushed to excessive extremes for both humor and satire. Frank Henenlotter faced censorship and ridicule, but he found acceptance among audiences receptive to themes that Hollywood was reluctant to touch, such as violence, drug addiction, and misogyny. Lloyd Kaufman sees his films' political statements as more populist and authentic than the hypocrisy of mainstream films and celebrities. Despite featuring an abundance of fake blood, vomit, and diarrhea, Kaufman's films have attracted positive attention from critics and academics. Excess can also exist as camp, such as films that highlight the excesses of 1980s fashion and commercialism.",
"title": "Transgression and censorship"
},
{
"paragraph_id": 17,
"text": "Films that are influenced by unpopular styles or genres can become cult films. Director Jean Rollin worked within cinéma fantastique, an unpopular genre in modern France. Influenced by American films and early French fantasists, he drifted between art, exploitation, and pornography. His films were reviled by critics, but he retained a cult following drawn by the nudity and eroticism. Similarly, Jess Franco chafed under fascist censorship in Spain but became influential in Spain's horror boom of the 1960s. These transgressive films that straddle the line between art and horror may have overlapping cult followings, each with their own interpretation and reasons for appreciating it. The films that followed Jess Franco were unique in their rejection of mainstream art. Popular among fans of European horror for their subversiveness and obscurity, these later Spanish films allowed political dissidents to criticize the fascist regime within the cloak of exploitation and horror. Unlike most exploitation directors, they were not trying to establish a reputation. They were already established in the art-house world and intentionally chose to work within paracinema as a reaction against the New Spanish Cinema, an artistic revival supported by the fascists. As late as the 1980s, critics still cited Pedro Almodóvar's anti-macho iconoclasm as a rebellion against fascist mores, as he grew from countercultural rebel to mainstream respectability. Transgressive elements that limit a director's appeal in one country can be celebrated or highlighted in another. Takashi Miike has been marketed in the West as a shocking and avant-garde filmmaker despite his many family-friendly comedies, which have not been imported.",
"title": "Transgression and censorship"
},
{
"paragraph_id": 18,
"text": "The transgressive nature of cult films can lead to their censorship. During the 1970s and early 1980s, a wave of explicit, graphic exploitation films caused controversy. Called \"video nasties\" within the UK, they ignited calls for censorship and stricter laws on home video releases, which were largely unregulated. Consequently, the British Board of Film Classification banned many popular cult films due to issues of sex, violence, and incitement to crime. Released during the cannibal boom, Cannibal Holocaust (1980) was banned in dozens of countries and caused the director to be briefly jailed over fears that it was a real snuff film. Although opposed to censorship, director Ruggero Deodato would later agree with cuts made by the BBFC which removed unsimulated animal killings, which limited the film's distribution. Frequently banned films may introduce questions of authenticity as fans question whether they have seen a truly uncensored cut. Cult films have been falsely claimed to have been banned to increase their transgressive reputation and explain their lack of mainstream penetration. Marketing campaigns have also used such claims to raise interest among curious audiences. Home video has allowed cult film fans to import rare or banned films, finally giving them a chance to complete their collection with imports and bootlegs. Cult films previously banned are sometimes released with much fanfare and the fans assumed to be already familiar with the controversy. Personal responsibility is often highlighted, and a strong anti-censorship message may be present. Previously lost scenes cut by studios can be re-added and restore a director's original vision, which draws similar fanfare and acclaim from fans. Imports are sometimes censored to remove elements that would be controversial, such as references to Islamic spirituality in Indonesian cult films.",
"title": "Transgression and censorship"
},
{
"paragraph_id": 19,
"text": "Academics have written of how transgressive themes in cult films can be regressive. David Church and Chuck Kleinhans describe an uncritical celebration of transgressive themes in cult films, including misogyny and racism. Church has also criticized gendered descriptions of transgressive content that celebrate masculinity. Joanne Hollows further identifies a gendered component to the celebration of transgressive themes in cult films, where male terms are used to describe films outside the mainstream while female terms are used to describe mainstream, conformist cinema. Jacinda Read's expansion states that cult films, despite their potential for empowerment of the marginalized, are more often used by politically incorrect males. Knowledgeable about feminism and multiculturalism, they seek a refuge from the academic acceptance of these progressive ideals. Their playful and ironic acceptance of regressive lad culture invites, and even dares, condemnation from academics and the uncool. Thus, cult films become a tool to reinforce mainstream values through transgressive content; Rebecca Feasy states that cultural hierarchies can also be reaffirmed through mockery of films perceived to be lacking masculinity. However, the sexploitation films of Doris Wishman took a feminist approach which avoids and subverts the male gaze and traditional goal-oriented methods. Wishman's subject matter, though exploitative and transgressive, was always framed in terms of female empowerment and the feminine spectator. Her use of common cult film motifs – female nudity and ambiguous gender – were repurposed to comment on feminist topics. Similarly, the films of Russ Meyer were a complicated combination of transgressive, mainstream, progressive, and regressive elements. They attracted both acclaim and denouncement from critics and progressives. Transgressive films imported from cultures that are recognizably different yet still relatable can be used to progressively examine issues in another culture.",
"title": "Transgression and censorship"
},
{
"paragraph_id": 20,
"text": "Cult films can be used to help define or create groups as a form of subcultural capital; knowledge of cult films proves that one is \"authentic\" or \"non-mainstream\". They can be used to provoke an outraged response from the mainstream, which further defines the subculture, as only members could possibly tolerate such deviant entertainment. More accessible films have less subcultural capital; among extremists, banned films will have the most. By referencing cult films, media can identify desired demographics, strengthen bonds with specific subcultures, and stand out among those who understand the intertextuality. Popular films from previous eras may be reclaimed by genre fans long after they have been forgotten by the original audiences. This can be done for authenticity, such as horror fans who seek out now-obscure titles from the 1950s instead of the modern, well-known remakes. Authenticity may also drive fans to deny genre categorization to films perceived as too mainstream or accessible. Authenticity in performance and expertise can drive fan acclaim. Authenticity can also drive fans to decry the mainstream in the form of hostile critics and censors. Especially when promoted by enthusiastic and knowledgeable programmers, choice of venue can be an important part of expressing individuality. Besides creating new communities, cult films can link formerly disparate groups, such as fans and critics. As these groups intermix, they can influence each other, though this may be resisted by older fans, unfamiliar with these new references. In extreme cases, cult films can lead to the creation of religions, such as Dudeism. For their avoidance of mainstream culture and audiences, enjoyment of irony, and celebration of obscure subcultures, academic Martin Roberts compares cult film fans to hipsters.",
"title": "Subcultural appeal and fandom"
},
{
"paragraph_id": 21,
"text": "A film can become the object of a cult following within a particular region or culture if it has unusual significance. For example, Norman Wisdom's films, friendly to Marxist interpretation, amassed a cult following in Albania, as they were among the few Western films allowed by the country's Communist rulers. The Wizard of Oz (1939) and its star, Judy Garland, hold special significance to American and British gay culture, although it is a widely viewed and historically important film in greater American culture. Similarly, James Dean and his brief film career have become icons of alienated youth. Cult films can have such niche appeal that they are only popular within certain subcultures, such as Reefer Madness (1936) and Hemp for Victory (1942) among the stoner subculture. Beach party musicals, popular among American surfers, failed to find an equivalent audience when imported to the United Kingdom. When films target subcultures like this, they may seem unintelligible without the proper cultural capital. Films which appeal to teenagers may offer subcultural identities that are easily recognized and differentiate various subcultural groups. Films which appeal to stereotypical male activities, such as sports, can easily gain strong male cult followings. Sports metaphors are often used in the marketing of cult films to males, such as emphasizing the \"extreme\" nature of the film, which increases the appeal to youth subcultures fond of extreme sports.",
"title": "Subcultural appeal and fandom"
},
{
"paragraph_id": 22,
"text": "Matt Hills' concept of the \"cult blockbuster\" involves cult followings inside larger, mainstream films. Although these are big budget, mainstream films, they still attract cult followings. The cult fans differentiate themselves from ordinary fans in several ways: longstanding devotion to the film, distinctive interpretations, and fan works. Hills identifies three different cult followings for The Lord of the Rings, each with their own fandom separate from the mainstream. Academic Emma Pett identifies Back to the Future (1985) as another example of a cult blockbuster. Although the film was an instant hit when released, it has also developed a nostalgic cult following over the years. The hammy acting by Christopher Lloyd and quotable dialogue have drawn a cult following, as they mimic traditional cult films. Blockbuster science fiction films that include philosophical subtexts, such as The Matrix, allow cult film fans to enjoy them on a higher level than the mainstream. Star Wars, with its large cult following in geek subculture, has been cited as both a cult blockbuster and a cult film. Although a mainstream epic, Star Wars has provided its fans with a spirituality and culture outside of the mainstream.",
"title": "Subcultural appeal and fandom"
},
{
"paragraph_id": 23,
"text": "Fans, in response to the popularity of these blockbusters, will claim elements for themselves while rejecting others. For example, in the Star Wars film series, mainstream criticism of Jar Jar Binks focused on racial stereotyping; although cult film fans will use that to bolster their arguments, he is rejected because he represents mainstream appeal and marketing. Also, instead of valuing textual rarity, fans of cult blockbusters will value repeat viewings. They may also engage in behaviors more traditional for fans of cult television and other serial media, as cult blockbusters are often franchised, preconceived as a film series, or both. To reduce mainstream accessibility, a film series can be self-reflexive and full of in-jokes that only longtime fans can understand. Mainstream critics may ridicule commercially successful directors of cult blockbusters, such as James Cameron, Michael Bay, and Luc Besson, whose films have been called simplistic. This critical backlash may serve to embellish the filmmakers' reception as cult auteurs. In the same way, critics may ridicule fans of cult blockbusters as immature or shallow.",
"title": "Subcultural appeal and fandom"
},
{
"paragraph_id": 24,
"text": "Cult films can create their own subculture. Rocky Horror, originally made to exploit the popularity of glam subculture, became what academic Gina Marchetti called a \"sub-subculture\", a variant that outlived its parent subculture. Although often described as primarily composed of obsessed fans, cult film fandom can include many newer, less experienced members. Familiar with the film's reputation and having watched clips on YouTube, these fans may take the next step and enter the film's fandom. If they are the majority, they may alter or ignore long-standing traditions, such as audience participation rituals; rituals which lack perceived authenticity may be criticized, but accepted rituals bring subcultural capital to veteran fans who introduce them to the newer members. Fans who flaunt their knowledge receive negative reactions. Newer fans may cite the film itself as their reason for attending a showing, but longtime fans often cite the community. Organized fandoms may spread and become popular as a way of introducing new people to the film, as well as theatrical screenings being privileged by the media and fandom itself. Fandom can also be used as a process of legitimation. Fans of cult films, as in media fandom, are frequently producers instead of mere consumers. Unconcerned with traditional views on intellectual property, these fan works are often unsanctioned, transformative, and ignore fictional canon.",
"title": "Subcultural appeal and fandom"
},
{
"paragraph_id": 25,
"text": "Like cult films themselves, magazines and websites dedicated to cult films revel in their self-conscious offensiveness. They maintain a sense of exclusivity by offending mainstream audiences with misogyny, gore, and racism. Obsessive trivia can be used to bore mainstream audiences while building up subcultural capital. Specialist stores on the fringes of society (or websites which prominently partner with hardcore pornographic sites) can be used to reinforce the outsider nature of cult film fandom, especially when they use erotic or gory imagery. By assuming a preexisting knowledge of trivia, non-fans can be excluded. Previous articles and controversies can also be alluded to without explanation. Casual readers and non-fans will thus be left out of discussions and debates, as they lack enough information to meaningfully contribute. When fans like a cult film for the wrong reasons, such as casting or characters aimed at mainstream appeal, they may be ridiculed. Thus, fandom can keep the mainstream at bay while defining themselves in terms of the \"Other\", a philosophical construct divergent from social norms. Commercial aspects of fandom (such as magazines or books) can also be defined in terms of \"otherness\" and thus valid to consume: consumers purchasing independent or niche publications are discerning consumers, but the mainstream is denigrated. Irony or self-deprecating humor can also be used. In online communities, different subcultures attracted to transgressive films can clash over values and criteria for subcultural capital. Even within subcultures, fans who break subcultural scripts, such as denying the affectivity of a disturbing film, will be ridiculed for their lack of authenticity.",
"title": "Subcultural appeal and fandom"
},
{
"paragraph_id": 26,
"text": "The critic Michael Medved characterized examples of the \"so bad it's good\" class of low-budget cult film through books such as The Golden Turkey Awards. These films include financially fruitless and critically scorned films that have become inadvertent comedies to film buffs, such as Plan 9 from Outer Space (1959), Mommie Dearest (1981), The Room (2003), and the Ugandan action comedy film Who Killed Captain Alex? (2010). Similarly, Paul Verhoeven's Showgirls (1995) bombed in theaters but developed a cult following on video. Catching on, Metro-Goldwyn-Mayer capitalized on the film's ironic appeal and marketed it as a cult film. Sometimes, fans will impose their own interpretation of films which have attracted derision, such as reinterpreting an earnest melodrama as a comedy. Jacob deNobel of the Carroll County Times states that films can be perceived as nonsensical or inept when audiences misunderstand avant-garde filmmaking or misinterpret parody. Films such as Rocky Horror can be misinterpreted as \"weird for weirdness' sake\" by people unfamiliar with the cult films that it parodies. deNobel ultimately rejects the use of the label \"so bad it's good\" as mean-spirited and often misapplied. Alamo Drafthouse programmer Zack Carlson has further said that any film which succeeds in entertaining an audience is good, regardless of irony. In francophone culture, \"so bad it's good\" films, known as nanars [Fr], have given rise to a subculture with dedicated websites such as Nanarland, film festivals and viewings in theaters, as well as various books analyzing the phenomenon. The rise of the Internet and on-demand films has led critics to question whether \"so bad it's good\" films have a future now that people have such diverse options in both availability and catalog, though fans eager to experience the worst films ever made can lead to lucrative showings for local theaters and merchandisers.",
"title": "Types"
},
{
"paragraph_id": 27,
"text": "Chuck Kleinhans states that the difference between a guilty pleasure and a cult film can be as simple as the number of fans; David Church raises the question of how many people it takes to form a cult following, especially now that home video makes fans difficult to count. As these cult films become more popular, they can bring varied responses from fans that depend on different interpretations, such as camp, irony, genuine affection, or combinations thereof. Earnest fans, who recognize and accept the film's faults, can make minor celebrities of the film's cast, though the benefits are not always clear. Cult film stars known for their camp can inject subtle parody or signal when films should not be taken seriously. Campy actors can also provide comic book supervillains for serious, artistic-minded films. This can draw fan acclaim and obsession more readily than subtle, method-inspired acting. Mark Chalon Smith of the Los Angeles Times says technical faults may be forgiven if a film makes up for them in other areas, such as camp or transgressive content. Smith states that the early films of John Waters are amateurish and less influential than claimed, but Waters' outrageous vision cements his place in cult cinema. Films such as Myra Breckinridge (1970) and Beyond the Valley of the Dolls (1970) can experience critical reappraisal later, once their camp excess and avant-garde filmmaking are better accepted, and films that are initially dismissed as frivolous are often reassessed as campy. Films that intentionally try to appeal to fans of camp may end up alienating them, as the films become perceived as trying too hard or not authentic.",
"title": "Types"
},
{
"paragraph_id": 28,
"text": "According to academic Brigid Cherry, nostalgia \"is a strong element of certain kinds of cult appeal.\" When Veoh added many cult films to their site, they cited nostalgia as a factor for their popularity. Academic I. Q. Hunter describes cult films as \"New Hollywood in extremis\" and a form of nostalgia for that period. Ernest Mathijs instead states that cult films use nostalgia as a form of resistance against progress and capitalistic ideas of a time-based economy. By virtue of the time travel plot, Back to the Future permits nostalgia for both the 1950s and 1980s. Many members of its nostalgic cult following are too young to have been alive during those periods, which Emma Pett interprets as fondness for retro aesthetics, nostalgia for when they saw the film rather than when it was released, and looking to the past to find a better time period. Similarly, films directed by John Hughes have taken hold in midnight movie venues, trading off of nostalgia for the 1980s and an ironic appreciation for their optimism. Mathijs and Sexton describe Grease (1978) as a film nostalgic about an imagined past that has acquired a nostalgic cult following. Other cult films, such as Streets of Fire (1984), create a new fictional world based on nostalgic views of the past. Cult films may also subvert nostalgia, such as The Big Lebowski, which introduces many nostalgic elements and then reveals them as fake and hollow. Scott Pilgrim vs. the World is a recent example, containing extensive nostalgia for the music and video gaming culture of the 2000s. Nathan Lee of the New York Sun identifies the retro aesthetic and nostalgic pastiche in films such as Donnie Darko as factors in its popularity among midnight movie crowds.",
"title": "Types"
},
{
"paragraph_id": 29,
"text": "Author Tomas Crowder-Taraborrelli describes midnight movies as a reaction against the political and cultural conservatism in America, and Joan Hawkins identifies the movement as running the gamut from anarchist to libertarian, united in their anti-establishment attitude and punk aesthetic. These films are resistant to simple categorization and are defined by the fanaticism and ritualistic behaviors of their audiences. Midnight movies require a night life and an audience willing to invest themselves actively. Hawkins states that these films took a rather bleak point of view due to the living conditions of the artists and the economic prospects of the 1970s. Like the surrealists and dadaists, they not only satirically attacked society but also the very structure of film – a counter-cinema that deconstructs narrative and traditional processes. In the late 1980s and 1990s, midnight movies transitioned from underground showings to home video viewings; eventually, a desire for community brought a resurgence, and The Big Lebowski kick-started a new generation. Demographics shifted, and more hip and mainstream audiences were drawn to them. Although studios expressed skepticism, large audiences were drawn to box-office flops, such as Donnie Darko (2001), The Warriors (1979) and Office Space (1999). Modern midnight movies retain their popularity and have been strongly diverging from mainstream films shown at midnight. Mainstream cinemas, eager to disassociate themselves from negative associations and increase profits, have begun abandoning midnight screenings. Although classic midnight movies have dropped off in popularity, they still bring reliable crowds.",
"title": "Types"
},
{
"paragraph_id": 30,
"text": "Although seemingly at odds with each other, art and exploitation films are frequently treated as equal and interchangeable in cult fandom, listed alongside each other and described in similar terms: their ability to provoke a response. The most exploitative aspects of art films are thus played up and their academic recognition ignored. This flattening of culture follows the popularity of post-structuralism, which rejects a hierarchy of artistic merit and equates exploitation and art. Mathijs and Sexton state that although cult films are not synonymous with exploitation, as is occasionally assumed, this is a key component; they write that exploitation, which exists on the fringes of the mainstream and deals with taboo subjects, is well-suited for cult followings. Academic David Andrews writes that cult softcore films are \"the most masculinized, youth-oriented, populist, and openly pornographic softcore area.\" The sexploitation films of Russ Meyer were among the first to abandon all hypocritical pretenses of morality and were technically proficient enough to gain a cult following. His persistent vision saw him received as an auteur worthy of academic study; director John Waters attributes this to Meyer's ability to create complicated, sexually charged films without resorting to explicit sex. Myrna Oliver described Doris Wishman's exploitation films as \"crass, coarse, and camp ... perfect fodder for a cult following.\" \"Sick films\", the most disturbing and graphically transgressive films, have their own distinct cult following; these films transcend their roots in exploitation, horror, and art films. In 1960s and 1970s America, exploitation and art films shared audiences and marketing, especially in New York City's grindhouse cinemas.",
"title": "Types"
},
{
"paragraph_id": 31,
"text": "Mathijs and Sexton state that genre is an important part of cult films; cult films will often mix, mock, or exaggerate the tropes associated with traditional genres. Science fiction, fantasy, and horror are known for their large and dedicated cult followings; as science fiction films become more popular, fans emphasize non-mainstream and less commercial aspects of it. B films, which are often conflated with exploitation, are as important to cult films as exploitation. Teodor Reljic of Malta Today states that cult B films are a realistic goal for Malta's burgeoning film industry. Genre films, B films that strictly adhere to genre limitations, can appeal to cult film fans: given their transgressive excesses, horror films are likely to become to cult films; films like Galaxy Quest (1999) highlight the importance of cult followings and fandom to science fiction; and authentic martial arts skills in Hong Kong action films can drive them to become cult favorites. Cult musicals can range from the traditional, such as Singin' in the Rain (1952), which appeal to cult audiences through nostalgia, camp, and spectacle, to the more non-traditional, such as Cry-Baby (1990), which parodies musicals, and Rocky Horror, which uses a rock soundtrack. Romantic fairy tale The Princess Bride (1987) failed to attract audiences in its original release, as the studio did not know how to market it. The freedom and excitement associated with cars can be an important part of drawing cult film fans to genre films, and they can signify action and danger with more ambiguity than a gun. Ad Week writes that cult B films, when released on home video, market themselves and need only enough advertising to raise curiosity or nostalgia.",
"title": "Types"
},
{
"paragraph_id": 32,
"text": "Animation can provide wide open vistas for stories. The French film Fantastic Planet (1973) explored ideas beyond the limits of traditional, live-action science fiction films. Ralph Bakshi's career has been marked with controversy: Fritz the Cat (1972), the first animated film to be rated \"X\" by the MPAA, provoked outrage for its racial caricatures and graphic depictions of sex, and Coonskin (1975) was decried as racist. Bakshi recalls that older animators had tired of \"kid stuff\" and desired edgier work, whereas younger animators hated his work for \"destroying the Disney images\". Eventually, his work would be reassessed and cult followings, which include Quentin Tarantino and Robert Rodriguez, developed around several of his films. Heavy Metal (1981) faced similar denunciations from critics. Donald Liebenson of the Los Angeles Times cites the violence and sexual imagery as alienating critics, who did not know what to make of the film. It would go on to become a popular midnight movie and frequently bootlegged by fans, as licensing issues kept it from being released on video for many years.",
"title": "Types"
},
{
"paragraph_id": 33,
"text": "Phil Hoad of The Guardian identifies Akira (1988) as introducing violent, adult Japanese animation (known as anime) to the West and paving the way for later works. Anime, according to academic Brian Ruh, is not a cult genre, but the lack of individual fandoms inside anime fandom itself lends itself to a bleeding over of cult attention and can help spread works internationally. Anime, which is frequently presented as a series (with movies either rising from existing series, or spinning off series based on the film), provides its fans with alternative fictional canons and points of view that can drive fan activity. The Ghost in the Shell films, for example, provided Japanese fans with enough bonus material and spinoffs that it encouraged cult tendencies. Markets that did not support the sale of these materials saw less cult activity. The claymation film Gumby: The Movie (1995), which made only $57,100 at the box office against its $2.8 million budget but sold a million copies on VHS alone, was subsequently released on DVD and remastered in high definition for Blu-ray due to its strong cult following. Like many cult films, RiffTrax made their own humorous audio commentary for Gumby: The Movie in 2021.",
"title": "Types"
},
{
"paragraph_id": 34,
"text": "Sensationalistic documentaries called mondo films replicate the most shocking and transgressive elements of exploitation films. They are usually modeled after \"sick films\" and cover similar subject matter. In The Cult Film Reader, academics Mathijs and Mendik write that these documentaries often present non-Western societies as \"stereotypically mysterious, seductive, immoral, deceptive, barbaric or savage\". Though they can be interpreted as racist, Mathijs and Mendik state that they also \"exhibit a liberal attitude towards the breaking of cultural taboos\". Mondo films like Faces of Death mix real and fake footage freely, and they gain their cult following through the outrage and debate over authenticity that results. Like \"so bad it's good\" cult films, old propaganda and government hygiene films may be enjoyed ironically by more modern audiences for the camp value of the outdated themes and outlandish claims made about perceived social threats, such as drug use. Academic Barry K. Grant states that Frank Capra's Why We Fight World War II propaganda films are explicitly not cult, because they are \"slickly made and have proven their ability to persuade an audience.\" The sponsored film Mr. B Natural became a cult hit when it was broadcast on the satirical television show Mystery Science Theater 3000; cast member Trace Beaulieu cited these educational shorts as his favorite to mock on the show. Mark Jancovich states that cult audiences are drawn to these films because of their \"very banality or incoherence of their political positions\", unlike traditional cult films, which achieve popularity through auteurist radicalism.",
"title": "Types"
},
{
"paragraph_id": 35,
"text": "Mark Shiel explains the rising popularity of cult films as an attempt by cinephiles and scholars to escape the oppressive conformity and mainstream appeal of even independent film, as well as a lack of condescension in both critics and the films; Academic Donna de Ville says it is a chance to subvert the dominance of academics and cinephiles. According to Xavier Mendik, \"academics have been really interested in cult movies for quite a while now.\" Mendik has sought to bring together academic interest and fandom through Cine-Excess, a film festival. I. Q. Hunter states that \"it's much easier to be a cultist now, but it is also rather more inconsequential.\" Citing the mainstream availability of Cannibal Holocaust, Jeffrey Sconce rejects definitions of cult films based on controversy and excess, as they've now become meaningless. Cult films have influenced such diverse industries as cosmetics, music videos, and fashion. Cult films have shown up in less expected places; as a sign of his popularity, a bronze statue of Ed Wood has been proposed in his hometown, and L'Osservatore Romano, the official newspaper of the Holy See, has courted controversy for its endorsement of cult films and pop culture. When cities attempt to renovate neighborhoods, fans have called attempts to demolish iconic settings from cult films \"cultural vandalism\". Cult films can also drive tourism, even when it is unwanted. From Latin America, Alejandro Jodorowsky's film El Topo (1970) has attracted attention of rock musicians such as John Lennon, Mick Jagger, and Bob Dylan.",
"title": "Mainstream popularity"
},
{
"paragraph_id": 36,
"text": "As far back as the 1970s, Attack of the Killer Tomatoes (1978) was designed specifically to be a cult film, and The Rocky Horror Picture Show was produced by 20th Century Fox, a major Hollywood studio. Over its decades-long release, Rocky Horror became the seventh highest grossing R-rated film when adjusted for inflation; journalist Matt Singer has questioned whether Rocky Horror's popularity invalidates its cult status. Founded in 1974, Troma Entertainment, an independent studio, would become known for both its cult following and cult films. In the 1980s, Danny Peary's Cult Movies (1981) would influence director Edgar Wright and film critic Scott Tobias of The A.V. Club. The rise of home video would have a mainstreaming effect on cult films and cultish behavior, though some collectors would be unlikely to self-identify as cult film fans. Film critic Joe Bob Briggs began reviewing drive-in theater and cult films, though he faced much criticism as an early advocate of exploitation and cult films. Briggs highlights the mainstreaming of cult films by pointing out the respectful obituaries that cult directors have received from formerly hostile publications and acceptance of politically incorrect films at mainstream film festivals. This acceptance is not universal, though, and some critics have resisted this mainstreaming of paracinema. Beginning in the 1990s, director Quentin Tarantino would have the greatest success in turning cult films mainstream. Tarantino later used his fame to champion obscure cult films that had influenced him and set up the short-lived Rolling Thunder Pictures, which distributed several of his favorite cult films. Tarantino's clout led Phil Hoad of The Guardian to call Tarantino the world's most influential director.",
"title": "Mainstream popularity"
},
{
"paragraph_id": 37,
"text": "As major Hollywood studios and audiences both become savvy to cult films, productions once limited to cult appeal have instead become popular hits, and cult directors have become hot properties known for more mainstream and accessible films. Remarking on the popular trend of remaking cult films, Claude Brodesser-Akner of New York magazine states that Hollywood studios have been superstitiously hoping to recreate past successes rather than trading on nostalgia. Their popularity would bring some critics to proclaim the death of cult films now that they have finally become successful and mainstream, are too slick to attract a proper cult following, lack context, or are too easily found online. In response, David Church says that cult film fans have retreated to more obscure and difficult to find films, often using illegal distribution methods, which preserves the outlaw status of cult films. Virtual spaces, such as online forums and fan sites, replace the traditional fanzines and newsletters. Cult film fans consider themselves collectors, rather than consumers, as they associate consumers with mainstream, Hollywood audiences. This collecting can take the place of fetishization of a single film. Addressing concerns that DVDs have revoked the cult status of films like Rocky Horror, academic Mikel J. Koven states that small scale screenings with friends and family can replace midnight showings. Koven also identifies television shows, such as Twin Peaks, as retaining more traditional cult activities inside popular culture. David Lynch himself has not ruled out another television series, as studios have become reluctant to take chances on non-mainstream ideas. Despite this, the Alamo Drafthouse has capitalized on cult films and the surrounding culture through inspiration drawn from Rocky Horror and retro promotional gimmickry. They sell out their shows regularly and have acquired a cult following of their own.",
"title": "Mainstream popularity"
},
{
"paragraph_id": 38,
"text": "Academic Bob Batchelor, writing in Cult Pop Culture, states that the internet has democratized cult culture and destroyed the line between cult and mainstream. Fans of even the most obscure films can communicate online with each other in vibrant communities. Although known for their big-budget blockbusters, Steven Spielberg and George Lucas have criticized the current Hollywood system of gambling everything on the opening weekend of these productions. Geoffrey Macnab of The Independent instead suggests that Hollywood look to capitalize on cult films, which have exploded in popularity on the internet. The rise of social media has been a boon to cult films. Sites such as Twitter have displaced traditional venues for fandom and courted controversy from cultural critics who are unamused by campy cult films. After a clip from one of his films went viral, director-producer Roger Corman made a distribution deal with YouTube. Found footage which had originally been distributed as cult VHS collections eventually went viral on YouTube, which opened them to new generations of fans. Films such as Birdemic (2008) and The Room (2003) gained quick, massive popularity, as prominent members of social networking sites discussed them. Their rise as \"instant cult classics\" bypasses the years of obscurity that most cult films labor under. In response, critics have described the use of viral marketing as astroturfing and an attempt to manufacture cult films.",
"title": "Mainstream popularity"
},
{
"paragraph_id": 39,
"text": "I. Q. Hunter identifies a prefabricated cult film style which includes \"deliberately, insulting bad films\", \"slick exercises in dysfunction and alienation\", and mainstream films \"that sell themselves as worth obsessing over\". Writing for NPR, Scott Tobias states that Don Coscarelli, whose previous films effortlessly attracted cult followings, has drifted into this realm. Tobias criticizes Coscarelli as trying too hard to appeal to cult audiences and sacrificing internal consistency for calculated quirkiness. Influenced by the successful online hype of The Blair Witch Project (1999), other films have attempted to draw online cult fandom with the use of prefabricated cult appeal. Snakes on a Plane (2006) is an example that attracted massive attention from curious fans. Uniquely, its cult following preceded the film's release and included speculative parodies of what fans imagined the film might be. This reached the point of convergence culture when fan speculation began to impact on the film's production. Although it was proclaimed a cult film and major game-changer before it was released, it failed to win either mainstream audiences or maintain its cult following. In retrospect, critic Spencer Kornhaber would call it a serendipitous novelty and a footnote to a \"more naive era of the Internet\". However, it became influential in both marketing and titling. This trend of \"instant cult classics\" which are hailed yet fail to attain a lasting following is described by Matt Singer, who states that the phrase is an oxymoron.",
"title": "Mainstream popularity"
},
{
"paragraph_id": 40,
"text": "Cult films are often approached in terms of auteur theory, which states that the director's creative vision drives a film. This has fallen out of favor in academia, creating a disconnect between cult film fans and critics. Matt Hills states that auteur theory can help to create cult films; fans that see a film as continuing a director's creative vision are likely to accept it as cult. According to academic Greg Taylor, auteur theory also helped to popularize cult films when middlebrow audiences found an accessible way to approach avant-garde film criticism. Auteur theory provided an alternative culture for cult film fans while carrying the weight of scholarship. By requiring repeated viewings and extensive knowledge of details, auteur theory naturally appealed to cult film fans. Taylor further states that this was instrumental in allowing cult films to break through to the mainstream. Academic Joe Tompkins states that this auteurism is often highlighted when mainstream success occurs. This may take the place of – and even ignore – political readings of the director. Cult films and directors may be celebrated for their transgressive content, daring, and independence, but Tompkins argues that mainstream recognition requires they be palatable to corporate interests who stand to gain much from the mainstreaming of cult film culture. While critics may champion revolutionary aspects of filmmaking and political interpretation, Hollywood studios and other corporate interests will instead highlight only the aspects that they wish to legitimize in their own films, such as sensational exploitation. Someone like George Romero, whose films are both transgressive and subversive, will have the transgressive aspects highlighted while the subversive aspects are ignored.",
"title": "Mainstream popularity"
}
] | A cult film or cult movie, also commonly referred to as a cult classic, is a film that has acquired a cult following. Cult films are known for their dedicated, passionate fanbase which forms an elaborate subculture, members of which engage in repeated viewings, dialogue-quoting, and audience participation. Inclusive definitions allow for major studio productions, especially box-office bombs, while exclusive definitions focus more on obscure, transgressive films shunned by the mainstream. The difficulty in defining the term and subjectivity of what qualifies as a cult film mirror classificatory disputes about art. The term cult film itself was first used in the 1970s to describe the culture that surrounded underground films and midnight movies, though cult was in common use in film analysis for decades prior to that. Cult films trace their origin back to controversial and suppressed films kept alive by dedicated fans. In some cases, reclaimed or rediscovered films have acquired cult followings decades after their original release, occasionally for their camp value. Other cult films have since become well-respected or reassessed as classics; there is debate as to whether these popular and accepted films are still cult films. After failing at the cinema, some cult films have become regular fixtures on cable television or profitable sellers on home video. Others have inspired their own film festivals. Cult films can both appeal to specific subcultures and form their own subcultures. Other media that reference cult films can easily identify which demographics they desire to attract and offer savvy fans an opportunity to demonstrate their knowledge. Cult films frequently break cultural taboos, and many feature excessive displays of violence, gore, sexuality, profanity, or combinations thereof. This can lead to controversy, censorship, and outright bans; less transgressive films may attract similar amounts of controversy when critics call them frivolous or incompetent. Films that fail to attract requisite amounts of controversy may face resistance when labeled as cult films. Mainstream films and big budget blockbusters have attracted cult followings similar to more underground and lesser known films; fans of these films often emphasize the films' niche appeal and reject the more popular aspects. Fans who like the films for the wrong reasons, such as perceived elements that represent mainstream appeal and marketing, will often be ostracized or ridiculed. Likewise, fans who stray from accepted subcultural scripts may experience similar rejection. Since the late 1970s, cult films have become increasingly popular. Films that once would have been limited to obscure cult followings are now capable of breaking into the mainstream, and showings of cult films have proved to be a profitable business venture. Overbroad usage of the term has resulted in controversy, as purists state it has become a meaningless descriptor applied to any film that is the slightest bit weird or unconventional; others accuse Hollywood studios of trying to artificially create cult films or use the term as a marketing tactic. Films are frequently stated to be an "instant cult classic" now, occasionally before they are released. Some films have acquired massive, quick cult followings, owing to advertisements and posts made by fans spreading virally through social media. Easy access to cult films via video on demand and peer-to-peer file sharing has led some critics to pronounce the death of cult films. 
| 2001-09-04T10:37:44Z | 2023-12-31T04:37:11Z | [
"Template:Portal",
"Template:Reflist",
"Template:Use mdy dates",
"Template:Anchor",
"Template:See also",
"Template:'",
"Template:Cite web",
"Template:Cbignore",
"Template:Citation",
"Template:Good article",
"Template:Interlanguage link",
"Template:Cite journal",
"Template:Cite book",
"Template:Cite magazine",
"Template:Short description",
"Template:Quote box",
"Template:Rp",
"Template:Cite news",
"Template:Film genres",
"Template:Redirect"
] | https://en.wikipedia.org/wiki/Cult_film |
5,646 | Constantinople | Constantinople (see other names) became the capital of the Roman Empire during the reign of Constantine the Great in 330. Following the collapse of the Western Roman Empire in the late 5th century, Constantinople remained the capital of the Eastern Roman Empire (also known as the Byzantine Empire; 330–1204 and 1261–1453), the Latin Empire (1204–1261), and the Ottoman Empire (1453–1922). Following the Turkish War of Independence, the Turkish capital moved to Ankara. Officially renamed Istanbul in 1930, the city is today the largest city and financial centre of Turkey and the largest city in Europe, straddling the Bosporus strait, lying in both Europe and Asia.
In 324, after the Western and Eastern Roman Empires were reunited, the ancient city of Byzantium was selected to serve as the new capital of the Roman Empire, and the city was renamed Nova Roma, or 'New Rome', by Emperor Constantine the Great. On 11 May 330, it was renamed Constantinople and dedicated to Constantine. Constantinople is generally considered to be the center and the "cradle of Orthodox Christian civilization". From the mid-5th century to the early 13th century, Constantinople was the largest and wealthiest city in Europe. The city became famous for its architectural masterpieces, such as Hagia Sophia, the cathedral of the Eastern Orthodox Church, which served as the seat of the Ecumenical Patriarchate; the sacred Imperial Palace, where the emperors lived; the Hippodrome; the Golden Gate of the Land Walls; and opulent aristocratic palaces. The University of Constantinople was founded in the 5th century, and the city contained artistic and literary treasures before it was sacked in 1204 and 1453, including its vast Imperial Library, which held the remnants of the Library of Alexandria and some 100,000 volumes. The city was the home of the Ecumenical Patriarch of Constantinople and guardian of Christendom's holiest relics such as the Crown of Thorns and the True Cross.
Constantinople was famous for its massive and complex fortifications, which ranked among the most sophisticated defensive architecture of antiquity. The Theodosian Walls consisted of a double wall lying about 2 kilometres (1.2 mi) to the west of the first wall and a moat with palisades in front. Constantinople's location between the Golden Horn and the Sea of Marmara reduced the land area that needed defensive walls. The city was built intentionally to rival Rome, and it was claimed that several elevations within its walls matched Rome's 'seven hills'. The impenetrable defenses enclosed magnificent palaces, domes, and towers, the result of prosperity Constantinople achieved as the gateway between two continents (Europe and Asia) and two seas (the Mediterranean and the Black Sea). Although the city was besieged on numerous occasions by various armies, its defenses proved impregnable for nearly nine hundred years.
In 1204, however, the armies of the Fourth Crusade took and devastated the city, and for several decades, its inhabitants resided under Latin occupation in a dwindling and depopulated city. In 1261 the Byzantine Emperor Michael VIII Palaiologos liberated the city, and after the restoration under the Palaiologos dynasty, it enjoyed a partial recovery. With the advent of the Ottoman Empire in 1299, the Byzantine Empire began to lose territories, and the city began to lose population. By the early 15th century, the Byzantine Empire was reduced to just Constantinople and its environs, along with Morea in Greece, making it an enclave inside the Ottoman Empire. The city was finally besieged and conquered by the Ottoman Empire in 1453, remaining under its control until the early 20th century, after which it was renamed Istanbul under the Empire's successor state, Turkey.
According to Pliny the Elder in his Natural History, the first known name of a settlement on the site of Constantinople was Lygos, a settlement likely of Thracian origin founded between the 13th and 11th centuries BC. The site, according to the founding myth of the city, was abandoned by the time Greek settlers from the city-state of Megara founded Byzantium (Ancient Greek: Βυζάντιον, Byzántion) in around 657 BC, across from the town of Chalcedon on the Asiatic side of the Bosphorus.
The origins of the name of Byzantion, more commonly known by the later Latin Byzantium, are not entirely clear, though some suggest it is of Thracian origin. The founding myth of the city holds that the settlement was named after the leader of the Megarian colonists, Byzas. The later Byzantines of Constantinople themselves would maintain that the city was named in honor of two men, Byzas and Antes, though this was more likely just a play on the word Byzantion.
The city was briefly renamed Augusta Antonina in the early 3rd century AD by the Emperor Septimius Severus (193–211), who razed the city to the ground in 196 for supporting a rival contender in the civil war and had it rebuilt in honor of his son Marcus Aurelius Antoninus (who succeeded him as Emperor), popularly known as Caracalla. The name appears to have been quickly forgotten and abandoned, and the city reverted to Byzantium/Byzantion after either the assassination of Caracalla in 217 or, at the latest, the fall of the Severan dynasty in 235.
Byzantium took on the name of Constantinople (Greek: Κωνσταντινούπολις, romanized: Kōnstantinoupolis; "city of Constantine") after its refoundation under Roman emperor Constantine I, who transferred the capital of the Roman Empire to Byzantium in 330 and designated his new capital officially as Nova Roma (Νέα Ῥώμη) 'New Rome'. During this time, the city was also called 'Second Rome', 'Eastern Rome', and Roma Constantinopolitana (Latin for 'Constantinopolitan Rome'). As the city became the sole remaining capital of the Roman Empire after the fall of the West, and its wealth, population, and influence grew, the city also came to have a multitude of nicknames.
As the largest and wealthiest city in Europe during the 4th–13th centuries and a center of culture and education of the Mediterranean basin, Constantinople came to be known by prestigious titles such as Basileuousa (Queen of Cities) and Megalopolis (the Great City) and was, in colloquial speech, commonly referred to as just Polis (ἡ Πόλις) 'the City' by Constantinopolitans and provincial Byzantines alike.
In the languages of other peoples, Constantinople was referred to just as reverently. The medieval Vikings, who had contacts with the empire through their expansion in eastern Europe (Varangians), used the Old Norse name Miklagarðr (from mikill 'big' and garðr 'city'), and later Miklagard and Miklagarth. In Arabic, the city was sometimes called Rūmiyyat al-Kubra (Great City of the Romans) and in Persian as Takht-e Rum (Throne of the Romans).
In East and South Slavic languages, including in Kievan Rus', Constantinople has been referred to as Tsargrad (Царьград) or Carigrad, 'City of the Caesar (Emperor)', from the Slavonic words tsar ('Caesar' or 'King') and grad ('city'). This was presumably a calque on a Greek phrase such as Βασιλέως Πόλις (Vasileos Polis), 'the city of the emperor [king]'.
In Persian the city was also called Asitane (the Threshold of the State), and in Armenian, it was called Gosdantnubolis (City of Constantine).
The modern Turkish name for the city, İstanbul, derives from the Greek phrase eis tin Polin (εἰς τὴν πόλιν), meaning '(in)to the city'. This name was used in colloquial speech in Turkish alongside Kostantiniyye, the more formal adaptation of the original Constantinople, during the period of Ottoman rule, while western languages mostly continued to refer to the city as Constantinople until the early 20th century. In 1928, the Turkish alphabet was changed from Arabic script to Latin script. After that, as part of the Turkification movement, Turkey started to urge other countries to use Turkish names for Turkish cities, instead of other transliterations to Latin script that had been used in Ottoman times, and the city came to be known as Istanbul and its variations in most world languages.
The name Constantinople is still used by members of the Eastern Orthodox Church in the title of one of their most important leaders, the Orthodox patriarch based in the city, referred to as "His Most Divine All-Holiness the Archbishop of Constantinople New Rome and Ecumenical Patriarch". In Greece today, the city is still called Konstantinoúpoli(s) (Κωνσταντινούπολις/Κωνσταντινούπολη) or simply just "the City" (Η Πόλη).
Constantinople was founded by the Roman emperor Constantine I (272–337) in 324 on the site of an already-existing city, Byzantium, which was settled in the early days of Greek colonial expansion, in around 657 BC, by colonists of the city-state of Megara. This is the first major settlement that would develop on the site of later Constantinople, but the first known settlement was that of Lygos, referred to in Pliny's Natural History. Apart from this, little is known about this initial settlement. The site, according to the founding myth of the city, was abandoned by the time Greek settlers from the city-state of Megara founded Byzantium (Βυζάντιον) in around 657 BC, across from the town of Chalcedon on the Asiatic side of the Bosphorus.
Hesychius of Miletus wrote that some "claim that people from Megara, who derived their descent from Nisos, sailed to this place under their leader Byzas, and invent the fable that his name was attached to the city". Some versions of the founding myth say Byzas was the son of a local nymph, while others say he was conceived by one of Zeus' daughters and Poseidon. Hesychius also gives alternate versions of the city's founding legend, which he attributed to old poets and writers:
It is said that the first Argives, after having received this prophecy from Pythia – "Blessed are those who will inhabit that holy city, a narrow strip of the Thracian shore at the mouth of the Pontos, where two pups drink of the gray sea, where fish and stag graze on the same pasture" – set up their dwellings at the place where the rivers Kydaros and Barbyses have their estuaries, one flowing from the north, the other from the west, and merging with the sea at the altar of the nymph called Semestre.
The city maintained independence as a city-state until it was annexed by Darius I into the Persian Empire in 512 BC; Darius saw the site as the optimal location to construct a pontoon bridge crossing into Europe, as Byzantium was situated at the narrowest point of the Bosphorus strait. Persian rule lasted until 478 BC when, as part of the Greek counterattack to the Second Persian invasion of Greece, a Greek army led by the Spartan general Pausanias captured the city, which remained an independent, yet subordinate, city under the Athenians, and later under the Spartans after 411 BC. A farsighted treaty with the emergent power of Rome in c. 150 BC, which stipulated tribute in exchange for independent status, allowed it to enter Roman rule unscathed. This treaty would pay dividends retrospectively as Byzantium would maintain this independent status, and prosper under peace and stability in the Pax Romana, for nearly three centuries until the late 2nd century AD.
Byzantium was never a major influential city-state like Athens, Corinth or Sparta, but it enjoyed relative peace and steady growth as a prosperous trading city, owing to its remarkable position. The site lay astride the land route from Europe to Asia and the seaway from the Black Sea to the Mediterranean, and had in the Golden Horn an excellent and spacious harbor. Already in Greek and early Roman times, Byzantium was famous for a strategic geographic position that made it difficult to besiege and capture, and its place at the crossroads of the Asiatic-European overland trade route and as the gateway between the Mediterranean and Black Seas made it too valuable a settlement to abandon, as Emperor Septimius Severus later realized when he razed the city to the ground for supporting the claim of Pescennius Niger. The move was greatly criticized by the contemporary consul and historian Cassius Dio, who said that Severus had destroyed "a strong Roman outpost and a base of operations against the barbarians from Pontus and Asia". Severus rebuilt Byzantium towards the end of his reign, briefly renaming it Augusta Antonina and fortifying it with a new city wall in his name, the Severan Wall.
Constantine had altogether more colourful plans. Having restored the unity of the Empire, and being in the midst of major governmental reforms as well as sponsoring the consolidation of the Christian church, he was well aware that Rome was an unsatisfactory capital. Rome was too far from the frontiers, and hence from the armies and the imperial courts, and it offered an undesirable playground for disaffected politicians. Yet it had been the capital of the state for over a thousand years, and it might have seemed unthinkable to suggest that the capital be moved to a different location. Nevertheless, Constantine identified the site of Byzantium as the right place: a place where an emperor could sit, readily defended, with easy access to the Danube or the Euphrates frontiers, his court supplied from the rich gardens and sophisticated workshops of Roman Asia, his treasuries filled by the wealthiest provinces of the Empire.
Constantinople was built over six years, and consecrated on 11 May 330. Constantine divided the expanded city, like Rome, into 14 regions, and ornamented it with public works worthy of an imperial metropolis. Yet, at first, Constantine's new Rome did not have all the dignities of old Rome. It possessed a proconsul, rather than an urban prefect. It had no praetors, tribunes, or quaestors. Although it did have senators, they held the title clarus, not clarissimus, like those of Rome. It also lacked the panoply of other administrative offices regulating the food supply, police, statues, temples, sewers, aqueducts, or other public works. The new programme of building was carried out in great haste: columns, marbles, doors, and tiles were taken wholesale from the temples of the empire and moved to the new city. In similar fashion, many of the greatest works of Greek and Roman art were soon to be seen in its squares and streets. The emperor stimulated private building by promising householders gifts of land from the imperial estates in Asiana and Pontica and on 18 May 332 he announced that, as in Rome, free distributions of food would be made to the citizens. At the time, the amount is said to have been 80,000 rations a day, doled out from 117 distribution points around the city.
Constantine laid out a new square at the centre of old Byzantium, naming it the Augustaeum. The new senate-house (or Curia) was housed in a basilica on the east side. On the south side of the great square was erected the Great Palace of the Emperor with its imposing entrance, the Chalke, and its ceremonial suite known as the Palace of Daphne. Nearby was the vast Hippodrome for chariot-races, seating over 80,000 spectators, and the famed Baths of Zeuxippus. At the western entrance to the Augustaeum was the Milion, a vaulted monument from which distances were measured across the Eastern Roman Empire.
From the Augustaeum led a great street, the Mese, lined with colonnades. As it descended the First Hill of the city and climbed the Second Hill, it passed on the left the Praetorium or law-court. Then it passed through the oval Forum of Constantine where there was a second Senate-house and a high column with a statue of Constantine himself in the guise of Helios, crowned with a halo of seven rays and looking toward the rising sun. From there, the Mese passed on and through the Forum Tauri and then the Forum Bovis, and finally up the Seventh Hill (or Xerolophus) and through to the Golden Gate in the Constantinian Wall. After the construction of the Theodosian Walls in the early 5th century, it was extended to the new Golden Gate, reaching a total length of seven Roman miles. After the construction of the Theodosian Walls, Constantinople consisted of an area approximately the size of Old Rome within the Aurelian walls, or some 1,400 ha.
The importance of Constantinople increased only gradually. From the death of Constantine in 337 to the accession of Theodosius I, emperors had been resident only in the years 337–338, 347–351, 358–361, 368–369. Its status as a capital was recognized by the appointment of the first known Urban Prefect of the City, Honoratus, who held office from 11 December 359 until 361. The urban prefects had concurrent jurisdiction over three provinces each in the adjacent dioceses of Thrace (in which the city was located), Pontus and Asia, comparable to the 100-mile extraordinary jurisdiction of the prefect of Rome. The emperor Valens, who hated the city and spent only one year there, nevertheless built the Palace of Hebdomon on the shore of the Propontis near the Golden Gate, probably for use when reviewing troops. All the emperors up to Zeno and Basiliscus were crowned and acclaimed at the Hebdomon. Theodosius I founded the Church of John the Baptist to house the skull of the saint (today preserved at the Topkapı Palace), put up a memorial pillar to himself in the Forum of Taurus, and turned the ruined temple of Aphrodite into a coach house for the Praetorian Prefect; Arcadius built a new forum named after himself on the Mese, near the walls of Constantine.
After the shock of the Battle of Adrianople in 378, in which the emperor Valens with the flower of the Roman armies was destroyed by the Visigoths within a few days' march of the city, Constantinople looked to its defences, and in 413–414 Theodosius II built the 18-metre (60-foot)-tall triple-wall fortifications, which were not to be breached until the coming of gunpowder. Theodosius also founded a University near the Forum of Taurus, on 27 February 425.
Uldin, a prince of the Huns, appeared on the Danube about this time and advanced into Thrace, but he was deserted by many of his followers, who joined with the Romans in driving their king back north of the river. Subsequently, new walls were built to defend the city and the fleet on the Danube was improved.
After the barbarians overran the Western Roman Empire, Constantinople became the indisputable capital city of the Roman Empire. Emperors were no longer peripatetic between various court capitals and palaces. They remained in their palace in the Great City and sent generals to command their armies. The wealth of the eastern Mediterranean and western Asia flowed into Constantinople.
The emperor Justinian I (527–565) was known for his successes in war, for his legal reforms and for his public works. It was from Constantinople that his expedition for the reconquest of the former Diocese of Africa set sail on or about 21 June 533. Before their departure, the ship of the commander Belisarius was anchored in front of the Imperial palace, and the Patriarch offered prayers for the success of the enterprise. After the victory, in 534, the Temple treasure of Jerusalem, looted by the Romans in AD 70 and taken to Carthage by the Vandals after their sack of Rome in 455, was brought to Constantinople and deposited for a time, perhaps in the Church of St Polyeuctus, before being returned to Jerusalem in either the Church of the Resurrection or the New Church.
Chariot-racing had been important in Rome for centuries. In Constantinople, the hippodrome became over time increasingly a place of political significance. It was where (as a shadow of the popular elections of old Rome) the people by acclamation showed their approval of a new emperor, and also where they openly criticized the government, or clamoured for the removal of unpopular ministers. It played a crucial role in times of political unrest: the Hippodrome provided a space where a crowd could be appeased, or where its acclamations could turn to the riots that would ensue in the coming years. In the time of Justinian, public order in Constantinople became a critical political issue.
Throughout the late Roman and early Byzantine periods, Christianity was resolving fundamental questions of identity, and the dispute between the orthodox and the monophysites became the cause of serious disorder, expressed through allegiance to the chariot-racing parties of the Blues and the Greens. The partisans of the Blues and the Greens were said to affect untrimmed facial hair, head hair shaved at the front and grown long at the back, and wide-sleeved tunics tight at the wrist; and to form gangs to engage in night-time muggings and street violence. At last these disorders took the form of a major rebellion in 532, known as the "Nika" riots (from the battle-cry of "Conquer!" of those involved). The Nika riots began in the Hippodrome and ended there with the slaughter of, according to Procopius, over 30,000 people of the Blue and Green factions, innocent and guilty alike. The episode brought the relationship between power and the people in the Hippodrome full circle during the time of Justinian.
Fires started by the Nika rioters consumed the Theodosian basilica of Hagia Sophia (Holy Wisdom), the city's cathedral, which lay to the north of the Augustaeum and had itself replaced the Constantinian basilica founded by Constantius II to replace the first Byzantine cathedral, Hagia Irene (Holy Peace). Justinian commissioned Anthemius of Tralles and Isidore of Miletus to replace it with a new and incomparable Hagia Sophia. This was the great cathedral of the city, whose dome was said to be held aloft by God alone, and which was directly connected to the palace so that the imperial family could attend services without passing through the streets. The dedication took place on 26 December 537 in the presence of the emperor, who was later reported to have exclaimed, "O Solomon, I have outdone thee!" Hagia Sophia was served by 600 people including 80 priests, and cost 20,000 pounds of gold to build.
Justinian also had Anthemius and Isidore demolish and replace the original Church of the Holy Apostles and Hagia Irene built by Constantine with new churches under the same dedication. The Justinianic Church of the Holy Apostles was designed in the form of an equal-armed cross with five domes, and ornamented with beautiful mosaics. This church was to remain the burial place of the emperors from Constantine himself until the 11th century. When the city fell to the Turks in 1453, the church was demolished to make room for the tomb of Mehmet II the Conqueror. Justinian was also concerned with other aspects of the city's built environment, legislating against the abuse of laws prohibiting building within 100 ft (30 m) of the sea front, in order to protect the view.
During Justinian I's reign, the city's population reached about 500,000 people. However, the social fabric of Constantinople was also damaged by the onset of the Plague of Justinian between 541 and 542 AD. It killed perhaps 40% of the city's inhabitants.
In the early 7th century, the Avars and later the Bulgars overwhelmed much of the Balkans, threatening Constantinople with attack from the west. Simultaneously, the Persian Sassanids overwhelmed the Prefecture of the East and penetrated deep into Anatolia. Heraclius, son of the exarch of Africa, set sail for the city and assumed the throne. He found the military situation so dire that he is said to have contemplated withdrawing the imperial capital to Carthage, but relented after the people of Constantinople begged him to stay. The citizens lost their right to free grain in 618 when Heraclius realized that the city could no longer be supplied from Egypt as a result of the Persian wars: the population fell substantially as a result.
While the city withstood a siege by the Sassanids and Avars in 626, Heraclius campaigned deep into Persian territory and briefly restored the status quo in 628, when the Persians surrendered all their conquests. However, further sieges followed the Arab conquests, first from 674 to 678 and then from 717 to 718. The Theodosian Walls kept the city impenetrable from the land, while a newly discovered incendiary substance known as Greek fire allowed the Byzantine navy to destroy the Arab fleets and keep the city supplied. In the second siege, the second ruler of Bulgaria, Khan Tervel, rendered decisive help; he was later called the Saviour of Europe.
In the 730s Leo III carried out extensive repairs of the Theodosian walls, which had been damaged by frequent and violent attacks; this work was financed by a special tax on all the subjects of the Empire.
Theodora, widow of the Emperor Theophilus (died 842), acted as regent during the minority of her son Michael III, who was said to have been introduced to dissolute habits by her brother Bardas. When Michael assumed power in 856, he became known for excessive drunkenness, appeared in the hippodrome as a charioteer and burlesqued the religious processions of the clergy. He removed Theodora from the Great Palace to the Carian Palace and later to the monastery of Gastria, but, after the death of Bardas, she was released to live in the palace of St Mamas; she also had a rural residence at the Anthemian Palace, where Michael was assassinated in 867.
In 860, an attack was made on the city by a new principality set up a few years earlier at Kiev by Askold and Dir, two Varangian chiefs: Two hundred small vessels passed through the Bosporus and plundered the monasteries and other properties on the suburban Princes' Islands. Oryphas, the admiral of the Byzantine fleet, alerted the emperor Michael, who promptly put the invaders to flight; but the suddenness and savagery of the onslaught made a deep impression on the citizens.
In 980, the emperor Basil II received an unusual gift from Prince Vladimir of Kiev: 6,000 Varangian warriors, which Basil formed into a new bodyguard known as the Varangian Guard. They were known for their ferocity, honour, and loyalty. It is said that, in 1038, they were dispersed in winter quarters in the Thracesian Theme when one of their number attempted to violate a countrywoman, but in the struggle she seized his sword and killed him; instead of taking revenge, however, his comrades applauded her conduct, compensated her with all his possessions, and exposed his body without burial as if he had committed suicide. However, following the death of an emperor, they also became known for plundering the Imperial palaces. Later in the 11th century the Varangian Guard became dominated by Anglo-Saxons who preferred this way of life to subjugation by the new Norman kings of England.
The Book of the Eparch, which dates to the 10th century, gives a detailed picture of the city's commercial life and its organization at that time. The corporations in which the tradesmen of Constantinople were organised were supervised by the Eparch, who regulated such matters as production, prices, import, and export. Each guild had its own monopoly, and tradesmen might not belong to more than one. It is an impressive testament to the strength of tradition how little these arrangements had changed since the office, then known by the Latin version of its title, had been set up in 330 to mirror the urban prefecture of Rome.
In the 9th and 10th centuries, Constantinople had a population of between 500,000 and 800,000.
In the 8th and 9th centuries, the iconoclast movement caused serious political unrest throughout the Empire. The emperor Leo III issued a decree in 726 against images, and ordered the destruction of a statue of Christ over one of the doors of the Chalke, an act that was fiercely resisted by the citizens. Constantine V convoked a church council in 754, which condemned the worship of images, after which many treasures were broken, burned, or painted over with depictions of trees, birds or animals: One source refers to the church of the Holy Virgin at Blachernae as having been transformed into a "fruit store and aviary". Following the death of her husband Leo IV in 780, the empress Irene restored the veneration of images through the agency of the Second Council of Nicaea in 787.
The iconoclast controversy returned in the early 9th century, only to be resolved once more in 843 during the regency of Empress Theodora, who restored the icons. These controversies contributed to the deterioration of relations between the Western and the Eastern Churches.
In the late 11th century catastrophe struck with the unexpected and calamitous defeat of the imperial armies at the Battle of Manzikert in Armenia in 1071. The Emperor Romanus Diogenes was captured. The peace terms demanded by Alp Arslan, sultan of the Seljuk Turks, were not excessive, and Romanus accepted them. On his release, however, Romanus found that enemies had placed their own candidate on the throne in his absence; he surrendered to them and suffered death by torture, and the new ruler, Michael VII Ducas, refused to honour the treaty. In response, the Turks began to move into Anatolia in 1073. The collapse of the old defensive system meant that they met no opposition, and the empire's resources were diverted and squandered in a series of civil wars. Thousands of Turkoman tribesmen crossed the unguarded frontier and moved into Anatolia. By 1080, a huge area had been lost to the Empire, and the Turks were within striking distance of Constantinople.
Under the Comnenian dynasty (1081–1185), Byzantium staged a remarkable recovery. In 1090–91, the nomadic Pechenegs reached the walls of Constantinople, where Emperor Alexius I with the aid of the Kipchaks annihilated their army. In response to a call for aid from Alexius, the First Crusade assembled at Constantinople in 1096 but, declining to put itself under Byzantine command, set out for Jerusalem on its own account. John II built the monastery of the Pantocrator (Almighty) with a 50-bed hospital for the poor.
With the restoration of firm central government, the empire became fabulously wealthy. The population was rising (estimates for Constantinople in the 12th century vary from some 100,000 to 500,000), and towns and cities across the realm flourished. Meanwhile, the volume of money in circulation dramatically increased. This was reflected in Constantinople by the construction of the Blachernae palace, the creation of brilliant new works of art, and general prosperity at this time: an increase in trade, made possible by the growth of the Italian city-states, may have helped the growth of the economy. It is certain that the Venetians and others were active traders in Constantinople, making a living out of shipping goods between the Crusader Kingdoms of Outremer and the West, while also trading extensively with Byzantium and Egypt. The Venetians had factories on the north side of the Golden Horn, and large numbers of westerners were present in the city throughout the 12th century. Toward the end of Manuel I Komnenos's reign, the number of foreigners in the city reached about 60,000–80,000 out of a total population of about 400,000. In 1171, Constantinople also contained a small community of 2,500 Jews. In 1182, most Latin (Western European) inhabitants of Constantinople were massacred.
In artistic terms, the 12th century was a very productive period. There was a revival in mosaic art, for example: Mosaics became more realistic and vivid, with an increased emphasis on depicting three-dimensional forms. There was an increased demand for art, with more people having access to the necessary wealth to commission and pay for such work.
On 25 July 1197, Constantinople was struck by a severe fire which burned the Latin Quarter and the area around the Gate of the Droungarios (Turkish: Odun Kapısı) on the Golden Horn. Nevertheless, the destruction wrought by the 1197 fire paled in comparison with that brought by the Crusaders. In the course of a plot between Philip of Swabia, Boniface of Montferrat and the Doge of Venice, the Fourth Crusade was, despite papal excommunication, diverted in 1203 against Constantinople, ostensibly promoting the claims of Alexios IV Angelos, brother-in-law of Philip and son of the deposed emperor Isaac II Angelos. The reigning emperor Alexios III Angelos had made no preparation. The Crusaders occupied Galata, broke the defensive chain protecting the Golden Horn, and entered the harbour, where on 27 July they breached the sea walls: Alexios III fled. But the new Alexios IV Angelos found the Treasury inadequate, and was unable to make good the rewards he had promised to his western allies. Tension between the citizens and the Latin soldiers increased. In January 1204, the protovestiarius Alexios Murzuphlos provoked a riot, presumably to intimidate Alexios IV; its only result, however, was the destruction of the great statue of Athena Promachos, the work of Phidias, which stood in the principal forum facing west.
In February 1204, the people rose again: Alexios IV was imprisoned and executed, and Murzuphlos took the purple as Alexios V Doukas. He made some attempt to repair the walls and organise the citizenry, but there had been no opportunity to bring in troops from the provinces and the guards were demoralised by the revolution. An attack by the Crusaders on 6 April failed, but a second from the Golden Horn on 12 April succeeded, and the invaders poured in. Alexios V fled. The Senate met in Hagia Sophia and offered the crown to Theodore Lascaris, who had married into the Angelos dynasty, but it was too late. He came out with the Patriarch to the Golden Milestone before the Great Palace and addressed the Varangian Guard. Then the two of them slipped away with many of the nobility and embarked for Asia. By the next day the Doge and the leading Franks were installed in the Great Palace, and the city was given over to pillage for three days.
Sir Steven Runciman, historian of the Crusades, wrote that the sack of Constantinople is "unparalleled in history".
For nine centuries, [...] the great city had been the capital of Christian civilization. It was filled with works of art that had survived from ancient Greece and with the masterpieces of its own exquisite craftsmen. The Venetians [...] seized treasures and carried them off to adorn [...] their town. But the Frenchmen and Flemings were filled with a lust for destruction. They rushed in a howling mob down the streets and through the houses, snatching up everything that glittered and destroying whatever they could not carry, pausing only to murder or to rape, or to break open the wine-cellars [...] . Neither monasteries nor churches nor libraries were spared. In Hagia Sophia itself, drunken soldiers could be seen tearing down the silken hangings and pulling the great silver iconostasis to pieces, while sacred books and icons were trampled under foot. While they drank merrily from the altar-vessels a prostitute set herself on the Patriarch's throne and began to sing a ribald French song. Nuns were ravished in their convents. Palaces and hovels alike were entered and wrecked. Wounded women and children lay dying in the streets. For three days the ghastly scenes [...] continued, till the huge and beautiful city was a shambles. [...] When [...] order was restored, [...] citizens were tortured to make them reveal the goods that they had contrived to hide.
For the next half-century, Constantinople was the seat of the Latin Empire. Under the rulers of the Latin Empire, the city declined, both in population and in the condition of its buildings. Alice-Mary Talbot cites an estimated population for Constantinople of 400,000 inhabitants; after the destruction wrought by the Crusaders on the city, about one third were homeless, and numerous courtiers, nobility, and higher clergy followed various leading personages into exile. "As a result Constantinople became seriously depopulated," Talbot concludes.
The Latins took over at least 20 churches and 13 monasteries, most prominently the Hagia Sophia, which became the cathedral of the Latin Patriarch of Constantinople. It is to these that E.H. Swift attributed the construction of a series of flying buttresses to shore up the walls of the church, which had been weakened over the centuries by earthquake tremors. However, this act of maintenance is an exception: for the most part, the Latin occupiers were too few to maintain all of the buildings, whether secular or sacred, and many became targets for vandalism or dismantling. Bronze and lead were removed from the roofs of abandoned buildings and melted down and sold to provide money to the chronically under-funded Empire for defense and to support the court; Deno John Geanakoplos writes that "it may well be that a division is suggested here: Latin laymen stripped secular buildings, ecclesiastics, the churches." Buildings were not the only targets of officials looking to raise funds for the impoverished Latin Empire: the monumental sculptures which adorned the Hippodrome and fora of the city were pulled down and melted for coinage. "Among the masterpieces destroyed," writes Talbot, "were a Herakles attributed to the fourth-century B.C. sculptor Lysippos, and monumental figures of Hera, Paris, and Helen."
The Nicaean emperor John III Vatatzes reportedly saved several churches from being dismantled for their valuable building materials by sending money to the Latins "to buy them off" (exonesamenos). According to Talbot, these included the churches of Blachernae, Rouphinianai, and St. Michael at Anaplous. He also granted funds for the restoration of the Church of the Holy Apostles, which had been seriously damaged in an earthquake.
The Byzantine nobility scattered, many going to Nicaea, where Theodore Lascaris set up an imperial court, or to Epirus, where Theodore Angelus did the same; others fled to Trebizond, where one of the Comneni had already with Georgian support established an independent seat of empire. Nicaea and Epirus both vied for the imperial title, and tried to recover Constantinople. In 1261, Constantinople was captured from its last Latin ruler, Baldwin II, by the forces of the Nicaean emperor Michael VIII Palaiologos under the command of Caesar Alexios Strategopoulos.
Although Constantinople was retaken by Michael VIII Palaiologos, the Empire had lost many of its key economic resources, and struggled to survive. The palace of Blachernae in the north-west of the city became the main Imperial residence, with the old Great Palace on the shores of the Bosporus going into decline. When Michael VIII captured the city, its population was 35,000 people, but, by the end of his reign, he had succeeded in increasing it to about 70,000. The Emperor achieved this by summoning former residents who had fled the city when the crusaders captured it, and by relocating Greeks from the recently reconquered Peloponnese to the capital. Military defeats, civil wars, earthquakes and natural disasters were joined by the Black Death, which spread to Constantinople in 1347 and exacerbated the people's sense that they were doomed by God.
Castilian traveler and writer Ruy González de Clavijo, who saw Constantinople in 1403, wrote that the area within the city walls included small neighborhoods separated by orchards and fields. The ruins of palaces and churches could be seen everywhere. The aqueducts and the most densely inhabited neighborhoods were along the coast of the Marmara Sea and Golden Horn. Only the coastal areas, in particular the commercial areas facing the Golden Horn, had a dense population. Although the Genoese colony in Galata was small, it was overcrowded and had magnificent mansions.
By May 1453, the city no longer possessed the treasure troves of Aladdin that the Ottoman troops longingly imagined as they stared up at the walls. Gennadios Scholarios, Patriarch of Constantinople from 1454 to 1464, wrote that the capital of the Empire, once the "city of wisdom", had become "the city of ruins".
When the Ottoman Turks captured the city in 1453, it contained approximately 50,000 people. Tedaldi of Florence estimated the population at 30,000 to 36,000, while in the Chronica Vicentina the Italian Andrei di Arnaldo estimated it at 50,000. The plague epidemic of 1435 must have caused the population to drop.
The population decline also had a huge impact upon Constantinople's defense capabilities. At the end of March 1453, Emperor Constantine XI ordered a census of districts to record how many able-bodied men were in the city and what weapons each possessed for defense. George Sphrantzes, the faithful chancellor of the last emperor, recorded that "in spite of the great size of our city, our defenders amounted to 4,773 Greeks, as well as just 200 foreigners". In addition there were volunteers from outside, the "Genoese, Venetians and those who came secretly from Galata to help the defense", who numbered "hardly as many as three thousand", amounting to something under 8,000 men in total to defend a perimeter wall of twelve miles.
Constantinople was conquered by the Ottoman Empire on 29 May 1453, following a seven-week siege that had begun on 6 April. Mehmed II, the 21-year-old Ottoman sultan, intended to complete his father's mission and conquer Constantinople for the Ottomans. In 1452 he reached peace treaties with Hungary and Venice. He also began the construction of the Boğazkesen (later called the Rumelihisarı), a fortress at the narrowest point of the Bosphorus Strait, in order to restrict passage between the Black and Mediterranean seas. Mehmed then tasked the Hungarian gunsmith Urban with both arming Rumelihisarı and building cannon powerful enough to bring down the walls of Constantinople. By March 1453 Urban's cannon had been transported from the Ottoman capital of Edirne to the outskirts of Constantinople. In April, having quickly seized Byzantine coastal settlements along the Black Sea and Sea of Marmara, Ottoman troops in Rumelia and Anatolia assembled outside the Byzantine capital; their fleet moved from Gallipoli to nearby Diplokionion, and the sultan himself set out to meet his army.
The number of people captured by the Ottomans after the fall of the city was around 33,000, a figure which indicates that the city cannot have had many residents left. The primary concern of Mehmed II in the early years of his reign was the construction and settlement of the city. However, since too few Muslims accepted his invitation, it became necessary to settle 30 abandoned neighborhoods with the inhabitants of formerly conquered areas.
The Christian Orthodox city of Constantinople was now under Ottoman control. As was the tradition in the region, Ottoman soldiers had three days to pillage the city. When Mehmed II entered Constantinople on the second day through the Gate of Charisius (today known as Edirnekapı or Adrianople Gate), it is said that the first thing he did was ride his horse to Hagia Sophia, which was in poor shape even though strict orders had spared it from the pillage. Displeased by the pillaging, Mehmed II ordered it to end, for the city was to be the capital of his empire. He then ordered that an imam meet him in Hagia Sophia in order to chant the adhan, thus transforming the Orthodox cathedral into a Muslim mosque and solidifying Islamic rule in Constantinople.
Mehmed's main concern with Constantinople had to do with consolidating control over the city and rebuilding its defenses. After 45,000 captives were marched from the city, building projects were commenced immediately after the conquest, including the repair of the walls, construction of the citadel, and building of a new palace. Mehmed issued orders across his empire that Muslims, Christians, and Jews should resettle the city, with Christians and Jews required to pay jizya and Muslims to pay zakat; he demanded that five thousand households be transferred to Constantinople by September. From all over the Islamic empire, prisoners of war and deported people were sent to the city: these people were called "Sürgün" in Turkish (Greek: σουργούνιδες). Even today, many quarters of Istanbul, such as Aksaray and Çarşamba, bear the names of the places of origin of their inhabitants, and two centuries later the Ottoman traveler Evliya Çelebi gave a list of groups introduced into the city with their respective origins. However, many people escaped again from the city, and there were several outbreaks of plague, so that in 1459 Mehmed allowed the deported Greeks to come back to the city.
Constantinople was the largest and richest urban center in the Eastern Mediterranean during the late Eastern Roman Empire, mostly as a result of its strategic position commanding the trade routes between the Aegean Sea and the Black Sea. It would remain the capital of the eastern, Greek-speaking empire for over a thousand years and was in some ways the nexus of Byzantine art production. At its peak, roughly corresponding to the Middle Ages, it was one of the richest and largest cities in Europe. It exerted a powerful cultural pull and dominated much of the economic life in the Mediterranean. Visitors and merchants were especially struck by the beautiful monasteries and churches of the city, in particular the Hagia Sophia, or the Church of Holy Wisdom. According to Russian 14th-century traveler Stephen of Novgorod: "As for Hagia Sophia, the human mind can neither tell it nor make description of it."
It was especially important for preserving in its libraries manuscripts of Greek and Latin authors throughout a period when instability and disorder caused their mass destruction in western Europe and north Africa: On the city's fall, thousands of these were brought by refugees to Italy, and played a key part in stimulating the Renaissance, and the transition to the modern world. The cumulative influence of the city on the west, over the many centuries of its existence, is incalculable. In terms of technology, art and culture, as well as sheer size, Constantinople was without parallel anywhere in Europe for a thousand years. Many languages were spoken in Constantinople. A 16th-century Chinese geographical treatise specifically recorded that there were translators living in the city, indicating that it was a multilingual, multicultural, cosmopolitan city.
Constantinople was home to the first known Western Armenian journal published and edited by a woman (Elpis Kesaratsian). Entering circulation in 1862, Kit'arr or Guitar stayed in print for only seven months. Female writers who openly expressed their desires were viewed as immodest, but this changed slowly as journals began to publish more "women's sections". In the 1880s, Matteos Mamurian invited Srpouhi Dussap to submit essays for Arevelian Mamal. According to Zaruhi Galemkearian's autobiography, she was told to write about women's place in the family and home after she published two volumes of poetry in the 1890s. By 1900, several Armenian journals had started to include works by female contributors including the Constantinople-based Tsaghik.
Even before Constantinople was founded, the markets of Byzantion were mentioned first by Xenophon and then by Theopompus, who wrote that Byzantians "spent their time at the market and the harbour". In Justinian's age the Mese street running across the city from east to west was a daily market. Procopius claimed "more than 500 prostitutes" did business along the market street. Ibn Battuta, who traveled to the city in 1325, wrote of the bazaars of "Astanbul", in which "the majority of the artisans and salespeople in them are women".
The Byzantine Empire used Roman and Greek architectural models and styles to create its own unique type of architecture. The influence of Byzantine architecture and art can be seen in the copies taken from it throughout Europe. Particular examples include St Mark's Basilica in Venice, the basilicas of Ravenna, and many churches throughout the Slavic East. The Empire, alone in Europe until the minting of the 13th-century Italian florin, continued to produce sound gold coinage, the solidus of Diocletian becoming the bezant prized throughout the Middle Ages. Its city walls were much imitated (for example, see Caernarfon Castle) and its urban infrastructure was moreover a marvel throughout the Middle Ages, keeping alive the art, skill and technical expertise of the Roman Empire. In the Ottoman period Islamic architecture and symbolism were used. Great bathhouses were built in Byzantine centers such as Constantinople and Antioch.
Constantine's foundation gave prestige to the Bishop of Constantinople, who eventually came to be known as the Ecumenical Patriarch, and made it a prime center of Christianity alongside Rome. This contributed to cultural and theological differences between Eastern and Western Christianity eventually leading to the Great Schism that divided Western Catholicism from Eastern Orthodoxy from 1054 onwards. Constantinople is also of great religious importance to Islam, as the conquest of Constantinople is one of the signs of the End time in Islam.
There were many institutions in ancient Constantinople such as the Imperial University of Constantinople, sometimes known as the University of the Palace Hall of Magnaura (Greek: Πανδιδακτήριον τῆς Μαγναύρας), an Eastern Roman educational institution that could trace its corporate origins to 425 AD, when the emperor Theodosius II founded the Pandidacterium (Medieval Greek: Πανδιδακτήριον).
Bulgarian newspapers published in the late Ottoman period included Makedoniya, Napredŭk, and Pravo.
The city acted as a defence for the eastern provinces of the old Roman Empire against the barbarian invasions of the 5th century. The 18-metre-tall walls built by Theodosius II were, in essence, impregnable to the barbarians coming from south of the Danube river, who found easier targets to the west rather than attacking the richer provinces to the east in Asia. From the 5th century, the city was also protected by the Anastasian Wall, a 60-kilometre chain of walls across the Thracian peninsula. Many scholars argue that these sophisticated fortifications allowed the east to develop relatively unmolested while Ancient Rome and the west collapsed.
Constantinople's fame was such that it was described even in contemporary Chinese histories, the Old and New Book of Tang, which mentioned its massive walls and gates as well as a purported clepsydra mounted with a golden statue of a man. The Chinese histories even related how the city had been besieged in the 7th century by Mu'awiya I and how he exacted tribute in a peace settlement.
{
"paragraph_id": 0,
"text": "Constantinople (see other names) became the capital of the Roman Empire during the reign of Constantine the Great in 330. Following the collapse of the Western Roman Empire in the late 5th century, Constantinople remained the capital of the Eastern Roman Empire (also known as the Byzantine Empire; 330–1204 and 1261–1453), the Latin Empire (1204–1261), and the Ottoman Empire (1453–1922). Following the Turkish War of Independence, the Turkish capital then moved to Ankara. Officially renamed Istanbul in 1930, the city is today the largest city and financial centre of Turkey and the largest city in Europe, straddling the Bosporus strait, lying in both Europe and Asia.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In 324, after the Western and Eastern Roman Empires were reunited, the ancient city of Byzantium was selected to serve as the new capital of the Roman Empire, and the city was renamed Nova Roma, or 'New Rome', by Emperor Constantine the Great. On 11 May 330, it was renamed Constantinople and dedicated to Constantine. Constantinople is generally considered to be the center and the \"cradle of Orthodox Christian civilization\". From the mid-5th century to the early 13th century, Constantinople was the largest and wealthiest city in Europe. The city became famous for its architectural masterpieces, such as Hagia Sophia, the cathedral of the Eastern Orthodox Church, which served as the seat of the Ecumenical Patriarchate; the sacred Imperial Palace, where the emperors lived; the Hippodrome; the Golden Gate of the Land Walls; and opulent aristocratic palaces. The University of Constantinople was founded in the 5th century and contained artistic and literary treasures before it was sacked in 1204 and 1453, including its vast Imperial Library which contained the remnants of the Library of Alexandria and had 100,000 volumes. The city was the home of the Ecumenical Patriarch of Constantinople and guardian of Christendom's holiest relics such as the Crown of thorns and the True Cross.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Constantinople was famous for its massive and complex fortifications, which ranked among the most sophisticated defensive architecture of antiquity. The Theodosian Walls consisted of a double wall lying about 2 kilometres (1.2 mi) to the west of the first wall and a moat with palisades in front. Constantinople's location between the Golden Horn and the Sea of Marmara reduced the land area that needed defensive walls. The city was built intentionally to rival Rome, and it was claimed that several elevations within its walls matched Rome's 'seven hills'. The impenetrable defenses enclosed magnificent palaces, domes, and towers, the result of prosperity Constantinople achieved as the gateway between two continents (Europe and Asia) and two seas (the Mediterranean and the Black Sea). Although besieged on numerous occasions by various armies, the defenses of Constantinople proved impenetrable for nearly nine hundred years.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In 1204, however, the armies of the Fourth Crusade took and devastated the city, and for several decades, its inhabitants resided under Latin occupation in a dwindling and depopulated city. In 1261 the Byzantine Emperor Michael VIII Palaiologos liberated the city, and after the restoration under the Palaiologos dynasty, it enjoyed a partial recovery. With the advent of the Ottoman Empire in 1299, the Byzantine Empire began to lose territories, and the city began to lose population. By the early 15th century, the Byzantine Empire was reduced to just Constantinople and its environs, along with Morea in Greece, making it an enclave inside the Ottoman Empire. The city was finally besieged and conquered by the Ottoman Empire in 1453, remaining under its control until the early 20th century, after which it was renamed Istanbul under the Empire's successor state, Turkey.",
"title": ""
},
{
"paragraph_id": 4,
"text": "According to Pliny the Elder in his Natural History, the first known name of a settlement on the site of Constantinople was Lygos, a settlement likely of Thracian origin founded between the 13th and 11th centuries BC. The site, according to the founding myth of the city, was abandoned by the time Greek settlers from the city-state of Megara founded Byzantium (Ancient Greek: Βυζάντιον, Byzántion) in around 657 BC, across from the town of Chalcedon on the Asiatic side of the Bosphorus.",
"title": "Names"
},
{
"paragraph_id": 5,
"text": "The origins of the name of Byzantion, more commonly known by the later Latin Byzantium, are not entirely clear, though some suggest it is of Thracian origin. The founding myth of the city has it told that the settlement was named after the leader of the Megarian colonists, Byzas. The later Byzantines of Constantinople themselves would maintain that the city was named in honor of two men, Byzas and Antes, though this was more likely just a play on the word Byzantion.",
"title": "Names"
},
{
"paragraph_id": 6,
"text": "The city was briefly renamed Augusta Antonina in the early 3rd century AD by the Emperor Septimius Severus (193–211), who razed the city to the ground in 196 for supporting a rival contender in the civil war and had it rebuilt in honor of his son Marcus Aurelius Antoninus (who succeeded him as Emperor), popularly known as Caracalla. The name appears to have been quickly forgotten and abandoned, and the city reverted to Byzantium/Byzantion after either the assassination of Caracalla in 217 or, at the latest, the fall of the Severan dynasty in 235.",
"title": "Names"
},
{
"paragraph_id": 7,
"text": "Byzantium took on the name of Constantinople (Greek: Κωνσταντινούπολις, romanized: Kōnstantinoupolis; \"city of Constantine\") after its refoundation under Roman emperor Constantine I, who transferred the capital of the Roman Empire to Byzantium in 330 and designated his new capital officially as Nova Roma (Νέα Ῥώμη) 'New Rome'. During this time, the city was also called 'Second Rome', 'Eastern Rome', and Roma Constantinopolitana (Latin for 'Constantinopolitan Rome'). As the city became the sole remaining capital of the Roman Empire after the fall of the West, and its wealth, population, and influence grew, the city also came to have a multitude of nicknames.",
"title": "Names"
},
{
"paragraph_id": 8,
"text": "As the largest and wealthiest city in Europe during the 4th–13th centuries and a center of culture and education of the Mediterranean basin, Constantinople came to be known by prestigious titles such as Basileuousa (Queen of Cities) and Megalopolis (the Great City) and was, in colloquial speech, commonly referred to as just Polis (ἡ Πόλις) 'the City' by Constantinopolitans and provincial Byzantines alike.",
"title": "Names"
},
{
"paragraph_id": 9,
"text": "In the language of other peoples, Constantinople was referred to just as reverently. The medieval Vikings, who had contacts with the empire through their expansion in eastern Europe (Varangians), used the Old Norse name Miklagarðr (from mikill 'big' and garðr 'city'), and later Miklagard and Miklagarth. In Arabic, the city was sometimes called Rūmiyyat al-Kubra (Great City of the Romans) and in Persian as Takht-e Rum (Throne of the Romans).",
"title": "Names"
},
{
"paragraph_id": 10,
"text": "In East and South Slavic languages, including in Kievan Rus', Constantinople has been referred to as Tsargrad (Царьград) or Carigrad, 'City of the Caesar (Emperor)', from the Slavonic words tsar ('Caesar' or 'King') and grad ('city'). This was presumably a calque on a Greek phrase such as Βασιλέως Πόλις (Vasileos Polis), 'the city of the emperor [king]'.",
"title": "Names"
},
{
"paragraph_id": 11,
"text": "In Persian the city was also called Asitane (the Threshold of the State), and in Armenian, it was called Gosdantnubolis (City of Constantine).",
"title": "Names"
},
{
"paragraph_id": 12,
"text": "The modern Turkish name for the city, İstanbul, derives from the Greek phrase eis tin Polin (εἰς τὴν πόλιν), meaning '(in)to the city'. This name was used in colloquial speech in Turkish alongside Kostantiniyye, the more formal adaptation of the original Constantinople, during the period of Ottoman rule, while western languages mostly continued to refer to the city as Constantinople until the early 20th century. In 1928, the Turkish alphabet was changed from Arabic script to Latin script. After that, as part of the Turkification movement, Turkey started to urge other countries to use Turkish names for Turkish cities, instead of other transliterations to Latin script that had been used in Ottoman times and the city came to be known as Istanbul and its variations in most world languages.",
"title": "Names"
},
{
"paragraph_id": 13,
"text": "The name Constantinople is still used by members of the Eastern Orthodox Church in the title of one of their most important leaders, the Orthodox patriarch based in the city, referred to as \"His Most Divine All-Holiness the Archbishop of Constantinople New Rome and Ecumenical Patriarch\". In Greece today, the city is still called Konstantinoúpoli(s) (Κωνσταντινούπολις/Κωνσταντινούπολη) or simply just \"the City\" (Η Πόλη).",
"title": "Names"
},
{
"paragraph_id": 14,
"text": "Constantinople was founded by the Roman emperor Constantine I (272–337) in 324 on the site of an already-existing city, Byzantium, which was settled in the early days of Greek colonial expansion, in around 657 BC, by colonists of the city-state of Megara. This is the first major settlement that would develop on the site of later Constantinople, but the first known settlements was that of Lygos, referred to in Pliny's Natural Histories. Apart from this, little is known about this initial settlement. The site, according to the founding myth of the city, was abandoned by the time Greek settlers from the city-state of Megara founded Byzantium (Βυζάντιον) in around 657 BC, across from the town of Chalcedon on the Asiatic side of the Bosphorus.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Hesychius of Miletus wrote that some \"claim that people from Megara, who derived their descent from Nisos, sailed to this place under their leader Byzas, and invent the fable that his name was attached to the city\". Some versions of the founding myth say Byzas was the son of a local nymph, while others say he was conceived by one of Zeus' daughters and Poseidon. Hesychius also gives alternate versions of the city's founding legend, which he attributed to old poets and writers:",
"title": "History"
},
{
"paragraph_id": 16,
"text": "It is said that the first Argives, after having received this prophecy from Pythia, Blessed are those who will inhabit that holy city, a narrow strip of the Thracian shore at the mouth of the Pontos, where two pups drink of the gray sea, where fish and stag graze on the same pasture, set up their dwellings at the place where the rivers Kydaros and Barbyses have their estuaries, one flowing from the north, the other from the west, and merging with the sea at the altar of the nymph called Semestre\"",
"title": "History"
},
{
"paragraph_id": 17,
"text": "The city maintained independence as a city-state until it was annexed by Darius I in 512 BC into the Persian Empire, who saw the site as the optimal location to construct a pontoon bridge crossing into Europe as Byzantium was situated at the narrowest point in the Bosphorus strait. Persian rule lasted until 478 BC when as part of the Greek counterattack to the Second Persian invasion of Greece, a Greek army led by the Spartan general Pausanias captured the city which remained an independent, yet subordinate, city under the Athenians, and later to the Spartans after 411 BC. A farsighted treaty with the emergent power of Rome in c. 150 BC which stipulated tribute in exchange for independent status allowed it to enter Roman rule unscathed. This treaty would pay dividends retrospectively as Byzantium would maintain this independent status, and prosper under peace and stability in the Pax Romana, for nearly three centuries until the late 2nd century AD.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Byzantium was never a major influential city-state like that of Athens, Corinth or Sparta, but the city enjoyed relative peace and steady growth as a prosperous trading city lent by its remarkable position. The site lay astride the land route from Europe to Asia and the seaway from the Black Sea to the Mediterranean, and had in the Golden Horn an excellent and spacious harbor. Already then, in Greek and early Roman times, Byzantium was famous for the strategic geographic position that made it difficult to besiege and capture, and its position at the crossroads of the Asiatic-European trade route over land and as the gateway between the Mediterranean and Black Seas made it too valuable a settlement to abandon, as Emperor Septimius Severus later realized when he razed the city to the ground for supporting Pescennius Niger's claimancy. It was a move greatly criticized by the contemporary consul and historian Cassius Dio who said that Severus had destroyed \"a strong Roman outpost and a base of operations against the barbarians from Pontus and Asia\". He would later rebuild Byzantium towards the end of his reign, in which it would be briefly renamed Augusta Antonina, fortifying it with a new city wall in his name, the Severan Wall.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Constantine had altogether more colourful plans. Having restored the unity of the Empire, and, being in the course of major governmental reforms as well as of sponsoring the consolidation of the Christian church, he was well aware that Rome was an unsatisfactory capital. Rome was too far from the frontiers, and hence from the armies and the imperial courts, and it offered an undesirable playground for disaffected politicians. Yet it had been the capital of the state for over a thousand years, and it might have seemed unthinkable to suggest that the capital be moved to a different location. Nevertheless, Constantine identified the site of Byzantium as the right place: a place where an emperor could sit, readily defended, with easy access to the Danube or the Euphrates frontiers, his court supplied from the rich gardens and sophisticated workshops of Roman Asia, his treasuries filled by the wealthiest provinces of the Empire.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Constantinople was built over six years, and consecrated on 11 May 330. Constantine divided the expanded city, like Rome, into 14 regions, and ornamented it with public works worthy of an imperial metropolis. Yet, at first, Constantine's new Rome did not have all the dignities of old Rome. It possessed a proconsul, rather than an urban prefect. It had no praetors, tribunes, or quaestors. Although it did have senators, they held the title clarus, not clarissimus, like those of Rome. It also lacked the panoply of other administrative offices regulating the food supply, police, statues, temples, sewers, aqueducts, or other public works. The new programme of building was carried out in great haste: columns, marbles, doors, and tiles were taken wholesale from the temples of the empire and moved to the new city. In similar fashion, many of the greatest works of Greek and Roman art were soon to be seen in its squares and streets. The emperor stimulated private building by promising householders gifts of land from the imperial estates in Asiana and Pontica and on 18 May 332 he announced that, as in Rome, free distributions of food would be made to the citizens. At the time, the amount is said to have been 80,000 rations a day, doled out from 117 distribution points around the city.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Constantine laid out a new square at the centre of old Byzantium, naming it the Augustaeum. The new senate-house (or Curia) was housed in a basilica on the east side. On the south side of the great square was erected the Great Palace of the Emperor with its imposing entrance, the Chalke, and its ceremonial suite known as the Palace of Daphne. Nearby was the vast Hippodrome for chariot-races, seating over 80,000 spectators, and the famed Baths of Zeuxippus. At the western entrance to the Augustaeum was the Milion, a vaulted monument from which distances were measured across the Eastern Roman Empire.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "From the Augustaeum led a great street, the Mese, lined with colonnades. As it descended the First Hill of the city and climbed the Second Hill, it passed on the left the Praetorium or law-court. Then it passed through the oval Forum of Constantine where there was a second Senate-house and a high column with a statue of Constantine himself in the guise of Helios, crowned with a halo of seven rays and looking toward the rising sun. From there, the Mese passed on and through the Forum Tauri and then the Forum Bovis, and finally up the Seventh Hill (or Xerolophus) and through to the Golden Gate in the Constantinian Wall. After the construction of the Theodosian Walls in the early 5th century, it was extended to the new Golden Gate, reaching a total length of seven Roman miles. After the construction of the Theodosian Walls, Constantinople consisted of an area approximately the size of Old Rome within the Aurelian walls, or some 1,400 ha.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The importance of Constantinople increased, but it was gradual. From the death of Constantine in 337 to the accession of Theodosius I, emperors had been resident only in the years 337–338, 347–351, 358–361, 368–369. Its status as a capital was recognized by the appointment of the first known Urban Prefect of the City Honoratus, who held office from 11 December 359 until 361. The urban prefects had concurrent jurisdiction over three provinces each in the adjacent dioceses of Thrace (in which the city was located), Pontus and Asia comparable to the 100-mile extraordinary jurisdiction of the prefect of Rome. The emperor Valens, who hated the city and spent only one year there, nevertheless built the Palace of Hebdomon on the shore of the Propontis near the Golden Gate, probably for use when reviewing troops. All the emperors up to Zeno and Basiliscus were crowned and acclaimed at the Hebdomon. Theodosius I founded the Church of John the Baptist to house the skull of the saint (today preserved at the Topkapı Palace), put up a memorial pillar to himself in the Forum of Taurus, and turned the ruined temple of Aphrodite into a coach house for the Praetorian Prefect; Arcadius built a new forum named after himself on the Mese, near the walls of Constantine.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "After the shock of the Battle of Adrianople in 378, in which the emperor Valens with the flower of the Roman armies was destroyed by the Visigoths within a few days' march, the city looked to its defences, and in 413–414 Theodosius II built the 18-metre (60-foot)-tall triple-wall fortifications, which were not to be breached until the coming of gunpowder. Theodosius also founded a University near the Forum of Taurus, on 27 February 425.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Uldin, a prince of the Huns, appeared on the Danube about this time and advanced into Thrace, but he was deserted by many of his followers, who joined with the Romans in driving their king back north of the river. Subsequent to this, new walls were built to defend the city and the fleet on the Danube improved.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "After the barbarians overran the Western Roman Empire, Constantinople became the indisputable capital city of the Roman Empire. Emperors were no longer peripatetic between various court capitals and palaces. They remained in their palace in the Great City and sent generals to command their armies. The wealth of the eastern Mediterranean and western Asia flowed into Constantinople.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "The emperor Justinian I (527–565) was known for his successes in war, for his legal reforms and for his public works. It was from Constantinople that his expedition for the reconquest of the former Diocese of Africa set sail on or about 21 June 533. Before their departure, the ship of the commander Belisarius was anchored in front of the Imperial palace, and the Patriarch offered prayers for the success of the enterprise. After the victory, in 534, the Temple treasure of Jerusalem, looted by the Romans in AD 70 and taken to Carthage by the Vandals after their sack of Rome in 455, was brought to Constantinople and deposited for a time, perhaps in the Church of St Polyeuctus, before being returned to Jerusalem in either the Church of the Resurrection or the New Church.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Chariot-racing had been important in Rome for centuries. In Constantinople, the hippodrome became over time increasingly a place of political significance. It was where (as a shadow of the popular elections of old Rome) the people by acclamation showed their approval of a new emperor, and also where they openly criticized the government, or clamoured for the removal of unpopular ministers. It played a crucial role during the riots and in times of political unrest. The Hippodrome provided a space for a crowd to be responded to positively or where the acclamations of a crowd were subverted, resorting to the riots that would ensue in coming years. In the time of Justinian, public order in Constantinople became a critical political issue.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Throughout the late Roman and early Byzantine periods, Christianity was resolving fundamental questions of identity, and the dispute between the orthodox and the monophysites became the cause of serious disorder, expressed through allegiance to the chariot-racing parties of the Blues and the Greens. The partisans of the Blues and the Greens were said to affect untrimmed facial hair, head hair shaved at the front and grown long at the back, and wide-sleeved tunics tight at the wrist; and to form gangs to engage in night-time muggings and street violence. At last these disorders took the form of a major rebellion of 532, known as the \"Nika\" riots (from the battle-cry of \"Conquer!\" of those involved). The Nika Riots began in the Hippodrome and finished there with the onslaught of over 30,000 people according to Procopius, those in the blue and green factions, innocent and guilty. This came full circle on the relationship within the Hippodrome between the power and the people during the time of Justinian.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Fires started by the Nika rioters consumed the Theodosian basilica of Hagia Sophia (Holy Wisdom), the city's cathedral, which lay to the north of the Augustaeum and had itself replaced the Constantinian basilica founded by Constantius II to replace the first Byzantine cathedral, Hagia Irene (Holy Peace). Justinian commissioned Anthemius of Tralles and Isidore of Miletus to replace it with a new and incomparable Hagia Sophia. This was the great cathedral of the city, whose dome was said to be held aloft by God alone, and which was directly connected to the palace so that the imperial family could attend services without passing through the streets. The dedication took place on 26 December 537 in the presence of the emperor, who was later reported to have exclaimed, \"O Solomon, I have outdone thee!\" Hagia Sophia was served by 600 people including 80 priests, and cost 20,000 pounds of gold to build.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Justinian also had Anthemius and Isidore demolish and replace the original Church of the Holy Apostles and Hagia Irene built by Constantine with new churches under the same dedication. The Justinianic Church of the Holy Apostles was designed in the form of an equal-armed cross with five domes, and ornamented with beautiful mosaics. This church was to remain the burial place of the emperors from Constantine himself until the 11th century. When the city fell to the Turks in 1453, the church was demolished to make room for the tomb of Mehmet II the Conqueror. Justinian was also concerned with other aspects of the city's built environment, legislating against the abuse of laws prohibiting building within 100 ft (30 m) of the sea front, in order to protect the view.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "During Justinian I's reign, the city's population reached about 500,000 people. However, the social fabric of Constantinople was also damaged by the onset of the Plague of Justinian between 541 and 542 AD. It killed perhaps 40% of the city's inhabitants.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "In the early 7th century, the Avars and later the Bulgars overwhelmed much of the Balkans, threatening Constantinople with attack from the west. Simultaneously, the Persian Sassanids overwhelmed the Prefecture of the East and penetrated deep into Anatolia. Heraclius, son of the exarch of Africa, set sail for the city and assumed the throne. He found the military situation so dire that he is said to have contemplated withdrawing the imperial capital to Carthage, but relented after the people of Constantinople begged him to stay. The citizens lost their right to free grain in 618 when Heraclius realized that the city could no longer be supplied from Egypt as a result of the Persian wars: the population fell substantially as a result.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "While the city withstood a siege by the Sassanids and Avars in 626, Heraclius campaigned deep into Persian territory and briefly restored the status quo in 628, when the Persians surrendered all their conquests. However, further sieges followed the Arab conquests, first from 674 to 678 and then in 717 to 718. The Theodosian Walls kept the city impenetrable from the land, while a newly discovered incendiary substance known as Greek fire allowed the Byzantine navy to destroy the Arab fleets and keep the city supplied. In the second siege, the second ruler of Bulgaria, Khan Tervel, rendered decisive help. He was called Saviour of Europe.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "In the 730s Leo III carried out extensive repairs of the Theodosian walls, which had been damaged by frequent and violent attacks; this work was financed by a special tax on all the subjects of the Empire.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "Theodora, widow of the Emperor Theophilus (died 842), acted as regent during the minority of her son Michael III, who was said to have been introduced to dissolute habits by her brother Bardas. When Michael assumed power in 856, he became known for excessive drunkenness, appeared in the hippodrome as a charioteer and burlesqued the religious processions of the clergy. He removed Theodora from the Great Palace to the Carian Palace and later to the monastery of Gastria, but, after the death of Bardas, she was released to live in the palace of St Mamas; she also had a rural residence at the Anthemian Palace, where Michael was assassinated in 867.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "In 860, an attack was made on the city by a new principality set up a few years earlier at Kiev by Askold and Dir, two Varangian chiefs: Two hundred small vessels passed through the Bosporus and plundered the monasteries and other properties on the suburban Princes' Islands. Oryphas, the admiral of the Byzantine fleet, alerted the emperor Michael, who promptly put the invaders to flight; but the suddenness and savagery of the onslaught made a deep impression on the citizens.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "In 980, the emperor Basil II received an unusual gift from Prince Vladimir of Kiev: 6,000 Varangian warriors, which Basil formed into a new bodyguard known as the Varangian Guard. They were known for their ferocity, honour, and loyalty. It is said that, in 1038, they were dispersed in winter quarters in the Thracesian Theme when one of their number attempted to violate a countrywoman, but in the struggle she seized his sword and killed him; instead of taking revenge, however, his comrades applauded her conduct, compensated her with all his possessions, and exposed his body without burial as if he had committed suicide. However, following the death of an Emperor, they became known also for plunder in the Imperial palaces. Later in the 11th century the Varangian Guard became dominated by Anglo-Saxons who preferred this way of life to subjugation by the new Norman kings of England.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "The Book of the Eparch, which dates to the 10th century, gives a detailed picture of the city's commercial life and its organization at that time. The corporations in which the tradesmen of Constantinople were organised were supervised by the Eparch, who regulated such matters as production, prices, import, and export. Each guild had its own monopoly, and tradesmen might not belong to more than one. It is an impressive testament to the strength of tradition how little these arrangements had changed since the office, then known by the Latin version of its title, had been set up in 330 to mirror the urban prefecture of Rome.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "In the 9th and 10th centuries, Constantinople had a population of between 500,000 and 800,000.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "In the 8th and 9th centuries, the iconoclast movement caused serious political unrest throughout the Empire. The emperor Leo III issued a decree in 726 against images, and ordered the destruction of a statue of Christ over one of the doors of the Chalke, an act that was fiercely resisted by the citizens. Constantine V convoked a church council in 754, which condemned the worship of images, after which many treasures were broken, burned, or painted over with depictions of trees, birds or animals: One source refers to the church of the Holy Virgin at Blachernae as having been transformed into a \"fruit store and aviary\". Following the death of her husband Leo IV in 780, the empress Irene restored the veneration of images through the agency of the Second Council of Nicaea in 787.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "The iconoclast controversy returned in the early 9th century, only to be resolved once more in 843 during the regency of Empress Theodora, who restored the icons. These controversies contributed to the deterioration of relations between the Western and the Eastern Churches.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "In the late 11th century catastrophe struck with the unexpected and calamitous defeat of the imperial armies at the Battle of Manzikert in Armenia in 1071. The Emperor Romanus Diogenes was captured. The peace terms demanded by Alp Arslan, sultan of the Seljuk Turks, were not excessive, and Romanus accepted them. On his release, however, Romanus found that enemies had placed their own candidate on the throne in his absence; he surrendered to them and suffered death by torture, and the new ruler, Michael VII Ducas, refused to honour the treaty. In response, the Turks began to move into Anatolia in 1073. The collapse of the old defensive system meant that they met no opposition, and the empire's resources were distracted and squandered in a series of civil wars. Thousands of Turkoman tribesmen crossed the unguarded frontier and moved into Anatolia. By 1080, a huge area had been lost to the Empire, and the Turks were within striking distance of Constantinople.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "Under the Comnenian dynasty (1081–1185), Byzantium staged a remarkable recovery. In 1090–91, the nomadic Pechenegs reached the walls of Constantinople, where Emperor Alexius I with the aid of the Kipchaks annihilated their army. In response to a call for aid from Alexius, the First Crusade assembled at Constantinople in 1096, but declining to put itself under Byzantine command set out for Jerusalem on its own account. John II built the monastery of the Pantocrator (Almighty) with a hospital for the poor of 50 beds.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "With the restoration of firm central government, the empire became fabulously wealthy. The population was rising (estimates for Constantinople in the 12th century vary from some 100,000 to 500,000), and towns and cities across the realm flourished. Meanwhile, the volume of money in circulation dramatically increased. This was reflected in Constantinople by the construction of the Blachernae palace, the creation of brilliant new works of art, and general prosperity at this time: an increase in trade, made possible by the growth of the Italian city-states, may have helped the growth of the economy. It is certain that the Venetians and others were active traders in Constantinople, making a living out of shipping goods between the Crusader Kingdoms of Outremer and the West, while also trading extensively with Byzantium and Egypt. The Venetians had factories on the north side of the Golden Horn, and large numbers of westerners were present in the city throughout the 12th century. Toward the end of Manuel I Komnenos's reign, the number of foreigners in the city reached about 60,000–80,000 people out of a total population of about 400,000 people. In 1171, Constantinople also contained a small community of 2,500 Jews. In 1182, most Latin (Western European) inhabitants of Constantinople were massacred.",
"title": "History"
},
{
"paragraph_id": 46,
"text": "In artistic terms, the 12th century was a very productive period. There was a revival in the mosaic art, for example: Mosaics became more realistic and vivid, with an increased emphasis on depicting three-dimensional forms. There was an increased demand for art, with more people having access to the necessary wealth to commission and pay for such work.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "On 25 July 1197, Constantinople was struck by a severe fire which burned the Latin Quarter and the area around the Gate of the Droungarios (Turkish: Odun Kapısı) on the Golden Horn. Nevertheless, the destruction wrought by the 1197 fire paled in comparison with that brought by the Crusaders. In the course of a plot between Philip of Swabia, Boniface of Montferrat and the Doge of Venice, the Fourth Crusade was, despite papal excommunication, diverted in 1203 against Constantinople, ostensibly promoting the claims of Alexios IV Angelos brother-in-law of Philip, son of the deposed emperor Isaac II Angelos. The reigning emperor Alexios III Angelos had made no preparation. The Crusaders occupied Galata, broke the defensive chain protecting the Golden Horn, and entered the harbour, where on 27 July they breached the sea walls: Alexios III fled. But the new Alexios IV Angelos found the Treasury inadequate, and was unable to make good the rewards he had promised to his western allies. Tension between the citizens and the Latin soldiers increased. In January 1204, the protovestiarius Alexios Murzuphlos provoked a riot, it is presumed, to intimidate Alexios IV, but whose only result was the destruction of the great statue of Athena Promachos, the work of Phidias, which stood in the principal forum facing west.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "In February 1204, the people rose again: Alexios IV was imprisoned and executed, and Murzuphlos took the purple as Alexios V Doukas. He made some attempt to repair the walls and organise the citizenry, but there had been no opportunity to bring in troops from the provinces and the guards were demoralised by the revolution. An attack by the Crusaders on 6 April failed, but a second from the Golden Horn on 12 April succeeded, and the invaders poured in. Alexios V fled. The Senate met in Hagia Sophia and offered the crown to Theodore Lascaris, who had married into the Angelos dynasty, but it was too late. He came out with the Patriarch to the Golden Milestone before the Great Palace and addressed the Varangian Guard. Then the two of them slipped away with many of the nobility and embarked for Asia. By the next day the Doge and the leading Franks were installed in the Great Palace, and the city was given over to pillage for three days.",
"title": "History"
},
{
"paragraph_id": 49,
"text": "Sir Steven Runciman, historian of the Crusades, wrote that the sack of Constantinople is \"unparalleled in history\".",
"title": "History"
},
{
"paragraph_id": 50,
"text": "For nine centuries, [...] the great city had been the capital of Christian civilization. It was filled with works of art that had survived from ancient Greece and with the masterpieces of its own exquisite craftsmen. The Venetians [...] seized treasures and carried them off to adorn [...] their town. But the Frenchmen and Flemings were filled with a lust for destruction. They rushed in a howling mob down the streets and through the houses, snatching up everything that glittered and destroying whatever they could not carry, pausing only to murder or to rape, or to break open the wine-cellars [...] . Neither monasteries nor churches nor libraries were spared. In Hagia Sophia itself, drunken soldiers could be seen tearing down the silken hangings and pulling the great silver iconostasis to pieces, while sacred books and icons were trampled under foot. While they drank merrily from the altar-vessels a prostitute set herself on the Patriarch's throne and began to sing a ribald French song. Nuns were ravished in their convents. Palaces and hovels alike were entered and wrecked. Wounded women and children lay dying in the streets. For three days the ghastly scenes [...] continued, till the huge and beautiful city was a shambles. [...] When [...] order was restored, [...] citizens were tortured to make them reveal the goods that they had contrived to hide.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "For the next half-century, Constantinople was the seat of the Latin Empire. Under the rulers of the Latin Empire, the city declined, both in population and the condition of its buildings. Alice-Mary Talbot cites an estimated population for Constantinople of 400,000 inhabitants; after the destruction wrought by the Crusaders on the city, about one third were homeless, and numerous courtiers, nobility, and higher clergy, followed various leading personages into exile. \"As a result Constantinople became seriously depopulated,\" Talbot concludes.",
"title": "History"
},
{
"paragraph_id": 52,
"text": "The Latins took over at least 20 churches and 13 monasteries, most prominently the Hagia Sophia, which became the cathedral of the Latin Patriarch of Constantinople. It is to these that E.H. Swift attributed the construction of a series of flying buttresses to shore up the walls of the church, which had been weakened over the centuries by earthquake tremors. However, this act of maintenance is an exception: for the most part, the Latin occupiers were too few to maintain all of the buildings, either secular and sacred, and many became targets for vandalism or dismantling. Bronze and lead were removed from the roofs of abandoned buildings and melted down and sold to provide money to the chronically under-funded Empire for defense and to support the court; Deno John Geanokoplos writes that \"it may well be that a division is suggested here: Latin laymen stripped secular buildings, ecclesiastics, the churches.\" Buildings were not the only targets of officials looking to raise funds for the impoverished Latin Empire: the monumental sculptures which adorned the Hippodrome and fora of the city were pulled down and melted for coinage. \"Among the masterpieces destroyed, writes Talbot, \"were a Herakles attributed to the fourth-century B.C. sculptor Lysippos, and monumental figures of Hera, Paris, and Helen.\"",
"title": "History"
},
{
"paragraph_id": 53,
"text": "The Nicaean emperor John III Vatatzes reportedly saved several churches from being dismantled for their valuable building materials; by sending money to the Latins \"to buy them off\" (exonesamenos), he prevented the destruction of several churches. According to Talbot, these included the churches of Blachernae, Rouphinianai, and St. Michael at Anaplous. He also granted funds for the restoration of the Church of the Holy Apostles, which had been seriously damaged in an earthquake.",
"title": "History"
},
{
"paragraph_id": 54,
"text": "The Byzantine nobility scattered, many going to Nicaea, where Theodore Lascaris set up an imperial court, or to Epirus, where Theodore Angelus did the same; others fled to Trebizond, where one of the Comneni had already with Georgian support established an independent seat of empire. Nicaea and Epirus both vied for the imperial title, and tried to recover Constantinople. In 1261, Constantinople was captured from its last Latin ruler, Baldwin II, by the forces of the Nicaean emperor Michael VIII Palaiologos under the command of Caesar Alexios Strategopoulos.",
"title": "History"
},
{
"paragraph_id": 55,
"text": "Although Constantinople was retaken by Michael VIII Palaiologos, the Empire had lost many of its key economic resources, and struggled to survive. The palace of Blachernae in the north-west of the city became the main Imperial residence, with the old Great Palace on the shores of the Bosporus going into decline. When Michael VIII captured the city, its population was 35,000 people, but, by the end of his reign, he had succeeded in increasing the population to about 70,000 people. The Emperor achieved this by summoning former residents who had fled the city when the crusaders captured it, and by relocating Greeks from the recently reconquered Peloponnese to the capital. Military defeats, civil wars, earthquakes and natural disasters were joined by the Black Death, which in 1347 spread to Constantinople, exacerbated the people's sense that they were doomed by God.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "Castilian traveler and writer Ruy González de Clavijo, who saw Constantinople in 1403, wrote that the area within the city walls included small neighborhoods separated by orchards and fields. The ruins of palaces and churches could be seen everywhere. The aqueducts and the most densely inhabited neighborhoods were along the coast of the Marmara Sea and Golden Horn. Only the coastal areas, in particular the commercial areas facing the Golden Horn, had a dense population. Although the Genoese colony in Galata was small, it was overcrowded and had magnificent mansions.",
"title": "History"
},
{
"paragraph_id": 57,
"text": "By May 1453, the city no longer possessed the treasure troves of Aladdin that the Ottoman troops longingly imagined as they stared up at the walls. Gennadios Scholarios, Patriarch of Constantinople from 1454 to 1464, was saying that the capital of the Empire, that was once the \"city of wisdom\", became \"the city of ruins\".",
"title": "History"
},
{
"paragraph_id": 58,
"text": "When the Ottoman Turks captured the city (1453) it contained approximately 50,000 people. Tedaldi of Florence estimated the population at 30,000 to 36,000, while in Chronica Vicentina, the italian Andrei di Arnaldo estimated it at 50,000. The plague epidemic of 1435 must have caused the population to drop.",
"title": "History"
},
{
"paragraph_id": 59,
"text": "The population decline also had a huge impact upon the Constantinople's defense capabilities. At the end of March 1453, emperor Constantine XI ordered a census of districts to record how many able-bodied men were in the city and whatever weapons each possessed for defense. George Sphrantzes, the faithful chancellor of the last emperor, recorded that \"in spite of the great size of our city, our defenders amounted to 4,773 Greeks, as well as just 200 foreigners\". In addition there were volunteers from outside, the \"Genoese, Venetians and those who came secretly from Galata to help the defense\", who numbered \"hardly as many as three thousand\", amounting to something under 8,000 men in total to defend a perimeter wall of twelve miles.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "Constantinople was conquered by the Ottoman Empire on 29 May 1453. Mehmed II intended to complete his father's mission and conquer Constantinople for the Ottomans. In 1452 he reached peace treaties with Hungary and Venice. He also began the construction of the Boğazkesen (later called the Rumelihisarı), a fortress at the narrowest point of the Bosphorus Strait, in order to restrict passage between the Black and Mediterranean seas. Mehmed then tasked the Hungarian gunsmith Urban with both arming Rumelihisarı and building cannon powerful enough to bring down the walls of Constantinople. By March 1453 Urban's cannon had been transported from the Ottoman capital of Edirne to the outskirts of Constantinople. In April, having quickly seized Byzantine coastal settlements along the Black Sea and Sea of Marmara, Ottoman troops in Rumelia and Anatolia assembled outside the Byzantine capital. Their fleet moved from Gallipoli to nearby Diplokionion, and the sultan himself set out to meet his army. The Ottomans were commanded by 21-year-old Ottoman Sultan Mehmed II. The conquest of Constantinople followed a seven-week siege which had begun on 6 April 1453. The Empire fell on 29 May 1453.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "The number of people captured by the Ottomans after the fall of the city was around 33,000. The small number of people left in the city indicates that there could not have been many residents there. The primary concern of Mehmed II in the early years of his reign was the construction and settlement of the city. However, since an insufficient number of Muslims accepted his invitation, the settlement of 30 abandoned neighborhoods with the inhabitants of formerly conquered areas became necessary.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "The Christian Orthodox city of Constantinople was now under Ottoman control. As tradition followed for the region, Ottoman soldiers had three days to pillage the city. When Mehmed II on the second day entered Constantinople through the Gate of Charisius (today known as Edirnekapı or Adrianople Gate), it is said that first thing he did was ride his horse to Hagia Sophia, which was not in good shape even though it was avoided in the pillage by strict orders. Displeased by the pillaging, Mehmed II ordered it to end, for it will be the capital of his empire. He then ordered that an imam meet him in Hagia Sophia in order to chant the adhan thus transforming the Orthodox cathedral into a Muslim mosque, solidifying Islamic rule in Constantinople.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "Mehmed's main concern with Constantinople had to do with consolidating control over the city and rebuilding its defenses. After 45,000 captives were marched from the city, building projects were commenced immediately after the conquest, which included the repair of the walls, construction of the citadel, and building a new palace. Mehmed issued orders across his empire that Muslims, Christians, and Jews should resettle the city, with Christians and Jews required to pay jizya and Muslims pay Zakat; he demanded that five thousand households needed to be transferred to Constantinople by September. From all over the Islamic empire, prisoners of war and deported people were sent to the city: these people were called \"Sürgün\" in Turkish (Greek: σουργούνιδες). Two centuries later, Ottoman traveler Evliya Çelebi gave a list of groups introduced into the city with their respective origins. Even today, many quarters of Istanbul, such as Aksaray, Çarşamba, bear the names of the places of origin of their inhabitants. However, many people escaped again from the city, and there were several outbreaks of plague, so that in 1459 Mehmed allowed the deported Greeks to come back to the city.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "Constantinople was the largest and richest urban center in the Eastern Mediterranean Sea during the late Eastern Roman Empire, mostly as a result of its strategic position commanding the trade routes between the Aegean Sea and the Black Sea. It would remain the capital of the eastern, Greek-speaking empire for over a thousand years and in some ways is the nexus of Byzantine art production. At its peak, roughly corresponding to the Middle Ages, it was one of the richest and largest cities in Europe. It exerted a powerful cultural pull and dominated much of the economic life in the Mediterranean. Visitors and merchants were especially struck by the beautiful monasteries and churches of the city, in particular the Hagia Sophia, or the Church of Holy Wisdom. According to Russian 14th-century traveler Stephen of Novgorod: \"As for Hagia Sophia, the human mind can neither tell it nor make description of it.\"",
"title": "Culture"
},
{
"paragraph_id": 65,
"text": "It was especially important for preserving in its libraries manuscripts of Greek and Latin authors throughout a period when instability and disorder caused their mass-destruction in western Europe and north Africa: On the city's fall, thousands of these were brought by refugees to Italy, and played a key part in stimulating the Renaissance, and the transition to the modern world. The cumulative influence of the city on the west, over the many centuries of its existence, is incalculable. In terms of technology, art and culture, as well as sheer size, Constantinople was without parallel anywhere in Europe for a thousand years. Many languages were spoken in Constantinople. A 16th century Chinese geographical treatise specifically recorded that there were translators living in the city, indicating this was a multilingual, multicultural cosmopolitan.",
"title": "Culture"
},
{
"paragraph_id": 66,
"text": "Constantinople was home to the first known Western Armenian journal published and edited by a woman (Elpis Kesaratsian). Entering circulation in 1862, Kit'arr or Guitar stayed in print for only seven months. Female writers who openly expressed their desires were viewed as immodest, but this changed slowly as journals began to publish more \"women's sections\". In the 1880s, Matteos Mamurian invited Srpouhi Dussap to submit essays for Arevelian Mamal. According to Zaruhi Galemkearian's autobiography, she was told to write about women's place in the family and home after she published two volumes of poetry in the 1890s. By 1900, several Armenian journals had started to include works by female contributors including the Constantinople-based Tsaghik.",
"title": "Culture"
},
{
"paragraph_id": 67,
"text": "Even before Constantinople was founded, the markets of Byzantion were mentioned first by Xenophon and then by Theopompus who wrote that Byzantians \"spent their time at the market and the harbour\". In Justinian's age the Mese street running across the city from east to west was a daily market. Procopius claimed \"more than 500 prostitutes\" did business along the market street. Ibn Batutta who traveled to the city in 1325 wrote of the bazaars \"Astanbul\" in which the \"majority of the artisans and salespeople in them are women\".",
"title": "Culture"
},
{
"paragraph_id": 68,
"text": "The Byzantine Empire used Roman and Greek architectural models and styles to create its own unique type of architecture. The influence of Byzantine architecture and art can be seen in the copies taken from it throughout Europe. Particular examples include St Mark's Basilica in Venice, the basilicas of Ravenna, and many churches throughout the Slavic East. Also, alone in Europe until the 13th-century Italian florin, the Empire continued to produce sound gold coinage, the solidus of Diocletian becoming the bezant prized throughout the Middle Ages. Its city walls were much imitated (for example, see Caernarfon Castle) and its urban infrastructure was moreover a marvel throughout the Middle Ages, keeping alive the art, skill and technical expertise of the Roman Empire. In the Ottoman period Islamic architecture and symbolism were used. Great bathhouses were built in Byzantine centers such as Constantinople and Antioch.",
"title": "Culture"
},
{
"paragraph_id": 69,
"text": "Constantine's foundation gave prestige to the Bishop of Constantinople, who eventually came to be known as the Ecumenical Patriarch, and made it a prime center of Christianity alongside Rome. This contributed to cultural and theological differences between Eastern and Western Christianity eventually leading to the Great Schism that divided Western Catholicism from Eastern Orthodoxy from 1054 onwards. Constantinople is also of great religious importance to Islam, as the conquest of Constantinople is one of the signs of the End time in Islam.",
"title": "Culture"
},
{
"paragraph_id": 70,
"text": "There were many institutions in ancient Constantinople such as the Imperial University of Constantinople, sometimes known as the University of the Palace Hall of Magnaura (Greek: Πανδιδακτήριον τῆς Μαγναύρας), an Eastern Roman educational institution that could trace its corporate origins to 425 AD, when the emperor Theodosius II founded the Pandidacterium (Medieval Greek: Πανδιδακτήριον).",
"title": "Culture"
},
{
"paragraph_id": 71,
"text": "In the past the Bulgarian newspapers in the late Ottoman period were Makedoniya, Napredŭk, and Pravo.",
"title": "Culture"
},
{
"paragraph_id": 72,
"text": "The city acted as a defence for the eastern provinces of the old Roman Empire against the barbarian invasions of the 5th century. The 18-meter-tall walls built by Theodosius II were, in essence, impregnable to the barbarians coming from south of the Danube river, who found easier targets to the west rather than the richer provinces to the east in Asia. From the 5th century, the city was also protected by the Anastasian Wall, a 60-kilometer chain of walls across the Thracian peninsula. Many scholars argue that these sophisticated fortifications allowed the east to develop relatively unmolested while Ancient Rome and the west collapsed.",
"title": "International status"
},
{
"paragraph_id": 73,
"text": "Constantinople's fame was such that it was described even in contemporary Chinese histories, the Old and New Book of Tang, which mentioned its massive walls and gates as well as a purported clepsydra mounted with a golden statue of a man. The Chinese histories even related how the city had been besieged in the 7th century by Mu'awiya I and how he exacted tribute in a peace settlement.",
"title": "International status"
}
] | Constantinople became the capital of the Roman Empire during the reign of Constantine the Great in 330. Following the collapse of the Western Roman Empire in the late 5th century, Constantinople remained the capital of the Eastern Roman Empire, the Latin Empire (1204–1261), and the Ottoman Empire (1453–1922). Following the Turkish War of Independence, the Turkish capital then moved to Ankara. Officially renamed Istanbul in 1930, the city is today the largest city and financial centre of Turkey and the largest city in Europe, straddling the Bosporus strait, lying in both Europe and Asia. In 324, after the Western and Eastern Roman Empires were reunited, the ancient city of Byzantium was selected to serve as the new capital of the Roman Empire, and the city was renamed Nova Roma, or 'New Rome', by Emperor Constantine the Great. On 11 May 330, it was renamed Constantinople and dedicated to Constantine. Constantinople is generally considered to be the center and the "cradle of Orthodox Christian civilization". From the mid-5th century to the early 13th century, Constantinople was the largest and wealthiest city in Europe. The city became famous for its architectural masterpieces, such as Hagia Sophia, the cathedral of the Eastern Orthodox Church, which served as the seat of the Ecumenical Patriarchate; the sacred Imperial Palace, where the emperors lived; the Hippodrome; the Golden Gate of the Land Walls; and opulent aristocratic palaces. The University of Constantinople was founded in the 5th century and contained artistic and literary treasures before it was sacked in 1204 and 1453, including its vast Imperial Library which contained the remnants of the Library of Alexandria and had 100,000 volumes. The city was the home of the Ecumenical Patriarch of Constantinople and guardian of Christendom's holiest relics such as the Crown of thorns and the True Cross. Constantinople was famous for its massive and complex fortifications, which ranked among the most sophisticated defensive architecture of antiquity. The Theodosian Walls consisted of a double wall lying about 2 kilometres (1.2 mi) to the west of the first wall and a moat with palisades in front. Constantinople's location between the Golden Horn and the Sea of Marmara reduced the land area that needed defensive walls. The city was built intentionally to rival Rome, and it was claimed that several elevations within its walls matched Rome's 'seven hills'. The impenetrable defenses enclosed magnificent palaces, domes, and towers, the result of prosperity Constantinople achieved as the gateway between two continents and two seas. Although besieged on numerous occasions by various armies, the defenses of Constantinople proved impenetrable for nearly nine hundred years. In 1204, however, the armies of the Fourth Crusade took and devastated the city, and for several decades, its inhabitants resided under Latin occupation in a dwindling and depopulated city. In 1261 the Byzantine Emperor Michael VIII Palaiologos liberated the city, and after the restoration under the Palaiologos dynasty, it enjoyed a partial recovery. With the advent of the Ottoman Empire in 1299, the Byzantine Empire began to lose territories, and the city began to lose population. By the early 15th century, the Byzantine Empire was reduced to just Constantinople and its environs, along with Morea in Greece, making it an enclave inside the Ottoman Empire. 
The city was finally besieged and conquered by the Ottoman Empire in 1453, remaining under its control until the early 20th century, after which it was renamed Istanbul under the Empire's successor state, Turkey. | 2001-10-18T08:04:30Z | 2023-12-30T17:25:19Z | [
"Template:JSTOR",
"Template:Citation",
"Template:Refbegin",
"Template:Other uses",
"Template:Timeline of Constantinople",
"Template:Lang-tr",
"Template:Doi",
"Template:Expand section",
"Template:Div col",
"Template:Commons category",
"Template:EB1911 poster",
"Template:Webarchive",
"Template:Istanbul",
"Template:Use dmy dates",
"Template:See also",
"Template:Cvt",
"Template:Further",
"Template:OEtymD",
"Template:Cite journal",
"Template:Refend",
"Template:Circa",
"Template:Who",
"Template:Div col end",
"Template:Cite web",
"Template:Infobox ancient site",
"Template:Notelist",
"Template:Reflist",
"Template:Authority control",
"Template:Short description",
"Template:Main",
"Template:Lang",
"Template:Blockquote",
"Template:ODB",
"Template:Cite book",
"Template:ISBN",
"Template:Cite encyclopedia",
"Template:Efn",
"Template:Lang-grc",
"Template:Sfn",
"Template:Lang-el",
"Template:Cite news",
"Template:Byzantine Empire topics",
"Template:Redirect2",
"Template:Convert",
"Template:Lang-gr",
"Template:Lang-grc-x-byzant"
] | https://en.wikipedia.org/wiki/Constantinople |
5,647 | Columbus | Columbus is a Latinized version of the Italian surname "Colombo". It most commonly refers to:
Columbus may also refer to: | [
{
"paragraph_id": 0,
"text": "Columbus is a Latinized version of the Italian surname \"Colombo\". It most commonly refers to:",
"title": ""
},
{
"paragraph_id": 1,
"text": "Columbus may also refer to:",
"title": ""
}
] | Columbus is a Latinized version of the Italian surname "Colombo". It most commonly refers to: Christopher Columbus (1451–1506), the Italian explorer
Columbus, Ohio, capital of the U.S. state of Ohio
Columbus, Georgia, the second-largest city in the U.S. state of Georgia Columbus may also refer to: | 2001-07-04T15:09:28Z | 2023-09-01T17:49:16Z | [
"Template:Lookfrom",
"Template:Intitle",
"Template:Disambiguation",
"Template:Wiktionary",
"Template:TOC right",
"Template:Canned search"
] | https://en.wikipedia.org/wiki/Columbus |
5,648 | Cornwall | Cornwall (/ˈkɔːrnwɔːl, -wəl/; Cornish: Kernow [ˈkɛrnɔʊ]) is a ceremonial county in South West England. It is recognised as one of the Celtic nations and is the homeland of the Cornish people. The county is bordered by the Atlantic Ocean to the north and west, Devon to the east, and the English Channel to the south. The largest settlement is Falmouth, and the county town is the city of Truro.
The county is rural, with an area of 1,375 square miles (3,562 km²) and a population of 568,210. After Falmouth (23,061), the largest settlements are Newquay (20,342), St Austell (19,958), and Truro (18,766). For local government purposes most of Cornwall is a unitary authority area, with the Isles of Scilly having a unique local authority. The Cornish nationalist movement disputes the constitutional status of Cornwall and seeks greater autonomy within the United Kingdom.
Cornwall is the westernmost part of the South West Peninsula. Its coastline is characterised by steep cliffs and, to the south, several rias, including those at the mouths of the rivers Fal and Fowey. It includes the southernmost point on Great Britain, Lizard Point, and forms a large part of the Cornwall National Landscape. The national landscape also includes Bodmin Moor, an upland outcrop of the Cornubian batholith granite formation. The county contains many short rivers; the longest is the Tamar, which forms the border with Devon.
Cornwall had a minor Roman presence, and later formed part of the Brittonic kingdom of Dumnonia. From the 7th century, the Britons in the South West increasingly came into conflict with the expanding Anglo-Saxon kingdom of Wessex, eventually being pushed west of the Tamar; by the Norman Conquest Cornwall was administered as part of England, though it retained its own culture. The remainder of the Middle Ages and Early Modern Period were relatively settled, with Cornwall developing its tin mining industry and becoming a duchy in 1337. During the Industrial Revolution, the tin and copper mines were expanded and then declined, with china clay extraction becoming a major industry. Railways were built, leading to a growth of tourism in the 20th century. The Cornish language became extinct as a living community language at the end of the 18th century, but is now being revived.
The modern English name "Cornwall" is a compound of two terms coming from two different language groups: "Corn-" derives from the Brittonic tribal name Cornovii, rooted in a Proto-Celtic element meaning "horn" or "headland", while "-wall" derives from the Old English exonym walh, meaning "foreigner" or "Romanized Celt".
In the Cornish language, Cornwall is Kernow which stems from the same Proto-Celtic root.
Humans reoccupied Britain after the last Ice Age. The area now known as Cornwall was first inhabited in the Palaeolithic and Mesolithic periods. It continued to be occupied by Neolithic and then by Bronze Age people.
Cornwall in the Late Bronze Age formed part of a maritime trading-networked culture which researchers have dubbed the Atlantic Bronze Age system, and which extended over most of the areas of present-day Ireland, England, Wales, France, Spain, and Portugal.
During the British Iron Age, Cornwall, like all of Britain (modern England, Scotland, Wales, and the Isle of Man), was inhabited by a Celtic-speaking people known as the Britons with distinctive cultural relations to neighbouring Brittany. The Common Brittonic spoken at this time eventually developed into several distinct tongues, including Cornish, Welsh, Breton, Cumbric and Pictish.
The first written account of Cornwall comes from the 1st-century BC Sicilian Greek historian Diodorus Siculus, supposedly quoting or paraphrasing the 4th-century BC geographer Pytheas, who had sailed to Britain:
The inhabitants of that part of Britain called Belerion (or Land's End) from their intercourse with foreign merchants, are civilised in their manner of life. They prepare the tin, working very carefully the earth in which it is produced ... Here then the merchants buy the tin from the natives and carry it over to Gaul, and after travelling overland for about thirty days, they finally bring their loads on horses to the mouth of the Rhône.
The identity of these merchants is unknown. It has been theorised that they were Phoenicians, but there is no evidence for this. Professor Timothy Champion, discussing Diodorus Siculus's comments on the tin trade, states that "Diodorus never actually says that the Phoenicians sailed to Cornwall. In fact, he says quite the opposite: the production of Cornish tin was in the hands of the natives of Cornwall, and its transport to the Mediterranean was organised by local merchants, by sea and then overland through France, passing through areas well outside Phoenician control." Isotopic evidence suggests that tin ingots found off the coast of Haifa, Israel, may have come from Cornwall. Tin, required for the production of bronze, was a relatively rare and precious commodity in the Bronze Age – hence the interest shown in Devon and Cornwall's tin resources. (For further discussion of tin mining see the section on the economy below.)
In the first four centuries AD, during the time of Roman dominance in Britain, Cornwall was rather remote from the main centres of Romanisation – the nearest being Isca Dumnoniorum, modern-day Exeter. However, the Roman road system extended into Cornwall with four significant Roman sites based on forts: Tregear near Nanstallon was discovered in the early 1970s, two others were found at Restormel Castle, Lostwithiel in 2007, and a third fort near Calstock was also discovered early in 2007. In addition, a Roman-style villa was found at Magor Farm, Illogan in 1935. Ptolemy's Geographike Hyphegesis mentions four towns controlled by the Dumnonii, three of which may have been in Cornwall. However, after 410 AD, Cornwall appears to have reverted to rule by Romano-Celtic chieftains of the Cornovii tribe as part of the Brittonic kingdom of Dumnonia (which also included present-day Devonshire and the Scilly Isles), including the territory of one Marcus Cunomorus, with at least one significant power base at Tintagel in the early 6th century.
"King" Mark of Cornwall is a semi-historical figure known from Welsh literature, from the Matter of Britain, and, in particular, from the later Norman-Breton medieval romance of Tristan and Yseult, where he appears as a close relative of King Arthur, himself usually considered to be born of the Cornish people in folklore traditions derived from Geoffrey of Monmouth's 12th-century Historia Regum Britanniae.
Archaeology supports ecclesiastical, literary and legendary evidence for some relative economic stability and close cultural ties between the sub-Roman Westcountry, South Wales, Brittany, the Channel Islands, and Ireland through the fifth and sixth centuries. In Cornwall, the arrival of Celtic saints such as Nectan, Paul Aurelian, Petroc, Piran, Samson and numerous others reinforced the pre-existing Roman Christianity.
The Battle of Deorham in 577 saw the separation of Dumnonia (and therefore Cornwall) from Wales, following which the Dumnonii often came into conflict with the expanding English kingdom of Wessex. Centwine of Wessex "drove the Britons as far as the sea" in 682, and by 690 St Boniface, then a Saxon boy, was attending an abbey in Exeter, which was in turn ruled by a Saxon abbot. The Carmen Rhythmicum written by Aldhelm contains the earliest literary reference to Cornwall as distinct from Devon. Religious tensions between the Dumnonians (who celebrated Celtic Christian traditions) and Wessex (who were Roman Catholic) are described in Aldhelm's letter to King Geraint. The Annales Cambriae report that in AD 722 the Britons of Cornwall won a battle at "Hehil". It seems likely that the enemy the Cornish fought was a West Saxon force, as evidenced by the naming of King Ine of Wessex and his kinsman Nonna in reference to an earlier Battle of Llongborth in 710.
The Anglo-Saxon Chronicle stated in 815 (adjusted date) "and in this year king Ecgbryht raided in Cornwall from east to west." This has been interpreted to mean a raid from the Tamar to Land's End, and the end of Cornish independence. However, the Anglo-Saxon Chronicle states that in 825 (adjusted date) a battle took place between the Wealas (Cornish) and the Defnas (men of Devon) at Gafulforda. The Cornish giving battle here, and the later battle at Hingston Down, cast doubt on any claims of control Wessex had at this stage.
In 838, the Cornish and their Danish allies were defeated by Egbert in the Battle of Hingston Down at Hengestesdune. In 875, the last recorded king of Cornwall, Dumgarth, is said to have drowned. Around the 880s, Anglo-Saxons from Wessex had established modest land holdings in the north eastern part of Cornwall, notably Alfred the Great, who had acquired a few estates. William of Malmesbury, writing around 1120, says that King Athelstan of England (924–939) fixed the boundary between English and Cornish people at the east bank of the River Tamar. While elements of William's story, like the burning of Exeter, have been cast in doubt by recent writers, Athelstan did re-establish a separate Cornish bishop, and relations between Wessex and the Cornish elite improved from the time of his rule.
Eventually King Edgar was able to issue charters across the whole width of Cornwall, and frequently sent emissaries or visited personally, as seen by his appearances in the Bodmin Manumissions.
One interpretation of the Domesday Book is that by this time the native Cornish landowning class had been almost completely dispossessed and replaced by English landowners, particularly Harold Godwinson himself. However, the Bodmin manumissions show that two leading Cornish figures nominally had Saxon names, but these were both glossed with native Cornish names. In 1068, Brian of Brittany may have been created Earl of Cornwall, and naming evidence cited by medievalist Edith Ditmas suggests that many other post-Conquest landowners in Cornwall were Breton allies of the Normans, the Bretons being descended from Britons who had fled to what is today Brittany during the early years of the Anglo-Saxon conquest. She also proposed this period for the early composition of the Tristan and Iseult cycle by poets such as Béroul from a pre-existing shared Brittonic oral tradition.
Soon after the Norman conquest most of the land was transferred to the new Breton–Norman aristocracy, with the lion's share going to Robert, Count of Mortain, half-brother of King William and the largest landholder in England after the king with his stronghold at Trematon Castle near the mouth of the Tamar.
Subsequently, however, Norman absentee landlords became replaced by a new Cornish-Norman ruling class including scholars such as Richard Rufus of Cornwall. These families eventually became the new rulers of Cornwall, typically speaking Norman French, Breton-Cornish, Latin, and eventually English, with many becoming involved in the operation of the Stannary Parliament system, the Earldom and eventually the Duchy of Cornwall. The Cornish language continued to be spoken and acquired a number of characteristics establishing its identity as a separate language from Breton.
The stannary parliaments and stannary courts were legislative and legal institutions in Cornwall and in Devon (in the Dartmoor area). The stannary courts administered equity for the region's tin-miners and tin mining interests, and they were also courts of record for the towns dependent on the mines. The separate and powerful government institutions available to the tin miners reflected the enormous importance of the tin industry to the English economy during the Middle Ages. Special laws for tin miners pre-date written legal codes in Britain, and ancient traditions exempted everyone connected with tin mining in Cornwall and Devon from any jurisdiction other than the stannary courts in all but the most exceptional circumstances.
Cornish piracy was active during the Elizabethan era on the west coast of Britain. Cornwall is well known for its wreckers who preyed on ships passing Cornwall's rocky coastline. During the 17th and 18th centuries Cornwall was a major smuggling area.
In later times, Cornwall was known to the Anglo-Saxons as "West Wales" to distinguish it from "North Wales" (the modern nation of Wales). The name appears in the Anglo-Saxon Chronicle in 891 as On Corn walum. In the Domesday Book it was referred to as Cornualia and in c. 1198 as Cornwal. Other names for the county include a latinisation of the name as Cornubia (first appears in a mid-9th-century deed purporting to be a copy of one dating from c. 705), and as Cornugallia in 1086.
Cornwall forms the tip of the south-west peninsula of the island of Great Britain, and is therefore exposed to the full force of the prevailing winds that blow in from the Atlantic Ocean. The coastline is composed mainly of resistant rocks that give rise in many places to tall cliffs. Cornwall has a border with only one other county, Devon, which is formed almost entirely by the River Tamar, and the remainder (to the north) by the Marsland Valley.
The north and south coasts have different characteristics. The north coast on the Celtic Sea, part of the Atlantic Ocean, is more exposed and therefore has a wilder nature. The prosaically named High Cliff, between Boscastle and St Gennys, is the highest sheer-drop cliff in Cornwall at 223 metres (732 ft). However, there are also many extensive stretches of fine golden sand which form the beaches important to the tourist industry, such as those at Bude, Polzeath, Watergate Bay, Perranporth, Porthtowan, Fistral Beach, Newquay, St Agnes, St Ives, and on the south coast Gyllyngvase beach in Falmouth and the large beach at Praa Sands further to the south-west. There are two river estuaries on the north coast: Hayle Estuary and the estuary of the River Camel, which provides Padstow and Rock with a safe harbour. The seaside town of Newlyn is a popular holiday destination, as it is one of the last remaining traditional Cornish fishing ports, with views reaching over Mount's Bay.
The south coast, dubbed the "Cornish Riviera", is more sheltered and there are several broad estuaries offering safe anchorages, such as at Falmouth and Fowey. Beaches on the south coast usually consist of coarser sand and shingle, interspersed with rocky sections of wave-cut platform. Also on the south coast, the picturesque fishing village of Polperro, at the mouth of the Pol River, and the fishing port of Looe on the River Looe are both popular with tourists.
The interior of the county consists of a roughly east–west spine of infertile and exposed upland, with a series of granite intrusions, such as Bodmin Moor, which contains the highest land within Cornwall. From east to west, and with approximately descending altitude, these are Bodmin Moor, Hensbarrow north of St Austell, Carnmenellis to the south of Camborne, and the Penwith or Land's End peninsula. These intrusions are the central part of the granite outcrops that form the exposed parts of the Cornubian batholith of south-west Britain, which also includes Dartmoor to the east in Devon and the Isles of Scilly to the west, the latter now being partially submerged.
The intrusion of the granite into the surrounding sedimentary rocks gave rise to extensive metamorphism and mineralisation, and this led to Cornwall being one of the most important mining areas in Europe until the early 20th century. It is thought tin was mined here as early as the Bronze Age, and copper, lead, zinc and silver have all been mined in Cornwall. Alteration of the granite also gave rise to extensive deposits of China Clay, especially in the area to the north of St Austell, and the extraction of this remains an important industry.
The uplands are surrounded by more fertile, mainly pastoral farmland. Near the south coast, deep wooded valleys provide sheltered conditions for flora that like shade and a moist, mild climate. These areas lie mainly on Devonian sandstone and slate. The north east of Cornwall lies on Carboniferous rocks known as the Culm Measures. In places these have been subjected to severe folding, as can be seen on the north coast near Crackington Haven and in several other locations.
The geology of the Lizard peninsula is unusual, in that it is mainland Britain's only example of an ophiolite, a section of oceanic crust now found on land. Much of the peninsula consists of the dark green and red Precambrian serpentinite, which forms spectacular cliffs, notably at Kynance Cove, and carved and polished serpentine ornaments are sold in local gift shops. This ultramafic rock also forms a very infertile soil which covers the flat and marshy heaths of the interior of the peninsula. This is home to rare plants, such as the Cornish Heath, which has been adopted as the county flower.
Cornwall's only city, and the home of the council headquarters, is Truro. Nearby Falmouth is notable as a port. St Just in Penwith is the westernmost town in England, though the same claim has been made for Penzance, which is larger. St Ives and Padstow are today small vessel ports with a major tourism and leisure sector in their economies. Newquay on the north coast is another major urban settlement which is known for its beaches and is a popular surfing destination, as is Bude further north, but Newquay is now also becoming important for its aviation-related industries. Camborne is the county's largest town and more populous than the capital Truro. Together with the neighbouring town of Redruth, it forms the largest urban area in Cornwall, and both towns were significant as centres of the global tin mining industry in the 19th century; nearby copper mines were also very productive during that period. St Austell is also larger than Truro and was the centre of the china clay industry in Cornwall. Until four new parishes were created for the St Austell area on 1 April 2009 St Austell was the largest settlement in Cornwall.
Cornwall borders the county of Devon at the River Tamar. Major roads between Cornwall and the rest of Great Britain are the A38 which crosses the Tamar at Plymouth via the Tamar Bridge and the town of Saltash, the A39 road (Atlantic Highway) from Barnstaple, passing through North Cornwall to end in Falmouth, and the A30 which connects Cornwall to the M5 motorway at Exeter, crosses the border south of Launceston, crosses Bodmin Moor and connects Bodmin, Truro, Redruth, Camborne, Hayle and Penzance. Torpoint Ferry links Plymouth with Torpoint on the opposite side of the Hamoaze. A rail bridge, the Royal Albert Bridge built by Isambard Kingdom Brunel (1859), provides the other main land transport link. The city of Plymouth, a large urban centre in south west Devon, is an important location for services such as hospitals, department stores, road and rail transport, and cultural venues, particularly for people living in east Cornwall.
Cardiff and Swansea, across the Bristol Channel, have at times been connected to Cornwall by ferry, but these services no longer operate.
The Isles of Scilly are served by ferry (from Penzance) and by aeroplane, and have their own airport: St Mary's Airport. There are regular flights between St Mary's and Land's End Airport, near St Just, and Newquay Airport; during the summer season, a service is also provided between St Mary's and Exeter Airport, in Devon.
Cornwall has varied habitats including terrestrial and marine ecosystems. One notable species in local decline is the reindeer lichen, which has been made a priority for protection under the national UK Biodiversity Action Plan.
Botanists divide Cornwall and Scilly into two vice-counties: West (1) and East (2). The standard flora is F. H. Davey's Flora of Cornwall (1909). Davey was assisted by A. O. Hume, whom he thanks as his companion on excursions in Cornwall and Devon and for help in compiling the Flora, the publication of which Hume financed.
Cornwall has a temperate oceanic climate (Köppen climate classification: Cfb), with mild winters and cool summers. Cornwall has the mildest and one of the sunniest climates of the United Kingdom, as a result of its oceanic setting and the influence of the Gulf Stream. The average annual temperature in Cornwall ranges from 9.8 °C (49.6 °F) in the central uplands to 11.6 °C (52.9 °F) on the Isles of Scilly. Winters are among the warmest in the country due to the moderating effects of the warm ocean currents, and frost and snow are very rare at the coast and are also rare in the central upland areas. Summers are, however, not as warm as in other parts of southern England. The surrounding sea and its southwesterly position mean that Cornwall's weather can be relatively changeable.
Cornwall is one of the sunniest areas in the UK. It has more than 1,541 hours of sunshine per year, with the highest average of 7.6 hours of sunshine per day in July. The moist, mild air coming from the southwest brings higher amounts of rainfall than in eastern Great Britain, at 1,051 to 1,290 mm (41.4 to 50.8 in) per year. However, this is not as much as in more northern areas of the west coast. The Isles of Scilly, for example, where there are on average fewer than two days of air frost per year, are the only area in the UK to be in hardiness zone 10. The islands have, on average, fewer than one day per year on which the air temperature exceeds 30 °C, and are in the AHS Heat Zone 1. Extreme temperatures in Cornwall are particularly rare; however, extreme weather in the form of storms and floods is common. Due to climate change Cornwall faces more heatwaves and severe droughts, faster coastal erosion, stronger storms and higher wind speeds, as well as the possibility of more high-impact flooding.
Cornish, a member of the Brythonic branch of the Celtic language family, is a revived language that died out as a first language in the late 18th century. It is closely related to the other Brythonic languages, Breton and Welsh, and less so to the Goidelic languages. Cornish has no legal status in the UK.
There has been a revival of the language by academics and optimistic enthusiasts since the mid-19th century that gained momentum from the publication in 1904 of Henry Jenner's Handbook of the Cornish Language. It is the language of a dispersed social network of speakers rather than of a geographical community. Cornwall Council encourages and facilitates language classes within the county, in schools and within the wider community.
In 2002, Cornish was named as a UK regional language in the European Charter for Regional or Minority Languages. As a result, in 2005 its promoters received limited government funding. Several words originating in Cornish are used in the mining terminology of English, such as costean, gossan, gunnies, kibbal, kieve and vug.
The Cornish language and culture influenced the emergence of particular pronunciations and grammar not used elsewhere in England. The Cornish dialect is spoken to varying degrees; however, someone speaking in broad Cornish may be practically unintelligible to one not accustomed to it. Cornish dialect has generally declined, as in most places it is now little more than a regional accent and grammatical differences have been eroded over time. Marked differences in vocabulary and usage still exist between the eastern and western parts of Cornwall.
Saint Piran's Flag is the national flag and ancient banner of Cornwall, and an emblem of the Cornish people. The banner of Saint Piran is a white cross on a black background (in terms of heraldry 'sable, a cross argent'). According to legend Saint Piran adopted these colours from seeing the white tin in the black coals and ashes during his discovery of tin. The Cornish flag is an exact reverse of the former Breton black cross national flag and is known by the same name "Kroaz Du".
Since the 19th century, Cornwall, with its unspoilt maritime scenery and strong light, has sustained a vibrant visual art scene of international renown. Artistic activity within Cornwall was initially centred on the art colony of Newlyn, most active at the turn of the 20th century. This Newlyn School is associated with the names of Stanhope Forbes, Elizabeth Forbes, Norman Garstin and Lamorna Birch. Modernist writers such as D. H. Lawrence and Virginia Woolf lived in Cornwall between the wars, and Ben Nicholson, the painter, having visited in the 1920s, came to live in St Ives with his then wife, the sculptor Barbara Hepworth, at the outbreak of the Second World War. They were later joined by the Russian emigrant Naum Gabo, and other artists. These included Peter Lanyon, Terry Frost, Patrick Heron, Bryan Wynter and Roger Hilton. St Ives also houses the Leach Pottery, where Bernard Leach and his followers championed Japanese-inspired studio pottery. Much of this modernist work can be seen in Tate St Ives. The Newlyn Society and Penwith Society of Arts continue to be active, and contemporary visual art is documented in a dedicated online journal.
Local television programmes are provided by BBC South West and ITV West Country. Radio programmes are produced by BBC Radio Cornwall in Truro for the entire county, Heart West, Source FM for the Falmouth and Penryn areas, Coast FM for west Cornwall, Radio St Austell Bay for the St Austell area, NCB Radio for north Cornwall, and Pirate FM.
Cornwall has a folk music tradition that has survived into the present and is well known for its unusual folk survivals such as Mummers Plays, the Furry Dance in Helston played by the famous Helston Town Band, and Obby Oss in Padstow.
Newlyn is home to a food and music festival that hosts live music, cooking demonstrations, and displays of locally caught fish.
As in other former mining districts of Britain, male voice choirs and brass bands remain very popular in Cornwall, with events such as the summer Brass on the Grass concerts at Constantine. Cornwall also has around 40 brass bands, including the six-times National Champions of Great Britain, Camborne Youth Band, and the bands of Lanner and St Dennis.
Cornish players are regular participants in inter-Celtic festivals, and Cornwall itself has several inter-Celtic festivals such as Perranporth's Lowender Peran folk festival.
Contemporary musician Richard D. James (also known as Aphex Twin) grew up in Cornwall, as did Luke Vibert and Alex Parks, winner of Fame Academy 2003. Roger Taylor, the drummer from the band Queen, was also raised in the county, and currently lives not far from Falmouth. The American singer-songwriter Tori Amos now resides predominantly in North Cornwall not far from Bude with her family. The lutenist, composer and festival director Ben Salfield lives in Truro. Mick Fleetwood of Fleetwood Mac was born in Redruth.
Cornwall's rich heritage and dramatic landscape have inspired numerous writers.
Sir Arthur Quiller-Couch, author of many novels and works of literary criticism, lived in Fowey: his novels are mainly set in Cornwall. Daphne du Maurier lived at Menabilly near Fowey and many of her novels had Cornish settings: The Loving Spirit, Jamaica Inn, Rebecca, Frenchman's Creek, The King's General (partially), My Cousin Rachel, The House on the Strand and Rule Britannia. She is also noted for writing Vanishing Cornwall. Cornwall provided the inspiration for The Birds, one of her terrifying series of short stories, made famous as a film by Alfred Hitchcock.
Conan Doyle's The Adventure of the Devil's Foot featuring Sherlock Holmes is set in Cornwall. Winston Graham's series Poldark, Kate Tremayne's Adam Loveday series, Susan Cooper's novels Over Sea, Under Stone and Greenwitch, and Mary Wesley's The Camomile Lawn are all set in Cornwall. Writing under the pseudonym of Alexander Kent, Douglas Reeman sets parts of his Richard Bolitho and Adam Bolitho series in the Cornwall of the late 18th and the early 19th centuries, particularly in Falmouth. Gilbert K. Chesterton placed the action of many of his stories there.
Medieval Cornwall is the setting of the trilogy by Monica Furlong, Wise Child, Juniper and Colman, as well as part of Charles Kingsley's Hereward the Wake.
Hammond Innes's novel, The Killer Mine; Charles de Lint's novel The Little Country; and Chapters 24–25 of J. K. Rowling's Harry Potter and the Deathly Hallows take place in Cornwall (Shell Cottage, on the beach outside the fictional village of Tinworth).
David Cornwell, who wrote espionage novels under the name John le Carré, lived and worked in Cornwall. Nobel Prize-winning novelist William Golding was born in St Columb Minor in 1911, and returned to live near Truro from 1985 until his death in 1993. D. H. Lawrence spent a short time living in Cornwall. Rosamunde Pilcher grew up in Cornwall, and several of her books take place there.
St. Michael's Mount in Cornwall (under the fictional name of Mount Polbearne) is the setting of the Little Beach Street Bakery series by Jenny Colgan, who spent holidays in Cornwall as a child. The book series includes Little Beach Street Bakery (2014), Summer at Little Beach Street Bakery (2015), Christmas at Little Beach Street Bakery (2016), and Sunrise by the Sea (2021).
In the Paddington Bear novels by Michael Bond the title character is said to have landed at an unspecified port in Cornwall having travelled in a lifeboat aboard a cargo ship from darkest Peru. From here he travels to London on a train and eventually arrives at Paddington Station.
Enid Blyton's 1953 novel Five Go Down to the Sea (the twelfth book in The Famous Five series) is set in Cornwall, near the fictional coastal village of Tremannon.
The late Poet Laureate Sir John Betjeman was famously fond of Cornwall and it featured prominently in his poetry. He is buried in the churchyard at St Enodoc's Church, Trebetherick. Charles Causley, the poet, was born in Launceston and is perhaps the best known of Cornish poets. Jack Clemo and the scholar A. L. Rowse were also notable Cornishmen known for their poetry; the Rev. R. S. Hawker of Morwenstow wrote some poetry which was very popular in the Victorian period. The Scottish poet W. S. Graham lived in West Cornwall from 1944 until his death in 1986.
The poet Laurence Binyon wrote "For the Fallen" (first published in 1914) while sitting on the cliffs between Pentire Point and The Rumps, and a stone plaque was erected in 2001 to commemorate the fact. The plaque bears the inscription "FOR THE FALLEN / Composed on these cliffs, 1914". The plaque also bears below this the fourth stanza (sometimes referred to as "The Ode") of the poem:

"They shall grow not old, as we that are left grow old: / Age shall not weary them, nor the years condemn. / At the going down of the sun and in the morning / We will remember them."
Cornwall produced a substantial number of passion plays, such as the Ordinalia, during the Middle Ages. Many are still extant, and provide valuable information about the Cornish language. See also Cornish literature.
Colin Wilson, a prolific writer who is best known for his debut work The Outsider (1956) and for The Mind Parasites (1967), lived in Gorran Haven, a small village on the southern Cornish coast. The writer D. M. Thomas was born in Redruth but lived and worked in Australia and the United States before returning to his native Cornwall. He has written novels, poetry, and other works, including translations from Russian.
Thomas Hardy's drama The Queen of Cornwall (1923) is a version of the Tristan story; the second act of Richard Wagner's opera Tristan und Isolde takes place in Cornwall, as do Gilbert and Sullivan's operettas The Pirates of Penzance and Ruddigore.
Clara Vyvyan was the author of various books about many aspects of Cornish life such as Our Cornwall. She once wrote: "The Loneliness of Cornwall is a loneliness unchanged by the presence of men, its freedoms a freedom inexpressible by description or epitaph. You cannot say Cornwall is this or that. You cannot describe it in a word or visualise it in a second. You may know the country from east to west and sea to sea, but if you close your eyes and think about it no clear-cut image rises before you. In this quality of changefulness have we possibly surprised the secret of Cornwall's wild spirit—in this intimacy the essence of its charm? Cornwall!"

A level of Tomb Raider: Legend, a game dealing with Arthurian legend, takes place in Cornwall at a museum above King Arthur's tomb. The adventure game The Lost Crown is set in the fictional town of Saxton, which uses the Cornish settlements of Polperro, Talland and Looe as its model.
The fairy tale Jack the Giant Killer takes place in Cornwall.
The Mousehole Cat, a children's book written by Antonia Barber and illustrated by Nicola Bayley, is set in the Cornish village Mousehole and based on the legend of Tom Bawcock and the continuing tradition of Tom Bawcock's Eve.
The main sports played in Cornwall are rugby, football and cricket. Athletes from Truro have done well in Olympic and Commonwealth Games fencing, winning several medals. Surfing is popular, particularly with tourists, thousands of whom take to the water throughout the summer months. Some towns and villages have bowling clubs, and a wide variety of British sports are played throughout Cornwall. Cornwall is also one of the few places in England where shinty is played; the English Shinty Association is based in Penryn.
The Cornwall County Cricket Club plays as one of the minor counties of English cricket.
Truro, and all of the towns and some villages have football clubs belonging to the Cornwall County Football Association, and some clubs have teams competing higher within the English football league pyramid. Of these, the highest ranked — by two flights — is Truro City F.C., who will be playing in the National League South in the 2023–24 season. Other notable Cornish teams include Mousehole A.F.C., Helston Athletic F.C., and Falmouth Town F.C.
Viewed as an "important identifier of ethnic affiliation", rugby union has become a sport strongly tied to notions of Cornishness, and since the 20th century it has emerged as one of the most popular spectator and team sports in Cornwall (perhaps the most popular), with professional Cornish rugby footballers being described as a "formidable force", "naturally independent, both in thought and deed, yet paradoxically staunch English patriots whose top players have represented England with pride and passion".
In 1985, sports journalist Alan Gibson made a direct connection between the love of rugby in Cornwall and the ancient parish games of hurling and wrestling that existed for centuries before rugby officially began. Among Cornwall's native sports are a distinctive form of Celtic wrestling related to Breton wrestling, and Cornish hurling, a kind of mediaeval football played with a silver ball (distinct from Irish hurling). Cornish wrestling is Cornwall's oldest sport and, as the county's native tradition, it has travelled the world to places like Victoria, Australia, and Grass Valley, California, following the miners and gold rushes. Cornish hurling now takes place at St. Columb Major, St Ives, and less frequently at Bodmin.
In rugby league, Cornwall R.L.F.C., founded in 2021, will represent the county in the professional league system. The semi-professional club will start in the third-tier RFL League 1. At an amateur level, the county is represented by Cornish Rebels.
Due to its long coastline, various maritime sports are popular in Cornwall, notably sailing and surfing. International events in both are held in Cornwall. Cornwall hosted the Inter-Celtic Watersports Festival in 2006. Surfing in particular is very popular, as locations such as Bude and Newquay offer some of the best surf in the UK. Pilot gig rowing has been popular for many years, and the World Championships take place annually on the Isles of Scilly. On 2 September 2007, 300 surfers at Polzeath beach set a new world record for the highest number of surfers riding the same wave, as part of the Global Surf Challenge and a project called Earthwave to raise awareness about global warming.
As its population is comparatively small, and largely rural, Cornwall's contribution to national sport in the United Kingdom has been limited; the county's greatest successes have come in fencing. In 2014, half of the men's GB team fenced for Truro Fencing Club, and three Truro fencers appeared at the 2012 Olympics.
Cornwall has a strong culinary heritage. Surrounded on three sides by the sea amid fertile fishing grounds, Cornwall naturally has fresh seafood readily available; Newlyn is the largest fishing port in the UK by value of fish landed, and is known for its wide range of restaurants. Television chef Rick Stein has long operated a fish restaurant in Padstow for this reason, and Jamie Oliver chose to open his second restaurant, Fifteen, in Watergate Bay near Newquay. MasterChef host and founder of Smiths of Smithfield, John Torode, in 2007 purchased Seiners in Perranporth. One famous local fish dish is Stargazy pie, a fish-based pie in which the heads of the fish stick through the piecrust, as though "star-gazing". The pie is cooked as part of traditional celebrations for Tom Bawcock's Eve, but is not generally eaten at any other time.
Cornwall is perhaps best known though for its pasties, a savoury dish made with pastry. Today's pasties usually contain a filling of beef steak, onion, potato and swede with salt and white pepper, but historically pasties had a variety of different fillings. "Turmut, 'tates and mate" (i.e. "Turnip, potatoes and meat", turnip being the Cornish and Scottish term for swede, itself an abbreviation of 'Swedish Turnip', the British term for rutabaga) describes a filling once very common. For instance, the licky pasty contained mostly leeks, and the herb pasty contained watercress, parsley, and shallots. Pasties are often locally referred to as oggies. Historically, pasties were also often made with sweet fillings such as jam, apple and blackberry, plums or cherries. The wet climate and relatively poor soil of Cornwall make it unsuitable for growing many arable crops. However, it is ideal for growing the rich grass required for dairying, leading to the production of Cornwall's other famous export, clotted cream. This forms the basis for many local specialities including Cornish fudge and Cornish ice cream. Cornish clotted cream has Protected Geographical Status under EU law, and cannot be made anywhere else. Its principal manufacturer is A. E. Rodda & Son of Scorrier.
Local cakes and desserts include Saffron cake, Cornish heavy (hevva) cake, Cornish fairings biscuits, figgy 'obbin, Cream tea and whortleberry pie.
There are also many types of beers brewed in Cornwall—those produced by Sharp's Brewery, Skinner's Brewery, Keltek Brewery and St Austell Brewery are the best known—including stouts, ales and other beer types. There is some small scale production of wine, mead and cider.
Cornwall is recognised by Cornish and Celtic political groups as one of six Celtic nations, alongside Brittany, Ireland, the Isle of Man, Scotland and Wales. (The Isle of Man Government and the Welsh Government also recognise Asturias and Galicia.) Cornwall is represented, as one of the Celtic nations, at the Festival Interceltique de Lorient, an annual celebration of Celtic culture held in Brittany.
Cornwall Council consider Cornwall's unique cultural heritage and distinctiveness to be one of the area's major assets. They see Cornwall's language, landscape, Celtic identity, political history, patterns of settlement, maritime tradition, industrial heritage, and non-conformist tradition as among the features making up its "distinctive" culture. However, it is uncertain exactly how many of the people living in Cornwall consider themselves to be Cornish; results from different surveys (including the national census) have varied. In the 2001 census, 7 per cent of people in Cornwall identified themselves as Cornish, rather than British or English. However, activists have argued that this underestimated the true number as there was no explicit "Cornish" option included in the official census form. Subsequent surveys have suggested that as many as 44 per cent identify as Cornish. Many people in Cornwall say that this issue would be resolved if a Cornish option became available on the census. The question and content recommendations for the 2011 census explained the process of selecting an ethnic identity, which is relevant to understanding the often-quoted figure of 37,000 who claimed Cornish identity. The 2021 census found that 17% of people in Cornwall identified as being Cornish (89,000), with 14% of people in Cornwall identifying as Cornish-only (80,000). Again there was no tick-box provided, and "Cornish" had to be written in as "Other".
On 24 April 2014 it was announced that Cornish people have been granted minority status under the European Framework Convention for the Protection of National Minorities.
Cornwall forms two local government districts: Cornwall and the Isles of Scilly. The district of Cornwall is governed by Cornwall Council, a unitary authority based at Lys Kernow in Truro, and the Council of the Isles of Scilly governs the archipelago from Hugh Town. The Crown Court is based at the Courts of Justice in Truro. Magistrates' Courts are found in Truro (but at a different location to the Crown Court) and at Bodmin.
The Isles of Scilly form part of the ceremonial county of Cornwall, and have, at times, been served by the same county administration. Since 1890 they have been administered by their own unitary authority, the Council of the Isles of Scilly. They are grouped with Cornwall for other administrative purposes, such as the National Health Service and Devon and Cornwall Police.
Before reorganisation on 1 April 2009, council functions throughout the rest of Cornwall were organised in two tiers, with Cornwall County Council and district councils for its six districts, Caradon, Carrick, Kerrier, North Cornwall, Penwith, and Restormel. While projected to streamline services, cut red tape and save around £17 million a year, the reorganisation was met with wide opposition, with a poll in 2008 showing 89% disapproval from Cornish residents.
The first elections for the unitary authority were held on 4 June 2009. The council has 123 seats; the largest party (in 2017) is the Conservatives, with 46 seats. The Liberal Democrats are the second-largest party, with 37 seats, with the Independents the third-largest grouping with 30.
Before the creation of the unitary council, the former county council had 82 seats, the majority of which were held by the Liberal Democrats, elected at the 2005 county council elections. The six former districts had a total of 249 council seats, and the groups with greatest numbers of councillors were Liberal Democrats, Conservatives and Independents.
Following a review by the Boundary Commission for England taking effect at the 2010 general election, Cornwall is divided into six county constituencies to elect MPs to the House of Commons of the United Kingdom.
Before the 2010 boundary changes Cornwall had five constituencies, all of which were won by Liberal Democrats at the 2005 general election. In the 2010 general election Liberal Democrat candidates won three constituencies and Conservative candidates won three other constituencies. At the 2015 general election all six Cornish seats were won by Conservative candidates; all these Conservative MPs retained their seats at the 2017 general election, and the Conservatives won all six constituencies again at the 2019 general election.
Until 1832, Cornwall had 44 MPs—more than any other county—reflecting the importance of tin to the Crown. Most of the increase in numbers of MPs came between 1529 and 1584 after which there was no change until 1832.
Although Cornwall does not have a designated government department, in 2007, while Leader of the Opposition, David Cameron created a Shadow Secretary of State for Cornwall. The position was not made into a formal UK Cabinet position when Cameron entered government following the 2010 United Kingdom general election.
Cornish nationalists have organised into two political parties: Mebyon Kernow, formed in 1951, and the Cornish Nationalist Party. In addition to the political parties, there are various interest groups such as the Revived Cornish Stannary Parliament and the Celtic League. The Cornish Constitutional Convention was formed in 2000 as a cross-party organisation including representatives from the private, public and voluntary sectors to campaign for the creation of a Cornish Assembly, along the lines of the National Assembly for Wales, Northern Ireland Assembly and the Scottish Parliament. Between 5 March 2000 and December 2001, the campaign collected the signatures of 41,650 Cornish residents endorsing the call for a devolved assembly, along with 8,896 signatories from outside Cornwall. The resulting petition was presented to the Prime Minister, Tony Blair.
Cornwall is one of the poorest parts of the United Kingdom in terms of per capita GDP and average household incomes. At the same time, parts of the county, especially on the coast, have high house prices, driven up by demand from relatively wealthy retired people and second-home owners. The GVA per head was 65% of the UK average for 2004. The GDP per head for Cornwall and the Isles of Scilly was 79.2% of the EU-27 average for 2004, the UK per head average was 123.0%. In 2011, the latest available figures, Cornwall's (including the Isles of Scilly) measure of wealth was 64% of the European average per capita.
Historically mining of tin (and later also of copper) was important in the Cornish economy. The first reference to this appears to be by Pytheas: see above. Julius Caesar was the last classical writer to mention the tin trade, which appears to have declined during the Roman occupation. The tin trade revived in the Middle Ages and its importance to the Kings of England resulted in certain privileges being granted to the tinners; the Cornish rebellion of 1497 is attributed to grievances of the tin miners. In the mid-19th century, however, the tin trade again fell into decline. Other primary sector industries that have declined since the 1960s include china clay production, fishing and farming.
Today, the Cornish economy depends heavily on its tourist industry, which makes up around a quarter of the economy. The official measures of deprivation and poverty at district and 'sub-ward' level show that there is great variation in poverty and prosperity in Cornwall with some areas among the poorest in England and others among the top half in prosperity. For example, the ranking of 32,482 sub-wards in England in the index of multiple deprivation (2006) ranged from 819th (part of Penzance East) to 30,899th (part of Saltash Burraton in Caradon), where the lower number represents the greater deprivation.
Cornwall was one of two UK areas designated as 'less developed regions' by the European Union, which, prior to Brexit, meant the area qualified for EU Cohesion Policy grants. It was granted Objective 1 status by the European Commission for 2000 to 2006, followed by further rounds of funding known as 'Convergence Funding' from 2007 to 2013 and 'Growth Programme' for 2014 to 2020.
Cornwall has a tourism-based seasonal economy which is estimated to contribute up to 24% of Cornwall's gross domestic product. In 2011 tourism brought £1.85 billion into the Cornish economy. Cornwall's unique culture, spectacular landscape and mild climate make it a popular tourist destination, despite being somewhat distant from the United Kingdom's main centres of population. Surrounded on three sides by the English Channel and Celtic Sea, Cornwall has many miles of beaches and cliffs; the South West Coast Path follows a complete circuit of both coasts. Other tourist attractions include moorland, country gardens, museums, historic and prehistoric sites, and wooded valleys. Five million tourists visit Cornwall each year, mostly drawn from within the UK. Visitors to Cornwall are served by the airport at Newquay, while Perranporth airfield serves private jets, charters and helicopters; night sleeper and daily rail services run between Cornwall, London and other regions of the UK.
Newquay and Porthtowan are popular destinations for surfers. In recent years, the Eden Project near St Austell has been a major financial success, drawing one in eight of Cornwall's visitors in 2004.
In the summer of 2018, due to the recognition of its beaches and weather through social media and the marketing of travel companies, Cornwall received about 20 per cent more visitors than the usual 4.5 million figure. The sudden rise and demand of tourism in Cornwall caused multiple traffic and safety issues in coastal areas.
In October 2021, Cornwall was longlisted for the UK City of Culture 2025, but failed to make the March 2022 shortlist.
Other industries include fishing, although this has been significantly restructured by EU fishing policies (as of 2010 the Southwest Handline Fishermen's Association has started to revive the fishing industry).
Agriculture, once an important part of the Cornish economy, has declined significantly relative to other industries. However, there is still a strong dairy industry, with products such as Cornish clotted cream.
Mining of tin and copper was also an industry, but today the derelict mine workings survive only as a World Heritage Site. However, the Camborne School of Mines, which was relocated to Penryn in 2004, is still a world centre of excellence in the field of mining and applied geology, and the grant of World Heritage status has attracted funding for conservation and heritage tourism. China clay extraction has also been an important industry in the St Austell area, but this sector has been in decline, and this, coupled with increased mechanisation, has led to a decrease in employment in this sector, although the industry still employs around 2,133 people in Cornwall and generates over £80 million for the local economy.
In March 2016, a Canadian company, Strongbow Exploration, acquired out of administration a 100% interest in the South Crofty tin mine and the associated mineral rights in Cornwall, with the aim of reopening the mine and bringing it back to full production. Work is currently ongoing to build a water filtration plant in order to dewater the mine.
Cornwall is the landing point for twenty-two of the world's fastest high-speed undersea and transatlantic fibre optic cables, making it an important hub within Europe's Internet infrastructure. The Superfast Cornwall project was completed in 2015 and saw 95% of Cornish houses and businesses connected to a fibre-based broadband network, with over 90% of properties able to connect at speeds above 24 Mbit/s.
The county's newest industry is aviation: Newquay Airport is the home of a growing business park with Enterprise Zone status, known as Aerohub. A space launch facility, Spaceport Cornwall, has also been established at Newquay, in partnership with Goonhilly satellite tracking station near Helston in south Cornwall.
Cornwall's population was 537,400 in the 2011 census, with a population density of 144 people per square kilometre, ranking it 40th and 41st, respectively, among the 47 counties of England. Cornwall's population was 95.7% White British and has a relatively high rate of population growth. At 11.2% in the 1980s and 5.3% in the 1990s, it had the fifth-highest population growth rate of the counties of England. The natural change has been a small population decline, and the population increase is due to inward migration into Cornwall. According to the 1991 census, the population was 469,800.
Cornwall has a relatively high retired population, with 22.9% of pensionable age, compared with 20.3% for the United Kingdom as a whole. This may be due partly to Cornwall's rural and coastal geography increasing its popularity as a retirement location, and partly to outward migration of younger residents to more economically diverse areas.
Over 10,000 students attend Cornwall's two universities, Falmouth University and the University of Exeter (including Camborne School of Mines). Falmouth University is a specialist public university for the creative industries and arts, while the University of Exeter has two campuses in Cornwall, Truro and Penryn, the latter shared with Falmouth. Penryn campus is home to educational departments such as the rapidly growing Centre for Ecology and Conservation (CEC), the Environment and Sustainability Institute (ESI), and the Institute of Cornish Studies.
Cornwall has a comprehensive education system, with 31 state and eight independent secondary schools. There are three further education colleges: Truro and Penwith College, Cornwall College, and Callywith College, which opened in September 2017. The Isles of Scilly have only one school, while the former Restormel district has the highest school population, and school year sizes are around 200, with none above 270. Before the introduction of comprehensive schools there were a number of grammar schools and secondary modern schools, e.g. the schools that later became Sir James Smith's School and Wadebridge School. There are also primary schools in many villages and towns: e.g. St Mabyn Church of England Primary School. | [
{
"paragraph_id": 0,
"text": "Cornwall (/ˈkɔːrnwɔːl, -wəl/; Cornish: Kernow [ˈkɛrnɔʊ]) is a ceremonial county in South West England. It is recognised as one of the Celtic nations and is the homeland of the Cornish people. The county is bordered by the Atlantic Ocean to the north and west, Devon to the east, and the English Channel to the south. The largest settlement is Falmouth, and the county town is the city of Truro.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The county is rural, with an area of 1,375 square miles (3,562 km) and population of 568,210. After Falmouth (23,061), the largest settlements are Newquay (20,342), St Austell (19,958), and Truro (18,766). For local government purposes most of Cornwall is a unitary authority area, with the Isles of Scilly having a unique local authority. The Cornish nationalist movement disputes the constitutional status of Cornwall and seeks greater autonomy within the United Kingdom.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Cornwall is the westernmost part of the South West Peninsula. Its coastline is characterised by steep cliffs and, to the south, several rias, including those at the mouths of the rivers Fal and Fowey. It includes the southernmost point on Great Britain, Lizard Point, and forms a large part of the Cornwall National Landscape. The national landscape also includes Bodmin Moor, an upland outcrop of the Cornubian batholith granite formation. The county contains many short rivers; the longest is the Tamar, which forms the border with Devon.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Cornwall had a minor Roman presence, and later formed part of the Brittonic kingdom of Dumnonia. From the 7th century, the Britons in the South West increasingly came into conflict with the expanding Anglo-Saxon kingdom of Wessex, eventually being pushed west of the Tamar; by the Norman Conquest Cornwall was administered as part of England, though it retained its own culture. The remainder of the Middle Ages and Early Modern Period were relatively settled, with Cornwall developing its tin mining industry and becoming a duchy in 1337. During the Industrial Revolution, the tin and copper mines were expanded and then declined, with china clay extraction becoming a major industry. Railways were built, leading to a growth of tourism in the 20th century. The Cornish language became extinct as a living community language at the end of the 18th century, but is now being revived.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The modern English name \"Cornwall\" is a compound of two terms coming from two different language groups:",
"title": "Name"
},
{
"paragraph_id": 5,
"text": "In the Cornish language, Cornwall is Kernow which stems from the same Proto-Celtic root.",
"title": "Name"
},
{
"paragraph_id": 6,
"text": "Humans reoccupied Britain after the last Ice Age. The area now known as Cornwall was first inhabited in the Palaeolithic and Mesolithic periods. It continued to be occupied by Neolithic and then by Bronze Age people.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Cornwall in the Late Bronze Age formed part of a maritime trading-networked culture which researchers have dubbed the Atlantic Bronze Age system, and which extended over most of the areas of present-day Ireland, England, Wales, France, Spain, and Portugal.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "During the British Iron Age, Cornwall, like all of Britain (modern England, Scotland, Wales, and the Isle of Man), was inhabited by a Celtic-speaking people known as the Britons with distinctive cultural relations to neighbouring Brittany. The Common Brittonic spoken at this time eventually developed into several distinct tongues, including Cornish, Welsh, Breton, Cumbric and Pictish.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The first written account of Cornwall comes from the 1st-century BC Sicilian Greek historian Diodorus Siculus, supposedly quoting or paraphrasing the 4th-century BCE geographer Pytheas, who had sailed to Britain:",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The inhabitants of that part of Britain called Belerion (or Land's End) from their intercourse with foreign merchants, are civilised in their manner of life. They prepare the tin, working very carefully the earth in which it is produced ... Here then the merchants buy the tin from the natives and carry it over to Gaul, and after travelling overland for about thirty days, they finally bring their loads on horses to the mouth of the Rhône.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The identity of these merchants is unknown. It has been theorised that they were Phoenicians, but there is no evidence for this. Professor Timothy Champion, discussing Diodorus Siculus's comments on the tin trade, states that \"Diodorus never actually says that the Phoenicians sailed to Cornwall. In fact, he says quite the opposite: the production of Cornish tin was in the hands of the natives of Cornwall, and its transport to the Mediterranean was organised by local merchants, by sea and then overland through France, passing through areas well outside Phoenician control.\" Isotopic evidence suggests that tin ingots found off the coast of Haifa, Israel, may have from Cornwall. Tin, required for the production of bronze, was a relatively rare and precious commodity in the Bronze Age – hence the interest shown in Devon and Cornwall's tin resources. (For further discussion of tin mining see the section on the economy below.)",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In the first four centuries AD, during the time of Roman dominance in Britain, Cornwall was rather remote from the main centres of Romanisation – the nearest being Isca Dumnoniorum, modern-day Exeter. However, the Roman road system extended into Cornwall with four significant Roman sites based on forts: Tregear near Nanstallon was discovered in the early 1970s, two others were found at Restormel Castle, Lostwithiel in 2007, and a third fort near Calstock was also discovered early in 2007. In addition, a Roman-style villa was found at Magor Farm, Illogan in 1935. Ptolemy's Geographike Hyphegesis mentions four towns controlled by the Dumnonii, three of which may have been in Cornwall. However, after 410 AD, Cornwall appears to have reverted to rule by Romano-Celtic chieftains of the Cornovii tribe as part of the Brittonic kingdom of Dumnonia (which also included present-day Devonshire and the Scilly Isles), including the territory of one Marcus Cunomorus, with at least one significant power base at Tintagel in the early 6th century.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "\"King\" Mark of Cornwall is a semi-historical figure known from Welsh literature, from the Matter of Britain, and, in particular, from the later Norman-Breton medieval romance of Tristan and Yseult, where he appears as a close relative of King Arthur, himself usually considered to be born of the Cornish people in folklore traditions derived from Geoffrey of Monmouth's 12th-century Historia Regum Britanniae.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Archaeology supports ecclesiastical, literary and legendary evidence for some relative economic stability and close cultural ties between the sub-Roman Westcountry, South Wales, Brittany, the Channel Islands, and Ireland through the fifth and sixth centuries. In Cornwall, the arrival of Celtic saints such as Nectan, Paul Aurelian, Petroc, Piran, Samson and numerous others reinforced the preexisting Roman christianity.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The Battle of Deorham in 577 saw the separation of Dumnonia (and therefore Cornwall) from Wales, following which the Dumnonii often came into conflict with the expanding English kingdom of Wessex. Centwine of Wessex \"drove the Britons as far as the sea\" in 682, and by 690 St Bonifice, then a Saxon boy, was attending an abbey in Exeter, which was in turn ruled by a Saxon abbot. The Carmen Rhythmicum written by Aldhelm contains the earliest literary reference to Cornwall as distinct from Devon. Religious tensions between the Dumnonians (who celebrated celtic Christian traditions) and Wessex (who were Roman Catholic) are described in Aldhelm's letter to King Geraint. The Annales Cambriae report that in AD 722 the Britons of Cornwall won a battle at \"Hehil\". It seems likely that the enemy the Cornish fought was a West Saxon force, as evidenced by the naming of King Ine of Wessex and his kinsman Nonna in reference to an earlier Battle of Llongborth in 710.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The Anglo-Saxon Chronicle stated in 815 (adjusted date) \"and in this year king Ecgbryht raided in Cornwall from east to west.\" this has been interpreted to mean a raid from the Tamar to Land's End, and the end of Cornish independence. However, the Anglo-Saxon Chronicle states that in 825 (adjusted date) a battle took place between the Wealas (Cornish) and the Defnas (men of Devon) at Gafulforda. The Cornish giving battle here, and the later battle at Hingston Down, casts doubt on any claims of control Wessex had at this stage.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In 838, the Cornish and their Danish allies were defeated by Egbert in the Battle of Hingston Down at Hengestesdune. In 875, the last recorded king of Cornwall, Dumgarth, is said to have drowned. Around the 880s, Anglo-Saxons from Wessex had established modest land holdings in the north eastern part of Cornwall; notably Alfred the Great who had acquired a few estates. William of Malmesbury, writing around 1120, says that King Athelstan of England (924–939) fixed the boundary between English and Cornish people at the east bank of the River Tamar. While elements of William's story, like the burning of Exeter, have been cast in doubt by recent writers Athelstan did re-establish a separate Cornish Bishop and relations between Wessex and the Cornish elite improved from the time of his rule.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Eventually King Edgar was able to issue charters the width of Cornwall, and frequently sent emissaries or visited personally as seen by his appearances in the Bodmin Manumissions.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "One interpretation of the Domesday Book is that by this time the native Cornish landowning class had been almost completely dispossessed and replaced by English landowners, particularly Harold Godwinson himself. However, the Bodmin manumissions show that two leading Cornish figures nominally had Saxon names, but these were both glossed with native Cornish names. In 1068, Brian of Brittany may have been created Earl of Cornwall, and naming evidence cited by medievalist Edith Ditmas suggests that many other post-Conquest landowners in Cornwall were Breton allies of the Normans, the Bretons being descended from Britons who had fled to what is today Brittany during the early years of the Anglo-Saxon conquest. She also proposed this period for the early composition of the Tristan and Iseult cycle by poets such as Béroul from a pre-existing shared Brittonic oral tradition.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Soon after the Norman conquest most of the land was transferred to the new Breton–Norman aristocracy, with the lion's share going to Robert, Count of Mortain, half-brother of King William and the largest landholder in England after the king with his stronghold at Trematon Castle near the mouth of the Tamar.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Subsequently, however, Norman absentee landlords became replaced by a new Cornish-Norman ruling class including scholars such as Richard Rufus of Cornwall. These families eventually became the new rulers of Cornwall, typically speaking Norman French, Breton-Cornish, Latin, and eventually English, with many becoming involved in the operation of the Stannary Parliament system, the Earldom and eventually the Duchy of Cornwall. The Cornish language continued to be spoken and acquired a number of characteristics establishing its identity as a separate language from Breton.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The stannary parliaments and stannary courts were legislative and legal institutions in Cornwall and in Devon (in the Dartmoor area). The stannary courts administered equity for the region's tin-miners and tin mining interests, and they were also courts of record for the towns dependent on the mines. The separate and powerful government institutions available to the tin miners reflected the enormous importance of the tin industry to the English economy during the Middle Ages. Special laws for tin miners pre-date written legal codes in Britain, and ancient traditions exempted everyone connected with tin mining in Cornwall and Devon from any jurisdiction other than the stannary courts in all but the most exceptional circumstances.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Cornish piracy was active during the Elizabethan era on the west coast of Britain. Cornwall is well known for its wreckers who preyed on ships passing Cornwall's rocky coastline. During the 17th and 18th centuries Cornwall was a major smuggling area.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In later times, Cornwall was known to the Anglo-Saxons as \"West Wales\" to distinguish it from \"North Wales\" (the modern nation of Wales). The name appears in the Anglo-Saxon Chronicle in 891 as On Corn walum. In the Domesday Book it was referred to as Cornualia and in c. 1198 as Cornwal. Other names for the county include a latinisation of the name as Cornubia (first appears in a mid-9th-century deed purporting to be a copy of one dating from c. 705), and as Cornugallia in 1086.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Cornwall forms the tip of the south-west peninsula of the island of Great Britain, and is therefore exposed to the full force of the prevailing winds that blow in from the Atlantic Ocean. The coastline is composed mainly of resistant rocks that give rise in many places to tall cliffs. Cornwall has a border with only one other county, Devon, which is formed almost entirely by the River Tamar, and the remainder (to the north) by the Marsland Valley.",
"title": "Physical geography"
},
{
"paragraph_id": 26,
"text": "The north and south coasts have different characteristics. The north coast on the Celtic Sea, part of the Atlantic Ocean, is more exposed and therefore has a wilder nature. The prosaically named High Cliff, between Boscastle and St Gennys, is the highest sheer-drop cliff in Cornwall at 223 metres (732 ft). However, there are also many extensive stretches of fine golden sand which form the beaches important to the tourist industry, such as those at Bude, Polzeath, Watergate Bay, Perranporth, Porthtowan, Fistral Beach, Newquay, St Agnes, St Ives, and on the south coast Gyllyngvase beach in Falmouth and the large beach at Praa Sands further to the south-west. There are two river estuaries on the north coast: Hayle Estuary and the estuary of the River Camel, which provides Padstow and Rock with a safe harbour. The seaside town of Newlyn is a popular holiday destination, as it is one of the last remaining traditional Cornish fishing ports, with views reaching over Mount's Bay.",
"title": "Physical geography"
},
{
"paragraph_id": 27,
"text": "The south coast, dubbed the \"Cornish Riviera\", is more sheltered and there are several broad estuaries offering safe anchorages, such as at Falmouth and Fowey. Beaches on the south coast usually consist of coarser sand and shingle, interspersed with rocky sections of wave-cut platform. Also on the south coast, the picturesque fishing village of Polperro, at the mouth of the Pol River, and the fishing port of Looe on the River Looe are both popular with tourists.",
"title": "Physical geography"
},
{
"paragraph_id": 28,
"text": "The interior of the county consists of a roughly east–west spine of infertile and exposed upland, with a series of granite intrusions, such as Bodmin Moor, which contains the highest land within Cornwall. From east to west, and with approximately descending altitude, these are Bodmin Moor, Hensbarrow north of St Austell, Carnmenellis to the south of Camborne, and the Penwith or Land's End peninsula. These intrusions are the central part of the granite outcrops that form the exposed parts of the Cornubian batholith of south-west Britain, which also includes Dartmoor to the east in Devon and the Isles of Scilly to the west, the latter now being partially submerged.",
"title": "Physical geography"
},
{
"paragraph_id": 29,
"text": "The intrusion of the granite into the surrounding sedimentary rocks gave rise to extensive metamorphism and mineralisation, and this led to Cornwall being one of the most important mining areas in Europe until the early 20th century. It is thought tin was mined here as early as the Bronze Age, and copper, lead, zinc and silver have all been mined in Cornwall. Alteration of the granite also gave rise to extensive deposits of China Clay, especially in the area to the north of St Austell, and the extraction of this remains an important industry.",
"title": "Physical geography"
},
{
"paragraph_id": 30,
"text": "The uplands are surrounded by more fertile, mainly pastoral farmland. Near the south coast, deep wooded valleys provide sheltered conditions for flora that like shade and a moist, mild climate. These areas lie mainly on Devonian sandstone and slate. The north east of Cornwall lies on Carboniferous rocks known as the Culm Measures. In places these have been subjected to severe folding, as can be seen on the north coast near Crackington Haven and in several other locations.",
"title": "Physical geography"
},
{
"paragraph_id": 31,
"text": "The geology of the Lizard peninsula is unusual, in that it is mainland Britain's only example of an ophiolite, a section of oceanic crust now found on land. Much of the peninsula consists of the dark green and red Precambrian serpentinite, which forms spectacular cliffs, notably at Kynance Cove, and carved and polished serpentine ornaments are sold in local gift shops. This ultramafic rock also forms a very infertile soil which covers the flat and marshy heaths of the interior of the peninsula. This is home to rare plants, such as the Cornish Heath, which has been adopted as the county flower.",
"title": "Physical geography"
},
{
"paragraph_id": 32,
"text": "Cornwall's only city, and the home of the council headquarters, is Truro. Nearby Falmouth is notable as a port. St Just in Penwith is the westernmost town in England, though the same claim has been made for Penzance, which is larger. St Ives and Padstow are today small vessel ports with a major tourism and leisure sector in their economies. Newquay on the north coast is another major urban settlement which is known for its beaches and is a popular surfing destination, as is Bude further north, but Newquay is now also becoming important for its aviation-related industries. Camborne is the county's largest town and more populous than the capital Truro. Together with the neighbouring town of Redruth, it forms the largest urban area in Cornwall, and both towns were significant as centres of the global tin mining industry in the 19th century; nearby copper mines were also very productive during that period. St Austell is also larger than Truro and was the centre of the china clay industry in Cornwall. Until four new parishes were created for the St Austell area on 1 April 2009 St Austell was the largest settlement in Cornwall.",
"title": "Settlements and transport"
},
{
"paragraph_id": 33,
"text": "Cornwall borders the county of Devon at the River Tamar. Major roads between Cornwall and the rest of Great Britain are the A38 which crosses the Tamar at Plymouth via the Tamar Bridge and the town of Saltash, the A39 road (Atlantic Highway) from Barnstaple, passing through North Cornwall to end in Falmouth, and the A30 which connects Cornwall to the M5 motorway at Exeter, crosses the border south of Launceston, crosses Bodmin Moor and connects Bodmin, Truro, Redruth, Camborne, Hayle and Penzance. Torpoint Ferry links Plymouth with Torpoint on the opposite side of the Hamoaze. A rail bridge, the Royal Albert Bridge built by Isambard Kingdom Brunel (1859), provides the other main land transport link. The city of Plymouth, a large urban centre in south west Devon, is an important location for services such as hospitals, department stores, road and rail transport, and cultural venues, particularly for people living in east Cornwall.",
"title": "Settlements and transport"
},
{
"paragraph_id": 34,
"text": "Cardiff and Swansea, across the Bristol Channel, have at some times in the past been connected to Cornwall by ferry, but these do not operate now.",
"title": "Settlements and transport"
},
{
"paragraph_id": 35,
"text": "The Isles of Scilly are served by ferry (from Penzance) and by aeroplane, having its own airport: St Mary's Airport. There are regular flights between St Mary's and Land's End Airport, near St Just, and Newquay Airport; during the summer season, a service is also provided between St Mary's and Exeter Airport, in Devon.",
"title": "Settlements and transport"
},
{
"paragraph_id": 36,
"text": "Cornwall has varied habitats including terrestrial and marine ecosystems. One noted species in decline locally is the Reindeer lichen, which species has been made a priority for protection under the national UK Biodiversity Action Plan.",
"title": "Ecology"
},
{
"paragraph_id": 37,
"text": "Botanists divide Cornwall and Scilly into two vice-counties: West (1) and East (2). The standard flora is by F. H. Davey Flora of Cornwall (1909). Davey was assisted by A. O. Hume and he thanks Hume, his companion on excursions in Cornwall and Devon, and for help in the compilation of that Flora, publication of which was financed by him.",
"title": "Ecology"
},
{
"paragraph_id": 38,
"text": "Cornwall has a temperate Oceanic climate (Köppen climate classification: Cfb), with mild winters and cool summers. Cornwall has the mildest and one of the sunniest climates of the United Kingdom, as a result of its oceanic setting and the influence of the Gulf Stream. The average annual temperature in Cornwall ranges from 11.6 °C (52.9 °F) on the Isles of Scilly to 9.8 °C (49.6 °F) in the central uplands. Winters are among the warmest in the country due to the moderating effects of the warm ocean currents, and frost and snow are very rare at the coast and are also rare in the central upland areas. Summers are, however, not as warm as in other parts of southern England. The surrounding sea and its southwesterly position mean that Cornwall's weather can be relatively changeable.",
"title": "Ecology"
},
{
"paragraph_id": 39,
"text": "Cornwall is one of the sunniest areas in the UK. It has more than 1,541 hours of sunshine per year, with the highest average of 7.6 hours of sunshine per day in July. The moist, mild air coming from the southwest brings higher amounts of rainfall than in eastern Great Britain, at 1,051 to 1,290 mm (41.4 to 50.8 in) per year. However, this is not as much as in more northern areas of the west coast. The Isles of Scilly, for example, where there are on average fewer than two days of air frost per year, is the only area in the UK to be in the Hardiness zone 10. The islands have, on average, less than one day of air temperature exceeding 30 °C per year and are in the AHS Heat Zone 1. Extreme temperatures in Cornwall are particularly rare; however, extreme weather in the form of storms and floods is common. Due to climate change Cornwall faces more heatwaves and severe droughts, faster coastal erosion, stronger storms and higher wind speeds as well as the possibility of more high impact flooding.",
"title": "Ecology"
},
{
"paragraph_id": 40,
"text": "Cornish, a member of the Brythonic branch of the Celtic language family, is a revived language that died out as a first language in the late 18th century. It is closely related to the other Brythonic languages, Breton and Welsh, and less so to the Goidelic languages. Cornish has no legal status in the UK.",
"title": "Culture"
},
{
"paragraph_id": 41,
"text": "There has been a revival of the language by academics and optimistic enthusiasts since the mid-19th century that gained momentum from the publication in 1904 of Henry Jenner's Handbook of the Cornish Language. It is a social networking community language rather than a social community group language. Cornwall Council encourages and facilitates language classes within the county, in schools and within the wider community.",
"title": "Culture"
},
{
"paragraph_id": 42,
"text": "In 2002, Cornish was named as a UK regional language in the European Charter for Regional or Minority Languages. As a result, in 2005 its promoters received limited government funding. Several words originating in Cornish are used in the mining terminology of English, such as costean, gossan, gunnies, kibbal, kieve and vug.",
"title": "Culture"
},
{
"paragraph_id": 43,
"text": "The Cornish language and culture influenced the emergence of particular pronunciations and grammar not used elsewhere in England. The Cornish dialect is spoken to varying degrees; however, someone speaking in broad Cornish may be practically unintelligible to one not accustomed to it. Cornish dialect has generally declined, as in most places it is now little more than a regional accent and grammatical differences have been eroded over time. Marked differences in vocabulary and usage still exist between the eastern and western parts of Cornwall.",
"title": "Culture"
},
{
"paragraph_id": 44,
"text": "Saint Piran's Flag is the national flag and ancient banner of Cornwall, and an emblem of the Cornish people. The banner of Saint Piran is a white cross on a black background (in terms of heraldry 'sable, a cross argent'). According to legend Saint Piran adopted these colours from seeing the white tin in the black coals and ashes during his discovery of tin. The Cornish flag is an exact reverse of the former Breton black cross national flag and is known by the same name \"Kroaz Du\".",
"title": "Culture"
},
{
"paragraph_id": 45,
"text": "Since the 19th century, Cornwall, with its unspoilt maritime scenery and strong light, has sustained a vibrant visual art scene of international renown. Artistic activity within Cornwall was initially centred on the art-colony of Newlyn, most active at the turn of the 20th century. This Newlyn School is associated with the names of Stanhope Forbes, Elizabeth Forbes, Norman Garstin and Lamorna Birch. Modernist writers such as D. H. Lawrence and Virginia Woolf lived in Cornwall between the wars, and Ben Nicholson, the painter, having visited in the 1920s came to live in St Ives with his then wife, the sculptor Barbara Hepworth, at the outbreak of the Second World War. They were later joined by the Russian emigrant Naum Gabo, and other artists. These included Peter Lanyon, Terry Frost, Patrick Heron, Bryan Wynter and Roger Hilton. St Ives also houses the Leach Pottery, where Bernard Leach, and his followers championed Japanese inspired studio pottery. Much of this modernist work can be seen in Tate St Ives. The Newlyn Society and Penwith Society of Arts continue to be active, and contemporary visual art is documented in a dedicated online journal.",
"title": "Culture"
},
{
"paragraph_id": 46,
"text": "Local television programmes are provided by BBC South West & ITV West Country. Radio programmes are produced by BBC Radio Cornwall in Truro for the entire county, Heart West, Source FM for the Falmouth and Penryn areas, Coast FM for west Cornwall, Radio St Austell Bay for the St Austell area, NCB Radio for north Cornwall & Pirate FM.",
"title": "Culture"
},
{
"paragraph_id": 47,
"text": "Cornwall has a folk music tradition that has survived into the present and is well known for its unusual folk survivals such as Mummers Plays, the Furry Dance in Helston played by the famous Helston Town Band, and Obby Oss in Padstow.",
"title": "Culture"
},
{
"paragraph_id": 48,
"text": "Newlyn is home to a food and music festival that hosts live music, cooking demonstrations, and displays of locally caught fish.",
"title": "Culture"
},
{
"paragraph_id": 49,
"text": "As in other former mining districts of Britain, male voice choirs and brass bands, such as Brass on the Grass concerts during the summer at Constantine, are still very popular in Cornwall. Cornwall also has around 40 brass bands, including the six-times National Champions of Great Britain, Camborne Youth Band, and the bands of Lanner and St Dennis.",
"title": "Culture"
},
{
"paragraph_id": 50,
"text": "Cornish players are regular participants in inter-Celtic festivals, and Cornwall itself has several inter-Celtic festivals such as Perranporth's Lowender Peran folk festival.",
"title": "Culture"
},
{
"paragraph_id": 51,
"text": "Contemporary musician Richard D. James (also known as Aphex Twin) grew up in Cornwall, as did Luke Vibert and Alex Parks, winner of Fame Academy 2003. Roger Taylor, the drummer from the band Queen was also raised in the county, and currently lives not far from Falmouth. The American singer-songwriter Tori Amos now resides predominantly in North Cornwall not far from Bude with her family. The lutenist, composer and festival director Ben Salfield lives in Truro. Mick Fleetwood of Fleetwood Mac was born in Redruth.",
"title": "Culture"
},
{
"paragraph_id": 52,
"text": "Cornwall's rich heritage and dramatic landscape have inspired numerous writers.",
"title": "Culture"
},
{
"paragraph_id": 53,
"text": "Sir Arthur Quiller-Couch, author of many novels and works of literary criticism, lived in Fowey: his novels are mainly set in Cornwall. Daphne du Maurier lived at Menabilly near Fowey and many of her novels had Cornish settings: The Loving Spirit, Jamaica Inn, Rebecca, Frenchman's Creek, The King's General (partially), My Cousin Rachel, The House on the Strand and Rule Britannia. She is also noted for writing Vanishing Cornwall. Cornwall provided the inspiration for The Birds, one of her terrifying series of short stories, made famous as a film by Alfred Hitchcock.",
"title": "Culture"
},
{
"paragraph_id": 54,
"text": "Conan Doyle's The Adventure of the Devil's Foot featuring Sherlock Holmes is set in Cornwall. Winston Graham's series Poldark, Kate Tremayne's Adam Loveday series, Susan Cooper's novels Over Sea, Under Stone and Greenwitch, and Mary Wesley's The Camomile Lawn are all set in Cornwall. Writing under the pseudonym of Alexander Kent, Douglas Reeman sets parts of his Richard Bolitho and Adam Bolitho series in the Cornwall of the late 18th and the early 19th centuries, particularly in Falmouth. Gilbert K. Chesterton placed the action of many of his stories there.",
"title": "Culture"
},
{
"paragraph_id": 55,
"text": "Medieval Cornwall is the setting of the trilogy by Monica Furlong, Wise Child, Juniper and Colman, as well as part of Charles Kingsley's Hereward the Wake.",
"title": "Culture"
},
{
"paragraph_id": 56,
"text": "Hammond Innes's novel, The Killer Mine; Charles de Lint's novel The Little Country; and Chapters 24–25 of J. K. Rowling's Harry Potter and the Deathly Hallows take place in Cornwall (Shell Cottage, on the beach outside the fictional village of Tinworth).",
"title": "Culture"
},
{
"paragraph_id": 57,
"text": "David Cornwell, who wrote espionage novels under the name John le Carré, lived and worked in Cornwall. Nobel Prize-winning novelist William Golding was born in St Columb Minor in 1911, and returned to live near Truro from 1985 until his death in 1993. D. H. Lawrence spent a short time living in Cornwall. Rosamunde Pilcher grew up in Cornwall, and several of her books take place there.",
"title": "Culture"
},
{
"paragraph_id": 58,
"text": "St. Michael's Mount in Cornwall (under the fictional name of Mount Polbearne) is the setting of the Little Beach Street Bakery series by Jenny Colgan, who spent holidays in Cornwall as a child. The book series includes Little Beach Street Bakery (2014), Summer at Little Beach Street Bakery (2015), Christmas at Little Beach Street Bakery (2016), and Sunrise by the Sea (2021).",
"title": "Culture"
},
{
"paragraph_id": 59,
"text": "In the Paddington Bear novels by Michael Bond the title character is said to have landed at an unspecified port in Cornwall having travelled in a lifeboat aboard a cargo ship from darkest Peru. From here he travels to London on a train and eventually arrives at Paddington Station.",
"title": "Culture"
},
{
"paragraph_id": 60,
"text": "Enid Blyton's 1953 novel Five Go Down to the Sea (the twelfth book in The Famous Five series) is set in Cornwall, near the fictional coastal village of Tremannon.",
"title": "Culture"
},
{
"paragraph_id": 61,
"text": "The late Poet Laureate Sir John Betjeman was famously fond of Cornwall and it featured prominently in his poetry. He is buried in the churchyard at St Enodoc's Church, Trebetherick. Charles Causley, the poet, was born in Launceston and is perhaps the best known of Cornish poets. Jack Clemo and the scholar A. L. Rowse were also notable Cornishmen known for their poetry; The Rev. R. S. Hawker of Morwenstow wrote some poetry which was very popular in the Victorian period. The Scottish poet W. S. Graham lived in West Cornwall from 1944 until his death in 1986.",
"title": "Culture"
},
{
"paragraph_id": 62,
"text": "The poet Laurence Binyon wrote \"For the Fallen\" (first published in 1914) while sitting on the cliffs between Pentire Point and The Rumps and a stone plaque was erected in 2001 to commemorate the fact. The plaque bears the inscription \"FOR THE FALLEN / Composed on these cliffs, 1914\". The plaque also bears below this the fourth stanza (sometimes referred to as \"The Ode\") of the poem:",
"title": "Culture"
},
{
"paragraph_id": 63,
"text": "Cornwall produced a substantial number of passion plays such as the Ordinalia during the Middle Ages. Many are still extant, and provide valuable information about the Cornish language. See also Cornish literature",
"title": "Culture"
},
{
"paragraph_id": 64,
"text": "Colin Wilson, a prolific writer who is best known for his debut work The Outsider (1956) and for The Mind Parasites (1967), lived in Gorran Haven, a small village on the southern Cornish coast. The writer D. M. Thomas was born in Redruth but lived and worked in Australia and the United States before returning to his native Cornwall. He has written novels, poetry, and other works, including translations from Russian.",
"title": "Culture"
},
{
"paragraph_id": 65,
"text": "Thomas Hardy's drama The Queen of Cornwall (1923) is a version of the Tristan story; the second act of Richard Wagner's opera Tristan und Isolde takes place in Cornwall, as do Gilbert and Sullivan's operettas The Pirates of Penzance and Ruddigore.",
"title": "Culture"
},
{
"paragraph_id": 66,
"text": "Clara Vyvyan was the author of various books about many aspects of Cornish life such as Our Cornwall. She once wrote: \"The Loneliness of Cornwall is a loneliness unchanged by the presence of men, its freedoms a freedom inexpressible by description or epitaph. Your cannot say Cornwall is this or that. Your cannot describe it in a word or visualise it in a second. You may know the country from east to west and sea to sea, but if you close your eyes and think about it no clear-cut image rises before you. In this quality of changefulness have we possibly surprised the secret of Cornwall's wild spirit—in this intimacy the essence of its charm? Cornwall!\". A level of Tomb Raider: Legend, a game dealing with Arthurian Legend, takes place in Cornwall at a museum above King Arthur's tomb. The adventure game The Lost Crown is set in the fictional town of Saxton, which uses the Cornish settlements of Polperro, Talland and Looe as its model.",
"title": "Culture"
},
{
"paragraph_id": 67,
"text": "The fairy tale Jack the Giant Killer takes place in Cornwall.",
"title": "Culture"
},
{
"paragraph_id": 68,
"text": "The Mousehole Cat, a children's book written by Antonia Barber and illustrated by Nicola Bayley, is set in the Cornish village Mousehole and based on the legend of Tom Bawcock and the continuing tradition of Tom Bawcock's Eve.",
"title": "Culture"
},
{
"paragraph_id": 69,
"text": "The main sports played in Cornwall are rugby, football and cricket. Athletes from Truro have done well in Olympic and Commonwealth Games fencing, winning several medals. Surfing is popular, particularly with tourists, thousands of whom take to the water throughout the summer months. Some towns and villages have bowling clubs, and a wide variety of British sports are played throughout Cornwall. Cornwall is also one of the few places in England where shinty is played; the English Shinty Association is based in Penryn.",
"title": "Culture"
},
{
"paragraph_id": 70,
"text": "The Cornwall County Cricket Club plays as one of the minor counties of English cricket.",
"title": "Culture"
},
{
"paragraph_id": 71,
"text": "Truro, and all of the towns and some villages have football clubs belonging to the Cornwall County Football Association, and some clubs have teams competing higher within the English football league pyramid. Of these, the highest ranked — by two flights — is Truro City F.C., who will be playing in the National League South in the 2023–24 season. Other notable Cornish teams include Mousehole A.F.C., Helston Athletic F.C., and Falmouth Town F.C.",
"title": "Culture"
},
{
"paragraph_id": 72,
"text": "Viewed as an \"important identifier of ethnic affiliation\", rugby union has become a sport strongly tied to notions of Cornishness. and since the 20th century, rugby union has emerged as one of the most popular spectator and team sports in Cornwall (perhaps the most popular), with professional Cornish rugby footballers being described as a \"formidable force\", \"naturally independent, both in thought and deed, yet paradoxically staunch English patriots whose top players have represented England with pride and passion\".",
"title": "Culture"
},
{
"paragraph_id": 73,
"text": "In 1985, sports journalist Alan Gibson made a direct connection between the love of rugby in Cornwall and the ancient parish games of hurling and wrestling that existed for centuries before rugby officially began. Among Cornwall's native sports are a distinctive form of Celtic wrestling related to Breton wrestling, and Cornish hurling, a kind of mediaeval football played with a silver ball (distinct from Irish Hurling). Cornish Wrestling is Cornwall's oldest sport and as Cornwall's native tradition it has travelled the world to places like Victoria, Australia and Grass Valley, California following the miners and gold rushes. Cornish hurling now takes place at St. Columb Major, St Ives, and less frequently at Bodmin.",
"title": "Culture"
},
{
"paragraph_id": 74,
"text": "In rugby league, Cornwall R.L.F.C., founded in 2021, will represent the county in the professional league system. The semi-pro club will start in the third tier RFL League 1. At an amateur level, the county is represented by Cornish Rebels.",
"title": "Culture"
},
{
"paragraph_id": 75,
"text": "Due to its long coastline, various maritime sports are popular in Cornwall, notably sailing and surfing. International events in both are held in Cornwall. Cornwall hosted the Inter-Celtic Watersports Festival in 2006. Surfing in particular is very popular, as locations such as Bude and Newquay offer some of the best surf in the UK. Pilot gig rowing has been popular for many years and the World championships takes place annually on the Isles of Scilly. On 2 September 2007, 300 surfers at Polzeath beach set a new world record for the highest number of surfers riding the same wave as part of the Global Surf Challenge and part of a project called Earthwave to raise awareness about global warming.",
"title": "Culture"
},
{
"paragraph_id": 76,
"text": "As its population is comparatively small, and largely rural, Cornwall's contribution to British national sport in the United Kingdom has been limited; the county's greatest successes have come in fencing. In 2014, half of the men's GB team fenced for Truro Fencing Club, and 3 Truro fencers appeared at the 2012 Olympics.",
"title": "Culture"
},
{
"paragraph_id": 77,
"text": "Cornwall has a strong culinary heritage. Surrounded on three sides by the sea amid fertile fishing grounds, Cornwall naturally has fresh seafood readily available; Newlyn is the largest fishing port in the UK by value of fish landed, and is known for its wide range of restaurants. Television chef Rick Stein has long operated a fish restaurant in Padstow for this reason, and Jamie Oliver chose to open his second restaurant, Fifteen, in Watergate Bay near Newquay. MasterChef host and founder of Smiths of Smithfield, John Torode, in 2007 purchased Seiners in Perranporth. One famous local fish dish is Stargazy pie, a fish-based pie in which the heads of the fish stick through the piecrust, as though \"star-gazing\". The pie is cooked as part of traditional celebrations for Tom Bawcock's Eve, but is not generally eaten at any other time.",
"title": "Culture"
},
{
"paragraph_id": 78,
"text": "Cornwall is perhaps best known though for its pasties, a savoury dish made with pastry. Today's pasties usually contain a filling of beef steak, onion, potato and swede with salt and white pepper, but historically pasties had a variety of different fillings. \"Turmut, 'tates and mate\" (i.e. \"Turnip, potatoes and meat\", turnip being the Cornish and Scottish term for swede, itself an abbreviation of 'Swedish Turnip', the British term for rutabaga) describes a filling once very common. For instance, the licky pasty contained mostly leeks, and the herb pasty contained watercress, parsley, and shallots. Pasties are often locally referred to as oggies. Historically, pasties were also often made with sweet fillings such as jam, apple and blackberry, plums or cherries. The wet climate and relatively poor soil of Cornwall make it unsuitable for growing many arable crops. However, it is ideal for growing the rich grass required for dairying, leading to the production of Cornwall's other famous export, clotted cream. This forms the basis for many local specialities including Cornish fudge and Cornish ice cream. Cornish clotted cream has Protected Geographical Status under EU law, and cannot be made anywhere else. Its principal manufacturer is A. E. Rodda & Son of Scorrier.",
"title": "Culture"
},
{
"paragraph_id": 79,
"text": "Local cakes and desserts include Saffron cake, Cornish heavy (hevva) cake, Cornish fairings biscuits, figgy 'obbin, Cream tea and whortleberry pie.",
"title": "Culture"
},
{
"paragraph_id": 80,
"text": "There are also many types of beers brewed in Cornwall—those produced by Sharp's Brewery, Skinner's Brewery, Keltek Brewery and St Austell Brewery are the best known—including stouts, ales and other beer types. There is some small scale production of wine, mead and cider.",
"title": "Culture"
},
{
"paragraph_id": 81,
"text": "Cornwall is recognised by Cornish and Celtic political groups as one of six Celtic nations, alongside Brittany, Ireland, the Isle of Man, Scotland and Wales. (The Isle of Man Government and the Welsh Government also recognise Asturias and Galicia.) Cornwall is represented, as one of the Celtic nations, at the Festival Interceltique de Lorient, an annual celebration of Celtic culture held in Brittany.",
"title": "Politics and administration"
},
{
"paragraph_id": 82,
"text": "Cornwall Council consider Cornwall's unique cultural heritage and distinctiveness to be one of the area's major assets. They see Cornwall's language, landscape, Celtic identity, political history, patterns of settlement, maritime tradition, industrial heritage, and non-conformist tradition, to be among the features making up its \"distinctive\" culture. However, it is uncertain exactly how many of the people living in Cornwall consider themselves to be Cornish; results from different surveys (including the national census) have varied. In the 2001 census, 7 per cent of people in Cornwall identified themselves as Cornish, rather than British or English. However, activists have argued that this underestimated the true number as there was no explicit \"Cornish\" option included in the official census form. Subsequent surveys have suggested that as many as 44 per cent identify as Cornish. Many people in Cornwall say that this issue would be resolved if a Cornish option became available on the census. The question and content recommendations for the 2011 census provided an explanation of the process of selecting an ethnic identity which is relevant to the understanding of the often quoted figure of 37,000 who claimed Cornish identity. The 2021 census found that 17% of people in Cornwall identified as being Cornish (89,000), with 14% of people in Cornwall identifying as Cornish-only (80,000). Again there was no tick-box provided, and \"Cornish\" had to be written-in as \"Other\".",
"title": "Politics and administration"
},
{
"paragraph_id": 83,
"text": "On 24 April 2014 it was announced that Cornish people have been granted minority status under the European Framework Convention for the Protection of National Minorities.",
"title": "Politics and administration"
},
{
"paragraph_id": 84,
"text": "Cornwall forms two local government districts; Cornwall and the Isles of Scilly. The district of Cornwall is governed by Cornwall Council, a unitary authority based at Lys Kernow in Truro, and the Council of the Isles of Scilly governs the archipelago from Hugh Town. The Crown Court is based at the Courts of Justice in Truro. Magistrates' Courts are found in Truro (but at a different location to the Crown Court) and at Bodmin.",
"title": "Politics and administration"
},
{
"paragraph_id": 85,
"text": "The Isles of Scilly form part of the ceremonial county of Cornwall, and have, at times, been served by the same county administration. Since 1890 they have been administered by their own unitary authority, the Council of the Isles of Scilly. They are grouped with Cornwall for other administrative purposes, such as the National Health Service and Devon and Cornwall Police.",
"title": "Politics and administration"
},
{
"paragraph_id": 86,
"text": "Before reorganisation on 1 April 2009, council functions throughout the rest of Cornwall were organised in two tiers, with Cornwall County Council and district councils for its six districts, Caradon, Carrick, Kerrier, North Cornwall, Penwith, and Restormel. While projected to streamline services, cut red tape and save around £17 million a year, the reorganisation was met with wide opposition, with a poll in 2008 showing 89% disapproval from Cornish residents.",
"title": "Politics and administration"
},
{
"paragraph_id": 87,
"text": "The first elections for the unitary authority were held on 4 June 2009. The council has 123 seats; the largest party (in 2017) is the Conservatives, with 46 seats. The Liberal Democrats are the second-largest party, with 37 seats, with the Independents the third-largest grouping with 30.",
"title": "Politics and administration"
},
{
"paragraph_id": 88,
"text": "Before the creation of the unitary council, the former county council had 82 seats, the majority of which were held by the Liberal Democrats, elected at the 2005 county council elections. The six former districts had a total of 249 council seats, and the groups with greatest numbers of councillors were Liberal Democrats, Conservatives and Independents.",
"title": "Politics and administration"
},
{
"paragraph_id": 89,
"text": "Following a review by the Boundary Commission for England taking effect at the 2010 general election, Cornwall is divided into six county constituencies to elect MPs to the House of Commons of the United Kingdom.",
"title": "Politics and administration"
},
{
"paragraph_id": 90,
"text": "Before the 2010 boundary changes Cornwall had five constituencies, all of which were won by Liberal Democrats at the 2005 general election. In the 2010 general election Liberal Democrat candidates won three constituencies and Conservative candidates won three other constituencies. At the 2015 general election all six Cornish seats were won by Conservative candidates; all these Conservative MPs retained their seats at the 2017 general election, and the Conservatives won all six constituencies again at the 2019 general election.",
"title": "Politics and administration"
},
{
"paragraph_id": 91,
"text": "Until 1832, Cornwall had 44 MPs—more than any other county—reflecting the importance of tin to the Crown. Most of the increase in numbers of MPs came between 1529 and 1584 after which there was no change until 1832.",
"title": "Politics and administration"
},
{
"paragraph_id": 92,
"text": "Although Cornwall does not have a designated government department, in 2007 while Leader of the Opposition David Cameron created a Shadow Secretary of State for Cornwall. The position was not made into a formal UK Cabinet position when Cameron entered government following the 2010 United Kingdom general election",
"title": "Politics and administration"
},
{
"paragraph_id": 93,
"text": "Cornish nationalists have organised into two political parties: Mebyon Kernow, formed in 1951, and the Cornish Nationalist Party. In addition to the political parties, there are various interest groups such as the Revived Cornish Stannary Parliament and the Celtic League. The Cornish Constitutional Convention was formed in 2000 as a cross-party organisation including representatives from the private, public and voluntary sectors to campaign for the creation of a Cornish Assembly, along the lines of the National Assembly for Wales, Northern Ireland Assembly and the Scottish Parliament. Between 5 March 2000 and December 2001, the campaign collected the signatures of 41,650 Cornish residents endorsing the call for a devolved assembly, along with 8,896 signatories from outside Cornwall. The resulting petition was presented to the Prime Minister, Tony Blair.",
"title": "Politics and administration"
},
{
"paragraph_id": 94,
"text": "Cornwall is one of the poorest parts of the United Kingdom in terms of per capita GDP and average household incomes. At the same time, parts of the county, especially on the coast, have high house prices, driven up by demand from relatively wealthy retired people and second-home owners. The GVA per head was 65% of the UK average for 2004. The GDP per head for Cornwall and the Isles of Scilly was 79.2% of the EU-27 average for 2004, the UK per head average was 123.0%. In 2011, the latest available figures, Cornwall's (including the Isles of Scilly) measure of wealth was 64% of the European average per capita.",
"title": "Economy"
},
{
"paragraph_id": 95,
"text": "Historically mining of tin (and later also of copper) was important in the Cornish economy. The first reference to this appears to be by Pytheas: see above. Julius Caesar was the last classical writer to mention the tin trade, which appears to have declined during the Roman occupation. The tin trade revived in the Middle Ages and its importance to the Kings of England resulted in certain privileges being granted to the tinners; the Cornish rebellion of 1497 is attributed to grievances of the tin miners. In the mid-19th century, however, the tin trade again fell into decline. Other primary sector industries that have declined since the 1960s include china clay production, fishing and farming.",
"title": "Economy"
},
{
"paragraph_id": 96,
"text": "Today, the Cornish economy depends heavily on its tourist industry, which makes up around a quarter of the economy. The official measures of deprivation and poverty at district and 'sub-ward' level show that there is great variation in poverty and prosperity in Cornwall with some areas among the poorest in England and others among the top half in prosperity. For example, the ranking of 32,482 sub-wards in England in the index of multiple deprivation (2006) ranged from 819th (part of Penzance East) to 30,899th (part of Saltash Burraton in Caradon), where the lower number represents the greater deprivation.",
"title": "Economy"
},
{
"paragraph_id": 97,
"text": "Cornwall was one of two UK areas designated as 'less developed regions' by the European Union, which, prior to Brexit, meant the area qualified for EU Cohesion Policy grants. It was granted Objective 1 status by the European Commission for 2000 to 2006, followed by further rounds of funding known as 'Convergence Funding' from 2007 to 2013 and 'Growth Programme' for 2014 to 2020.",
"title": "Economy"
},
{
"paragraph_id": 98,
"text": "Cornwall has a tourism-based seasonal economy which is estimated to contribute up to 24% of Cornwall's gross domestic product. In 2011 tourism brought £1.85 billion into the Cornish economy. Cornwall's unique culture, spectacular landscape and mild climate make it a popular tourist destination, despite being somewhat distant from the United Kingdom's main centres of population. Surrounded on three sides by the English Channel and Celtic Sea, Cornwall has many miles of beaches and cliffs; the South West Coast Path follows a complete circuit of both coasts. Other tourist attractions include moorland, country gardens, museums, historic and prehistoric sites, and wooded valleys. Five million tourists visit Cornwall each year, mostly drawn from within the UK. Visitors to Cornwall are served by the airport at Newquay, whilst private jets, charters and helicopters are also served by Perranporth airfield; nightsleeper and daily rail services run between Cornwall, London and other regions of the UK.",
"title": "Economy"
},
{
"paragraph_id": 99,
"text": "Newquay and Porthtowan are popular destinations for surfers. In recent years, the Eden Project near St Austell has been a major financial success, drawing one in eight of Cornwall's visitors in 2004.",
"title": "Economy"
},
{
"paragraph_id": 100,
"text": "In the summer of 2018, due to the recognition of its beaches and weather through social media and the marketing of travel companies, Cornwall received about 20 per cent more visitors than the usual 4.5 million figure. The sudden rise and demand of tourism in Cornwall caused multiple traffic and safety issues in coastal areas.",
"title": "Economy"
},
{
"paragraph_id": 101,
"text": "In October 2021, Cornwall was longlisted for the UK City of Culture 2025, but failed to make the March 2022 shortlist.",
"title": "Economy"
},
{
"paragraph_id": 102,
"text": "Other industries include fishing, although this has been significantly re-structured by EU fishing policies (as of 2010 the Southwest Handline Fishermen's Association has started to revive the fishing industry).",
"title": "Economy"
},
{
"paragraph_id": 103,
"text": "Agriculture, once an important part of the Cornish economy, has declined significantly relative to other industries. However, there is still a strong dairy industry, with products such as Cornish clotted cream.",
"title": "Economy"
},
{
"paragraph_id": 104,
"text": "Mining of tin and copper was also an industry, but today the derelict mine workings survive only as a World Heritage Site. However, the Camborne School of Mines, which was relocated to Penryn in 2004, is still a world centre of excellence in the field of mining and applied geology and the grant of World Heritage status has attracted funding for conservation and heritage tourism. China clay extraction has also been an important industry in the St Austell area, but this sector has been in decline, and this, coupled with increased mechanisation, has led to a decrease in employment in this sector, although the industry still employs around 2,133 people in Cornwall, and generates over £80 million to the local economy.",
"title": "Economy"
},
{
"paragraph_id": 105,
"text": "In March 2016, a Canadian company, Strongbow Exploration, had acquired, from administration, a 100% interest in the South Crofty tin mine and the associated mineral rights in Cornwall with the aim of reopening the mine and bringing it back to full production. Work is currently ongoing to build a water filtration plant in order to dewater the mine.",
"title": "Economy"
},
{
"paragraph_id": 106,
"text": "Cornwall is the landing point for twenty-two of the world's fastest high-speed undersea and transatlantic fibre optic cables, making Cornwall an important hub within Europe's Internet infrastructure. The Superfast Cornwall project completed in 2015, and saw 95% of Cornish houses and businesses connected to a fibre-based broadband network, with over 90% of properties able to connect with speeds above 24 Mbit/s.",
"title": "Economy"
},
{
"paragraph_id": 107,
"text": "The county's newest industry is aviation: Newquay Airport is the home of a growing business park with Enterprise Zone status, known as Aerohub. Also a space launch facility, Spaceport Cornwall, has been established at Newquay, in partnership with Goonhilly satellite tracking station near Helston in south Cornwall.",
"title": "Economy"
},
{
"paragraph_id": 108,
"text": "Cornwall's population was 537,400 in the 2011 census, with a population density of 144 people per square kilometre, ranking it 40th and 41st, respectively, among the 47 counties of England. Cornwall's population was 95.7% White British and has a relatively high rate of population growth. At 11.2% in the 1980s and 5.3% in the 1990s, it had the fifth-highest population growth rate of the counties of England. The natural change has been a small population decline, and the population increase is due to inward migration into Cornwall. According to the 1991 census, the population was 469,800.",
"title": "Demographics"
},
{
"paragraph_id": 109,
"text": "Cornwall has a relatively high retired population, with 22.9% of pensionable age, compared with 20.3% for the United Kingdom as a whole. This may be due partly to Cornwall's rural and coastal geography increasing its popularity as a retirement location, and partly to outward migration of younger residents to more economically diverse areas.",
"title": "Demographics"
},
{
"paragraph_id": 110,
"text": "Over 10,000 students attend Cornwall's two universities, Falmouth University and the University of Exeter (including Camborne School of Mines). Falmouth University is a specialist public university for the creative industries and arts, while the University Of Exeter has two campuses in Cornwall, Truro and Penryn, the latter shared with Falmouth. Penryn campus is home to educational departments such as the rapidly growing Centre for Ecology and Conservation (CEC), the Environment and Sustainability Institute (ESI), and the Institute of Cornish Studies.",
"title": "Education"
},
{
"paragraph_id": 111,
"text": "Cornwall has a comprehensive education system, with 31 state and eight independent secondary schools. There are three further education colleges: Truro and Penwith College, Cornwall College and Callywith College which opened in September 2017. The Isles of Scilly only has one school, while the former Restormel district has the highest school population, and school year sizes are around 200, with none above 270. Before the introduction of comprehensive schools there were a number of grammar schools and secondary modern schools, e.g. the schools that later became Sir James Smith's School and Wadebridge School. There are also primary schools in many villages and towns: e.g. St Mabyn Church of England Primary School.",
"title": "Education"
}
] | Cornwall is a ceremonial county in South West England. It is recognised as one of the Celtic nations and is the homeland of the Cornish people. The county is bordered by the Atlantic Ocean to the north and west, Devon to the east, and the English Channel to the south. The largest settlement is Falmouth, and the county town is the city of Truro. The county is rural, with an area of 1,375 square miles (3,562 km2) and population of 568,210. After Falmouth (23,061), the largest settlements are Newquay (20,342), St Austell (19,958), and Truro (18,766). For local government purposes most of Cornwall is a unitary authority area, with the Isles of Scilly having a unique local authority. The Cornish nationalist movement disputes the constitutional status of Cornwall and seeks greater autonomy within the United Kingdom. Cornwall is the westernmost part of the South West Peninsula. Its coastline is characterised by steep cliffs and, to the south, several rias, including those at the mouths of the rivers Fal and Fowey. It includes the southernmost point on Great Britain, Lizard Point, and forms a large part of the Cornwall National Landscape. The national landscape also includes Bodmin Moor, an upland outcrop of the Cornubian batholith granite formation. The county contains many short rivers; the longest is the Tamar, which forms the border with Devon. Cornwall had a minor Roman presence, and later formed part of the Brittonic kingdom of Dumnonia. From the 7th century, the Britons in the South West increasingly came into conflict with the expanding Anglo-Saxon kingdom of Wessex, eventually being pushed west of the Tamar; by the Norman Conquest Cornwall was administered as part of England, though it retained its own culture. The remainder of the Middle Ages and Early Modern Period were relatively settled, with Cornwall developing its tin mining industry and becoming a duchy in 1337. During the Industrial Revolution, the tin and copper mines were expanded and then declined, with china clay extraction becoming a major industry. Railways were built, leading to a growth of tourism in the 20th century. The Cornish language became extinct as a living community language at the end of the 18th century, but is now being revived. | 2001-10-12T19:43:39Z | 2023-12-30T01:14:19Z | [
"Template:Infobox English county",
"Template:Clarify",
"Template:Dead link",
"Template:IPA",
"Template:Refend",
"Template:Curlie",
"Template:SW England",
"Template:Lang-kw",
"Template:Efn",
"Template:Main",
"Template:Blockquote",
"Template:Cite web",
"Template:Cite EB1911",
"Template:AONBs in England",
"Template:Subject bar",
"Template:Use dmy dates",
"Template:Lang",
"Template:Original research inline",
"Template:Cite journal",
"Template:Citation",
"Template:Library resources about",
"Template:Short description",
"Template:Use British English",
"Template:See also",
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:Geographic location",
"Template:Cornwall",
"Template:Convert",
"Template:For timeline",
"Template:Notelist",
"Template:Cite book",
"Template:Cite news",
"Template:Refbegin",
"Template:ISBN",
"Template:Celts",
"Template:Authority control",
"Template:IPAc-en",
"Template:Further",
"Template:Circa",
"Template:Harvnb",
"Template:England counties",
"Template:Unitary authorities of England",
"Template:About",
"Template:Redirect-distinguish",
"Template:As of",
"Template:Webarchive"
] | https://en.wikipedia.org/wiki/Cornwall |
5,649 | Constitutional monarchy | Constitutional monarchy, also known as limited monarchy, parliamentary monarchy or democratic monarchy, is a form of monarchy in which the monarch exercises their authority in accordance with a constitution and is not alone in making decisions. Constitutional monarchies differ from absolute monarchies (in which a monarch is the only decision-maker) in that they are bound to exercise powers and authorities within limits prescribed by an established legal framework.
Constitutional monarchies range from countries such as Liechtenstein, Monaco, Morocco, Jordan, Kuwait, Bahrain and Bhutan, where the constitution grants substantial discretionary powers to the sovereign, to countries such as the United Kingdom and other Commonwealth realms, the Netherlands, Spain, Belgium, Norway, Sweden, Lesotho, Malaysia, Thailand, Cambodia, and Japan, where the monarch retains significantly less, if any, personal discretion in the exercise of their authority. At a surface level, this distinction may be hard to establish, with numerous liberal democracies restraining monarchical power in practice rather than in written law; for example, the constitution of the United Kingdom affords the monarch substantial, if limited, legislative and executive powers.
Constitutional monarchy may refer to a system in which the monarch acts as a non-party political head of state under the constitution, whether codified or uncodified. While most monarchs may hold formal authority and the government may legally operate in the monarch's name, in the form typical in Europe the monarch no longer personally sets public policy or chooses political leaders. Political scientist Vernon Bogdanor, paraphrasing Thomas Macaulay, has defined a constitutional monarch as "A sovereign who reigns but does not rule".
In addition to acting as a visible symbol of national unity, a constitutional monarch may hold formal powers such as dissolving parliament or giving royal assent to legislation. However, such powers generally may only be exercised strictly in accordance with either written constitutional principles or unwritten constitutional conventions, rather than any personal political preferences of the sovereign. In The English Constitution, British political theorist Walter Bagehot identified three main political rights which a constitutional monarch may freely exercise: the right to be consulted, the right to encourage, and the right to warn. Many constitutional monarchies still retain significant authorities or political influence, however, such as through certain reserve powers, and may also play an important political role.
The Commonwealth realms share the same person as hereditary monarch under the Westminster system of constitutional governance. Two constitutional monarchies – Malaysia and Cambodia – are elective monarchies, in which the ruler is periodically selected by a small electoral college.
The concept of a semi-constitutional monarchy identifies constitutional monarchies in which the monarch retains substantial powers, on a par with a president in a presidential or semi-presidential system. As a result, constitutional monarchies where the monarch has a largely ceremonial role may also be referred to as "parliamentary monarchies" to differentiate them from semi-constitutional monarchies. Strongly limited constitutional monarchies, such as those of the United Kingdom and Australia, have been referred to as crowned republics by writers H. G. Wells and Glenn Patmore.
The oldest constitutional monarchy dating back to ancient times was that of the Hittites. They were an ancient Anatolian people who lived during the Bronze Age and whose king had to share his authority with an assembly, called the Panku, which was the equivalent of a modern-day deliberative assembly or legislature. Members of the Panku came from scattered noble families who acted as representatives of their subjects in a subordinate, federal-type arrangement.
In the Kingdom of England, the Glorious Revolution of 1688 furthered the constitutional monarchy, restricted by laws such as the Bill of Rights 1689 and the Act of Settlement 1701, although the first form of constitution was enacted with the Magna Carta of 1215. At the same time, in Scotland, the Convention of Estates enacted the Claim of Right Act 1689, which placed similar limits on the Scottish monarchy.
Queen Anne was the last monarch to veto an Act of Parliament when, on 11 March 1708, she blocked the Scottish Militia Bill. However, Hanoverian monarchs continued to selectively dictate government policies. For instance, King George III constantly blocked Catholic Emancipation, eventually precipitating the resignation of William Pitt the Younger as prime minister in 1801. The sovereign's influence on the choice of prime minister gradually declined over this period. King William IV was the last monarch to dismiss a prime minister, when in 1834 he removed Lord Melbourne as a result of Melbourne's choice of Lord John Russell as Leader of the House of Commons. Queen Victoria was the last monarch to exercise real personal power, but this diminished over the course of her reign. In 1839, she became the last sovereign to keep a prime minister in power against the will of Parliament when the Bedchamber crisis resulted in the retention of Lord Melbourne's administration. By the end of her reign, however, she could do nothing to block the unacceptable (to her) premierships of William Gladstone, although she still exercised power in appointments to the Cabinet. For example, in 1886 she vetoed Gladstone's choice of Hugh Childers as War Secretary in favour of Sir Henry Campbell-Bannerman.
Today, the role of the British monarch is by convention effectively ceremonial. The British Parliament and the Government – chiefly in the office of Prime Minister of the United Kingdom – exercise their powers under "royal (or Crown) prerogative": on behalf of the monarch and through powers still formally possessed by the monarch.
No person may accept significant public office without swearing an oath of allegiance to the King. With few exceptions, the monarch is bound by constitutional convention to act on the advice of the government.
Poland developed the first constitution for a monarchy in continental Europe, with the Constitution of 3 May 1791; it was the second single-document constitution in the world, just after the first republican Constitution of the United States. Constitutional monarchy also occurred briefly in the early years of the French Revolution, but much more widely afterwards. Napoleon Bonaparte is considered the first monarch to proclaim himself an embodiment of the nation, rather than a divinely appointed ruler; this interpretation of monarchy is germane to continental constitutional monarchies. German philosopher Georg Wilhelm Friedrich Hegel, in his work Elements of the Philosophy of Right (1820), gave the concept a philosophical justification that concurred with evolving contemporary political theory and the Protestant Christian view of natural law. Hegel's forecast of a constitutional monarch with very limited powers whose function is to embody the national character and provide constitutional continuity in times of emergency was reflected in the development of constitutional monarchies in Europe and Japan.
There exist at least two different types of constitutional monarchies in the modern world – executive and ceremonial. In executive monarchies, the monarch wields significant (though not absolute) power. The monarchy under this system of government is a powerful political (and social) institution. By contrast, in ceremonial monarchies, the monarch holds little or no actual power or direct political influence, though they frequently have a great deal of social and cultural influence.
Ceremonial and executive monarchy should not be confused with democratic and non-democratic monarchical systems. For example, in Liechtenstein and Monaco, the ruling monarchs wield significant executive power. However, while they are theoretically very powerful within their small states, they are not absolute monarchs and have very limited de facto power compared to the Islamic monarchs, which is why their countries are generally considered to be liberal democracies. For instance, when Hereditary Prince Alois of Liechtenstein threatened to veto a referendum to legalize abortion in 2011, it came as a surprise because the prince had not vetoed any law for over 30 years (in the end, this referendum failed to make it to a vote).
As originally conceived, a constitutional monarch was head of the executive branch and quite a powerful figure even though their power was limited by the constitution and the elected parliament. Some of the framers of the U.S. Constitution may have envisioned the president as an elected constitutional monarch, as the term was then understood, following Montesquieu's account of the separation of powers.
The present-day concept of a constitutional monarchy developed in the United Kingdom, where the democratically elected parliaments, and their leader, the prime minister, exercise power, with the monarchs having ceded power and remaining as a titular position. In many cases the monarchs, while still at the very top of the political and social hierarchy, were given the status of "servants of the people" to reflect the new, egalitarian position. In the course of France's July Monarchy, Louis-Philippe I was styled "King of the French" rather than "King of France".
Following the unification of Germany, Otto von Bismarck rejected the British model. In the constitutional monarchy established under the Constitution of the German Empire which Bismarck inspired, the Kaiser retained considerable actual executive power, while the Imperial Chancellor needed no parliamentary vote of confidence and ruled solely by the imperial mandate. However, this model of constitutional monarchy was discredited and abolished following Germany's defeat in the First World War. Later, Fascist Italy could also be considered a constitutional monarchy, in that there was a king as the titular head of state while actual power was held by Benito Mussolini under a constitution. This eventually discredited the Italian monarchy and led to its abolition in 1946. After the Second World War, surviving European monarchies almost invariably adopted some variant of the constitutional monarchy model originally developed in Britain.
Nowadays a parliamentary democracy that is a constitutional monarchy is considered to differ from one that is a republic only in detail rather than in substance. In both cases, the titular head of state – monarch or president – serves the traditional role of embodying and representing the nation, while the government is carried on by a cabinet composed predominantly of elected Members of Parliament.
However, three important factors distinguish monarchies such as the United Kingdom from systems where greater power might otherwise rest with Parliament. These are:
Other privileges may be nominal or ceremonial (e.g., where the executive, judiciary, police or armed forces act on the authority of or owe allegiance to the Crown).
Today, slightly more than a quarter of constitutional monarchies are Western European countries, including the United Kingdom, Spain, the Netherlands, Belgium, Norway, Denmark, Luxembourg, Monaco, Liechtenstein and Sweden. However, the two most populous constitutional monarchies in the world are in Asia: Japan and Thailand. In these countries, the prime minister holds the day-to-day powers of governance, while the monarch retains residual (but not always insignificant) powers. The powers of the monarch differ between countries. In Denmark and in Belgium, for example, the monarch formally appoints a representative to preside over the creation of a coalition government following a parliamentary election, while in Norway the King chairs special meetings of the cabinet.
In nearly all cases, the monarch is still the nominal chief executive, but is bound by convention to act on the advice of the Cabinet. Only a few monarchies (most notably Japan and Sweden) have amended their constitutions so that the monarch is no longer even the nominal chief executive.
There are fifteen constitutional monarchies under King Charles III, which are known as Commonwealth realms. Unlike some of their continental European counterparts, the Monarch and his Governors-General in the Commonwealth realms hold significant "reserve" or "prerogative" powers, to be wielded in times of extreme emergency or constitutional crises, usually to uphold parliamentary government. For example, during the 1975 Australian constitutional crisis, the Governor-General dismissed the Australian Prime Minister Gough Whitlam. The Australian Senate had threatened to block the Government's budget by refusing to pass the necessary appropriation bills. On 11 November 1975, Whitlam intended to call a half-Senate election to try to break the deadlock. When he sought the Governor-General's approval of the election, the Governor-General instead dismissed him as Prime Minister. Shortly afterwards, he installed the leader of the opposition, Malcolm Fraser, in his place. Acting quickly before all parliamentarians became aware of the change of government, Fraser and his allies secured passage of the appropriation bills, and the Governor-General dissolved Parliament for a double dissolution election. Fraser and his government were returned with a massive majority. This led to much speculation among Whitlam's supporters as to whether this use of the Governor-General's reserve powers was appropriate, and whether Australia should become a republic. Among supporters of constitutional monarchy, however, the event confirmed the monarchy's value as a source of checks and balances against elected politicians who might seek powers in excess of those conferred by the constitution, and ultimately as a safeguard against dictatorship.
In Thailand's constitutional monarchy, the monarch is recognized as the Head of State, Head of the Armed Forces, Upholder of the Buddhist Religion, and Defender of the Faith. The previous king, Bhumibol Adulyadej, was the longest-reigning monarch in the world and in all of Thailand's history before his death on 13 October 2016. Bhumibol reigned through several political changes in the Thai government, playing an influential role in each incident and often acting as mediator between disputing political opponents. (See Bhumibol's role in Thai politics.) Among the powers retained by the Thai monarch under the constitution, lèse majesté protects the image of the monarch and enables him to play a role in politics; it carries strict criminal penalties for violators. The Thai people generally revered Bhumibol, and much of his social influence arose from this reverence and from the socioeconomic improvement efforts undertaken by the royal family.
In the United Kingdom, a frequent debate centres on when it is appropriate for a British monarch to act. When a monarch does act, political controversy can often ensue, partially because the neutrality of the crown is seen to be compromised in favour of a partisan goal, while some political scientists champion the idea of an "interventionist monarch" as a check against possible illegal action by politicians. For instance, the monarch of the United Kingdom can theoretically exercise an absolute veto over legislation by withholding royal assent. However, no monarch has done so since 1708, and it is widely believed that this and many of the monarch's other political powers are lapsed powers.
There are currently 43 monarchies worldwide.
https://en.wikipedia.org/wiki/Constitutional_monarchy
5,653 | Clarke's three laws | British science fiction writer Arthur C. Clarke formulated three adages that are known as Clarke's three laws, of which the third law is the best known and most widely cited. They are part of his ideas in his extensive writings about the future.
The laws are:

1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3. Any sufficiently advanced technology is indistinguishable from magic.
One account stated that Clarke's laws were developed after the editor of his works in French started numbering the author's assertions. All three laws appear in Clarke's essay "Hazards of Prophecy: The Failure of Imagination", first published in Profiles of the Future (1962); however, they were not all published at the same time. Clarke's first law was proposed in the 1962 edition of the essay, as "Clarke's Law" in Profiles of the Future.
The second law is offered as a simple observation in the same essay, but its status as Clarke's second law was conferred by others. It was initially a derivative of the first law, and formally became Clarke's second law when the author proposed the third law in the 1973 revision of Profiles of the Future, which included an acknowledgement. It was also here that Clarke wrote of the third law: "As three laws were good enough for Newton, I have modestly decided to stop there".
The third law is the best known and most widely cited. It was published in a 1968 letter to Science magazine and eventually added to the 1973 revision of the "Hazards of Prophecy" essay. In 1952, Isaac Asimov, in his book Foundation and Empire (part 1.1, "Search for Magicians"), wrote a similar phrase: "... an uninformed public tends to confuse scholarship with magicians..." It also echoes a statement in a 1942 story by Leigh Brackett: "Witchcraft to the ignorant, ... simple science to the learned". Even earlier examples of this sentiment may be found in Wild Talents (1932) by Charles Fort: "...a performance that may someday be considered understandable, but that, in these primitive times, so transcends what is said to be the known that it is what I mean by magic," and in the short story The Hound of Death (1933) by Agatha Christie: "The supernatural is only the nature of which the laws are not yet understood." Virginia Woolf's 1928 novel Orlando: A Biography explicitly compares advanced technology to magic:
Then she got into the lift, for the good reason that the door stood open; and was shot smoothly upwards. The very fabric of life now, she thought as she rose, is magic. In the eighteenth century, we knew how everything was done; but here I rise through the air; I listen to voices in America; I see men flying – but how it's done I can't even begin to wonder. So my belief in magic returns.
Clarke gave an example of the third law when he said that while he "would have believed anyone who told him back in 1962 that there would one day exist a book-sized object capable of holding the content of an entire library, he would never have accepted that the same device could find a page or word in a second and then convert it into any typeface and size from Albertus Extra Bold to Zurich Calligraphic", referring to his memory of "seeing and hearing Linotype machines which slowly converted 'molten lead into front pages that required two men to lift them'".
The third law has inspired many snowclones and other variations:
Isaac Asimov's Corollary to Clarke's First Law: "When, however, the lay public rallies round an idea that is denounced by distinguished but elderly scientists and supports that idea with great fervour and emotion – the distinguished but elderly scientists are then, after all, probably right."
A contrapositive of the third law is "Any technology distinguishable from magic is insufficiently advanced." (Gehm's corollary)
https://en.wikipedia.org/wiki/Clarke%27s_three_laws
5,654 | Caspar David Friedrich | Caspar David Friedrich (5 September 1774 – 7 May 1840) was a German Romantic landscape painter, generally considered the most important German artist of his generation. He is best known for his allegorical landscapes, which typically feature contemplative figures silhouetted against night skies, morning mists, barren trees or Gothic ruins. His primary interest was the contemplation of nature, and his often symbolic and anti-classical work seeks to convey a subjective, emotional response to the natural world. Friedrich's paintings characteristically set a human presence in diminished perspective amid expansive landscapes, reducing the figures to a scale that, according to the art historian Christopher John Murray, directs "the viewer's gaze towards their metaphysical dimension".
Friedrich was born in the town of Greifswald on the Baltic Sea in what was at the time Swedish Pomerania. He studied in Copenhagen until 1798, before settling in Dresden. He came of age during a period when, across Europe, a growing disillusionment with materialistic society was giving rise to a new appreciation of spirituality. This shift in ideals was often expressed through a reevaluation of the natural world, as artists such as Friedrich, J. M. W. Turner and John Constable sought to depict nature as a "divine creation, to be set against the artifice of human civilization".
Friedrich's work brought him renown early in his career. Contemporaries such as the French sculptor David d'Angers spoke of him as having discovered "the tragedy of landscape". His work nevertheless fell from favour during his later years, and he died in obscurity. As Germany moved towards modernisation in the late 19th century, a new sense of urgency characterised its art, and Friedrich's contemplative depictions of stillness came to be seen as products of a bygone age.
The early 20th century brought a renewed appreciation of his art, beginning in 1906 with an exhibition of thirty-two of his paintings in Berlin. His work influenced Expressionist artists and later Surrealists and Existentialists. The rise of Nazism in the early 1930s saw a resurgence in Friedrich's popularity, but this was followed by a sharp decline as his paintings were, by association with the Nazi movement, seen as promoting German nationalism. In the late 1970s Friedrich regained his reputation as an icon of the German Romantic movement and a painter of international importance.
Caspar David Friedrich was born on 5 September 1774, in Greifswald, Swedish Pomerania, on the Baltic coast of Germany. The sixth of ten children, he was raised in the strict Lutheran creed of his father Adolf Gottlieb Friedrich, a candle-maker and soap boiler. Records of the family's financial circumstances are contradictory; while some sources indicate the children were privately tutored, others record that they were raised in relative poverty. He became familiar with death from an early age. His mother, Sophie, died in 1781 when he was seven. A year later, his sister Elisabeth died, and a second sister, Maria, succumbed to typhus in 1791. Arguably the greatest tragedy of his childhood happened in 1787 when his brother Johann Christoffer died: at the age of thirteen, Caspar David witnessed his younger brother fall through the ice of a frozen lake, and drown. Some accounts suggest that Johann Christoffer perished while trying to rescue Caspar David, who was also in danger on the ice.
Friedrich began his formal study of art in 1790 as a private student of artist Johann Gottfried Quistorp at the University of Greifswald in his home city, at which the art department is now named Caspar-David-Friedrich-Institut in his honour. Quistorp took his students on outdoor drawing excursions; as a result, Friedrich was encouraged to sketch from life at an early age. Through Quistorp, Friedrich met and was subsequently influenced by the theologian Ludwig Gotthard Kosegarten, who taught that nature was a revelation of God. Quistorp introduced Friedrich to the work of the German 17th-century artist Adam Elsheimer, whose works often included religious subjects dominated by landscape, and nocturnal subjects. During this period he also studied literature and aesthetics with Swedish professor Thomas Thorild. Four years later Friedrich entered the prestigious Academy of Copenhagen, where he began his education by making copies of casts from antique sculptures before proceeding to drawing from life.
Living in Copenhagen afforded the young painter access to the Royal Picture Gallery's collection of 17th-century Dutch landscape painting. At the Academy he studied under teachers such as Christian August Lorentzen and the landscape painter Jens Juel. These artists were inspired by the Sturm und Drang movement and represented a midpoint between the dramatic intensity and expressive manner of the budding Romantic aesthetic and the waning neo-classical ideal. Mood was paramount, and influence was drawn from such sources as the Icelandic legend of Edda, the poems of Ossian and Norse mythology.
Friedrich settled permanently in Dresden in 1798. During this early period, he experimented in printmaking with etchings and designs for woodcuts which his furniture-maker brother cut. By 1804 he had produced 18 etchings and four woodcuts; they were apparently made in small numbers and only distributed to friends. Despite these forays into other media, he gravitated toward working primarily with ink, watercolour and sepias. With the exception of a few early pieces, such as Landscape with Temple in Ruins (1797), he did not work extensively with oils until his reputation was more established.
Landscapes were his preferred subject, inspired by frequent trips, beginning in 1801, to the Baltic coast, Bohemia, the Krkonoše and the Harz Mountains. Mostly based on the landscapes of northern Germany, his paintings depict woods, hills, harbors, morning mists and other light effects based on a close observation of nature. These works were modeled on sketches and studies of scenic spots, such as the cliffs on Rügen, the surroundings of Dresden and the river Elbe. He executed his studies almost exclusively in pencil, even providing topographical information, yet the subtle atmospheric effects characteristic of Friedrich's mid-period paintings were rendered from memory. These effects took their strength from the depiction of light, and of the illumination of sun and moon on clouds and water: optical phenomena peculiar to the Baltic coast that had never before been painted with such an emphasis.
His reputation as an artist was established when he won a prize in 1805 at the Weimar competition organised by Johann Wolfgang von Goethe. At the time, the Weimar competition tended to draw mediocre and now-forgotten artists presenting derivative mixtures of neo-classical and pseudo-Greek styles. The poor quality of the entries began to prove damaging to Goethe's reputation, so when Friedrich entered two sepia drawings—Procession at Dawn and Fisher-Folk by the Sea—the poet responded enthusiastically and wrote, "We must praise the artist's resourcefulness in this picture fairly. The drawing is well done, the procession is ingenious and appropriate ... his treatment combines a great deal of firmness, diligence and neatness ... the ingenious watercolour ... is also worthy of praise."
Friedrich completed the first of his major paintings in 1808, at the age of 34. Cross in the Mountains, today known as the Tetschen Altar, is an altarpiece panel said to have been commissioned for a family chapel in Tetschen, Bohemia. The panel depicts a cross in profile at the top of a mountain, alone, and surrounded by pine trees.
Although the altarpiece was generally coldly received, it was Friedrich's first painting to receive wide publicity. The artist's friends publicly defended the work, while art critic Basilius von Ramdohr published a long article challenging Friedrich's use of landscape in a religious context. He rejected the idea that landscape painting could convey explicit meaning, writing that it would be "a veritable presumption, if landscape painting were to sneak into the church and creep onto the altar". Friedrich responded with a programme describing his intentions in 1809, comparing the rays of the evening sun to the light of the Holy Father. This statement marked the only time Friedrich recorded a detailed interpretation of his own work, and the painting was among the few commissions the artist ever received.
Following the purchase of two of his paintings by the Prussian Crown Prince, Friedrich was elected a member of the Berlin Academy in 1810. Yet in 1816, he sought to distance himself from Prussian authority and applied that June for Saxon citizenship. The move was not expected; the Saxon government was pro-French, while Friedrich's paintings were seen as generally patriotic and distinctly anti-French. Nevertheless, with the aid of his Dresden-based friend Graf Vitzthum von Eckstädt, Friedrich attained citizenship, and in 1818, membership in the Saxon Academy with a yearly dividend of 150 thalers. Although he had hoped to receive a full professorship, it was never awarded him as, according to the German Library of Information, "it was felt that his painting was too personal, his point of view too individual to serve as a fruitful example to students." Politics too may have played a role in stalling his career: Friedrich's decidedly Germanic subjects and costuming frequently clashed with the era's prevailing pro-French attitudes.
On 21 January 1818, Friedrich married Caroline Bommer, the twenty-five-year-old daughter of a dyer from Dresden. The couple had three children, with their first, Emma, arriving in 1820. Physiologist and painter Carl Gustav Carus notes in his biographical essays that marriage did not impact significantly on either Friedrich's life or personality, yet his canvases from this period, including Chalk Cliffs on Rügen—painted after his honeymoon—display a new sense of levity, while his palette is brighter and less austere. Human figures appear with increasing frequency in the paintings of this period, which Siegel interprets as a reflection that "the importance of human life, particularly his family, now occupies his thoughts more and more, and his friends, his wife, and his townspeople appear as frequent subjects in his art."
Around this time, he found support from two sources in Russia. In 1820, the Grand Duke Nikolai Pavlovich, at the behest of his wife Alexandra Feodorovna, visited Friedrich's studio and returned to Saint Petersburg with a number of his paintings, an exchange that began a patronage that continued for many years. Not long thereafter, the poet Vasily Zhukovsky, tutor to the Grand Duke's son (later Tsar Alexander II), met Friedrich in 1821 and found in him a kindred spirit. For decades Zhukovsky helped Friedrich both by purchasing his work himself and by recommending his art to the royal family; his assistance toward the end of Friedrich's career proved invaluable to the ailing and impoverished artist. Zhukovsky remarked that his friend's paintings "please us by their precision, each of them awakening a memory in our mind."
Friedrich was acquainted with Philipp Otto Runge, another leading German painter of the Romantic period. He was also a friend of Georg Friedrich Kersting, and painted him at work in his unadorned studio, and of the Norwegian painter Johan Christian Clausen Dahl (1788–1857). Dahl was close to Friedrich during the artist's final years, and he expressed dismay that to the art-buying public, Friedrich's pictures were only "curiosities". While the poet Zhukovsky appreciated Friedrich's psychological themes, Dahl praised the descriptive quality of Friedrich's landscapes, commenting that "artists and connoisseurs saw in Friedrich's art only a kind of mystic, because they themselves were only looking out for the mystic ... They did not see Friedrich's faithful and conscientious study of nature in everything he represented".
During this period Friedrich frequently sketched memorial monuments and sculptures for mausoleums, reflecting his obsession with death and the afterlife; he even created designs for some of the funerary art in Dresden's cemeteries. Some of these works were lost in the fire that destroyed Munich's Glass Palace (1931) and later in the 1945 bombing of Dresden.
Friedrich's reputation steadily declined over the final fifteen years of his life. As the ideals of early Romanticism passed from fashion, he came to be viewed as an eccentric and melancholy character, out of touch with the times. Gradually his patrons fell away. By 1820, he was living as a recluse and was described by friends as the "most solitary of the solitary". Towards the end of his life he lived in relative poverty. He became isolated and spent long periods of the day and night walking alone through woods and fields, often beginning his strolls before sunrise.
He suffered his first stroke in June 1835, which left him with minor limb paralysis and greatly reduced his ability to paint. As a result, he was unable to work in oil; instead he was limited to watercolour, sepia and reworking older compositions. Although his vision remained strong, he had lost the full strength of his hand. Yet he was able to produce a final 'black painting', Seashore by Moonlight (1835–1836), described by Vaughan as the "darkest of all his shorelines, in which richness of tonality compensates for the lack of his former finesse". Symbols of death appeared in his work from this period. Soon after his stroke, the Russian royal family purchased a number of his earlier works, and the proceeds allowed him to travel to Teplitz—in today's Czech Republic—to recover.
During the mid-1830s, Friedrich began a series of portraits and he returned to observing himself in nature. As the art historian William Vaughan observed, however, "He can see himself as a man greatly changed. He is no longer the upright, supportive figure that appeared in Two Men Contemplating the Moon in 1819. He is old and stiff ... he moves with a stoop". By 1838, he was capable of working only in a small format. He and his family were living in poverty and grew increasingly dependent on the charity of friends for support.
Friedrich died in Dresden on 7 May 1840, and was buried in Dresden's Trinitatis-Friedhof (Trinity Cemetery) east of the city centre (the entrance to which he had painted some 15 years earlier). His simple flat gravestone lies north-west of the central roundel within the main avenue.
By this time his reputation and fame had waned, and his passing was little noticed within the artistic community. His artwork had certainly been acknowledged during his lifetime, but not widely. While the close study of landscape and an emphasis on the spiritual elements of nature were commonplace in contemporary art, his interpretations were highly original and personal. By 1838, his work no longer sold or received attention from critics; the Romantic movement had moved away from the early idealism that the artist had helped found.
Carl Gustav Carus later wrote a series of articles which paid tribute to Friedrich's transformation of the conventions of landscape painting. However, Carus' articles placed Friedrich firmly in his time, and did not place the artist within a continuing tradition. Only one of his paintings had been reproduced as a print, and that was produced in very few copies.
What the newer landscape artists see in a circle of a hundred degrees in Nature they press together unmercifully into an angle of vision of only forty-five degrees. And furthermore, what is in Nature separated by large spaces, is compressed into a cramped space and overfills and oversatiates the eye, creating an unfavorable and disquieting effect on the viewer.
The visualisation and portrayal of landscape in an entirely new manner was Friedrich's key innovation. He sought not just to explore the blissful enjoyment of a beautiful view, as in the classic conception, but rather to examine an instant of sublimity, a reunion with the spiritual self through the contemplation of nature. Friedrich was instrumental in transforming landscape in art from a backdrop subordinated to human drama to a self-contained emotive subject. Friedrich's paintings commonly employed the Rückenfigur—a person seen from behind, contemplating the view. The viewer is encouraged to place himself in the position of the Rückenfigur, by which means he experiences the sublime potential of nature, understanding that the scene is as perceived and idealised by a human.
Friedrich created the idea of a landscape full of romantic feeling—die romantische Stimmungslandschaft. His art details a wide range of geographical features, such as rock coasts, forests and mountain scenes, and often used landscape to express religious themes. During his time, most of the best-known paintings were viewed as expressions of a religious mysticism. He wrote: "The artist should paint not only what he sees before him, but also what he sees within him. If, however, he sees nothing within him, then he should also refrain from painting that which he sees before him. Otherwise, his pictures will be like those folding screens behind which one expects to find only the sick or the dead." Expansive skies, storms, mist, forests, ruins and crosses bearing witness to the presence of God are frequent elements in Friedrich's landscapes. Though death finds symbolic expression in boats that move away from shore—a Charon-like motif—and in the poplar tree, it is referenced more directly in paintings like The Abbey in the Oakwood (1808–1810), in which monks carry a coffin past an open grave, toward a cross, and through the portal of a church in ruins.
He was one of the first artists to portray winter landscapes in which the land is rendered as stark and dead. Friedrich's winter scenes are solemn and still—according to the art historian Hermann Beenken, Friedrich painted winter scenes in which "no man has yet set his foot. The theme of nearly all the older winter pictures had been less winter itself than life in winter. In the 16th and 17th centuries, it was thought impossible to leave out such motifs as the crowd of skaters, the wanderer ... It was Friedrich who first felt the wholly detached and distinctive features of a natural life. Instead of many tones, he sought the one; and so, in his landscape, he subordinated the composite chord into one single basic note".
Bare oak trees and tree stumps, such as those in Raven Tree (c. 1822), Man and Woman Contemplating the Moon (c. 1824), and Willow Bush under a Setting Sun (c. 1835), are recurring elements of his paintings, and usually symbolise death. Countering the sense of despair are Friedrich's symbols for redemption: the cross and the clearing sky promise eternal life, and the slender moon suggests hope and the growing closeness of Christ. In his paintings of the sea, anchors often appear on the shore, also indicating a spiritual hope. In The Abbey in the Oakwood, the movement of the monks away from the open grave and toward the cross and the horizon imparts Friedrich's message that the final destination of man's life lies beyond the grave.
With dawn and dusk constituting prominent themes of his landscapes, Friedrich's own later years were characterised by a growing pessimism. His work becomes darker, revealing a fearsome monumentality. The Wreck of the Hope—also known as The Polar Sea or The Sea of Ice (1823–1824)—perhaps best summarises Friedrich's ideas and aims at this point, though in such a radical way that the painting was not well received. Completed in 1824, it depicted a grim subject, a shipwreck in the Arctic Ocean; "the image he produced, with its grinding slabs of travertine-colored floe ice chewing up a wooden ship, goes beyond documentary into allegory: the frail bark of human aspiration crushed by the world's immense and glacial indifference."
Friedrich's written commentary on aesthetics was limited to a collection of aphorisms set down in 1830, in which he explained the need for the artist to match natural observation with an introspective scrutiny of his own personality. His best-known remark advises the artist to "close your bodily eye so that you may see your picture first with the spiritual eye. Then bring to the light of day that which you have seen in the darkness so that it may react upon others from the outside inwards."
Both Friedrich's life and art have at times been perceived as marked by an overwhelming sense of loneliness. Art historians and some of his contemporaries attribute such interpretations to the losses he suffered during his youth and to the bleak outlook of his adulthood, while Friedrich's pale and withdrawn appearance helped reinforce the popular notion of the "taciturn man from the North".
Friedrich suffered depressive episodes in 1799, 1803–1805, around 1813, in 1816 and between 1824 and 1826. There are noticeable thematic shifts in the works he produced during these episodes, which see the emergence of such motifs and symbols as vultures, owls, graveyards and ruins. From 1826 these motifs became a permanent feature of his output, while his use of colour became darker and more muted. Carus wrote in 1829 that Friedrich "is surrounded by a thick, gloomy cloud of spiritual uncertainty", though the noted art historian and curator Hubertus Gassner disagrees with such notions, seeing in Friedrich's work a positive and life-affirming subtext inspired by Freemasonry and religion.
Reflecting Friedrich's patriotism and resentment during the 1813 French occupation of the dominion of Pomerania, motifs from German folklore became increasingly prominent in his work. An anti-French German nationalist, Friedrich used motifs from his native landscape to celebrate Germanic culture, customs and mythology. He was impressed by the anti-Napoleonic poetry of Ernst Moritz Arndt and Theodor Körner, and the patriotic literature of Adam Müller and Heinrich von Kleist. Moved by the deaths of three friends killed in battle against France, as well as by Kleist's 1808 drama Die Hermannsschlacht, Friedrich undertook a number of paintings in which he intended to convey political symbols solely by means of the landscape—a first in the history of art.
In Old Heroes' Graves (1812), a dilapidated monument inscribed "Arminius" invokes the Germanic chieftain, a symbol of nationalism, while the four tombs of fallen heroes are slightly ajar, freeing their spirits for eternity. Two French soldiers appear as small figures before a cave, lower and deep in a grotto surrounded by rock, as if farther from heaven. A second political painting, Fir Forest with the French Dragoon and the Raven (c. 1813), depicts a lost French soldier dwarfed by a dense forest, while on a tree stump a raven is perched—a prophet of doom, symbolizing the anticipated defeat of France.
Alongside other Romantic painters, Friedrich helped position landscape painting as a major genre within Western art. Of his contemporaries, Friedrich's style most influenced the painting of Johan Christian Dahl (1788–1857). Among later generations, Arnold Böcklin (1827–1901) was strongly influenced by his work, and the substantial presence of Friedrich's works in Russian collections influenced many Russian painters, in particular Arkhip Kuindzhi (c. 1842–1910) and Ivan Shishkin (1832–1898). Friedrich's spirituality anticipated American painters such as Albert Pinkham Ryder (1847–1917), Ralph Blakelock (1847–1919), the painters of the Hudson River School and the New England Luminists.
At the turn of the 20th century, Friedrich was rediscovered by the Norwegian art historian Andreas Aubert (1851–1913), whose writing initiated modern Friedrich scholarship, and by the Symbolist painters, who valued his visionary and allegorical landscapes. The Norwegian Symbolist Edvard Munch (1863–1944) would have seen Friedrich's work during a visit to Berlin in the 1880s. Munch's 1899 print The Lonely Ones echoes Friedrich's Rückenfigur (back figure), although in Munch's work the focus has shifted away from the broad landscape and toward the sense of dislocation between the two melancholy figures in the foreground.
Friedrich's modern revival gained momentum in 1906, when thirty-two of his works were featured in an exhibition in Berlin of Romantic-era art. His landscapes exercised a strong influence on the work of German artist Max Ernst (1891–1976), and as a result other Surrealists came to view Friedrich as a precursor to their movement. In 1934, the Belgian painter René Magritte (1898–1967) paid tribute in his work The Human Condition, which directly echoes motifs from Friedrich's art in its questioning of perception and the role of the viewer.
A few years later, the Surrealist journal Minotaure included Friedrich in a 1939 article by the critic Marie Landsberger, thereby exposing his work to a far wider circle of artists. The influence of The Wreck of Hope (or The Sea of Ice) is evident in the 1940–41 painting Totes Meer by Paul Nash (1889–1946), a fervent admirer of Ernst. Friedrich's work has been cited as an inspiration by other major 20th-century artists, including Mark Rothko (1903–1970), Gerhard Richter (b. 1932), Gotthard Graubner and Anselm Kiefer (b. 1945). Friedrich's Romantic paintings have also been singled out by writer Samuel Beckett (1906–89), who, standing before Man and Woman Contemplating the Moon, said "This was the source of Waiting for Godot, you know."
In his 1961 article "The Abstract Sublime", originally published in ARTnews, the art historian Robert Rosenblum compared the Romantic landscape paintings of Friedrich and Turner with the Abstract Expressionist paintings of Mark Rothko. Rosenblum specifically describes Friedrich's 1809 painting The Monk by the Sea, Turner's The Evening Star and Rothko's 1954 Light, Earth and Blue as revealing affinities of vision and feeling. According to Rosenblum, "Rothko, like Friedrich and Turner, places us on the threshold of those shapeless infinities discussed by the aestheticians of the Sublime. The tiny monk in the Friedrich and the fisher in the Turner establish a poignant contrast between the infinite vastness of a pantheistic God and the infinite smallness of His creatures. In the abstract language of Rothko, such literal detail—a bridge of empathy between the real spectator and the presentation of a transcendental landscape—is no longer necessary; we ourselves are the monk before the sea, standing silently and contemplatively before these huge and soundless pictures as if we were looking at a sunset or a moonlit night."
Friedrich's work lay in near-oblivion for decades, particularly after the deaths of his friends. By 1890, however, the symbolism in his work had begun to ring true with the artistic mood of the day, especially in central Europe. Yet despite this renewed interest and an acknowledgment of his originality, his lack of regard for "painterly effect" and his thinly rendered surfaces jarred with the theories of the time.
During the 1930s, Friedrich's work was used in the promotion of Nazi ideology, which attempted to fit the Romantic artist within the nationalistic Blut und Boden. It took decades for Friedrich's reputation to recover from this association with Nazism. His reliance on symbolism and the fact that his work fell outside the narrow definitions of modernism contributed to his fall from favour. In 1949, art historian Kenneth Clark wrote that Friedrich "worked in the frigid technique of his time, which could hardly inspire a school of modern painting", and suggested that the artist was trying to express in painting what is best left to poetry. Clark's dismissal of Friedrich reflected the damage the artist's reputation sustained during the late 1930s.
Friedrich's reputation suffered further damage when his imagery was adopted, within the horror and fantasy genres, by a number of Hollywood directors, including Walt Disney, who built on the work of such German cinema masters as Fritz Lang and F. W. Murnau. His rehabilitation was slow, but was aided by the writings of critics and scholars such as Werner Hofmann, Helmut Börsch-Supan and Sigrid Hinz, who successfully rebutted the political associations ascribed to his work, developed a catalogue raisonné, and placed Friedrich within a purely art-historical context.
By the 1970s, he was again being exhibited in major international galleries and found favour with a new generation of critics and art historians. Today, his international reputation is well established. He is a national icon in his native Germany, and highly regarded by art historians and connoisseurs across the Western World. He is generally viewed as a figure of great psychological complexity, and according to Vaughan, "a believer who struggled with doubt, a celebrator of beauty haunted by darkness. In the end, he transcends interpretation, reaching across cultures through the compelling appeal of his imagery. He has truly emerged as a butterfly—hopefully one that will never again disappear from our sight".
Friedrich was a prolific artist who produced more than 500 attributed works. In line with the Romantic ideals of his time, he intended his paintings to function as pure aesthetic statements, so he took care that the titles given to his work were not overly descriptive or evocative. It is likely that some of today's more literal titles, such as The Stages of Life, were not given by the artist himself, but were instead adopted during one of the revivals of interest in Friedrich. Dating Friedrich's work is complicated, in part because he often did not directly name or date his canvases. He kept a carefully detailed notebook on his output, however, which scholars have used to tie paintings to their completion dates.
5,655 | Courtney Love | Courtney Michelle Love (née Harrison; born July 9, 1964) is an American singer, guitarist, songwriter, and actress. A figure in the alternative and grunge scenes of the 1990s, she has had a career spanning four decades. She rose to prominence as the lead vocalist and rhythm guitarist of the alternative rock band Hole, which she formed in 1989. Love has drawn public attention for her uninhibited live performances and confrontational lyrics, as well as her highly publicized personal life following her marriage to Nirvana frontman Kurt Cobain. In 2020, NME named her one of the most influential singers in alternative culture of the last 30 years.
Love had an itinerant childhood, but was primarily raised in Portland, Oregon, where she played in a series of short-lived bands and was active in the local punk scene. After a brief stint in juvenile detention, she spent a year living in Dublin and Liverpool before returning to the United States and pursuing an acting career. She appeared in supporting roles in the Alex Cox films Sid and Nancy (1986) and Straight to Hell (1987) before forming the band Hole in Los Angeles with guitarist Eric Erlandson. The group received critical acclaim from the underground rock press for their 1991 debut album, produced by Kim Gordon, while their second release, Live Through This (1994), was met with critical accolades and multi-platinum sales. In 1995, Love returned to acting, earning a Golden Globe Award nomination for her performance as Althea Leasure in Miloš Forman's The People vs. Larry Flynt (1996), which established her as a mainstream actress. Two years later, Hole's third album, Celebrity Skin (1998), was nominated for three Grammy Awards.
Love continued to work as an actress into the early 2000s, appearing in big-budget pictures such as Man on the Moon (1999) and Trapped (2002), before releasing her first solo album, America's Sweetheart, in 2004. The next several years were marred by publicity surrounding Love's legal troubles and drug relapse, which resulted in a mandatory lockdown rehabilitation sentence in 2005 while she was writing a second solo album. That project became Nobody's Daughter, released in 2010 as a Hole album but without the former Hole lineup. Between 2014 and 2015, Love released two solo singles and returned to acting in the network series Sons of Anarchy and Empire. In 2020, she confirmed she was writing new music. Love has also been active as a writer; she co-created and co-wrote three volumes of a manga, Princess Ai, between 2004 and 2006, and wrote a memoir, Dirty Blonde (2006).
Courtney Michelle Harrison was born July 9, 1964, at Saint Francis Memorial Hospital in San Francisco, California, the first child of psychotherapist Linda Carroll (née Risi; born 1944) and Hank Harrison (1941–2022), a publisher and road manager for the Grateful Dead. Her parents met at a party held for Dizzy Gillespie in 1963, and the two married in Reno, Nevada after Carroll discovered she was pregnant. Carroll, who was adopted at birth, is the biological daughter of novelist Paula Fox. Love's matrilineal great-grandmother was Elsie Fox (née de Sola), a Cuban writer who co-wrote the film The Last Train from Madrid with Love's great-grandfather, Paul Hervey Fox, cousin of writer Faith Baldwin and actor Douglas Fairbanks. Phil Lesh, the founding bassist of the Grateful Dead, is Love's godfather. According to Love, she was named after Courtney Farrell, the protagonist of Pamela Moore's 1956 novel Chocolates for Breakfast. Love is of Cuban, English, German, Irish, Ashkenazi Jewish, and Welsh descent. Through her mother's subsequent marriages, Love has two younger half-sisters, three younger half-brothers (one of whom died in infancy), and one adopted brother.
Love spent her early years in Haight-Ashbury, San Francisco, until her parents divorced in 1970. In a custody hearing, her mother, as well as one of her father's girlfriends, testified that Hank had dosed Courtney with LSD when she was a toddler. Carroll also alleged that Hank threatened to abduct his daughter and flee with her to a foreign country. Though Hank denied these allegations, his custody was revoked. In 1970, Carroll relocated with Love to the rural community of Marcola, Oregon, where they lived along the Mohawk River while Carroll completed her psychology degree at the University of Oregon. There, Carroll remarried, to schoolteacher Frank Rodríguez, who legally adopted Love. Though Love was baptized a Roman Catholic, her mother maintained an unorthodox home; according to Love, "There were hairy, wangly-ass hippies running around naked [doing] Gestalt therapy", and her mother raised her in a gender-free household with "no dresses, no patent leather shoes, no canopy beds, nothing". Love attended a Montessori school in Eugene, Oregon, where she struggled academically and socially. She has said that she began seeing psychiatrists at "like, [age] three. Observational therapy. TM for tots. You name it, I've been there." When she was nine, a psychologist noted that she exhibited signs of autism, among them tactile defensiveness. Love commented in 1995: "When I talk about being introverted, I was diagnosed autistic. At an early age, I would not speak. Then I simply bloomed."
In 1972, Love's mother divorced Rodríguez, married sportswriter David Menely, and moved the family to Nelson, New Zealand. Love was enrolled at Nelson College for Girls, but was soon expelled for misbehavior. In 1973, Carroll sent Love back to Portland, Oregon, to be raised by her former stepfather and other family friends. At age 14, Love was arrested for shoplifting from a Portland department store and remanded to Hillcrest Correctional Facility, a juvenile hall in Salem, Oregon. While at Hillcrest, she became acquainted with records by Patti Smith, the Runaways, and the Pretenders, who later inspired her to start a band. She was intermittently placed in foster care from late 1979 until she was legally emancipated in 1980, after which she remained staunchly estranged from her mother. Shortly after her emancipation, Love spent two months in Japan working as a topless dancer, but was deported after her passport was confiscated. She returned to Portland and began working at the strip club Mary's Club, performing under the name Love to conceal her identity; she later adopted it as her surname. She worked odd jobs, including as a DJ at a gay disco. Love said she lacked social skills, and learned them while frequenting gay clubs and spending time with drag queens. During this period, she enrolled at Portland State University, studying English and philosophy. She later commented that, had she not found a passion for music, she would have sought a career working with children.
Before Liverpool, my life doesn't count. Ian McCulloch and Julian Cope taught me a great deal. I owe them a lot. Liverpool had been a great school to become a rock star.
–Love on her time in Liverpool
In 1981, Love was granted a small trust fund that had been left by her maternal grandparents, which she used to travel to Dublin, Ireland, where her biological father was living. She audited courses at Trinity College, studying theology for two semesters. She later received honorary patronage from Trinity's University Philosophical Society in 2010. While in Dublin, Love met musician Julian Cope of the Teardrop Explodes at one of the band's concerts. Cope took a liking to Love and offered to let her stay at his Liverpool home in his absence. She traveled to London, where she was met by her friend and future bandmate, Robin Barbur, from Portland. Recalling Cope's offer, Love and Barbur moved into Cope's home with him and several other artists, including Pete de Freitas of Echo & the Bunnymen. De Freitas was initially hesitant to allow the girls to stay, but acquiesced as they were "alarmingly young and obviously had nowhere else to go". Love recalled: "They kind of took me in. I was sort of a mascot; I would get them coffee or tea during rehearsals." Cope writes of Love frequently in his 1994 autobiography, Head-On, in which he refers to her as "the adolescent".
In July 1982, Love returned to the United States. In late 1982, she attended a Faith No More concert in San Francisco and convinced the members to let her join as a singer. The group recorded material with Love as a vocalist, but fired her; according to keyboardist Roddy Bottum, who remained Love's friend in the years after, the band wanted a "male energy". Love returned to working abroad as an erotic dancer, briefly in Taiwan, and then at a taxi dance hall in Hong Kong. By Love's account, she first used heroin while working at the Hong Kong dance hall, having mistaken it for cocaine. While still inebriated from the drug, Love was pursued by a wealthy male client who requested that she return with him to the Philippines, and gave her money to purchase new clothes. She used the money to purchase an airfare back to the United States.
At age 19, through her then-boyfriend's mother, film costume designer Bernadene Mann, Love took a job at Paramount Studios cleaning out the wardrobe department of vintage pieces that had suffered dry rot or other damage. During this time, Love became interested in vintage fashion. She subsequently returned to Portland, where she formed short-lived musical projects with her friends Ursula Wehr and Robin Barbur (namely Sugar Babylon, later known as Sugar Babydoll). Love briefly fronted Faith No More for the band's first TV appearance in 1984, singing in a Siouxsie Sioux-style vocal. After Love met Kat Bjelland at the Satyricon nightclub in 1984, the two formed the group the Pagan Babies. Love asked Bjelland to start the band with her as a guitarist, and the two moved to San Francisco in June 1985, where they recruited bassist Jennifer Finch and drummer Janis Tanaka. According to Bjelland, "[Courtney] didn't play an instrument at the time" aside from keyboards, so Bjelland would transcribe Love's musical ideas on guitar for her. The group played several house shows and recorded one 4-track demo before disbanding in late 1985. After Pagan Babies, Love moved to Minneapolis, where Bjelland had formed the group Babes in Toyland, and briefly worked as a concert promoter before returning to California. Drummer Lori Barbero recalled Love's time in Minneapolis:
She lived in my house for a little while. And then we did a concert at the Orpheum. It was in 1988. It was called O-88 with Butthole Surfers, Cows & Bastards, Run Westy Run, and Babes in Toyland. And I guess Maureen [Herman] took Courtney to the airport after she stole all the money. She stayed and stayed, and then the next day she wanted me to take her to the airport. And so I drove her to the airport. She had just had some weird fight with the guy at the desk, and then she left. She said, "I'm going to go to L.A. and I'm going to get my face done and I'm going to be famous." And then she did.
Deciding to shift her focus to acting, Love enrolled at the San Francisco Art Institute and studied film under experimental director George Kuchar, featuring in one of his short films, Club Vatican. She also took experimental theater courses in Oakland taught by Whoopi Goldberg. In 1985, Love submitted an audition tape for the role of Nancy Spungen in the Sid Vicious biopic Sid and Nancy (1986) and was given a minor supporting role by director Alex Cox. After filming Sid and Nancy in New York City, she worked at a peep show in Times Square and squatted at the ABC No Rio social center and Pyramid Club in the East Village. Cox subsequently cast her in a leading role in his film Straight to Hell (1987), a Spaghetti Western starring Joe Strummer, Dennis Hopper, and Grace Jones, shot in Spain in 1986. The film was poorly reviewed by critics, but it caught the attention of Andy Warhol, who featured Love in an episode of Andy Warhol's Fifteen Minutes. She also had a part in the 1988 Ramones music video for "I Wanna Be Sedated", appearing as a bride among dozens of party guests.
Displeased by the "celebutante" fame she had attained, Love abandoned her acting career in 1988 and resumed work as a stripper in Oregon, where she was recognized by customers at a bar in the small town of McMinnville. This prompted Love to go into isolation and relocate to Anchorage, Alaska, where she lived for three months to "gather her thoughts", supporting herself by working at a strip club frequented by local fishermen. "I decided to move to Alaska because I needed to get my shit together and learn how to work", she said in retrospect. "So I went on this sort of vision quest. I got rid of all my earthly possessions. I had my bad little strip clothes and some big sweaters, and I moved into a trailer with a bunch of other strippers."
She was the most gung-ho person I've ever met ... She gave 180%. I've worked with some people that you've had to coax the performance out of them. With Courtney, there was no attitude.
–Don Fleming, who co-produced Hole's debut album with Kim Gordon, on Love
At the end of 1988, Love taught herself to play guitar and relocated to Los Angeles, where she placed an ad in a local music zine: "I want to start a band. My influences are Big Black, Sonic Youth, and Fleetwood Mac." By 1989, Love had recruited guitarist Eric Erlandson; bassist Lisa Roberts, her neighbor; and drummer Caroline Rue, whom she met at a Gwar concert. Love named the band Hole after a line from Euripides' Medea ("There is a hole that pierces right through me") and a conversation in which her mother told her that she could not live her life "with a hole running through her". On July 23, 1989, Love married Leaving Trains vocalist James Moreland in Las Vegas; the marriage was annulled the same year. She later said that Moreland was a transvestite and that they had married "as a joke". After forming Hole, Love and Erlandson had a romantic relationship that lasted over a year.
In Hole's formative stages, Love continued to work at strip clubs in Hollywood (including Jumbo's Clown Room and the Seventh Veil), saving money to purchase backline equipment and a touring van, while rehearsing at a Hollywood studio loaned to her by the Red Hot Chili Peppers. Hole played their first show in November 1989 at Raji's, a rock club in central Hollywood. Their debut single, "Retard Girl", was issued in April 1990 through the Long Beach indie label Sympathy for the Record Industry and was played by Rodney Bingenheimer on local rock station KROQ. Hole appeared on the cover of Flipside, a Los Angeles-based punk fanzine. In early 1991, they released their second single, "Dicknail", through Sub Pop Records.
No wave, noise rock, and grindcore were major influences on Love, and Hole's first studio album, Pretty on the Inside, captured an abrasive sound and contained disturbing, graphic lyrics, described by Q as "confrontational [and] genuinely uninhibited". The record was released in September 1991 on Caroline Records, produced by Kim Gordon of Sonic Youth with assistant production from Gumball's Don Fleming; Love and Gordon had met when Hole opened for Sonic Youth during their promotional tour for Goo at the Whisky a Go Go in November 1990. In early 1991, Love sent Gordon a personal letter asking her to produce the record for the band, to which she agreed.
Pretty on the Inside received generally positive critical reception from indie and punk rock critics and was named one of the 20 best albums of the year by Spin. It gained a following in the United Kingdom, charting at 59 on the UK Albums Chart, and its lead single, "Teenage Whore", entered the UK Indie Chart at number one. The album's feminist slant led many to tag the band as part of the riot grrrl movement, a movement with which Love did not associate. The band toured in support of the record, headlining with Mudhoney in Europe; in the United States, they opened for the Smashing Pumpkins, and performed at CBGB in New York City.
During the tour, Love briefly dated Smashing Pumpkins frontman Billy Corgan and then Nirvana frontman Kurt Cobain. The journalist Michael Azerrad states that Love and Cobain met in 1989 at the Satyricon nightclub in Portland, Oregon. However, the Cobain biographer Charles Cross gives the date as February 12, 1990; Cross said that Cobain playfully wrestled Love to the floor after she said that he looked like Dave Pirner of Soul Asylum. According to Love, she met Cobain at a Dharma Bums show in Portland, while Love's bandmate Eric Erlandson said that he and Love were introduced to Cobain in a parking lot after a concert at the Hollywood Palladium on May 17, 1991. In late 1991, Love and Cobain became re-acquainted through Jennifer Finch, one of Love's friends and former bandmates. Love and Cobain were a couple by 1992.
Just marrying [him] created a mythology around me that I didn't expect for myself, because I had a very controlled, five-year plan about how I was going to be successful in the rock industry. Marrying Kurt, it all kind of went sideways in a way that I could not control and I became seen in a certain light–a vilified light that made Yoko Ono look like Pollyanna–and I couldn't stop it.
–Love on her public image after marrying Kurt Cobain
Shortly after completing the tour for Pretty on the Inside, Love married Cobain on Waikiki Beach in Honolulu, Hawaii, on February 24, 1992. She wore a satin and lace dress once owned by actress Frances Farmer, and Cobain wore plaid pajamas. On August 18, 1992, the couple's only child, a daughter, Frances Bean Cobain, was born in Los Angeles. During Love's pregnancy, Hole had recorded a cover of "Over the Edge" for a Wipers tribute album, as well as their fourth single, "Beautiful Son", which was released in April 1993. The family relocated to Carnation, Washington, and then Seattle.
Love's first major media exposure came in a September 1992 profile with Cobain for Vanity Fair by Lynn Hirschberg, entitled "Strange Love". Cobain had become a major public figure following the surprise success of Nirvana's album Nevermind. Love was urged by her manager to participate in the cover story. During the prior year, Love and Cobain had developed a heroin addiction; the profile painted them in an unflattering light, suggesting that Love had been addicted to heroin during her pregnancy. The Los Angeles Department of Children and Family Services investigated, and custody of Frances was temporarily awarded to Love's sister Jaimee. Love claimed she was misquoted by Hirschberg, and asserted that she had immediately quit heroin during her first trimester after she discovered she was pregnant. Love later said the article had serious implications for her marriage and Cobain's mental state, suggesting it was a factor in his suicide two years later.
On September 8, 1993, Love and Cobain made their only public performance together at the Rock Against Rape benefit in Hollywood, performing two acoustic duets of "Pennyroyal Tea" and "Where Did You Sleep Last Night". Love also performed electric versions of two new Hole songs, "Doll Parts" and "Miss World", both written for their upcoming second album. In October 1993, Hole recorded their second album, Live Through This, in Atlanta. The album featured a new lineup with bassist Kristen Pfaff and drummer Patty Schemel.
In April 1994, Cobain killed himself in the Seattle home he shared with Love, who was in rehab in Los Angeles at the time. In the following months, Love was rarely seen in public, staying at her home with friends and family. Cobain's remains were cremated and his ashes divided into portions by Love, who kept some in a teddy bear and some in an urn. In June, she traveled to the Namgyal Buddhist Monastery in Ithaca, New York, and had Cobain's ashes ceremonially blessed by Buddhist monks. Another portion was mixed into clay and made into memorial sculptures.
Live Through This was released one week after Cobain's death on Geffen's subsidiary label DGC. On June 16, Pfaff died of a heroin overdose in Seattle. For Hole's impending tour, Love recruited the Canadian bassist Melissa Auf der Maur. Hole's performance on August 26, 1994, at the Reading Festival—Love's first public performance following Cobain's death—was described by MTV as "by turns macabre, frightening and inspirational". John Peel wrote in The Guardian that Love's disheveled appearance "would have drawn whistles of astonishment in Bedlam", and that her performance "verged on the heroic ... Love steered her band through a set which dared you to pity either her recent history or that of the band ... The band teetered on the edge of chaos, generating a tension which I cannot remember having felt before from any stage."
Live Through This was certified platinum in April 1995 and received numerous accolades. Its success, combined with Cobain's suicide, generated intense publicity for Love, who was featured on Barbara Walters' 10 Most Fascinating People in 1995. Her erratic onstage behavior and various legal troubles during Hole's tour compounded the media coverage of her. Hole performed a series of riotous concerts over the following year, with Love frequently appearing hysterical onstage, flashing crowds, stage diving, and getting into fights with audience members. One journalist, reporting on the band's December 1994 show in Boston, wrote: "Love interrupted the music and talked about her deceased husband Kurt Cobain, and also broke out into Tourette syndrome-like rants. The music was great, but the raving was vulgar and offensive, and prompted some of the audience to shout back at her."
In January 1995, Love was arrested in Melbourne for disrupting a Qantas flight after getting into an argument with a flight attendant. On July 4, 1995, at the Lollapalooza Festival in George, Washington, Love threw a lit cigarette at musician Kathleen Hanna before punching her in the face, alleging that Hanna had made a joke about her daughter. She pleaded guilty to an assault charge and was sentenced to anger management classes. In November 1995, two male teenagers sued Love for allegedly punching them during a Hole concert in Orlando, Florida, in March 1995. The judge dismissed the case on the grounds that the teens "weren't exposed to any greater amount of violence than could reasonably be expected at an alternative rock concert". Love later said she had little memory of 1994 and 1995, as she had been using large quantities of heroin and Rohypnol at the time.
I went for that part so hard because I felt a need for atonement for some cultural damage that had arisen out of me and things that I had done. By doing that role, I felt that, personally and creatively, I could exemplify why this was the most un-glorious, unglamorous, fucked-up thing. And then, bang!, I was done with all that. I could fuck off and do something else.
–Love on her role in The People vs. Larry Flynt (1996)
After Hole's world tour concluded in 1996, Love made a return to acting, first in small roles in the Jean-Michel Basquiat biopic Basquiat and the drama Feeling Minnesota (1996), and then a starring role as Larry Flynt's wife Althea in Miloš Forman's critically acclaimed 1996 film The People vs. Larry Flynt. Love went through rehabilitation and quit using heroin at the insistence of Forman; she was ordered to take multiple urine tests under the supervision of Columbia Pictures while filming, and passed all of them. Despite Columbia Pictures' initial reluctance to hire Love due to her troubled past, her performance received acclaim, earning a Golden Globe nomination for Best Actress, and a New York Film Critics Circle Award for Best Supporting Actress. Critic Roger Ebert called her work in the film "quite a performance; Love proves she is not a rock star pretending to act, but a true actress." She won several other awards from various film critic associations for the film. During this time, Love maintained what the media noted as a more decorous public image, and she appeared in ad campaigns for Versace and in a Vogue Italia spread. Following the release of The People vs. Larry Flynt, she dated her co-star Edward Norton, with whom she remained until 1999.
In late 1997, Hole released the compilations My Body, the Hand Grenade and The First Session, both of which featured previously recorded material. Love attracted media attention in May 1998 after punching journalist Belissa Cohen at a party; the resulting lawsuit was settled out of court for an undisclosed sum. In September 1998, Hole released their third studio album, Celebrity Skin, which featured a stark power pop sound that contrasted with their earlier punk influences. Love described her ambition to make an album where "art meets commerce ... there are no compromises made, it has commercial appeal, and it sticks to [our] original vision." She said she was influenced by Neil Young, Fleetwood Mac, and My Bloody Valentine when writing the album. Smashing Pumpkins frontman Billy Corgan co-wrote several songs. Celebrity Skin was well received by critics; Rolling Stone called it "accessible, fiery and intimate—often at the same time ... a basic guitar record that's anything but basic." The album went multi-platinum and topped "Best of Year" lists at Spin and The Village Voice, and its title track became Hole's only number-one single on the Modern Rock Tracks chart. Hole promoted the album through MTV performances and at the 1998 Billboard Music Awards, and were nominated for three Grammy Awards at the 41st Grammy Awards ceremony.
Before the release of Celebrity Skin, Love and Fender designed a low-priced Squier brand guitar, the Vista Venus. The instrument's shape drew on guitars by Mercury, a little-known independent manufacturer, as well as the Fender Stratocaster and Rickenbacker's solid-body guitars. It had a single-coil and a humbucker pickup and was available in 6-string and 12-string versions. In an early 1999 interview, Love said about the Venus: "I wanted a guitar that sounded really warm and pop, but which required just one box to go dirty ... And something that could also be your first band guitar. I didn't want it all teched out. I wanted it real simple, with just one pickup switch."
Hole toured with Marilyn Manson on the Beautiful Monsters Tour in 1999, but dropped out after nine performances; Love and Manson disagreed over production costs, and Hole was forced to open for Manson under an agreement with Interscope Records. Hole resumed touring with Imperial Teen. Love later said Hole had also abandoned the tour because of the sexualized treatment of teenage female audience members by Manson and Korn, with whom Hole also toured in Australia. Love told interviewers at 99X.FM in Atlanta: "What I really don't like—there are certain girls that like us, or like me, who are really messed up ... they're very young, and they do not need to be taken and raped, or filmed having enema contests ... [they were] going out into the audience and picking up fourteen and fifteen-year-old girls who obviously cut themselves, and then [I had] to see them in the morning ... it's just uncool."
In 1999, Love was awarded an Orville H. Gibson Award for Best Female Rock Guitarist. During this time, she starred opposite Jim Carrey as his partner Lynne Margulies in the Andy Kaufman biopic Man on the Moon (1999), followed by a role as William S. Burroughs's wife Joan Vollmer in Beat (2000) alongside Kiefer Sutherland. Love was cast as the lead in John Carpenter's sci-fi horror film Ghosts of Mars, but backed out after injuring her foot. She sued the ex-wife of her then-boyfriend, James Barber, who, Love alleged, had caused the injury by running over her foot with her Volvo. The following year, she returned to film opposite Lili Taylor in Julie Johnson (2001), in which she played a woman who has a lesbian relationship; Love won an Outstanding Actress award at L.A.'s Outfest. She was then cast in the thriller Trapped (2002), alongside Kevin Bacon and Charlize Theron. The film was a box-office flop.
In the interim, Hole had become dormant. In March 2001, Love began a "punk rock femme supergroup", Bastard, enlisting Schemel, Veruca Salt co-frontwoman Louise Post, and bassist Gina Crosley. Post recalled: "[Love] was like, 'Listen, you guys: I've been in my Malibu, manicure, movie-star world for two years, alright? I wanna make a record. And let's leave all that grunge shit behind us, eh?' We were being so improvisational, and singing together, and with a trust developing between us. It was the shit." The group recorded a demo tape, but by September 2001, Post and Crosley had left, with Post citing "unhealthy and unprofessional working conditions". In May 2002, Hole announced their breakup amid continuing litigation with Universal Music Group over their record contract.
In 1997, Love and former Nirvana members Krist Novoselic and Dave Grohl formed a limited liability company, Nirvana LLC, to manage Nirvana's business dealings. In June 2001, Love filed a lawsuit to dissolve it, blocking the release of unreleased Nirvana material and delaying the release of the Nirvana compilation With the Lights Out. Grohl and Novoselic sued Love, calling her "irrational, mercurial, self-centered, unmanageable, inconsistent and unpredictable". She responded with a letter stating that "Kurt Cobain was Nirvana" and that she and his family were the "rightful heirs" to the Nirvana legacy.
In February 2003, Love was arrested at Heathrow Airport for disrupting a flight and was banned from Virgin Airlines. In October, she was arrested in Los Angeles after breaking several windows of her producer and then-boyfriend James Barber's home and was charged with being under the influence of a controlled substance; the ordeal resulted in her temporarily losing custody of her daughter.
After the breakup of Hole, Love began composing material with songwriter Linda Perry, and in July 2003 signed a contract with Virgin Records. She began recording her debut solo album, America's Sweetheart, in France shortly after. Virgin Records released America's Sweetheart in February 2004; it received mixed reviews. Charles Aaron of Spin called it a "jaw-dropping act of artistic will and a fiery, proper follow-up to 1994's Live Through This" and awarded it eight out of ten, while Amy Phillips of The Village Voice wrote: "[Love is] willing to act out the dream of every teenage brat who ever wanted to have a glamorous, high-profile hissyfit, and she turns those egocentric nervous breakdowns into art. Sure, the art becomes less compelling when you've been pulling the same stunts for a decade. But, honestly, is there anybody out there who fucks up better?" The album sold fewer than 100,000 copies. Love later expressed regret over the record, blaming her drug problems at the time. Shortly after it was released, she told Kurt Loder on TRL: "I cannot exist as a solo artist. It's a joke."
On March 17, 2004, Love appeared on the Late Show with David Letterman to promote America's Sweetheart. Her appearance drew media coverage when she lifted her shirt multiple times, flashed Letterman, and stood on his desk. The New York Times wrote: "The episode was not altogether surprising for Ms. Love, 39, whose most public moments have veered from extreme pathos—like the time she read the suicide note of her famous husband, Kurt Cobain, on MTV—to angry feminism to catfights to incoherent ranting." Hours later, in the early morning of March 18, Love was arrested in Manhattan for allegedly striking a fan with a microphone stand during a small concert in the East Village. She was released within hours and performed a scheduled concert the following evening at the Bowery Ballroom. Four days later, she called in multiple times to The Howard Stern Show, claiming in broadcast conversations with Stern that the incident had not occurred, and that actress Natasha Lyonne, who was at the concert, was told by the alleged victim that he had been paid $10,000 to file a false claim leading to Love's arrest.
On July 9, 2004, her 40th birthday, Love was arrested for failing to make a court appearance for the March 2004 charges and was taken to Bellevue Hospital, allegedly incoherent, where she was placed on a 72-hour watch. According to police, she was believed to be a potential danger to herself, but she was deemed mentally sound and released to a rehab facility two days later. Amidst public criticism and press coverage, comedian Margaret Cho published an opinion piece, "Courtney Deserves Better from Feminists", arguing that negative associations of Love with her drug and personal problems (including from feminists) overshadowed her music and wellbeing. Love pleaded guilty in October 2004 to disorderly conduct over the East Village incident.
Love's appearance as a roaster on the Comedy Central Roast of Pamela Anderson in August 2005, in which she appeared intoxicated and disheveled, attracted further media attention. One review said that Love "acted as if she belonged in an institution". Six days after the broadcast, Love was sentenced to a 28-day lockdown rehab program for being under the influence of a controlled substance, violating her probation. To avoid jail time, she accepted an additional 180-day rehab sentence in September 2005. In November 2005, after completing the program, Love was discharged from the rehab center under the provision that she complete further outpatient rehab. In subsequent interviews, Love said she had been addicted to substances including prescription drugs, cocaine, and crack cocaine. She said she had been sober since completing rehabilitation in 2007, and cited her Soka Gakkai Buddhist practice (which she began in 1988) as integral to her sobriety.
In the midst of her legal troubles, Love pursued ventures in writing and publishing. She co-wrote a semi-autobiographical manga, Princess Ai (Japanese: プリンセス·アイ物語), with Stu Levy, illustrated by Misaho Kujiradou and Ai Yazawa; it was released in three volumes in the United States and Japan between 2004 and 2006. In 2006, Love published a memoir, Dirty Blonde, and began recording her second solo album, How Dirty Girls Get Clean, collaborating again with Perry and Billy Corgan. Love had written several songs, including an anti-cocaine song titled "Loser Dust", during her time in rehab in 2005. She told Billboard: "My hand-eye coordination was so bad [after the drug use], I didn't even know chords anymore. It was like my fingers were frozen. And I wasn't allowed to make noise [in rehab] ... I never thought I would work again." Tracks and demos for the album leaked online in 2006, and a documentary, The Return of Courtney Love, detailing the making of the album, aired on the British television network More4 in the fall of that year. A rough acoustic version of "Never Go Hungry Again", recorded during an interview for The Times in November, was also released. Incomplete audio clips of the song "Samantha", originating from an interview with NPR, were distributed on the internet in 2007.
In March 2009, fashion designer Dawn Simorangkir brought a libel suit against Love over a defamatory post Love had made on her Twitter account; it was eventually settled for $450,000. Several months later, in June 2009, NME published an article detailing Love's plan to reunite Hole and release a new album, Nobody's Daughter. In response, former Hole guitarist Eric Erlandson stated in Spin magazine that contractually no reunion could take place without his involvement, and that Nobody's Daughter would therefore remain Love's solo record rather than a "Hole" record. Love responded to Erlandson's comments in a Twitter post, claiming "he's out of his mind, Hole is my band, my name, and my Trademark". Nobody's Daughter was released worldwide as a Hole album on April 27, 2010. For the new line-up, Love recruited guitarist Micko Larkin, bassist Shawn Dailey, and drummer Stu Fisher. Nobody's Daughter featured material written and recorded for Love's unfinished solo album, How Dirty Girls Get Clean, including "Pacific Coast Highway", "Letter to God", "Samantha", and "Never Go Hungry", re-produced in the studio with Larkin and engineer Michael Beinhorn. The album's subject matter largely centered on Love's tumultuous life between 2003 and 2007, and it featured a polished folk rock sound with more acoustic guitar work than previous Hole albums.
The first single from Nobody's Daughter was "Skinny Little Bitch", released to promote the album in March 2010. The album received mixed reviews. Robert Sheffield of Rolling Stone gave the album three out of five, saying Love "worked hard on these songs, instead of just babbling a bunch of druggy bullshit and assuming people would buy it, the way she did on her 2004 flop, America's Sweetheart". Sal Cinquemani of Slant Magazine also gave the album three out of five: "It's Marianne Faithfull's substance-ravaged voice that comes to mind most often while listening to songs like 'Honey' and 'For Once in Your Life'. The latter track is, in fact, one of Love's most raw and vulnerable vocal performances to date ... the song offers a rare glimpse into the mind of a woman who, for the last 15 years, has been as famous for being a rock star as she's been for being a victim." Love and the band toured internationally from 2010 into late 2012 promoting the record, with their pre-release shows in London and at South by Southwest receiving critical acclaim. In 2011, Love participated in Hit So Hard, a documentary chronicling bandmate Schemel's time in Hole.
In May 2012, Love debuted an art collection at Fred Torres Collaborations in New York titled "And She's Not Even Pretty", which contained over 40 drawings and paintings by Love composed in ink, colored pencil, pastels, and watercolors. Later in the year, she collaborated with Michael Stipe on the track "Rio Grande" for Johnny Depp's sea shanty album Son of Rogues Gallery, and in 2013, co-wrote and contributed vocals on "Rat A Tat" from Fall Out Boy's album Save Rock and Roll, also appearing in the song's music video.
After dropping the Hole name and performing as a solo artist in late 2012, Love appeared in spring 2013 advertisements for Yves Saint Laurent alongside Kim Gordon and Ariel Pink. Love completed a solo tour of North America in mid-2013, which was purported to be in promotion of an upcoming solo album; however, it was ultimately dubbed a "greatest hits" tour, and featured songs from Love's and Hole's back catalogue. Love told Billboard at the time that she had recorded eight songs in the studio.
Love was the subject of a second landmark libel lawsuit in January 2014, when her former attorney Rhonda Holmes accused her of online defamation and sought $8 million in damages. It was the first case of alleged Twitter-based libel in U.S. history to make it to trial; the jury found in Love's favor. A subsequent defamation lawsuit filed by fashion designer Simorangkir in February 2014, however, resulted in Love being ordered to pay a further $350,000.
On April 22, 2014, Love debuted the song "You Know My Name" on BBC Radio 6 to promote her tour of the United Kingdom. It was released as a double A-side single with the song "Wedding Day" on May 4, 2014, on her own label Cherry Forever Records via Kobalt Label Services. The tracks were produced by Michael Beinhorn and feature Tommy Lee on drums. In an interview with the BBC, Love revealed that she and former Hole guitarist Eric Erlandson had reconciled and had been rehearsing new material together, along with former bassist Melissa Auf der Maur and drummer Patty Schemel, though she did not confirm a reunion of the band. On May 1, 2014, in an interview with Pitchfork, Love commented further on the possibility of Hole reuniting, saying: "I'm not going to commit to it happening, because we want an element of surprise. There's a lot of i's to be dotted and t's to be crossed."
Love was cast in several television series in supporting parts throughout 2014, including the FX series Sons of Anarchy, Revenge, and Lee Daniels' network series Empire in a recurring guest role as Elle Dallas. The track "Walk Out on Me", featuring Love, was included on the Empire: Original Soundtrack from Season 1 album, which debuted at number 1 on the Billboard 200. Alexis Petridis of The Guardian praised the track, saying: "The idea of Courtney Love singing a ballad with a group of gospel singers seems faintly terrifying ... The reality is brilliant. Love's voice fits the careworn lyrics, effortlessly summoning the kind of ravaged darkness that Lana Del Rey nearly ruptures herself trying to conjure up."
In January 2015, Love starred in a New York City stage production, Kansas City Choir Boy, a "pop opera" conceived by and co-starring Todd Almond. Charles Isherwood of The New York Times praised her performance, noting a "soft-edged and bewitching" stage presence, and wrote: "Her voice, never the most supple or rangy of instruments, retains the singular sound that made her an electrifying front woman for the band Hole: a single sustained note can seem to simultaneously contain a plea, a wound and a threat." The show toured later in the year, with performances in Boston and Los Angeles. In April 2015, the journalist Anthony Bozza sued Love, alleging a contractual violation regarding his co-writing of her memoir. Love performed as the opening act for Lana Del Rey on her Endless Summer Tour for eight West Coast shows in May and June 2015. During the tour, she debuted the single "Miss Narcissist", released on Wavves' independent label Ghost Ramp. She was also cast in a supporting role in James Franco's film The Long Home, based on the novel by William Gay, her first film role in over ten years; as of 2022, it remains unreleased.
In January 2016, Love released a clothing line in collaboration with Sophia Amoruso, "Love, Courtney", featuring 18 pieces reflecting her personal style. In November 2016, she began filming the pilot for A Midsummer's Nightmare, a Shakespeare anthology series adapted for Lifetime. She starred as Kitty Menéndez in Menendez: Blood Brothers, a biopic television film based on the lives of Lyle and Erik Menéndez, which premiered on Lifetime in June 2017.
In October 2017, shortly after the Harvey Weinstein scandal made news, a 2005 video of Love warning young actresses about Weinstein went viral. In the footage, while on the red carpet for the Comedy Central Roast of Pamela Anderson, Love was asked by Natasha Leggero if she had any advice for "a young girl moving to Hollywood"; she responded, "If Harvey Weinstein invites you to a private party in the Four Seasons [hotel], don't go." She later tweeted, "Although I wasn't one of his victims, I was eternally banned by [Creative Artists Agency] for speaking out."
In the same year, Love was cast in Justin Kelly's biopic JT LeRoy, portraying a film producer opposite Laura Dern. In March 2018, she appeared in the music video for Marilyn Manson's "Tattooed in Reverse", and in April she appeared as a guest judge on RuPaul's Drag Race. In December, Love was awarded a restraining order against Sam Lutfi, who had acted as her manager for the previous six years, alleging verbal abuse and harassment. Her daughter, Frances, and sister, Jaimee, were also awarded restraining orders against Lutfi. In January 2019, a Los Angeles County judge extended the three-year order to five years, citing Lutfi's tendency to "prey upon people".
On August 18, 2019, Love performed a solo set at the Yola Día festival in Los Angeles, which also featured performances by Cat Power and Lykke Li. On September 9, Love garnered press attention when she publicly criticized Joss Sackler, an heiress to the Sackler family OxyContin fortune, after she allegedly offered Love $100,000 to attend her fashion show during New York Fashion Week. In the same statement, Love indicated that she had relapsed into opioid addiction in 2018, stating that she had recently celebrated a year of sobriety. In October 2019, Love relocated from Los Angeles to London.
On November 21, 2019, Love recorded the song "Mother", written and produced by Lawrence Rothman, as part of the soundtrack for the horror film The Turning (2020). In January 2020, she received the Icon Award at the NME Awards; NME described her as "one of the most influential singers in alternative culture of the last 30 years". The following month, she confirmed she was writing a new record which she described as "really sad ... [I'm] writing in minor chords, and that appeals to my sadness." In March 2021, Love said she had been hospitalized with acute anemia in August 2020, which had nearly killed her and reduced her weight to 97 pounds (44 kg); she made a full recovery.
In August 2022, Love revealed the completion of her memoir, The Girl with the Most Cake, after a nearly ten-year period of writing.
It was announced on May 15, 2023, that Love had been cast in Assassination, a biographical film about the assassination of John F. Kennedy, directed by David Mamet and co-starring Viggo Mortensen, Shia LaBeouf, Al Pacino, and John Travolta.
Love has been candid about her diverse musical influences, the earliest being Patti Smith, The Runaways, and The Pretenders, artists she discovered while in juvenile hall as a young teenager. As a child, her first exposure to music was records that her parents received each month through Columbia Record Club. The first record Love owned was Leonard Cohen's Songs of Leonard Cohen (1967), which she obtained from her mother: "He was so lyric-conscious and morbid, and I was a pretty morbid kid", she recalled. As a teenager, she named Flipper, Kate Bush, Soft Cell, Joni Mitchell, Laura Nyro, Lou Reed, and Dead Kennedys among her favorite artists. While in Dublin at age fifteen, Love attended a Virgin Prunes concert, an event she credited as being a pivotal influence: "I had never seen so much sex, snarl, poetry, evil, restraint, grace, filth, raw power and the very essence of rock and roll", she recalled. "[I had seen] U2 [who] gave me lashes of love and inspiration, and a few nights later the Virgin Prunes fucked–me–up." Decades later, in 2009, Love introduced the band's frontman Gavin Friday at a Carnegie Hall event, and performed a song with him.
Though often associated with punk music, Love has noted that her most significant musical influences have been post-punk and new wave artists. Commenting in 2021, Love said:
There's this idea of "Courtney is punk and stuck in 1995!" but that's not the case. I was more [influenced by] new wave or post-punk. My number one greatest song of all time is "Love Will Tear Us Apart" by Joy Division, and I will take no fucking prisoners in that battle. But the band that affected me more than even Leonard Cohen and Bob Dylan was Echo and the Bunnymen.
Over the years, Love has also named several other new wave and post-punk bands as influences, including The Smiths, Siouxsie and the Banshees, Television, and Bauhaus.
Love's diverse genre interests were illustrated in a 1991 interview with Flipside, in which she stated: "There's a part of me that wants to have a grindcore band and another that wants to have a Raspberries-type pop band." Discussing the abrasive sound of Hole's debut album, she said she felt she had to "catch up with all my hip peers who'd gone all indie on me, and who made fun of me for liking R.E.M. and The Smiths." She has also embraced the influence of experimental artists and punk rock groups, including Sonic Youth, Swans, Big Black, Diamanda Galás, the Germs, and The Stooges. While writing Celebrity Skin, she drew influence from Neil Young and My Bloody Valentine. She has also cited her contemporary PJ Harvey as an influence, saying: "The one rock star that makes me know I'm shit is Polly Harvey. I'm nothing next to the purity that she experiences."
Literature and poetry have often been a major influence on her songwriting; Love said she had "always wanted to be a poet, but there was no money in it." She has named the works of T.S. Eliot and Charles Baudelaire as influential, and referenced works by Dante Rossetti, William Shakespeare, Rudyard Kipling, and Anne Sexton in her lyrics.
Musically, Love's work with Hole and her solo efforts have been characterized as alternative rock; Hole's early material, however, was described by critics as being stylistically closer to grindcore and aggressive punk rock. Spin's October 1991 review of Hole's first album noted that Love's layering of harsh and abrasive riffs buried more sophisticated musical arrangements. In 1998, she stated that Hole had "always been a pop band. We always had a subtext of pop. I always talked about it, if you go back ... what'll sound like some weird Sonic Youth tuning back then to you was sounding like the Raspberries to me, in my demented pop framework."
Love's lyrics are written from a female point of view and have been described as "literate and mordant"; scholars have noted them for "articulating a third-wave feminist consciousness." Simon Reynolds, in reviewing Hole's debut album, noted: "Ms. Love's songs explore the full spectrum of female emotions, from vulnerability to rage. The songs are fueled by adolescent traumas, feelings of disgust about the body, passionate friendships with women and the desire to escape domesticity. Her lyrical style could be described as emotional nudism." Journalist and critic Kim France, in critiquing Love's lyrics, referred to her as a "dark genius" and likened her work to that of Anne Sexton.
Love has remarked that lyrics have always been the most important component of songwriting for her: "The important thing for me ... is it has to look good on the page. I mean, you can love Led Zeppelin and not love their lyrics ... but I made a big effort in my career to have what's on the page mean something." Common themes present in Love's lyrics during her early career included body image, rape, suicide, conformity, pregnancy, prostitution, and death. In a 1991 interview with Everett True, she said: "I try to place [beautiful imagery] next to fucked up imagery, because that's how I view things ... I sometimes feel that no one's taken the time to write about certain things in rock, that there's a certain female point of view that's never been given space."
Critics have noted that Love's later musical work is more lyrically introspective. Celebrity Skin and America's Sweetheart are lyrically centered on celebrity life, Hollywood, and drug addiction, while continuing Love's interest in vanity and body image. Nobody's Daughter was lyrically reflective of Love's past relationships and her struggle for sobriety, with the majority of its lyrics written while she was in rehab in 2006.
Love has a contralto vocal range. According to Love, she never wanted to be a singer, but rather aspired to be a skilled guitarist: "I'm such a lazy bastard though that I never did that", she said. "I was always the only person with the nerve to sing, and so I got stuck with it." She has been regularly noted by critics for her husky vocals as well as her "banshee [-like]" screaming abilities. Her vocals have been compared to those of Johnny Rotten, and David Fricke of Rolling Stone described them as "lung-busting" and "a corrosive, lunatic wail". Upon the release of Hole's 2010 album, Nobody's Daughter, Amanda Petrusich of Pitchfork compared Love's raspy, unpolished vocals to those of Bob Dylan. In 2023, Rolling Stone ranked Love at number 130 on its list of the 200 Greatest Singers of All Time.
She has played a variety of Fender guitars throughout her career, including a Jaguar and a vintage 1965 Jazzmaster; the latter was purchased by the Hard Rock Cafe and is on display in New York City. Between 1989 and 1991, Love primarily played a Rickenbacker 425 because she "preferred the 3/4 neck", but she destroyed the guitar onstage at a 1991 concert opening for the Smashing Pumpkins. In the mid-1990s, she often played a guitar made by Mercury, an obscure company that manufactured custom guitars, as well as a Univox Hi-Flier. Fender's Vista Venus, designed by Love in 1998, was partially inspired by Rickenbacker guitars as well as her Mercury. During tours after the release of Nobody's Daughter (post-2010), Love has played a Rickenbacker 360 onstage. Her amplifier setup has included Fender tube amps, Matchless, Ampeg, and Silvertone models, and a solid-state 1976 Randall Commander.
Love has referred to herself as "a shit guitar player", further commenting in a 2014 interview: "I can still write a song, but [the guitar playing] sounds like shit ... I used to be a good rhythm player but I am no longer dependable." Throughout her career, she has also garnered a reputation for unpredictable live shows. In the 1990s, her performances with Hole were characterized by confrontational behavior, with Love stage diving, smashing guitars or throwing them into the audience, wandering into the crowd at the end of sets, and engaging in sometimes incoherent rants. Critics and journalists have noted Love for her comical, often stream-of-consciousness-like stage banter. Music journalist Robert Hilburn wrote in 1993 that, "rather than simply scripted patter, Love's comments between songs [have] the natural feel of someone who is sharing her immediate feelings." In a review of a live performance published in 2010, it was noted that Love's onstage "one-liners [were] worthy of the Comedy Store."
In 1993, Love and husband Kurt Cobain performed an acoustic set together at the Rock Against Rape benefit in Los Angeles, which raised awareness and provided resources for victims of sexual abuse. In 2000, Love publicly advocated for reform of the record industry in a personal letter published by Salon. In the letter, Love said: "It's not piracy when kids swap music over the Internet using Napster or Gnutella or Freenet or iMesh or beaming their CDs into a My.MP3.com or MyPlay.com music locker. It's piracy when those guys that run those companies make side deals with the cartel lawyers and label heads so that they can be 'the label's friend', and not the artists'." In a subsequent interview with Carrie Fisher, she said that she was interested in starting a union for recording artists, and also discussed race relations in the music industry, advocating for record companies to "put money back into the black community [whom] white people have been stealing from for years."
Love has been a long-standing supporter of LGBT causes. She has frequently collaborated with the Los Angeles Gay and Lesbian Center, taking part in the center's "An Evening with Women" events, whose proceeds help provide food and shelter for homeless youth, services for seniors, legal assistance, domestic violence services, health and mental health services, and cultural arts programs. Love participated with Linda Perry in the 2012 event, performing alongside Aimee Mann and comedian Wanda Sykes. Speaking on her collaboration on the event, Love said: "Seven thousand kids in Los Angeles a year go out on the street, and forty percent of those kids are gay, lesbian, or transgender. They come out to their parents, and become homeless ... for whatever reason, I don't really know why, but gay men have a lot of foundations—I've played many of them—but the lesbian side of it doesn't have as much money and/or donors, so we're excited that this has grown to cover women and women's affairs."
She has also contributed to AIDS organizations, partaking in benefits for amfAR and the RED Campaign. In May 2011, she donated six of her husband Cobain's personal vinyl records for auction at Mariska Hargitay's Joyful Heart Foundation event for victims of child abuse, rape, and domestic violence. She has also supported the Sophie Lancaster Foundation.
Love has had an impact on female-fronted alternative acts and performers. She has been cited as influential on young female instrumentalists in particular, having once infamously proclaimed: "I want every girl in the world to pick up a guitar and start screaming ... I strap on that motherfucking guitar and you cannot fuck with me. That's my feeling." In The Electric Guitar: A History of an American Icon, it is noted:
[Love] truly lived up to Paul Westerberg's (The Replacements) assessment of pretty girls "playing makeup/wearing guitar" ... She frequently stood on stage, microphone in hand and foot on monitor, and simply let her Fender guitar dangle around her neck. She truly embodied the empowerment that came with playing the electric guitar ... Love depended heavily upon her male lead guitar foil Eric Erlandson, but the rest of her band remained exclusively female throughout several lineup changes.
When you're dying and your life is flashing before your eyes ... you're gonna be thinking about the great things you did, the horrible things that you did, the emotional impact that someone had on you and that you had on somebody else. Those are the things that are relevant. To have some sort of emotional impact that transcends time, that's great.
–Love on having a cultural impact, 1997
With over 3 million records sold in the United States alone, Hole became one of the most successful rock bands of all time fronted by a woman. VH1 ranked Love no. 69 in their list of The 100 Greatest Women in Music History in 2012. In 2015, the Phoenix New Times declared Love the number one greatest female rock star of all time, writing: "To build a perfect rock star, there are several crucial ingredients: musical talent, physical attractiveness, tumultuous relationships, substance abuse, and public meltdowns, just to name a few. These days, Love seems to have rebounded from her epic tailspin and has leveled out in a slightly more normal manner, but there's no doubt that her life to date is the type of story people wouldn't believe in a novel or a movie."
Among the alternative musicians who have cited Love as an influence are Scout Niblett; Brody Dalle of The Distillers; Dee Dee Penny of Dum Dum Girls; Victoria Legrand of Beach House; Annie Hardy of Giant Drag; and Nine Black Alps. Contemporary female pop artists Lana Del Rey, Avril Lavigne, Tove Lo, and Sky Ferreira have also cited Love as an influence. Love has frequently been recognized as the most high-profile contributor of feminist music during the 1990s, and for "subverting [the] mainstream expectations of how a woman should look, act, and sound." According to music journalist Maria Raha, "Hole was the highest-profile female-fronted band of the '90s to openly and directly sing about feminism." Patti Smith, a major influence of Love's, also praised her, saying: "I hate genderizing things ... [but] when I heard Hole, I was amazed to hear a girl sing like that. Janis Joplin was her own thing; she was into Big Mama Thornton and Bessie Smith. But what Courtney Love does, I'd never heard a girl do that."
She has also been a gay icon since the mid-1990s, and has jokingly referred to her fanbase as consisting of "females, gay guys, and a few advanced, evolved heterosexual men." Love's aesthetic image, particularly in the early 1990s, also became influential and was dubbed "kinderwhore" by critics and media. The subversive fashion mainly consisted of vintage babydoll dresses accompanied by smeared makeup and red lipstick. MTV reporter Kurt Loder described Love as looking like "a debauched rag doll" onstage. Love later said she had been influenced by the fashion of Chrissy Amphlett of the Divinyls. Interviewed in 1994, Love commented: "I would like to think–in my heart of hearts–that I'm changing some psychosexual aspects of rock music. Not that I'm so desirable. I didn't do the kinder-whore thing because I thought I was so hot. When I see the look used to make one more appealing, it pisses me off. When I started, it was a What Ever Happened to Baby Jane? thing. My angle was irony."
"title": "Life and career"
},
{
"paragraph_id": 19,
"text": "Pretty on the Inside received generally positive critical reception from indie and punk rock critics and was named one of the 20 best albums of the year by Spin. It gained a following in the United Kingdom, charting at 59 on the UK Albums Chart, and its lead single, \"Teenage Whore\", entered the UK Indie Chart at number one. The album's feminist slant led many to tag the band as part of the riot grrrl movement, a movement with which Love did not associate. The band toured in support of the record, headlining with Mudhoney in Europe; in the United States, they opened for the Smashing Pumpkins, and performed at CBGB in New York City.",
"title": "Life and career"
},
{
"paragraph_id": 20,
"text": "During the tour, Love briefly dated Smashing Pumpkins frontman Billy Corgan and then the Nirvana frontman Kurt Cobain. The journalist Michael Azerrad states that Love and Cobain met in 1989 at the Satyricon nightclub in Portland, Oregon. However, the Cobain biographer Charles Cross gives the date as February 12, 1990; Cross said that Cobain playfully wrestled Love to the floor after she said that he looked like Dave Pirner of Soul Asylum. According to Love, she met Cobain at a Dharma Bums show in Portland, while Love's bandmate Eric Erlandson said that he and Love were introduced to Cobain in a parking lot after a concert at the Hollywood Palladium on May 17, 1991. In late 1991, Love and Cobain became re-acquainted through Jennifer Finch, one of Love's friends and former bandmates. Love and Cobain were a couple by 1992.",
"title": "Life and career"
},
{
"paragraph_id": 21,
"text": "Just marrying [him] created a mythology around me that I didn't expect for myself, because I had a very controlled, five-year plan about how I was going to be successful in the rock industry. Marrying Kurt, it all kind of went sideways in a way that I could not control and I became seen in a certain light–a vilified light that made Yoko Ono look like Pollyanna–and I couldn't stop it.",
"title": "Life and career"
},
{
"paragraph_id": 22,
"text": "–Love on her public image after marrying Kurt Cobain",
"title": "Life and career"
},
{
"paragraph_id": 23,
"text": "Shortly after completing the tour for Pretty on the Inside, Love married Cobain on Waikiki Beach in Honolulu, Hawaii, on February 24, 1992. She wore a satin and lace dress once owned by actress Frances Farmer, and Cobain wore plaid pajamas. During Love's pregnancy, Hole recorded a cover of \"Over the Edge\" for a Wipers tribute album, and recorded their fourth single, \"Beautiful Son\", which was released in April 1993. On August 18, the couple's only child, a daughter, Frances Bean Cobain, was born in Los Angeles. They relocated to Carnation, Washington, and then Seattle.",
"title": "Life and career"
},
{
"paragraph_id": 24,
"text": "Love's first major media exposure came in a September 1992 profile with Cobain for Vanity Fair by Lynn Hirschberg, entitled \"Strange Love\". Cobain had become a major public figure following the surprise success of Nirvana's album Nevermind. Love was urged by her manager to participate in the cover story. During the prior year, Love and Cobain had developed a heroin addiction; the profile painted them in an unflattering light, suggesting that Love had been addicted to heroin during her pregnancy. The Los Angeles Department of Children and Family Services investigated, and custody of Frances was temporarily awarded to Love's sister Jaimee. Love claimed she was misquoted by Hirschberg, and asserted that she had immediately quit heroin during her first trimester after she discovered she was pregnant. Love later said the article had serious implications for her marriage and Cobain's mental state, suggesting it was a factor in his suicide two years later.",
"title": "Life and career"
},
{
"paragraph_id": 25,
"text": "On September 8, 1993, Love and Cobain made their only public performance together at the Rock Against Rape benefit in Hollywood, performing two acoustic duets of \"Pennyroyal Tea\" and \"Where Did You Sleep Last Night\". Love also performed electric versions of two new Hole songs, \"Doll Parts\" and \"Miss World\", both written for their upcoming second album. In October 1993, Hole recorded their second album, Live Through This, in Atlanta. The album featured a new lineup with bassist Kristen Pfaff and drummer Patty Schemel.",
"title": "Life and career"
},
{
"paragraph_id": 26,
"text": "In April 1994, Cobain killed himself in the Seattle home he shared with Love, who was in rehab in Los Angeles at the time. In the following months, Love was rarely seen in public, staying at her home with friends and family. Cobain's remains were cremated and his ashes divided into portions by Love, who kept some in a teddy bear and some in an urn. In June, she traveled to the Namgyal Buddhist Monastery in Ithaca, New York and had Cobain's ashes ceremonially blessed by Buddhist monks. Another portion was mixed into clay and made into memorial sculptures.",
"title": "Life and career"
},
{
"paragraph_id": 27,
"text": "Live Through This was released one week after Cobain's death on Geffen's subsidiary label DGC. On June 16, Pfaff died of a heroin overdose in Seattle. For Hole's impending tour, Love recruited the Canadian bassist Melissa Auf der Maur. Hole's performance on August 26, 1994, at the Reading Festival—Love's first public performance following Cobain's death—was described by MTV as \"by turns macabre, frightening and inspirational\". John Peel wrote in The Guardian that Love's disheveled appearance \"would have drawn whistles of astonishment in Bedlam\", and that her performance \"verged on the heroic ... Love steered her band through a set which dared you to pity either her recent history or that of the band ... The band teetered on the edge of chaos, generating a tension which I cannot remember having felt before from any stage.\"",
"title": "Life and career"
},
{
"paragraph_id": 28,
"text": "Live Through This was certified platinum in April 1995 and received numerous accolades. The success combined with Cobain's suicide produced publicity for Love, and she was featured on Barbara Walters' 10 Most Fascinating People in 1995. Her erratic onstage behavior and various legal troubles during Hole's tour compounded the media coverage of her. Hole performed a series of riotous concerts over the following year, with Love frequently appearing hysterical onstage, flashing crowds, stage diving, and getting into fights with audience members. One journalist reported that at the band's show in Boston in December 1994: \"Love interrupted the music and talked about her deceased husband Kurt Cobain, and also broke out into Tourette syndrome-like rants. The music was great, but the raving was vulgar and offensive, and prompted some of the audience to shout back at her.\"",
"title": "Life and career"
},
{
"paragraph_id": 29,
"text": "In January 1995, Love was arrested in Melbourne for disrupting a Qantas flight after getting into an argument with a stewardess. On July 4, 1995, at the Lollapalooza Festival in George, Washington, Love threw a lit cigarette at musician Kathleen Hanna before punching her in the face, alleging that she had made a joke about her daughter. She pleaded guilty to an assault charge and was sentenced to anger management classes. In November 1995, two male teenagers sued Love for allegedly punching them during a Hole concert in Orlando, Florida in March 1995. The judge dismissed the case on grounds that the teens \"weren't exposed to any greater amount of violence than could reasonably be expected at an alternative rock concert\". Love later said she had little memory of 1994 and 1995, as she had been using large quantities of heroin and Rohypnol at the time.",
"title": "Life and career"
},
{
"paragraph_id": 30,
"text": "I went for that part so hard because I felt a need for atonement for some cultural damage that had arisen out of me and things that I had done. By doing that role, I felt that, personally and creatively, I could exemplify why this was the most un-glorious, unglamorous, fucked-up thing. And then, bang!, I was done with all that. I could fuck off and do something else.",
"title": "Life and career"
},
{
"paragraph_id": 31,
"text": "–Love on her role in The People vs. Larry Flynt (1996)",
"title": "Life and career"
},
{
"paragraph_id": 32,
"text": "After Hole's world tour concluded in 1996, Love made a return to acting, first in small roles in the Jean-Michel Basquiat biopic Basquiat and the drama Feeling Minnesota (1996), and then a starring role as Larry Flynt's wife Althea in Miloš Forman's critically acclaimed 1996 film The People vs. Larry Flynt. Love went through rehabilitation and quit using heroin at the insistence of Forman; she was ordered to take multiple urine tests under the supervision of Columbia Pictures while filming, and passed all of them. Despite Columbia Pictures' initial reluctance to hire Love due to her troubled past, her performance received acclaim, earning a Golden Globe nomination for Best Actress, and a New York Film Critics Circle Award for Best Supporting Actress. Critic Roger Ebert called her work in the film \"quite a performance; Love proves she is not a rock star pretending to act, but a true actress.\" She won several other awards from various film critic associations for the film. During this time, Love maintained what the media noted as a more decorous public image, and she appeared in ad campaigns for Versace and in a Vogue Italia spread. Following the release of The People vs. Larry Flynt, she dated her co-star Edward Norton, with whom she remained until 1999.",
"title": "Life and career"
},
{
"paragraph_id": 33,
"text": "In late 1997, Hole released the compilations My Body, the Hand Grenade and The First Session, both of which featured previously recorded material. Love attracted media attention in May 1998 after punching journalist Belissa Cohen at a party; the suit was settled out of court for an undisclosed sum. In September 1998, Hole released their third studio album, Celebrity Skin, which featured a stark power pop sound that contrasted with their earlier punk influences. Love divulged her ambition of making an album where \"art meets commerce ... there are no compromises made, it has commercial appeal, and it sticks to [our] original vision.\" She said she was influenced by Neil Young, Fleetwood Mac, and My Bloody Valentine when writing the album. Smashing Pumpkins frontman Billy Corgan co-wrote several songs. Celebrity Skin was well received by critics; Rolling Stone called it \"accessible, fiery and intimate—often at the same time ... a basic guitar record that's anything but basic.\" Celebrity Skin went multi-platinum, and topped \"Best of Year\" lists at Spin and The Village Voice. It garnered Hole's only number-one single on the Modern Rock Tracks chart with \"Celebrity Skin\". Hole promoted the album through MTV performances and at the 1998 Billboard Music Awards, and were nominated for three Grammy Awards at the 41st Grammy Awards ceremony.",
"title": "Life and career"
},
{
"paragraph_id": 34,
"text": "Before the release of Celebrity Skin, Love and Fender designed a low-priced Squier brand guitar, the Vista Venus. The instrument featured a shape inspired by Mercury, a little-known independent guitar manufacturer, Stratocaster, and Rickenbacker's solid body guitars. It had a single-coil and a humbucker pickup and was available in 6-string and 12-string versions. In an early 1999 interview, Love said about the Venus: \"I wanted a guitar that sounded really warm and pop, but which required just one box to go dirty ... And something that could also be your first band guitar. I didn't want it all teched out. I wanted it real simple, with just one pickup switch.\"",
"title": "Life and career"
},
{
"paragraph_id": 35,
"text": "Hole toured with Marilyn Manson on the Beautiful Monsters Tour in 1999, but dropped out after nine performances; Love and Manson disagreed over production costs, and Hole was forced to open for Manson under an agreement with Interscope Records. Hole resumed touring with Imperial Teen. Love later said Hole also abandoned the tour due to Manson and Korn's (whom they also toured with in Australia) sexualized treatment of teenage female audience members. Love told interviewers at 99X.FM in Atlanta: \"What I really don't like—there are certain girls that like us, or like me, who are really messed up ... they're very young, and they do not need to be taken and raped, or filmed having enema contests ... [they were] going out into the audience and picking up fourteen and fifteen-year-old girls who obviously cut themselves, and then [I had] to see them in the morning ... it's just uncool.\"",
"title": "Life and career"
},
{
"paragraph_id": 36,
"text": "In 1999, Love was awarded an Orville H. Gibson award for Best Female Rock Guitarist. During this time, she starred opposite Jim Carrey as his partner Lynne Margulies in the Andy Kaufman biopic Man on the Moon (1999), followed by a role as William S. Burroughs's wife Joan Vollmer in Beat (2000) alongside Kiefer Sutherland. Love was cast as the lead in John Carpenter's sci-fi horror film Ghosts of Mars, but backed out after injuring her foot. She sued the ex-wife of her then-boyfriend, James Barber, whom Love alleged had caused the injury by running over her foot with her Volvo. The following year, she returned to film opposite Lili Taylor in Julie Johnson (2001), in which she played a woman who has a lesbian relationship; Love won an Outstanding Actress award at L.A.'s Outfest. She was then cast in the thriller Trapped (2002), alongside Kevin Bacon and Charlize Theron. The film was a box-office flop.",
"title": "Life and career"
},
{
"paragraph_id": 37,
"text": "In the interim, Hole had become dormant. In March 2001, Love began a \"punk rock femme supergroup\", Bastard, enlisting Schemel, Veruca Salt co-frontwoman Louise Post, and bassist Gina Crosley. Post recalled: \"[Love] was like, 'Listen, you guys: I've been in my Malibu, manicure, movie-star world for two years, alright? I wanna make a record. And let's leave all that grunge shit behind us, eh? We were being so improvisational, and singing together, and with a trust developing between us. It was the shit.\" The group recorded a demo tape, but by September 2001, Post and Crosley had left, with Post citing \"unhealthy and unprofessional working conditions\". In May 2002, Hole announced their breakup amid continuing litigation with Universal Music Group over their record contract.",
"title": "Life and career"
},
{
"paragraph_id": 38,
"text": "In 1997, Love and former Nirvana members Krist Novoselic and Dave Grohl formed a limited liability company, Nirvana LLC, to manage Nirvana's business dealings. In June 2001, Love filed a lawsuit to dissolve it, blocking the release of unreleased Nirvana material and delaying the release of the Nirvana compilation With the Lights Out. Grohl and Novoselic sued Love, calling her \"irrational, mercurial, self-centered, unmanageable, inconsistent and unpredictable\". She responded with a letter stating that \"Kurt Cobain was Nirvana\" and that she and his family were the \"rightful heirs\" to the Nirvana legacy.",
"title": "Life and career"
},
{
"paragraph_id": 39,
"text": "In February 2003, Love was arrested at Heathrow Airport for disrupting a flight and was banned from Virgin Airlines. In October, she was arrested in Los Angeles after breaking several windows of her producer and then-boyfriend James Barber's home and was charged with being under the influence of a controlled substance; the ordeal resulted in her temporarily losing custody of her daughter.",
"title": "Life and career"
},
{
"paragraph_id": 40,
"text": "After the breakup of Hole, Love began composing material with songwriter Linda Perry, and in July 2003 signed a contract with Virgin Records. She began recording her debut solo album, America's Sweetheart, in France shortly after. Virgin Records released America's Sweetheart in February 2004; it received mixed reviews. Charles Aaron of Spin called it a \"jaw-dropping act of artistic will and a fiery, proper follow-up to 1994's Live Through This\" and awarded it eight out of ten, while Amy Phillips of The Village Voice wrote: \"[Love is] willing to act out the dream of every teenage brat who ever wanted to have a glamorous, high-profile hissyfit, and she turns those egocentric nervous breakdowns into art. Sure, the art becomes less compelling when you've been pulling the same stunts for a decade. But, honestly, is there anybody out there who fucks up better?\" The album sold fewer than 100,000 copies. Love later expressed regret over the record, blaming her drug problems at the time. Shortly after it was released, she told Kurt Loder on TRL: \"I cannot exist as a solo artist. It's a joke.\"",
"title": "Life and career"
},
{
"paragraph_id": 41,
"text": "On March 17, 2004, Love appeared on the Late Show with David Letterman to promote America's Sweetheart. Her appearance drew media coverage when she lifted her shirt multiple times, flashed Letterman, and stood on his desk. The New York Times wrote: \"The episode was not altogether surprising for Ms. Love, 39, whose most public moments have veered from extreme pathos—like the time she read the suicide note of her famous husband, Kurt Cobain, on MTV—to angry feminism to catfights to incoherent ranting.\" Hours later, in the early morning of March 18, Love was arrested in Manhattan for allegedly striking a fan with a microphone stand during a small concert in the East Village. She was released within hours and performed a scheduled concert the following evening at the Bowery Ballroom. Four days later, she called in multiple times to The Howard Stern Show, claiming in broadcast conversations with Stern that the incident had not occurred, and that actress Natasha Lyonne, who was at the concert, was told by the alleged victim that he had been paid $10,000 to file a false claim leading to Love's arrest.",
"title": "Life and career"
},
{
"paragraph_id": 42,
"text": "On July 9, 2004, her 40th birthday, Love was arrested for failing to make a court appearance for the March 2004 charges, and taken to Bellevue Hospital, allegedly incoherent, where she was placed on a 72-hour watch. According to police, she was believed to be a potential danger to herself, but deemed mentally sound and released to a rehab facility two days later. Amidst public criticism and press coverage, comedian Margaret Cho published an opinion piece, \"Courtney Deserves Better from Feminists\", arguing that negative associations of Love with her drug and personal problems (including from feminists) overshadowed her music and wellbeing. Love pleaded guilty in October 2004 to disorderly conduct over the incident in East Village.",
"title": "Life and career"
},
{
"paragraph_id": 43,
"text": "Love's appearance as a roaster on the Comedy Central Roast of Pamela Anderson in August 2005, in which she appeared intoxicated and disheveled, attracted further media attention. One review said that Love \"acted as if she belonged in an institution\". Six days after the broadcast, Love was sentenced to a 28-day lockdown rehab program for being under the influence of a controlled substance, violating her probation. To avoid jail time, she accepted an additional 180-day rehab sentence in September 2005. In November 2005, after completing the program, Love was discharged from the rehab center under the provision that she complete further outpatient rehab. In subsequent interviews, Love said she had been addicted to substances including prescription drugs, cocaine, and crack cocaine. She said she had been sober since completing rehabilitation in 2007, and cited her Soka Gakkai Buddhist practice (which she began in 1988) as integral to her sobriety.",
"title": "Life and career"
},
{
"paragraph_id": 44,
"text": "In the midst of her legal troubles, Love had endeavors in writing and publishing. She co-wrote a semi-autobiographical manga, Princess Ai (Japanese: プリンセス·アイ物語), with Stu Levy, illustrated by Misaho Kujiradou and Ai Yazawa; it was released in three volumes in the United States and Japan between 2004 and 2006. In 2006, Love published a memoir, Dirty Blonde, and began recording her second solo album, How Dirty Girls Get Clean, collaborating again with Perry and Billy Corgan. Love had written several songs, including an anti-cocaine song titled \"Loser Dust\", during her time in rehab in 2005. She told Billboard: \"My hand-eye coordination was so bad [after the drug use], I didn't even know chords anymore. It was like my fingers were frozen. And I wasn't allowed to make noise [in rehab] ... I never thought I would work again.\" Tracks and demos for the album leaked online in 2006, and a documentary, The Return of Courtney Love, detailing the making of the album, aired on the British television network More4 in the fall of that year. A rough acoustic version of \"Never Go Hungry Again\", recorded during an interview for The Times in November, was also released. Incomplete audio clips of the song \"Samantha\", originating from an interview with NPR, were distributed on the internet in 2007.",
"title": "Life and career"
},
{
"paragraph_id": 45,
"text": "In March 2009, fashion designer Dawn Simorangkir brought a libel suit against Love concerning a defamatory post Love made on her Twitter account, which was eventually settled for $450,000. Several months later, in June 2009, NME published an article detailing Love's plan to reunite Hole and release a new album, Nobody's Daughter. In response, former Hole guitarist Eric Erlandson stated in Spin magazine that contractually no reunion could take place without his involvement; therefore Nobody's Daughter would remain Love's solo record, as opposed to a \"Hole\" record. Love responded to Erlandson's comments in a Twitter post, claiming \"he's out of his mind, Hole is my band, my name, and my Trademark\". Nobody's Daughter was released worldwide as a Hole album on April 27, 2010. For the new line-up, Love recruited guitarist Micko Larkin, Shawn Dailey (bass guitar), and Stu Fisher (drums, percussion). Nobody's Daughter featured material written and recorded for Love's unfinished solo album, How Dirty Girls Get Clean, including \"Pacific Coast Highway\", \"Letter to God\", \"Samantha\", and \"Never Go Hungry\", although they were re-produced in the studio with Larkin and engineer Michael Beinhorn. The album's subject matter was largely centered on Love's tumultuous life between 2003 and 2007, and featured a polished folk rock sound, and more acoustic guitar work than previous Hole albums.",
"title": "Life and career"
},
{
"paragraph_id": 46,
"text": "The first single from Nobody's Daughter was \"Skinny Little Bitch\", released to promote the album in March 2010. The album received mixed reviews. Robert Sheffield of Rolling Stone gave the album three out of five, saying Love \"worked hard on these songs, instead of just babbling a bunch of druggy bullshit and assuming people would buy it, the way she did on her 2004 flop, America's Sweetheart\". Sal Cinquemani of Slant Magazine also gave the album three out of five: \"It's Marianne Faithfull's substance-ravaged voice that comes to mind most often while listening to songs like 'Honey' and 'For Once in Your Life'. The latter track is, in fact, one of Love's most raw and vulnerable vocal performances to date ... the song offers a rare glimpse into the mind of a woman who, for the last 15 years, has been as famous for being a rock star as she's been for being a victim.\" Love and the band toured internationally from 2010 into late 2012 promoting the record, with their pre-release shows in London and at South by Southwest receiving critical acclaim. In 2011, Love participated in Hit So Hard, a documentary chronicling bandmate Schemel's time in Hole.",
"title": "Life and career"
},
{
"paragraph_id": 47,
"text": "In May 2012, Love debuted an art collection at Fred Torres Collaborations in New York titled \"And She's Not Even Pretty\", which contained over 40 drawings and paintings by Love composed in ink, colored pencil, pastels, and watercolors. Later in the year, she collaborated with Michael Stipe on the track \"Rio Grande\" for Johnny Depp's sea shanty album Son of Rogues Gallery, and in 2013, co-wrote and contributed vocals on \"Rat A Tat\" from Fall Out Boy's album Save Rock and Roll, also appearing in the song's music video.",
"title": "Life and career"
},
{
"paragraph_id": 48,
"text": "After dropping the Hole name and performing as a solo artist in late 2012, Love appeared in spring 2013 advertisements for Yves Saint Laurent alongside Kim Gordon and Ariel Pink. Love completed a solo tour of North America in mid-2013, which was purported to be in promotion of an upcoming solo album; however, it was ultimately dubbed a \"greatest hits\" tour, and featured songs from Love's and Hole's back catalogue. Love told Billboard at the time that she had recorded eight songs in the studio.",
"title": "Life and career"
},
{
"paragraph_id": 49,
"text": "Love was subject of a second landmark libel lawsuit brought against her in January 2014 by her former attorney Rhonda Holmes, who accused Love of online defamation, seeking $8 million in damages. It was the first case of alleged Twitter-based libel in U.S. history to make it to trial. The jury, however, found in Love's favor. A subsequent defamation lawsuit filed by fashion designer Simorangkir in February 2014, however, resulted in Love being ordered to pay a further $350,000 in recompense.",
"title": "Life and career"
},
{
"paragraph_id": 50,
"text": "On April 22, 2014, Love debuted the song \"You Know My Name\" on BBC Radio 6 to promote her tour of the United Kingdom. It was released as a double A-side single with the song \"Wedding Day\" on May 4, 2014, on her own label Cherry Forever Records via Kobalt Label Services. The tracks were produced by Michael Beinhorn, and feature Tommy Lee on drums. In an interview with the BBC, Love revealed that she and former Hole guitarist Eric Erlandson had reconciled, and had been rehearsing new material together, along with former bassist Melissa Auf der Maur and drummer Patty Schemel, though she did not confirm a reunion of the band. On May 1, 2014, in an interview with Pitchfork, Love commented further on the possibility of Hole reuniting, saying: \"I'm not going to commit to it happening, because we want an element of surprise. There's a lot of is to be dotted and ts to be crossed.\"",
"title": "Life and career"
},
{
"paragraph_id": 51,
"text": "Love was cast in several television series in supporting parts throughout 2014, including the FX series Sons of Anarchy, Revenge, and Lee Daniels' network series Empire in a recurring guest role as Elle Dallas. The track \"Walk Out on Me\", featuring Love, was included on the Empire: Original Soundtrack from Season 1 album, which debuted at number 1 on the Billboard 200. Alexis Petridis of The Guardian praised the track, saying: \"The idea of Courtney Love singing a ballad with a group of gospel singers seems faintly terrifying ... The reality is brilliant. Love's voice fits the careworn lyrics, effortlessly summoning the kind of ravaged darkness that Lana Del Rey nearly ruptures herself trying to conjure up.\"",
"title": "Life and career"
},
{
"paragraph_id": 52,
"text": "In January 2015, Love starred in a New York City stage production, Kansas City Choir Boy, a \"pop opera\" conceived by and co-starring Todd Almond. Charles Isherwood of The New York Times praised her performance, noting a \"soft-edged and bewitching\" stage presence, and wrote: \"Her voice, never the most supple or rangy of instruments, retains the singular sound that made her an electrifying front woman for the band Hole: a single sustained noted can seem to simultaneously contain a plea, a wound and a threat.\" The show toured later in the year, with performances in Boston and Los Angeles. In April 2015, the journalist Anthony Bozza sued Love, alleging a contractual violation regarding his co-writing of her memoir. Love performed as the opening act for Lana Del Rey on her Endless Summer Tour for eight West Coast shows in May and June 2015. During her tenure, Love debuted the single \"Miss Narcissist\", released on Wavves' independent label Ghost Ramp. She was also cast in a supporting role in James Franco's film The Long Home, based on the novel by William Gay, her first film role in over ten years; as of 2022, it remains unreleased.",
"title": "Life and career"
},
{
"paragraph_id": 53,
"text": "In January 2016, Love released a clothing line in collaboration with Sophia Amoruso, \"Love, Courtney\", featuring 18 pieces reflecting her personal style. In November 2016, she began filming the pilot for A Midsummer's Nightmare, a Shakespeare anthology series adapted for Lifetime. She starred as Kitty Menéndez in Menendez: Blood Brothers, a biopic television film based on the lives of Lyle and Erik Menéndez, which premiered on Lifetime in June 2017.",
"title": "Life and career"
},
{
"paragraph_id": 54,
"text": "In October 2017, shortly after the Harvey Weinstein scandal made news, a 2005 video of Love warning young actresses about Weinstein went viral. In the footage, while on the red carpet for the Comedy Central Roast of Pamela Anderson, Love was asked by Natasha Leggero if she had any advice for \"a young girl moving to Hollywood\"; she responded, \"If Harvey Weinstein invites you to a private party in the Four Seasons [hotel], don't go.\" She later tweeted, \"Although I wasn't one of his victims, I was eternally banned by [Creative Artists Agency] for speaking out.\"",
"title": "Life and career"
},
{
"paragraph_id": 55,
"text": "In the same year, Love was cast in Justin Kelly's biopic JT LeRoy, portraying a film producer opposite Laura Dern. In March 2018, she appeared in the music video for Marilyn Manson's \"Tattooed in Reverse\", and in April she appeared as a guest judge on RuPaul's Drag Race. In December, Love was awarded a restraining order against Sam Lutfi, who had acted as her manager for the previous six years, alleging verbal abuse and harassment. Her daughter, Frances, and sister, Jaimee, were also awarded restraining orders against Lutfi. In January 2019, a Los Angeles County judge extended the three-year order to five years, citing Lutfi's tendency to \"prey upon people\".",
"title": "Life and career"
},
{
"paragraph_id": 56,
"text": "On August 18, 2019, Love performed a solo set at the Yola Día festival in Los Angeles, which also featured performances by Cat Power and Lykke Li. On September 9, Love garnered press attention when she publicly criticized Joss Sackler, an heiress to the Sackler family OxyContin fortune, after she allegedly offered Love $100,000 to attend her fashion show during New York Fashion Week. In the same statement, Love indicated that she had relapsed into opioid addiction in 2018, stating that she had recently celebrated a year of sobriety. In October 2019, Love relocated from Los Angeles to London.",
"title": "Life and career"
},
{
"paragraph_id": 57,
"text": "On November 21, 2019, Love recorded the song \"Mother\", written and produced by Lawrence Rothman, as part of the soundtrack for the horror film The Turning (2020). In January 2020, she received the Icon Award at the NME Awards; NME described her as \"one of the most influential singers in alternative culture of the last 30 years\". The following month, she confirmed she was writing a new record which she described as \"really sad ... [I'm] writing in minor chords, and that appeals to my sadness.\" In March 2021, Love said she had been hospitalized with acute anemia in August 2020, which had nearly killed her and reduced her weight to 97 pounds (44 kg); she made a full recovery.",
"title": "Life and career"
},
{
"paragraph_id": 58,
"text": "In August 2022, Love revealed the completion of her memoir, The Girl with the Most Cake, after a nearly ten-year period of writing.",
"title": "Life and career"
},
{
"paragraph_id": 59,
"text": "It was announced on May 15, 2023, that Love had been cast in Assassination, a biographical film about the assassination of John F. Kennedy, directed by David Mamet and co-starring Viggo Mortensen, Shia LaBeouf, Al Pacino, and John Travolta.",
"title": "Life and career"
},
{
"paragraph_id": 60,
"text": "Love has been candid about her diverse musical influences, the earliest being Patti Smith, The Runaways, and The Pretenders, artists she discovered while in juvenile hall as a young teenager. As a child, her first exposure to music was records that her parents received each month through Columbia Record Club. The first record Love owned was Leonard Cohen's Songs of Leonard Cohen (1967), which she obtained from her mother: \"He was so lyric-conscious and morbid, and I was a pretty morbid kid\", she recalled. As a teenager, she named Flipper, Kate Bush, Soft Cell, Joni Mitchell, Laura Nyro, Lou Reed, and Dead Kennedys among her favorite artists. While in Dublin at age fifteen, Love attended a Virgin Prunes concert, an event she credited as being a pivotal influence: \"I had never seen so much sex, snarl, poetry, evil, restraint, grace, filth, raw power and the very essence of rock and roll\", she recalled. \"[I had seen] U2 [who] gave me lashes of love and inspiration, and a few nights later the Virgin Prunes fucked–me–up.\" Decades later, in 2009, Love introduced the band's frontman Gavin Friday at a Carnegie Hall event, and performed a song with him.",
"title": "Artistry"
},
{
"paragraph_id": 61,
"text": "Though often associated with punk music, Love has noted that her most significant musical influences have been post-punk and new wave artists. Commenting in 2021, Love said:",
"title": "Artistry"
},
{
"paragraph_id": 62,
"text": "There's this idea of \"Courtney is punk and stuck in 1995!\" but that's not the case. I was more [influenced by] new wave or post-punk. My number one greatest song of all time is \"Love Will Tear Us Apart\" by Joy Division, and I will take no fucking prisoners in that battle. But the band that affected me more than even Leonard Cohen and Bob Dylan was Echo and the Bunnymen.",
"title": "Artistry"
},
{
"paragraph_id": 63,
"text": "Over the years, Love has also named several other new wave and post-punk bands as influences, including The Smiths, Siouxsie and the Banshees, Television, and Bauhaus.",
"title": "Artistry"
},
{
"paragraph_id": 64,
"text": "Love's diverse genre interests were illustrated in a 1991 interview with Flipside, in which she stated: \"There's a part of me that wants to have a grindcore band and another that wants to have a Raspberries-type pop band.\" Discussing the abrasive sound of Hole's debut album, she said she felt she had to \"catch up with all my hip peers who'd gone all indie on me, and who made fun of me for liking R.E.M. and The Smiths.\" She has also embraced the influence of experimental artists and punk rock groups, including Sonic Youth, Swans, Big Black, Diamanda Galás, the Germs, and The Stooges. While writing Celebrity Skin, she drew influence from Neil Young and My Bloody Valentine. She has also cited her contemporary PJ Harvey as an influence, saying: \"The one rock star that makes me know I'm shit is Polly Harvey. I'm nothing next to the purity that she experiences.\"",
"title": "Artistry"
},
{
"paragraph_id": 65,
"text": "Literature and poetry have often been a major influence on her songwriting; Love said she had \"always wanted to be a poet, but there was no money in it.\" She has named the works of T.S. Eliot and Charles Baudelaire as influential, and referenced works by Dante Rossetti, William Shakespeare, Rudyard Kipling, and Anne Sexton in her lyrics.",
"title": "Artistry"
},
{
"paragraph_id": 66,
"text": "Musically, Love's work with Hole and her solo efforts have been characterized as alternative rock; Hole's early material, however, was described by critics as being stylistically closer to grindcore and aggressive punk rock. Spin's October 1991 review of Hole's first album noted Love's layering of harsh and abrasive riffs buried more sophisticated musical arrangements. In 1998, she stated that Hole had \"always been a pop band. We always had a subtext of pop. I always talked about it, if you go back ... what'll sound like some weird Sonic Youth tuning back then to you was sounding like the Raspberries to me, in my demented pop framework.\"",
"title": "Artistry"
},
{
"paragraph_id": 67,
"text": "Love's lyrical content is composed from a female's point of view, and her lyrics have been described as \"literate and mordant\" and noted by scholars for \"articulating a third-wave feminist consciousness.\" Simon Reynolds, in reviewing Hole's debut album, noted: \"Ms. Love's songs explore the full spectrum of female emotions, from vulnerability to rage. The songs are fueled by adolescent traumas, feelings of disgust about the body, passionate friendships with women and the desire to escape domesticity. Her lyrical style could be described as emotional nudism.\" Journalist and critic Kim France, in critiquing Love's lyrics, referred to her as a \"dark genius\" and likened her work to that of Anne Sexton.",
"title": "Artistry"
},
{
"paragraph_id": 68,
"text": "Love has remarked that lyrics have always been the most important component of songwriting for her: \"The important thing for me ... is it has to look good on the page. I mean, you can love Led Zeppelin and not love their lyrics ... but I made a big effort in my career to have what's on the page mean something.\" Common themes present in Love's lyrics during her early career included body image, rape, suicide, conformity, pregnancy, prostitution, and death. In a 1991 interview with Everett True, she said: \"I try to place [beautiful imagery] next to fucked up imagery, because that's how I view things ... I sometimes feel that no one's taken the time to write about certain things in rock, that there's a certain female point of view that's never been given space.\"",
"title": "Artistry"
},
{
"paragraph_id": 69,
"text": "Critics have noted that Love's later musical work is more lyrically introspective. Celebrity Skin and America's Sweetheart are lyrically centered on celebrity life, Hollywood, and drug addiction, while continuing Love's interest in vanity and body image. Nobody's Daughter was lyrically reflective of Love's past relationships and her struggle for sobriety, with the majority of its lyrics written while she was in rehab in 2006.",
"title": "Artistry"
},
{
"paragraph_id": 70,
"text": "Love has a contralto vocal range. According to Love, she never wanted to be a singer, but rather aspired to be a skilled guitarist: \"I'm such a lazy bastard though that I never did that\", she said. \"I was always the only person with the nerve to sing, and so I got stuck with it.\" She has been regularly noted by critics for her husky vocals as well as her \"banshee [-like]\" screaming abilities. Her vocals have been compared to those of Johnny Rotten, and David Fricke of Rolling Stone described them as \"lung-busting\" and \"a corrosive, lunatic wail\". Upon the release of Hole's 2010 album, Nobody's Daughter, Amanda Petrusich of Pitchfork compared Love's raspy, unpolished vocals to those of Bob Dylan. In 2023, Rolling Stone ranked Love at number 130 on its list of the 200 Greatest Singers of All Time.",
"title": "Artistry"
},
{
"paragraph_id": 71,
"text": "She has played a variety of Fender guitars throughout her career, including a Jaguar and a vintage 1965 Jazzmaster; the latter was purchased by the Hard Rock Cafe and is on display in New York City. Between 1989 and 1991, Love primarily played a Rickenbacker 425 because she \"preferred the 3/4 neck\", but she destroyed the guitar onstage at a 1991 concert opening for the Smashing Pumpkins. In the mid-1990s, she often played a guitar made by Mercury, an obscure company that manufactured custom guitars, as well as a Univox Hi-Flier. Fender's Vista Venus, designed by Love in 1998, was partially inspired by Rickenbacker guitars as well as her Mercury. During tours after the release of Nobody's Daughter (post-2010), Love has played a Rickenbacker 360 onstage. Her setup has included Fender tube gear, Matchless, Ampeg, Silvertone and a solid-state 1976 Randall Commander.",
"title": "Artistry"
},
{
"paragraph_id": 72,
"text": "Love has referred to herself as \"a shit guitar player\", further commenting in a 2014 interview: \"I can still write a song, but [the guitar playing] sounds like shit ... I used to be a good rhythm player but I am no longer dependable.\" Throughout her career, she has also garnered a reputation for unpredictable live shows. In the 1990s, her performances with Hole were characterized by confrontational behavior, with Love stage diving, smashing guitars or throwing them into the audience, wandering into the crowd at the end of sets, and engaging in sometimes incoherent rants. Critics and journalists have noted Love for her comical, often stream-of-consciousness-like stage banter. Music journalist Robert Hilburn wrote in 1993 that, \"rather than simply scripted patter, Love's comments between songs [have] the natural feel of someone who is sharing her immediate feelings.\" In a review of a live performance published in 2010, it was noted that Love's onstage \"one-liners [were] worthy of the Comedy Store.\"",
"title": "Artistry"
},
{
"paragraph_id": 73,
"text": "In 1993, Love and husband Kurt Cobain performed an acoustic set together at the Rock Against Rape benefit in Los Angeles, which raised awareness and provided resources for victims of sexual abuse. In 2000, Love publicly advocated for reform of the record industry in a personal letter published by Salon. In the letter, Love said: \"It's not piracy when kids swap music over the Internet using Napster or Gnutella or Freenet or iMesh or beaming their CDs into a My.MP3.com or MyPlay.com music locker. It's piracy when those guys that run those companies make side deals with the cartel lawyers and label heads so that they can be 'the label's friend', and not the artists'.\" In a subsequent interview with Carrie Fisher, she said that she was interested in starting a union for recording artists, and also discussed race relations in the music industry, advocating for record companies to \"put money back into the black community [whom] white people have been stealing from for years.\"",
"title": "Philanthropy"
},
{
"paragraph_id": 74,
"text": "Love has been a long-standing supporter of LGBT causes. She has frequently collaborated with Los Angeles Gay and Lesbian Center, taking part in the center's \"An Evening with Women\" events. The proceeds of the event help provide food and shelter for homeless youth; services for seniors; legal assistance; domestic violence services; health and mental health services, and cultural arts programs. Love participated with Linda Perry for the event in 2012, and performed alongside Aimee Mann and comedian Wanda Sykes. Speaking on her collaboration on the event, Love said: \"Seven thousand kids in Los Angeles a year go out on the street, and forty percent of those kids are gay, lesbian, or transgender. They come out to their parents, and become homeless ... for whatever reason, I don't really know why, but gay men have a lot of foundations—I've played many of them—but the lesbian side of it doesn't have as much money and/or donors, so we're excited that this has grown to cover women and women's affairs.\"",
"title": "Philanthropy"
},
{
"paragraph_id": 75,
"text": "She has also contributed to AIDS organizations, partaking in benefits for amfAR and the RED Campaign. In May 2011, she donated six of her husband Cobain's personal vinyl records for auction at Mariska Hargitay's Joyful Heart Foundation event for victims of child abuse, rape, and domestic violence. She has also supported the Sophie Lancaster Foundation.",
"title": "Philanthropy"
},
{
"paragraph_id": 76,
"text": "Love has had an impact on female-fronted alternative acts and performers. She has been cited as influential on young female instrumentalists in particular, having once infamously proclaimed: \"I want every girl in the world to pick up a guitar and start screaming ... I strap on that motherfucking guitar and you cannot fuck with me. That's my feeling.\" In The Electric Guitar: A History of an American Icon, it is noted:",
"title": "Influence"
},
{
"paragraph_id": 77,
"text": "[Love] truly lived up to Paul Westerberg's (The Replacements) assessment of pretty girls \"playing makeup/wearing guitar\" ... She frequently stood on stage, microphone in hand and foot on monitor, and simply let her Fender guitar dangle around her neck. She truly embodied the empowerment that came with playing the electric guitar ... Love depended heavily upon her male lead guitar foil Eric Erlandson, but the rest of her band remained exclusively female throughout several lineup changes.",
"title": "Influence"
},
{
"paragraph_id": 78,
"text": "When you're dying and your life is flashing before your eyes ... you're gonna be thinking about the great things you did, the horrible things that you did, the emotional impact that someone had on you and that you had on somebody else. Those are the things that are relevant. To have some sort of emotional impact that transcends time, that's great.",
"title": "Influence"
},
{
"paragraph_id": 79,
"text": "–Love on having a cultural impact, 1997",
"title": "Influence"
},
{
"paragraph_id": 80,
"text": "With over 3 million records sold in the United States alone, Hole became one of the most successful rock bands of all time fronted by a woman. VH1 ranked Love no. 69 in their list of The 100 Greatest Women in Music History in 2012. In 2015, the Phoenix New Times declared Love the number one greatest female rock star of all time, writing: \"To build a perfect rock star, there are several crucial ingredients: musical talent, physical attractiveness, tumultuous relationships, substance abuse, and public meltdowns, just to name a few. These days, Love seems to have rebounded from her epic tailspin and has leveled out in a slightly more normal manner, but there's no doubt that her life to date is the type of story people wouldn't believe in a novel or a movie.\"",
"title": "Influence"
},
{
"paragraph_id": 81,
"text": "Among the alternative musicians who have cited Love as an influence are Scout Niblett; Brody Dalle of The Distillers; Dee Dee Penny of Dum Dum Girls; Victoria Legrand of Beach House; Annie Hardy of Giant Drag; and Nine Black Alps. Contemporary female pop artists Lana Del Rey, Avril Lavigne, Tove Lo, and Sky Ferreira have also cited Love as an influence. Love has frequently been recognized as the most high-profile contributor of feminist music during the 1990s, and for \"subverting [the] mainstream expectations of how a woman should look, act, and sound.\" According to music journalist Maria Raha, \"Hole was the highest-profile female-fronted band of the '90s to openly and directly sing about feminism.\" Patti Smith, a major influence of Love's, also praised her, saying: \"I hate genderizing things ... [but] when I heard Hole, I was amazed to hear a girl sing like that. Janis Joplin was her own thing; she was into Big Mama Thornton and Bessie Smith. But what Courtney Love does, I'd never heard a girl do that.\"",
"title": "Influence"
},
{
"paragraph_id": 82,
"text": "She has also been a gay icon since the mid-1990s, and has jokingly referred to her fanbase as consisting of \"females, gay guys, and a few advanced, evolved heterosexual men.\" Love's aesthetic image, particularly in the early 1990s, also became influential and was dubbed \"kinderwhore\" by critics and media. The subversive fashion mainly consisted of vintage babydoll dresses accompanied by smeared makeup and red lipstick. MTV reporter Kurt Loder described Love as looking like \"a debauched rag doll\" onstage. Love later said she had been influenced by the fashion of Chrissy Amphlett of the Divinyls. Interviewed in 1994, Love commented \"I would like to think–in my heart of hearts–that I'm changing some psychosexual aspects of rock music. Not that I'm so desirable. I didn't do the kinder-whore thing because I thought I was so hot. When I see the look used to make one more appealing, it pisses me off. When I started, it was a What Ever Happened to Baby Jane? thing. My angle was irony.\"",
"title": "Influence"
}
] | Courtney Michelle Love is an American singer, guitarist, songwriter, and actress. A figure in the alternative and grunge scenes of the 1990s, she has had a career spanning four decades. She rose to prominence as the lead vocalist and rhythm guitarist of the alternative rock band Hole, which she formed in 1989. Love has drawn public attention for her uninhibited live performances and confrontational lyrics, as well as her highly publicized personal life following her marriage to Nirvana frontman Kurt Cobain. In 2020, NME named her one of the most influential singers in alternative culture of the last 30 years. Love had an itinerant childhood, but was primarily raised in Portland, Oregon, where she played in a series of short-lived bands and was active in the local punk scene. After a brief stint in a juvenile hall, she spent a year living in Dublin and Liverpool before returning to the United States and pursuing an acting career. She appeared in supporting roles in the Alex Cox films Sid and Nancy (1986) and Straight to Hell (1987) before forming the band Hole in Los Angeles with guitarist Eric Erlandson. The group received critical acclaim from the underground rock press for their 1991 debut album, produced by Kim Gordon, while their second release, Live Through This (1994), was met with critical accolades and multi-platinum sales. In 1995, Love returned to acting, earning a Golden Globe Award nomination for her performance as Althea Leasure in Miloš Forman's The People vs. Larry Flynt (1996), which established her as a mainstream actress. The following year, Hole's third album, Celebrity Skin (1998), was nominated for three Grammy Awards. Love continued to work as an actress into the early 2000s, appearing in big-budget pictures such as Man on the Moon (1999) and Trapped (2002), before releasing her first solo album, America's Sweetheart, in 2004. The subsequent several years were marred by publicity surrounding Love's legal troubles and drug relapse, which resulted in a mandatory lockdown rehabilitation sentence in 2005 while she was writing a second solo album. That project became Nobody's Daughter, released in 2010 as a Hole album but without the former Hole lineup. Between 2014 and 2015, Love released two solo singles and returned to acting in the network series Sons of Anarchy and Empire. In 2020, she confirmed she was writing new music. Love has also been active as a writer; she co-created and co-wrote three volumes of a manga, Princess Ai, between 2004 and 2006, and wrote a memoir, Dirty Blonde (2006). | 2001-11-03T11:59:55Z | 2023-12-31T01:43:52Z | [
"Template:Cite journal",
"Template:Closed access",
"Template:Short description",
"Template:About",
"Template:En dash",
"Template:Refend",
"Template:IMDb name",
"Template:Curlie",
"Template:Authority control",
"Template:Pp-blp",
"Template:Main",
"Template:'",
"Template:Cite interview",
"Template:Cite video",
"Template:YouTube",
"Template:Amg name",
"Template:Navboxes",
"Template:Sfn",
"Template:Convert",
"Template:Open access",
"Template:Cbignore",
"Template:Sister project links",
"Template:Courtney Love",
"Template:See also",
"Template:Free access",
"Template:Cite web",
"Template:Cite news",
"Template:Cite magazine",
"Template:Cite episode",
"Template:Refbegin",
"Template:Discogs artist",
"Template:Featured article",
"Template:Use mdy dates",
"Template:Faith No More",
"Template:Notelist",
"Template:Cite AV media notes",
"Template:Use American English",
"Template:Abbr",
"Template:Efn",
"Template:Blockquote",
"Template:Reflist",
"Template:Hole",
"Template:Pp-move",
"Template:Infobox person",
"Template:Small",
"Template:Cite book",
"Template:Cite AV media",
"Template:Quote box",
"Template:Listen"
] | https://en.wikipedia.org/wiki/Courtney_Love |
5,657 | Cow (disambiguation) | Cow is a colloquial term for cattle, and the name of female cattle.
Cow, cows or COW may also refer to: | [
{
"paragraph_id": 0,
"text": "Cow is a colloquial term for cattle, and the name of female cattle.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cow, cows or COW may also refer to:",
"title": ""
}
] | Cow is a colloquial term for cattle, and the name of female cattle. Cow, cows or COW may also refer to: | 2001-05-17T08:19:29Z | 2023-12-12T23:53:45Z | [
"Template:TOC right",
"Template:Anchor",
"Template:Lookfrom",
"Template:Intitle",
"Template:Disambiguation",
"Template:Wiktionary"
] | https://en.wikipedia.org/wiki/Cow_(disambiguation) |
5,658 | Human cannibalism | Human cannibalism is the act or practice of humans eating the flesh or internal organs of other human beings. A person who practices cannibalism is called a cannibal. The meaning of "cannibalism" has been extended into zoology to describe animals consuming parts of individuals of the same species as food.
Neanderthals are believed to have practised cannibalism, and may have been eaten by anatomically modern humans. Cannibalism was occasionally practised in Egypt during ancient and Roman times, as well as later during severe famines. The Island Caribs of the Lesser Antilles, whose name is the origin of the word cannibal, acquired a long-standing reputation as eaters of human flesh, reconfirmed when their legends were recorded in the 17th century. Some controversy exists over the accuracy of these legends and the prevalence of actual cannibalism in the culture.
Cannibalism has been well documented in much of the world, including Fiji (once nicknamed the "Cannibal Isles"), the Amazon Basin, the Congo, and the Māori people of New Zealand. Cannibalism was also practised in New Guinea and in parts of the Solomon Islands, and human flesh was sold at markets in some parts of Melanesia and of the Congo Basin. A form of cannibalism popular in early modern Europe was the consumption of body parts or blood for medical purposes. Reaching its height during the 17th century, this practice continued in some cases into the second half of the 19th century.
Cannibalism has occasionally been practised as a last resort by people suffering from famine. Well-known examples include the ill-fated Donner Party (1846–1847) and the crash of Uruguayan Air Force Flight 571 (1972), after which the survivors ate the bodies of the dead. Additionally, there are cases of people engaging in cannibalism for sexual pleasure, such as Albert Fish, Issei Sagawa, Jeffrey Dahmer, and Armin Meiwes. Cannibalism has been both practised and fiercely condemned in several recent wars, especially in Liberia and the Democratic Republic of the Congo. It was still practised in Papua New Guinea as of 2012, for cultural reasons.
Cannibalism has been said to test the bounds of cultural relativism because it challenges anthropologists "to define what is or is not beyond the pale of acceptable human behavior". A few scholars argue that no firm evidence exists that cannibalism has ever been a socially acceptable practice anywhere in the world, but such views have been largely rejected as irreconcilable with the actual evidence.
The word "cannibal" is derived from Spanish caníbal or caríbal, originally used as a name for the Caribs, a people from the West Indies said to have eaten human flesh. The older term anthropophagy, meaning "eating humans", is also used for human cannibalism.
Cannibalism has been practised under a variety of circumstances and for various motives. To adequately express this diversity, Shirley Lindenbaum suggests that "it might be better to talk about 'cannibalisms'" in the plural.
One major distinction is whether cannibal acts are accepted by the culture in which they occur – institutionalized cannibalism – or whether they are merely practised under starvation conditions to ensure one's immediate survival – survival cannibalism – or by isolated individuals considered criminal and often pathological by society at large – cannibalism as psychopathology or "aberrant behavior".
Institutionalized cannibalism, sometimes also called "learned cannibalism", is the consumption of human body parts as "an institutionalized practice" generally accepted in the culture where it occurs.
By contrast, survival cannibalism means "the consumption of others under conditions of starvation such as shipwreck, military siege, and famine, in which persons normally averse to the idea are driven [to it] by the will to live". Also known as famine cannibalism, such forms of cannibalism resorted to only in situations of extreme necessity have occurred in many cultures where cannibalism is otherwise clearly rejected. The survivors of the shipwrecks of the Essex and Méduse in the 19th century are said to have engaged in cannibalism, as did the members of Franklin's lost expedition and the Donner Party. Such cases often involve only necro-cannibalism (eating the corpse of someone already dead) as opposed to homicidal cannibalism (killing someone for food). In modern English law, the latter is always considered a crime, even in the most trying circumstances. The case of R v Dudley and Stephens, in which two men were found guilty of murder for killing and eating a cabin boy while adrift at sea in a lifeboat, set the precedent that necessity is no defence to a charge of murder. This decision outlawed and effectively ended the practice of shipwrecked sailors drawing lots in order to determine who would be killed and eaten to prevent the others from starving, a time-honoured practice formerly known as a "custom of the sea".
In other cases, cannibalism is an expression of a psychopathology or mental disorder, condemned by the society in which it occurs and "considered to be an indicator of [a] severe personality disorder or psychosis". Well-known cases include Albert Fish, Issei Sagawa, and Armin Meiwes. Fantasies of cannibalism, whether acted out or not, are not specifically mentioned in manuals of mental disorders such as the DSM, presumably because at least serious cases (that lead to murder) are very rare.
Within institutionalized cannibalism, exocannibalism is often distinguished from endocannibalism. Endocannibalism refers to the consumption of a person from the same community. Often it is a part of a funerary ceremony, similar to burial or cremation in other cultures. The consumption of the recently deceased in such rites can be considered "an act of affection" and a major part of the grieving process. It has also been explained as a way of guiding the souls of the dead into the bodies of living descendants.
In contrast, exocannibalism is the consumption of a person from outside the community. It is frequently "an act of aggression, often in the context of warfare", where the flesh of killed or captured enemies may be eaten to celebrate one's victory over them.
Both types of cannibalism can also be fuelled by the belief that eating a person's flesh or internal organs will endow the cannibal with some of the characteristics of the deceased. However, several authors investigating exocannibalism in New Zealand, New Guinea, and the Congo Basin observe that such beliefs were absent in these regions.
A further type, different from both exo- and endocannibalism, is autocannibalism (also called autophagy or self-cannibalism), "the act of eating parts of oneself". It seems never to have been an institutionalized practice, but it occasionally occurs as pathological behaviour or for other reasons, such as curiosity. Also on record are instances of forced autocannibalism committed as acts of aggression, where individuals are forced to eat parts of their own bodies as a form of torture.
Exocannibalism is thus often associated with the consumption of enemies as an act of aggression, a practice also known as war cannibalism. Endocannibalism is often associated with the consumption of deceased relatives in funerary rites driven by affection – a practice known as funerary or mortuary cannibalism. But acts of institutionalized cannibalism can also be driven by various other motives, for which additional names have been coined.
Medicinal cannibalism (also called medical cannibalism) means "the ingestion of human tissue ... as a supposed medicine or tonic". In contrast to other forms of cannibalism, which Europeans generally frowned upon, the "medicinal ingestion" of various "human body parts was widely practiced throughout Europe from the sixteenth to the eighteenth centuries", with early records of the practice going back to the first century CE. It was also frequently practised in China.
Sacrificial cannibalism refers to the consumption of the flesh of victims of human sacrifice, for example among the Aztecs. Human and animal remains excavated in Knossos, Crete, have been interpreted as evidence of a ritual in which children and sheep were sacrificed and eaten together during the Bronze Age. According to Ancient Roman reports, the Celts in Britain practised sacrificial cannibalism, and archaeological evidence backing these claims has by now been found.
Infanticidal cannibalism or cannibalistic infanticide refers to cases where newborns or infants are killed because they are "considered unwanted or unfit to live" and then "consumed by the mother, father, both parents or close relatives". Infanticide followed by cannibalism was practised in various regions, but is particularly well documented among Aboriginal Australians. Among animals, such behaviour is called filial cannibalism, and it is common in many species, especially among fish.
Human predation is the hunting of people from unrelated and possibly hostile groups in order to eat them. In parts of the Southern New Guinea lowland rain forests, hunting people "was an opportunistic extension of seasonal foraging or pillaging strategies", with human bodies just as welcome as those of animals as sources of protein, according to the anthropologist Bruce M. Knauft. As populations living near coasts and rivers were usually better nourished and hence often physically larger and stronger than those living inland, they "raided inland 'bush' peoples with impunity and often with little fear of retaliation". Cases of human predation are also on record for the neighbouring Bismarck Archipelago and for Australia. In the Congo Basin, there lived groups such as the Zappo Zaps who hunted humans for food even when game was plentiful.
The term gastronomic cannibalism has been suggested for cases where human flesh is eaten to "provide a supplement to the regular diet" – thus essentially for its nutritional value – or, in an alternative definition, for cases where it is "eaten without ceremony (other than culinary), in the same manner as the flesh of any other animal". While the term has been criticized as being too vague to clearly identify a specific type of cannibalism, various records indicate that nutritional or culinary concerns could indeed play a role in such acts even outside of periods of starvation. Referring to the Congo Basin, where many of the eaten were butchered slaves rather than enemies killed in war, the anthropologist Emil Torday notes that "the most common [reason for cannibalism] was simply gastronomic: the natives loved 'the flesh that speaks' [as human flesh was commonly called] and paid for it". The historian Key Ray Chong observes that, throughout Chinese history, "learned cannibalism was often practiced ... for culinary appreciation".
In his popular book Guns, Germs and Steel, Jared Diamond suggests that "protein starvation is probably also the ultimate reason why cannibalism was widespread in traditional New Guinea highland societies", and both in New Zealand and Fiji, cannibals explained their acts as due to a lack of animal meat. In Liberia, a former cannibal argued that it would have been wasteful to let the flesh of killed enemies spoil, and eaters of human flesh in the Bismarck Archipelago expressed the same sentiment. In many cases, human flesh was also described as particularly delicious, especially when it came from women, children, or both. Such statements are on record for various regions and peoples, including the Aztecs, today's Liberia and Nigeria, the Fang people in west-central Africa, the Congo Basin, China up to the 14th century, Sumatra, Borneo, Australia, New Zealand, and Fiji as well as various other Melanesian and Polynesian islands.
There is a debate among anthropologists on how important functionalist reasons are for the understanding of institutionalized cannibalism. Diamond is not alone in suggesting "that the consumption of human flesh was of nutritional benefit for some populations in New Guinea" and the same case has been made for other "tropical peoples ... exploiting a diverse range of animal foods", including human flesh. The materialist anthropologist Marvin Harris argued that a "shortage of animal protein" was also the underlying reason for Aztec cannibalism. The cultural anthropologist Marshall Sahlins, on the other hand, rejected such explanations as overly simplistic, stressing that cannibal customs must be regarded as "complex phenomen[a]" with "myriad attributes" which can only be understood if one considers "symbolism, ritual, and cosmology" in addition to their "practical function".
While not a motive, the term innocent cannibalism has been suggested for cases of people eating human flesh without knowing what they are eating. It is a subject of myths, such as the myth of Thyestes who unknowingly ate the flesh of his own sons. There are also actual cases on record, for example from the Congo Basin, where cannibalism had been quite widespread and where even in the 1950s travellers were sometimes served a meat dish, learning only afterwards that the meat had been of human origin.
In pre-modern medicine, an explanation given by the now-discredited theory of humorism for cannibalism was that it was caused by a black acrimonious humor, which, being lodged in the linings of the ventricles of the heart, produced a voracity for human flesh. On the other hand, the French philosopher Michel de Montaigne understood war cannibalism as a way of expressing vengeance and hatred towards one's enemies and celebrating one's victory over them, thus giving an interpretation that is close to modern explanations. He also pointed out that some acts of Europeans in his own time could be considered as equally barbarous, making his essay "Of Cannibals" (c. 1580) a precursor to later ideas of cultural relativism.
Cases of people eating human livers and hearts, especially of enemies, have been reported from across the world. After the Battle of Uhud (625), Hind bint Utba ate (or at least attempted to) the liver of Hamza ibn Abd al-Muttalib, an uncle of the prophet Muhammad. At that time, the liver was considered "the seat of life". French Catholics ate livers and hearts of Huguenots at the St. Bartholomew's Day massacre in 1572, in some cases also offering them for sale.
In China, medical cannibalism was practised over centuries. People voluntarily cut off parts of their own bodies, including parts of their livers, and boiled them to cure ailing relatives. Children were sometimes killed because eating their boiled hearts was considered a good way of extending one's life. Emperor Wuzong of Tang supposedly ordered provincial officials to send him "the hearts and livers of fifteen-year-old boys and girls" when he had become seriously ill, hoping in vain this medicine would cure him. Later, private individuals sometimes followed his example, paying soldiers who kidnapped preteen children for their kitchens.
When "human flesh and organs were sold openly at the marketplace" during the Taiping Rebellion in 1850–1864, human hearts became a popular dish, according to some who afterwards freely admitted having consumed them. According to a missionary's report from the brutal suppression of the Dungan Revolt of 1895–1896 in northwestern China, "thousands of men, women and children were ruthlessly massacred by the imperial soldiers" and "many a meal of human hearts and livers was partaken of by soldiers", supposedly out of a belief that this would give them "the courage their enemies had displayed".
During the Cultural Revolution (1966–1976), hundreds of incidents of cannibalism occurred, mostly motivated by hatred against supposed "class enemies", but sometimes also by health concerns. In a case recorded by the local authorities, a school teacher in Mengshan County "heard that consuming a 'beauty's heart' could cure disease". He then chose a 13- or 14-year-old student of his and publicly denounced her as a member of the enemy faction, which was enough to get her killed by an angry mob. After the others had left, he "cut open the girl's chest ..., dug out her heart, and took it home to enjoy". In a further case that took place in Wuxuan County, likewise in the Guangxi region, three brothers were beaten to death as supposed enemies; afterwards their livers were cut out, baked, and consumed "as medicine". According to the Chinese author Zheng Yi, who researched these events, "the consumption of human liver was mentioned at least fifty or sixty times" in just a small number of archival documents. He talked with a man who had eaten human liver and told him that "barbecued liver is delicious".
In World War II, Japanese soldiers ate the livers of killed Americans in the Chichijima incident.
During a massacre of the Madurese minority in the Indonesian part of Borneo in 1999, reporter Richard Lloyd Parry met a young cannibal who had just participated in a "human barbecue" and told him without hesitation: "It tastes just like chicken. Especially the liver – just the same as chicken." In 2013, during the Syrian civil war, Syrian rebel Abu Sakkar was filmed eating parts of the lung or liver of a government soldier while declaring that "We will eat your hearts and your livers you soldiers of Bashar the dog".
A well-known case of mortuary cannibalism is that of the Fore tribe in New Guinea, which resulted in the spread of the prion disease kuru. Although the Fore's mortuary cannibalism was well-documented, the practice had ceased before the cause of the disease was recognized. However, some scholars argue that although post-mortem dismemberment was the practice during funeral rites, cannibalism was not. Marvin Harris theorizes that it happened during a famine period coincident with the arrival of Europeans and was rationalized as a religious rite.
In 2003, a publication in Science received a large amount of press attention when it suggested that early humans may have practised extensive cannibalism. According to this research, genetic markers commonly found in modern humans worldwide suggest that today many people carry a gene that evolved as protection against the brain diseases that can be spread by consuming human brain tissue. A 2006 reanalysis of the data questioned this hypothesis, claiming that the original study suffered from a data collection bias that led to an erroneous conclusion: some of the incidents of cannibalism used in the analysis were attributable not to local cultures, but to explorers, stranded seafarers, or escaped convicts. The original authors published a subsequent paper in 2008 defending their conclusions.
Cannibalism features in the folklore and legends of many cultures and is most often attributed to evil characters or as extreme retribution for some wrongdoing. Examples include the witch in "Hansel and Gretel", Lamia of Greek mythology and the witch Baba Yaga of Slavic folklore.
A number of stories in Greek mythology involve cannibalism, in particular the eating of close family members, e.g., the stories of Thyestes, Tereus and especially Cronus, who became Saturn in the Roman pantheon. The story of Tantalus is another example, though here a family member is prepared for consumption by others.
The wendigo is a creature appearing in the legends of the Algonquian people. It is thought of variously as a malevolent cannibalistic spirit that could possess humans or a monster that humans could physically transform into. Those who indulged in cannibalism were at particular risk, and the legend appears to have reinforced this practice as taboo. The Zuni people tell the story of the Átahsaia – a giant who cannibalizes his fellow demons and seeks out human flesh.
The wechuge is a demonic cannibalistic creature that seeks out human flesh appearing in the mythology of the Athabaskan people. It is said to be half monster and half human-like; however, it has many shapes and forms.
William Arens, author of The Man-Eating Myth: Anthropology and Anthropophagy, questions the credibility of reports of cannibalism and argues that the description by one group of people of another people as cannibals is a consistent and demonstrable ideological and rhetorical device to establish perceived cultural superiority. Arens bases his thesis on a detailed analysis of various "classic" cases of cannibalism reported by explorers, missionaries, and anthropologists. He claims that all of them were steeped in racism, unsubstantiated, or based on second-hand or hearsay evidence. Though widely discussed, Arens's book generally failed to convince the academic community. Claude Lévi-Strauss observes that, in spite of his "brilliant but superficial book ... [n]o serious ethnologist disputes the reality of cannibalism". Shirley Lindenbaum notes that, while after "Arens['s] ... provocative suggestion ... many anthropologists ... reevaluated their data", the outcome was an improved and "more nuanced" understanding of where, why and under which circumstances cannibalism took place rather than a confirmation of his claims: "Anthropologists working in the Americas, Africa, and Melanesia now acknowledge that institutionalized cannibalism occurred in some places at some times. Archaeologists and evolutionary biologists are taking cannibalism seriously."
Lindenbaum and others point out that Arens displays a "strong ethnocentrism". His refusal to admit that institutionalized cannibalism ever existed seems to be motivated by the implied idea "that cannibalism is the worst thing of all" – worse than any other behaviour people engaged in, and therefore uniquely suited to vilifying others. Kajsa Ekholm Friedman calls this "a remarkable opinion in a culture [the European/American one] that has been capable of the most extreme cruelty and destructive behavior, both at home and in other parts of the world."
She observes that, contrary to European values and expectations, "in many parts of the Congo region there was no negative evaluation of cannibalism. On the contrary, people expressed their strong appreciation of this very special meat and could not understand the hysterical reactions from the white man's side." And why indeed, she goes on to ask, should they have had the same negative reactions to cannibalism as Arens and his contemporaries? Implicitly he assumes that everybody throughout human history must have shared the strong taboo placed by his own culture on cannibalism, but he never attempts to explain why this should be so, and "neither logic nor historical evidence justifies" this viewpoint, as Christian Siefkes commented.
Accusations of cannibalism could be used to characterize indigenous peoples as "uncivilized", "primitive", or even "inhuman." While this means that the reliability of reports of cannibal practices must be carefully evaluated especially if their wording suggests such a context, many actual accounts do not fit this pattern. The earliest firsthand account of cannibal customs in the Caribbean comes from Diego Álvarez Chanca, who accompanied Christopher Columbus on his second voyage. His description of the customs of the Caribs of Guadeloupe includes their cannibalism (men killed or captured in war were eaten, while captured boys were "castrated [and used as] servants until they gr[e]w up, when they [were] slaughtered" for consumption), but he nevertheless notes "that these people are more civilized than the other islanders" (who did not practice cannibalism). Nor was he an exception. Among the earliest reports of cannibalism in the Caribbean and the Americas, there are some (like those of Amerigo Vespucci) that seem to mostly consist of hearsay and "gross exaggerations", but others (by Chanca, Columbus himself, and other early travellers) show "genuine interest and respect for the natives" and include "numerous cases of sincere praise".
Reports of cannibalism from other continents follow similar patterns. Condescending remarks can be found, but many Europeans who described cannibal customs in Central Africa wrote about those who practised them in quite positive terms, calling them "splendid" and "the finest people" and not rarely, like Chanca, actually considering them as "far in advance of" and "intellectually and morally superior" to the non-cannibals around them. Writing from Melanesia, the missionary George Brown explicitly rejects the European prejudice of picturing cannibals as "particularly ferocious and repulsive", noting instead that many cannibals he met were "no more ferocious than" others and "indeed ... very nice people".
Reports or assertions of cannibal practices could nevertheless be used to promote the use of military force as a means of "civilizing" and "pacifying" the "savages". During the Spanish conquest of the Aztec Empire and its earlier conquests in the Caribbean there were widespread reports of cannibalism, and cannibals became exempted from Queen Isabella's prohibition on enslaving the indigenous. Another example of the sensationalism of cannibalism and its connection to imperialism occurred during Japan's 1874 expedition to Taiwan. As Robert Eskildsen describes, Japan's popular media "exaggerated the aborigines' violent nature", in some cases by wrongly accusing them of cannibalism.
This Horrid Practice: The Myth and Reality of Traditional Maori Cannibalism (2008) by New Zealand historian Paul Moon received a hostile reception by some Māori, who felt the book tarnished their whole people. However, the factual accuracy of the book was not seriously disputed and even critics such as Margaret Mutu grant that cannibalism was "definitely" practised and that it was "part of our [Māori] culture."
Among modern humans, cannibalism has been practised by various groups. It was practised by humans in Prehistoric Europe, Mesoamerica, South America, among Iroquoian peoples in North America, Maori in New Zealand, the Solomon Islands, parts of West Africa and Central Africa, some of the islands of Polynesia, New Guinea, Sumatra, and Fiji. Evidence of cannibalism has been found in ruins associated with the Ancestral Puebloans of the Southwestern United States as well (at Cowboy Wash in Colorado).
There is evidence, both archaeological and genetic, that cannibalism has been practised for hundreds of thousands of years by early Homo sapiens and archaic hominins. Human bones that have been "de-fleshed" by other humans go back 600,000 years. The oldest Homo sapiens bones (from Ethiopia) show signs of this as well. Some anthropologists, such as Tim D. White, suggest that cannibalism was common in human societies prior to the beginning of the Upper Paleolithic period. This theory is based on the large amount of "butchered human" bones found in Neanderthal and other Lower/Middle Paleolithic sites.
It seems likely that not all instances of prehistoric cannibalism were due to the same reason, just as cannibalistic acts known from the historical record have been motivated by a variety of reasons. One suggested reason for cannibalism in the Lower and Middle Paleolithic is food shortages. It has also been suggested that removing dead bodies through ritual (funerary) cannibalism was a means of predator control, aiming to eliminate predators' and scavengers' access to hominid (and early human) bodies. Jim Corbett proposed that after major epidemics, when human corpses are easily accessible to predators, there are more cases of man-eating leopards; removing dead bodies through ritual cannibalism (before the cultural traditions of burying and burning bodies appeared in human history) might therefore have been a practical way for hominids and early humans to control predation.
The oldest archaeological evidence of hominid cannibalism comes from the Gran Dolina cave in northern Spain. The remains of several individuals who died about 800,000 years ago and may have belonged to the species Homo antecessor show unmistakable signs of having been butchered and consumed in the same way as animals whose bones were also found at the site. They belong to at least eleven individuals, all of whom were young (ranging from infancy to late teenhood). A study of this case considers it an instance of "nutritional" cannibalism, where individuals belonging to hostile or unrelated groups were hunted, killed, and eaten much like animals. Based on the placement and processing of human and animal remains, the authors conclude that cannibalism was likely a "repetitive behavior over time as part of a culinary tradition", not caused by starvation or other exceptional circumstances. They suggest that young individuals (more than half of whom were children under ten) were targeted because they "posed a lower risk for hunters" and because this was an effective means of limiting the growth of competing groups.
Several sites in Croatia, France, and Spain yield evidence that the Neanderthals sometimes practised cannibalism, though the interpretation of some of the finds remains controversial.
Neanderthals could also fall victim to cannibalism by anatomically modern humans. Evidence found in southwestern France indicates that the latter butchered and ate a Neanderthal child about 30,000 years ago; it is unknown whether the child was killed by them or died of other reasons. The find has been considered as strengthening the conjecture that modern humans might have hunted Neanderthals and in this way contributed to their extinction.
In Gough's Cave, England, remains of human bones and skulls, around 14,700 years old, suggest that cannibalism took place amongst the people living in or visiting the cave, and that they may have used human skulls as drinking vessels.
The archaeological site of Herxheim in southwestern Germany was a ritual center and a mass grave formed by people of the Linear Pottery culture in Neolithic Europe. It contained the scattered remains of more than 1000 individuals from different, in some cases faraway regions, who died around 5000 BCE. Whether they were war captives or human sacrifices is unclear, but the evidence indicates that their corpses were spit-roasted whole and then consumed.
At Fontbrégoua Cave in southeastern France, the remains of six people who lived about 7,000 years ago were found (two children, one adolescent, and three adults), in addition to animal bones. The patterns of cut marks indicate that both humans and animals were skinned and processed in similar ways. Since the human victims were all processed at the same time, the main excavator, Paola Villa, suspects that they all belonged to the same family or extended family and were killed and butchered together, probably during some kind of violent conflict. Others have argued that the traces were caused by defleshing rituals preceding a secondary burial, but the fact that both humans and wild and domestic animals were processed in the same way makes this unlikely; moreover, Villa argues that the observed traces better fit a typical butchering process than a secondary burial.
Researchers have also found physical evidence of cannibalism from more recent times, including from Prehistoric Britain. In 2001, archaeologists at the University of Bristol found evidence of cannibalism practised around 2000 years ago in Gloucestershire, South West England. This is in agreement with Ancient Roman reports that the Celts in Britain practised human sacrifice, killing and eating captured enemies as well as convicted criminals.
Cannibalism is mentioned many times in early history and literature. The oldest written reference may be from the tomb of the ancient Egyptian king Unas (24th century BCE). It contained a hymn in praise of the king portraying him as a cannibal who eats both "men" and "gods", thus indicating an attitude towards cannibalism quite different from the modern one.
Herodotus claimed in his Histories (5th century BCE) that after eleven days' voyage up the Borysthenes (Dnieper River) one reached a desolated land that extended for a long way, followed by a country of man-eaters (other than the Scythians), and beyond it by another desolated and uninhabited area.
The Stoic philosopher Chrysippus approved of eating one's dead relatives in a funerary ritual, noting that such rituals were common among many peoples.
Cassius Dio recorded cannibalism practised by the bucoli, Egyptian tribes led by Isidorus against Rome. They sacrificed and consumed two Roman officers in a ritualistic fashion, swearing an oath over their entrails.
According to Appian, during the Roman siege of Numantia in the 2nd century BCE, the population of Numantia (in today's Spain) was reduced to cannibalism and suicide. Cannibalism was also reported by Josephus during the siege of Jerusalem in 70 CE.
Jerome, in his letter Against Jovinianus (written 393 CE), discusses how people came to their present condition as a result of their heritage, and lists several examples of peoples and their customs. In the list, he mentions that he has heard that the Attacotti (in Britain) eat human flesh and that the Massagetae and Derbices (two Central Asian peoples) kill and eat old people, considering this a more desirable fate than dying of old age and illness.
There is universal agreement that some Mesoamerican people practised human sacrifice, but there is a lack of scholarly consensus as to whether cannibalism in pre-Columbian America was widespread. At one extreme, the anthropologist Marvin Harris, author of Cannibals and Kings, has suggested that the flesh of the victims was a part of an aristocratic diet as a reward, since the Aztec diet was lacking in proteins. While most historians of the pre-Columbian era accept that there was ritual cannibalism related to human sacrifices, they often reject suggestions that human flesh could have been a significant portion of the Aztec diet. Cannibalism was also associated with acts of warfare, and has been interpreted as an element of blood revenge in war.
When the Moroccan explorer Ibn Battuta visited the Mali Empire in the 1350s, he was surprised to see sultan Sulayman give "a slave girl as part of his reception-gift" to a group of warriors from a cannibal region who had come to visit his court. "They slaughtered her and ate her and smeared their faces and hands with her blood and came in gratitude to the sultan." He was told that the sultan did so every time he received the cannibal guests. Though a Muslim like Ibn Battuta himself, the sultan apparently considered catering to his visitors' preferences more important than whatever reservations he may have had about the practice. Other Muslim authors writing around that time also report that cannibalism was practised in some West African regions and that slave girls were sometimes slaughtered for food, since "their flesh is the best thing we have to eat."
Cases of cannibalism were recorded during the First Crusade, as there are various accounts of crusaders consuming the bodies of their dead opponents following the sieges of Antioch and of Ma'arra in 1097–1098. While the Christian sources all explain these acts as due to hunger, Amin Maalouf is sceptical of this justification, arguing that the crusaders' behaviour indicates they might have been driven by "fanaticism" rather than, or in addition to, "necessity". Thomas Asbridge states that, while the "cannibalism at Marrat is among the most infamous of all the atrocities perpetrated by the First Crusaders", it nevertheless had "some positive effects on the crusaders' short-term prospects", since reports of their brutality convinced many Muslim commanders to accept truces rather than trying to fight them.
During Europe's Great Famine of 1315–1317, there were various reports of cannibalism among starving people.
Charges of cannibalism were levied against the Qizilbash of the Safavid Ismail I.
Cannibalism has been repeatedly recorded throughout China's well-documented history. The sinologist Bengt Pettersson found references to more than three hundred different episodes of cannibalism in the Official Dynastic Histories alone. Most episodes occurred in the context of famine or war, or were otherwise motivated by vengeance or medical reasons. More than half of the episodes recorded in the Official Histories describe cases motivated by food scarcity during famines or in times of war. Pettersson observes that the records of such events "neither encouraged nor condemned" the consumption of human flesh under such circumstances, rather accepting it as an unavoidable way of "coping with a life-threatening situation".
In other cases, cannibalism was an element of vengeance or punishment – eating the hearts and livers, or sometimes the whole bodies, of killed enemies was a way of further humiliating them and sweetening the revenge. Both private individuals and state officials engaged in such acts, especially from the 4th to the 10th century CE, but in some cases right until the end of Imperial China (in 1912). More than 70 cases are listed in the Official Histories alone. In warfare, human flesh could be eaten out of a lack of other provisions, but also out of hatred against the enemy or to celebrate one's victory. Not just enemy fighters, but also their "servants and concubines were all steamed and eaten", according to one account.
At least since the Tang dynasty (618–907), the consumption of human flesh was considered a highly effective medical treatment, recommended in the Bencao Shiyi, an influential medical reference book published in the early 8th century, as well as in similar later manuals. Together with the ethical ideal of filial piety, according to which young people were supposed to do everything in their power to support their parents and parents-in-law, this idea led to a unique form of voluntary cannibalism, in which a young person cut some of the flesh out of their body and gave it to an ill parent or parent-in-law for consumption. The majority of the donors were women, frequently daughters-in-law of the patient.
The devoted daughter-in-law would tie her thigh or her arm very tightly with a piece of clothing. She would then use a very sharp knife to quickly slice off a piece from her upper arm or upper thigh. The flesh would immediately be mixed in with soup or gruel, which had been heated in preparation, and this would then be offered to the dying mother-in-law or father-in-law.
The Official Histories describe more than 110 cases of such voluntary offerings that took place between the early 7th and the early 20th century. While these acts were (at least nominally) voluntary and the donors usually (though not always) survived them, several sources also report of children and adolescents who were killed so that their flesh could be eaten for medical purposes.
During the Tang dynasty, cannibalism was supposedly resorted to by rebel forces early in the period (who were said to raid neighbouring areas for victims to eat), and (on a large scale) by both soldiers and civilians during the siege of Suiyang, a decisive episode of the An Lushan Rebellion. Eating an enemy's heart and liver was also repeatedly mentioned as a feature of both official punishments and private vengeance. The final decades of the dynasty were marked by large-scale rebellions, during which both rebels and regular soldiers butchered prisoners for food and killed and ate civilians. Sometimes "the rebels captured by government troops were [even] sold as food", according to several of the Official Histories, while warlords likewise relied on the sale of human flesh to finance their rebellions. An Arab traveller visiting China during this time noted with surprise: "cannibalism [is] permissible for them according to their legal code, for they trade in human flesh in their markets."
References to cannibalizing the enemy also appear in poetry written in the subsequent Song dynasty (960–1279) – for example, in Man Jiang Hong – although they are perhaps meant symbolically, expressing hatred towards the enemy. The Official Histories covering this period record various cases of rebels and bandits eating the flesh of their victims.
The flesh of executed criminals was sometimes cut off and sold for consumption. During the Tang dynasty a law was enacted that forbade this practice, but whether the law was effectively enforced is unclear. The sale of human flesh is also repeatedly mentioned during famines, in accounts ranging from the 6th to the 15th century. Several of these accounts mention that animal flesh was still available, but had become so expensive that few could afford it. Dog meat was five times as expensive as human flesh, according to one such report. Sometimes, poor men sold their own wives or children to butchers who slaughtered them and sold their flesh. Cannibalism in famine situations seems to have been generally tolerated by the authorities, who did not intervene when such acts occurred.
A number of accounts suggest that human flesh was occasionally eaten for culinary reasons. An anecdote told about Duke Huan of Qi (7th century BCE) claims that he was curious about the taste of "steamed child", having already eaten everything else. His cook supposedly killed his own son to prepare the dish, and Duke Huan judged it to be "the best food of all". In later times, wealthy men, among them a son of the 4th-century emperor Shi Hu and an "open and high-spirited" man who lived in the 7th century CE, served the flesh of purchased women or children during lavish feasts. The sinologist Robert des Rotours observes that while such acts were not common, they do not seem to have been rare exceptions, and the hosts apparently did not have to face ostracism or legal prosecution. Key Ray Chong even concludes that "learned cannibalism was often practiced ... for culinary appreciation, and exotic dishes [of human flesh] were prepared for jaded upper-class palates".
The Official Histories mention 10th-century officials who liked to eat the flesh of babies and children, and during the Jin dynasty (1115–1234), human flesh seems to have been readily available at the home of a general, who supposedly served it to one of his guests as a practical joke. Accounts from the 12th to 14th centuries indicate that both soldiers and writers praised this flesh as particularly delicious, considering especially children's flesh as unsurpassable in taste.
Pettersson observes that people generally seem to have had fewer reservations about the consumption of human flesh than one might expect today. While survival cannibalism during famines was regarded as a lamentable necessity, accounts explaining the practice as due to other reasons, such as vengeance or filial piety, were generally even positive.
European explorers and colonizers brought home many stories of cannibalism practised by the native peoples they encountered. In Spain's overseas expansion to the New World, the practice of cannibalism was reported by Christopher Columbus in the Caribbean islands, and the Caribs were greatly feared because of their supposed practice of it. Queen Isabel of Castile had forbidden the Spaniards to enslave the indigenous, unless they were "guilty" of cannibalism. The accusation of cannibalism became a pretext for attacks on indigenous groups and justification for the Spanish conquest. In Yucatán, the shipwrecked Spaniard Jerónimo de Aguilar, who later became a translator for Hernán Cortés, reported having witnessed fellow Spaniards being sacrificed and eaten; he himself escaped from captivity, where he was being fattened for sacrifice. The Florentine Codex (1576), compiled by the Franciscan Bernardino de Sahagún from information provided by indigenous eyewitnesses, contains questionable evidence of Mexica (Aztec) cannibalism. The Franciscan friar Diego de Landa reported instances in Yucatán.
In early Brazil, there are reports of cannibalism among the Tupinamba. It is recorded about the natives of the captaincy of Sergipe in Brazil: "They eat human flesh when they can get it, and if a woman miscarries devour the abortive immediately. If she goes her time out, she herself cuts the navel-string with a shell, which she boils along with the secondine [i.e. placenta], and eats them both." (see human placentophagy).
The 1913 Handbook of Indians of Canada (reprinting 1907 material from the Bureau of American Ethnology) claims that North American natives practising cannibalism included
the Montagnais, and some of the tribes of Maine; the Algonkin, Armouchiquois, Iroquois, and Micmac; farther west the Assiniboine, Cree, Foxes, Chippewa, Miami, Ottawa, Kickapoo, Illinois, Sioux, and Winnebago; in the south the people who built the mounds in Florida, and the Tonkawa, Attacapa, Karankawa, Caddo, and Comanche; in the northwest and west, portions of the continent, the Thlingchadinneh and other Athapascan tribes, the Tlingit, Heiltsuk, Kwakiutl, Tsimshian, Nootka, Siksika, some of the Californian tribes, and the Ute. There is also a tradition of the practice among the Hopi, and mentions of the custom among other tribes of New Mexico and Arizona. The Mohawk, and the Attacapa, Tonkawa, and other Texas tribes were known to their neighbours as 'man-eaters'.
The forms of cannibalism described included both resorting to human flesh during famines and ritual cannibalism, the latter usually consisting of eating a small portion of an enemy warrior. From another source, according to Hans Egede, when the Inuit killed a woman accused of witchcraft, they ate a portion of her heart.
As with most lurid tales of native cannibalism, these stories are treated with a great deal of scrutiny, as accusations of cannibalism were often used as justifications for the subjugation or destruction of "savages". The historian Patrick Brantlinger suggests that Indigenous peoples that were colonized were being dehumanized as part of the justification for the atrocities.
This period of time was also rife with instances of explorers and seafarers resorting to cannibalism for survival. There is archaeological and written evidence for English settlers' cannibalism in 1609 in the Jamestown Colony under famine conditions, during a period which became known as Starving Time.
Sailors shipwrecked or lost at sea repeatedly resorted to cannibalism to stave off starvation. The survivors of the sinking of the French ship Méduse in 1816 resorted to cannibalism after four days adrift on a raft. Their plight was made famous by Théodore Géricault's painting The Raft of the Medusa. After a whale sank the Essex of Nantucket on November 20, 1820, the survivors, in three small boats, resorted, by common consent, to cannibalism in order for some to survive. This event became an important source of inspiration for Herman Melville's Moby-Dick.
The case of R v Dudley and Stephens (1884) is an English criminal case which dealt with four crew members of an English yacht, the Mignonette, who were cast away in a storm some 2,600 kilometres (1,600 mi) from the Cape of Good Hope. After several days, one of the crew, a seventeen-year-old cabin boy, fell unconscious due to a combination of the famine and drinking seawater. The others (one possibly objecting) decided to kill him and eat him. They were picked up four days later. Two of the three survivors were found guilty of murder. A significant outcome of this case was that necessity in English criminal law was determined to be no defence against a charge of murder. This was a break with the traditional understanding among sailors, which had been that selecting a victim for killing and consumption was acceptable in a starvation situation as long as lots were drawn so that all faced an equal risk of being killed.
On land, travellers through sparsely inhabited regions and explorers of unknown areas sometimes ate human flesh after running out of other provisions. In a famous example from the 1840s, the members of the Donner Party found themselves stranded by snow in the Donner Pass, a high mountain pass in California, without adequate supplies during the Mexican–American War, leading to several instances of cannibalism, including the murder of two young Native American men for food. Sir John Franklin's lost polar expedition, which took place at approximately the same time, is another example of cannibalism out of desperation.
In frontier situations where there was no strong authority, some individuals took to killing and eating others even in situations where other food would have been available. One notorious case was the mountain man Boone Helm, who became known as "The Kentucky Cannibal" for eating several of his fellow travellers between 1850 and his eventual hanging in 1864.
The Leopard Society was a cannibalistic secret society that existed until the mid-1900s and was active mostly in regions that today belong to Sierra Leone, Liberia and Ivory Coast. The Leopard men would dress in leopard skins and waylay travellers with sharp claw-like weapons in the form of leopards' claws and teeth. The victims' flesh would be cut from their bodies and distributed to members of the society.
Cannibalism was widely practised in some parts of the Congo Basin, though it was by no means universal. Some peoples, such as the Bakongo, rejected the practice altogether. In some other regions human flesh was eaten "only occasionally to mark a particularly significant ritual occasion, but in other societies in the Congo, perhaps even a majority by the late nineteenth century, people ate human flesh whenever they could, saying that it was far tastier than other meat", notes the anthropologist Robert B. Edgerton.
Many people not only freely admitted eating human flesh, but were surprised when they heard that Europeans did not eat it. Emil Torday observed: "They are not ashamed of cannibalism, and openly admit that they practise it because of their liking for human flesh", with the primary reason for cannibalism being a "gastronomic" preference for such dishes. Torday once received "a portion of a human thigh" sent as a well-intended gift, and other Europeans were offered pieces of human flesh in gestures of hospitality. People expected to be rewarded with fresh human flesh for services well performed and were disappointed when they received something else instead.
In addition to enemies killed or captured in war, slaves were frequent victims. Many "healthy children" had to die "to provide a feast for their owners". Young slave children were at particular risk since they were in low demand for other purposes and since their flesh was widely praised as especially delicious, "just as many modern meat eaters prefer lamb over mutton and veal over beef". Such acts were not considered controversial – people did not understand why Europeans objected to the killing of slaves, while themselves killing and eating goats; they argued that both were the "property" of their owners, to be used as it pleased them.
A third group of victims were persons from other ethnic groups, who in some areas were "hunt[ed] for food" just like animals. Many of the victims, who were usually killed with poisoned arrows or with clubs, were "women and children ... who had ventured too far from home while gathering firewood or fetching drinking water" and who were targeted "because they were easier to overpower" and also considered tastier than adult men.
In some regions there was a regular trade in slaves destined to be eaten, and the flesh of recently butchered slaves was available for purchase as well. Some people fattened slave children to sell them for consumption; if such a child became ill and lost too much weight, their owner drowned them in the nearest river instead of wasting further food on them, as a French missionary once witnessed. Human flesh not sold the same day was smoked, so it could be "sold at leisure" during subsequent weeks. Europeans were often hesitant to buy smoked meat since they knew that the "smoking of human flesh to preserve it was ... widespread", but once meat was smoked, its origin was hard to determine.
Instead of being killed quickly, "persons to be eaten often had both of their arms and legs broken and were made to sit up to their necks in a stream for [up to] three days, a practice said to make their flesh more tender, before they were killed and cooked." Both adults and children, and also animals such as birds and monkeys, were routinely submitted to this treatment prior to being slaughtered.
Various reports indicate that living slaves were exposed on marketplaces, so that purchasers could choose which body parts to buy before the victim was butchered and the flesh distributed.
It often happens that the poor creature destined for the knife is exposed for sale in the market. He walks to and fro and epicures come to examine him. They describe the parts they prefer, one the arm, one the leg, breast, or head. The portions which are purchased are marked off with lines of coloured ochre. When the entire body is sold, the wretch is slain.
This custom, reported around both the central Congo River and the Ubangi in the north, seems to have been motivated by a desire to get fresh rather than smoked flesh, since without refrigeration there was no other way to keep flesh from spoiling quickly.
Killed or captured enemies formed another group of victims, even during wars fought by the colonial state. During the 1892–1894 war between the Congo Free State and the Swahili–Arab city-states of Nyangwe and Kasongo in Eastern Congo, there were reports of widespread cannibalization of the bodies of defeated combatants by the Batetela allies of the Belgian commander Francis Dhanis. In April 1892, 10,000 Batetela, under the command of Gongo Lutete, joined forces with Dhanis in a campaign against the Swahili–Arab leaders Sefu and Mohara. After one early skirmish in the campaign, Dhanis's medical officer, Captain Sidney Langford Hinde, "noticed that the bodies of both the killed and wounded had vanished." When fighting broke out again, Hinde saw his Batetela allies drop human arms, legs and heads on the road; now he had to accept that they had really "carried them off for food", which he had initially doubted.
According to Hinde, the conquest of Nyangwe was followed by "days of cannibal feasting" during which hundreds were eaten, with only their heads being kept as mementos. During this time, Lutete "hid himself in his quarters, appalled by the sight of thousands of men smoking human hands and human chops on their camp fires, enough to feed his army for many days." Hinde also noted that the Batetela town Ngandu had "at least 2,000 polished human skulls" as a "solid white pavement in front" of its gates, with human skulls crowning every post of the stockade.
Soon after, Nyangwe's surviving population rose in a rebellion, during whose brutal suppression a thousand rioters were killed by the new government. One young Belgian officer wrote home: "Happily Gongo's men ... ate them up [in a few hours]. It's horrible but exceedingly useful and hygienic.... I should have been horrified at the idea in Europe! but it seems quite natural to me here. Don't show this letter to anyone indiscreet". Hinde too commented approvingly on the thoroughness with which the cannibals "disposed of all the dead, leaving nothing even for the jackals, and thus sav[ing] us, no doubt, from many an epidemic." Generally the Free State administration seems to have done little to suppress cannibal customs, sometimes even tolerating or facilitating them among its own auxiliary troops and allies.
In August 1903, the UK diplomat Roger Casement wrote from Lake Tumba to a consular colleague: "The people round here are all cannibals.... There are also dwarfs (called Batwas) in the forest who are even worse cannibals than the taller human environment. They eat man flesh raw! It's a fact." He added that assailants would "bring down a dwarf on the way home, for the marital cooking pot.... The Dwarfs, as I say, dispense with cooking pots and eat and drink their human prey fresh cut on the battlefield while the blood is still warm and running. These are not fairy tales ..., but actual gruesome reality in the heart of this poor, benighted savage land."
The origins of Congolese cannibalism are lost in time. The oldest known references to it can be found in Filippo Pigafetta's Report of the Kingdom of Congo, published in the late 16th century based on the memories of Duarte Lopez, a Portuguese trader who had lived for several years in the Kingdom of Kongo. Lopez reported that farther up the Congo River, there lived a people who ate both killed enemies and those of their slaves which they could not sell for a "good price".
Oral records indicate that, already at a time when slavery was not widespread in the Congo Basin, people assumed that anyone sold as a slave would likely be eaten, "because cannibalism was common, and slaves were purchased especially for such purposes". In the 19th century, warfare and slave raids increased in the Congo Basin as a result of the international demand for slaves, who could no longer be so easily captured nearer to the coasts. As a result, the consumption of slaves increased as well, since most of those sold in the Atlantic slave trade were young and healthy individuals aged from 14 to 30, and similar preferences existed in the Arab–Swahili slave trade. However, many of the captives were younger, older, or otherwise considered less saleable, and such victims were often eaten by the slave raiders or sold to cannibals who purchased them as "meat".
Most of the accounts of cannibalism in the Congo are from the late 19th century, when the Atlantic slave trade had come to a halt, but slavery still existed in Africa and the Arab world. Various reports indicate that around the Ubangi River, slaves were frequently exchanged for ivory, which was then exported to Europe or the Americas, while the slaves were eaten. Some European traders seem to have directly and knowingly taken part in these deadly transactions, while others turned a blind eye. The local elephant hunters especially preferred the flesh of young human beings – four to sixteen was the favoured age range, according to one trader – "because it was not only more tender, but also much quicker to cook" than the meat of elephants or other large animals.
While sceptics such as William Arens sometimes claim that there are no credible eyewitness accounts of cannibal acts, there are numerous such accounts from the Congo. David Livingstone "saw human parts being cooked with bananas, and many other Europeans" – among them Hinde – "reported seeing cooked human remains lying around abandoned fires." Soldiers of the German explorer Hermann Wissmann saw how people captured and wounded in a slave raid were shot by a Swahili–Arab leader and then handed over "to his auxiliary troops, who ... cut them in pieces and dragged them to the fire to serve as their supper". Visiting a village near the Aruwimi River, the British artist Herbert Ward saw a man "carrying four large lumps of human flesh, with the skin still clinging to it, on a stick", and soon afterwards "a party of men squatting round a fire, before which this ghastly flesh, exposed on spits, was cooking"; he was told that the flesh came from a man who had been killed a few hours before. Another time, when "camping for the night with a party of Arab raiders and their followers", he and his companions felt "compelled to change the position of our tent owing to the offensive smell of human flesh, which was being cooked on all sides of us."
The Belgian colonial officer Camille Coquilhat saw "the remaining half of [a] steamed man" – a slave who had been purchased for consumption and slaughtered a few hours earlier – "in an enormous pot" and discussed the matter with the slave's owner, who at first thought that Coquilhat was joking when he objected to his cannibalistic customs. Near the Ubangi River, which formed the border between the Belgian and the French colonial enterprises, the French traveller Jacques d'Uzès saw local auxiliaries of the French troops killing "some women and some children" after a punitive expedition, then cooking their flesh in pots and "enjoy[ing]" it.
Among the Mangbetu people in the north-east, Georg A. Schweinfurth saw a human arm being smoked over a fire. On another occasion, he watched a group of young women using boiling water for "scalding the hair off the lower half of a human body" in preparation for cooking it. A few years later, Gaetano Casati saw how the roasted leg of a slave woman was served at the court of the Mangbetu king. Many more such eyewitness accounts exist.
Various cases of revenge-driven cannibalism are on record. The historian Angelica Montanari has investigated a number of accounts from Italy between the 14th and 16th centuries, showing that the consumption of entrails or body parts of those considered enemies is repeatedly mentioned in local chronicles, sometimes without any expression of condemnation or disapproval. Another case of this type occurred in 1672, when the Dutch Grand Pensionary Johan de Witt and his brother Cornelis were lynched and partially eaten by a mob for failing to fend off a French invasion.
From the 16th century on, an unusual form of medical cannibalism became widespread in several European countries, for which thousands of Egyptian mummies were ground up and sold as medicine. Powdered human mummy – called mummia – was thought to stop internal bleeding and to have other healing properties. The practice developed into a widespread business that flourished until the early 18th century. The demand was much higher than the supply of ancient mummies, leading to much of the offered "mummia" being counterfeit, made from recent Egyptian or European corpses – often from the gallows – instead. In a few cases, mummia was still offered in medical catalogues in the early 20th century.
Cannibalism was repeatedly practised during famines, when other provisions were exhausted.
During the chaotic transition from the Ming to the Qing dynasty in the 17th century, severe famines repeatedly led to cannibalism. During a famine in 1622, government troops took the provision of human flesh into their own hands, "openly butcher[ing] and [selling] people in a market where one jin [c. 600 grams] of flesh could be exchanged for one liang [c. 40 grams] of silver." Around 1640, a drought in Henan and Shandong became so bad that "women and babies were arrayed in the market as human food and were sold by the slaughterers just like mutton and pork." Sometimes women and children were slaughtered in the back rooms of butcher shops while customers waited for fresh meat. A few years later in Sichuan, "hundreds of the young and weak" were kidnapped, killed, and eaten; in the markets, men's flesh was sold at a somewhat lower price than that of women, which was considered tastier.
Contemporary reports indicate that in Shaanxi – located between Henan and Sichuan – cannibalism became so common in the early Qing period that the local government "officially sanctioned" the sale and consumption of human flesh. Butchers legally turned to killing people sold to them and then "sell[ing] their meat"; human-based dishes were also served in restaurants. The History of Ming, one of the Official Dynastic Histories that documented cannibalistic acts, accepted them as inevitable in bad times. "When driven towards dangers, what choices do they have?" it asked rhetorically about a famine in 1611, during which people were "selling their daughters and sons, and eating their wives and children".
Centuries later, during the Taiping Rebellion of 1850–1864, "human flesh and organs" – obtained by dismembering corpses or by butchering kidnapped persons – "were sold openly at the marketplace" and "some people killed their own children and ate them" to alleviate their hunger. Human hearts became a popular dish, according to some who afterwards freely admitted having purchased and enjoyed them. Zeng Guofan, the general who led the army that suppressed the rebellion, confirmed the open sale of human flesh in his diary – once even complaining about its high price, which had risen again.
Reports of cannibalism and the sale of human flesh during severe famines continued into the early 20th century, up to the final years of Imperial China. Various cases were reported during the Northern Chinese Famine of 1876–1879, with eyewitnesses reporting the sale of human flesh in markets and butcher shops and various (unverified) rumours indicating that it might also have been served in restaurants.
Outside of famines, the flesh of executed criminals was frequently sold for consumption, a traditional custom that lasted until the 19th century.
The indigenous population of Taiwan (then known as Formosa) repeatedly rebelled against Chinese rule. The Chinese army reacted drastically by not only killing suspected rebels, but sometimes also eating and selling their flesh. The American journalist James W. Davidson wrote:
One horrible feature of the campaign against the savages was the sale by the Chinese in open market of savage flesh.... After killing a savage, the head was commonly severed from the body and exhibited.... The body was then either divided among its captors and eaten, or sold to wealthy Chinese and even to high officials, who disposed of it in a like manner. The kidney, liver, heart, and soles of the feet were considered the most desirable portions, and were ordinarily cut up into very small pieces, boiled, and eaten somewhat in the form of soup. The flesh and bones were boiled, and the former [latter?] made into a sort of jelly.... During the outbreak of 1891, savage flesh was brought in – in baskets – the same as pork, and sold like pork in the open markets of Tokoham before the eyes of all, foreigners included. Some of the flesh was even sent to Amoy [on the mainland] to be placed on sale there. It was frequently on sale in the small Chinese villages near the border, and often before the very eyes of peaceful groups of savages who happened to be at the place.
Newspaper reports also document the open sale of indigenous flesh. Robert des Rotours has interpreted these acts as due to "contempt for an inferior race", who were seen as so inferior that they could be treated like animals.
There are various reports of Dayaks eating human flesh, especially in the context of headhunting expeditions. James Brooke, who founded the Raj of Sarawak in northwestern Borneo, collected eyewitness accounts of the consumption of killed enemies after war campaigns. He also heard (though not from eyewitnesses) that in some areas a "fat child" was traditionally served at Makantaun, an annual festival held at the end of the harvest season.
The Norwegian explorer Carl Bock, who visited Borneo in the late 1870s, met a Dayak chief named Sibau Mobang who told him that "his people did not eat human meat every day", but rather in the context of "head-hunting expeditions". Mobang had just returned from such an expedition, in which "no less than seventy victims, men, women and children", had been killed and partially eaten. Bock also met a local priestess who said that human "palms [were] considered the best eating", together with "the brains, and the flesh on the knees" – these parts were always eaten, even if the rest of the body was not. The naturalist Albert S. Bickmore, who travelled through Borneo in the 1860s, agreed that some Dayak groups practised cannibalism. Both captured enemies and those found guilty of a crime (such as theft) were killed and eaten, out of revenge and due to an "appetite" for human flesh, which was considered uniquely tasty.
Hundreds of accounts exist of cannibalism among Aboriginal Australians in all parts of Australia, with the possible exception of Tasmania, dating from the first European settlement to the 1930s and later. While it is generally accepted that some forms of cannibalism were practised in Australia in certain circumstances, the prevalence and meaning of such acts in pre-colonial Aboriginal societies are disputed.
Before colonization, Aboriginal Australians were predominantly nomadic hunter-gatherers at times lacking in protein sources. Reported cases of cannibalism include killing and eating small children (infanticide was widely practised as a means of population control and because mothers had trouble carrying two young children not yet able to walk) and enemy warriors slain in battle.
In the late 1920s, the anthropologist Géza Róheim heard from Aboriginals that infanticidal cannibalism had been practised especially during droughts. "Years ago it had been custom for every second child to be eaten" – the baby was roasted and consumed not only by the mother, but also by the older siblings, who benefited from this meat during times of food scarcity. One woman told him that her little sister had been roasted, but denied having eaten of her. Another "admitted having killed and eaten her small daughter", and several other people he talked to remembered having "eaten one of their brothers". The consumption of infants took two different forms, depending on where it was practised:
When the Yumu, Pindupi, Ngali, or Nambutji were hungry, they ate small children with neither ceremonial nor animistic motives. Among the southern tribes, the Matuntara, Mularatara, or Pitjentara, every second child was eaten in the belief that the strength of the first child would be doubled by such a procedure.
Usually only babies who had not yet received a name (which happened around the first birthday) were consumed, but in times of severe hunger, older children (up to four years or so) could be killed and eaten too, though people tended to have bad feelings about this. Babies were killed by their mother, while a bigger child "would be killed by the father by being beaten on the head". But cases of women killing older children are on record too. In 1904 a parish priest in Broome, Western Australia, stated that infanticide was very common, including one case where a four-year-old was "killed and eaten by its mother", who later became a Christian.
The journalist and anthropologist Daisy Bates, who spent a long time among Aboriginals and was well acquainted with their customs, knew an Aboriginal woman who one day left her village to give birth a mile away, taking only her daughter with her. She then "killed and ate the baby, sharing the food with the little daughter." After her return, Bates found the place and saw "the ashes of a fire" with the baby's "broken skull, and one or two charred bones" in them. She states that "baby cannibalism was rife among these central-western peoples, as it is west of the border in Central Australia."
The Norwegian ethnographer Carl Sofus Lumholtz confirms that infants were commonly killed and eaten especially in times of food scarcity. He notes that people spoke of such acts "as an everyday occurrence, and not at all as anything remarkable."
Some have interpreted the consumption of infants as a religious practice: "In parts of New South Wales ..., it was customary long ago for the first-born of every lubra [Aboriginal woman] to be eaten by the tribe, as part of a religious ceremony." However, there seems to be no direct evidence that such acts actually had a religious meaning, and the Australian anthropologist Alfred William Howitt rejects the idea that the eaten were human sacrifices as "absolutely without foundation", arguing that religious sacrifices of any kind were unknown in Australia.
Another frequently reported practice was funerary endocannibalism, the cooking and consumption of the deceased as a funerary rite:
When anyone dies, provided he or she be not too old, certain of the male relatives take the body out into the bush and cook it in a native oven.... When all the flesh is removed – apparently everything is eaten – the bones are collected, and, with the exception of the long ones from the arm, are wrapped in paperbark and handed over to the custody of a relative.
According to Bates, exocannibalism was also practised in many regions. Foreigners and members of different ethnic groups were hunted and eaten much like animals. She met "fine sturdy fellows" who "frankly admitted the hunting and sharing of kangaroo and human meat as frequently as that of kangaroo and emu." The bodies of the killed were roasted whole in "a deep hole in the sand". There were also "killing vendettas", in which a hostile settlement was attacked and as many persons as possible killed, whose flesh was then shared according to well-defined rules: "The older men ate the soft and virile parts, and the brain; swift runners were given the thighs; hands, arms or shoulders went to the best spear-throwers, and so on." Referring to the coast of the Great Australian Bight, Bates writes: "Cannibalism had been rife for centuries in these regions and for a thousand miles north and east of them." Human flesh was eaten not for spiritual reasons, nor merely out of hunger; rather, it was considered a "favourite food".
Lumholtz similarly notes that "the greatest delicacy known to the Australian native is human flesh", even adding that the "appetite for human flesh" was the primary motive for killing. Unrelated individuals and isolated families were attacked just to be eaten, and any stranger was at risk of being "pursued like a wild beast and slain and eaten". Acquiring human flesh in this manner was something to be proud of, not a reason for shame. He stresses that such flesh was nevertheless by no means a "daily food", since opportunities to capture victims were relatively rare. One specific instance of kidnapping for cannibal purposes was recorded in the 1840s by the English immigrant George French Angas, who stated that several children had been kidnapped, butchered, and eaten near Lake Alexandrina in South Australia shortly before he arrived there.
In parts of Melanesia, cannibalism was still practised in the early 20th century, for a variety of reasons – including retaliation, to insult an enemy people, or to absorb the dead person's qualities. One tribal chief, Ratu Udre Udre in Rakiraki, Fiji, is said to have consumed 872 people and to have made a pile of stones to record his achievement. Fiji was nicknamed the "Cannibal Isles" by European sailors, who avoided disembarking there.
The first encounter between Europeans and Māori may have involved cannibalism of a Dutch sailor. In June 1772, the French explorer Marion du Fresne and 26 members of his crew were killed and eaten in the Bay of Islands. In an 1809 incident known as the Boyd massacre, about 66 passengers and crew of the Boyd were killed and eaten by Māori on the Whangaroa peninsula, Northland. Cannibalism was already a regular practice in Māori wars. In another instance, on July 11, 1821, warriors from the Ngapuhi tribe killed 2,000 enemies and remained on the battlefield "eating the vanquished until they were driven off by the smell of decaying bodies". Māori warriors fighting the New Zealand government in Tītokowaru's War in New Zealand's North Island in 1868–69 revived ancient rites of cannibalism as part of the radical Hauhau movement of the Pai Marire religion.
The dense population of the Marquesas Islands, in what is now French Polynesia, was concentrated in narrow valleys, and consisted of warring tribes, who sometimes practised cannibalism on their enemies. Human flesh was called "long pig". W. D. Rubinstein wrote:
It was considered a great triumph among the Marquesans to eat the body of a dead man. They treated their captives with great cruelty. They broke their legs to prevent them from attempting to escape before being eaten, but kept them alive so that they could brood over their impending fate. ... With this tribe, as with many others, the bodies of women were in great demand.
After World War I, cannibalism continued to occur as a ritual practice and in times of drought or famine. Occasional cannibal acts committed by individual criminals are documented as well throughout the 20th and 21st centuries.
Many instances of cannibalism by necessity were recorded during World War II. For example, during the 872-day siege of Leningrad, reports of cannibalism began to appear in the winter of 1941–1942, after all birds, rats, and pets were eaten by survivors. Leningrad police even formed a special division to combat cannibalism.
Some 2.8 million Soviet POWs died in Nazi custody in less than eight months during 1941–42. According to the United States Holocaust Memorial Museum, by the winter of 1941, "starvation and disease resulted in mass death of unimaginable proportions". This deliberate starvation led to many incidents of cannibalism.
Following the Soviet victory at Stalingrad, it was found that some German soldiers in the besieged city, cut off from supplies, had resorted to cannibalism. Later, following the German surrender in January 1943, roughly 100,000 German soldiers were taken prisoner of war (POW). Almost all of them were sent to POW camps in Siberia or Central Asia where, being chronically underfed by their Soviet captors, many resorted to cannibalism. Fewer than 5,000 of the prisoners taken at Stalingrad survived captivity.
Cannibalism took place in the concentration and death camps of the Independent State of Croatia (NDH), a Nazi German puppet state governed by the fascist Ustasha organization, which committed the Genocide of Serbs and the Holocaust in the NDH. Some survivors testified that some of the Ustashas drank the blood from the slashed throats of the victims.
The Australian War Crimes Section of the Tokyo tribunal, led by prosecutor William Webb (the future Judge-in-Chief), collected numerous written reports and testimonies that documented Japanese soldiers' acts of cannibalism among their own troops, on enemy dead, as well as on Allied prisoners of war in many parts of the Greater East Asia Co-Prosperity Sphere. In September 1942, Japanese daily rations on New Guinea consisted of 800 grams of rice and tinned meat. However, by December, this had fallen to 50 grams. According to historian Yuki Tanaka, "cannibalism was often a systematic activity conducted by whole squads and under the command of officers".
In some cases, flesh was cut from living people. A prisoner of war from the British Indian Army, Lance Naik Hatam Ali, testified that in New Guinea: "the Japanese started selecting prisoners and every day one prisoner was taken out and killed and eaten by the soldiers. I personally saw this happen and about 100 prisoners were eaten at this place by the Japanese. The remainder of us were taken to another spot 80 kilometres (50 miles) away where 10 prisoners died of sickness. At this place, the Japanese again started selecting prisoners to eat. Those selected were taken to a hut where their flesh was cut from their bodies while they were alive and they were thrown into a ditch where they later died."
Another well-documented case occurred in Chichi-jima in February 1945, when Japanese soldiers killed and consumed five American airmen. This case was investigated in 1947 in a war crimes trial, and of 30 Japanese soldiers prosecuted, five (Maj. Matoba, Gen. Tachibana, Adm. Mori, Capt. Yoshii, and Dr. Teraki) were found guilty and hanged. In his book Flyboys: A True Story of Courage, James Bradley details several instances of cannibalism of World War II Allied prisoners by their Japanese captors. The author claims that this included not only ritual cannibalization of the livers of freshly killed prisoners, but also the cannibalization-for-sustenance of living prisoners over the course of several days, amputating limbs only as needed to keep the meat fresh.
There are more than 100 documented cases in Australia's government archives of Japanese soldiers practising cannibalism on enemy soldiers and civilians in New Guinea during the war. For instance, in one archived case, an Australian lieutenant describes how he discovered a scene with cannibalized bodies, including one "consisting only of a head which had been scalped and a spinal column", and states that "in all cases, the condition of the remains were such that there can be no doubt that the bodies had been dismembered and portions of the flesh cooked". In another archived case, a Pakistani corporal (who was captured in Singapore and transported to New Guinea by the Japanese) testified that Japanese soldiers cannibalized one prisoner per day – some of them still alive – for about 100 days. There is also an archived memo in which a Japanese general stated that eating anyone except enemy soldiers was punishable by death. Toshiyuki Tanaka, a Japanese scholar in Australia, mentions that in many cases it was done "to consolidate the group feeling of the troops" rather than due to food shortage. Tanaka also states that the Japanese committed such cannibalism under the supervision of their senior officers and as a tool of power projection.
Jemadar Abdul Latif, a VCO of the 4/9 Jat Regiment of the British Indian Army who was rescued as a POW by the Australians at Sepik Bay in 1945, stated that the Japanese soldiers ate both Indian POWs and local New Guinean people. According to Captain R. U. Pirzai, quoted in a report in The Courier-Mail of August 25, 1945, at the camp for Indian POWs in Wewak, where many died and 19 POWs were eaten, the Japanese doctor and lieutenant Tumisa would send an Indian out of the camp, after which a Japanese party would kill him, eat flesh from the body, and cut off and cook certain body parts (liver, buttock muscles, thighs, legs, and arms).
When Uruguayan Air Force Flight 571 crashed on a glacier in the Andes on October 13, 1972, the survivors resorted to eating the deceased during their 72 days in the mountains. Their experiences and memories became the source of several books and films. Survivor Roberto Canessa described how they "agonized" for days in the knowledge that "the bodies of our friends and team-mates, preserved outside in the snow and ice, contained vital, life-giving protein that could help us survive. But could we do it?" Ultimately, he and the 15 others who were eventually rescued decided they could, realizing there was no other way to stave off starvation.
In 1991, Jeffrey Dahmer of Milwaukee, Wisconsin, was arrested after one of his intended victims managed to escape. Found in Dahmer's apartment were two human hearts, an entire torso, a bag full of human organs from his victims, and a portion of arm muscle. He stated that he planned to consume all of the body parts over the next few weeks.
In the 1980s, Médecins Sans Frontières, the international medical charity, supplied representatives of Amnesty International with photographic and other documentary evidence of ritualized cannibal feasts among the participants in Liberia's internecine strife preceding the First Liberian Civil War. Amnesty International declined to publicize this material; the Secretary-General of the organization, Pierre Sane, said at the time in an internal communication that "what they do with the bodies after human rights violations are committed is not part of our mandate or concern". The existence of cannibalism on a wide scale in Liberia was subsequently verified.
A few years later, reports emerged of cannibal acts committed during the Second Liberian Civil War and the Sierra Leone Civil War.
Reports from the Belgian Congo indicate that cannibalism was still widely practised in some regions in the 1920s. Hermann Norden, an American who visited the Kasai region in 1923, found that "cannibalism was commonplace". People were afraid of walking outside of populated places because of the risk of being attacked, killed, and eaten. Norden talked with a Belgian who "admitted that it was quite likely he had occasionally been served human flesh without knowing what he was eating" – it was simply a dish that appeared on the tables from time to time.
Other travellers heard persistent rumours that there was still a certain underground trade in slaves, some of whom (adults and children alike) were regularly killed and then "cut up and cooked as ordinary meat", around both the Kasai and the Ubangi River. The colonial state seems to have done little to discourage or punish such acts. There are also reports that human flesh was sometimes sold at markets in both Kinshasa and Brazzaville, "right in the middle of European life."
Norden observed that cannibalism was so common that people talked about it quite "casual[ly]": "No stress was put upon it, nor horror shown. This person had died of fever; that one had been eaten. It was all a matter of the way one's luck held."
The culinary use of human flesh continued in some cases even after World War II. In 1950, a Belgian administrator ate a "remarkably delicious" dish, learning after he had finished "that the meat came from a young girl." A few years later, a Danish traveller was served a piece of the "soft and tender" flesh of a butchered woman.
During the Congo Crisis, which followed the country's independence in 1960, body parts of killed enemies were eaten and the flesh of war victims was sometimes sold for consumption. In Luluabourg (today Kananga), an American journalist saw a truck smeared with blood. A police commissioner investigating the scene told her that "sixteen women and children" had been lured into the truck in a nearby village, kidnapped, and "butchered ... for meat." She also talked with a Presbyterian missionary, who excused this act as due to "protein need.... The bodies of their enemies are the only source of protein available."
In conflict situations, cannibalism has persisted into the 21st century. During the first decade of the new century, cannibal acts were reported from the Second Congo War and the Ituri conflict in the northeast of the Democratic Republic of the Congo. According to UN investigators, fighters belonging to several factions "grilled" human bodies "on a barbecue"; young girls were boiled "alive in ... big pots filled with boiling water and oil" or "cut into small pieces ... and then eaten."
A UN human rights expert reported in July 2007 that sexual atrocities committed by rebel groups as well as by armed forces and national police against Congolese women go "far beyond rape" and include sexual slavery, forced incest, and cannibalism. In the Ituri region, much of the violence, which included "widespread cannibalism", was consciously directed against pygmies, who were believed to be relatively helpless and even considered subhuman by some other Congolese.
UN investigators also collected eyewitness accounts of cannibalism during a violent conflict that shook the Kasai region in 2016/2017. Various parts of killed enemies and beheaded captives were cooked and eaten, including their heads, thighs, and penises.
Cannibalism has also been reported from the Central African Republic, north of the Congo Basin. Jean-Bédel Bokassa ruled the country from 1966 to 1979, first as dictator and finally as self-declared emperor. Persistent rumours that he liked to dine on the flesh of opponents and political prisoners were substantiated by several testimonies during his eventual trial in 1986/1987. Bokassa's successor David Dacko stated that he had seen photographs of butchered bodies hanging in the cold-storage rooms of Bokassa's palace immediately after taking power in 1979. These or similar photos, said to show a walk-in freezer containing the bodies of schoolchildren arrested during protests in April 1979 and beaten to death in the Ngaragba Prison massacre, were also published in Paris Match magazine. During the trial, Bokassa's former chef testified that he had repeatedly cooked human flesh from the palace's freezers for his boss's table. While Bokassa was found guilty of murder in at least twenty cases, the charge of cannibalism was not taken into account for the final verdict: the consumption of human remains is considered a misdemeanor under CAR law, and all previously committed misdemeanors had been forgiven by a general amnesty declared in 1981.
Further acts of cannibalism were reported to have targeted the Muslim minority during the Central African Republic Civil War which started in 2012.
In the 1970s, the Ugandan dictator Idi Amin was reputed to practise cannibalism. More recently, the Lord's Resistance Army has been accused of routinely engaging in ritual or magical cannibalism. There are also reports that witch doctors in the country sometimes use body parts of children in their medicine.
Acts of cannibalism and forced cannibalism were reported during the South Sudanese Civil War.
Before 1931, The New York Times reporter William Seabrook, apparently disappointed that he had been unable to taste human flesh in West Africa, obtained from a hospital intern at the Sorbonne a chunk of this meat from the body of a healthy man killed in an accident, then cooked and ate it. He reported,
It was like good, fully developed veal, not young, but not yet beef. It was very definitely like that, and it was not like any other meat I had ever tasted. It was so nearly like good, fully developed veal that I think no person with a palate of ordinary, normal sensitiveness could distinguish it from veal. It was mild, good meat with no other sharply defined or highly characteristic taste such as for instance, goat, high game, and pork have. The steak was slightly tougher than prime veal, a little stringy, but not too tough or stringy to be agreeably edible. The roast, from which I cut and ate a central slice, was tender, and in color, texture, smell as well as taste, strengthened my certainty that of all the meats we habitually know, veal is the one meat to which this meat is accurately comparable.
Karl Denke, possibly Carl Großmann and Fritz Haarmann, as well as Joachim Kroll, were German murderers and cannibals active between the early 20th century and the 1970s. Armin Meiwes is a former computer repair technician who achieved international notoriety for killing and eating a voluntary victim in 2001, whom he had found via the Internet. After Meiwes and the victim jointly attempted to eat the victim's severed penis, Meiwes killed his victim and proceeded to eat a large amount of his flesh. He was arrested in December 2002. In January 2004, Meiwes was convicted of manslaughter and sentenced to eight years and six months in prison. Despite the victim's undisputed consent, the prosecutors successfully appealed this decision, and in a retrial that ended in May 2006, Meiwes was convicted of murder and sentenced to life imprisonment.
On July 23, 1988, Rick Gibson ate the flesh of another person in public. Because England does not have a specific law against cannibalism, he legally ate a canapé of donated human tonsils in Walthamstow High Street, London. A year later, on April 15, 1989, he publicly ate a slice of human testicle. When he tried to eat another slice of human testicle as "hors d'oeuvre" at the Pitt International Galleries in Vancouver on July 14, 1989, the police confiscated the testicle. However, the charge of publicly exhibiting a disgusting object was dropped, and two months later he finally ate the piece of human testicle on the steps of the Vancouver court house.
In 2008, a British model called Anthony Morley was imprisoned for the killing, dismemberment and partial cannibalisation of his lover, magazine executive Damian Oldfield.
In his book The Gulag Archipelago, the Soviet writer Aleksandr Solzhenitsyn described cases of cannibalism in the 20th-century Soviet Union. Of the famine in Povolzhie (1921–1922) he wrote: "That horrible famine was up to cannibalism, up to consuming children by their own parents – the famine, which Russia had never known even in the Time of Troubles [in 1601–1603]".
The historian Orlando Figes observes that "thousands of cases" of cannibalism were reported, while the number of cases that were never reported was doubtless even higher. In Pugachyov, "it was dangerous for children to go out after dark since there were known to be bands of cannibals and traders who killed them to eat or sell their tender flesh." An inhabitant of a nearby village stated: "There are several cafeterias in the village – and all of them serve up young children." This was no exception – Figes estimates "that a considerable proportion of the meat in Soviet factories in the Volga area ... was human flesh." Various gangs specialized in "capturing children, murdering them and selling the human flesh as horse meat or beef", with the buyers happy to have found a source of meat in a situation of extreme shortage and often willing not to "ask too many questions".
Cannibalism was also widespread during the Holodomor, a man-made famine in Soviet Ukraine between 1932 and 1933.
Survival was a moral as well as a physical struggle. A woman doctor wrote to a friend in June 1933 that she had not yet become a cannibal, but was "not sure that I shall not be one by the time my letter reaches you". The good people died first. Those who refused to steal or to prostitute themselves died. Those who gave food to others died. Those who refused to eat corpses died. Those who refused to kill their fellow man died. ... At least 2,505 people were sentenced for cannibalism in the years 1932 and 1933 in Ukraine, though the actual number of cases was certainly much higher.
Most cases of cannibalism were "necrophagy, the consumption of corpses of people who had died of starvation". But the murder of children for food was common as well. Many survivors told of neighbours who had killed and eaten their own children. One woman, asked why she had done this, "answered that her children would not survive anyway, but this way she would". She was arrested by the police. The police also documented cases of children being kidnapped, killed, and eaten, and "stories of children being hunted down as food" circulated in many areas. When nearly all grain and all kinds of animal meat had been exhausted, "a black market arose in human flesh" and it "may even have entered the official economy." The police kept a close eye on butcher shops and slaughterhouses, trying to prevent them from bringing human flesh into circulation. The Italian consul, Sergio Gradenigo, nevertheless reported from Kharkiv that the "trade of human meat becomes more active."
In March 1933, the secret police in Kiev Oblast collected "ten or more reports of cannibalism every day" but concluded that "in reality there are many more such incidents", most of which went unreported. Those found guilty of cannibalism were often "imprisoned, executed, or lynched". But while the authorities were well informed about the extent of cannibalism, they also tried to suppress this information from becoming widely known, the chief of the secret police warning "that written notes on the subject do not circulate among the officials where they might cause rumours".
The Holodomor was part of the Soviet famine of 1930–1933, which also devastated other parts of the Soviet Union. Multiple cases of cannibalism were also reported from Kazakhstan.
A few years later, starving people again resorted to cannibalism during the siege of Leningrad (1941–1944). About this time, Solzhenitsyn writes: "Those who consumed human flesh, or dealt with the human liver trading from dissecting rooms ... were accounted as the political criminals".
Of the building of the Northern Railway Labor Camp ("Sevzheldorlag"), Solzhenitsyn reports: "An ordinary hard working political prisoner almost could not survive at that penal camp. In the camp Sevzheldorlag (chief: colonel Klyuchkin) in 1946–47 there were many cases of cannibalism: they cut human bodies, cooked and ate."
The Soviet journalist Yevgenia Ginzburg was a long-term political prisoner who spent time in Soviet prisons, Gulag camps, and settlements from 1938 to 1955. In her memoir, Harsh Route (or Steep Route), she described a case in which she was directly involved during the late 1940s, after she had been moved to the prisoners' hospital:
The chief warder shows me the black smoked pot, filled with some food: "I need your medical expertise regarding this meat." I look into the pot, and can hardly keep myself from vomiting. The fibres of that meat are very small, and do not resemble anything I have seen before. The skin on some pieces bristles with black hair ... A former smith from Poltava, Kulesh worked together with Centurashvili. At this time, Centurashvili was only one month away from being discharged from the camp ... And suddenly he surprisingly disappeared ... The wardens searched for two more days, and then assumed that it was an escape case, though they wondered why, since his imprisonment period was almost over ... The crime was there. Approaching the fireplace, Kulesh killed Centurashvili with an axe, burned his clothes, then dismembered him and hid the pieces in snow, in different places, putting specific marks on each burial place. ... Just yesterday, one body part was found under two crossed logs.
The Aghori are Indian ascetics who believe that eating human flesh confers spiritual and physical benefits, such as prevention of ageing. They claim to eat only those who have voluntarily granted their body to the sect upon their death, but an Indian TV crew witnessed one Aghori feasting on a corpse discovered floating in the Ganges, and a member of the Dom caste reported that Aghori often take bodies from cremation ghats (or funeral pyres).
Cannibalism is documented to have occurred in rural China during the severe famine that resulted from the Great Leap Forward (1958–1962).
During Mao Zedong's Cultural Revolution (1966–1976), local governments' documents revealed hundreds of incidents of cannibalism for ideological reasons, including large-scale cannibalism during the Guangxi Massacre. Cannibal acts occurred at public events organized by local Communist Party officials, with people taking part in them in order to prove their revolutionary passion. The writer Zheng Yi documented many of these incidents, especially those in Guangxi, in his 1993 book, Scarlet Memorial.
Pills made of human flesh were said to be used by some Tibetan Buddhists, motivated by a belief that mystical powers were bestowed upon those who consumed Brahmin flesh.
In Joshua Oppenheimer's film The Look of Silence, several of the anti-Communist militias active in the Indonesian mass killings of 1965–66 claim that drinking blood from their victims prevented them from going mad.
During a massacre of the Madurese minority in the Indonesian part of Borneo in 1999, "more than 200 people, including young babies, [were] decapitated and cannibalised", according to reporter Richard Lloyd Parry. Parry saw "two arms, numerous pieces of heart and liver, and a dismembered torso being cooked over a fire by the side of the road" in a "human barbecue". He met a Dayak teenager who told him he had helped to kill and eat four Madurese people "because we hate the Madurese.... Mostly we shoot them first, and then we chop the body. It tastes just like chicken." A Dayak teacher explained that "when people do not respect our [traditions], they become enemies, and we don't consider our enemies to be human any more. They become animals in our eyes. And the Dayaks eat animals." Parry also saw at least seven severed heads, some of them apparently taken just hours before and placed on "oil drums on either side of the road" as trophies in a revival of the traditional practice of headhunting. The teenager he talked to assured him that "We don't kill babies", but only those "around 13 or 15" or older. However, he met a village chief who had "seen six or seven children with their heads cut off" and stated that "they kill everyone, including babies. They chop their heads off and they eat them."
When visiting a town market, Parry saw "a charred femur ... among the embers of a fire" and met a Dayak man who held "a lump of what he said was human meat" and then started to eat it. Unsure how to react, Parry asked about the taste and the man replied: "Delicious". Parry remarked that, after the first shock had passed, "the most devastating thing about cannibalism and headhunting is not the fear and the blood, but the terrible, profound banality."
Two years later, during the Sampit conflict, Dayaks went again "on a rampage of killing and decapitation with the aim of driving the Madurese from the province." According to their own reports, they "killed 2,000 Madurese, in many cases cutting off their heads as trophies, drinking their blood and cutting out their hearts and eating them on the spot." A Dayak spokesperson said that, because of their anger and resentment against the Madurese settlers, "They don't recognize whether they are women or children. They just see them as animals that have to be destroyed." A Madurese survivor mourned his murdered children and grandchildren: "They cut off their heads and then cut them up and took them away to eat." Police and army, though called to the scene, seem to have done little to stop the violence until at least 500 people were dead.
Reports of widespread cannibalism began to emerge from North Korea during the famine of the 1990s and subsequent ongoing starvation. Kim Jong-il was reported to have ordered a crackdown on cannibalism in 1996, but Chinese travellers reported in 1998 that cannibalism had occurred. Three people in North Korea were reported to have been executed for selling or eating human flesh in 2006. Further reports of cannibalism emerged in early 2013, including reports of a man executed for killing his two children for food.
There are conflicting claims about how widespread cannibalism was in North Korea. While refugees reported that it was widespread, Barbara Demick wrote in her book, Nothing to Envy: Ordinary Lives in North Korea (2010), that it did not seem to be.
The Korowai tribe of south-eastern Papua could be one of the last surviving tribes in the world engaging in cannibalism. A local cannibal cult killed and ate victims as late as 2012.
As in some other Papuan societies, the Urapmin people engaged in cannibalism in war. Notably, the Urapmin also had a system of food taboos wherein dogs could not be eaten and had to be kept from breathing on food, unlike humans, who could be eaten and with whom food could be shared.
"title": "Body parts and culinary practices"
},
{
"paragraph_id": 26,
"text": "In China, medical cannibalism was practised over centuries. People voluntary cut their own body parts, including parts of their livers, and boiled them to cure ailing relatives. Children were sometimes killed because eating their boiled hearts was considered a good way of extending one's life. Emperor Wuzong of Tang supposedly ordered provincial officials to send him \"the hearts and livers of fifteen-year-old boys and girls\" when he had become seriously ill, hoping in vain this medicine would cure him. Later private individuals sometimes followed his example, paying soldiers who kidnapped preteen children for their kitchen.",
"title": "Body parts and culinary practices"
},
{
"paragraph_id": 27,
"text": "When \"human flesh and organs were sold openly at the marketplace\" during the Taiping Rebellion in 1850–1864, human hearts became a popular dish, according to some who afterwards freely admitted having consumed them. According to a missionary's report from the brutal suppression of the Dungan Revolt of 1895–1896 in northwestern China, \"thousands of men, women and children were ruthlessly massacred by the imperial soldiers\" and \"many a meal of human hearts and livers was partaken of by soldiers\", supposedly out of a belief that this would give them \"the courage their enemies had displayed\".",
"title": "Body parts and culinary practices"
},
{
"paragraph_id": 28,
"text": "During the Cultural Revolution (1966–1976), hundreds of incidents of cannibalism occurred, mostly motivated by hatred against supposed \"class enemies\", but sometimes also by health concerns. In a case recorded by the local authorities, a school teacher in Mengshan County \"heard that consuming a 'beauty's heart' could cure disease\". He then chose a 13- or 14-year-old student of his and publicly denounced her as a member of the enemy faction, which was enough to get her killed by an angry mob. After the others had left, he \"cut open the girl's chest ..., dug out her heart, and took it home to enjoy\". In a further case that took place in Wuxuan County, likewise in the Guangxi region, three brothers were beaten to death as supposed enemies; afterwards their livers were cut out, baked, and consumed \"as medicine\". According to the Chinese author Zheng Yi, who researched these events, \"the consumption of human liver was mentioned at least fifty or sixty times\" in just a small number of archival documents. He talked with a man who had eaten human liver and told him that \"barbecued liver is delicious\".",
"title": "Body parts and culinary practices"
},
{
"paragraph_id": 29,
"text": "In World War II, Japanese soldiers ate the livers of killed Americans in the Chichijima incident.",
"title": "Body parts and culinary practices"
},
{
"paragraph_id": 30,
"text": "During a massacre of the Madurese minority in the Indonesian part of Borneo in 1999, reporter Richard Lloyd Parry met a young cannibal who had just participated in a \"human barbecue\" and told him without hesitation: \"It tastes just like chicken. Especially the liver – just the same as chicken.\" In 2013, during the Syrian civil war, Syrian rebel Abu Sakkar was filmed eating parts of the lung or liver of a government soldier while declaring that \"We will eat your hearts and your livers you soldiers of Bashar the dog\".",
"title": "Body parts and culinary practices"
},
{
"paragraph_id": 31,
"text": "A well-known case of mortuary cannibalism is that of the Fore tribe in New Guinea, which resulted in the spread of the prion disease kuru. Although the Fore's mortuary cannibalism was well-documented, the practice had ceased before the cause of the disease was recognized. However, some scholars argue that although post-mortem dismemberment was the practice during funeral rites, cannibalism was not. Marvin Harris theorizes that it happened during a famine period coincident with the arrival of Europeans and was rationalized as a religious rite.",
"title": "Medical aspects"
},
{
"paragraph_id": 32,
"text": "In 2003, a publication in Science received a large amount of press attention when it suggested that early humans may have practised extensive cannibalism. According to this research, genetic markers commonly found in modern humans worldwide suggest that today many people carry a gene that evolved as protection against the brain diseases that can be spread by consuming human brain tissue. A 2006 reanalysis of the data questioned this hypothesis, because it claimed to have found a data collection bias, which led to an erroneous conclusion. This claimed bias came from incidents of cannibalism used in the analysis not being due to local cultures, but having been carried out by explorers, stranded seafarers or escaped convicts. The original authors published a subsequent paper in 2008 defending their conclusions.",
"title": "Medical aspects"
},
{
"paragraph_id": 33,
"text": "Cannibalism features in the folklore and legends of many cultures and is most often attributed to evil characters or as extreme retribution for some wrongdoing. Examples include the witch in \"Hansel and Gretel\", Lamia of Greek mythology and the witch Baba Yaga of Slavic folklore.",
"title": "Myths, legends and folklore"
},
{
"paragraph_id": 34,
"text": "A number of stories in Greek mythology involve cannibalism, in particular the eating of close family members, e.g., the stories of Thyestes, Tereus and especially Cronus, who became Saturn in the Roman pantheon. The story of Tantalus is another example, though here a family member is prepared for consumption by others.",
"title": "Myths, legends and folklore"
},
{
"paragraph_id": 35,
"text": "The wendigo is a creature appearing in the legends of the Algonquian people. It is thought of variously as a malevolent cannibalistic spirit that could possess humans or a monster that humans could physically transform into. Those who indulged in cannibalism were at particular risk, and the legend appears to have reinforced this practice as taboo. The Zuni people tell the story of the Átahsaia – a giant who cannibalizes his fellow demons and seeks out human flesh.",
"title": "Myths, legends and folklore"
},
{
"paragraph_id": 36,
"text": "The wechuge is a demonic cannibalistic creature that seeks out human flesh appearing in the mythology of the Athabaskan people. It is said to be half monster and half human-like; however, it has many shapes and forms.",
"title": "Myths, legends and folklore"
},
{
"paragraph_id": 37,
"text": "William Arens, author of The Man-Eating Myth: Anthropology and Anthropophagy, questions the credibility of reports of cannibalism and argues that the description by one group of people of another people as cannibals is a consistent and demonstrable ideological and rhetorical device to establish perceived cultural superiority. Arens bases his thesis on a detailed analysis of various \"classic\" cases of cannibalism reported by explorers, missionaries, and anthropologists. He claims that all of them were steeped in racism, unsubstantiated, or based on second-hand or hearsay evidence. Though widely discussed, Arens's book generally failed to convince the academic community. Claude Lévi-Strauss observes that, in spite of his \"brilliant but superficial book ... [n]o serious ethnologist disputes the reality of cannibalism\". Shirley Lindenbaum notes that, while after \"Arens['s] ... provocative suggestion ... many anthropologists ... reevaluated their data\", the outcome was an improved and \"more nuanced\" understanding of where, why and under which circumstances cannibalism took place rather than a confirmation of his claims: \"Anthropologists working in the Americas, Africa, and Melanesia now acknowledge that institutionalized cannibalism occurred in some places at some times. Archaeologists and evolutionary biologists are taking cannibalism seriously.\"",
"title": "Scepticism"
},
{
"paragraph_id": 38,
"text": "Lindenbaum and others point out that Arens displays a \"strong ethnocentrism\". His refusal to admit that institutionalized cannibalism ever existed seems to be motivated by the implied idea \"that cannibalism is the worst thing of all\" – worse than any other behaviour people engaged in, and therefore uniquely suited to vilifying others. Kajsa Ekholm Friedman calls this \"a remarkable opinion in a culture [the European/American one] that has been capable of the most extreme cruelty and destructive behavior, both at home and in other parts of the world.\"",
"title": "Scepticism"
},
{
"paragraph_id": 39,
"text": "She observes that, contrary to European values and expectations, \"in many parts of the Congo region there was no negative evaluation of cannibalism. On the contrary, people expressed their strong appreciation of this very special meat and could not understand the hysterical reactions from the white man's side.\" And why indeed, she goes on to ask, should they have had the same negative reactions to cannibalism as Arens and his contemporaries? Implicitly he assumes that everybody throughout human history must have shared the strong taboo placed by his own culture on cannibalism, but he never attempts to explain why this should be so, and \"neither logic nor historical evidence justifies\" this viewpoint, as Christian Siefkes commented.",
"title": "Scepticism"
},
{
"paragraph_id": 40,
"text": "Accusations of cannibalism could be used to characterize indigenous peoples as \"uncivilized\", \"primitive\", or even \"inhuman.\" While this means that the reliability of reports of cannibal practices must be carefully evaluated especially if their wording suggests such a context, many actual accounts do not fit this pattern. The earliest firsthand account of cannibal customs in the Caribbean comes from Diego Álvarez Chanca, who accompanied Christopher Columbus on his second voyage. His description of the customs of the Caribs of Guadeloupe includes their cannibalism (men killed or captured in war were eaten, while captured boys were \"castrated [and used as] servants until they gr[e]w up, when they [were] slaughtered\" for consumption), but he nevertheless notes \"that these people are more civilized than the other islanders\" (who did not practice cannibalism). Nor was he an exception. Among the earliest reports of cannibalism in the Caribbean and the Americas, there are some (like those of Amerigo Vespucci) that seem to mostly consist of hearsay and \"gross exaggerations\", but others (by Chanca, Columbus himself, and other early travellers) show \"genuine interest and respect for the natives\" and include \"numerous cases of sincere praise\".",
"title": "Scepticism"
},
{
"paragraph_id": 41,
"text": "Reports of cannibalism from other continents follow similar patterns. Condescending remarks can be found, but many Europeans who described cannibal customs in Central Africa wrote about those who practised them in quite positive terms, calling them \"splendid\" and \"the finest people\" and not rarely, like Chanca, actually considering them as \"far in advance of\" and \"intellectually and morally superior\" to the non-cannibals around them. Writing from Melanesia, the missionary George Brown explicitly rejects the European prejudice of picturing cannibals as \"particularly ferocious and repulsive\", noting instead that many cannibals he met were \"no more ferocious than\" others and \"indeed ... very nice people\".",
"title": "Scepticism"
},
{
"paragraph_id": 42,
"text": "Reports or assertions of cannibal practices could nevertheless be used to promote the use of military force as a means of \"civilizing\" and \"pacifying\" the \"savages\". During the Spanish conquest of the Aztec Empire and its earlier conquests in the Caribbean there were widespread reports of cannibalism, and cannibals became exempted from Queen Isabella's prohibition on enslaving the indigenous. Another example of the sensationalism of cannibalism and its connection to imperialism occurred during Japan's 1874 expedition to Taiwan. As Robert Eskildsen describes, Japan's popular media \"exaggerated the aborigines' violent nature\", in some cases by wrongly accusing them of cannibalism.",
"title": "Scepticism"
},
{
"paragraph_id": 43,
"text": "This Horrid Practice: The Myth and Reality of Traditional Maori Cannibalism (2008) by New Zealand historian Paul Moon received a hostile reception by some Māori, who felt the book tarnished their whole people. However, the factual accuracy of the book was not seriously disputed and even critics such as Margaret Mutu grant that cannibalism was \"definitely\" practised and that it was \"part of our [Māori] culture.\"",
"title": "Scepticism"
},
{
"paragraph_id": 44,
"text": "Among modern humans, cannibalism has been practised by various groups. It was practised by humans in Prehistoric Europe, Mesoamerica, South America, among Iroquoian peoples in North America, Maori in New Zealand, the Solomon Islands, parts of West Africa and Central Africa, some of the islands of Polynesia, New Guinea, Sumatra, and Fiji. Evidence of cannibalism has been found in ruins associated with the Ancestral Puebloans of the Southwestern United States as well (at Cowboy Wash in Colorado).",
"title": "History"
},
{
"paragraph_id": 45,
"text": "There is evidence, both archaeological and genetic, that cannibalism has been practised for hundreds of thousands of years by early Homo sapiens and archaic hominins. Human bones that have been \"de-fleshed\" by other humans go back 600,000 years. The oldest Homo sapiens bones (from Ethiopia) show signs of this as well. Some anthropologists, such as Tim D. White, suggest that cannibalism was common in human societies prior to the beginning of the Upper Paleolithic period. This theory is based on the large amount of \"butchered human\" bones found in Neanderthal and other Lower/Middle Paleolithic sites.",
"title": "History"
},
{
"paragraph_id": 46,
"text": "It seems likely that not all instances of prehistoric cannibalism were due to the same reason, just as cannibalistic acts known from the historical record have been motivated by a variety of reasons. One suggested reason for cannibalism in the Lower and Middle Paleolithic have been food shortages. It has been also suggested that removing dead bodies through ritual (funerary) cannibalism was a means of predator control, aiming to eliminate predators' and scavengers' access to hominid (and early human) bodies. Jim Corbett proposed that after major epidemics, when human corpses are easily accessible to predators, there are more cases of man-eating leopards, so removing dead bodies through ritual cannibalism (before the cultural traditions of burying and burning bodies appeared in human history) might have had practical reasons for hominids and early humans to control predation.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "The oldest archaeological evidence of hominid cannibalism comes from the Gran Dolina cave in northern Spain. The remains of several individuals who died about 800,000 years ago and may have belongs to the Homo antecessor species show unmistakable signs of having been butchered and consumed in the same way as animals whose bones were also found at the site. They belong to at least eleven individuals, all of which were young (ranging from infancy to late teenhood). A study of this case considers it an instance of \"nutritional\" cannibalism, where individuals belonging to hostile or unrelated groups were hunted, killed, and eaten much like animals. Based on the placement and processing of human and animal remains, the authors conclude that cannibalism was likely a \"repetitive behavior over time as part of a culinary tradition\", not caused by starvation or other exceptional circumstances. They suggest that young individuals (more than half of which were children under ten) were targeted because they \"posed a lower risk for hunters\" and because this was an effective means for limiting the growth of competing groups.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "Several sites in Croatia, France, and Spain yield evidence that the Neanderthals sometimes practised cannibalism, though the interpretation of some of the finds remains controversial.",
"title": "History"
},
{
"paragraph_id": 49,
"text": "Neanderthals could also fall victim to cannibalism by anatomically modern humans. Evidence found in southwestern France indicates that the latter butchered and ate a Neanderthal child about 30,000 years ago; it is unknown whether the child was killed by them or died of other reasons. The find has been considered as strengthening the conjecture that modern humans might have hunted Neanderthals and in this way contributed to their extinction.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "In Gough's Cave, England, remains of human bones and skulls, around 14,700 years old, suggest that cannibalism took place amongst the people living in or visiting the cave, and that they may have used human skulls as drinking vessels.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "The archaeological site of Herxheim in southwestern Germany was a ritual center and a mass grave formed by people of the Linear Pottery culture in Neolithic Europe. It contained the scattered remains of more than 1000 individuals from different, in some cases faraway regions, who died around 5000 BCE. Whether they were war captives or human sacrifices is unclear, but the evidence indicates that their corpses were spit-roasted whole and then consumed.",
"title": "History"
},
{
"paragraph_id": 52,
"text": "At Fontbrégoua Cave in southeastern France, the remains of six people who lived about 7,000 years ago were found (two children, one adolescent, and three adults), in addition to animal bones. The patterns of cut marks indicate that both humans and animals were skinned and processed in similar ways. Since the human victims were all processed at the same time, the main excavator, Paola Villa, suspects that they all belonged to the same family or extended family and were killed and butchered together, probably during some kind of violent conflict. Others have argued that the traces were caused by defleshing rituals preceding a secondary burial, but the fact that both humans and wild and domestic animals were processed in the same way makes this unlikely; moreover, Villa argues that the observed traces better fit a typical butchering process than a secondary burial.",
"title": "History"
},
{
"paragraph_id": 53,
"text": "Researchers have also found physical evidence of cannibalism from more recent times, including from Prehistoric Britain. In 2001, archaeologists at the University of Bristol found evidence of cannibalism practised around 2000 years ago in Gloucestershire, South West England. This is in agreement with Ancient Roman reports that the Celts in Britain practised human sacrifice, killing and eating captured enemies as well as convicted criminals.",
"title": "History"
},
{
"paragraph_id": 54,
"text": "Cannibalism is mentioned many times in early history and literature. The oldest written reference may be from the tomb of the ancient Egyptian king Unas (24th century BCE). It contained a hymn in praise of the king portraying him as a cannibal who eats both \"men\" and \"gods\", thus indicating an attitude towards cannibalism quite different from the modern one.",
"title": "History"
},
{
"paragraph_id": 55,
"text": "Herodotus claimed in his Histories (5th century BCE) that after eleven days' voyage up the Borysthenes (Dnieper River) one reached a desolated land that extended for a long way, followed by a country of man-eaters (other than the Scythians), and beyond it by another desolated and uninhabited area.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "The Stoic philosopher Chrysippus approved of eating one's dead relatives in a funerary ritual, noting that such rituals were common among many peoples.",
"title": "History"
},
{
"paragraph_id": 57,
"text": "Cassius Dio recorded cannibalism practised by the bucoli, Egyptian tribes led by Isidorus against Rome. They sacrificed and consumed two Roman officers in a ritualistic fashion, swearing an oath over their entrails.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "According to Appian, during the Roman siege of Numantia in the 2nd century BCE, the population of Numantia (in today's Spain) was reduced to cannibalism and suicide. Cannibalism was also reported by Josephus during the siege of Jerusalem in 70 CE.",
"title": "History"
},
{
"paragraph_id": 59,
"text": "Jerome, in his letter Against Jovinianus (written 393 CE), discusses how people came to their present condition as a result of their heritage, and lists several examples of peoples and their customs. In the list, he mentions that he has heard that the Attacotti (in Britain) eat human flesh and that the Massagetae and Derbices (two Central Asian peoples) kill and eat old people, considering this a more desirable fate than dying of old age and illness.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "There is universal agreement that some Mesoamerican people practised human sacrifice, but there is a lack of scholarly consensus as to whether cannibalism in pre-Columbian America was widespread. At one extreme, the anthropologist Marvin Harris, author of Cannibals and Kings, has suggested that the flesh of the victims was a part of an aristocratic diet as a reward, since the Aztec diet was lacking in proteins. While most historians of the pre-Columbian era accept that there was ritual cannibalism related to human sacrifices, they often reject suggestions that human flesh could have been a significant portion of the Aztec diet. Cannibalism was also associated with acts of warfare, and has been interpreted as an element of blood revenge in war.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "When the Moroccan explorer Ibn Battuta visited the Mali Empire in the 1350s, he was surprised to see sultan Sulayman give \"a slave girl as part of his reception-gift\" to a group of warriors from a cannibal region who had come to visit his court. \"They slaughtered her and ate her and smeared their faces and hands with her blood and came in gratitude to the sultan.\" He was told that the sultan did so every time he received the cannibal guests. Though a Muslim like Ibn Battuta himself, he apparently considered catering to his visitors' preferences more important than whatever reservations he may have had about the practice. Other Muslim authors writing around that time also report that cannibalism was practised in some West Africa regions and that slave girls were sometimes slaughtered for food, since \"their flesh is the best thing we have to eat.\"",
"title": "History"
},
{
"paragraph_id": 62,
"text": "Cases of cannibalism were recorded during the First Crusade, as there are various accounts of crusaders consuming the bodies of their dead opponents following the sieges of Antioch and of Ma'arra in 1097–1098. While the Christian sources all explain these acts as due to hunger, Amin Maalouf is sceptical of this justification, arguing that that the crusaders' behaviour indicates they might have been driven by \"fanaticism\" rather than, or in addition to \"necessity\". Thomas Asbridge states that, while the \"cannibalism at Marrat is among the most infamous of all the atrocities perpetrated by the First Crusaders\", it nevertheless had \"some positive effects on the crusaders' short-term prospects\", since reports of their brutality convinced many Muslim commanders to accept truces rather than trying to fight them.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "During Europe's Great Famine of 1315–1317, there were various reports of cannibalism among starving people.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "Charges of cannibalism were levied against the Qizilbash of the Safavid Ismail I.",
"title": "History"
},
{
"paragraph_id": 65,
"text": "Cannibalism has been repeatedly recorded throughout China's well-documented history. The sinologist Bengt Pettersson found references to more than three hundred different episodes of cannibalism in the Official Dynastic Histories alone. Most episodes occurred in the context of famine or war, or were otherwise motivated by vengeance or medical reasons. More than half of the episodes recorded in the Official Histories describe cases motivated by food scarcity during famines or in times of war. Pettersson observes that the records of such events \"neither encouraged nor condemned\" the consumption of human flesh under such circumstances, rather accepting it as an unavoidable way of \"coping with a life-threatening situation\".",
"title": "History"
},
{
"paragraph_id": 66,
"text": "In other cases, cannibalism was an element of vengeance or punishment – eating the hearts and livers, or sometimes the whole bodies, of killed enemies was a way of further humiliating them and sweetening the revenge. Both private individuals and state officials engaged in such acts, especially from the 4th to the 10th century CE, but in some cases right until the end of Imperial China (in 1912). More than 70 cases are listed in the Official Histories alone. In warfare, human flesh could be eaten out of a lack of other provisions, but also out of hatred against the enemy or to celebrate one's victory. Not just enemy fighters, but also their \"servants and concubines were all steamed and eaten\", according to one account.",
"title": "History"
},
{
"paragraph_id": 67,
"text": "At least since the Tang dynasty (618–907), the consumption of human flesh was considered a highly effective medical treatment, recommended by the Bencao Shiyi, an influential medical reference book published in the early 8th century, as well as in similar later manuals. Together with the ethical ideal of filial piety, according to which young people were supposed to do everything in their power to support their parents and parents-in-law, this idea lead to a unique form of voluntary cannibalism, in which a young person cut some of the flesh out of their body and gave it to an ill parent or parent-in-law for consumption. The majority of the donors were women, frequently daughters-in-law of the patient.",
"title": "History"
},
{
"paragraph_id": 68,
"text": "The devoted daughter-in-law would tie her thigh or her arm very tightly with a piece of clothing. She would then use a very sharp knife to quickly slice off a piece from her upper arm or upper thigh. The flesh would immediately be mixed in with soup or gruel, which had been heated in preparation, and this would then be offered to the dying mother-in-law or father-in-law.",
"title": "History"
},
{
"paragraph_id": 69,
"text": "The Official Histories describe more than 110 cases of such voluntary offerings that took place between the early 7th and the early 20th century. While these acts were (at least nominally) voluntary and the donors usually (though not always) survived them, several sources also report of children and adolescents who were killed so that their flesh could be eaten for medical purposes.",
"title": "History"
},
{
"paragraph_id": 70,
"text": "During the Tang dynasty, cannibalism was supposedly resorted to by rebel forces early in the period (who were said to raid neighbouring areas for victims to eat), and (on a large scale) by both soldiers and civilians during the siege of Suiyang, a decisive episode of the An Lushan Rebellion. Eating an enemy's heart and liver was also repeatedly mentioned as a feature of both official punishments and private vengeance. The final decades of the dynasty were marked by large-scale rebellions, during which both rebels and regular soldiers butchered prisoners for food and killed and ate civilians. Sometimes \"the rebels captured by government troops were [even] sold as food\", according to several of the Official Histories, while warlords likewise relied on the sale of human flesh to finance their rebellions. An Arab traveller visiting China during this time noted with surprise: \"cannibalism [is] permissible for them according to their legal code, for they trade in human flesh in their markets.\"",
"title": "History"
},
{
"paragraph_id": 71,
"text": "References to cannibalizing the enemy also appear in poetry written in the subsequent Song dynasty (960–1279) – for example, in Man Jiang Hong – although they are perhaps meant symbolically, expressing hatred towards the enemy. The Official Histories covering this period record various cases of rebels and bandits eating the flesh of their victims.",
"title": "History"
},
{
"paragraph_id": 72,
"text": "The flesh of executed criminals was sometimes cut off and sold for consumption. During the Tang dynasty a law was enacted that forbade this practice, but whether the law was effectively enforced is unclear. The sale of human flesh is also repeatedly mentioned during famines, in accounts ranging from the 6th to the 15th century. Several of these accounts mention that animal flesh was still available, but had become so expensive that few could afford it. Dog meat was five times as expensive as human flesh, according to one such report. Sometimes, poor men sold their own wives or children to butchers who slaughtered them and sold their flesh. Cannibalism in famine situations seems to have been generally tolerated by the authorities, who did not intervene when such acts occurred.",
"title": "History"
},
{
"paragraph_id": 73,
"text": "A number of accounts suggests that human flesh was occasionally eaten for culinary reasons. An anecdote told about Duke Huan of Qi (7th century BCE) claims that he was curious about the taste of \"steamed child\", having already eaten everything else. His cook supposedly killed his own son to prepare the dish, and Duke Huan judged it to be \"the best food of all\". In later times, wealthy men, among them a son of the 4th-century emperor Shi Hu and an \"open and high-spirited\" man who lived in the 7th century CE, served the flesh of purchased women or children during lavish feasts. The sinologist Robert des Rotours [fr] observes that while such acts were not common, they do not seem to have been rare exceptions, and the hosts apparently did not have to face ostracism or legal prosection. Key Ray Chong even concludes that \"learned cannibalism was often practiced ... for culinary appreciation, and exotic dishes [of human flesh] were prepared for jaded upper-class palates\".",
"title": "History"
},
{
"paragraph_id": 74,
"text": "The Official Histories mention 10th-century officials who liked to eat the flesh of babies and children, and during the Jin dynasty (1115–1234), human flesh seems to have been readily available at the home of a general, who supposedly served it to one of his guests as a practical joke. Accounts from the 12th to 14th centuries indicate that both soldiers and writers praised this flesh as particularly delicious, considering especially children's flesh as unsurpassable in taste.",
"title": "History"
},
{
"paragraph_id": 75,
"text": "Pettersson observes that people generally seem to have had less reservations about the consumption of human flesh than one might expect today. While survival cannibalism during famines was regarded a lamentable necessity, accounts explaining the practice as due to other reasons, such as vengeance or filial piety, were generally even positive.",
"title": "History"
},
{
"paragraph_id": 76,
"text": "European explorers and colonizers brought home many stories of cannibalism practised by the native peoples they encountered. In Spain's overseas expansion to the New World, the practice of cannibalism was reported by Christopher Columbus in the Caribbean islands, and the Caribs were greatly feared because of their supposed practice of it. Queen Isabel of Castile had forbidden the Spaniards to enslave the indigenous, unless they were \"guilty\" of cannibalism. The accusation of cannibalism became a pretext for attacks on indigenous groups and justification for the Spanish conquest. In Yucatán, shipwrecked Spaniard Jerónimo de Aguilar, who later became a translator for Hernán Cortés, reported to have witnessed fellow Spaniards sacrificed and eaten, but escaped from captivity where he was being fattened for sacrifice himself. In the Florentine Codex (1576) compiled by Franciscan Bernardino de Sahagún from information provided by indigenous eyewitnesses has questionable evidence of Mexica (Aztec) cannibalism. Franciscan friar Diego de Landa reported on Yucatán instances.",
"title": "History"
},
{
"paragraph_id": 77,
"text": "In early Brazil, there is reportage of cannibalism among the Tupinamba. It is recorded about the natives of the captaincy of Sergipe in Brazil: \"They eat human flesh when they can get it, and if a woman miscarries devour the abortive immediately. If she goes her time out, she herself cuts the navel-string with a shell, which she boils along with the secondine [i.e. placenta], and eats them both.\" (see human placentophagy).",
"title": "History"
},
{
"paragraph_id": 78,
"text": "The 1913 Handbook of Indians of Canada (reprinting 1907 material from the Bureau of American Ethnology) claims that North American natives practising cannibalism included",
"title": "History"
},
{
"paragraph_id": 79,
"text": "the Montagnais, and some of the tribes of Maine; the Algonkin, Armouchiquois, Iroquois, and Micmac; farther west the Assiniboine, Cree, Foxes, Chippewa, Miami, Ottawa, Kickapoo, Illinois, Sioux, and Winnebago; in the south the people who built the mounds in Florida, and the Tonkawa, Attacapa, Karankawa, Caddo, and Comanche; in the northwest and west, portions of the continent, the Thlingchadinneh and other Athapascan tribes, the Tlingit, Heiltsuk, Kwakiutl, Tsimshian, Nootka, Siksika, some of the Californian tribes, and the Ute. There is also a tradition of the practice among the Hopi, and mentions of the custom among other tribes of New Mexico and Arizona. The Mohawk, and the Attacapa, Tonkawa, and other Texas tribes were known to their neighbours as 'man-eaters'.",
"title": "History"
},
{
"paragraph_id": 80,
"text": "The forms of cannibalism described included both resorting to human flesh during famines and ritual cannibalism, the latter usually consisting of eating a small portion of an enemy warrior. From another source, according to Hans Egede, when the Inuit killed a woman accused of witchcraft, they ate a portion of her heart.",
"title": "History"
},
{
"paragraph_id": 81,
"text": "As with most lurid tales of native cannibalism, these stories are treated with a great deal of scrutiny, as accusations of cannibalism were often used as justifications for the subjugation or destruction of \"savages\". The historian Patrick Brantlinger suggests that Indigenous peoples that were colonized were being dehumanized as part of the justification for the atrocities.",
"title": "History"
},
{
"paragraph_id": 82,
"text": "This period of time was also rife with instances of explorers and seafarers resorting to cannibalism for survival. There is archaeological and written evidence for English settlers' cannibalism in 1609 in the Jamestown Colony under famine conditions, during a period which became known as Starving Time.",
"title": "History"
},
{
"paragraph_id": 83,
"text": "Sailors shipwrecked or lost at sea repeatedly resorted to cannibalism to face off starvation. The survivors of the sinking of the French ship Méduse in 1816 resorted to cannibalism after four days adrift on a raft. Their plight was made famous by Théodore Géricault's painting Raft of the Medusa. After a whale sank the Essex of Nantucket on November 20, 1820, the survivors, in three small boats, resorted, by common consent, to cannibalism in order for some to survive. This event became an important source of inspiration for Herman Melville's Moby-Dick.",
"title": "History"
},
{
"paragraph_id": 84,
"text": "The case of R v Dudley and Stephens (1884) is an English criminal case which dealt with four crew members of an English yacht, the Mignonette, who were cast away in a storm some 2,600 kilometres (1,600 mi) from the Cape of Good Hope. After several days, one of the crew, a seventeen-year-old cabin boy, fell unconscious due to a combination of the famine and drinking seawater. The others (one possibly objecting) decided to kill him and eat him. They were picked up four days later. Two of the three survivors were found guilty of murder. A significant outcome of this case was that necessity in English criminal law was determined to be no defence against a charge of murder. This was a break with the traditional understanding among sailors, which had been that selecting a victim for killing and consumption was acceptable in a starvation situation as long as lots were drawn so that all faced an equal risk of being killed.",
"title": "History"
},
{
"paragraph_id": 85,
"text": "On land, travellers through sparsely inhabited regions and explorers of unknown areas sometimes ate human flesh after running out of other provisions. In a famous example from the 1840s, the members of Donner Party found themselves stranded by snow in the Donner Pass, a high mountain pass in California, without adequate supplies during the Mexican–American War, leading to several instances of cannibalism, including the murder of two young Native American men for food. Sir John Franklin's lost polar expedition, which took place at approximately the same time, is another example of cannibalism out of desperation.",
"title": "History"
},
{
"paragraph_id": 86,
"text": "In frontier situations where there was no strong authority, some individuals got used to killing and eating others even in situations where other food would have been available. One notorious case was the mountain man Boone Helm, who become known as \"The Kentucky Cannibal\" for eating several of his fellow travellers, from 1850 until his eventual hanging in 1864.",
"title": "History"
},
{
"paragraph_id": 87,
"text": "The Leopard Society was a cannibalistic secret society that existed until the mid-1900s and was active mostly in regions that today belong to Sierra Leone, Liberia and Ivory Coast. The Leopard men would dress in leopard skins and waylay travellers with sharp claw-like weapons in the form of leopards' claws and teeth. The victims' flesh would be cut from their bodies and distributed to members of the society.",
"title": "History"
},
{
"paragraph_id": 88,
"text": "Cannibalism was practised widely in the some parts of the Congo Basin, though it was by no means universal. Some peoples, such as the Bakongo, rejected the practice altogether. In some other regions human flesh was eaten \"only occasionally to mark a particularly significant ritual occasion, but in other societies in the Congo, perhaps even a majority by the late nineteenth century, people ate human flesh whenever they could, saying that it was far tastier than other meat\", notes the anthropologist Robert B. Edgerton.",
"title": "History"
},
{
"paragraph_id": 89,
"text": "Many people not only freely admitted eating human flesh, but were surprised when they heard that Europeans did not eat it. Emil Torday observed: \"They are not ashamed of cannibalism, and openly admit that they practise it because of their liking for human flesh\", with the primary reason for cannibalism being a \"gastronomic\" preference for such dishes. Torday once received \"a portion of a human thigh\" sent as a well-intended gift, and other Europeans were offered pieces of human flesh in gestures of hospitality. People expected to be rewarded with fresh human flesh for services well performed and were disappointed when they received something else instead.",
"title": "History"
},
{
"paragraph_id": 90,
"text": "In addition to enemies killed or captured in war, slaves were frequent victims. Many \"healthy children\" had to die \"to provide a feast for their owners\". Young slave children were at particular risk since they were in low demand for other purposes and since their flesh was widely praised as especially delicious, \"just as many modern meat eaters prefer lamb over mutton and veal over beef\". Such acts were not considered controversial – people did not understand why Europeans objected to the killing of slaves, while themselves killing and eating goats; they argued that both were the \"property\" of their owners, to be used as it pleased them.",
"title": "History"
},
{
"paragraph_id": 91,
"text": "A third group of victims were persons from other ethnic groups, who in some areas were \"hunt[ed] for food\" just like animals. Many of the victims, who were usually killed with poisoned arrows or with clubs, were \"women and children ... who had ventured too far from home while gathering firewood or fetching drinking water\" and who were targeted \"because they were easier to overpower\" and also considered tastier than adult men.",
"title": "History"
},
{
"paragraph_id": 92,
"text": "In some regions there was a regular trade in slaves destined to be eaten, and the flesh of recently butchered slaves was available for purchase as well. Some people fattened slave children to sell them for consumption; if such a child became ill and lost too much weight, their owner drowned them in the nearest river instead of wasting further food on them, as a French missionary once witnessed. Human flesh not sold the same day was smoked, so it could be \"sold at leisure\" during subsequent weeks. Europeans were often hesitant to buy smoked meat since they knew that the \"smoking of human flesh to preserve it was ... widespread\", but once meat was smoked, its origin was hard to determine.",
"title": "History"
},
{
"paragraph_id": 93,
"text": "Instead of being killed quickly, \"persons to be eaten often had both of their arms and legs broken and were made to sit up to their necks in a stream for [up to] three days, a practice said to make their flesh more tender, before they were killed and cooked.\" Both adults and children, and also animals such as birds and monkeys, were routinely submitted to this treatment prior to being slaughtered.",
"title": "History"
},
{
"paragraph_id": 94,
"text": "Various reports indicate that living slaves were exposed on marketplaces, so that purchasers could choose which body parts to buy before the victim was butchered and the flesh distributed.",
"title": "History"
},
{
"paragraph_id": 95,
"text": "It often happens that the poor creature destined for the knife is exposed for sale in the market. He walks to and fro and epicures come to examine him. They describe the parts they prefer, one the arm, one the leg, breast, or head. The portions which are purchased are marked off with lines of coloured ochre. When the entire body is sold, the wretch is slain.",
"title": "History"
},
{
"paragraph_id": 96,
"text": "This custom, reported around both the central Congo River and the Ubangi in the north, seem to have been motivated by a desire to get fresh rather than smoked flesh, since without refrigeration there was no other way to preserve flesh from spoiling quickly.",
"title": "History"
},
{
"paragraph_id": 97,
"text": "Killed or captured enemies made another sort of victims, even during wars fought by the colonial state. During the 1892–1894 war between the Congo Free State and the Swahili–Arab city-states of Nyangwe and Kasongo in Eastern Congo, there were reports of widespread cannibalization of the bodies of defeated combatants by the Batetela allies of the Belgian commander Francis Dhanis. In April 1892, 10,000 Batetela, under the command of Gongo Lutete, joined forces with Dhanis in a campaign against the Swahili–Arab leaders Sefu and Mohara. After one early skirmish in the campaign, Dhanis's medical officer, Captain Sidney Langford Hinde, \"noticed that the bodies of both the killed and wounded had vanished.\" When fighting broke out again, Hinde saw his Batetela allies drop human arms, legs and heads on the road; now he had to accept that they had really \"carried them off for food\", which he had initially doubted.",
"title": "History"
},
{
"paragraph_id": 98,
"text": "According to Hinde, the conquest of Nyangwe was followed by \"days of cannibal feasting\" during which hundreds were eaten, with only their heads being kept as mementos. During this time, Lutete \"hid himself in his quarters, appalled by the sight of thousands of men smoking human hands and human chops on their camp fires, enough to feed his army for many days.\" Hinde also noted that the Batetela town Ngandu had \"at least 2,000 polished human skulls\" as a \"solid white pavement in front\" of its gates, with human skulls crowning every post of the stockade.",
"title": "History"
},
{
"paragraph_id": 99,
"text": "Soon after, Nyangwe's surviving population rose in a rebellion, during whose brutal suppression a thousand rioters were killed by the new government. One young Belgian officer wrote home: \"Happily Gongo's men ... ate them up [in a few hours]. It's horrible but exceedingly useful and hygienic.... I should have been horrified at the idea in Europe! but it seems quite natural to me here. Don't show this letter to anyone indiscreet\". Hinde too commented approvingly on the thoroughness with which the cannibals \"disposed of all the dead, leaving nothing even for the jackals, and thus sav[ing] us, no doubt, from many an epidemic.\" Generally the Free State administration seems to have done little to suppress cannibal customs, sometimes even tolerating or facilitating them among its own auxiliary troops and allies.",
"title": "History"
},
{
"paragraph_id": 100,
"text": "In August 1903, the UK diplomat Roger Casement wrote from Lake Tumba to a consular colleague: \"The people round here are all cannibals.... There are also dwarfs (called Batwas) in the forest who are even worse cannibals than the taller human environment. They eat man flesh raw! It's a fact.\" He added that assailants would \"bring down a dwarf on the way home, for the marital cooking pot.... The Dwarfs, as I say, dispense with cooking pots and eat and drink their human prey fresh cut on the battlefield while the blood is still warm and running. These are not fairy tales ..., but actual gruesome reality in the heart of this poor, benighted savage land.\"",
"title": "History"
},
{
"paragraph_id": 101,
"text": "The origins of Congolese cannibalism are lost in time. The oldest known references to it can be found in Filippo Pigafetta's Report of the Kingdom of Congo, published in the late 16th century based on the memories of Duarte Lopez, a Portuguese trader who had lived for several years in the Kingdom of Kongo. Lopez reported that farther up the Congo River, there lived a people who ate both killed enemies and those of their slaves which they could not sell for a \"good price\".",
"title": "History"
},
{
"paragraph_id": 102,
"text": "Oral records indicate that, already at a time when slavery was not widespread in the Congo Basin, people assumed that anyone sold as a slave would likely be eaten, \"because cannibalism was common, and slaves were purchased especially for such purposes\". In the 19th century, warfare and slave raids increased in the Congo Basin as a result of the international demand for slaves, who could no longer be so easily captured nearer to the coasts. As a result, the consumption of slaves increased as well, since most of those sold in the Atlantic slave trade were young and healthy individuals aged from 14 to 30, and similar preferences existed in the Arab–Swahili slave trade. However, many of the captives were younger, older, or otherwise considered less saleable, and such victims were often eaten by the slave raiders or sold to cannibals who purchased them as \"meat\".",
"title": "History"
},
{
"paragraph_id": 103,
"text": "Most of the accounts of cannibalism in the Congo are from the late 19th century, when the Atlantic slave trade had come to a halt, but slavery still existed in Africa and the Arab world. Various reports indicate that around the Ubangi River, slaves were frequently exchanged against ivory, which was then exported to Europe or the Americas, while the slaves were eaten. Some European traders seem to have directly and knowingly taken part in these deadly transactions, while others turned a blind eye. The local elephant hunters preferred the flesh especially of young human beings – four to sixteen was the preferred age range, according to one trader – \"because it was not only more tender, but also much quicker to cook\" than the meat of elephants or other large animals.",
"title": "History"
},
{
"paragraph_id": 104,
"text": "While sceptics such as William Arens sometimes claim that there are no credible eyewitness accounts of cannibal acts, there are numerous such accounts from the Congo. David Livingstone \"saw human parts being cooked with bananas, and many other Europeans\" – among them Hinde – \"reported seeing cooked human remains lying around abandoned fires.\" Soldiers of the German explorer Hermann Wissmann saw how people captured and wounded in a slave raid were shot by a Swahili–Arab leader and then handed over \"to his auxiliary troops, who ... cut them in pieces and dragged them to the fire to serve as their supper\". Visiting a village near the Aruwimi River, the British artist Herbert Ward saw a man \"carrying four large lumps of human flesh, with the skin still clinging to it, on a stick\", and soon afterwards \"a party of men squatting round a fire, before which this ghastly flesh, exposed on spits, was cooking\"; he was told that the flesh came from a man who had been killed a few hours before. Another time, when \"camping for the night with a party of Arab raiders and their followers\", he and his companions felt \"compelled to change the position of our tent owing to the offensive smell of human flesh, which was being cooked on all sides of us.\"",
"title": "History"
},
{
"paragraph_id": 105,
"text": "The Belgian colonial officer Camille Coquilhat saw \"the remaining half of [a] steamed man\" – a slave who had been purchased for consumption and slaughtered a few hours earlier – \"in an enormous pot\" and discussed with the slave's owner, who at first thought that Coquilhat was joking when he objected to his cannibalistic customs. Near the Ubangi River, which formed the border between the Belgian and the French colonial enterprises, the French traveller Jacques d'Uzès [fr] saw local auxiliaries of the French troops kill \"some women and some children\" after a punitive expedition, then cooking their flesh in pots and \"enjoy[ing]\" it.",
"title": "History"
},
{
"paragraph_id": 106,
"text": "Among the Mangbetu people in the north-east, Georg A. Schweinfurth saw a human arm being smoked over a fire. At other occasion, he watched a group of young women using boiling water for \"scalding the hair off the lower half of a human body\" in preparation for cooking it. A few years later, Gaetano Casati saw how the roasted leg of a slave woman was served at the court of the Mangbetu king. More eyewitness accounts could be added.",
"title": "History"
},
{
"paragraph_id": 107,
"text": "Various cases of revenge-driven cannibalism are on record. The historian Angelica Montanari has investigated a number of accounts from Italy between the 14th and 16th centuries, showing that the consumption of entrails or body parts of those considered enemies is repeatedly mentioned in local chronicles, sometimes without any expression of condemnation or disapproval. Another case of this type of cannibalism happened in 1672, when Dutch stadtholder Johan de Witt and his brother were lynched and partially eaten for failing to fend off a French invasion.",
"title": "History"
},
{
"paragraph_id": 108,
"text": "From the 16th century on, an unusual form of medical cannibalism became widespread in several European countries, for which thousands of Egyptian mummies were ground up and sold as medicine. Powdered human mummy – called mummia – was thought to stop internal bleeding and to have other healing properties. The practice developed into a widespread business that flourished until the early 18th century. The demand was much higher than the supply of ancient mummies, leading to much of the offered \"mummia\" being counterfeit, made from recent Egyptian or European corpses – often from the gallows – instead. In a few cases, mummia was still offered in medical catalogues in the early 20th century.",
"title": "History"
},
{
"paragraph_id": 109,
"text": "Cannibalism was repeatedly practised during famines, when other provisions were exhausted.",
"title": "History"
},
{
"paragraph_id": 110,
"text": "During the chaotic transition from the Ming to the Qing dynasty in the 17th century, severe famines repeatedly lead to cannibalism. During a famine in 1622, government troops took the providing of human flesh into their own hands, \"openly butcher[ing] and [selling] people in a market where one jin [c. 600 grams] of flesh could be exchanged for one liang [c. 40 grams] of silver.\" Around 1640, a drought in Henan and Shandong became so bad that \"women and babies were arrayed in the market as human food and were sold by the slaughterers just like mutton and pork.\" Sometimes women and children were slaughtered in the back rooms of butcher shops while customers were waiting for fresh meat. A few years later in Sichuan, \"hundreds of the young and weak\" were kidnapped, killed, and eaten; in the markets, men's flesh was sold at a somewhat lower price than that of women, which was considered tastier.",
"title": "History"
},
{
"paragraph_id": 111,
"text": "Contemporary reports indicate that in Shaanxi – located between Henan and Sichuan – cannibalism became so common in the early Qing period that the local government \"officially sanctioned\" the sale and consumption of human flesh. Butchers legally turned towards killing people sold to them and then \"sell[ing] their meat\"; human-based dishes were also served in restaurants. The History of Ming, one of the Official Dynastic Histories that documented cannibalistic acts, accepted them as inevitable in bad times. \"When driven towards dangers, what choices do they have?\" it asked rhetorically about a famine in 1611, where people were \"selling their daughters and sons, and eating their wives and children\".",
"title": "History"
},
{
"paragraph_id": 112,
"text": "Centuries later, during the Taiping Rebellion in 1850–1864, \"human flesh and organs\" – gained by dismembering corpses or by butchering kidnapped persons – \"were sold openly at the marketplace\" and \"some people killed their own children and ate them\" to alleviate their hunger. Human hearts became a popular dish, according to some who afterwards freely admitted having purchased and enjoyed them. Zeng Guofan, the general leading the army that suppressed the rebellion, confirmed the open sale of human flesh in his diary – once even complaining about its high price, which had risen again.",
"title": "History"
},
{
"paragraph_id": 113,
"text": "Reports of cannibalism and the sale of human flesh during severe famines continued into the early 20th century, up to the final years of Imperial China. Various cases were reported during the Northern Chinese Famine of 1876–1879, with eyewitnesses reporting the sale of human flesh in markets and butcher shops and various (unverified) rumours indicating that it might also have been served in restaurants.",
"title": "History"
},
{
"paragraph_id": 114,
"text": "Outside of famines, the flesh of executed criminals was frequently sold for consumption, a traditional custom that lasted until the 19th century.",
"title": "History"
},
{
"paragraph_id": 115,
"text": "The indigenous population of Taiwan (then known as Formosa) repeatedly rebelled against Chinese rule. The Chinese army reacted drastically by not only killing suspected rebels, but sometimes also eating and selling their flesh. The American journalist James W. Davidson wrote:",
"title": "History"
},
{
"paragraph_id": 116,
"text": "One horrible feature of the campaign against the savages was the sale by the Chinese in open market of savage flesh.... After killing a savage, the head was commonly severed from the body and exhibited.... The body was then either divided among its captors and eaten, or sold to wealthy Chinese and even to high officials, who disposed of it in a like manner. The kidney, liver, heart, and soles of the feet were considered the most desirable portions, and were ordinarily cut up into very small pieces, boiled, and eaten somewhat in the form of soup. The flesh and bones were boiled, and the former [latter?] made into a sort of jelly.... During the outbreak of 1891, savage flesh was brought in – in baskets – the same as pork, and sold like pork in the open markets of Tokoham before the eyes of all, foreigners included. Some of the flesh was even sent to Amoy [on the mainland] to be placed on sale there. It was frequently on sale in the small Chinese villages near the border, and often before the very eyes of peaceful groups of savages who happened to be at the place.",
"title": "History"
},
{
"paragraph_id": 117,
"text": "Newspaper reports also document the open sale of indigenous flesh. Robert des Rotours has interpreted these acts as due to \"contempt for an inferior race\", who were seen as so inferior that they could be treated like animals.",
"title": "History"
},
{
"paragraph_id": 118,
"text": "There are various reports of Dayaks eating human flesh, especially in the context of headhunting expeditions. James Brooke, who founded the Raj of Sarawak in northwestern Borneo, collected eyewitness accounts of the consumption of killed enemies after war campaigns. He also heard (though not from eyewitnesses) that in some areas a \"fat child\" was traditionally served at Makantaun, an annual festival held at the end of the harvest season.",
"title": "History"
},
{
"paragraph_id": 119,
"text": "The Norwegian explorer Carl Bock, who visited Borneo in the late 1870s, met a Dayak chief named Sibau Mobang who told him that \"his people did not eat human meat every day\", but rather in the context of \"head-hunting expeditions\". Mobang had just returned from such an expedition, in which \"no less than seventy victims, men, women and children\", had been killed and partially eaten. Bock also met a local priestess who said that human \"palms [were] considered the best eating\", together with \"the brains, and the flesh on the knees\" – these parts were always eaten, even if the rest of the body was not. The naturalist Albert S. Bickmore, who travelled through Borneo in the 1860s, agreed that some Dayak groups practised cannibalism. Both captured enemies and those found guilty of a crime (such as theft) were killed and eaten, out of revenge and due to an \"appetite\" for human flesh, which was considered uniquely tasty.",
"title": "History"
},
{
"paragraph_id": 120,
"text": "Hundreds of accounts exist of cannibalism among Aboriginal Australians in all parts of Australia, with the possible exception of Tasmania, dating from the first European settlement to the 1930s and later. While it is generally accepted that some forms of cannibalism were practised in Australia in certain circumstances, the prevalence and meaning of such acts in pre-colonial Aboriginal societies are disputed.",
"title": "History"
},
{
"paragraph_id": 121,
"text": "Before colonization, Aboriginal Australians were predominantly nomadic hunter-gatherers at times lacking in protein sources. Reported cases of cannibalism include killing and eating small children (infanticide was widely practised as a means of population control and because mothers had trouble carrying two young children not yet able to walk) and enemy warriors slain in battle.",
"title": "History"
},
{
"paragraph_id": 122,
"text": "In the late 1920s, the anthropologist Géza Róheim heard from Aboriginals that infanticidal cannibalism had been practised especially during droughts. \"Years ago it had been custom for every second child to be eaten\" – the baby was roasted and consumed not only by the mother, but also by the older siblings, who benefited from this meat during times of food scarcity. One woman told him that her little sister had been roasted, but denied having eaten of her. Another \"admitted having killed and eaten her small daughter\", and several other people he talked to remembered having \"eaten one of their brothers\". The consumption of infants took two different forms, depending on where it was practised:",
"title": "History"
},
{
"paragraph_id": 123,
"text": "When the Yumu, Pindupi, Ngali, or Nambutji were hungry, they ate small children with neither ceremonial nor animistic motives. Among the southern tribes, the Matuntara, Mularatara, or Pitjentara, every second child was eaten in the belief that the strength of the first child would be doubled by such a procedure.",
"title": "History"
},
{
"paragraph_id": 124,
"text": "Usually only babies who had not yet received a name (which happened around the first birthday) were consumed, but in times of severe hunger, older children (up to four years or so) could be killed and eaten too, though people tended to have bad feelings about this. Babies were killed by their mother, while a bigger child \"would be killed by the father by being beaten on the head\". But cases of women killing older children are on record too. In 1904 a parish priest in Broome, Western Australia, stated that infanticide was very common, including one case where a four-year-old was \"killed and eaten by its mother\", who later became a Christian.",
"title": "History"
},
{
"paragraph_id": 125,
"text": "The journalist and anthropologist Daisy Bates, who spent a long time among Aboriginals and was well acquainted with their customs, knew an Aboriginal woman who one day left her village to give birth a mile away, taking only her daughter with her. She then \"killed and ate the baby, sharing the food with the little daughter.\" After her return, Bates found the place and saw \"the ashes of a fire\" with the baby's \"broken skull, and one or two charred bones\" in them. She states that \"baby cannibalism was rife among these central-western peoples, as it is west of the border in Central Australia.\"",
"title": "History"
},
{
"paragraph_id": 126,
"text": "The Norwegian ethnographer Carl Sofus Lumholtz confirms that infants were commonly killed and eaten especially in times of food scarcity. He notes that people spoke of such acts \"as an everyday occurrence, and not at all as anything remarkable.\"",
"title": "History"
},
{
"paragraph_id": 127,
"text": "Some have interpreted the consumption of infants as a religious practice: \"In parts of New South Wales ..., it was customary long ago for the first-born of every lubra [Aboriginal woman] to be eaten by the tribe, as part of a religious ceremony.\" However, there seems to be no direct evidence that such acts actually had a religious meaning, and the Australian anthropologist Alfred William Howitt rejects the idea that the eaten were human sacrifices as \"absolutely without foundation\", arguing that religious sacrifices of any kind were unknown in Australia.",
"title": "History"
},
{
"paragraph_id": 128,
"text": "Another frequently reported practise was funerary endocannibalism, the cooking and consumption of the deceased as a funerary rite.",
"title": "History"
},
{
"paragraph_id": 129,
"text": "When anyone dies, provided he or she be not too old, certain of the male relatives take the body out into the bush and cook it in a native oven.... When all the flesh is removed – apparently everything is eaten – the bones are collected, and, with the exception of the long ones from the arm, are wrapped in paperbark and handed over to the custody of a relative.",
"title": "History"
},
{
"paragraph_id": 130,
"text": "According to Bates, exocannibalism was also practised in many regions. Foreigners and members of different ethnic groups were hunted and eaten much like animals. She met \"fine sturdy fellows\" who \"frankly admitted the hunting and sharing of kangaroo and human meat as frequently as that of kangaroo and emu.\" The bodies of the killed were roasted whole in \"a deep hole in the sand\". There were also \"killing vendettas\", in which a hostile settlement was attacked and as many persons as possible killed, whose flesh was then shared according to well-defined rules: \"The older men ate the soft and virile parts, and the brain; swift runners were given the thighs; hands, arms or shoulders went to the best spear-throwers, and so on.\" Referring to the coast of the Great Australian Bight, Bates writes: \"Cannibalism had been rife for centuries in these regions and for a thousand miles north and east of them.\" Human flesh was not eaten for spiritual reasons and not only due to hunger; rather it was considered a \"favourite food\".",
"title": "History"
},
{
"paragraph_id": 131,
"text": "Lumholtz similarly notes that \"the greatest delicacy known to the Australian native is human flesh\", even adding that the \"appetite for human flesh\" was the primary motive for killing. Unrelated individuals and isolated families were attacked just to be eaten and any stranger was at risk of being \"pursued like a wild beast and slain and eaten\". Acquiring human flesh is this manner was something to be proud of, not a reason for shame. He stresses that such flesh was nevertheless by no means a \"daily food\", since opportunities to capture victims were relatively rare. One specific instance of kidnapping for cannibal purposes was recorded in the 1840s by the English immigrant George French Angas, who stated that several children were kidnapped, butchered, and eaten near Lake Alexandrina in South Australia shortly before he arrived there.",
"title": "History"
},
{
"paragraph_id": 132,
"text": "In parts of Melanesia, cannibalism was still practised in the early 20th century, for a variety of reasons – including retaliation, to insult an enemy people, or to absorb the dead person's qualities. One tribal chief, Ratu Udre Udre in Rakiraki, Fiji, is said to have consumed 872 people and to have made a pile of stones to record his achievement. Fiji was nicknamed the \"Cannibal Isles\" by European sailors, who avoided disembarking there.",
"title": "History"
},
{
"paragraph_id": 133,
"text": "The first encounter between Europeans and Māori may have involved cannibalism of a Dutch sailor. In June 1772, the French explorer Marion du Fresne and 26 members of his crew were killed and eaten in the Bay of Islands. In an 1809 incident known as the Boyd massacre, about 66 passengers and crew of the Boyd were killed and eaten by Māori on the Whangaroa peninsula, Northland. Cannibalism was already a regular practice in Māori wars. In another instance, on July 11, 1821, warriors from the Ngapuhi tribe killed 2,000 enemies and remained on the battlefield \"eating the vanquished until they were driven off by the smell of decaying bodies\". Māori warriors fighting the New Zealand government in Tītokowaru's War in New Zealand's North Island in 1868–69 revived ancient rites of cannibalism as part of the radical Hauhau movement of the Pai Marire religion.",
"title": "History"
},
{
"paragraph_id": 134,
"text": "The dense population of the Marquesas Islands, in what is now French Polynesia, was concentrated in narrow valleys, and consisted of warring tribes, who sometimes practised cannibalism on their enemies. Human flesh was called \"long pig\". W. D. Rubinstein wrote:",
"title": "History"
},
{
"paragraph_id": 135,
"text": "It was considered a great triumph among the Marquesans to eat the body of a dead man. They treated their captives with great cruelty. They broke their legs to prevent them from attempting to escape before being eaten, but kept them alive so that they could brood over their impending fate. ... With this tribe, as with many others, the bodies of women were in great demand.",
"title": "History"
},
{
"paragraph_id": 136,
"text": "After World War I, cannibalism continued to occur as a ritual practice and in times of drought or famine. Occasional cannibal acts committed by individual criminals are documented as well throughout the 20th and 21st centuries.",
"title": "History"
},
{
"paragraph_id": 137,
"text": "Many instances of cannibalism by necessity were recorded during World War II. For example, during the 872-day siege of Leningrad, reports of cannibalism began to appear in the winter of 1941–1942, after all birds, rats, and pets were eaten by survivors. Leningrad police even formed a special division to combat cannibalism.",
"title": "History"
},
{
"paragraph_id": 138,
"text": "Some 2.8 million Soviet POWs died in Nazi custody in less than eight months during 1941–42. According to the USHMM, by the winter of 1941, \"starvation and disease resulted in mass death of unimaginable proportions\". This deliberate starvation led to many incidents of cannibalism.",
"title": "History"
},
{
"paragraph_id": 139,
"text": "Following the Soviet victory at Stalingrad it was found that some German soldiers in the besieged city, cut off from supplies, resorted to cannibalism. Later, following the German surrender in January 1943, roughly 100,000 German soldiers were taken prisoner of war (POW). Almost all of them were sent to POW camps in Siberia or Central Asia where, due to being chronically underfed by their Soviet captors, many resorted to cannibalism. Fewer than 5,000 of the prisoners taken at Stalingrad survived captivity.",
"title": "History"
},
{
"paragraph_id": 140,
"text": "Cannibalism took place in the concentration and death camps in the Independent State of Croatia (NDH), a Nazi German puppet state which was governed by the fascist Ustasha organization, who committed the Genocide of Serbs and the Holocaust in NDH. Some survivors testified that some of the Ustashas drank the blood from the slashed throats of the victims.",
"title": "History"
},
{
"paragraph_id": 141,
"text": "The Australian War Crimes Section of the Tokyo tribunal, led by prosecutor William Webb (the future Judge-in-Chief), collected numerous written reports and testimonies that documented Japanese soldiers' acts of cannibalism among their own troops, on enemy dead, as well as on Allied prisoners of war in many parts of the Greater East Asia Co-Prosperity Sphere. In September 1942, Japanese daily rations on New Guinea consisted of 800 grams of rice and tinned meat. However, by December, this had fallen to 50 grams. According to historian Yuki Tanaka, \"cannibalism was often a systematic activity conducted by whole squads and under the command of officers\".",
"title": "History"
},
{
"paragraph_id": 142,
"text": "In some cases, flesh was cut from living people. A prisoner of war from the British Indian Army, Lance Naik Hatam Ali, testified that in New Guinea: \"the Japanese started selecting prisoners and every day one prisoner was taken out and killed and eaten by the soldiers. I personally saw this happen and about 100 prisoners were eaten at this place by the Japanese. The remainder of us were taken to another spot 80 kilometres (50 miles) away where 10 prisoners died of sickness. At this place, the Japanese again started selecting prisoners to eat. Those selected were taken to a hut where their flesh was cut from their bodies while they were alive and they were thrown into a ditch where they later died.\"",
"title": "History"
},
{
"paragraph_id": 143,
"text": "Another well-documented case occurred in Chichi-jima in February 1945, when Japanese soldiers killed and consumed five American airmen. This case was investigated in 1947 in a war crimes trial, and of 30 Japanese soldiers prosecuted, five (Maj. Matoba, Gen. Tachibana, Adm. Mori, Capt. Yoshii, and Dr. Teraki) were found guilty and hanged. In his book Flyboys: A True Story of Courage, James Bradley details several instances of cannibalism of World War II Allied prisoners by their Japanese captors. The author claims that this included not only ritual cannibalization of the livers of freshly killed prisoners, but also the cannibalization-for-sustenance of living prisoners over the course of several days, amputating limbs only as needed to keep the meat fresh.",
"title": "History"
},
{
"paragraph_id": 144,
"text": "There are more than 100 documented cases in Australia's government archives of Japanese soldiers practising cannibalism on enemy soldiers and civilians in New Guinea during the war. For instance, from an archived case, an Australian lieutenant describes how he discovered a scene with cannibalized bodies, including one \"consisting only of a head which had been scalped and a spinal column\" and that \"in all cases, the condition of the remains were such that there can be no doubt that the bodies had been dismembered and portions of the flesh cooked\". In another archived case, a Pakistani corporal (who was captured in Singapore and transported to New Guinea by the Japanese) testified that Japanese soldiers cannibalized a prisoner (some were still alive) per day for about 100 days. There was also an archived memo, in which a Japanese general stated that eating anyone except enemy soldiers was punishable by death. Toshiyuki Tanaka, a Japanese scholar in Australia, mentions that it was done \"to consolidate the group feeling of the troops\" rather than due to food shortage in many of the cases. Tanaka also states that the Japanese committed the cannibalism under supervision of their senior officers and to serve as a power projection tool.",
"title": "History"
},
{
"paragraph_id": 145,
"text": "Jemadar Abdul Latif (VCO of the 4/9 Jat Regiment of the British Indian Army and POW rescued by the Australians at Sepik Bay in 1945) stated that the Japanese soldiers ate both Indian POWs and local New Guinean people. At the camp for Indian POWs in Wewak, where many died and 19 POWs were eaten, the Japanese doctor and lieutenant Tumisa would send an Indian out of the camp after which a Japanese party would kill and eat flesh from the body as well as cut off and cook certain body parts (liver, buttock muscles, thighs, legs, and arms), according to Captain R. U. Pirzai in a The Courier-Mail report of August 25, 1945.",
"title": "History"
},
{
"paragraph_id": 146,
"text": "When Uruguayan Air Force Flight 571 crashed on a glacier in the Andes on October 13, 1972, the survivors resorted to eating the deceased during their 72 days in the mountains. Their experiences and memories became the source of several books and films. Survivor Roberto Canessa described how they \"agonized\" for days in the knowledge that \"the bodies of our friends and team-mates, preserved outside in the snow and ice, contained vital, life-giving protein that could help us survive. But could we do it?\" Ultimately he and the other 15 people who were rescued months later decided they could, realizing there was no other way to face off starvation.",
"title": "History"
},
{
"paragraph_id": 147,
"text": "In 1991, Jeffrey Dahmer of Milwaukee, Wisconsin, was arrested after one of his intended victims managed to escape. Found in Dahmer's apartment were two human hearts, an entire torso, a bag full of human organs from his victims, and a portion of arm muscle. He stated that he planned to consume all of the body parts over the next few weeks.",
"title": "History"
},
{
"paragraph_id": 148,
"text": "In the 1980s, Médecins Sans Frontières, the international medical charity, supplied photographic and other documentary evidence of ritualized cannibal feasts among the participants in Liberia's internecine strife preceding the First Liberian Civil War to representatives of Amnesty International. Amnesty International declined to publicize this material; the Secretary-General of the organization, Pierre Sane, said at the time in an internal communication that \"what they do with the bodies after human rights violations are committed is not part of our mandate or concern\". The existence of cannibalism on a wide scale in Liberia was subsequently verified.",
"title": "History"
},
{
"paragraph_id": 149,
"text": "A few years later, reported of cannibal acts committed during the Second Liberian Civil War and Sierra Leone Civil War emerged.",
"title": "History"
},
{
"paragraph_id": 150,
"text": "Reports from the Belgian Congo indicate that cannibalism was still widely practised in some regions in the 1920s. Hermann Norden, an American who visited the Kasai region in 1923, found that \"cannibalism was commonplace\". People were afraid of walking outside of populated places because there was a risk of being attacked, killed, and eaten. Norden talked with a Belgian who \"admitted that it was quite likely he had occasionally been served human flesh without knowing what he was eating\" – it was simply a dish that appeared on the tables from time.",
"title": "History"
},
{
"paragraph_id": 151,
"text": "Other travellers heard persistent rumours that there was still a certain underground trade in slaves, some of whom (adults and children alike) were regularly killed and then \"cut up and cooked as ordinary meat\", around both the Kasai and the Ubangi River. The colonial state seems to have done little to discourage or punish such acts. There are also reports that human flesh was sometimes sold at markets in both Kinshasa and Brazzaville, \"right in the middle of European life.\"",
"title": "History"
},
{
"paragraph_id": 152,
"text": "Norden observed that cannibalism was so common that people talked about it quite \"casual[ly]\": \"No stress was put upon it, nor horror shown. This person had died of fever; that one had been eaten. It was all a matter of the way one's luck held.\"",
"title": "History"
},
{
"paragraph_id": 153,
"text": "The culinary use of human flesh continued in some cases even after World War II. In 1950, a Belgian administrator ate a \"remarkably delicious\" dish, learning after he had finished \"that the meat came from a young girl.\" A few years later, a Danish traveller was served a piece of the \"soft and tender\" flesh of a butchered woman.",
"title": "History"
},
{
"paragraph_id": 154,
"text": "During the Congo Crisis, which followed the country's independence in 1960, body parts of killed enemies were eaten and the flesh of war victims was sometimes sold for consumption. In Luluabourg (today Kananga), an American journalist saw a truck smeared with blood. A police commissioner investigating the scene told her that \"sixteen women and children\" had been lured in a nearby village to enter the truck, kidnapped, and \"butchered ... for meat.\" She also talked with a Presbyterian missionary, who excused this act as due to \"protein need.... The bodies of their enemies are the only source of protein available.\"",
"title": "History"
},
{
"paragraph_id": 155,
"text": "In conflict situations, cannibalism persisted into the 21st century. During the first decade of the new century, cannibal acts have been reported from the Second Congo War and the Ituri conflict in the northeast of the Democratic Republic of the Congo. According to UN investigators, fighters belonging to several factions \"grilled\" human bodies \"on a barbecue\"; young girls were boiled \"alive in ... big pots filled with boiling water and oil\" or \"cut into small pieces ... and then eaten.\"",
"title": "History"
},
{
"paragraph_id": 156,
"text": "A UN human rights expert reported in July 2007 that sexual atrocities committed by rebel groups as well as by armed forces and national police against Congolese women go \"far beyond rape\" and include sexual slavery, forced incest, and cannibalism. In the Ituri region, much of the violence, which included \"widespread cannibalism\", was consciously directed against pygmies, who were believed to be relatively helpless and even considered subhuman by some other Congolese.",
"title": "History"
},
{
"paragraph_id": 157,
"text": "UN investigators also collected eyewitness accounts of cannibalism during a violent conflict that shook the Kasai region in 2016/2017. Various parts of killed enemies and beheaded captives were cooked and eaten, including their heads, thighs, and penises.",
"title": "History"
},
{
"paragraph_id": 158,
"text": "Cannibalism has also been reported from the Central African Republic, north of the Congo Basin. Jean-Bédel Bokassa ruled the country from 1966 to 1979 as dictator and finally as self-declared emperor. Tenacious rumours that he liked to dine on the flesh of opponents and political prisoners were substantiated by several testimonies during his eventual trial in 1986/1987. Bokassa's successor David Dacko stated that he had seen photographs of butchered bodies hanging in the cold-storage rooms of Bokassa's palace immediately after taking power in 1979. These or similar photos, said to show a walk-in freezer containing the bodies of schoolchildren arrested in April 1979 during protests and beat to death in the 1979 Ngaragba Prison massacre, were also published in Paris Match magazine. During the trial, Bokassa's former chef testified that he had repeatedly cooked human flesh from the palace's freezers for his boss's table. While Bokassa was found guilty of murder in at least twenty cases, the charge of cannibalism was nevertheless not taken into account for the final verdict, since the consumption of human remains is considered a misdemeanor under CAR law and all previously committed misdemeanors had been forgiven by a general amnesty declared in 1981.",
"title": "History"
},
{
"paragraph_id": 159,
"text": "Further acts of cannibalism were reported to have targeted the Muslim minority during the Central African Republic Civil War which started in 2012.",
"title": "History"
},
{
"paragraph_id": 160,
"text": "In the 1970s the Ugandan dictator Idi Amin was reputed to practice cannibalism. More recently, the Lord's Resistance Army has been accused of routinely engaging in ritual or magical cannibalism. There are also reports that witch doctors in the country sometimes use body parts of children in their medicine.",
"title": "History"
},
{
"paragraph_id": 161,
"text": "During the South Sudanese Civil War, cannibalism and forced cannibalism have been reported from South Sudan.",
"title": "History"
},
{
"paragraph_id": 162,
"text": "Before 1931, The New York Times reporter William Seabrook, apparently disappointed that he had been unable to taste human flesh in West Africa, obtained from a hospital intern at the Sorbonne a chunk of this meat from the body of a healthy man killed in an accident, then cooked and ate it. He reported,",
"title": "History"
},
{
"paragraph_id": 163,
"text": "It was like good, fully developed veal, not young, but not yet beef. It was very definitely like that, and it was not like any other meat I had ever tasted. It was so nearly like good, fully developed veal that I think no person with a palate of ordinary, normal sensitiveness could distinguish it from veal. It was mild, good meat with no other sharply defined or highly characteristic taste such as for instance, goat, high game, and pork have. The steak was slightly tougher than prime veal, a little stringy, but not too tough or stringy to be agreeably edible. The roast, from which I cut and ate a central slice, was tender, and in color, texture, smell as well as taste, strengthened my certainty that of all the meats we habitually know, veal is the one meat to which this meat is accurately comparable.",
"title": "History"
},
{
"paragraph_id": 164,
"text": "Karl Denke, possible Carl Großmann and Fritz Haarmann, as well as Joachim Kroll were German murderers and cannibals active between the early 20th century and the 1970s. Armin Meiwes is a former computer repair technician who achieved international notoriety for killing and eating a voluntary victim in 2001, whom he had found via the Internet. After Meiwes and the victim jointly attempted to eat the victim's severed penis, Meiwes killed his victim and proceeded to eat a large amount of his flesh. He was arrested in December 2002. In January 2004, Meiwes was convicted of manslaughter and sentenced to eight years and six months in prison. Despite the victim's undisputed consent, the prosecutors successfully appealed this decision, and in a retrial that ended in May 2006, Meiwes was convicted of murder and sentenced to life imprisonment.",
"title": "History"
},
{
"paragraph_id": 165,
"text": "On July 23, 1988, Rick Gibson ate the flesh of another person in public. Because England does not have a specific law against cannibalism, he legally ate a canapé of donated human tonsils in Walthamstow High Street, London. A year later, on April 15, 1989, he publicly ate a slice of human testicle. When he tried to eat another slice of human testicle as \"hors d'oeuvre\" at the Pitt International Galleries in Vancouver on July 14, 1989, the police confiscated the testicle. However, the charge of publicly exhibiting a disgusting object was dropped, and two months later he finally ate the piece of human testicle on the steps of the Vancouver court house.",
"title": "History"
},
{
"paragraph_id": 166,
"text": "In 2008, a British model called Anthony Morley was imprisoned for the killing, dismemberment and partial cannibalisation of his lover, magazine executive Damian Oldfield.",
"title": "History"
},
{
"paragraph_id": 167,
"text": "In his book, The Gulag Archipelago, Soviet writer Aleksandr Solzhenitsyn described cases of cannibalism in 20th-century Soviet Union. Of the famine in Povolzhie (1921–1922) he wrote: \"That horrible famine was up to cannibalism, up to consuming children by their own parents – the famine, which Russia had never known even in the Time of Troubles [in 1601–1603]\".",
"title": "History"
},
{
"paragraph_id": 168,
"text": "The historian Orlando Figes observes that \"thousands of cases\" of cannibalism were reported, while the number of cases that were never reported was doubtless even higher. In Pugachyov, \"it was dangerous for children to go out after dark since there were known to be bands of cannibals and traders who killed them to eat or sell their tender flesh.\" An inhabitant of a nearby village stated: \"There are several cafeterias in the village – and all of them serve up young children.\" This was no exception – Figes estimates \"that a considerable proportion of the meat in Soviet factories in the Volga area ... was human flesh.\" Various gangs specialized in \"capturing children, murdering them and selling the human flesh as horse meat or beef\", with the buyers happy to have found a source of meat in a situation of extreme shortage and often willing not to \"ask too many questions\".",
"title": "History"
},
{
"paragraph_id": 169,
"text": "Cannibalism was also widespread during the Holodomor, a man-made famine in Soviet Ukraine between 1932 and 1933.",
"title": "History"
},
{
"paragraph_id": 170,
"text": "Survival was a moral as well as a physical struggle. A woman doctor wrote to a friend in June 1933 that she had not yet become a cannibal, but was \"not sure that I shall not be one by the time my letter reaches you\". The good people died first. Those who refused to steal or to prostitute themselves died. Those who gave food to others died. Those who refused to eat corpses died. Those who refused to kill their fellow man died. ... At least 2,505 people were sentenced for cannibalism in the years 1932 and 1933 in Ukraine, though the actual number of cases was certainly much higher.",
"title": "History"
},
{
"paragraph_id": 171,
"text": "Most cases of cannibalism were \"necrophagy, the consumption of corpses of people who had died of starvation\". But the murder of children for food was common as well. Many survivors told of neighbours who had killed and eaten their own children. One woman, asked why she had done this, \"answered that her children would not survive anyway, but this way she would\". She was arrested by the police. The police also documented cases of children being kidnapped, killed, and eaten, and \"stories of children being hunted down as food\" circulated in many areas. When nearly all grain and all kinds of animal meat had been exhausted, \"a black market arose in human flesh\" and it \"may even have entered the official economy.\" The police kept a close eye on butcher shops and slaughterhouses, trying to prevent them from bringing human flesh into circulation. The Italian consul, Sergio Gradenigo, nevertheless reported from Kharkiv that the \"trade of human meat becomes more active.\"",
"title": "History"
},
{
"paragraph_id": 172,
"text": "In March 1933, the secret police in Kiev Oblast collected \"ten or more reports of cannibalism every day\" but concluded that \"in reality there are many more such incidents\", most of which went unreported. Those found guilty of cannibalism were often \"imprisoned, executed, or lynched\". But while the authorities were well informed about the extent of cannibalism, they also tried to suppress this information from becoming widely known, the chief of the secret police warning \"that written notes on the subject do not circulate among the officials where they might cause rumours\".",
"title": "History"
},
{
"paragraph_id": 173,
"text": "The Holodomor was part of the Soviet famine of 1930–1933, which devastated also other parts of the Soviet Union in the early 1930s. Multiple cases of cannibalism were also reported from Kazakhstan.",
"title": "History"
},
{
"paragraph_id": 174,
"text": "A few years later, starving people again resorted to cannibalism during the siege of Leningrad (1941–1944). About this time, Solzhenitsyn writes: \"Those who consumed human flesh, or dealt with the human liver trading from dissecting rooms ... were accounted as the political criminals\".",
"title": "History"
},
{
"paragraph_id": 175,
"text": "Of the building of Northern Railway Labor Camp (\"Sevzheldorlag\") Solzhenitsyn reports, \"An ordinary hard working political prisoner almost could not survive at that penal camp. In the camp Sevzheldorlag (chief: colonel Klyuchkin) in 1946–47 there were many cases of cannibalism: they cut human bodies, cooked and ate.\"",
"title": "History"
},
{
"paragraph_id": 176,
"text": "The Soviet journalist Yevgenia Ginzburg was a long-term political prisoner who spent time in the Soviet prisons, Gulag camps and settlements from 1938 to 1955. She described in her memoir, Harsh Route (or Steep Route), of a case which she was directly involved in during the late 1940s, after she had been moved to the prisoners' hospital.",
"title": "History"
},
{
"paragraph_id": 177,
"text": "The chief warder shows me the black smoked pot, filled with some food: \"I need your medical expertise regarding this meat.\" I look into the pot, and hardly hold vomiting. The fibres of that meat are very small, and don't resemble me anything I have seen before. The skin on some pieces bristles with black hair ... A former smith from Poltava, Kulesh worked together with Centurashvili. At this time, Centurashvili was only one month away from being discharged from the camp ... And suddenly he surprisingly disappeared ... The wardens searched for two more days, and then assumed that it was an escape case, though they wondered why, since his imprisonment period was almost over ... The crime was there. Approaching the fireplace, Kulesh killed Centurashvili with an axe, burned his clothes, then dismembered him and hid the pieces in snow, in different places, putting specific marks on each burial place. ... Just yesterday, one body part was found under two crossed logs.",
"title": "History"
},
{
"paragraph_id": 178,
"text": "The Aghori are Indian ascetics who believe that eating human flesh confers spiritual and physical benefits, such as prevention of ageing. They claim to only eat those who have voluntarily granted their body to the sect upon their death, but an Indian TV crew witnessed one Aghori feasting on a corpse discovered floating in the Ganges and a member of the Dom caste reports that Aghori often take bodies from cremation ghats (or funeral pyres).",
"title": "History"
},
{
"paragraph_id": 179,
"text": "Cannibalism is documented to have occurred in rural China during the severe famine that resulted from the Great Leap Forward (1958–1962).",
"title": "History"
},
{
"paragraph_id": 180,
"text": "During Mao Zedong's Cultural Revolution (1966–1976), local governments' documents revealed hundreds of incidents of cannibalism for ideological reasons, including large-scale cannibalism during the Guangxi Massacre. Cannibal acts occurred at public events organized by local Communist Party officials, with people taking part in them in order to prove their revolutionary passion. The writer Zheng Yi documented many of these incidents, especially those in Guangxi, in his 1993 book, Scarlet Memorial.",
"title": "History"
},
{
"paragraph_id": 181,
"text": "Pills made of human flesh were said to be used by some Tibetan Buddhists, motivated by a belief that mystical powers were bestowed upon those who consumed Brahmin flesh.",
"title": "History"
},
{
"paragraph_id": 182,
"text": "In Joshua Oppenheimer's film The Look of Silence, several of the anti-Communist militias active in the Indonesian mass killings of 1965–66 claim that drinking blood from their victims prevented them from going mad.",
"title": "History"
},
{
"paragraph_id": 183,
"text": "During a massacre of the Madurese minority in the Indonesian part of Borneo in 1999, \"more than 200 people, including young babies, [were] decapitated and cannibalised\", according to reporter Richard Lloyd Parry. Parry saw \"two arms, numerous pieces of heart and liver, and a dismembered torso being cooked over a fire by the side of the road\" in a \"human barbecue\". He met a Dayak teenager who told he had helped to kill and eat four Madurese people \"because we hate the Madurese.... Mostly we shoot them first, and then we chop the body. It tastes just like chicken.\" A Dayak teacher explained that \"when people do not respect our [traditions], they become enemies, and we don't consider our enemies to be human any more. They become animals in our eyes. And the Dayaks eat animals.\" Parry also saw at least seven severed heads, some of them apparently taken just hours before and placed on \"oil drums on either side of the road\" as trophies in a revival of the traditional practice of headhunting. The teenager he talked to assured him that \"We don't kill babies\", but only those \"around 13 or 15\" or older. However, he met a village chief who had \"seen six or seven children with their heads cut off\" and stated \"they kill everyone, including babies. They chop their heads off and they eat them.\"",
"title": "History"
},
{
"paragraph_id": 184,
"text": "When visiting a town market, Parry saw \"a charred femur ... among the embers of a fire\" and met a Dayak man who held \"a lump of what he said was human meat\" and then started to eat it. Unsure how to react, Parry asked about the taste and the man replied: \"Delicious\". Parry remarked that, after the first shock had passed, \"the most devastating thing about cannibalism and headhunting is not the fear and the blood, but the terrible, profound banality.\"",
"title": "History"
},
{
"paragraph_id": 185,
"text": "Two years later, during the Sampit conflict, Dayaks went again \"on a rampage of killing and decapitation with the aim of driving the Madurese from the province.\" According to their own reports, they \"killed 2,000 Madurese, in many cases cutting off their heads as trophies, drinking their blood and cutting out their hearts and eating them on the spot.\" A Dayak spokesperson said that, because of their anger and resentment against the Madurese settlers, \"They don't recognize whether they are women or children. They just see them as animals that have to be destroyed.\" A Madurese survivor mourned his murdered children and grandchildren: \"They cut off their heads and then cut them up and took them away to eat.\" Police and army, though called to the scene, seem to have done little to stop the violence until at least 500 people were dead.",
"title": "History"
},
{
"paragraph_id": 186,
"text": "Reports of widespread cannibalism began to emerge from North Korea during the famine of the 1990s and subsequent ongoing starvation. Kim Jong-il was reported to have ordered a crackdown on cannibalism in 1996, but Chinese travellers reported in 1998 that cannibalism had occurred. Three people in North Korea were reported to have been executed for selling or eating human flesh in 2006. Further reports of cannibalism emerged in early 2013, including reports of a man executed for killing his two children for food.",
"title": "History"
},
{
"paragraph_id": 187,
"text": "There are conflicting claims about how widespread cannibalism was in North Korea. While refugees reported that it was widespread, Barbara Demick wrote in her book, Nothing to Envy: Ordinary Lives in North Korea (2010), that it did not seem to be.",
"title": "History"
},
{
"paragraph_id": 188,
"text": "The Korowai tribe of south-eastern Papua could be one of the last surviving tribes in the world engaging in cannibalism. A local cannibal cult killed and ate victims as late as 2012.",
"title": "History"
},
{
"paragraph_id": 189,
"text": "As in some other Papuan societies, the Urapmin people engaged in cannibalism in war. Notably, the Urapmin also had a system of food taboos wherein dogs could not be eaten and they had to be kept from breathing on food, unlike humans who could be eaten and with whom food could be shared.",
"title": "History"
}
] | Human cannibalism is the act or practice of humans eating the flesh or internal organs of other human beings. A person who practises cannibalism is called a cannibal. The meaning of "cannibalism" has been extended into zoology to describe animals consuming parts of individuals of the same species as food. Neanderthals are believed to have practised cannibalism, and may have been eaten by anatomically modern humans. Cannibalism was occasionally practised in Egypt during ancient and Roman times, as well as later during severe famines. The Island Caribs of the Lesser Antilles, whose name is the origin of the word cannibal, acquired a long-standing reputation as eaters of human flesh, reconfirmed when their legends were recorded in the 17th century. Some controversy exists over the accuracy of these legends and the prevalence of actual cannibalism in the culture. Cannibalism has been well documented in much of the world, including Fiji, the Amazon Basin, the Congo, and the Māori people of New Zealand. Cannibalism was also practised in New Guinea and in parts of the Solomon Islands, and human flesh was sold at markets in some parts of Melanesia and of the Congo Basin. A form of cannibalism popular in early modern Europe was the consumption of body parts or blood for medical purposes. Reaching its height during the 17th century, this practice continued in some cases into the second half of the 19th century. Cannibalism has occasionally been practised as a last resort by people suffering from famine. Well-known examples include the ill-fated Donner Party (1846–1847) and the crash of Uruguayan Air Force Flight 571 (1972), after which the survivors ate the bodies of the dead. Additionally, there are cases of people engaging in cannibalism for sexual pleasure, such as Albert Fish, Issei Sagawa, Jeffrey Dahmer, and Armin Meiwes. Cannibalism has been both practised and fiercely condemned in several recent wars, especially in Liberia and the Democratic Republic of the Congo. It was still practised in Papua New Guinea as of 2012, for cultural reasons. Cannibalism has been said to test the bounds of cultural relativism because it challenges anthropologists "to define what is or is not beyond the pale of acceptable human behavior". A few scholars argue that no firm evidence exists that cannibalism has ever been a socially acceptable practice anywhere in the world, but such views have been largely rejected as irreconcilable with the actual evidence. | 2001-06-28T01:59:43Z | 2023-12-30T14:54:34Z | [
"Template:Page needed",
"Template:Cite news",
"Template:ISBN?",
"Template:' \"",
"Template:Font colour",
"Template:Cite book",
"Template:Cite web",
"Template:Citation",
"Template:Failed verification",
"Template:See also",
"Template:Reflist",
"Template:Cite magazine",
"Template:Pp-vandalism",
"Template:Cite EB1911",
"Template:Interlanguage link",
"Template:Primary sources section",
"Template:Convert",
"Template:Div col end",
"Template:Short description",
"Template:Use Oxford spelling",
"Template:Circa",
"Template:Homicide",
"Template:Webarchive",
"Template:Feeding",
"Template:Authority control",
"Template:Cite thesis",
"Template:Hatnote",
"Template:Rp",
"Template:Div col",
"Template:Cite journal",
"Template:Use mdy dates",
"Template:Blockquote",
"Template:Harvnb",
"Template:ISBN",
"Template:Sfn",
"Template:Further",
"Template:Cite episode",
"Template:Commons category"
] | https://en.wikipedia.org/wiki/Human_cannibalism |
5,659 | Chemical element | A chemical element is a chemical substance that cannot be broken down into other substances. The basic particle that constitutes a chemical element is the atom, and each chemical element is distinguished by the number of protons in the nuclei of its atoms, known as its atomic number. For example, oxygen has an atomic number of 8, meaning that each oxygen atom has 8 protons in its nucleus. This is in contrast to chemical compounds and mixtures, which contain atoms with different atomic numbers.
Almost all of the baryonic matter of the universe is composed of chemical elements (among rare exceptions are neutron stars). When different elements undergo chemical reactions, atoms are rearranged into new compounds held together by chemical bonds. Only a minority of elements, such as silver and gold, are found uncombined as relatively pure native element minerals. Nearly all other naturally occurring elements occur in the Earth as compounds or mixtures. Air is primarily a mixture of the elements nitrogen, oxygen, and argon, though it does contain compounds including carbon dioxide and water.
The history of the discovery and use of the elements began with primitive human societies that discovered native minerals like carbon, sulfur, copper and gold (though the concept of a chemical element was not yet understood). Attempts to classify materials such as these resulted in the concepts of classical elements, alchemy, and various similar theories throughout human history. Much of the modern understanding of elements developed from the work of Dmitri Mendeleev, a Russian chemist who published the first recognizable periodic table in 1869. This table organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The periodic table summarizes various properties of the elements, allowing chemists to derive relationships between them and to make predictions about compounds and potential new ones.
By November 2016, the International Union of Pure and Applied Chemistry had recognized a total of 118 elements. The first 94 occur naturally on Earth, and the remaining 24 are synthetic elements produced in nuclear reactions. Save for unstable radioactive elements (radionuclides) which decay quickly, nearly all of the elements are available industrially in varying amounts. The discovery and synthesis of further new elements is an ongoing area of scientific study.
The lightest chemical elements are hydrogen and helium, both created by Big Bang nucleosynthesis during the first 20 minutes of the universe in a ratio of around 3:1 by mass (or 12:1 by number of atoms), along with tiny traces of the next two elements, lithium and beryllium. Almost all other elements found in nature were made by various natural methods of nucleosynthesis. On Earth, small amounts of new atoms are naturally produced in nucleogenic reactions, or in cosmogenic processes, such as cosmic ray spallation. New atoms are also naturally produced on Earth as radiogenic daughter isotopes of ongoing radioactive decay processes such as alpha decay, beta decay, spontaneous fission, cluster decay, and other rarer modes of decay.
Of the 94 naturally occurring elements, those with atomic numbers 1 through 82 each have at least one stable isotope (except for technetium, element 43, and promethium, element 61, which have no stable isotopes). Isotopes considered stable are those for which no radioactive decay has yet been observed. Elements with atomic numbers 83 through 94 are unstable to the point that radioactive decay of all isotopes can be detected. Some of these elements, notably bismuth (atomic number 83), thorium (atomic number 90), and uranium (atomic number 92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy metals before the formation of our Solar System. At over 1.9×10¹⁹ years, over a billion times longer than the current estimated age of the universe, bismuth-209 (atomic number 83) has the longest known alpha decay half-life of any naturally occurring element, and is almost always considered on par with the 80 stable elements. The very heaviest elements (those beyond plutonium, element 94) undergo radioactive decay with half-lives so short that they are not found in nature and must be synthesized.
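The "over a billion times" comparison is easy to verify. A minimal Python sketch of the arithmetic, using the half-life quoted above and an approximate figure for the age of the universe (the latter is an assumption for illustration, not taken from this text):

    half_life_bi209 = 1.9e19   # years, alpha-decay half-life of bismuth-209 (quoted above)
    age_of_universe = 1.38e10  # years, approximate current estimate (assumed for illustration)
    print(half_life_bi209 / age_of_universe)  # ~1.4e9, i.e. over a billion times longer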
There are now 118 known elements. In this context, "known" means observed well enough, even from just a few decay products, to have been differentiated from other elements. Most recently, the synthesis of element 118 (since named oganesson) was reported in October 2006, and the synthesis of element 117 (tennessine) was reported in April 2010. Of these 118 elements, 94 occur naturally on Earth. Six of these occur in extreme trace quantities: technetium, atomic number 43; promethium, number 61; astatine, number 85; francium, number 87; neptunium, number 93; and plutonium, number 94. These 94 elements have been detected in the universe at large, in the spectra of stars and also supernovae, where short-lived radioactive elements are newly being made. The first 94 elements have been detected directly on Earth as primordial nuclides present from the formation of the Solar System, or as naturally occurring fission or transmutation products of uranium and thorium.
The remaining 24 heavier elements, not found today either on Earth or in astronomical spectra, have been produced artificially: these are all radioactive, with very short half-lives; if any atoms of these elements were present at the formation of Earth, they are extremely likely, to the point of certainty, to have already decayed, and any made in novae were present in quantities too small to have been noted. Technetium was the first purportedly non-naturally occurring element synthesized, in 1937, although trace amounts of technetium have since been found in nature (and the element may have been discovered naturally in 1925). This pattern of artificial production and later natural discovery has been repeated with several other radioactive naturally occurring rare elements.
Lists of the elements are available by name, atomic number, density, melting point, boiling point, and symbol, as well as by ionization energy. The nuclides of stable and radioactive elements are also available as a list of nuclides, sorted by length of half-life for those that are unstable. One of the most convenient, and certainly the most traditional, presentations of the elements is the periodic table, which groups together elements with similar chemical properties (and usually also similar electronic structures).
The atomic number of an element is equal to the number of protons in each atom, and defines the element. For example, all carbon atoms contain 6 protons in their atomic nucleus; so the atomic number of carbon is 6. Carbon atoms may have different numbers of neutrons; atoms of the same element having different numbers of neutrons are known as isotopes of the element.
The number of protons in the atomic nucleus also determines its electric charge, which in turn determines the number of electrons of the atom in its non-ionized state. The electrons are placed into atomic orbitals that determine the atom's various chemical properties. The number of neutrons in a nucleus usually has very little effect on an element's chemical properties (except in the case of hydrogen and deuterium). Thus, all carbon isotopes have nearly identical chemical properties because they all have six protons and six electrons, even though carbon atoms may, for example, have 6 or 8 neutrons. That is why the atomic number, rather than mass number or atomic weight, is considered the identifying characteristic of a chemical element.
The symbol for atomic number is Z.
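To illustrate how the proton count alone identifies the element, here is a minimal Python sketch; the ELEMENT_NAMES table and the element_of helper are hypothetical names introduced only for this example:

    # Hypothetical lookup: atomic number Z -> element name (a tiny excerpt, for illustration).
    ELEMENT_NAMES = {1: "hydrogen", 2: "helium", 6: "carbon", 8: "oxygen"}

    def element_of(protons: int) -> str:
        # The element is fixed by Z alone, regardless of how many neutrons the nucleus holds.
        return ELEMENT_NAMES.get(protons, f"element Z={protons}")

    # Three isotopes of carbon: the same Z = 6 with different neutron counts N,
    # hence different mass numbers A = Z + N, but always the same element.
    for neutrons in (6, 7, 8):
        mass_number = 6 + neutrons
        print(f"Z=6, N={neutrons} -> {element_of(6)}-{mass_number}")
    # Prints carbon-12, carbon-13 and carbon-14: one element, three isotopes.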
Isotopes are atoms of the same element (that is, with the same number of protons in their atomic nucleus), but having different numbers of neutrons. Thus, for example, there are three main isotopes of carbon. All carbon atoms have 6 protons in the nucleus, but they can have either 6, 7, or 8 neutrons. Since the mass numbers of these are 12, 13 and 14 respectively, the three isotopes of carbon are known as carbon-12, carbon-13, and carbon-14, often abbreviated to ¹²C, ¹³C, and ¹⁴C. Carbon in everyday life and in chemistry is a mixture of ¹²C (about 98.9%), ¹³C (about 1.1%) and about 1 atom per trillion of ¹⁴C.
Most (66 of 94) naturally occurring elements have more than one stable isotope. Except for the isotopes of hydrogen (which differ greatly from each other in relative mass—enough to cause chemical effects), the isotopes of a given element are chemically nearly indistinguishable.
All of the elements have some isotopes that are radioactive (radioisotopes), although not all of these radioisotopes occur naturally. The radioisotopes typically decay into other elements upon radiating an alpha or beta particle. If an element has isotopes that are not radioactive, these are termed "stable" isotopes. All of the known stable isotopes occur naturally (see primordial isotope). The many radioisotopes that are not found in nature have been characterized after being artificially made. Certain elements have no stable isotopes and are composed only of radioactive isotopes: specifically the elements without any stable isotopes are technetium (atomic number 43), promethium (atomic number 61), and all observed elements with atomic numbers greater than 82.
Of the 80 elements with at least one stable isotope, 26 have only a single stable isotope. The mean number of stable isotopes for these 80 elements is 3.1 stable isotopes per element. The largest number of stable isotopes for a single element is 10 (for tin, element 50).
The mass number of an element, A, is the number of nucleons (protons and neutrons) in the atomic nucleus. Different isotopes of a given element are distinguished by their mass numbers, which are conventionally written as a superscript on the left hand side of the atomic symbol (e.g. ²³⁵U). The mass number is always a whole number and has units of "nucleons". For example, magnesium-24 (24 is the mass number) is an atom with 24 nucleons (12 protons and 12 neutrons).
Whereas the mass number simply counts the total number of neutrons and protons and is thus a natural (or whole) number, the atomic mass of a particular isotope (or "nuclide") of the element is the mass of a single atom of that isotope, and is typically expressed in daltons (symbol: Da), also called unified atomic mass units (symbol: u). Its relative atomic mass is a dimensionless number equal to the atomic mass divided by the atomic mass constant, which equals 1 Da. In general, the mass number of a given nuclide differs in value slightly from its relative atomic mass, since the mass of each proton and neutron is not exactly 1 Da; since the electrons contribute a lesser share to the atomic mass as neutron number exceeds proton number; and because of the nuclear binding energy and the electron binding energy. For example, the atomic mass of chlorine-35 to five significant digits is 34.969 Da and that of chlorine-37 is 36.966 Da. However, the relative atomic mass of each isotope is quite close to its mass number (always within 1%). The only isotope whose atomic mass is exactly a natural number is ¹²C, which has a mass of 12 Da because the dalton is defined as 1/12 of the mass of a free neutral carbon-12 atom in the ground state.
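The "always within 1%" claim can be checked directly against the chlorine figures above; this is a quick numerical sketch rather than a general proof:

```python
# Relative deviation of atomic mass from mass number for the two chlorine isotopes.
# Mass numbers and atomic masses (Da) as quoted in the text.
chlorine = {35: 34.969, 37: 36.966}

for mass_number, atomic_mass in chlorine.items():
    deviation = abs(atomic_mass - mass_number) / mass_number
    print(f"chlorine-{mass_number}: {deviation:.3%}")
# chlorine-35: 0.089%
# chlorine-37: 0.092%  (both comfortably within 1%)
```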
The standard atomic weight (commonly called "atomic weight") of an element is the average of the atomic masses of all the chemical element's isotopes as found in a particular environment, weighted by isotopic abundance, relative to the atomic mass unit. This number may be a fraction that is not close to a whole number. For example, the relative atomic mass of chlorine is 35.453 u, which differs greatly from a whole number as it is an average of about 76% chlorine-35 and 24% chlorine-37. Whenever a relative atomic mass value differs by more than 1% from a whole number, it is due to this averaging effect, as significant amounts of more than one isotope are naturally present in a sample of that element.
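The averaging itself is a one-line computation. A sketch reproducing the chlorine value, assuming the commonly tabulated abundances of about 75.8% chlorine-35 and 24.2% chlorine-37 (the 76%/24% figures above are rounded):

```python
# Standard atomic weight as an abundance-weighted mean of isotopic masses.
chlorine = [(34.969, 0.758), (36.966, 0.242)]  # (atomic mass in Da, isotopic abundance)

atomic_weight = sum(mass * abundance for mass, abundance in chlorine)
print(f"{atomic_weight:.3f}")  # 35.452, matching the tabulated 35.453 to ~0.001
```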
Chemists and nuclear scientists have different definitions of a pure element. In chemistry, a pure element means a substance whose atoms all (or in practice almost all) have the same atomic number, or number of protons. Nuclear scientists, however, define a pure element as one that consists of only one stable isotope.
For example, a copper wire is 99.99% chemically pure if 99.99% of its atoms are copper, with 29 protons each. However, it is not isotopically pure, since ordinary copper consists of two stable isotopes, 69% ⁶³Cu and 31% ⁶⁵Cu, with different numbers of neutrons. A pure gold ingot, by contrast, would be both chemically and isotopically pure, since ordinary gold consists of only one isotope, ¹⁹⁷Au.
Atoms of chemically pure elements may bond to each other chemically in more than one way, allowing the pure element to exist in multiple chemical structures (spatial arrangements of atoms), known as allotropes, which differ in their properties. For example, carbon can be found as diamond, which has a tetrahedral structure around each carbon atom; graphite, which has layers of carbon atoms with a hexagonal structure stacked on top of each other; graphene, which is a single layer of graphite that is very strong; fullerenes, which have nearly spherical shapes; and carbon nanotubes, which are tubes with a hexagonal structure (even these may differ from each other in electrical properties). The ability of an element to exist in one of many structural forms is known as 'allotropy'.
The reference state of an element is defined by convention, usually as the thermodynamically most stable allotrope and physical state at a pressure of 1 bar and a given temperature (typically 298.15 K). However, for phosphorus, the reference state is white phosphorus even though it is not the most stable allotrope. In thermochemistry, an element is defined to have an enthalpy of formation of zero in its reference state. For example, the reference state for carbon is graphite, because the structure of graphite is more stable than that of the other allotropes.
Several kinds of descriptive categorizations can be applied broadly to the elements, including consideration of their general physical and chemical properties, their states of matter under familiar conditions, their melting and boiling points, their densities, their crystal structures as solids, and their origins.
Several terms are commonly used to characterize the general physical and chemical properties of the chemical elements. A first distinction is between metals, which readily conduct electricity; nonmetals, which do not; and a small group (the metalloids), which have intermediate properties and often behave as semiconductors.
A more refined classification is often shown in colored presentations of the periodic table. This system restricts the terms "metal" and "nonmetal" to only certain of the more broadly defined metals and nonmetals, adding additional terms for certain sets of the more broadly viewed metals and nonmetals. The version of this classification used in the periodic tables presented here includes: actinides, alkali metals, alkaline earth metals, halogens, lanthanides, transition metals, post-transition metals, metalloids, reactive nonmetals, and noble gases. In this system, the alkali metals, alkaline earth metals, and transition metals, as well as the lanthanides and the actinides, are special groups of the metals viewed in a broader sense. Similarly, the reactive nonmetals and the noble gases are nonmetals viewed in the broader sense. In some presentations, the halogens are not distinguished, with astatine identified as a metalloid and the others identified as nonmetals.
Another commonly used basic distinction among the elements is their state of matter (phase), whether solid, liquid, or gas, at a selected standard temperature and pressure (STP). Most of the elements are solids at conventional temperatures and atmospheric pressure, while several are gases. Only bromine and mercury are liquids at 0 degrees Celsius (32 degrees Fahrenheit) and normal atmospheric pressure; caesium and gallium are solids at that temperature, but melt at 28.4 °C (83.2 °F) and 29.8 °C (85.6 °F), respectively.
Melting and boiling points, typically expressed in degrees Celsius at a pressure of one atmosphere, are commonly used in characterizing the various elements. While known for most elements, one or both of these measurements remains undetermined for some of the radioactive elements available only in tiny quantities. Since helium remains a liquid even at absolute zero at atmospheric pressure, it has only a boiling point, and not a melting point, in conventional presentations.
The density at selected standard temperature and pressure (STP) is frequently used in characterizing the elements. Density is often expressed in grams per cubic centimeter (g/cm³). Since several elements are gases at commonly encountered temperatures, their densities are usually stated for their gaseous forms; when liquefied or solidified, the gaseous elements have densities similar to those of the other elements.
When an element has allotropes with different densities, one representative allotrope is typically selected in summary presentations, while densities for each allotrope can be stated where more detail is provided. For example, the three familiar allotropes of carbon (amorphous carbon, graphite, and diamond) have densities of 1.8–2.1, 2.267, and 3.515 g/cm³, respectively.
The elements studied to date as solid samples have eight kinds of crystal structures: cubic, body-centered cubic, face-centered cubic, hexagonal, monoclinic, orthorhombic, rhombohedral, and tetragonal. For some of the synthetically produced transuranic elements, available samples have been too small to determine crystal structures.
Chemical elements may also be categorized by their origin on Earth, with the first 94 considered naturally occurring, while those with atomic numbers beyond 94 have only been produced artificially as the synthetic products of human-made nuclear reactions.
Of the 94 naturally occurring elements, 83 are considered primordial and either stable or weakly radioactive. The remaining 11 naturally occurring elements possess half-lives too short for them to have been present at the beginning of the Solar System, and are therefore considered transient elements. Of these 11 transient elements, 5 (polonium, radon, radium, actinium, and protactinium) are relatively common decay products of thorium and uranium. The remaining 6 transient elements (technetium, promethium, astatine, francium, neptunium, and plutonium) occur only rarely, as products of rare decay modes or nuclear reaction processes involving uranium or other heavy elements.
No radioactive decay has been observed for elements with atomic numbers 1 through 82, except 43 (technetium) and 61 (promethium). Observationally stable isotopes of some elements (such as tungsten and lead), however, are predicted to be slightly radioactive with very long half-lives: for example, the half-lives predicted for the observationally stable lead isotopes range from 10³⁵ to 10¹⁸⁹ years. Elements with atomic numbers 43, 61, and 83 through 94 are unstable enough that their radioactive decay can readily be detected. Three of these elements, bismuth (element 83), thorium (element 90), and uranium (element 92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy elements before the formation of the Solar System. For example, at over 1.9×10¹⁹ years, over a billion times longer than the current estimated age of the universe, bismuth-209 has the longest known alpha decay half-life of any naturally occurring element. The very heaviest 24 elements (those beyond plutonium, element 94) undergo radioactive decay with short half-lives and cannot be produced as daughters of longer-lived elements, and thus are not known to occur in nature at all.
The properties of the chemical elements are often summarized using the periodic table, which powerfully and elegantly organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The current standard table contains 118 confirmed elements as of 2021.
Although earlier precursors to this presentation exist, its invention is generally credited to the Russian chemist Dmitri Mendeleev in 1869, who intended the table to illustrate recurring trends in the properties of the elements. The layout of the table has been refined and extended over time as new elements have been discovered and new theoretical models have been developed to explain chemical behavior.
Use of the periodic table is now ubiquitous within the academic discipline of chemistry, providing an extremely useful framework to classify, systematize and compare all the many different forms of chemical behavior. The table has also found wide application in physics, geology, biology, materials science, engineering, agriculture, medicine, nutrition, environmental health, and astronomy. Its principles are especially important in chemical engineering.
The various chemical elements are formally identified by their unique atomic numbers, by their accepted names, and by their symbols.
The known elements have atomic numbers from 1 through 118, conventionally presented as Arabic numerals. Since the elements can be uniquely sequenced by atomic number, conventionally from lowest to highest (as in a periodic table), sets of elements are sometimes specified by such notation as "through", "beyond", or "from ... through", as in "through iron", "beyond uranium", or "from lanthanum through lutetium". The terms "light" and "heavy" are sometimes also used informally to indicate relative atomic numbers (not densities), as in "lighter than carbon" or "heavier than lead", although technically the weight or mass of atoms of an element (their atomic weights or atomic masses) do not always increase monotonically with their atomic numbers.
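The caveat about atomic weights not rising monotonically with atomic number is easy to illustrate: a few neighbouring pairs in the periodic table have the lower-numbered element outweighing the higher-numbered one. A short check in Python, using rounded standard atomic weights supplied here as assumptions rather than drawn from this article:

```python
# Three classic "reversed" pairs: higher atomic number Z, yet lower atomic weight.
pairs = [
    (("Ar", 18, 39.948), ("K", 19, 39.098)),
    (("Co", 27, 58.933), ("Ni", 28, 58.693)),
    (("Te", 52, 127.60), ("I", 53, 126.904)),
]

for (sym1, z1, w1), (sym2, z2, w2) in pairs:
    assert z1 < z2 and w1 > w2  # lighter by Z, heavier by atomic weight
    print(f"{sym1} (Z={z1}, {w1}) outweighs {sym2} (Z={z2}, {w2})")
```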
The naming of various substances now known as elements precedes the atomic theory of matter, as names were given locally by various cultures to various minerals, metals, compounds, alloys, mixtures, and other materials, although at the time it was not known which chemicals were elements and which compounds. As they were identified as elements, the existing names for anciently known elements (e.g., gold, mercury, iron) were kept in most countries. National differences emerged over the names of elements, whether for convenience, linguistic niceties, or nationalism. For a few illustrative examples: German speakers use "Wasserstoff" (water substance) for "hydrogen", "Sauerstoff" (acid substance) for "oxygen" and "Stickstoff" (smothering substance) for "nitrogen", while English and some Romance languages use "sodium" for "natrium" and "potassium" for "kalium", and the French, Italians, Greeks, Portuguese and Poles prefer "azote/azot/azoto" (from roots meaning "no life") for "nitrogen".
For purposes of international communication and trade, the official names of the chemical elements both ancient and more recently recognized are decided by the International Union of Pure and Applied Chemistry (IUPAC), which has decided on a sort of international English language, drawing on traditional English names even when an element's chemical symbol is based on a Latin or other traditional word, for example adopting "gold" rather than "aurum" as the name for the 79th element (Au). IUPAC prefers the British spellings "aluminium" and "caesium" over the U.S. spellings "aluminum" and "cesium", and the U.S. "sulfur" over the British "sulphur". However, elements that are practical to sell in bulk in many countries often still have locally used national names, and countries whose national language does not use the Latin alphabet are likely to use the IUPAC element names.
According to IUPAC, chemical elements are not proper nouns in English; consequently, the full name of an element is not routinely capitalized in English, even if derived from a proper noun, as in californium and einsteinium. Isotope names of chemical elements are also uncapitalized if written out, e.g., carbon-12 or uranium-235. Chemical element symbols (such as Cf for californium and Es for einsteinium), are always capitalized (see below).
In the second half of the twentieth century, physics laboratories became able to produce nuclei of chemical elements with half-lives too short for an appreciable amount of them to exist at any time. These are also named by IUPAC, which generally adopts the name chosen by the discoverer. This practice can lead to the controversial question of which research group actually discovered an element, a question that delayed the naming of elements with atomic number 104 and higher for a considerable amount of time. (See element naming controversy.)
Precursors of such controversies involved the nationalistic namings of elements in the late 19th century. For example, lutetium was named in reference to Paris, France. The Germans were reluctant to relinquish naming rights to the French, often calling it cassiopeium. Similarly, the British discoverer of niobium originally named it columbium, in reference to the New World. It was used extensively as such by American publications before the international standardization (in 1950).
Before chemistry became a science, alchemists had designed arcane symbols for both metals and common compounds. These were however used as abbreviations in diagrams or procedures; there was no concept of atoms combining to form molecules. With his advances in the atomic theory of matter, John Dalton devised his own simpler symbols, based on circles, to depict molecules.
The current system of chemical notation was invented by Jöns Jakob Berzelius in 1814. In this typographical system, chemical symbols are not mere abbreviations—though each consists of letters of the Latin alphabet. They are intended as universal symbols for people of all languages and alphabets.
Since Latin was the common language of science at Berzelius's time, his symbols were abbreviations based on the Latin names of elements (they may be Classical Latin names of elementary substances known since antiquity or Neo-Latin coinages for later elements). The symbols are not followed by a period (full stop) as with abbreviations. For example, hydrogen has the chemical symbol "H" after the Neo-Latin hydrogenium; sodium has the chemical symbol "Na" after the Neo-Latin natrium. The same applies to "Fe" (ferrum) for iron, "Hg" (hydrargyrum) for mercury, "Sn" (stannum) for tin, "Au" (aurum) for gold, "Ag" (argentum) for silver, "Pb" (plumbum) for lead, "Cu" (cuprum) for copper, and "Sb" (stibium) for antimony. "W" (wolframium) for tungsten ultimately derives from German, "K" (kalium) for potassium ultimately from Arabic.
Later chemical elements were also assigned unique chemical symbols, based on the name of the element, but not necessarily in English.
Chemical symbols are understood internationally when element names might require translation. There have sometimes been differences in the past. For example, Germans in the past have used "J" (for the alternate name Jod) for iodine, but now use "I" and "Iod".
The first letter of a chemical symbol is always capitalized, as in the preceding examples, and the subsequent letters, if any, are always lower case (small letters). Thus, the symbols for californium and einsteinium are Cf and Es.
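This capitalization rule makes symbols easy to validate mechanically. A minimal sketch using a regular expression; allowing up to two trailing lower-case letters also covers the three-letter systematic placeholders (such as Uue) once used for undiscovered elements:

```python
import re

# One upper-case letter, followed by up to two lower-case letters.
SYMBOL_PATTERN = re.compile(r"^[A-Z][a-z]{0,2}$")

for candidate in ["H", "Cf", "Es", "Uue", "CF", "es"]:
    print(candidate, bool(SYMBOL_PATTERN.match(candidate)))
# H True, Cf True, Es True, Uue True, CF False, es False
```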
There are also symbols in chemical equations for groups of chemical elements, for example in comparative formulas. These are often a single capital letter, and the letters are reserved and not used for names of specific elements. For example, an "X" indicates a variable group (usually a halogen) in a class of compounds, while "R" is a radical, meaning a compound structure such as a hydrocarbon chain. The letter "Q" is reserved for "heat" in a chemical reaction. "Y" is also often used as a general chemical symbol, although it is also the symbol of yttrium. "Z" is also frequently used as a general variable group. "E" is used in organic chemistry to denote an electron-withdrawing group or an electrophile; similarly "Nu" denotes a nucleophile. "L" is used to represent a general ligand in inorganic and organometallic chemistry. "M" is also often used in place of a general metal.
At least two additional, two-letter generic chemical symbols are also in informal usage, "Ln" for any lanthanide element and "An" for any actinide element. "Rg" was formerly used for any rare gas element, but the group of rare gases has now been renamed noble gases and the symbol "Rg" has now been assigned to the element roentgenium.
Isotopes are distinguished by the atomic mass number (total protons and neutrons) for a particular isotope of an element, with this number combined with the pertinent element's symbol. IUPAC prefers that isotope symbols be written in superscript notation when practical, for example ¹²C and ²³⁵U. However, other notations, such as carbon-12 and uranium-235, or C-12 and U-235, are also used.
As a special case, the three naturally occurring isotopes of the element hydrogen are often specified as H for ¹H (protium), D for ²H (deuterium), and T for ³H (tritium). This convention is easier to use in chemical equations, replacing the need to write out the mass number for each atom. For example, the formula for heavy water may be written D₂O instead of ²H₂O.
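Converting between the hyphenated and superscript notations is simple string manipulation. A hypothetical helper, sketched in Python using Unicode superscript digits:

```python
# Render "U-235"-style notation in the IUPAC-preferred superscript form.
SUPERSCRIPT_DIGITS = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")

def isotope_symbol(notation: str) -> str:
    symbol, mass_number = notation.split("-")
    return mass_number.translate(SUPERSCRIPT_DIGITS) + symbol

print(isotope_symbol("C-12"), isotope_symbol("U-235"), isotope_symbol("H-3"))
# ¹²C ²³⁵U ³H
```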
Only about 4% of the total mass of the universe is made of atoms or ions, and thus represented by chemical elements. This fraction is about 15% of the total matter, with the remainder of the matter (85%) being dark matter. The nature of dark matter is unknown, but it is not composed of atoms of chemical elements because it contains no protons, neutrons, or electrons. (The remaining non-matter part of the mass of the universe is composed of the even less well understood dark energy).
The 94 naturally occurring chemical elements were produced by at least four classes of astrophysical process. Most of the hydrogen, helium and a very small quantity of lithium were produced in the first few minutes of the Big Bang. This Big Bang nucleosynthesis happened only once; the other processes are ongoing. Nuclear fusion inside stars produces elements through stellar nucleosynthesis, including all elements from carbon to iron in atomic number. Elements higher in atomic number than iron, including heavy elements like uranium and plutonium, are produced by various forms of explosive nucleosynthesis in supernovae and neutron star mergers. The light elements lithium, beryllium and boron are produced mostly through cosmic ray spallation (fragmentation induced by cosmic rays) of carbon, nitrogen, and oxygen.
During the early phases of the Big Bang, nucleosynthesis of hydrogen nuclei resulted in the production of hydrogen-1 (protium, ¹H) and helium-4 (⁴He), as well as a smaller amount of deuterium (²H) and very minuscule amounts (on the order of 10⁻¹⁰) of lithium and beryllium. Even smaller amounts of boron may have been produced in the Big Bang, since it has been observed in some very old stars, while carbon has not. No elements heavier than boron were produced in the Big Bang. As a result, the primordial abundance of atoms (or ions) consisted of roughly 75% ¹H, 25% ⁴He, and 0.01% deuterium, with only tiny traces of lithium, beryllium, and perhaps boron. Subsequent enrichment of galactic halos occurred due to stellar nucleosynthesis and supernova nucleosynthesis. However, the element abundance in intergalactic space can still closely resemble primordial conditions, unless it has been enriched by some means.
On Earth (and elsewhere), trace amounts of various elements continue to be produced from other elements as products of nuclear transmutation processes. These include some produced by cosmic rays or other nuclear reactions (see cosmogenic and nucleogenic nuclides), and others produced as decay products of long-lived primordial nuclides. For example, trace (but detectable) amounts of carbon-14 (¹⁴C) are continually produced in the atmosphere by cosmic rays impacting nitrogen atoms, and argon-40 (⁴⁰Ar) is continually produced by the decay of primordially occurring but unstable potassium-40 (⁴⁰K). Also, three primordially occurring but radioactive actinides, thorium, uranium, and plutonium, decay through a series of recurrently produced but unstable radioactive elements such as radium and radon, which are transiently present in any sample of these metals or their ores or compounds. Three other radioactive elements, technetium, promethium, and neptunium, occur only incidentally in natural materials, produced as individual atoms by nuclear fission of the nuclei of various heavy elements or in other rare nuclear processes.
In addition to the 94 naturally occurring elements, several artificial elements have been produced by human nuclear physics technology. As of 2021, these experiments have produced all elements up to atomic number 118.
The following graph (note the log scale) shows the abundance of elements in our Solar System. The table shows the twelve most common elements in our galaxy (estimated spectroscopically), as measured in parts per million, by mass. Nearby galaxies that have evolved along similar lines have a corresponding enrichment of elements heavier than hydrogen and helium. The more distant galaxies are being viewed as they appeared in the past, so their abundances of elements appear closer to the primordial mixture. As physical laws and processes appear common throughout the visible universe, however, scientists expect that these galaxies evolved elements in similar abundances.
The abundance of elements in the Solar System is in keeping with their origin from nucleosynthesis in the Big Bang and a number of progenitor supernova stars. Very abundant hydrogen and helium are products of the Big Bang, but the next three elements are rare, since they had little time to form in the Big Bang and are not made in stars (they are, however, produced in small quantities by the breakup of heavier elements in interstellar dust, as a result of impact by cosmic rays). Beginning with carbon, elements are produced in stars by buildup from alpha particles (helium nuclei), resulting in greater abundances of elements with even atomic numbers (these are also more stable). In general, such elements up to iron are made in large stars in the process of becoming supernovae. Iron-56 is particularly common, since it is the most stable nuclide that can easily be made from alpha particles (being a product of decay of radioactive nickel-56, ultimately made from 14 helium nuclei). Elements heavier than iron are made in energy-absorbing processes in large stars, and their abundance in the universe (and on Earth) generally decreases with their atomic number.
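The alpha-particle bookkeeping behind iron-56 can be made explicit: 14 helium-4 nuclei supply 28 protons and 56 nucleons, giving nickel-56, which then decays through cobalt-56 to iron-56. A sketch of that arithmetic:

```python
# An alpha particle is a helium-4 nucleus: Z = 2 protons, A = 4 nucleons.
ALPHA_Z, ALPHA_A = 2, 4
n_alphas = 14

Z, A = n_alphas * ALPHA_Z, n_alphas * ALPHA_A
print(Z, A)  # 28 56 -> nickel-56

# Two successive decays (Ni-56 -> Co-56 -> Fe-56) each convert a proton
# to a neutron, lowering Z by 1 while leaving the mass number A unchanged.
Z -= 2
print(Z, A)  # 26 56 -> iron-56
```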
The abundance of the chemical elements on Earth varies from air to crust to ocean, and in various types of life. The abundance of elements in Earth's crust differs from that in the Solar System (as seen in the Sun and heavy planets like Jupiter) mainly in selective loss of the very lightest elements (hydrogen and helium) and also volatile neon, carbon (as hydrocarbons), nitrogen and sulfur, as a result of solar heating in the early formation of the solar system. Oxygen, the most abundant Earth element by mass, is retained on Earth by combination with silicon. Aluminium at 8% by mass is more common in the Earth's crust than in the universe and solar system, but the composition of the far more bulky mantle, which has magnesium and iron in place of aluminium (which occurs there only at 2% of mass) more closely mirrors the elemental composition of the solar system, save for the noted loss of volatile elements to space, and loss of iron which has migrated to the Earth's core.
The composition of the human body, by contrast, more closely follows the composition of seawater—save that the human body has additional stores of carbon and nitrogen necessary to form the proteins and nucleic acids, together with phosphorus in the nucleic acids and energy transfer molecule adenosine triphosphate (ATP) that occurs in the cells of all living organisms. Certain kinds of organisms require particular additional elements, for example the magnesium in chlorophyll in green plants, the calcium in mollusc shells, or the iron in the hemoglobin in vertebrate animals' red blood cells.
The concept of an "element" as an indivisible substance has developed through three major historical phases: classical definitions (such as those of the ancient Greeks), chemical definitions, and atomic definitions.
Ancient philosophy posited a set of classical elements to explain observed patterns in nature. These elements originally referred to earth, water, air and fire rather than the chemical elements of modern science.
The term 'elements' (stoicheia) was first used by the Greek philosopher Plato in about 360 BCE in his dialogue Timaeus, which includes a discussion of the composition of inorganic and organic bodies and is a speculative treatise on chemistry. Plato believed the elements introduced a century earlier by Empedocles were composed of small polyhedral forms: tetrahedron (fire), octahedron (air), icosahedron (water), and cube (earth).
Aristotle, c. 350 BCE, also used the term stoicheia and added a fifth element called aether, which formed the heavens. Aristotle defined an element as:
Element – one of those bodies into which other bodies can decompose, and that itself is not capable of being divided into other.
In 1661, in The Sceptical Chymist, Robert Boyle proposed his theory of corpuscularism, which favoured the analysis of matter as constituted by irreducible units of matter (atoms) and, choosing to side with neither Aristotle's view of the four elements nor Paracelsus' view of three fundamental elements, left open the question of the number of elements. Boyle argued against a pre-determined number of elements: directly against Paracelsus' three principles (sulfur, mercury, and salt), and indirectly against the “Aristotelian” elements (earth, water, air, and fire), for Boyle felt that the arguments against the former should be at least as valid against the latter.
Much of what I am to deliver ... may be indifferently apply’d to the four Peripatetick Elements, and the three Chymical Principles ... the Chymical Hypothesis seeming to be much more countenanc’d by Experience then the other, it will be expedient to insist chiefly upon the disproving of that; especially since most of the Arguments that are imploy’d against it, may, by a little variation, be made ... at least as strongly against the less plausible, Aristotelian Doctrine.
Then Boyle states his own view in four propositions. In the first and second, he suggests that matter consists of particles, but that these particles may be difficult to separate.
Propos. I. ... At the first Production of mixt Bodies, the Universal Matter whereof they among other Parts of the Universe consisted, was actually divided into little Particles of several sizes and shapes.
The Generation, Corruption ... and wasting of Bodies ... and ... the Chymical Resolutions of mixt Bodies, and ... Operations of ... Fires upon them ... manifest their consisting of parts very minute... And that there does also intervene a various local Motion of such small Bodies ... Epicurus ... as you well know, supposes not only all mixt Bodies, but all others to be produc’d by ... Atomes, moving themselves to and fro ... in the Immense or rather Infinite Vacuum.
Propos. II. ... These minute Particles ... were here and there associated into minute Masses or Clusters ... as were not easily dissipable into such Particles as compos’d them.
Gold will also by common Aqua Regis ... be reduc’d into a seeming Liquor, in so much that the Corpuscles of Gold will, with those of the Menstruum, pass through Cap-Paper, and will, with those of the Menstruum, coagulate into a Crystalline Salt. ... and neverthelesse be afterward reduc’d to the self-same ... Gold it was before its commixture. ... Quicksilver ... with Aqua fortis will be brought into either a red or white Powder ... with Oyl of Vitriol into a pale Yellow one, with Sulphur it will compose a blood-red and volatile Cinaber. And yet out of all these exotick Compounds, we may recover the very same running Mercury that was the main Ingredient of them.
If we assigne to the Corpuscles, whereof each Element consists, a peculiar size and shape, it may easily enough be manifested, That such differingly figur’d Corpuscles may be mingled in such various Proportions, and may be connected so many several wayes, that an almost incredible number of variously qualified Concretes may be compos’d of them.
Boyle did not, however, consider gold or mercury to be elements:
Gold and Mercury, though they be not primary Concretions of the most minute Particles or matter, but confessedly mixt Bodies, ...
Propos. III. ... From most of such mixt Bodies ... there may by the Help of the Fire, be actually obtain’d a determinate number (whether Three, Four or Five, or fewer or more) of Substances ... Propos. IV. ... which Concretes... are made up of, may ... be call’d the Elements or Principles of them.
The Chymists are wont to call the Ingredients of mixt Bodies, Principles, as the Aristotelians name them Elements. ... Principles? as not being compounded of any more primary Bodies: and Elements, in regard that all mix’d Bodies are compounded of them.
The first modern list of chemical elements was given in Antoine Lavoisier's 1789 Elements of Chemistry, which contained thirty-three elements, including light and caloric. By 1818, Jöns Jakob Berzelius had determined atomic weights for forty-five of the forty-nine then-accepted elements. Dmitri Mendeleev had sixty-three elements in his periodic table of 1869.
From Boyle until the early 20th century, an element was defined as a pure substance that could not be decomposed into any simpler substance. Put another way, a chemical element cannot be transformed into other chemical elements by chemical processes. Elements during this time were generally distinguished by their atomic weights, a property measurable with fair accuracy by available analytical techniques.
The 1913 discovery by English physicist Henry Moseley that the nuclear charge is the physical basis for an atom's atomic number, further refined when the nature of protons and neutrons became appreciated, eventually led to the current definition of an element based on atomic number (number of protons per atomic nucleus). The use of atomic numbers, rather than atomic weights, to distinguish elements has greater predictive value (since these numbers are integers), and also resolves some ambiguities in the chemistry-based view due to varying properties of isotopes and allotropes within the same element. Currently, IUPAC defines an element to exist if it has isotopes with a lifetime longer than the 10⁻¹⁴ seconds it takes the nucleus to form an electron cloud.
By 1914, eighty-seven elements were known, all naturally occurring (see Timeline of chemical element discoveries). The remaining naturally occurring elements were discovered or isolated in subsequent decades, and various additional elements have also been produced synthetically, with much of that work pioneered by Glenn T. Seaborg. In 1955, element 101 was discovered and named mendelevium in honor of D. I. Mendeleev, the first to arrange the elements in a periodic manner.
Ten materials familiar to various prehistoric cultures are now known to be chemical elements: carbon, copper, gold, iron, lead, mercury, silver, sulfur, tin, and zinc. Three additional materials now accepted as elements, arsenic, antimony, and bismuth, were recognized as distinct substances prior to 1500 AD. Phosphorus, cobalt, and platinum were isolated before 1750.
Most of the remaining naturally occurring chemical elements were identified and characterized by 1900, including:
Elements isolated or produced since 1900 include:
The first transuranium element (element with atomic number greater than 92) discovered was neptunium, in 1940. Since 1999, claims for the discovery of new elements have been considered by the IUPAC/IUPAP Joint Working Party. As of January 2016, all 118 elements have been confirmed by IUPAC as being discovered. The discovery of element 112 was acknowledged in 2009, and the name copernicium and the atomic symbol Cn were suggested for it. The name and symbol were officially endorsed by IUPAC on 19 February 2010. The heaviest element believed to have been synthesized to date is element 118, oganesson, on 9 October 2006, by the Flerov Laboratory of Nuclear Reactions in Dubna, Russia. Tennessine, element 117, was the latest element claimed to be discovered, in 2009. On 28 November 2016, scientists at the IUPAC officially recognized the names for the four newest chemical elements, with atomic numbers 113, 115, 117, and 118.
The following sortable table shows the 118 known chemical elements.
{
"paragraph_id": 0,
"text": "A chemical element is a chemical substance that cannot be broken down into other substances. The basic particle that constitutes a chemical element is the atom, and each chemical element is distinguished by the number of protons in the nuclei of its atoms, known as its atomic number. For example, oxygen has an atomic number of 8, meaning that each oxygen atom has 8 protons in its nucleus. This is in contrast to chemical compounds and mixtures, which contain atoms with different atomic numbers.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Almost all of the baryonic matter of the universe is composed of chemical elements (among rare exceptions are neutron stars). When different elements undergo chemical reactions, atoms are rearranged into new compounds held together by chemical bonds. Only a minority of elements, such as silver and gold, are found uncombined as relatively pure native element minerals. Nearly all other naturally occurring elements occur in the Earth as compounds or mixtures. Air is primarily a mixture of the elements nitrogen, oxygen, and argon, though it does contain compounds including carbon dioxide and water.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The history of the discovery and use of the elements began with primitive human societies that discovered native minerals like carbon, sulfur, copper and gold (though the concept of a chemical element was not yet understood). Attempts to classify materials such as these resulted in the concepts of classical elements, alchemy, and various similar theories throughout human history. Much of the modern understanding of elements developed from the work of Dmitri Mendeleev, a Russian chemist who published the first recognizable periodic table in 1869. This table organizes the elements by increasing atomic number into rows (\"periods\") in which the columns (\"groups\") share recurring (\"periodic\") physical and chemical properties. The periodic table summarizes various properties of the elements, allowing chemists to derive relationships between them and to make predictions about compounds and potential new ones.",
"title": ""
},
{
"paragraph_id": 3,
"text": "By November 2016, the International Union of Pure and Applied Chemistry had recognized a total of 118 elements. The first 94 occur naturally on Earth, and the remaining 24 are synthetic elements produced in nuclear reactions. Save for unstable radioactive elements (radionuclides) which decay quickly, nearly all of the elements are available industrially in varying amounts. The discovery and synthesis of further new elements is an ongoing area of scientific study.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The lightest chemical elements are hydrogen and helium, both created by Big Bang nucleosynthesis during the first 20 minutes of the universe in a ratio of around 3:1 by mass (or 12:1 by number of atoms), along with tiny traces of the next two elements, lithium and beryllium. Almost all other elements found in nature were made by various natural methods of nucleosynthesis. On Earth, small amounts of new atoms are naturally produced in nucleogenic reactions, or in cosmogenic processes, such as cosmic ray spallation. New atoms are also naturally produced on Earth as radiogenic daughter isotopes of ongoing radioactive decay processes such as alpha decay, beta decay, spontaneous fission, cluster decay, and other rarer modes of decay.",
"title": "Description"
},
{
"paragraph_id": 5,
"text": "Of the 94 naturally occurring elements, those with atomic numbers 1 through 82 each have at least one stable isotope (except for technetium, element 43 and promethium, element 61, which have no stable isotopes). Isotopes considered stable are those for which no radioactive decay has yet been observed. Elements with atomic numbers 83 through 94 are unstable to the point that radioactive decay of all isotopes can be detected. Some of these elements, notably bismuth (atomic number 83), thorium (atomic number 90), and uranium (atomic number 92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy metals before the formation of our Solar System. At over 1.9×10 years, over a billion times longer than the current estimated age of the universe, bismuth-209 (atomic number 83) has the longest known alpha decay half-life of any naturally occurring element, and is almost always considered on par with the 80 stable elements. The very heaviest elements (those beyond plutonium, element 94) undergo radioactive decay with half-lives so short that they are not found in nature and must be synthesized.",
"title": "Description"
},
{
"paragraph_id": 6,
"text": "There are now 118 known elements. In this context, \"known\" means observed well enough, even from just a few decay products, to have been differentiated from other elements. Most recently, the synthesis of element 118 (since named oganesson) was reported in October 2006, and the synthesis of element 117 (tennessine) was reported in April 2010. Of these 118 elements, 94 occur naturally on Earth. Six of these occur in extreme trace quantities: technetium, atomic number 43; promethium, number 61; astatine, number 85; francium, number 87; neptunium, number 93; and plutonium, number 94. These 94 elements have been detected in the universe at large, in the spectra of stars and also supernovae, where short-lived radioactive elements are newly being made. The first 94 elements have been detected directly on Earth as primordial nuclides present from the formation of the Solar System, or as naturally occurring fission or transmutation products of uranium and thorium.",
"title": "Description"
},
{
"paragraph_id": 7,
"text": "The remaining 24 heavier elements, not found today either on Earth or in astronomical spectra, have been produced artificially: these are all radioactive, with very short half-lives; if any atoms of these elements were present at the formation of Earth, they are extremely likely, to the point of certainty, to have already decayed, and if present in novae have been in quantities too small to have been noted. Technetium was the first purportedly non-naturally occurring element synthesized, in 1937, although trace amounts of technetium have since been found in nature (and also the element may have been discovered naturally in 1925). This pattern of artificial production and later natural discovery has been repeated with several other radioactive naturally occurring rare elements.",
"title": "Description"
},
{
"paragraph_id": 8,
"text": "List of the elements are available by name, atomic number, density, melting point, boiling point and by symbol, as well as ionization energies of the elements. The nuclides of stable and radioactive elements are also available as a list of nuclides, sorted by length of half-life for those that are unstable. One of the most convenient, and certainly the most traditional presentation of the elements, is in the form of the periodic table, which groups together elements with similar chemical properties (and usually also similar electronic structures).",
"title": "Description"
},
{
"paragraph_id": 9,
"text": "The atomic number of an element is equal to the number of protons in each atom, and defines the element. For example, all carbon atoms contain 6 protons in their atomic nucleus; so the atomic number of carbon is 6. Carbon atoms may have different numbers of neutrons; atoms of the same element having different numbers of neutrons are known as isotopes of the element.",
"title": "Description"
},
{
"paragraph_id": 10,
"text": "The number of protons in the atomic nucleus also determines its electric charge, which in turn determines the number of electrons of the atom in its non-ionized state. The electrons are placed into atomic orbitals that determine the atom's various chemical properties. The number of neutrons in a nucleus usually has very little effect on an element's chemical properties (except in the case of hydrogen and deuterium). Thus, all carbon isotopes have nearly identical chemical properties because they all have six protons and six electrons, even though carbon atoms may, for example, have 6 or 8 neutrons. That is why the atomic number, rather than mass number or atomic weight, is considered the identifying characteristic of a chemical element.",
"title": "Description"
},
{
"paragraph_id": 11,
"text": "The symbol for atomic number is Z.",
"title": "Description"
},
{
"paragraph_id": 12,
"text": "Isotopes are atoms of the same element (that is, with the same number of protons in their atomic nucleus), but having different numbers of neutrons. Thus, for example, there are three main isotopes of carbon. All carbon atoms have 6 protons in the nucleus, but they can have either 6, 7, or 8 neutrons. Since the mass numbers of these are 12, 13 and 14 respectively, the three isotopes of carbon are known as carbon-12, carbon-13, and carbon-14, often abbreviated to C, C, and C. Carbon in everyday life and in chemistry is a mixture of C (about 98.9%), C (about 1.1%) and about 1 atom per trillion of C.",
"title": "Description"
},
{
"paragraph_id": 13,
"text": "Most (66 of 94) naturally occurring elements have more than one stable isotope. Except for the isotopes of hydrogen (which differ greatly from each other in relative mass—enough to cause chemical effects), the isotopes of a given element are chemically nearly indistinguishable.",
"title": "Description"
},
{
"paragraph_id": 14,
"text": "All of the elements have some isotopes that are radioactive (radioisotopes), although not all of these radioisotopes occur naturally. The radioisotopes typically decay into other elements upon radiating an alpha or beta particle. If an element has isotopes that are not radioactive, these are termed \"stable\" isotopes. All of the known stable isotopes occur naturally (see primordial isotope). The many radioisotopes that are not found in nature have been characterized after being artificially made. Certain elements have no stable isotopes and are composed only of radioactive isotopes: specifically the elements without any stable isotopes are technetium (atomic number 43), promethium (atomic number 61), and all observed elements with atomic numbers greater than 82.",
"title": "Description"
},
{
"paragraph_id": 15,
"text": "Of the 80 elements with at least one stable isotope, 26 have only one single stable isotope. The mean number of stable isotopes for the 80 stable elements is 3.1 stable isotopes per element. The largest number of stable isotopes that occur for a single element is 10 (for tin, element 50).",
"title": "Description"
},
{
"paragraph_id": 16,
"text": "The mass number of an element, A, is the number of nucleons (protons and neutrons) in the atomic nucleus. Different isotopes of a given element are distinguished by their mass numbers, which are conventionally written as a superscript on the left hand side of the atomic symbol (e.g. U). The mass number is always a whole number and has units of \"nucleons\". For example, magnesium-24 (24 is the mass number) is an atom with 24 nucleons (12 protons and 12 neutrons).",
"title": "Description"
},
{
"paragraph_id": 17,
"text": "Whereas the mass number simply counts the total number of neutrons and protons and is thus a natural (or whole) number, the atomic mass of a particular isotope (or \"nuclide\") of the element is the mass of a single atom of that isotope, and is typically expressed in daltons (symbol: Da), or universal atomic mass units (symbol: u). Its relative atomic mass is a dimensionless number equal to the atomic mass divided by the atomic mass constant, which equals 1 Da. In general, the mass number of a given nuclide differs in value slightly from its relative atomic mass, since the mass of each proton and neutron is not exactly 1 Da; since the electrons contribute a lesser share to the atomic mass as neutron number exceeds proton number; and because of the nuclear binding energy and the electron binding energy. For example, the atomic mass of chlorine-35 to five significant digits is 34.969 Da and that of chlorine-37 is 36.966 Da. However, the relative atomic mass of each isotope is quite close to its mass number (always within 1%). The only isotope whose atomic mass is exactly a natural number is C, which has a mass of 12 Da because the dalton is defined as 1/12 of the mass of a free neutral carbon-12 atom in the ground state.",
"title": "Description"
},
{
"paragraph_id": 18,
"text": "The standard atomic weight (commonly called \"atomic weight\") of an element is the average of the atomic masses of all the chemical element's isotopes as found in a particular environment, weighted by isotopic abundance, relative to the atomic mass unit. This number may be a fraction that is not close to a whole number. For example, the relative atomic mass of chlorine is 35.453 u, which differs greatly from a whole number as it is an average of about 76% chlorine-35 and 24% chlorine-37. Whenever a relative atomic mass value differs by more than 1% from a whole number, it is due to this averaging effect, as significant amounts of more than one isotope are naturally present in a sample of that element.",
"title": "Description"
},
{
"paragraph_id": 19,
"text": "Chemists and nuclear scientists have different definitions of a pure element. In chemistry, a pure element means a substance whose atoms all (or in practice almost all) have the same atomic number, or number of protons. Nuclear scientists, however, define a pure element as one that consists of only one stable isotope.",
"title": "Description"
},
{
"paragraph_id": 20,
"text": "For example, a copper wire is 99.99% chemically pure if 99.99% of its atoms are copper, with 29 protons each. However it is not isotopically pure since ordinary copper consists of two stable isotopes, 69% Cu and 31% Cu, with different numbers of neutrons. However, a pure gold ingot would be both chemically and isotopically pure, since ordinary gold consists only of one isotope, Au.",
"title": "Description"
},
{
"paragraph_id": 21,
"text": "Atoms of chemically pure elements may bond to each other chemically in more than one way, allowing the pure element to exist in multiple chemical structures (spatial arrangements of atoms), known as allotropes, which differ in their properties. For example, carbon can be found as diamond, which has a tetrahedral structure around each carbon atom; graphite, which has layers of carbon atoms with a hexagonal structure stacked on top of each other; graphene, which is a single layer of graphite that is very strong; fullerenes, which have nearly spherical shapes; and carbon nanotubes, which are tubes with a hexagonal structure (even these may differ from each other in electrical properties). The ability of an element to exist in one of many structural forms is known as 'allotropy'.",
"title": "Description"
},
{
"paragraph_id": 22,
"text": "The reference state of an element is defined by convention, usually as the thermodynamically most stable allotrope and physical state at a pressure of 1 bar and a given temperature (typically at 298.15K). However, for phosphorus, the reference state is white phosphorus even though it is not the most stable allotrope. In thermochemistry, an element is defined to have an enthalpy of formation of zero in its reference state. For example, the reference state for carbon is graphite, because the structure of graphite is more stable than that of the other allotropes.",
"title": "Description"
},
{
"paragraph_id": 23,
"text": "Several kinds of descriptive categorizations can be applied broadly to the elements, including consideration of their general physical and chemical properties, their states of matter under familiar conditions, their melting and boiling points, their densities, their crystal structures as solids, and their origins.",
"title": "Description"
},
{
"paragraph_id": 24,
"text": "Several terms are commonly used to characterize the general physical and chemical properties of the chemical elements. A first distinction is between metals, which readily conduct electricity, nonmetals, which do not, and a small group, (the metalloids), having intermediate properties and often behaving as semiconductors.",
"title": "Description"
},
{
"paragraph_id": 25,
"text": "A more refined classification is often shown in colored presentations of the periodic table. This system restricts the terms \"metal\" and \"nonmetal\" to only certain of the more broadly defined metals and nonmetals, adding additional terms for certain sets of the more broadly viewed metals and nonmetals. The version of this classification used in the periodic tables presented here includes: actinides, alkali metals, alkaline earth metals, halogens, lanthanides, transition metals, post-transition metals, metalloids, reactive nonmetals, and noble gases. In this system, the alkali metals, alkaline earth metals, and transition metals, as well as the lanthanides and the actinides, are special groups of the metals viewed in a broader sense. Similarly, the reactive nonmetals and the noble gases are nonmetals viewed in the broader sense. In some presentations, the halogens are not distinguished, with astatine identified as a metalloid and the others identified as nonmetals.",
"title": "Description"
},
{
"paragraph_id": 26,
"text": "Another commonly used basic distinction among the elements is their state of matter (phase), whether solid, liquid, or gas, at a selected standard temperature and pressure (STP). Most of the elements are solids at conventional temperatures and atmospheric pressure, while several are gases. Only bromine and mercury are liquids at 0 degrees Celsius (32 degrees Fahrenheit) and normal atmospheric pressure; caesium and gallium are solids at that temperature, but melt at 28.4 °C (83.2 °F) and 29.8 °C (85.6 °F), respectively.",
"title": "Description"
},
{
"paragraph_id": 27,
"text": "Melting and boiling points, typically expressed in degrees Celsius at a pressure of one atmosphere, are commonly used in characterizing the various elements. While known for most elements, either or both of these measurements is still undetermined for some of the radioactive elements available in only tiny quantities. Since helium remains a liquid even at absolute zero at atmospheric pressure, it has only a boiling point, and not a melting point, in conventional presentations.",
"title": "Description"
},
{
"paragraph_id": 28,
"text": "The density at selected standard temperature and pressure (STP) is frequently used in characterizing the elements. Density is often expressed in grams per cubic centimeter (g/cm). Since several elements are gases at commonly encountered temperatures, their densities are usually stated for their gaseous forms; when liquefied or solidified, the gaseous elements have densities similar to those of the other elements.",
"title": "Description"
},
{
"paragraph_id": 29,
"text": "When an element has allotropes with different densities, one representative allotrope is typically selected in summary presentations, while densities for each allotrope can be stated where more detail is provided. For example, the three familiar allotropes of carbon (amorphous carbon, graphite, and diamond) have densities of 1.8–2.1, 2.267, and 3.515 g/cm, respectively.",
"title": "Description"
},
{
"paragraph_id": 30,
"text": "The elements studied to date as solid samples have eight kinds of crystal structures: cubic, body-centered cubic, face-centered cubic, hexagonal, monoclinic, orthorhombic, rhombohedral, and tetragonal. For some of the synthetically produced transuranic elements, available samples have been too small to determine crystal structures.",
"title": "Description"
},
{
"paragraph_id": 31,
"text": "Chemical elements may also be categorized by their origin on Earth, with the first 94 considered naturally occurring, while those with atomic numbers beyond 94 have only been produced artificially as the synthetic products of human-made nuclear reactions.",
"title": "Description"
},
{
"paragraph_id": 32,
"text": "Of the 94 naturally occurring elements, 83 are considered primordial and either stable or weakly radioactive. The remaining 11 naturally occurring elements possess half lives too short for them to have been present at the beginning of the Solar System, and are therefore considered transient elements. Of these 11 transient elements, 5 (polonium, radon, radium, actinium, and protactinium) are relatively common decay products of thorium and uranium. The remaining 6 transient elements (technetium, promethium, astatine, francium, neptunium, and plutonium) occur only rarely, as products of rare decay modes or nuclear reaction processes involving uranium or other heavy elements.",
"title": "Description"
},
{
"paragraph_id": 33,
"text": "No radioactive decay has been observed for elements with atomic numbers 1 through 82, except 43 (technetium) and 61 (promethium). Observationally stable isotopes of some elements (such as tungsten and lead), however, are predicted to be slightly radioactive with very long half-lives: for example, the half-lives predicted for the observationally stable lead isotopes range from 10 to 10 years. Elements with atomic numbers 43, 61, and 83 through 94 are unstable enough that their radioactive decay can readily be detected. Three of these elements, bismuth (element 83), thorium (element 90), and uranium (element 92) have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy elements before the formation of the Solar System. For example, at over 1.9×10 years, over a billion times longer than the current estimated age of the universe, bismuth-209 has the longest known alpha decay half-life of any naturally occurring element. The very heaviest 24 elements (those beyond plutonium, element 94) undergo radioactive decay with short half-lives and cannot be produced as daughters of longer-lived elements, and thus are not known to occur in nature at all.",
"title": "Description"
},
{
"paragraph_id": 34,
"text": "The properties of the chemical elements are often summarized using the periodic table, which powerfully and elegantly organizes the elements by increasing atomic number into rows (\"periods\") in which the columns (\"groups\") share recurring (\"periodic\") physical and chemical properties. The current standard table contains 118 confirmed elements as of 2021.",
"title": "Description"
},
{
"paragraph_id": 35,
"text": "Although earlier precursors to this presentation exist, its invention is generally credited to the Russian chemist Dmitri Mendeleev in 1869, who intended the table to illustrate recurring trends in the properties of the elements. The layout of the table has been refined and extended over time as new elements have been discovered and new theoretical models have been developed to explain chemical behavior.",
"title": "Description"
},
{
"paragraph_id": 36,
"text": "Use of the periodic table is now ubiquitous within the academic discipline of chemistry, providing an extremely useful framework to classify, systematize and compare all the many different forms of chemical behavior. The table has also found wide application in physics, geology, biology, materials science, engineering, agriculture, medicine, nutrition, environmental health, and astronomy. Its principles are especially important in chemical engineering.",
"title": "Description"
},
{
"paragraph_id": 37,
"text": "",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 38,
"text": "The various chemical elements are formally identified by their unique atomic numbers, by their accepted names, and by their symbols.",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 39,
"text": "The known elements have atomic numbers from 1 through 118, conventionally presented as Arabic numerals. Since the elements can be uniquely sequenced by atomic number, conventionally from lowest to highest (as in a periodic table), sets of elements are sometimes specified by such notation as \"through\", \"beyond\", or \"from ... through\", as in \"through iron\", \"beyond uranium\", or \"from lanthanum through lutetium\". The terms \"light\" and \"heavy\" are sometimes also used informally to indicate relative atomic numbers (not densities), as in \"lighter than carbon\" or \"heavier than lead\", although technically the weight or mass of atoms of an element (their atomic weights or atomic masses) do not always increase monotonically with their atomic numbers.",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 40,
"text": "The naming of various substances now known as elements precedes the atomic theory of matter, as names were given locally by various cultures to various minerals, metals, compounds, alloys, mixtures, and other materials, although at the time it was not known which chemicals were elements and which compounds. As they were identified as elements, the existing names for anciently known elements (e.g., gold, mercury, iron) were kept in most countries. National differences emerged over the names of elements either for convenience, linguistic niceties, or nationalism. For a few illustrative examples: German speakers use \"Wasserstoff\" (water substance) for \"hydrogen\", \"Sauerstoff\" (acid substance) for \"oxygen\" and \"Stickstoff\" (smothering substance) for \"nitrogen\", while English and some romance languages use \"sodium\" for \"natrium\" and \"potassium\" for \"kalium\", and the French, Italians, Greeks, Portuguese and Poles prefer \"azote/azot/azoto\" (from roots meaning \"no life\") for \"nitrogen\".",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 41,
"text": "For purposes of international communication and trade, the official names of the chemical elements both ancient and more recently recognized are decided by the International Union of Pure and Applied Chemistry (IUPAC), which has decided on a sort of international English language, drawing on traditional English names even when an element's chemical symbol is based on a Latin or other traditional word, for example adopting \"gold\" rather than \"aurum\" as the name for the 79th element (Au). IUPAC prefers the British spellings \"aluminium\" and \"caesium\" over the U.S. spellings \"aluminum\" and \"cesium\", and the U.S. \"sulfur\" over the British \"sulphur\". However, elements that are practical to sell in bulk in many countries often still have locally used national names, and countries whose national language does not use the Latin alphabet are likely to use the IUPAC element names.",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 42,
"text": "According to IUPAC, chemical elements are not proper nouns in English; consequently, the full name of an element is not routinely capitalized in English, even if derived from a proper noun, as in californium and einsteinium. Isotope names of chemical elements are also uncapitalized if written out, e.g., carbon-12 or uranium-235. Chemical element symbols (such as Cf for californium and Es for einsteinium), are always capitalized (see below).",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 43,
"text": "In the second half of the twentieth century, physics laboratories became able to produce nuclei of chemical elements with half-lives too short for an appreciable amount of them to exist at any time. These are also named by IUPAC, which generally adopts the name chosen by the discoverer. This practice can lead to the controversial question of which research group actually discovered an element, a question that delayed the naming of elements with atomic number of 104 and higher for a considerable amount of time. (See element naming controversy).",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 44,
"text": "Precursors of such controversies involved the nationalistic namings of elements in the late 19th century. For example, lutetium was named in reference to Paris, France. The Germans were reluctant to relinquish naming rights to the French, often calling it cassiopeium. Similarly, the British discoverer of niobium originally named it columbium, in reference to the New World. It was used extensively as such by American publications before the international standardization (in 1950).",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 45,
"text": "Before chemistry became a science, alchemists had designed arcane symbols for both metals and common compounds. These were however used as abbreviations in diagrams or procedures; there was no concept of atoms combining to form molecules. With his advances in the atomic theory of matter, John Dalton devised his own simpler symbols, based on circles, to depict molecules.",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 46,
"text": "The current system of chemical notation was invented by Jöns Jakob Berzelius in 1814. In this typographical system, chemical symbols are not mere abbreviations—though each consists of letters of the Latin alphabet. They are intended as universal symbols for people of all languages and alphabets.",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 47,
"text": "Since Latin was the common language of science at Berzelius's time, his symbols were abbreviations based on the Latin names of elements (they may be Classical Latin names of elementary substances known since antiquity or Neo-Latin coinages for later elements). The symbols are not followed by a period (full stop) as with abbreviations. For example, hydrogen has the chemical symbol \"H\" after the Neo-Latin hydrogenium; sodium has the chemical symbol \"Na\" after the Neo-Latin natrium. The same applies to \"Fe\" (ferrum) for iron, \"Hg\" (hydrargyrum) for mercury, \"Sn\" (stannum) for tin, \"Au\" (aurum) for gold, \"Ag\" (argentum) for silver, \"Pb\" (plumbum) for lead, \"Cu\" (cuprum) for copper, and \"Sb\" (stibium) for antimony. \"W\" (wolframium) for tungsten ultimately derives from German, \"K\" (kalium) for potassium ultimately from Arabic.",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 48,
"text": "Later chemical elements were also assigned unique chemical symbols, based on the name of the element, but not necessarily in English.",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 49,
"text": "Chemical symbols are understood internationally when element names might require translation. There have sometimes been differences in the past. For example, Germans in the past have used \"J\" (for the alternate name Jod) for iodine, but now use \"I\" and \"Iod\".",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 50,
"text": "The first letter of a chemical symbol is always capitalized, as in the preceding examples, and the subsequent letters, if any, are always lower case (small letters). Thus, the symbols for californium and einsteinium are Cf and Es.",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 51,
"text": "There are also symbols in chemical equations for groups of chemical elements, for example in comparative formulas. These are often a single capital letter, and the letters are reserved and not used for names of specific elements. For example, an \"X\" indicates a variable group (usually a halogen) in a class of compounds, while \"R\" is a radical, meaning a compound structure such as a hydrocarbon chain. The letter \"Q\" is reserved for \"heat\" in a chemical reaction. \"Y\" is also often used as a general chemical symbol, although it is also the symbol of yttrium. \"Z\" is also frequently used as a general variable group. \"E\" is used in organic chemistry to denote an electron-withdrawing group or an electrophile; similarly \"Nu\" denotes a nucleophile. \"L\" is used to represent a general ligand in inorganic and organometallic chemistry. \"M\" is also often used in place of a general metal.",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 52,
"text": "At least two additional, two-letter generic chemical symbols are also in informal usage, \"Ln\" for any lanthanide element and \"An\" for any actinide element. \"Rg\" was formerly used for any rare gas element, but the group of rare gases has now been renamed noble gases and the symbol \"Rg\" has now been assigned to the element roentgenium.",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 53,
"text": "Isotopes are distinguished by the atomic mass number (total protons and neutrons) for a particular isotope of an element, with this number combined with the pertinent element's symbol. IUPAC prefers that isotope symbols be written in superscript notation when practical, for example C and U. However, other notations, such as carbon-12 and uranium-235, or C-12 and U-235, are also used.",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 54,
"text": "As a special case, the three naturally occurring isotopes of the element hydrogen are often specified as H for H (protium), D for H (deuterium), and T for H (tritium). This convention is easier to use in chemical equations, replacing the need to write out the mass number for each atom. For example, the formula for heavy water may be written D2O instead of H2O.",
"title": "Nomenclature and symbols"
},
{
"paragraph_id": 55,
"text": "Only about 4% of the total mass of the universe is made of atoms or ions, and thus represented by chemical elements. This fraction is about 15% of the total matter, with the remainder of the matter (85%) being dark matter. The nature of dark matter is unknown, but it is not composed of atoms of chemical elements because it contains no protons, neutrons, or electrons. (The remaining non-matter part of the mass of the universe is composed of the even less well understood dark energy).",
"title": "Origin of the elements"
},
{
"paragraph_id": 56,
"text": "The 94 naturally occurring chemical elements were produced by at least four classes of astrophysical process. Most of the hydrogen, helium and a very small quantity of lithium were produced in the first few minutes of the Big Bang. This Big Bang nucleosynthesis happened only once; the other processes are ongoing. Nuclear fusion inside stars produces elements through stellar nucleosynthesis, including all elements from carbon to iron in atomic number. Elements higher in atomic number than iron, including heavy elements like uranium and plutonium, are produced by various forms of explosive nucleosynthesis in supernovae and neutron star mergers. The light elements lithium, beryllium and boron are produced mostly through cosmic ray spallation (fragmentation induced by cosmic rays) of carbon, nitrogen, and oxygen.",
"title": "Origin of the elements"
},
{
"paragraph_id": 57,
"text": "During the early phases of the Big Bang, nucleosynthesis of hydrogen nuclei resulted in the production of hydrogen-1 (protium, H) and helium-4 (He), as well as a smaller amount of deuterium (H) and very minuscule amounts (on the order of 10) of lithium and beryllium. Even smaller amounts of boron may have been produced in the Big Bang, since it has been observed in some very old stars, while carbon has not. No elements heavier than boron were produced in the Big Bang. As a result, the primordial abundance of atoms (or ions) consisted of roughly 75% H, 25% He, and 0.01% deuterium, with only tiny traces of lithium, beryllium, and perhaps boron. Subsequent enrichment of galactic halos occurred due to stellar nucleosynthesis and supernova nucleosynthesis. However, the element abundance in intergalactic space can still closely resemble primordial conditions, unless it has been enriched by some means.",
"title": "Origin of the elements"
},
{
"paragraph_id": 58,
"text": "On Earth (and elsewhere), trace amounts of various elements continue to be produced from other elements as products of nuclear transmutation processes. These include some produced by cosmic rays or other nuclear reactions (see cosmogenic and nucleogenic nuclides), and others produced as decay products of long-lived primordial nuclides. For example, trace (but detectable) amounts of carbon-14 (C) are continually produced in the atmosphere by cosmic rays impacting nitrogen atoms, and argon-40 (Ar) is continually produced by the decay of primordially occurring but unstable potassium-40 (K). Also, three primordially occurring but radioactive actinides, thorium, uranium, and plutonium, decay through a series of recurrently produced but unstable radioactive elements such as radium and radon, which are transiently present in any sample of these metals or their ores or compounds. Three other radioactive elements, technetium, promethium, and neptunium, occur only incidentally in natural materials, produced as individual atoms by nuclear fission of the nuclei of various heavy elements or in other rare nuclear processes.",
"title": "Origin of the elements"
},
{
"paragraph_id": 59,
"text": "In addition to the 94 naturally occurring elements, several artificial elements have been produced by human nuclear physics technology. As of 2021, these experiments have produced all elements up to atomic number 118.",
"title": "Origin of the elements"
},
{
"paragraph_id": 60,
"text": "The following graph (note log scale) shows the abundance of elements in our Solar System. The table shows the twelve most common elements in our galaxy (estimated spectroscopically), as measured in parts per million, by mass. Nearby galaxies that have evolved along similar lines have a corresponding enrichment of elements heavier than hydrogen and helium. The more distant galaxies are being viewed as they appeared in the past, so their abundances of elements appear closer to the primordial mixture. As physical laws and processes appear common throughout the visible universe, however, scientist expect that these galaxies evolved elements in similar abundance.",
"title": "Abundance"
},
{
"paragraph_id": 61,
"text": "The abundance of elements in the Solar System is in keeping with their origin from nucleosynthesis in the Big Bang and a number of progenitor supernova stars. Very abundant hydrogen and helium are products of the Big Bang, but the next three elements are rare since they had little time to form in the Big Bang and are not made in stars (they are, however, produced in small quantities by the breakup of heavier elements in interstellar dust, as a result of impact by cosmic rays). Beginning with carbon, elements are produced in stars by buildup from alpha particles (helium nuclei), resulting in an alternatingly larger abundance of elements with even atomic numbers (these are also more stable). In general, such elements up to iron are made in large stars in the process of becoming supernovas. Iron-56 is particularly common, since it is the most stable element that can easily be made from alpha particles (being a product of decay of radioactive nickel-56, ultimately made from 14 helium nuclei). Elements heavier than iron are made in energy-absorbing processes in large stars, and their abundance in the universe (and on Earth) generally decreases with their atomic number.",
"title": "Abundance"
},
{
"paragraph_id": 62,
"text": "The abundance of the chemical elements on Earth varies from air to crust to ocean, and in various types of life. The abundance of elements in Earth's crust differs from that in the Solar System (as seen in the Sun and heavy planets like Jupiter) mainly in selective loss of the very lightest elements (hydrogen and helium) and also volatile neon, carbon (as hydrocarbons), nitrogen and sulfur, as a result of solar heating in the early formation of the solar system. Oxygen, the most abundant Earth element by mass, is retained on Earth by combination with silicon. Aluminium at 8% by mass is more common in the Earth's crust than in the universe and solar system, but the composition of the far more bulky mantle, which has magnesium and iron in place of aluminium (which occurs there only at 2% of mass) more closely mirrors the elemental composition of the solar system, save for the noted loss of volatile elements to space, and loss of iron which has migrated to the Earth's core.",
"title": "Abundance"
},
{
"paragraph_id": 63,
"text": "The composition of the human body, by contrast, more closely follows the composition of seawater—save that the human body has additional stores of carbon and nitrogen necessary to form the proteins and nucleic acids, together with phosphorus in the nucleic acids and energy transfer molecule adenosine triphosphate (ATP) that occurs in the cells of all living organisms. Certain kinds of organisms require particular additional elements, for example the magnesium in chlorophyll in green plants, the calcium in mollusc shells, or the iron in the hemoglobin in vertebrate animals' red blood cells.",
"title": "Abundance"
},
{
"paragraph_id": 64,
"text": "The concept of an \"element\" as an undivisible substance has developed through three major historical phases: Classical definitions (such as those of the ancient Greeks), chemical definitions, and atomic definitions.",
"title": "History"
},
{
"paragraph_id": 65,
"text": "Ancient philosophy posited a set of classical elements to explain observed patterns in nature. These elements originally referred to earth, water, air and fire rather than the chemical elements of modern science.",
"title": "History"
},
{
"paragraph_id": 66,
"text": "The term 'elements' (stoicheia) was first used by the Greek philosopher Plato in about 360 BCE in his dialogue Timaeus, which includes a discussion of the composition of inorganic and organic bodies and is a speculative treatise on chemistry. Plato believed the elements introduced a century earlier by Empedocles were composed of small polyhedral forms: tetrahedron (fire), octahedron (air), icosahedron (water), and cube (earth).",
"title": "History"
},
{
"paragraph_id": 67,
"text": "Aristotle, c. 350 BCE, also used the term stoicheia and added a fifth element called aether, which formed the heavens. Aristotle defined an element as:",
"title": "History"
},
{
"paragraph_id": 68,
"text": "Element – one of those bodies into which other bodies can decompose, and that itself is not capable of being divided into other.",
"title": "History"
},
{
"paragraph_id": 69,
"text": "In 1661, in The Sceptical Chymist, Robert Boyle proposed his theory of corpuscularism which favoured the analysis of matter as constituted by irreducible units of matter (atoms) and, choosing to side with neither Aristotle's view of the four elements nor Paracelsus' view of three fundamental elements, left open the question of the number of elements. Boyle argued against a pre-determined number elements—directly against Paracelsus three principles (sulfur, mercury, and salt), indirectly against the “Aristotelian” elements (earth, water, air, and fire), for Boyle felt that the arguments against the former should be at least as valid against the latter.",
"title": "History"
},
{
"paragraph_id": 70,
"text": "Much of what I am to deliver ... may be indifferently apply’d to the four Peripatetick Elements, and the three Chymical Principles ... the Chymical Hypothesis seeming to be much more countenanc’d by Experience then the other, it will be expedient to insist chiefly upon the disproving of that; especially since most of the Arguments that are imploy’d against it, may, by a little variation, be made ... at least as strongly against the less plausible, Aristotelian Doctrine.",
"title": "History"
},
{
"paragraph_id": 71,
"text": "Then Boyle states his own view in four propositions. In the first and second, he suggests that matter consists of particles, but that these particles may be difficult to separate.",
"title": "History"
},
{
"paragraph_id": 72,
"text": "Propos. I. ... At the first Production of mixt Bodies, the Universal Matter whereof they among other Parts of the Universe consisted, was actually divided into little Particles of several sizes and shapes.",
"title": "History"
},
{
"paragraph_id": 73,
"text": "The Generation, Corruption ... and wasting of Bodies ... and ... the Chymical Resolutions of mixt Bodies, and ... Operations of ... Fires upon them ... manifest their consisting of parts very minute... And that there does also intervene a various local Motion of such small Bodies ... Epicurus ... as you well know, supposes not only all mixt Bodies, but all others to be produc’d by ... Atomes, moving themselves to and fro ... in the Immense or rather Infinite Vacuum.",
"title": "History"
},
{
"paragraph_id": 74,
"text": "Propos. II. ... These minute Particles ... were here and there associated into minute Masses or Clusters ... as were not easily dissipable into such Particles as compos’d them.",
"title": "History"
},
{
"paragraph_id": 75,
"text": "Gold will also by common Aqua Regis ... be reduc’d into a seeming Liquor, in so much that the Corpuscles of Gold will, with those of the Menstruum, pass through Cap-Paper, and will, with those of the Menstruum, coagulate into a Crystalline Salt. ... and neverthelesse be afterward reduc’d to the self-same ... Gold it was before its commixture. ... Quicksilver ... with Aqua fortis will be brought into either a red or white Powder ... with Oyl of Vitriol into a pale Yellow one, with Sulphur it will compose a blood-red and volatile Cinaber. And yet out of all these exotick Compounds, we may recover the very same running Mercury that was the main Ingredient of them.",
"title": "History"
},
{
"paragraph_id": 76,
"text": "If we assigne to the Corpuscles, whereof each Element consists, a peculiar size and shape, it may easily enough be manifested, That such differingly figur’d Corpuscles may be mingled in such various Proportions, and may be connected so many several wayes, that an almost incredible number of variously qualified Concretes may be compos’d of them.",
"title": "History"
},
{
"paragraph_id": 77,
"text": "Boyle did not, however, consider gold or mercury to be elements:",
"title": "History"
},
{
"paragraph_id": 78,
"text": "Gold and Mercury, though they be not primary Concretions of the most minute Particles or matter, but confessedly mixt Bodies, ...",
"title": "History"
},
{
"paragraph_id": 79,
"text": "Propos. III. ... From most of such mixt Bodies ... there may by the Help of the Fire, be actually obtain’d a determinate number (whether Three, Four or Five, or fewer or more) of Substances ... Propos. IV. ... which Concretes... are made up of, may ... be call’d the Elements or Principles of them.",
"title": "History"
},
{
"paragraph_id": 80,
"text": "The Chymists are wont to call the Ingredients of mixt Bodies, Principles, as the Aristotelians name them Elements. ... Principles? as not being compounded of any more primary Bodies: and Elements, in regard that all mix’d Bodies are compounded of them.",
"title": "History"
},
{
"paragraph_id": 81,
"text": "The first modern list of chemical elements was given in Antoine Lavoisier's 1789 Elements of Chemistry, which contained thirty-three elements, including light and caloric. By 1818, Jöns Jakob Berzelius had determined atomic weights for forty-five of the forty-nine then-accepted elements. Dmitri Mendeleev had sixty-three elements in his periodic table of 1869.",
"title": "History"
},
{
"paragraph_id": 82,
"text": "From Boyle until the early 20th century, an element was defined as a pure substance that could not be decomposed into any simpler substance. Put another way, a chemical element cannot be transformed into other chemical elements by chemical processes. Elements during this time were generally distinguished by their atomic weights, a property measurable with fair accuracy by available analytical techniques.",
"title": "History"
},
{
"paragraph_id": 83,
"text": "The 1913 discovery by English physicist Henry Moseley that the nuclear charge is the physical basis for an atom's atomic number, further refined when the nature of protons and neutrons became appreciated, eventually led to the current definition of an element based on atomic number (number of protons per atomic nucleus). The use of atomic numbers, rather than atomic weights, to distinguish elements has greater predictive value (since these numbers are integers), and also resolves some ambiguities in the chemistry-based view due to varying properties of isotopes and allotropes within the same element. Currently, IUPAC defines an element to exist if it has isotopes with a lifetime longer than the 10 seconds it takes the nucleus to form an electronic cloud.",
"title": "History"
},
{
"paragraph_id": 84,
"text": "By 1914, eighty-seven elements were known, all naturally occurring.(See Timeline of chemical element discoveries) The remaining naturally occurring elements were discovered or isolated in subsequent decades, and various additional elements have also been produced synthetically, with much of that work pioneered by Glenn T. Seaborg. In 1955, element 101 was discovered and named mendelevium in honor of D.I. Mendeleev, the first to arrange the elements in a periodic manner.",
"title": "History"
},
{
"paragraph_id": 85,
"text": "Ten materials familiar to various prehistoric cultures are now known to be chemical elements: Carbon, copper, gold, iron, lead, mercury, silver, sulfur, tin, and zinc. Three additional materials now accepted as elements, arsenic, antimony, and bismuth, were recognized as distinct substances prior to 1500 AD. Phosphorus, cobalt, and platinum were isolated before 1750.",
"title": "History"
},
{
"paragraph_id": 86,
"text": "Most of the remaining naturally occurring chemical elements were identified and characterized by 1900, including:",
"title": "History"
},
{
"paragraph_id": 87,
"text": "Elements isolated or produced since 1900 include:",
"title": "History"
},
{
"paragraph_id": 88,
"text": "The first transuranium element (element with atomic number greater than 92) discovered was neptunium in 1940. Since 1999, claims for the discovery of new elements have been considered by the IUPAC/IUPAP Joint Working Party. As of January 2016, all 118 elements have been confirmed by IUPAC as being discovered. The discovery of element 112 was acknowledged in 2009, and the name copernicium and the atomic symbol Cn were suggested for it. The name and symbol were officially endorsed by IUPAC on 19 February 2010. The heaviest element that is believed to have been synthesized to date is element 118, oganesson, on 9 October 2006, by the Flerov Laboratory of Nuclear Reactions in Dubna, Russia. Tennessine, element 117 was the latest element claimed to be discovered, in 2009. On 28 November 2016, scientists at the IUPAC officially recognized the names for the four newest chemical elements, with atomic numbers 113, 115, 117, and 118.",
"title": "History"
},
{
"paragraph_id": 89,
"text": "The following sortable table shows the 118 known chemical elements.",
"title": "List of the 118 known chemical elements"
}
] | A chemical element is a chemical substance that cannot be broken down into other substances. The basic particle that constitutes a chemical element is the atom, and each chemical element is distinguished by the number of protons in the nuclei of its atoms, known as its atomic number. For example, oxygen has an atomic number of 8, meaning that each oxygen atom has 8 protons in its nucleus. This is in contrast to chemical compounds and mixtures, which contain atoms with different atomic numbers. Almost all of the baryonic matter of the universe is composed of chemical elements. When different elements undergo chemical reactions, atoms are rearranged into new compounds held together by chemical bonds. Only a minority of elements, such as silver and gold, are found uncombined as relatively pure native element minerals. Nearly all other naturally occurring elements occur in the Earth as compounds or mixtures. Air is primarily a mixture of the elements nitrogen, oxygen, and argon, though it does contain compounds including carbon dioxide and water. The history of the discovery and use of the elements began with primitive human societies that discovered native minerals like carbon, sulfur, copper and gold. Attempts to classify materials such as these resulted in the concepts of classical elements, alchemy, and various similar theories throughout human history. Much of the modern understanding of elements developed from the work of Dmitri Mendeleev, a Russian chemist who published the first recognizable periodic table in 1869. This table organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The periodic table summarizes various properties of the elements, allowing chemists to derive relationships between them and to make predictions about compounds and potential new ones. By November 2016, the International Union of Pure and Applied Chemistry had recognized a total of 118 elements. The first 94 occur naturally on Earth, and the remaining 24 are synthetic elements produced in nuclear reactions. Save for unstable radioactive elements (radionuclides) which decay quickly, nearly all of the elements are available industrially in varying amounts. The discovery and synthesis of further new elements is an ongoing area of scientific study. | 2001-10-07T19:01:31Z | 2023-12-31T06:27:19Z | [
"Template:NUBASE2016",
"Template:More citations needed",
"Template:List of chemical elements",
"Template:Cite journal",
"Template:Navbox periodic table",
"Template:Pp-vandalism",
"Template:Use British English",
"Template:Periodic table",
"Template:As of",
"Template:Periodic table (dietary elements)",
"Template:Div col end",
"Template:Blockquote",
"Template:Reflist",
"Template:Cite web",
"Template:Cite book",
"Template:Cbignore",
"Template:Nature",
"Template:Authority control",
"Template:E",
"Template:For",
"Template:Cite news",
"Template:Citation-attribution",
"Template:Webarchive",
"Template:Navbox element isotopes",
"Template:Short description",
"Template:Clear",
"Template:Biology nav",
"Template:Sidebar periodic table",
"Template:See also",
"Template:Div col",
"Template:Commons category",
"Template:Branches of chemistry",
"Template:Main",
"Template:Use dmy dates",
"Template:R",
"Template:Anchor",
"Template:Circa"
] | https://en.wikipedia.org/wiki/Chemical_element |
5,661 | Centime | Centime (from Latin: centesimus) is French for "cent", and is used in English as the name of the fraction currency in several Francophone countries (including Switzerland, Algeria, Belgium, Morocco and France).
In France, the usage of centime goes back to the introduction of the decimal monetary system under Napoleon. This system aimed at replacing non-decimal fractions of older coins. A five-centime coin was known as a sou, i.e. a solidus or shilling.
In Francophone Canada 1⁄100 of a Canadian dollar is officially known as a cent (pronounced /sɛnt/) in both English and French. However, in practice, the form of cenne (pronounced /sɛn/) has completely replaced the official cent. Spoken and written use of the official form cent in Francophone Canada is exceptionally uncommon. In the Canadian French vernacular sou, sou noir (noir means "black" in French), cenne, and cenne noire are all widely known, used, and accepted monikers when referring to either 1⁄100 of a Canadian dollar or the 1¢ coin (colloquially known as a "penny" in North American English).
In the European Community, cent is the official name for one hundredth of a euro. However, in French-speaking countries, the word centime is the preferred term. The Superior Council of the French language of Belgium recommended in 2001 the use of centime, since cent is also the French word for "hundred". An analogous decision was published in the Journal officiel in France (2 December 1997).
In Morocco, dirhams are divided into 100 centimes and one may find prices in the country quoted in centimes rather than in dirhams. Sometimes centimes are known as francs or, in former Spanish areas, pesetas.
A centime is one-hundredth of the following basic monetary units: | [
{
"paragraph_id": 0,
"text": "Centime (from Latin: centesimus) is French for \"cent\", and is used in English as the name of the fraction currency in several Francophone countries (including Switzerland, Algeria, Belgium, Morocco and France).",
"title": ""
},
{
"paragraph_id": 1,
"text": "In France, the usage of centime goes back to the introduction of the decimal monetary system under Napoleon. This system aimed at replacing non-decimal fractions of older coins. A five-centime coin was known as a sou, i.e. a solidus or shilling.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In Francophone Canada 1⁄100 of a Canadian dollar is officially known as a cent (pronounced /sɛnt/) in both English and French. However, in practice, the form of cenne (pronounced /sɛn/) has completely replaced the official cent. Spoken and written use of the official form cent in Francophone Canada is exceptionally uncommon. In the Canadian French vernacular sou, sou noir (noir means \"black\" in French), cenne, and cenne noire are all widely known, used, and accepted monikers when referring to either 1⁄100 of a Canadian dollar or the 1¢ coin (colloquially known as a \"penny\" in North American English).",
"title": ""
},
{
"paragraph_id": 3,
"text": "In the European community, cent is the official name for one hundredth of a euro. However, in French-speaking countries, the word centime is the preferred term. The Superior Council of the French language of Belgium recommended in 2001 the use of centime, since cent is also the French word for \"hundred\". An analogous decision was published in the Journal officiel in France (2 December 1997).",
"title": "Subdivision of euro: cent or centime?"
},
{
"paragraph_id": 4,
"text": "In Morocco, dirhams are divided into 100 centimes and one may find prices in the country quoted in centimes rather than in dirhams. Sometimes centimes are known as francs or, in former Spanish areas, pesetas.",
"title": "Subdivision of euro: cent or centime?"
},
{
"paragraph_id": 5,
"text": "A centime is one-hundredth of the following basic monetary units:",
"title": "Usage"
}
] | Centime is French for "cent", and is used in English as the name of the fraction currency in several Francophone countries. In France, the usage of centime goes back to the introduction of the decimal monetary system under Napoleon. This system aimed at replacing non-decimal fractions of older coins. A five-centime coin was known as a sou, i.e. a solidus or shilling. In Francophone Canada 1⁄100 of a Canadian dollar is officially known as a cent in both English and French. However, in practice, the form of cenne has completely replaced the official cent. Spoken and written use of the official form cent in Francophone Canada is exceptionally uncommon.
In the Canadian French vernacular sou, sou noir, cenne, and cenne noire are all widely known, used, and accepted monikers when referring to either 1⁄100 of a Canadian dollar or the 1¢ coin. | 2001-05-16T18:41:38Z | 2023-08-11T16:17:59Z | [
"Template:Use mdy dates",
"Template:Coin image box 1 double",
"Template:Lang-la",
"Template:Lang",
"Template:Refimprove",
"Template:Frac",
"Template:Expand list",
"Template:Cent (currency)",
"Template:Portal",
"Template:Reflist"
] | https://en.wikipedia.org/wiki/Centime |
5,662 | Calendar year | Generally speaking, a calendar year begins on the New Year's Day of the given calendar system and ends on the day before the following New Year's Day, and thus consists of a whole number of days. A year can also be measured by starting on any other named day of the calendar, and ending on the day before this named day in the following year. This may be termed a "year's time", but not a "calendar year". To reconcile the calendar year with the astronomical cycle (which has a fractional number of days) certain years contain extra days ("leap days" or "intercalary days"). The Gregorian year, which is in use in most of the world, begins on January 1 and ends on December 31. It has a length of 365 days in an ordinary year, with 8760 hours, 525,600 minutes, or 31,536,000 seconds; but 366 days in a leap year, with 8784 hours, 527,040 minutes, or 31,622,400 seconds. With 97 leap years every 400 years, the year has an average length of 365.2425 days. Other formula-based calendars can have lengths which are further out of step with the solar cycle: for example, the Julian calendar has an average length of 365.25 days, and the Hebrew calendar has an average length of 365.2468 days. The Lunar Hijri calendar is a lunar calendar consisting of 12 months in a year of 354 or 355 days. The astronomer's mean tropical year, which is averaged over equinoxes and solstices, is currently 365.24219 days, slightly shorter than the average length of the year in most calendars.
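The figures above follow from the Gregorian leap-year rule (every fourth year is a leap year, except century years not divisible by 400). A minimal Python sketch checking them, with illustrative names:

```python
# Verify the leap-year counts and day/hour/minute/second figures quoted above.
def is_gregorian_leap(year: int) -> bool:
    # Divisible by 4, except century years that are not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

leaps = sum(is_gregorian_leap(y) for y in range(2000, 2400))  # one full 400-year cycle
print(leaps)                              # 97
print((400 * 365 + leaps) / 400)          # 365.2425 (average year length)
print(365 * 24, 365 * 1440, 365 * 86400)  # 8760 525600 31536000 (ordinary year)
print(366 * 24, 366 * 1440, 366 * 86400)  # 8784 527040 31622400 (leap year)
```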
The calendar year can be divided into four quarters, often abbreviated as Q1, Q2, Q3, and Q4. Since they are three months each, they are also called trimesters. In the Gregorian calendar:
In some domains, weeks are preferred over months for scheduling and reporting, so they use quarters of exactly 13 weeks each, often following ISO week date conventions. One in five to six years has a 53rd week which is usually appended to the last quarter. It is then 98 days instead of 91 days long, which complicates comparisons.
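The "one in five to six years" with a 53rd ISO week can be listed with the standard library alone; a small Python sketch (names illustrative):

```python
# List ISO years having a 53rd week in a given range.
import datetime

def iso_week_count(year: int) -> int:
    # 28 December always falls in the last ISO week of its ISO year.
    return datetime.date(year, 12, 28).isocalendar()[1]

print([y for y in range(2000, 2030) if iso_week_count(y) == 53])
# [2004, 2009, 2015, 2020, 2026] -- in such years one quarter has
# 14 weeks (98 days) instead of the usual 13 weeks (91 days).
```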
In the Chinese calendar, the quarters are traditionally associated with the 4 seasons of the year: | [
{
"paragraph_id": 0,
"text": "Generally speaking, a calendar year begins on the New Year's Day of the given calendar system and ends on the day before the following New Year's Day, and thus consists of a whole number of days. A year can also be measured by starting on any other named day of the calendar, and ending on the day before this named day in the following year. This may be termed a \"year's time\", but not a \"calendar year\". To reconcile the calendar year with the astronomical cycle (which has a fractional number of days) certain years contain extra days (\"leap days\" or \"intercalary days\"). The Gregorian year, which is in use in most of the world, begins on January 1 and ends on December 31. It has a length of 365 days in an ordinary year, with 8760 hours, 525,600 minutes, or 31,536,000 seconds; but 366 days in a leap year, with 8784 hours, 527,040 minutes, or 31,622,400 seconds. With 97 leap years every 400 years, the year has an average length of 365.2425 days. Other formula-based calendars can have lengths which are further out of step with the solar cycle: for example, the Julian calendar has an average length of 365.25 days, and the Hebrew calendar has an average length of 365.2468 days. The Lunar Hijri calendar is a lunar calendar consisting of 12 months in a year of 354 or 355 days. The astronomer's mean tropical year, which is averaged over equinoxes and solstices, is currently 365.24219 days, slightly shorter than the average length of the year in most calendars.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The calendar year can be divided into four quarters, often abbreviated as Q1, Q2, Q3, and Q4. Since they are three months each, they are also called trimesters. In the Gregorian calendar:",
"title": "Quarter year"
},
{
"paragraph_id": 2,
"text": "In some domains, weeks are preferred over months for scheduling and reporting, so they use quarters of exactly 13 weeks each, often following ISO week date conventions. One in five to six years has a 53rd week which is usually appended to the last quarter. It is then 98 days instead of 91 days long, which complicates comparisons.",
"title": "Quarter year"
},
{
"paragraph_id": 3,
"text": "In the Chinese calendar, the quarters are traditionally associated with the 4 seasons of the year:",
"title": "Quarter year"
}
] | Generally speaking, a calendar year begins on the New Year's Day of the given calendar system and ends on the day before the following New Year's Day, and thus consists of a whole number of days. A year can also be measured by starting on any other named day of the calendar, and ending on the day before this named day in the following year. This may be termed a "year's time", but not a "calendar year". To reconcile the calendar year with the astronomical cycle certain years contain extra days. The Gregorian year, which is in use in most of the world, begins on January 1 and ends on December 31. It has a length of 365 days in an ordinary year, with 8760 hours, 525,600 minutes, or 31,536,000 seconds; but 366 days in a leap year, with 8784 hours, 527,040 minutes, or 31,622,400 seconds. With 97 leap years every 400 years, the year has an average length of 365.2425 days. Other formula-based calendars can have lengths which are further out of step with the solar cycle: for example, the Julian calendar has an average length of 365.25 days, and the Hebrew calendar has an average length of 365.2468 days. The Lunar Hijri calendar is a lunar calendar consisting of 12 months in a year of 354 or 355 days. The astronomer's mean tropical year, which is averaged over equinoxes and solstices, is currently 365.24219 days, slightly shorter than the average length of the year in most calendars. | 2001-05-16T20:37:21Z | 2023-12-25T01:48:48Z | [
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:Authority control",
"Template:Short description",
"Template:More citations needed",
"Template:Anchor",
"Template:Div col",
"Template:Div col end"
] | https://en.wikipedia.org/wiki/Calendar_year |
5,663 | CFA franc | The CFA franc (French: franc CFA, [fʁɑ̃ seɛfɑ], Franc of the Financial Community of Africa, originally Franc of the French Colonies in Africa, or colloquially franc; abbreviation: F.CFA) is the name of two currencies, the West African CFA franc, used in eight West African countries, and the Central African CFA franc, used in six Central African countries. Although separate, the two CFA franc currencies have always been at parity and are effectively interchangeable. The ISO currency codes are XAF for the Central African CFA franc and XOF for the West African CFA franc. On 22 December 2019, it was announced that the West African currency would be reformed and replaced by an independent currency to be called Eco.
Both CFA francs have a fixed exchange rate (peg) to the euro: €1 = F.CFA 655.957 exactly, and member countries have been required to deposit half of their foreign exchange reserves with the French Treasury. The currency has been criticized for restricting the sovereignty of the African member states, effectively putting their monetary policy in the hands of the European Central Bank. Others argue that the CFA "helps stabilize the national currencies of Franc Zone member-countries and greatly facilitates the flow of exports and imports between France and the member-countries".
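As a rough illustration of the fixed peg, a minimal Python sketch using the rate quoted above (helper names are illustrative, not an official API):

```python
# Convert between euros and CFA francs at the fixed peg.
PEG_CFA_PER_EUR = 655.957  # applies to both XOF and XAF, exact by agreement

def eur_to_cfa(eur: float) -> float:
    return eur * PEG_CFA_PER_EUR

def cfa_to_eur(cfa: float) -> float:
    return cfa / PEG_CFA_PER_EUR

print(eur_to_cfa(100.0))             # 65595.7 F.CFA
print(round(cfa_to_eur(10_000), 2))  # 15.24 EUR
```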
In May 2020, the French National Assembly agreed to end the French engagement in the West African CFA franc, including the foreign reserve deposit requirements. The West African CFA franc is expected to be renamed as the "Eco" in the near future.
CFA francs are used in fourteen countries: twelve nations formerly ruled by France in West and Central Africa (excluding Guinea and Mauritania, which withdrew), plus Guinea-Bissau (a former Portuguese colony), and Equatorial Guinea (a former Spanish colony). These fourteen countries have a combined population of 193.1 million people (as of 2021), and a combined GDP of US$283.0 billion (as of 2021).
Between 1945 and 1958, CFA stood for Colonies françaises d'Afrique ("French colonies of Africa"); then for Communauté française d'Afrique ("French Community of Africa") between 1958 (establishment of the French Fifth Republic) and the independence of these African countries at the beginning of the 1960s. Since independence, CFA is taken to mean Communauté Financière Africaine (African Financial Community) or Coopération financière en Afrique centrale (see Institutions below).
The CFA franc was created on 26 December 1945, along with the CFP franc. The reason for their creation was the weakness of the French franc immediately after World War II. When France ratified the Bretton Woods Agreement in December 1945, the French franc was devalued in order to set a fixed exchange rate with the US dollar. New currencies were created in the French colonies to spare them the strong devaluation, thereby making it easier for them to import goods from France (and simultaneously making it harder for them to export goods to France). French officials presented the decision as an act of generosity. René Pleven, the French Minister of Finance, was quoted as saying:
In a show of her generosity and selflessness, metropolitan France, wishing not to impose on her far-away daughters the consequences of her own poverty, is setting different exchange rates for their currency.
The CFA franc was created with a fixed exchange rate versus the French franc. This exchange rate was changed only twice, in 1948 and in 1994 (besides nominal adaptation to the new French franc in 1960 and the Euro in 1999).
Exchange rate:
The 1960 and 1999 events merely reflect changes of currency in use in France: the actual relative value of the CFA franc versus the French franc/euro only changed in 1948 and 1994.
Over time, the number of countries and territories using the CFA franc has changed as some countries began introducing their own separate currencies. A couple of nations in West Africa have also chosen to adopt the CFA franc since its introduction, despite the fact that they had never been French colonies.
In 1998, in anticipation of Economic and Monetary Union of the European Union, the Council of the European Union addressed the monetary agreements France had with the CFA Zone and Comoros and ruled that:
The currency has been criticized for making national monetary policy for the developing countries of French West Africa all but impossible, since the CFA's value is pegged to the euro (whose monetary policy is set by the European Central Bank). Others disagree and argue that the CFA "helps stabilize the national currencies of Franc Zone member-countries and greatly facilitates the flow of exports and imports between France and the member-countries". The European Union's 2008 assessment of the CFA's link to the euro noted that "benefits from economic integration within each of the two monetary unions of the CFA franc zone, and even more so between them, remained remarkably low" but that "the peg to the French franc and, since 1999, to the euro as exchange rate anchor is usually found to have had favourable effects in the region in terms of macroeconomic stability".
Critics point out that the currency is controlled by the French treasury, and in turn African countries channel more money to France than they receive in aid and have no sovereignty over their monetary policies. In January 2019, Italian ministers accused France of impoverishing Africa through the CFA franc, and criticism continued from various African organizations. On 21 December 2019, President Alassane Ouattara of the Ivory Coast and President Emmanuel Macron of France announced an initiative to replace the West African CFA Franc with the Eco. Subsequently, a reform of the West African CFA franc was initiated. In May 2020, the French National Assembly agreed to end the French engagement in the West African CFA franc. The countries using the currency will no longer have to deposit half of their foreign exchange reserves with the French Treasury.
The broader Economic Community of West African States (ECOWAS), which includes the members of UEMOA, plans to introduce its own common currency for its member states by 2027, for which they have also formally adopted the name Eco.
On April 25, 2023, the subject of the CFA franc was discussed at the ministerial meeting of the Economic and Monetary Community of Central Africa (CEMAC) and France. The French perceive the guarantee provided to the CFA franc, and the assurance of its convertibility, as a pillar of economic stability for the region. France remains “open” and “available” to CEMAC proposals to reform monetary cooperation in Central Africa, as has happened in West Africa.
There are two different currencies called the CFA franc: the West African CFA franc (ISO 4217 currency code XOF), and the Central African CFA franc (ISO 4217 currency code XAF). They are distinguished in French by the meaning of the abbreviation CFA. These two CFA francs have the same exchange rate with the euro (1 euro = 655.957 XOF = 655.957 XAF), and they are both guaranteed by the French treasury (Trésor public), but the two currencies are only legal tender in their respective member countries.
The West African CFA franc (XOF) is known in French as the Franc CFA, where CFA stands for Communauté financière d'Afrique ('Financial Community of Africa') or Communauté Financière Africaine ("African Financial Community"). It is issued by the BCEAO (Banque Centrale des États de l'Afrique de l'Ouest, i.e., "Central Bank of the West African States"), located in Dakar, Senegal, for the eight countries of the UEMOA (Union Économique et Monétaire Ouest Africaine, i.e., "West African Economic and Monetary Union"):
These eight countries have a combined population of 134.7 million people (as of 2021), and a combined GDP of US$179.7 billion (as of 2021).
The Central African CFA franc (XAF) is known in French as the Franc CFA, where CFA stands for Coopération financière en Afrique centrale ("Financial Cooperation in Central Africa"). It is issued by the BEAC (Banque des États de l'Afrique Centrale, i.e., "Bank of the Central African States"), located in Yaoundé, Cameroon, for the six countries of the CEMAC (Communauté Économique et Monétaire de l'Afrique Centrale, i.e., "Economic and Monetary Community of Central Africa"):
These six countries have a combined population of 58.4 million people (as of 2021), and a combined GDP of US$103.3 billion (as of 2021).
In 1975, Central African CFA banknotes were issued with an obverse unique to each participating country, and common reverse, in a fashion similar to euro coins.
Equatorial Guinea, the only former Spanish colony in the zone, adopted the CFA in 1984. | [
{
"paragraph_id": 0,
"text": "The CFA franc (French: franc CFA, [fʁɑ̃ seɛfɑ], Franc of the Financial Community of Africa, originally Franc of the French Colonies in Africa, or colloquially franc; abbreviation: F.CFA) is the name of two currencies, the West African CFA franc, used in eight West African countries, and the Central African CFA franc, used in six Central African countries. Although separate, the two CFA franc currencies have always been at parity and are effectively interchangeable. The ISO currency codes are XAF for the Central African CFA franc and XOF for the West African CFA franc. On 22 December 2019, it was announced that the West African currency would be reformed and replaced by an independent currency to be called Eco.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Both CFA francs have a fixed exchange rate (peg) to the euro: €1 = F.CFA 655.957 exactly, and member countries deposited half of their foreign exchange reserves with the French Treasury. The currency has been criticized for restricting the sovereignty of the African member states, effectively putting their monetary policy in the hands of the European Central Bank. Others argue that the CFA \"helps stabilize the national currencies of Franc Zone member-countries and greatly facilitates the flow of exports and imports between France and the member-countries\".",
"title": ""
},
{
"paragraph_id": 2,
"text": "In May 2020, the French National Assembly agreed to end the French engagement in the West African CFA franc, including the foreign reserve deposit requirements. The West African CFA franc is expected to be renamed as the \"Eco\" in the near future.",
"title": ""
},
{
"paragraph_id": 3,
"text": "CFA francs are used in fourteen countries: twelve nations formerly ruled by France in West and Central Africa (excluding Guinea and Mauritania, which withdrew), plus Guinea-Bissau (a former Portuguese colony), and Equatorial Guinea (a former Spanish colony). These fourteen countries have a combined population of 193.1 million people (as of 2021), and a combined GDP of US$283.0 billion (as of 2021).",
"title": "Usage"
},
{
"paragraph_id": 4,
"text": "Between 1945 and 1958, CFA stood for Colonies françaises d'Afrique (\"French colonies of Africa\"); then for Communauté française d'Afrique (\"French Community of Africa\") between 1958 (establishment of the French Fifth Republic) and the independence of these African countries at the beginning of the 1960s. Since independence, CFA is taken to mean Communauté Financière Africaine (African Financial Community) or Coopération financière en Afrique centrale (see Institutions below).",
"title": "Name"
},
{
"paragraph_id": 5,
"text": "The CFA franc was created on 26 December 1945, along with the CFP franc. The reason for their creation was the weakness of the French franc immediately after World War II. When France ratified the Bretton Woods Agreement in December 1945, the French franc was devalued in order to set a fixed exchange rate with the US dollar. New currencies were created in the French colonies to spare them the strong devaluation, thereby making it easier for them to import goods from France (and simultaneously making it harder for them to export goods to France). French officials presented the decision as an act of generosity. René Pleven, the French Minister of Finance, was quoted as saying:",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In a show of her generosity and selflessness, metropolitan France, wishing not to impose on her far-away daughters the consequences of her own poverty, is setting different exchange rates for their currency.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The CFA franc was created with a fixed exchange rate versus the French franc. This exchange rate was changed only twice, in 1948 and in 1994 (besides nominal adaptation to the new French franc in 1960 and the Euro in 1999).",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Exchange rate:",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The 1960 and 1999 events merely reflect changes of currency in use in France: the actual relative value of the CFA franc versus the French franc/euro only changed in 1948 and 1994.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Over time, the number of countries and territories using the CFA franc has changed as some countries began introducing their own separate currencies. A couple of nations in West Africa have also chosen to adopt the CFA franc since its introduction, despite the fact that they had never been French colonies.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In 1998, in anticipation of Economic and Monetary Union of the European Union, the Council of the European Union addressed the monetary agreements France had with the CFA Zone and Comoros and ruled that:",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The currency has been criticized for making national monetary policy for the developing countries of French West Africa all but impossible, since the CFA's value is pegged to the euro (whose monetary policy is set by the European Central Bank). Others disagree and argue that the CFA \"helps stabilize the national currencies of Franc Zone member-countries and greatly facilitates the flow of exports and imports between France and the member-countries\". The European Union's 2008 assessment of the CFA's link to the euro noted that \"benefits from economic integration within each of the two monetary unions of the CFA franc zone, and even more so between them, remained remarkably low\" but that \"the peg to the French franc and, since 1999, to the euro as exchange rate anchor is usually found to have had favourable effects in the region in terms of macroeconomic stability\".",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Critics point out that the currency is controlled by the French treasury, and in turn African countries channel more money to France than they receive in aid and have no sovereignty over their monetary policies. In January 2019, Italian ministers accused France of impoverishing Africa through the CFA franc, and criticism continued from various African organizations. On 21 December 2019, President Alassane Ouattara of the Ivory Coast and President Emmanuel Macron of France announced an initiative to replace the West African CFA Franc with the Eco. Subsequently, a reform of the West African CFA franc was initiated. In May 2020, the French National Assembly agreed to end the French engagement in the West African CFA franc. The countries using the currency will no longer have to deposit half of their foreign exchange reserves with the French Treasury.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The broader Economic Community of West African States (ECOWAS), which includes the members of UEMOA, plans to introduce its own common currency for its member states by 2027, for which they have also formally adopted the name Eco.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "On April 25, 2023, the subject of the CFA franc was discussed at the ministerial meeting of the Economic and Monetary Community of Central Africa (CEMAC) and France. The French perceive the guarantee provided to the CFA franc, and the assurance of its convertibility, as a pillar of economic stability for the region. France remains “open” and “available” to CEMAC proposals to reform monetary cooperation in Central Africa, as has happened in West Africa.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "There are two different currencies called the CFA franc: the West African CFA franc (ISO 4217 currency code XOF), and the Central Africa CFA franc (ISO 4217 currency code XAF). They are distinguished in French by the meaning of the abbreviation CFA. These two CFA francs have the same exchange rate with the euro (1 euro = 655.957 XOF = 655.957 XAF), and they are both guaranteed by the French treasury (Trésor public), but the two currencies are only legal tender in their respective member countries.",
"title": "Institutions"
},
{
"paragraph_id": 17,
"text": "The West African CFA franc (XOF) is known in French as the Franc CFA, where CFA stands for Communauté financière d'Afrique ('Financial Community of Africa') or Communauté Financière Africaine (\"African Financial Community\"). It is issued by the BCEAO (Banque Centrale des États de l'Afrique de l'Ouest, i.e., \"Central Bank of the West African States\"), located in Dakar, Senegal, for the eight countries of the UEMOA (Union Économique et Monétaire Ouest Africaine, i.e., \"West African Economic and Monetary Union\"):",
"title": "Institutions"
},
{
"paragraph_id": 18,
"text": "These eight countries have a combined population of 134.7 million people (as of 2021), and a combined GDP of US$179.7 billion (as of 2021).",
"title": "Institutions"
},
{
"paragraph_id": 19,
"text": "The Central Africa CFA franc (XAF) is known in French as the Franc CFA, where CFA stands for Coopération financière en Afrique centrale (\"Financial Cooperation in Central Africa\"). It is issued by the BEAC (Banque des États de l'Afrique Centrale, i.e., \"Bank of the Central African States\"), located in Yaoundé, Cameroon, for the six countries of the CEMAC (Communauté Économique et Monétaire de l'Afrique Centrale, i.e., \"Economic and Monetary Community of Central Africa\"):",
"title": "Institutions"
},
{
"paragraph_id": 20,
"text": "These six countries have a combined population of 58.4 million people (as of 2021), and a combined GDP of US$103.3 billion (as of 2021).",
"title": "Institutions"
},
{
"paragraph_id": 21,
"text": "In 1975, Central African CFA banknotes were issued with an obverse unique to each participating country, and common reverse, in a fashion similar to euro coins.",
"title": "Institutions"
},
{
"paragraph_id": 22,
"text": "Equatorial Guinea, the only former Spanish colony in the zone, adopted the CFA in 1984.",
"title": "Institutions"
}
] | The CFA franc is the name of two currencies, the West African CFA franc, used in eight West African countries, and the Central African CFA franc, used in six Central African countries. Although separate, the two CFA franc currencies have always been at parity and are effectively interchangeable. The ISO currency codes are XAF for the Central African CFA franc and XOF for the West African CFA franc. On 22 December 2019, it was announced that the West African currency would be reformed and replaced by an independent currency to be called Eco. Both CFA francs have a fixed exchange rate (peg) to the euro: €1 = F.CFA 655.957 exactly, and member countries deposited half of their foreign exchange reserves with the French Treasury. The currency has been criticized for restricting the sovereignty of the African member states, effectively putting their monetary policy in the hands of the European Central Bank. Others argue that the CFA "helps stabilize the national currencies of Franc Zone member-countries and greatly facilitates the flow of exports and imports between France and the member-countries". In May 2020, the French National Assembly agreed to end the French engagement in the West African CFA franc, including the foreign reserve deposit requirements. The West African CFA franc is expected to be renamed as the "Eco" in the near future. | 2001-05-16T20:10:18Z | 2023-12-10T19:34:00Z | [
"Template:Portal",
"Template:Use dmy dates",
"Template:Color box",
"Template:Lang-fr",
"Template:Lang",
"Template:BFA",
"Template:Short description",
"Template:BEN",
"Template:Blockquote",
"Template:Reflist",
"Template:In lang",
"Template:IPA-fr",
"Template:MLI",
"Template:Flag",
"Template:GNB",
"Template:TOG",
"Template:Cite web",
"Template:Webarchive",
"Template:Benin topics",
"Template:Main",
"Template:Cite journal",
"Template:Cite news",
"Template:NIG",
"Template:SEN",
"Template:Franc",
"Template:CIV",
"Template:Commons category",
"Template:Webtrans",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/CFA_franc |
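A minimal worked example of the fixed peg described in the CFA franc record above. The rate of F.CFA 655.957 per euro is the only figure taken from the text; the function and constant names are invented for the illustration, and real-world conversion would add fees and rounding rules omitted here.

```python
# Illustrative sketch of the euro peg: the rate is fixed and identical for
# the West African (XOF) and Central African (XAF) CFA francs, which is why
# the two currencies are effectively interchangeable at parity.
PEG_FCFA_PER_EUR = 655.957  # F.CFA per euro (fixed)

def eur_to_cfa(euros: float) -> float:
    return euros * PEG_FCFA_PER_EUR

def cfa_to_eur(francs: float) -> float:
    return francs / PEG_FCFA_PER_EUR

print(round(eur_to_cfa(100), 1))   # 65595.7 F.CFA for EUR 100
print(round(cfa_to_eur(655.957)))  # 1 euro
```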
5,664 | Consciousness | Consciousness, at its simplest, is awareness of internal and external existence. However, its nature has led to millennia of analyses, explanations and debate among philosophers, theologians, and scientists. Opinions differ about what exactly needs to be studied, or even counted, as consciousness. In some explanations, it is synonymous with the mind; at other times, it is an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition. Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness, whether continuously changing or not. The disparate range of research, notions and speculations raises the question of whether the right questions are being asked.
Examples of the range of descriptions, definitions or explanations are: simple wakefulness; one's sense of selfhood or soul, explored by "looking within"; a metaphorical "stream" of contents; or a mental state, mental event or mental process of the brain.
The earliest English language uses of "conscious" and "consciousness" date to the 1500s, but not with today's meanings. The English word "conscious" originally derived from the Latin conscius (con- "together" and scio "to know") which meant "knowing with" or "having joint or common knowledge with another". In its earliest uses in the 1500s, the English word "conscious" retained the meaning of the Latin conscius. For example, Thomas Hobbes in Leviathan wrote: "Where two, or more men, know of one and the same fact, they are said to be Conscious of it one to another." There were also many occurrences in Latin writings of the phrase conscius sibi, which translates literally as "knowing with oneself", or in other words "sharing knowledge with oneself about something". This phrase has the figurative sense of "knowing that one knows", which is something like the modern English word "conscious", but it was rendered into English as "conscious to oneself" or "conscious unto oneself". For example, Archbishop Ussher wrote in 1613 of "being so conscious unto myself of my great weakness".
The origin of the modern concept of consciousness is often attributed to John Locke who defined consciousness in his Essay Concerning Human Understanding, published in 1690, as "the perception of what passes in a man's own mind". The essay strongly influenced 18th-century British philosophy, and Locke's definition appeared in Samuel Johnson's celebrated Dictionary (1755).
A related word was conscientia, which primarily means moral conscience. In the literal sense, "conscientia" means knowledge-with, that is, shared knowledge. The word first appears in Latin juridical texts by writers such as Cicero. Here, conscientia is the knowledge that a witness has of the deed of someone else. René Descartes (1596–1650) is generally taken to be the first philosopher to use conscientia in a way that does not fit this traditional meaning. Descartes used conscientia the way modern English speakers would use "conscience". In Search after Truth (Regulæ ad directionem ingenii ut et inquisitio veritatis per lumen naturale, Amsterdam 1701) he says "conscience or internal testimony" (conscientiâ, vel interno testimonio).
The French term conscience is defined roughly like English "consciousness" in the 1753 volume of Diderot and d'Alembert's Encyclopédie as "the opinion or internal feeling that we ourselves have from what we do".
About forty meanings attributed to the term consciousness can be identified and categorized based on functions and experiences. The prospects for reaching any single, agreed-upon, theory-independent definition of consciousness appear remote.
Scholars are divided as to whether Aristotle had a concept of consciousness. He does not use any single word or terminology that is clearly similar to the phenomenon or concept defined by John Locke. Victor Caston contends that Aristotle did have a concept more clearly similar to perceptual awareness.
The modern dictionary definitions of the word consciousness evolved through several centuries and reflect a range of seemingly related meanings, with some differences that have been controversial, such as the distinction between 'inward awareness' and 'perception' of the physical world, or the distinction between 'conscious' and 'unconscious', or the notion of a "mental entity" or "mental activity" that is not physical.
The common usage definitions of consciousness in Webster's Third New International Dictionary (1966 edition, Volume 1, page 482) are as follows:
The Cambridge Dictionary defines consciousness as "the state of understanding and realizing something". The Oxford Living Dictionary defines consciousness as "the state of being aware of and responsive to one's surroundings", "a person's awareness or perception of something", and "the fact of awareness by the mind of itself and the world".
Philosophers have attempted to clarify technical distinctions by using a jargon of their own. The Routledge Encyclopedia of Philosophy in 1998 defines consciousness as follows:
Consciousness—Philosophers have used the term 'consciousness' for four main topics: knowledge in general, intentionality, introspection (and the knowledge it specifically generates) and phenomenal experience... Something within one's mind is 'introspectively conscious' just in case one introspects it (or is poised to do so). Introspection is often thought to deliver one's primary knowledge of one's mental life. An experience or other mental entity is 'phenomenally conscious' just in case there is 'something it is like' for one to have it. The clearest examples are: perceptual experience, such as tastings and seeings; bodily-sensational experiences, such as those of pains, tickles and itches; imaginative experiences, such as those of one's own actions or perceptions; and streams of thought, as in the experience of thinking 'in words' or 'in images'. Introspection and phenomenality seem independent, or dissociable, although this is controversial.
In philosophy before the 20th century, consciousness as a phenomenon was the 'inner world' of 'one's own mind', and introspection was the mind "attending to" itself, an activity seemingly distinct from that of perceiving the 'outer world' and its physical phenomena. In 1892 William James noted the distinction along with doubts about the "inward" character of the mind:
'Things' have been doubted, but thoughts and feelings have never been doubted. The outer world, but never the inner world, has been denied. Everyone assumes that we have direct introspective acquaintance with our thinking activity as such, with our consciousness as something inward and contrasted with the outer objects which it knows. Yet I must confess that for my part I cannot feel sure of this conclusion. ... It seems as if consciousness as an inner activity were rather a postulate than a sensibly given fact...
By the 1960s, for many philosophers and psychologists who talked about consciousness, the word no longer meant the 'inner world' but an indefinite, large category called awareness, as in the following example:
It is difficult for modern Western man to grasp that the Greeks really had no concept of consciousness in that they did not class together phenomena as varied as problem solving, remembering, imagining, perceiving, feeling pain, dreaming, and acting on the grounds that all these are manifestations of being aware or being conscious.
Many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness. In The Macmillan Dictionary of Psychology (1989 edition), Stuart Sutherland emphasized external awareness, and expressed a skeptical attitude more than a definition:
Consciousness—The having of perceptions, thoughts, and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means. Many fall into the trap of equating consciousness with self-consciousness—to be conscious it is only necessary to be aware of the external world. Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it has evolved. Nothing worth reading has been written on it.
Max Velmans noted, as of 2009, that there was a deep level of "confusion and internal division" among experts about the phenomenon of consciousness, because researchers lacked "a sufficiently well-specified use of the term...to agree that they are investigating the same thing". Within the "modern consciousness studies" community the technical phrase 'phenomenal consciousness' is a common synonym for all forms of awareness, or simply 'experience', without differentiating between inner and outer, or between higher and lower types. Using 'awareness', however, as a definition or synonym of consciousness is not a simple matter:
If awareness of the environment . . . is the criterion of consciousness, then even the protozoans are conscious. If awareness of awareness is required, then it is doubtful whether the great apes and human infants are conscious.
Many philosophers have argued that consciousness is a unitary concept that is understood by the majority of people despite the difficulty philosophers have had defining it. Velmans proposed that the "everyday understanding of consciousness" uncontroversially "refers to experience itself rather than any particular thing that we observe or experience" and he added that consciousness "is [therefore] exemplified by all the things that we observe or experience", whether thoughts, feelings, or perceptions.
Velmans argued additionally that "pre-existing theoretical commitments" to competing explanations of consciousness might be a source of bias. With advances in brain research, "the presence or absence of experienced phenomena" of any kind underlies the work of those neuroscientists who seek "to analyze the precise relation of conscious phenomenology to its associated information processing" in the brain. This neuroscientific goal, to find the "neural correlates of consciousness" (NCC), begins with a theoretical commitment to the neurological origin of all "experienced phenomena" whether inner or outer. However, the easiest 'content of consciousness' to be so analyzed is "the experienced three-dimensional world (the phenomenal world) beyond the body surface", and most consciousness research since the 1990s, perhaps because of bias, has focused on processes of external perception.
By contrast, a cognitive science point of view — with an inter-disciplinary perspective involving fields such as psychology, linguistics and anthropology — requires no agreed definition of 'consciousness' but studies the interaction of many processes besides perception, for example certain pragmatic issues such as the feeling of agency and the effects of regret and action on 'self-experience' of one's own body or social identity.
Julian Jaynes, from a history of psychology perspective, rejected popular but "superficial views of consciousness" especially those which equate it with "that vaguest of terms, experience". In 1976 he insisted that if not for introspection, which for decades had been ignored or taken for granted rather than explained, there could be no "conception of what consciousness is" and in 1990, he reaffirmed the traditional idea of the phenomenon called 'consciousness', writing that "its denotative definition is, as it was for Descartes, Locke, and Hume, what is introspectable". Jaynes saw consciousness as an important but small part of human mentality, and he asserted: "there can be no progress in the science of consciousness until ... what is introspectable [is] sharply distinguished" from the unconscious processes of cognition such as perception, reactive awareness and attention, and automatic forms of learning, problem-solving and decision-making.
Some have argued that we should eliminate the concept from our understanding of the mind, a position known as consciousness semanticism.
In medicine, a "level of consciousness" terminology is used to describe a patient's arousal and responsiveness, which can be seen as a continuum of states ranging from full alertness and comprehension, through disorientation, delirium, loss of meaningful communication, and finally loss of movement in response to painful stimuli. Issues of practical concern include how the level of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted. The degree or level of consciousness is measured by standardized behavior observation scales such as the Glasgow Coma Scale.
Most writers on the philosophy of consciousness have been concerned with defending a particular point of view, and have organized their material accordingly. For surveys, the most common approach is to follow a historical path by associating stances with the philosophers who are most strongly associated with them, for example, Descartes, Locke, Kant, etc. An alternative is to organize philosophical stances according to basic issues.
Philosophers differ from non-philosophers in their intuitions about what consciousness is. While most people have a strong intuition for the existence of what they refer to as consciousness, skeptics argue that this intuition is false, either because the concept of consciousness is intrinsically incoherent, or because our intuitions about it are based in illusions. Gilbert Ryle, for example, argued that traditional understanding of consciousness depends on a Cartesian dualist outlook that improperly distinguishes between mind and body, or between mind and world. He proposed that we speak not of minds, bodies, and the world, but of individuals, or persons, acting in the world. Thus, by speaking of "consciousness" we end up misleading ourselves by thinking that there is any sort of thing as consciousness separated from behavioral and linguistic understandings.
Ned Block argued that discussions on consciousness often failed to properly distinguish phenomenal (P-consciousness) from access (A-consciousness), though these terms had been used before Block. P-consciousness, according to Block, is raw experience: it is moving, colored forms, sounds, sensations, emotions and feelings with our bodies and responses at the center. These experiences, considered independently of any impact on behavior, are called qualia. A-consciousness, on the other hand, is the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior. So, when we perceive, information about what we perceive is access conscious; when we introspect, information about our thoughts is access conscious; when we remember, information about the past is access conscious, and so on. Although some philosophers, such as Daniel Dennett, have disputed the validity of this distinction, others have broadly accepted it. David Chalmers has argued that A-consciousness can in principle be understood in mechanistic terms, but that understanding P-consciousness is much more challenging: he calls this the hard problem of consciousness.
Some philosophers believe that Block's two types of consciousness are not the end of the story. William Lycan, for example, argued in his book Consciousness and Experience that at least eight clearly distinct types of consciousness can be identified (organism consciousness; control consciousness; consciousness of; state/event consciousness; reportability; introspective consciousness; subjective consciousness; self-consciousness)—and that even this list omits several more obscure forms.
There is also debate over whether or not A-consciousness and P-consciousness always coexist or whether they can exist separately. Although P-consciousness without A-consciousness is more widely accepted, there have been some hypothetical examples of A without P. Block, for instance, suggests the case of a "zombie" that is computationally identical to a person but without any subjectivity. However, he remains somewhat skeptical, concluding: "I don't know whether there are any actual cases of A-consciousness without P-consciousness, but I hope I have illustrated their conceptual possibility."
Sam Harris observes: "At the level of your experience, you are not a body of cells, organelles, and atoms; you are consciousness and its ever-changing contents". Seen in this way, consciousness is a subjectively experienced, ever-present field in which things (the contents of consciousness) come and go.
Christopher Tricker argues that this field of consciousness is symbolized by the mythical bird that opens the Daoist classic the Zhuangzi. This bird's name is Of a Flock (peng 鵬), yet its back is countless thousands of miles across and its wings are like clouds arcing across the heavens. "Like Of a Flock, whose wings arc across the heavens, the wings of your consciousness span to the horizon. At the same time, the wings of every other being's consciousness span to the horizon. You are of a flock, one bird among kin."
Mental processes (such as consciousness) and physical processes (such as brain events) seem to be correlated; however, the specific nature of the connection is unknown.
The first influential philosopher to discuss this question specifically was Descartes, and the answer he gave is known as Cartesian dualism. Descartes proposed that consciousness resides within an immaterial domain he called res cogitans (the realm of thought), in contrast to the domain of material things, which he called res extensa (the realm of extension). He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called the pineal gland.
Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed. However, no alternative solution has gained general acceptance. Proposed solutions can be divided broadly into two categories: dualist solutions that maintain Descartes's rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; and monist solutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism are substance dualism (which holds that the mind is formed of a distinct type of substance not governed by the laws of physics) and property dualism (which holds that the laws of physics are universally valid but cannot be used to explain the mind). The three main types of monism are physicalism (which holds that the mind consists of matter organized in a particular way), idealism (which holds that only thought or experience truly exists, and matter is merely an illusion), and neutral monism (which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these schools of thought.
Since the dawn of Newtonian science with its vision of simple mechanical principles governing the entire universe, some philosophers have been tempted by the idea that consciousness could be explained in purely physical terms. The first influential writer to propose such an idea explicitly was Julien Offray de La Mettrie, in his book Man a Machine (L'homme machine). His arguments, however, were very abstract. The most influential modern physical theories of consciousness are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman and Antonio Damasio, and by philosophers such as Daniel Dennett, seek to explain consciousness in terms of neural events occurring within the brain. Many other neuroscientists, such as Christof Koch, have explored the neural basis of consciousness without attempting to frame all-encompassing global theories. At the same time, computer scientists working in the field of artificial intelligence have pursued the goal of creating digital computer programs that can simulate or embody consciousness.
A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, but that quantum theory may provide the missing ingredients. Several theorists have therefore proposed quantum mind (QM) theories of consciousness. Notable theories falling into this category include the holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory formulated by Stuart Hameroff and Roger Penrose. Some of these QM theories offer descriptions of phenomenal consciousness, as well as QM interpretations of access consciousness. None of the quantum mechanical theories have been confirmed by experiment. Recent publications by G. Guerreshi, J. Cia, S. Popescu, and H. Briegel could falsify proposals such as those of Hameroff, which rely on quantum entanglement in protein. At the present time many scientists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing.
Apart from the general question of the "hard problem" of consciousness (which is, roughly speaking, the question of how mental experience can arise from a physical basis), a more specialized question is how to square the subjective notion that we are in control of our decisions (at least in some small measure) with the customary view of causality that subsequent events are caused by prior events. The topic of free will is the philosophical and scientific examination of this conundrum.
Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. But if consciousness is subjective and not visible from the outside, why do the vast majority of people believe that other people are conscious, but rocks and trees are not? This is called the problem of other minds. It is particularly acute for people who believe in the possibility of philosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness. Related issues have also been studied extensively by Greg Littmann of the University of Illinois, and by Colin Allen (a professor at the University of Pittsburgh) regarding the literature and research studying artificial intelligence in androids.
The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior; we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do. There are, however, a variety of problems with that explanation. For one thing, it seems to violate the principle of parsimony, by postulating an invisible entity that is not necessary to explain what we observe. Some philosophers, such as Daniel Dennett in a research paper titled "The Unimagined Preposterousness of Zombies", argue that people who give this explanation do not really understand what they are saying. More broadly, philosophers who do not accept the possibility of zombies generally believe that consciousness is reflected in behavior (including verbal behavior), and that we attribute consciousness on the basis of behavior. A more straightforward way of saying this is that we attribute experiences to people because of what they can do, including the fact that they can tell us about their experiences.
For many decades, consciousness as a research topic was avoided by the majority of mainstream scientists, because of a general feeling that a phenomenon defined in subjective terms could not properly be studied using objective experimental methods. In 1975 George Mandler published an influential psychological study which distinguished between slow, serial, and limited conscious processes and fast, parallel and extensive unconscious ones. The Science and Religion Forum 1984 annual conference, 'From Artificial Intelligence to Human Consciousness' identified the nature of consciousness as a matter for investigation; Donald Michie was a keynote speaker. Starting in the 1980s, an expanding community of neuroscientists and psychologists have associated themselves with a field called Consciousness Studies, giving rise to a stream of experimental work published in books, journals such as Consciousness and Cognition, Frontiers in Consciousness Research, Psyche, and the Journal of Consciousness Studies, along with regular conferences organized by groups such as the Association for the Scientific Study of Consciousness and the Society for Consciousness Studies.
Modern medical and psychological investigations into consciousness are based on psychological experiments (including, for example, the investigation of priming effects using subliminal stimuli), and on case studies of alterations in consciousness produced by trauma, illness, or drugs. Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences that are reported by human subjects; the second makes use of the concept of consciousness that has been developed by neurologists and other medical professionals who deal with patients whose behavior is impaired. In either case, the ultimate goals are to develop techniques for assessing consciousness objectively in humans as well as other animals, and to understand the neural and psychological mechanisms that underlie it.
Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition. In the majority of experiments that are specifically about consciousness, the subjects are human, and the criterion used is verbal report: in other words, subjects are asked to describe their experiences, and their descriptions are treated as observations of the contents of consciousness. For example, subjects who stare continuously at a Necker cube usually report that they experience it "flipping" between two 3D configurations, even though the stimulus itself remains the same. The objective is to understand the relationship between the conscious awareness of stimuli (as indicated by verbal report) and the effects the stimuli have on brain activity and behavior. In several paradigms, such as the technique of response priming, the behavior of subjects is clearly influenced by stimuli for which they report no awareness, and suitable experimental manipulations can lead to increasing priming effects despite decreasing prime identification (double dissociation).
Verbal report is widely considered to be the most reliable indicator of consciousness, but it raises a number of issues. For one thing, if verbal reports are treated as observations, akin to observations in other branches of science, then the possibility arises that they may contain errors—but it is difficult to make sense of the idea that subjects could be wrong about their own experiences, and even more difficult to see how such an error could be detected. Daniel Dennett has argued for an approach he calls heterophenomenology, which means treating verbal reports as stories that may or may not be true, but his ideas about how to do this have not been widely adopted. Another issue with verbal report as a criterion is that it restricts the field of study to humans who have language: this approach cannot be used to study consciousness in other species, pre-linguistic children, or people with types of brain damage that impair language. As a third issue, philosophers who dispute the validity of the Turing test may feel that it is possible, at least in principle, for verbal report to be dissociated from consciousness entirely: a philosophical zombie may give detailed verbal reports of awareness in the absence of any genuine awareness.
Although verbal report is in practice the "gold standard" for ascribing consciousness, it is not the only possible criterion. In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent. The scientific literature regarding the neural bases of arousal and purposeful movement is very extensive. Their reliability as indicators of consciousness is disputed, however, due to numerous studies showing that alert human subjects can be induced to behave purposefully in a variety of ways in spite of reporting a complete lack of awareness. Studies of the neuroscience of free will have also shown that the experiences that people report when they behave purposefully sometimes do not correspond to their actual behaviors or to the patterns of electrical activity recorded from their brains.
Another approach applies specifically to the study of self-awareness, that is, the ability to distinguish oneself from others. In the 1970s Gordon Gallup developed an operational test for self-awareness, known as the mirror test. The test examines whether animals are able to differentiate between seeing themselves in a mirror versus seeing other animals. The classic example involves placing a spot of coloring on the skin or fur near the individual's forehead and seeing if they attempt to remove it or at least touch the spot, thus indicating that they recognize that the individual they are seeing in the mirror is themselves. Humans (older than 18 months) and other great apes, bottlenose dolphins, orcas, pigeons, European magpies and elephants have all been observed to pass this test.
A major part of the scientific literature on consciousness consists of studies that examine the relationship between the experiences reported by subjects and the activity that simultaneously takes place in their brains—that is, studies of the neural correlates of consciousness. The hope is to find activity in a particular part of the brain, or a particular pattern of global brain activity, that is strongly predictive of conscious awareness. Several brain imaging techniques, such as EEG and fMRI, have been used for physical measures of brain activity in these studies.
Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience. Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.
A number of studies have shown that activity in primary sensory areas of the brain is not sufficient to produce consciousness: it is possible for subjects to report a lack of awareness even when areas such as the primary visual cortex (V1) show clear electrical responses to a stimulus. Higher brain areas are seen as more promising, especially the prefrontal cortex, which is involved in a range of higher cognitive functions collectively known as executive functions. There is substantial evidence that a "top-down" flow of neural activity (i.e., activity propagating from the frontal cortex to sensory areas) is more predictive of conscious awareness than a "bottom-up" flow of activity. The prefrontal cortex is not the only candidate area, however: studies by Nikos Logothetis and his colleagues have shown, for example, that visually responsive neurons in parts of the temporal lobe reflect the visual perception in the situation when conflicting visual images are presented to different eyes (i.e., bistable percepts during binocular rivalry). Furthermore, top-down feedback from higher to lower visual brain areas may be weaker or absent in the peripheral visual field, as suggested by some experimental data and theoretical arguments; nevertheless humans can perceive visual inputs in the peripheral visual field arising from bottom-up V1 neural activities. Meanwhile, bottom-up V1 activities for the central visual fields can be vetoed, and thus made invisible to perception, by the top-down feedback, when these bottom-up signals are inconsistent with the brain's internal model of the visual world.
Modulation of neural responses may correlate with phenomenal experiences. In contrast to the raw electrical responses that do not correlate with consciousness, the modulation of these responses by other stimuli correlates surprisingly well with an important aspect of consciousness: namely with the phenomenal experience of stimulus intensity (brightness, contrast). In the research group of Danko Nikolić it has been shown that some of the changes in the subjectively perceived brightness correlated with the modulation of firing rates while others correlated with the modulation of neural synchrony. An fMRI investigation suggested that these findings were strictly limited to the primary visual areas. This indicates that, in the primary visual areas, changes in firing rates and synchrony can be considered as neural correlates of qualia—at least for some types of qualia.
In 2013, the perturbational complexity index (PCI) was proposed, a measure of the algorithmic complexity of the electrophysiological response of the cortex to transcranial magnetic stimulation. This measure was shown to be higher in individuals that are awake, in REM sleep or in a locked-in state than in those who are in deep sleep or in a vegetative state, making it potentially useful as a quantitative assessment of consciousness states.
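The PCI rests on the Lempel-Ziv-style compressibility of the binarized cortical response: regular, stereotyped responses compress well, while differentiated ones do not. The sketch below shows only that core idea, using a simplified phrase-counting variant of Lempel-Ziv parsing; the published index additionally involves source modeling, statistical thresholding of the evoked response and normalization, none of which is attempted here.

```python
def lz_phrase_count(binary: str) -> int:
    """Count distinct phrases in a greedy left-to-right parsing of a bit string.

    A simple stand-in for Lempel-Ziv complexity: regular signals parse into
    few phrases, irregular signals into many.
    """
    phrases: set = set()
    current = ""
    for bit in binary:
        current += bit
        if current not in phrases:  # new phrase found; start collecting anew
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

# Phrase counts rise with the irregularity of the (toy) binarized response:
print(lz_phrase_count("0000000000000000"))  # 6 - flat response
print(lz_phrase_count("0101010101010101"))  # 7 - periodic response
print(lz_phrase_count("0110100110010110"))  # 8 - less repetitive response
```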
Assuming that not only humans but even some non-mammalian species are conscious, a number of evolutionary approaches to the problem of neural correlates of consciousness open up. For example, assuming that birds are conscious—a common assumption among neuroscientists and ethologists due to the extensive cognitive repertoire of birds—there are comparative neuroanatomical ways to validate some of the principal, currently competing, mammalian consciousness–brain theories. The rationale for such a comparative study is that the avian brain deviates structurally from the mammalian brain. So how similar are they? What homologs can be identified? The general conclusion of the study by Butler et al. is that some of the major theories for the mammalian brain also appear to be valid for the avian brain. The structures assumed to be critical for consciousness in mammalian brains have homologous counterparts in avian brains. Thus the main portions of the theories of Crick and Koch, Edelman and Tononi, and Cotterill seem to be compatible with the assumption that birds are conscious. Edelman also differentiates between what he calls primary consciousness (which is a trait shared by humans and non-human animals) and higher-order consciousness as it appears in humans alone along with human language capacity. Certain aspects of the three theories, however, seem less easy to apply to the hypothesis of avian consciousness. For instance, the suggestion by Crick and Koch that layer 5 neurons of the mammalian brain have a special role seems difficult to apply to the avian brain, since the avian homologs have a different morphology. Likewise, the theory of Eccles seems incompatible, since a structural homolog/analogue to the dendron has not been found in avian brains. The assumption of an avian consciousness also brings the reptilian brain into focus. The reason is the structural continuity between avian and reptilian brains, meaning that the phylogenetic origin of consciousness may be earlier than suggested by many leading neuroscientists.
Joaquin Fuster of UCLA has argued that the prefrontal cortex, along with Wernicke's and Broca's areas, is of particular importance to the development of the human language capacities that are neuro-anatomically necessary for the emergence of higher-order consciousness in humans.
A study in 2016 looked at lesions in specific areas of the brainstem that were associated with coma and vegetative states. A small region of the rostral dorsolateral pontine tegmentum in the brainstem was suggested to drive consciousness through functional connectivity with two cortical regions, the left ventral anterior insular cortex, and the pregenual anterior cingulate cortex. These three regions may work together as a triad to maintain consciousness.
A considerable amount of research is being carried out on the chemical basis of thought formation, storage, memory consolidation and the formation of logical thought processes. In 2001 Atta-ur-Rahman proposed that the folding of glycoproteins by intermolecular or intramolecular hydrogen bonding may be the key process involved in the formation of partly folded patterns for memory storage. The hydrogen bonding protein patterns hypothesis (HBPPH) proposes the formation of hydrogen bonds between hydroxyl groups of sugar moieties present in the glycoproteins with hydroxyl (or NH) groups of other sugar moieties or biomolecules, leading to the creation of certain partly folded protein patterns. This provides a reasonable mechanism by which the brain may be able to gather and store information by the construction of intermolecular and intramolecular networks of folded glycoproteins. Support for partly folded proteins being involved in memory processes has come from recent research in the field. Two possible mechanisms through which such partly folded protein patterns may be correlated, leading to logical thought and to consciousness, are via quantum effects, or via an overlap of molecular vibrations arising from these patterns. The Nobel Laureate Roger Penrose and others have also proposed that quantum oscillations may be involved in consciousness.
A wide range of empirical theories of consciousness have been proposed. Adrian Doerig and colleagues list 13 notable theories, while Anil Seth and Tim Bayne list 22 notable theories.
Global workspace theory (GWT) is a cognitive architecture and theory of consciousness proposed by the cognitive psychologist Bernard Baars in 1988. Baars explains the theory with the metaphor of a theater, with conscious processes represented by an illuminated stage. This theater integrates inputs from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit "audience"). The theory has since been expanded upon by other scientists including cognitive neuroscientist Stanislas Dehaene and Lionel Naccache.
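Baars's theater metaphor can be made concrete with a toy competition-and-broadcast loop. This is an illustration of the metaphor only, not Baars's or Dehaene's actual model; the specialist names and the random salience bids are invented for the example.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Specialist:
    """An unconscious, special-purpose processor competing for the workspace."""
    name: str
    inbox: list = field(default_factory=list)

    def propose(self) -> tuple:
        # Each specialist bids a salience value for its current content.
        return random.random(), f"{self.name}-content"

    def receive(self, content: str) -> None:
        # Broadcast contents become globally available to every specialist.
        self.inbox.append(content)

def workspace_cycle(specialists: list) -> str:
    # Competition: the most salient proposal wins the limited-capacity "stage"...
    _salience, content = max(s.propose() for s in specialists)
    # ...and is broadcast back to the whole "audience" of specialists.
    for s in specialists:
        s.receive(content)
    return content

audience = [Specialist(n) for n in ("vision", "hearing", "memory")]
print(workspace_cycle(audience))  # e.g. 'hearing-content'
```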
Integrated information theory (IIT) postulates that consciousness resides in the information being processed and arises once the information reaches a certain level of complexity. Proponents of this model suggest that it may provide a physical grounding for consciousness in neurons, as they provide the mechanism by which information is integrated.
Orchestrated objective reduction (Orch OR) postulates that consciousness originates at the quantum level inside neurons. The mechanism is held to be a quantum process called objective reduction that is orchestrated by cellular structures called microtubules. However the details of the mechanism would go beyond current quantum theory.
In 2011, Graziano and Kastner proposed the "attention schema" theory of awareness. In that theory, specific cortical areas, notably in the superior temporal sulcus and the temporo-parietal junction, are used to build the construct of awareness and attribute it to other people. The same cortical machinery is also used to attribute awareness to oneself. Damage to these cortical regions can lead to deficits in consciousness such as hemispatial neglect. In the attention schema theory, the value of explaining the feature of awareness and attributing it to a person is to gain a useful predictive model of that person's attentional processing. Attention is a style of information processing in which a brain focuses its resources on a limited set of interrelated signals. Awareness, in this theory, is a useful, simplified schema that represents attentional states. To be aware of X is explained by constructing a model of one's attentional focus on X.
The entropic brain is a theory of conscious states informed by neuroimaging research with psychedelic drugs. The theory suggests that, in primary states such as rapid eye movement (REM) sleep, early psychosis and the action of psychedelic drugs, the brain is in a relatively disordered state, and that normal waking consciousness constrains some of this freedom and makes possible metacognitive functions such as internal self-administered reality testing and self-awareness. Criticism has included questioning whether the theory has been adequately tested.
In 2017, work by David Rudrauf and colleagues, including Karl Friston, applied the active inference paradigm to consciousness, a model of how sensory data is integrated with priors in a process of projective transformation. The authors argue that, while their model identifies a key relationship between computation and phenomenology, it does not completely solve the hard problem of consciousness or completely close the explanatory gap.
Opinions are divided as to where in biological evolution consciousness emerged and about whether or not consciousness has any survival value. Some argue that consciousness is a byproduct of evolution. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles. Other authors date the origins of consciousness to the first animals with nervous systems or early vertebrates in the Cambrian over 500 million years ago. Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness. Each of these scenarios raises the question of the possible survival value of consciousness.
Thomas Henry Huxley defends in an essay titled On the Hypothesis that Animals are Automata, and its History an epiphenomenalist theory of consciousness according to which consciousness is a causally inert effect of neural activity—"as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". To this William James objects in his essay Are We Automata? by stating an evolutionary argument for mind-brain interaction implying that if the preservation and development of consciousness in the biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes, but has had a survival value itself; and it could only have had this if it had been efficacious. Karl Popper develops a similar evolutionary argument in the book The Self and Its Brain.
Regarding the primary function of conscious processing, a recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing that would otherwise be independent. This has been called the integration consensus. Another example is Gerald Edelman's dynamic core hypothesis, which puts emphasis on reentrant connections that reciprocally link areas of the brain in a massively parallel manner. Edelman also stresses the importance of the evolutionary emergence of higher-order consciousness in humans from the historically older trait of primary consciousness, which humans share with non-human animals (see Neural correlates section above). These theories of integrative function present solutions to two classic problems associated with consciousness: differentiation and unity. They show how our conscious experience can discriminate between a virtually unlimited number of different possible scenes and details (differentiation) because it integrates those details from our sensory systems, while the integrative nature of consciousness in this view easily explains how our experience can seem unified as one whole despite all of these individual parts. However, it remains unspecified which kinds of information are integrated in a conscious manner and which kinds can be integrated without consciousness. Nor is it explained what specific causal role conscious integration plays, nor why the same functionality cannot be achieved without consciousness. Obviously not all kinds of information are capable of being disseminated consciously (e.g., neural activity related to vegetative functions, reflexes, unconscious motor programs, low-level perceptual analyses, etc.) and many kinds of information can be disseminated and combined with other kinds without consciousness, as in intersensory interactions such as the ventriloquism effect. Hence it remains unclear why any of it is conscious. For a review of the differences between conscious and unconscious integrations, see the article by Ezequiel Morsella.
As noted earlier, even among writers who consider consciousness to be well-defined, there is widespread dispute about which animals other than humans can be said to possess it. Edelman has described this distinction as that of humans possessing higher-order consciousness while sharing the trait of primary consciousness with non-human animals (see previous paragraph). Thus, any examination of the evolution of consciousness is faced with great difficulties. Nevertheless, some writers have argued that consciousness can be viewed from the standpoint of evolutionary biology as an adaptation in the sense of a trait that increases fitness. In his article "Evolution of consciousness", John Eccles argued that special anatomical and physical properties of the mammalian cerebral cortex gave rise to consciousness ("[a] psychon ... linked to [a] dendron through quantum physics"). Bernard Baars proposed that once in place, this "recursive" circuitry may have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms. Peter Carruthers has put forth one such potential adaptive advantage gained by conscious creatures by suggesting that consciousness allows an individual to make distinctions between appearance and reality. This ability would enable a creature to recognize the likelihood that their perceptions are deceiving them (e.g. that water in the distance may be a mirage) and behave accordingly, and it could also facilitate the manipulation of others by recognizing how things appear to them for both cooperative and devious ends.
Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes. No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between function F being performed by conscious organism O and non-conscious organism O*, it is unclear what adaptive advantage consciousness could provide. As a result, an exaptive explanation of consciousness has gained favor with some theorists who posit that consciousness did not evolve as an adaptation but was an exaptation arising as a consequence of other developments, such as increases in brain size or cortical rearrangement. Consciousness in this sense has been compared to the blind spot in the retina, which is not an adaptation of the retina but instead just a by-product of the way the retinal axons were wired. Several scholars, including Pinker, Chomsky, Edelman, and Luria, have indicated the importance of the emergence of human language as a regulative mechanism of learning and memory in the context of the development of higher-order consciousness (see Neural correlates section above).
There are some brain states in which consciousness seems to be absent, including dreamless sleep or coma. There are also a variety of circumstances that can change the relationship between the mind and the world in less drastic ways, producing what are known as altered states of consciousness. Some altered states occur naturally; others can be produced by drugs or brain damage. Altered states can be accompanied by changes in thinking, disturbances in the sense of time, feelings of loss of control, changes in emotional expression, alterations in body image and changes in meaning or significance.
The two most widely accepted altered states are sleep and dreaming. Although dream sleep and non-dream sleep appear very similar to an outside observer, each is associated with a distinct pattern of brain activity, metabolic activity, and eye movement; each is also associated with a distinct pattern of experience and cognition. During ordinary non-dream sleep, people who are awakened report only vague and sketchy thoughts, and their experiences do not cohere into a continuous narrative. During dream sleep, in contrast, people who are awakened report rich and detailed experiences in which events form a continuous progression, which may however be interrupted by bizarre or fantastic intrusions. Thought processes during the dream state frequently show a high level of irrationality. Both dream and non-dream states are associated with severe disruption of memory: it usually disappears in seconds during the non-dream state, and in minutes after awakening from a dream unless actively refreshed.
Research on the effects of partial epileptic seizures on consciousness found that patients who have partial epileptic seizures experience altered states of consciousness. In partial epileptic seizures, consciousness is impaired or lost while some aspects of consciousness, often automated behaviors, remain intact. Studies that measured the qualitative features of partial epileptic seizures found that patients exhibited an increase in arousal and became absorbed in the experience of the seizure, followed by difficulty in focusing and shifting attention.
A variety of psychoactive drugs, including alcohol, have notable effects on consciousness. These range from a simple dulling of awareness produced by sedatives, to increases in the intensity of sensory qualities produced by stimulants, cannabis, empathogens–entactogens such as MDMA ("Ecstasy"), or most notably by the class of drugs known as psychedelics. LSD, mescaline, psilocybin, dimethyltryptamine, and others in this group can produce major distortions of perception, including hallucinations; some users even describe their drug-induced experiences as mystical or spiritual in quality. The brain mechanisms underlying these effects are not as well understood as those induced by use of alcohol, but there is substantial evidence that alterations in the brain system that uses the chemical neurotransmitter serotonin play an essential role.
There has been some research into physiological changes in yogis and people who practice various techniques of meditation. Some research with brain waves during meditation has reported differences between those corresponding to ordinary relaxation and those corresponding to meditation. It has been disputed, however, whether there is enough evidence to count these as physiologically distinct states of consciousness.
The most extensive study of the characteristics of altered states of consciousness was made by psychologist Charles Tart in the 1960s and 1970s. Tart analyzed a state of consciousness as made up of a number of component processes, including exteroception (sensing the external world); interoception (sensing the body); input-processing (seeing meaning); emotions; memory; time sense; sense of identity; evaluation and cognitive processing; motor output; and interaction with the environment. Each of these, in his view, could be altered in multiple ways by drugs or other manipulations. The components that Tart identified have not, however, been validated by empirical studies. Research in this area has not yet reached firm conclusions, but a recent questionnaire-based study identified eleven significant factors contributing to drug-induced states of consciousness: experience of unity; spiritual experience; blissful state; insightfulness; disembodiment; impaired control and cognition; anxiety; complex imagery; elementary imagery; audio-visual synesthesia; and changed meaning of percepts.
The medical approach to consciousness is scientifically oriented. It derives from a need to treat people whose brain function has been impaired as a result of disease, brain damage, toxins, or drugs. In medicine, conceptual distinctions are considered useful to the degree that they can help to guide treatments. The medical approach focuses mostly on the amount of consciousness a person has: in medicine, consciousness is assessed as a "level" ranging from coma and brain death at the low end, to full alertness and purposeful responsiveness at the high end.
Consciousness is of concern to patients and physicians, especially neurologists and anesthesiologists. Patients may have disorders of consciousness or may need to be anesthetized for a surgical procedure. Physicians may perform consciousness-related interventions such as instructing the patient to sleep, administering general anesthesia, or inducing medical coma. Also, bioethicists may be concerned with the ethical implications of consciousness in medical cases of patients such as the Karen Ann Quinlan case, while neuroscientists may study patients with impaired consciousness in hopes of gaining information about how the brain works.
In medicine, consciousness is examined using a set of procedures known as neuropsychological assessment. There are two commonly used methods for assessing the level of consciousness of a patient: a simple procedure that requires minimal training, and a more complex procedure that requires substantial expertise. The simple procedure begins by asking whether the patient is able to move and react to physical stimuli. If so, the next question is whether the patient can respond in a meaningful way to questions and commands. If so, the patient is asked for name, current location, and current day and time. A patient who can answer all of these questions is said to be "alert and oriented times four" (sometimes denoted "A&Ox4" on a medical chart), and is usually considered fully conscious.
The more complex procedure is known as a neurological examination, and is usually carried out by a neurologist in a hospital setting. A formal neurological examination runs through a precisely delineated series of tests, beginning with tests for basic sensorimotor reflexes, and culminating with tests for sophisticated use of language. The outcome may be summarized using the Glasgow Coma Scale, which yields a number in the range 3–15, with a score of 3 to 8 indicating coma, and 15 indicating full consciousness. The Glasgow Coma Scale has three subscales, measuring the best motor response (ranging from "no motor response" to "obeys commands"), the best eye response (ranging from "no eye opening" to "eyes opening spontaneously") and the best verbal response (ranging from "no verbal response" to "fully oriented"). There is also a simpler pediatric version of the scale, for children too young to be able to use language.
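Because the total is simply the sum of the three subscales, the scoring arithmetic is easy to make concrete. The following is a minimal sketch in Python, with invented names and no claim to clinical fitness:

```python
# A minimal sketch of Glasgow Coma Scale arithmetic (illustrative names,
# not clinical software). Subscale ranges follow the description above:
# eye response 1-4, verbal response 1-5, motor response 1-6, total 3-15.

def glasgow_coma_score(eye: int, verbal: int, motor: int) -> int:
    """Sum the three subscale scores after validating their ranges."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("subscale score out of range")
    return eye + verbal + motor

def interpret(total: int) -> str:
    """Map a total onto the coarse categories mentioned above."""
    if total <= 8:
        return "coma"                  # scores of 3-8 indicate coma
    if total == 15:
        return "full consciousness"
    return "impaired consciousness"    # intermediate scores

# Example: eyes open to speech (3), confused speech (4), localizes pain (5).
print(interpret(glasgow_coma_score(3, 4, 5)))  # -> impaired consciousness
```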
In 2013, an experimental procedure was developed to measure degrees of consciousness: the brain is stimulated with a magnetic pulse, the resulting waves of electrical activity are recorded, and a consciousness score is derived from the complexity of that brain activity.
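The published measure, the perturbational complexity index, quantifies the algorithmic (Lempel-Ziv) compressibility of the binarized cortical response to the pulse. The sketch below shows the general shape of such a calculation; the simplified Lempel-Ziv parse, the normalization, and all names are illustrative assumptions rather than the authors' exact pipeline:

```python
# Illustrative sketch of a complexity-based consciousness score. This
# simplified Lempel-Ziv parse and normalization are assumptions for
# illustration, not the published analysis pipeline.
import numpy as np

def lempel_ziv_complexity(s: str) -> int:
    """Count phrases in a simple exhaustive-history Lempel-Ziv parse."""
    i, n, phrases = 0, len(s), 0
    while i < n:
        length = 1
        # Grow the phrase while it still repeats something already seen.
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

def complexity_score(binarized: np.ndarray) -> float:
    """Normalized complexity of a (channels x time) binary response."""
    flat = "".join(map(str, binarized.astype(int).ravel()))
    n = len(flat)
    # A random string of length n has complexity ~ n / log2(n), so this
    # normalization puts the score roughly on a 0-1 scale.
    return lempel_ziv_complexity(flat) * np.log2(n) / n

# Stand-in for a real, binarized stimulation-evoked EEG response:
rng = np.random.default_rng(0)
print(complexity_score(rng.random((8, 100)) > 0.5))
```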
Medical conditions that inhibit consciousness are considered disorders of consciousness. This category generally includes minimally conscious state and persistent vegetative state, but sometimes also includes the less severe locked-in syndrome and more severe chronic coma. Differential diagnosis of these disorders is an active area of biomedical research. Finally, brain death results in an irreversible disruption of consciousness. While other conditions may cause a moderate deterioration (e.g., dementia and delirium) or transient interruption (e.g., grand mal and petit mal seizures) of consciousness, they are not included in this category.
Medical experts increasingly view anosognosia as a disorder of consciousness. Anosognosia is a Greek-derived term meaning "unawareness of disease". This is a condition in which patients are disabled in some way, most commonly as a result of a stroke, but either misunderstand the nature of the problem or deny that there is anything wrong with them. The most frequently occurring form is seen in people who have experienced a stroke damaging the parietal lobe in the right hemisphere of the brain, giving rise to a syndrome known as hemispatial neglect, characterized by an inability to direct action or attention toward objects located to the left with respect to their bodies. Patients with hemispatial neglect are often paralyzed on the left side of the body, but sometimes deny being unable to move. When questioned about the obvious problem, the patient may avoid giving a direct answer, or may give an explanation that does not make sense. Patients with hemispatial neglect may also fail to recognize paralyzed parts of their bodies: one frequently mentioned case is of a man who repeatedly tried to throw his own paralyzed right leg out of the bed he was lying in, and when asked what he was doing, complained that somebody had put a dead leg into the bed with him. An even more striking type of anosognosia is Anton–Babinski syndrome, a rarely occurring condition in which patients become blind but claim to be able to see normally, and persist in this claim in spite of all evidence to the contrary.
Of the eight types of consciousness in the Lycan classification, some are detectable in utero and others develop years after birth. Psychologist and educator William Foulkes studied children's dreams and concluded that prior to the shift in cognitive maturation that humans experience during ages five to seven, children lack the Lockean consciousness that Lycan had labeled "introspective consciousness" and that Foulkes labels "self-reflection." In a 2020 paper, Katherine Nelson and Robyn Fivush use "autobiographical consciousness" to label essentially the same faculty, and agree with Foulkes on the timing of this faculty's acquisition. Nelson and Fivush contend that "language is the tool by which humans create a new, uniquely human form of consciousness, namely, autobiographical consciousness." Julian Jaynes had staked out these positions decades earlier. Citing the developmental steps that lead the infant to autobiographical consciousness, Nelson and Fivush point to the acquisition of "theory of mind," calling theory of mind "necessary for autobiographical consciousness" and defining it as "understanding differences between one's own mind and others' minds in terms of beliefs, desires, emotions and thoughts." They write, "The hallmark of theory of mind, the understanding of false belief, occurs ... at five to six years of age."
The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell humans about their experiences. Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals due to the fact that he believed only humans have a non-physical mind. Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and are often based on personal interactions with pets and other animals they have observed.
Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled What Is It Like to Be a Bat?. He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is like for the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself. Other thinkers, such as Douglas Hofstadter, dismiss this argument as incoherent. Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—Donald Griffin's 2001 book Animal Minds reviews a substantial portion of the evidence.
On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the University of Cambridge for the Francis Crick Memorial Conference, which dealt with consciousness in humans and pre-linguistic consciousness in nonhuman animals. After the conference, they signed, in the presence of Stephen Hawking, the 'Cambridge Declaration on Consciousness', which summarizes the most important findings:
"We decided to reach a consensus and make a statement directed to the public that is not scientific. It's obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society."
"Convergent evidence indicates that non-human animals ..., including all mammals and birds, and other creatures, ... have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors."
The idea of an artifact made conscious is an ancient theme of mythology, appearing for example in the Greek myth of Pygmalion, who carved a statue that was magically brought to life, and in medieval Jewish stories of the Golem, a magically animated homunculus built of clay. However, the possibility of actually constructing a conscious machine was probably first discussed by Ada Lovelace, in a set of notes written in 1842 about the Analytical Engine invented by Charles Babbage, a precursor (never built) to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way. She wrote:
It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. ... The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.
One of the most influential contributions to this question was an essay written in 1950 by pioneering computer scientist Alan Turing, titled Computing Machinery and Intelligence. Turing disavowed any interest in terminology, saying that even "Can machines think?" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the Turing test. To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay Turing discussed a variety of possible objections, and presented a counterargument to each of them. The Turing test is commonly cited in discussions of artificial intelligence as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious, while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious. A third group of scholars have argued that, with technological growth, once machines begin to display any substantial signs of human-like behavior, the dichotomy (of human consciousness compared to human-like consciousness) becomes passé, and issues of machine autonomy begin to prevail, as can already be observed in nascent form within contemporary industry and technology. Jürgen Schmidhuber argues that consciousness is the result of compression: as an agent sees representations of itself recurring in the environment, the compression of those representations can be called consciousness.
In a lively exchange over what has come to be referred to as "the Chinese room argument", John Searle sought to refute the claim of proponents of what he calls "strong artificial intelligence (AI)" that a computer program can be conscious, though he does agree with advocates of "weak AI" that computer programs can be formatted to "simulate" conscious states. His own view is that consciousness has subjective, first-person causal powers by being essentially intentional due to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational the way computer programs are. To make a Turing machine that speaks Chinese, Searle imagines a room with one monolingual English speaker (Searle himself, in fact), a rulebook that pairs each Chinese symbol input with the combination of Chinese symbols to be output, and boxes filled with Chinese symbols. In this case, the English speaker is acting as a computer and the rulebook as a program. Searle argues that with such a machine, he would be able to process the inputs to outputs perfectly without having any understanding of Chinese, nor having any idea what the questions and answers could possibly mean. If the experiment were done in English, since Searle knows English, he would be able to take questions and give answers without any algorithms for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test of answering the questions in both languages, but he is only conscious of what he is doing when he speaks English. Another way of putting the argument is to say that computer programs can pass the Turing test for processing the syntax of a language, but that syntax alone cannot lead to semantic meaning in the way strong AI advocates hoped.
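In computational terms, the room Searle describes is a lookup table: symbols in, symbols out, with no semantics anywhere in the loop. A toy sketch (the two rulebook entries here are invented) makes the point that well-formed output requires no understanding:

```python
# A toy "Chinese room": the rulebook pairs input symbol strings with output
# symbol strings, and the operator only matches and copies symbols. Nothing
# in the program represents what the symbols mean. (Entries are invented.)
RULEBOOK = {
    "你好吗?": "我很好,谢谢。",    # "How are you?" -> "Fine, thanks."
    "你会说中文吗?": "会。",       # "Do you speak Chinese?" -> "Yes."
}

def room(question: str) -> str:
    # Pure table lookup: syntax in, syntax out, no semantics anywhere.
    return RULEBOOK.get(question, "请再说一遍。")  # "Please repeat that."

print(room("你好吗?"))  # well-formed output, zero understanding inside
```

Scaling the table up changes nothing about the argument: however large the rulebook, the lookup itself never touches meaning.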
In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated. Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was "causal powers" of the sort that the brain has and that computers lack. But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test, but the robotic version, which requires grounding the robot's words in the robot's sensorimotor capacity to categorize and interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research on embodied cognition and situated cognition.
In 2014, Victor Argonov suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge of these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, not to refute, the existence of consciousness: a positive result proves that a machine is conscious, but a negative result proves nothing. For example, the absence of philosophical judgments may be caused by lack of the machine's intellect, not by absence of consciousness.
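The evidential asymmetry of such a test can be made explicit in a short sketch; everything here, from the function name to the boolean standing in for an actual evaluation of the machine's judgments, is hypothetical:

```python
# Hypothetical illustration of the one-sided logic of Argonov-style testing:
# a positive outcome counts as evidence, a negative outcome settles nothing.
from enum import Enum

class Verdict(Enum):
    CONSCIOUS = "judged conscious"
    INCONCLUSIVE = "no conclusion (may lack intellect, not consciousness)"

def argonov_style_test(produces_novel_qualia_judgments: bool) -> Verdict:
    # Assumed preconditions: no preloaded philosophy, no philosophical
    # discussions during learning, no models of other creatures in memory.
    if produces_novel_qualia_judgments:
        return Verdict.CONSCIOUS      # positive result taken as proof
    return Verdict.INCONCLUSIVE       # negative result proves nothing

print(argonov_style_test(False).value)
```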
William James is usually credited with popularizing the idea that human consciousness flows like a stream, in his Principles of Psychology of 1890.
According to James, the "stream of thought" is governed by five characteristics: every thought tends to be part of a personal consciousness; within each personal consciousness thought is always changing; within each personal consciousness thought is sensibly continuous; thought always appears to deal with objects independent of itself; and thought is interested in some parts of these objects to the exclusion of others.
A similar concept appears in Buddhist philosophy, expressed by the Sanskrit term Citta-saṃtāna, which is usually translated as mindstream or "mental continuum". Buddhist teachings describe consciousness as manifesting moment to moment as sense impressions and mental phenomena that are continuously changing. The teachings list six triggers that can result in the generation of different mental events. These triggers are input from the five senses (seeing, hearing, smelling, tasting or touch sensations), or a thought (relating to the past, present or the future) that happens to arise in the mind. The mental events generated as a result of these triggers are: feelings, perceptions and intentions/behaviour. The moment-by-moment manifestation of the mind-stream is said to happen in every person all the time. It even happens in a scientist who analyzes various phenomena in the world, or analyzes the material body including the organ brain. The manifestation of the mindstream is also described as being influenced by physical laws, biological laws, psychological laws, volitional laws, and universal laws. The purpose of the Buddhist practice of mindfulness is to understand the inherent nature of consciousness and its characteristics.
In the West, the primary impact of the idea has been on literature rather than science: "stream of consciousness as a narrative mode" means writing in a way that attempts to portray the moment-to-moment thoughts and experiences of a character. This technique perhaps had its beginnings in the monologs of Shakespeare's plays and reached its fullest development in the novels of James Joyce and Virginia Woolf, although it has also been used by many other noted writers.
Here, for example, is a passage from Joyce's Ulysses about the thoughts of Molly Bloom:
Yes because he never did a thing like that before as ask to get his breakfast in bed with a couple of eggs since the City Arms hotel when he used to be pretending to be laid up with a sick voice doing his highness to make himself interesting for that old faggot Mrs Riordan that he thought he had a great leg of and she never left us a farthing all for masses for herself and her soul greatest miser ever was actually afraid to lay out 4d for her methylated spirit telling me all her ailments she had too much old chat in her about politics and earthquakes and the end of the world let us have a bit of fun first God help the world if all the women were her sort down on bathingsuits and lownecks of course nobody wanted her to wear them I suppose she was pious because no man would look at her twice I hope Ill never be like her a wonder she didnt want us to cover our faces but she was a welleducated woman certainly and her gabby talk about Mr Riordan here and Mr Riordan there I suppose he was glad to get shut of her.
To most philosophers, the word "consciousness" connotes the relationship between the mind and the world. To writers on spiritual or religious topics, it frequently connotes the relationship between the mind and God, or the relationship between the mind and deeper truths that are thought to be more fundamental than the physical world.
The Canadian psychiatrist Richard Maurice Bucke, author of the 1901 book Cosmic Consciousness: A Study in the Evolution of the Human Mind, distinguished between three types of consciousness: 'Simple Consciousness', awareness of the body, possessed by many animals; 'Self Consciousness', awareness of being aware, possessed only by humans; and 'Cosmic Consciousness', awareness of the life and order of the universe, possessed only by humans who have attained "intellectual enlightenment or illumination".
Another thorough account of the spiritual approach is Ken Wilber's 1977 book The Spectrum of Consciousness, a comparison of western and eastern ways of thinking about the mind. Wilber described consciousness as a spectrum with ordinary awareness at one end, and more profound types of awareness at higher levels.
Other examples include the various levels of spiritual consciousness presented by Prem Saran Satsangi and Stuart Hameroff. | [
{
"paragraph_id": 0,
"text": "Consciousness, at its simplest, is awareness of internal and external existence. However, its nature has led to millennia of analyses, explanations and debate by philosophers, theologians, and all of science. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's \"inner life\", the world of introspection, of private thought, imagination and volition. Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not. The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Examples of the range of descriptions, definitions or explanations are: simple wakefulness, one's sense of selfhood or soul explored by \"looking within\"; being a metaphorical \"stream\" of contents, or being a mental state, mental event or mental process of the brain.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The earliest English language uses of \"conscious\" and \"consciousness\" date to the 1500s, but not with today's meanings. The English word \"conscious\" originally derived from the Latin conscius (con- \"together\" and scio \"to know\") which meant \"knowing with\" or \"having joint or common knowledge with another\". In its earliest uses in the 1500s, the English word \"conscious\" retained the meaning of the Latin conscius. For example, Thomas Hobbes in Leviathan wrote: \"Where two, or more men, know of one and the same fact, they are said to be Conscious of it one to another.\" There were also many occurrences in Latin writings of the phrase conscius sibi, which translates literally as \"knowing with oneself\", or in other words \"sharing knowledge with oneself about something\". This phrase has the figurative sense of \"knowing that one knows\", which is something like the modern English word \"conscious\", but it was rendered into English as \"conscious to oneself\" or \"conscious unto oneself\". For example, Archbishop Ussher wrote in 1613 of \"being so conscious unto myself of my great weakness\".",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "The origin of the modern concept of consciousness is often attributed to John Locke who defined consciousness in his Essay Concerning Human Understanding, published in 1690, as \"the perception of what passes in a man's own mind\". The essay strongly influenced 18th-century British philosophy, and Locke's definition appeared in Samuel Johnson's celebrated Dictionary (1755).",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "A related word was conscientia, which primarily means moral conscience. In the literal sense, \"conscientia\" means knowledge-with, that is, shared knowledge. The word first appears in Latin juridical texts by writers such as Cicero. Here, conscientia is the knowledge that a witness has of the deed of someone else. René Descartes (1596–1650) is generally taken to be the first philosopher to use conscientia in a way that does not fit this traditional meaning. Descartes used conscientia the way modern English speakers would use \"conscience\". In Search after Truth (Regulæ ad directionem ingenii ut et inquisitio veritatis per lumen naturale, Amsterdam 1701) he says \"conscience or internal testimony\" (conscientiâ, vel interno testimonio).",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "The French term conscience is defined roughly like English \"consciousness\" in the 1753 volume of Diderot and d'Alembert's Encyclopédie as \"the opinion or internal feeling that we ourselves have from what we do\".",
"title": "Etymology"
},
{
"paragraph_id": 6,
"text": "About forty meanings attributed to the term consciousness can be identified and categorized based on functions and experiences. The prospects for reaching any single, agreed-upon, theory-independent definition of consciousness appear remote.",
"title": "The problem of definition"
},
{
"paragraph_id": 7,
"text": "Scholars are divided as to whether Aristotle had a concept of consciousness. He does not use any single word or terminology that is clearly similar to the phenomenon or concept defined by John Locke. Victor Caston contends that Aristotle did have a concept more clearly similar to perceptual awareness.",
"title": "The problem of definition"
},
{
"paragraph_id": 8,
"text": "The modern dictionary definitions of the word consciousness evolved through several centuries and reflect a range of seemingly related meanings, with some differences that have been controversial, such as the distinction between 'inward awareness' and 'perception' of the physical world, or the distinction between 'conscious' and 'unconscious', or the notion of a \"mental entity\" or \"mental activity\" that is not physical.",
"title": "The problem of definition"
},
{
"paragraph_id": 9,
"text": "The common usage definitions of consciousness in Webster's Third New International Dictionary (1966 edition, Volume 1, page 482) are as follows:",
"title": "The problem of definition"
},
{
"paragraph_id": 10,
"text": "The Cambridge Dictionary defines consciousness as \"the state of understanding and realizing something.\" The Oxford Living Dictionary defines consciousness as \"The state of being aware of and responsive to one's surroundings.\", \"A person's awareness or perception of something.\" and \"The fact of awareness by the mind of itself and the world.\"",
"title": "The problem of definition"
},
{
"paragraph_id": 11,
"text": "Philosophers have attempted to clarify technical distinctions by using a jargon of their own. The Routledge Encyclopedia of Philosophy in 1998 defines consciousness as follows:",
"title": "The problem of definition"
},
{
"paragraph_id": 12,
"text": "Consciousness—Philosophers have used the term 'consciousness' for four main topics: knowledge in general, intentionality, introspection (and the knowledge it specifically generates) and phenomenal experience... Something within one's mind is 'introspectively conscious' just in case one introspects it (or is poised to do so). Introspection is often thought to deliver one's primary knowledge of one's mental life. An experience or other mental entity is 'phenomenally conscious' just in case there is 'something it is like' for one to have it. The clearest examples are: perceptual experience, such as tastings and seeings; bodily-sensational experiences, such as those of pains, tickles and itches; imaginative experiences, such as those of one's own actions or perceptions; and streams of thought, as in the experience of thinking 'in words' or 'in images'. Introspection and phenomenality seem independent, or dissociable, although this is controversial.",
"title": "The problem of definition"
},
{
"paragraph_id": 13,
"text": "In philosophy before the 20th century, consciousness as a phenomenon was the 'inner world' of 'one's own mind', and introspection was the mind \"attending to\" itself, an activity seemingly distinct from that of perceiving the 'outer world' and its physical phenomena. In 1892 William James noted the distinction along with doubts about the \"inward\" character of the mind:",
"title": "The problem of definition"
},
{
"paragraph_id": 14,
"text": "'Things' have been doubted, but thoughts and feelings have never been doubted. The outer world, but never the inner world, has been denied. Everyone assumes that we have direct introspective acquaintance with our thinking activity as such, with our consciousness as something inward and contrasted with the outer objects which it knows. Yet I must confess that for my part I cannot feel sure of this conclusion. ... It seems as if consciousness as an inner activity were rather a postulate than a sensibly given fact...",
"title": "The problem of definition"
},
{
"paragraph_id": 15,
"text": "By the 1960s, for many philosophers and psychologists who talked about consciousness, the word no longer meant the 'inner world' but an indefinite, large category called awareness, as in the following example:",
"title": "The problem of definition"
},
{
"paragraph_id": 16,
"text": "It is difficult for modern Western man to grasp that the Greeks really had no concept of consciousness in that they did not class together phenomena as varied as problem solving, remembering, imagining, perceiving, feeling pain, dreaming, and acting on the grounds that all these are manifestations of being aware or being conscious.",
"title": "The problem of definition"
},
{
"paragraph_id": 17,
"text": "Many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness. In The Macmillan Dictionary of Psychology (1989 edition), Stuart Sutherland emphasized external awareness, and expressed a skeptical attitude more than a definition:",
"title": "The problem of definition"
},
{
"paragraph_id": 18,
"text": "Consciousness—The having of perceptions, thoughts, and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means. Many fall into the trap of equating consciousness with self-consciousness—to be conscious it is only necessary to be aware of the external world. Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it has evolved. Nothing worth reading has been written on it.",
"title": "The problem of definition"
},
{
"paragraph_id": 19,
"text": "Max Velmans noted, as of 2009, that there was a deep level of \"confusion and internal division\" among experts about the phenomenon of consciousness, because researchers lacked \"a sufficiently well-specified use of the term...to agree that they are investigating the same thing\". Within the \"modern consciousness studies\" community the technical phrase 'phenomenal consciousness' is a common synonym for all forms of awareness, or simply 'experience', without differentiating between inner and outer, or between higher and lower types. Using 'awareness', however, as a definition or synonym of consciousness is not a simple matter:",
"title": "The problem of definition"
},
{
"paragraph_id": 20,
"text": "If awareness of the environment . . . is the criterion of consciousness, then even the protozoans are conscious. If awareness of awareness is required, then it is doubtful whether the great apes and human infants are conscious.",
"title": "The problem of definition"
},
{
"paragraph_id": 21,
"text": "Many philosophers have argued that consciousness is a unitary concept that is understood by the majority of people despite the difficulty philosophers have had defining it. Velmans proposed that the \"everyday understanding of consciousness\" uncontroversially \"refers to experience itself rather than any particular thing that we observe or experience\" and he added that consciousness \"is [therefore] exemplified by all the things that we observe or experience\", whether thoughts, feelings, or perceptions.",
"title": "The problem of definition"
},
{
"paragraph_id": 22,
"text": "Velmans argued additionally that \"pre-existing theoretical commitments\" to competing explanations of consciousness might be a source of bias. With advances in brain research, \"the presence or absence of experienced phenomena\" of any kind underlies the work of those neuroscientists who seek \"to analyze the precise relation of conscious phenomenology to its associated information processing\" in the brain. This neuroscientific goal, to find the \"neural correlates of consciousness\" (NCC), begins with a theoretical commitment to the neurological origin of all \"experienced phenomena\" whether inner or outer. However, the easiest 'content of consciousness' to be so analyzed is \"the experienced three-dimensional world (the phenomenal world) beyond the body surface\", and most consciousness research since the 1990s, perhaps because of bias, has focused on processes of external perception.",
"title": "The problem of definition"
},
{
"paragraph_id": 23,
"text": "By contrast, a cognitive science point of view — with an inter-disciplinary perspective involving fields such as psychology, linguistics and anthropology — requires no agreed definition of 'consciousness' but studies the interaction of many processes besides perception, for example certain pragmatic issues such as the feeling of agency and the effects of regret and action on 'self-experience' of one's own body or social identity.",
"title": "The problem of definition"
},
{
"paragraph_id": 24,
"text": "Julian Jaynes, from a history of psychology perspective, rejected popular but \"superficial views of consciousness\" especially those which equate it with \"that vaguest of terms, experience\". In 1976 he insisted that if not for introspection, which for decades had been ignored or taken for granted rather than explained, there could be no \"conception of what consciousness is\" and in 1990, he reaffirmed the traditional idea of the phenomenon called 'consciousness', writing that \"its denotative definition is, as it was for Descartes, Locke, and Hume, what is introspectable\". Jaynes saw consciousness as an important but small part of human mentality, and he asserted: \"there can be no progress in the science of consciousness until ... what is introspectable [is] sharply distinguished\" from the unconscious processes of cognition such as perception, reactive awareness and attention, and automatic forms of learning, problem-solving and decision-making.",
"title": "The problem of definition"
},
{
"paragraph_id": 25,
"text": "Some have argued that we should eliminate the concept from our understanding of the mind, a position known as consciousness semanticism.",
"title": "The problem of definition"
},
{
"paragraph_id": 26,
"text": "In medicine, a \"level of consciousness\" terminology is used to describe a patient's arousal and responsiveness, which can be seen as a continuum of states ranging from full alertness and comprehension, through disorientation, delirium, loss of meaningful communication, and finally loss of movement in response to painful stimuli. Issues of practical concern include how the level of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted. The degree or level of consciousness is measured by standardized behavior observation scales such as the Glasgow Coma Scale.",
"title": "The problem of definition"
},
{
"paragraph_id": 27,
"text": "Most writers on the philosophy of consciousness have been concerned with defending a particular point of view, and have organized their material accordingly. For surveys, the most common approach is to follow a historical path by associating stances with the philosophers who are most strongly associated with them, for example, Descartes, Locke, Kant, etc. An alternative is to organize philosophical stances according to basic issues.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 28,
"text": "Philosophers differ from non-philosophers in their intuitions about what consciousness is. While most people have a strong intuition for the existence of what they refer to as consciousness, skeptics argue that this intuition is false, either because the concept of consciousness is intrinsically incoherent, or because our intuitions about it are based in illusions. Gilbert Ryle, for example, argued that traditional understanding of consciousness depends on a Cartesian dualist outlook that improperly distinguishes between mind and body, or between mind and world. He proposed that we speak not of minds, bodies, and the world, but of individuals, or persons, acting in the world. Thus, by speaking of \"consciousness\" we end up misleading ourselves by thinking that there is any sort of thing as consciousness separated from behavioral and linguistic understandings.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 29,
"text": "Ned Block argued that discussions on consciousness often failed to properly distinguish phenomenal (P-consciousness) from access (A-consciousness), though these terms had been used before Block. P-consciousness, according to Block, is raw experience: it is moving, colored forms, sounds, sensations, emotions and feelings with our bodies and responses at the center. These experiences, considered independently of any impact on behavior, are called qualia. A-consciousness, on the other hand, is the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior. So, when we perceive, information about what we perceive is access conscious; when we introspect, information about our thoughts is access conscious; when we remember, information about the past is access conscious, and so on. Although some philosophers, such as Daniel Dennett, have disputed the validity of this distinction, others have broadly accepted it. David Chalmers has argued that A-consciousness can in principle be understood in mechanistic terms, but that understanding P-consciousness is much more challenging: he calls this the hard problem of consciousness.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 30,
"text": "Some philosophers believe that Block's two types of consciousness are not the end of the story. William Lycan, for example, argued in his book Consciousness and Experience that at least eight clearly distinct types of consciousness can be identified (organism consciousness; control consciousness; consciousness of; state/event consciousness; reportability; introspective consciousness; subjective consciousness; self-consciousness)—and that even this list omits several more obscure forms.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 31,
"text": "There is also debate over whether or not A-consciousness and P-consciousness always coexist or if they can exist separately. Although P-consciousness without A-consciousness is more widely accepted, there have been some hypothetical examples of A without P. Block, for instance, suggests the case of a \"zombie\" that is computationally identical to a person but without any subjectivity. However, he remains somewhat skeptical concluding \"I don't know whether there are any actual cases of A-consciousness without P-consciousness, but I hope I have illustrated their conceptual possibility.\"",
"title": "Philosophy of mind"
},
{
"paragraph_id": 32,
"text": "Sam Harris observes: \"At the level of your experience, you are not a body of cells, organelles, and atoms; you are consciousness and its ever-changing contents\". Seen in this way, consciousness is a subjectively experienced, ever-present field in which things (the contents of consciousness) come and go.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 33,
"text": "Christopher Tricker argues that this field of consciousness is symbolized by the mythical bird that opens the Daoist classic the Zhuangzi. This bird's name is Of a Flock (peng 鵬), yet its back is countless thousands of miles across and its wings are like clouds arcing across the heavens. \"Like Of a Flock, whose wings arc across the heavens, the wings of your consciousness span to the horizon. At the same time, the wings of every other being's consciousness span to the horizon. You are of a flock, one bird among kin.\"",
"title": "Philosophy of mind"
},
{
"paragraph_id": 34,
"text": "Mental processes (such as consciousness) and physical processes (such as brain events) seem to be correlated, however the specific nature of the connection is unknown.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 35,
"text": "The first influential philosopher to discuss this question specifically was Descartes, and the answer he gave is known as Cartesian dualism. Descartes proposed that consciousness resides within an immaterial domain he called res cogitans (the realm of thought), in contrast to the domain of material things, which he called res extensa (the realm of extension). He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called the pineal gland.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 36,
"text": "Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed. However, no alternative solution has gained general acceptance. Proposed solutions can be divided broadly into two categories: dualist solutions that maintain Descartes's rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; and monist solutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism are substance dualism (which holds that the mind is formed of a distinct type of substance not governed by the laws of physics) and property dualism (which holds that the laws of physics are universally valid but cannot be used to explain the mind). The three main types of monism are physicalism (which holds that the mind consists of matter organized in a particular way), idealism (which holds that only thought or experience truly exists, and matter is merely an illusion), and neutral monism (which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these schools of thought.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 37,
"text": "Since the dawn of Newtonian science with its vision of simple mechanical principles governing the entire universe, some philosophers have been tempted by the idea that consciousness could be explained in purely physical terms. The first influential writer to propose such an idea explicitly was Julien Offray de La Mettrie, in his book Man a Machine (L'homme machine). His arguments, however, were very abstract. The most influential modern physical theories of consciousness are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman and Antonio Damasio, and by philosophers such as Daniel Dennett, seek to explain consciousness in terms of neural events occurring within the brain. Many other neuroscientists, such as Christof Koch, have explored the neural basis of consciousness without attempting to frame all-encompassing global theories. At the same time, computer scientists working in the field of artificial intelligence have pursued the goal of creating digital computer programs that can simulate or embody consciousness.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 38,
"text": "A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, but that quantum theory may provide the missing ingredients. Several theorists have therefore proposed quantum mind (QM) theories of consciousness. Notable theories falling into this category include the holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory formulated by Stuart Hameroff and Roger Penrose. Some of these QM theories offer descriptions of phenomenal consciousness, as well as QM interpretations of access consciousness. None of the quantum mechanical theories have been confirmed by experiment. Recent publications by G. Guerreshi, J. Cia, S. Popescu, and H. Briegel could falsify proposals such as those of Hameroff, which rely on quantum entanglement in protein. At the present time many scientists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 39,
"text": "Apart from the general question of the \"hard problem\" of consciousness (which is, roughly speaking, the question of how mental experience can arise from a physical basis), a more specialized question is how to square the subjective notion that we are in control of our decisions (at least in some small measure) with the customary view of causality that subsequent events are caused by prior events. The topic of free will is the philosophical and scientific examination of this conundrum.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 40,
"text": "Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. But if consciousness is subjective and not visible from the outside, why do the vast majority of people believe that other people are conscious, but rocks and trees are not? This is called the problem of other minds. It is particularly acute for people who believe in the possibility of philosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness. Related issues have also been studied extensively by Greg Littmann of the University of Illinois, and by Colin Allen (a professor at the University of Pittsburgh) regarding the literature and research studying artificial intelligence in androids.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 41,
"text": "The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior; we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do. There are, however, a variety of problems with that explanation. For one thing, it seems to violate the principle of parsimony, by postulating an invisible entity that is not necessary to explain what we observe. Some philosophers, such as Daniel Dennett in a research paper titled \"The Unimagined Preposterousness of Zombies\", argue that people who give this explanation do not really understand what they are saying. More broadly, philosophers who do not accept the possibility of zombies generally believe that consciousness is reflected in behavior (including verbal behavior), and that we attribute consciousness on the basis of behavior. A more straightforward way of saying this is that we attribute experiences to people because of what they can do, including the fact that they can tell us about their experiences.",
"title": "Philosophy of mind"
},
{
"paragraph_id": 42,
"text": "For many decades, consciousness as a research topic was avoided by the majority of mainstream scientists, because of a general feeling that a phenomenon defined in subjective terms could not properly be studied using objective experimental methods. In 1975 George Mandler published an influential psychological study which distinguished between slow, serial, and limited conscious processes and fast, parallel and extensive unconscious ones. The Science and Religion Forum 1984 annual conference, 'From Artificial Intelligence to Human Consciousness' identified the nature of consciousness as a matter for investigation; Donald Michie was a keynote speaker. Starting in the 1980s, an expanding community of neuroscientists and psychologists have associated themselves with a field called Consciousness Studies, giving rise to a stream of experimental work published in books, journals such as Consciousness and Cognition, Frontiers in Consciousness Research, Psyche, and the Journal of Consciousness Studies, along with regular conferences organized by groups such as the Association for the Scientific Study of Consciousness and the Society for Consciousness Studies.",
"title": "Scientific study"
},
{
"paragraph_id": 43,
"text": "Modern medical and psychological investigations into consciousness are based on psychological experiments (including, for example, the investigation of priming effects using subliminal stimuli), and on case studies of alterations in consciousness produced by trauma, illness, or drugs. Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences that are reported by human subjects; the second makes use of the concept of consciousness that has been developed by neurologists and other medical professionals who deal with patients whose behavior is impaired. In either case, the ultimate goals are to develop techniques for assessing consciousness objectively in humans as well as other animals, and to understand the neural and psychological mechanisms that underlie it.",
"title": "Scientific study"
},
{
"paragraph_id": 44,
"text": "Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition. In the majority of experiments that are specifically about consciousness, the subjects are human, and the criterion used is verbal report: in other words, subjects are asked to describe their experiences, and their descriptions are treated as observations of the contents of consciousness. For example, subjects who stare continuously at a Necker cube usually report that they experience it \"flipping\" between two 3D configurations, even though the stimulus itself remains the same. The objective is to understand the relationship between the conscious awareness of stimuli (as indicated by verbal report) and the effects the stimuli have on brain activity and behavior. In several paradigms, such as the technique of response priming, the behavior of subjects is clearly influenced by stimuli for which they report no awareness, and suitable experimental manipulations can lead to increasing priming effects despite decreasing prime identification (double dissociation).",
"title": "Scientific study"
},
{
"paragraph_id": 45,
"text": "Verbal report is widely considered to be the most reliable indicator of consciousness, but it raises a number of issues. For one thing, if verbal reports are treated as observations, akin to observations in other branches of science, then the possibility arises that they may contain errors—but it is difficult to make sense of the idea that subjects could be wrong about their own experiences, and even more difficult to see how such an error could be detected. Daniel Dennett has argued for an approach he calls heterophenomenology, which means treating verbal reports as stories that may or may not be true, but his ideas about how to do this have not been widely adopted. Another issue with verbal report as a criterion is that it restricts the field of study to humans who have language: this approach cannot be used to study consciousness in other species, pre-linguistic children, or people with types of brain damage that impair language. As a third issue, philosophers who dispute the validity of the Turing test may feel that it is possible, at least in principle, for verbal report to be dissociated from consciousness entirely: a philosophical zombie may give detailed verbal reports of awareness in the absence of any genuine awareness.",
"title": "Scientific study"
},
{
"paragraph_id": 46,
"text": "Although verbal report is in practice the \"gold standard\" for ascribing consciousness, it is not the only possible criterion. In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent. The scientific literature regarding the neural bases of arousal and purposeful movement is very extensive. Their reliability as indicators of consciousness is disputed, however, due to numerous studies showing that alert human subjects can be induced to behave purposefully in a variety of ways in spite of reporting a complete lack of awareness. Studies of the neuroscience of free will have also shown that the experiences that people report when they behave purposefully sometimes do not correspond to their actual behaviors or to the patterns of electrical activity recorded from their brains.",
"title": "Scientific study"
},
{
"paragraph_id": 47,
"text": "Another approach applies specifically to the study of self-awareness, that is, the ability to distinguish oneself from others. In the 1970s Gordon Gallup developed an operational test for self-awareness, known as the mirror test. The test examines whether animals are able to differentiate between seeing themselves in a mirror versus seeing other animals. The classic example involves placing a spot of coloring on the skin or fur near the individual's forehead and seeing if they attempt to remove it or at least touch the spot, thus indicating that they recognize that the individual they are seeing in the mirror is themselves. Humans (older than 18 months) and other great apes, bottlenose dolphins, orcas, pigeons, European magpies and elephants have all been observed to pass this test.",
"title": "Scientific study"
},
{
"paragraph_id": 48,
"text": "A major part of the scientific literature on consciousness consists of studies that examine the relationship between the experiences reported by subjects and the activity that simultaneously takes place in their brains—that is, studies of the neural correlates of consciousness. The hope is to find that activity in a particular part of the brain, or a particular pattern of global brain activity, which will be strongly predictive of conscious awareness. Several brain imaging techniques, such as EEG and fMRI, have been used for physical measures of brain activity in these studies.",
"title": "Scientific study"
},
{
"paragraph_id": 49,
"text": "Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience. Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.",
"title": "Scientific study"
},
{
"paragraph_id": 50,
"text": "A number of studies have shown that activity in primary sensory areas of the brain is not sufficient to produce consciousness: it is possible for subjects to report a lack of awareness even when areas such as the primary visual cortex (V1) show clear electrical responses to a stimulus. Higher brain areas are seen as more promising, especially the prefrontal cortex, which is involved in a range of higher cognitive functions collectively known as executive functions. There is substantial evidence that a \"top-down\" flow of neural activity (i.e., activity propagating from the frontal cortex to sensory areas) is more predictive of conscious awareness than a \"bottom-up\" flow of activity. The prefrontal cortex is not the only candidate area, however: studies by Nikos Logothetis and his colleagues have shown, for example, that visually responsive neurons in parts of the temporal lobe reflect the visual perception in the situation when conflicting visual images are presented to different eyes (i.e., bistable percepts during binocular rivalry). Furthermore, top-down feedback from higher to lower visual brain areas may be weaker or absent in the peripheral visual field, as suggested by some experimental data and theoretical arguments; nevertheless humans can perceive visual inputs in the peripheral visual field arising from bottom-up V1 neural activities. Meanwhile, bottom-up V1 activities for the central visual fields can be vetoed, and thus made invisible to perception, by the top-down feedback, when these bottom-up signals are inconsistent with the brain's internal model of the visual world.",
"title": "Scientific study"
},
{
"paragraph_id": 51,
"text": "Modulation of neural responses may correlate with phenomenal experiences. In contrast to the raw electrical responses that do not correlate with consciousness, the modulation of these responses by other stimuli correlates surprisingly well with an important aspect of consciousness: namely with the phenomenal experience of stimulus intensity (brightness, contrast). In the research group of Danko Nikolić it has been shown that some of the changes in the subjectively perceived brightness correlated with the modulation of firing rates while others correlated with the modulation of neural synchrony. An fMRI investigation suggested that these findings were strictly limited to the primary visual areas. This indicates that, in the primary visual areas, changes in firing rates and synchrony can be considered as neural correlates of qualia—at least for some type of qualia.",
"title": "Scientific study"
},
{
"paragraph_id": 52,
"text": "In 2013, the perturbational complexity index (PCI) was proposed, a measure of the algorithmic complexity of the electrophysiological response of the cortex to transcranial magnetic stimulation. This measure was shown to be higher in individuals that are awake, in REM sleep or in a locked-in state than in those who are in deep sleep or in a vegetative state, making it potentially useful as a quantitative assessment of consciousness states.",
"title": "Scientific study"
},
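The intuition behind PCI is that a conscious brain produces a differentiated, hard-to-compress response, while an unconscious brain produces a stereotyped, highly compressible one. Purely as a hedged toy illustration of that compression intuition — not the published PCI pipeline, which involves source localization, statistical thresholding and normalization — the sketch below counts phrases in an LZ78-style parse of a binary string; the function name and example strings are illustrative assumptions, not taken from the PCI literature.

```python
def lz_phrase_count(bits: str) -> int:
    """Count distinct phrases in an LZ78-style left-to-right parse.

    A rough proxy for compressibility: irregular strings parse into
    more phrases than repetitive strings of the same length.
    """
    phrases = set()
    current = ""
    count = 0
    for symbol in bits:
        current += symbol
        if current not in phrases:
            phrases.add(current)
            count += 1
            current = ""  # start the next phrase
    if current:  # unfinished trailing phrase
        count += 1
    return count

print(lz_phrase_count("0" * 16))            # 6  (highly compressible)
print(lz_phrase_count("0110100110010110"))  # 8  (more differentiated)
```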
{
"paragraph_id": 53,
"text": "Assuming that not only humans but even some non-mammalian species are conscious, a number of evolutionary approaches to the problem of neural correlates of consciousness open up. For example, assuming that birds are conscious—a common assumption among neuroscientists and ethologists due to the extensive cognitive repertoire of birds—there are comparative neuroanatomical ways to validate some of the principal, currently competing, mammalian consciousness–brain theories. The rationale for such a comparative study is that the avian brain deviates structurally from the mammalian brain. So how similar are they? What homologs can be identified? The general conclusion from the study by Butler, et al., is that some of the major theories for the mammalian brain also appear to be valid for the avian brain. The structures assumed to be critical for consciousness in mammalian brains have homologous counterparts in avian brains. Thus the main portions of the theories of Crick and Koch, Edelman and Tononi, and Cotterill seem to be compatible with the assumption that birds are conscious. Edelman also differentiates between what he calls primary consciousness (which is a trait shared by humans and non-human animals) and higher-order consciousness as it appears in humans alone along with human language capacity. Certain aspects of the three theories, however, seem less easy to apply to the hypothesis of avian consciousness. For instance, the suggestion by Crick and Koch that layer 5 neurons of the mammalian brain have a special role, seems difficult to apply to the avian brain, since the avian homologs have a different morphology. Likewise, the theory of Eccles seems incompatible, since a structural homolog/analogue to the dendron has not been found in avian brains. The assumption of an avian consciousness also brings the reptilian brain into focus. The reason is the structural continuity between avian and reptilian brains, meaning that the phylogenetic origin of consciousness may be earlier than suggested by many leading neuroscientists.",
"title": "Scientific study"
},
{
"paragraph_id": 54,
"text": "Joaquin Fuster of UCLA has advocated the position of the importance of the prefrontal cortex in humans, along with the areas of Wernicke and Broca, as being of particular importance to the development of human language capacities neuro-anatomically necessary for the emergence of higher-order consciousness in humans.",
"title": "Scientific study"
},
{
"paragraph_id": 55,
"text": "A study in 2016 looked at lesions in specific areas of the brainstem that were associated with coma and vegetative states. A small region of the rostral dorsolateral pontine tegmentum in the brainstem was suggested to drive consciousness through functional connectivity with two cortical regions, the left ventral anterior insular cortex, and the pregenual anterior cingulate cortex. These three regions may work together as a triad to maintain consciousness.",
"title": "Scientific study"
},
{
"paragraph_id": 56,
"text": "A considerable amount of research is being carried out on the chemical basis of thought formation, storage, memory consolidation and formation of logical thought processes.In 2001 Atta-ur-Rahman proposed that the folding of glycoproteins by intermolecular or intramolecular hydrogen bonding may be the key process involved in formation of partly folded patterns for memory storage. The hydrogen bonding protein patterns hypothesis (HBPPH) proposes the formation of hydrogen bonds between hydroxyl groups of sugar moieties present in the glycoproteins with hydroxyl (or NH) groups of other sugar moieties or biomolecules leading to the creation of certain partly folded protein patterns. This provides a reasonable mechanism by which the brain may be able to gather and store information by the construction of intermolecular and intramolecular networks of folded glycoproteins. Support for partly folded proteins being involved in memory processes has come from recent researches in the field. Two possible mechanisms through which such partly folded protein patterns may be correlated leading to logical thought and to consciousness can be via quantum effects, or by an overlap of molecular vibrations arising from these patterns. The Nobel Laureate Roger Penrose and others have also proposed that quantum oscillations may be involved in consciousness.",
"title": "Scientific study"
},
{
"paragraph_id": 57,
"text": "A wide range of empirical theories of consciousness have been proposed. Adrian Doerig and colleagues list 13 notable theories, while Anil Seth and Tim Bayne list 22 notable theories.",
"title": "Scientific study"
},
{
"paragraph_id": 58,
"text": "Global workspace theory (GWT) is a cognitive architecture and theory of consciousness proposed by the cognitive psychologist Bernard Baars in 1988. Baars explains the theory with the metaphor of a theater, with conscious processes represented by an illuminated stage. This theater integrates inputs from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit \"audience\"). The theory has since been expanded upon by other scientists including cognitive neuroscientist Stanislas Dehaene and Lionel Naccache.",
"title": "Scientific study"
},
{
"paragraph_id": 59,
"text": "Integrated information theory (IIT) postulates that consciousness resides in the information being processed and arises once the information reaches a certain level of complexity. Proponents of this model suggest that it may provide a physical grounding for consciousness in neurons, as they provide the mechanism by which information is integrated.",
"title": "Scientific study"
},
{
"paragraph_id": 60,
"text": "Orchestrated objective reduction (Orch OR) postulates that consciousness originates at the quantum level inside neurons. The mechanism is held to be a quantum process called objective reduction that is orchestrated by cellular structures called microtubules. However the details of the mechanism would go beyond current quantum theory.",
"title": "Scientific study"
},
{
"paragraph_id": 61,
"text": "In 2011, Graziano and Kastner proposed the \"attention schema\" theory of awareness. In that theory, specific cortical areas, notably in the superior temporal sulcus and the temporo-parietal junction, are used to build the construct of awareness and attribute it to other people. The same cortical machinery is also used to attribute awareness to oneself. Damage to these cortical regions can lead to deficits in consciousness such as hemispatial neglect. In the attention schema theory, the value of explaining the feature of awareness and attributing it to a person is to gain a useful predictive model of that person's attentional processing. Attention is a style of information processing in which a brain focuses its resources on a limited set of interrelated signals. Awareness, in this theory, is a useful, simplified schema that represents attentional states. To be aware of X is explained by constructing a model of one's attentional focus on X.",
"title": "Scientific study"
},
{
"paragraph_id": 62,
"text": "The entropic brain is a theory of conscious states informed by neuroimaging research with psychedelic drugs. The theory suggests that the brain in primary states such as rapid eye movement (REM) sleep, early psychosis and under the influence of psychedelic drugs, is in a disordered state; normal waking consciousness constrains some of this freedom and makes possible metacognitive functions such as internal self-administered reality testing and self-awareness. Criticism has included questioning whether the theory has been adequately tested.",
"title": "Scientific study"
},
{
"paragraph_id": 63,
"text": "In 2017, work by David Rudrauf and colleagues, including Karl Friston, applied the active inference paradigm to consciousness, a model of how sensory data is integrated with priors in a process of projective transformation. The authors argue that, while their model identifies a key relationship between computation and phenomenology, it does not completely solve the hard problem of consciousness or completely close the explanatory gap.",
"title": "Scientific study"
},
{
"paragraph_id": 64,
"text": "Opinions are divided as to where in biological evolution consciousness emerged and about whether or not consciousness has any survival value. Some argue that consciousness is a byproduct of evolution. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles. Other authors date the origins of consciousness to the first animals with nervous systems or early vertebrates in the Cambrian over 500 million years ago. Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness. Each of these scenarios raises the question of the possible survival value of consciousness.",
"title": "Scientific study"
},
{
"paragraph_id": 65,
"text": "Thomas Henry Huxley defends in an essay titled On the Hypothesis that Animals are Automata, and its History an epiphenomenalist theory of consciousness according to which consciousness is a causally inert effect of neural activity—\"as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery\". To this William James objects in his essay Are We Automata? by stating an evolutionary argument for mind-brain interaction implying that if the preservation and development of consciousness in the biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes, but has had a survival value itself; and it could only have had this if it had been efficacious. Karl Popper develops a similar evolutionary argument in the book The Self and Its Brain.",
"title": "Scientific study"
},
{
"paragraph_id": 66,
"text": "Regarding the primary function of conscious processing, a recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing that would otherwise be independent. This has been called the integration consensus. Another example has been proposed by Gerald Edelman called dynamic core hypothesis which puts emphasis on reentrant connections that reciprocally link areas of the brain in a massively parallel manner. Edelman also stresses the importance of the evolutionary emergence of higher-order consciousness in humans from the historically older trait of primary consciousness which humans share with non-human animals (see Neural correlates section above). These theories of integrative function present solutions to two classic problems associated with consciousness: differentiation and unity. They show how our conscious experience can discriminate between a virtually unlimited number of different possible scenes and details (differentiation) because it integrates those details from our sensory systems, while the integrative nature of consciousness in this view easily explains how our experience can seem unified as one whole despite all of these individual parts. However, it remains unspecified which kinds of information are integrated in a conscious manner and which kinds can be integrated without consciousness. Nor is it explained what specific causal role conscious integration plays, nor why the same functionality cannot be achieved without consciousness. Obviously not all kinds of information are capable of being disseminated consciously (e.g., neural activity related to vegetative functions, reflexes, unconscious motor programs, low-level perceptual analyzes, etc.) and many kinds of information can be disseminated and combined with other kinds without consciousness, as in intersensory interactions such as the ventriloquism effect. Hence it remains unclear why any of it is conscious. For a review of the differences between conscious and unconscious integrations, see the article of Ezequiel Morsella.",
"title": "Scientific study"
},
{
"paragraph_id": 67,
"text": "As noted earlier, even among writers who consider consciousness to be well-defined, there is widespread dispute about which animals other than humans can be said to possess it. Edelman has described this distinction as that of humans possessing higher-order consciousness while sharing the trait of primary consciousness with non-human animals (see previous paragraph). Thus, any examination of the evolution of consciousness is faced with great difficulties. Nevertheless, some writers have argued that consciousness can be viewed from the standpoint of evolutionary biology as an adaptation in the sense of a trait that increases fitness. In his article \"Evolution of consciousness\", John Eccles argued that special anatomical and physical properties of the mammalian cerebral cortex gave rise to consciousness (\"[a] psychon ... linked to [a] dendron through quantum physics\"). Bernard Baars proposed that once in place, this \"recursive\" circuitry may have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms. Peter Carruthers has put forth one such potential adaptive advantage gained by conscious creatures by suggesting that consciousness allows an individual to make distinctions between appearance and reality. This ability would enable a creature to recognize the likelihood that their perceptions are deceiving them (e.g. that water in the distance may be a mirage) and behave accordingly, and it could also facilitate the manipulation of others by recognizing how things appear to them for both cooperative and devious ends.",
"title": "Scientific study"
},
{
"paragraph_id": 68,
"text": "Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes. No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between function F being performed by conscious organism O and non-conscious organism O*, it is unclear what adaptive advantage consciousness could provide. As a result, an exaptive explanation of consciousness has gained favor with some theorists that posit consciousness did not evolve as an adaptation but was an exaptation arising as a consequence of other developments such as increases in brain size or cortical rearrangement. Consciousness in this sense has been compared to the blind spot in the retina where it is not an adaption of the retina, but instead just a by-product of the way the retinal axons were wired. Several scholars including Pinker, Chomsky, Edelman, and Luria have indicated the importance of the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness (see Neural correlates section above).",
"title": "Scientific study"
},
{
"paragraph_id": 69,
"text": "There are some brain states in which consciousness seems to be absent, including dreamless sleep or coma. There are also a variety of circumstances that can change the relationship between the mind and the world in less drastic ways, producing what are known as altered states of consciousness. Some altered states occur naturally; others can be produced by drugs or brain damage. Altered states can be accompanied by changes in thinking, disturbances in the sense of time, feelings of loss of control, changes in emotional expression, alternations in body image and changes in meaning or significance.",
"title": "Scientific study"
},
{
"paragraph_id": 70,
"text": "The two most widely accepted altered states are sleep and dreaming. Although dream sleep and non-dream sleep appear very similar to an outside observer, each is associated with a distinct pattern of brain activity, metabolic activity, and eye movement; each is also associated with a distinct pattern of experience and cognition. During ordinary non-dream sleep, people who are awakened report only vague and sketchy thoughts, and their experiences do not cohere into a continuous narrative. During dream sleep, in contrast, people who are awakened report rich and detailed experiences in which events form a continuous progression, which may however be interrupted by bizarre or fantastic intrusions. Thought processes during the dream state frequently show a high level of irrationality. Both dream and non-dream states are associated with severe disruption of memory: it usually disappears in seconds during the non-dream state, and in minutes after awakening from a dream unless actively refreshed.",
"title": "Scientific study"
},
{
"paragraph_id": 71,
"text": "Research conducted on the effects of partial epileptic seizures on consciousness found that patients who have partial epileptic seizures experience altered states of consciousness. In partial epileptic seizures, consciousness is impaired or lost while some aspects of consciousness, often automated behaviors, remain intact. Studies found that when measuring the qualitative features during partial epileptic seizures, patients exhibited an increase in arousal and became absorbed in the experience of the seizure, followed by difficulty in focusing and shifting attention.",
"title": "Scientific study"
},
{
"paragraph_id": 72,
"text": "A variety of psychoactive drugs, including alcohol, have notable effects on consciousness. These range from a simple dulling of awareness produced by sedatives, to increases in the intensity of sensory qualities produced by stimulants, cannabis, empathogens–entactogens such as MDMA (\"Ecstasy\"), or most notably by the class of drugs known as psychedelics. LSD, mescaline, psilocybin, dimethyltryptamine, and others in this group can produce major distortions of perception, including hallucinations; some users even describe their drug-induced experiences as mystical or spiritual in quality. The brain mechanisms underlying these effects are not as well understood as those induced by use of alcohol, but there is substantial evidence that alterations in the brain system that uses the chemical neurotransmitter serotonin play an essential role.",
"title": "Scientific study"
},
{
"paragraph_id": 73,
"text": "There has been some research into physiological changes in yogis and people who practise various techniques of meditation. Some research with brain waves during meditation has reported differences between those corresponding to ordinary relaxation and those corresponding to meditation. It has been disputed, however, whether there is enough evidence to count these as physiologically distinct states of consciousness.",
"title": "Scientific study"
},
{
"paragraph_id": 74,
"text": "The most extensive study of the characteristics of altered states of consciousness was made by psychologist Charles Tart in the 1960s and 1970s. Tart analyzed a state of consciousness as made up of a number of component processes, including exteroception (sensing the external world); interoception (sensing the body); input-processing (seeing meaning); emotions; memory; time sense; sense of identity; evaluation and cognitive processing; motor output; and interaction with the environment. Each of these, in his view, could be altered in multiple ways by drugs or other manipulations. The components that Tart identified have not, however, been validated by empirical studies. Research in this area has not yet reached firm conclusions, but a recent questionnaire-based study identified eleven significant factors contributing to drug-induced states of consciousness: experience of unity; spiritual experience; blissful state; insightfulness; disembodiment; impaired control and cognition; anxiety; complex imagery; elementary imagery; audio-visual synesthesia; and changed meaning of percepts.",
"title": "Scientific study"
},
{
"paragraph_id": 75,
"text": "The medical approach to consciousness is scientifically oriented. It derives from a need to treat people whose brain function has been impaired as a result of disease, brain damage, toxins, or drugs. In medicine, conceptual distinctions are considered useful to the degree that they can help to guide treatments. The medical approach focuses mostly on the amount of consciousness a person has: in medicine, consciousness is assessed as a \"level\" ranging from coma and brain death at the low end, to full alertness and purposeful responsiveness at the high end.",
"title": "Medical aspects"
},
{
"paragraph_id": 76,
"text": "Consciousness is of concern to patients and physicians, especially neurologists and anesthesiologists. Patients may have disorders of consciousness or may need to be anesthetized for a surgical procedure. Physicians may perform consciousness-related interventions such as instructing the patient to sleep, administering general anesthesia, or inducing medical coma. Also, bioethicists may be concerned with the ethical implications of consciousness in medical cases of patients such as the Karen Ann Quinlan case, while neuroscientists may study patients with impaired consciousness in hopes of gaining information about how the brain works.",
"title": "Medical aspects"
},
{
"paragraph_id": 77,
"text": "In medicine, consciousness is examined using a set of procedures known as neuropsychological assessment. There are two commonly used methods for assessing the level of consciousness of a patient: a simple procedure that requires minimal training, and a more complex procedure that requires substantial expertise. The simple procedure begins by asking whether the patient is able to move and react to physical stimuli. If so, the next question is whether the patient can respond in a meaningful way to questions and commands. If so, the patient is asked for name, current location, and current day and time. A patient who can answer all of these questions is said to be \"alert and oriented times four\" (sometimes denoted \"A&Ox4\" on a medical chart), and is usually considered fully conscious.",
"title": "Medical aspects"
},
{
"paragraph_id": 78,
"text": "The more complex procedure is known as a neurological examination, and is usually carried out by a neurologist in a hospital setting. A formal neurological examination runs through a precisely delineated series of tests, beginning with tests for basic sensorimotor reflexes, and culminating with tests for sophisticated use of language. The outcome may be summarized using the Glasgow Coma Scale, which yields a number in the range 3–15, with a score of 3 to 8 indicating coma, and 15 indicating full consciousness. The Glasgow Coma Scale has three subscales, measuring the best motor response (ranging from \"no motor response\" to \"obeys commands\"), the best eye response (ranging from \"no eye opening\" to \"eyes opening spontaneously\") and the best verbal response (ranging from \"no verbal response\" to \"fully oriented\"). There is also a simpler pediatric version of the scale, for children too young to be able to use language.",
"title": "Medical aspects"
},
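Because the Glasgow Coma Scale total is simply the sum of its three subscores, a small worked example makes the arithmetic concrete. This is a minimal sketch under the standard published subscale ranges (eye 1–4, verbal 1–5, motor 1–6), not clinical software; the function name and example values are illustrative assumptions.

```python
def glasgow_coma_score(eye: int, verbal: int, motor: int) -> int:
    """Sum the three GCS subscores; the total ranges from 3 to 15."""
    limits = {"eye": (eye, 4), "verbal": (verbal, 5), "motor": (motor, 6)}
    for name, (value, upper) in limits.items():
        if not 1 <= value <= upper:
            raise ValueError(f"{name} subscore must be between 1 and {upper}")
    return eye + verbal + motor

# Eyes open to speech (3), confused conversation (4), localizes pain (5):
print(glasgow_coma_score(eye=3, verbal=4, motor=5))  # 12 -- above the 3-8 coma band
```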
{
"paragraph_id": 79,
"text": "In 2013, an experimental procedure was developed to measure degrees of consciousness, the procedure involving stimulating the brain with a magnetic pulse, measuring resulting waves of electrical activity, and developing a consciousness score based on the complexity of the brain activity.",
"title": "Medical aspects"
},
{
"paragraph_id": 80,
"text": "Medical conditions that inhibit consciousness are considered disorders of consciousness. This category generally includes minimally conscious state and persistent vegetative state, but sometimes also includes the less severe locked-in syndrome and more severe chronic coma. Differential diagnosis of these disorders is an active area of biomedical research. Finally, brain death results in possible irreversible disruption of consciousness. While other conditions may cause a moderate deterioration (e.g., dementia and delirium) or transient interruption (e.g., grand mal and petit mal seizures) of consciousness, they are not included in this category.",
"title": "Medical aspects"
},
{
"paragraph_id": 81,
"text": "Medical experts increasingly view anosognosia as a disorder of consciousness. Anosognosia is a Greek-derived term meaning \"unawareness of disease\". This is a condition in which patients are disabled in some way, most commonly as a result of a stroke, but either misunderstand the nature of the problem or deny that there is anything wrong with them. The most frequently occurring form is seen in people who have experienced a stroke damaging the parietal lobe in the right hemisphere of the brain, giving rise to a syndrome known as hemispatial neglect, characterized by an inability to direct action or attention toward objects located to the left with respect to their bodies. Patients with hemispatial neglect are often paralyzed on the left side of the body, but sometimes deny being unable to move. When questioned about the obvious problem, the patient may avoid giving a direct answer, or may give an explanation that does not make sense. Patients with hemispatial neglect may also fail to recognize paralyzed parts of their bodies: one frequently mentioned case is of a man who repeatedly tried to throw his own paralyzed right leg out of the bed he was lying in, and when asked what he was doing, complained that somebody had put a dead leg into the bed with him. An even more striking type of anosognosia is Anton–Babinski syndrome, a rarely occurring condition in which patients become blind but claim to be able to see normally, and persist in this claim in spite of all evidence to the contrary.",
"title": "Medical aspects"
},
{
"paragraph_id": 82,
"text": "Of the eight types of consciousness in the Lycan classification, some are detectable in utero and others develop years after birth. Psychologist and educator William Foulkes studied children's dreams and concluded that prior to the shift in cognitive maturation that humans experience during ages five to seven, children lack the Lockean consciousness that Lycan had labeled \"introspective consciousness\" and that Foulkes labels \"self-reflection.\" In a 2020 paper, Katherine Nelson and Robyn Fivush use \"autobiographical consciousness\" to label essentially the same faculty, and agree with Foulkes on the timing of this faculty's acquisition. Nelson and Fivush contend that \"language is the tool by which humans create a new, uniquely human form of consciousness, namely, autobiographical consciousness.\" Julian Jaynes had staked out these positions decades earlier. Citing the developmental steps that lead the infant to autobiographical consciousness, Nelson and Fivush point to the acquisition of \"theory of mind,\" calling theory of mind \"necessary for autobiographical consciousness\" and defining it as \"understanding differences between one's own mind and others' minds in terms of beliefs, desires, emotions and thoughts.\" They write, \"The hallmark of theory of mind, the understanding of false belief, occurs ... at five to six years of age.\"",
"title": "Outside human adults"
},
{
"paragraph_id": 83,
"text": "The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell humans about their experiences. Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals due to the fact that he believed only humans have a non-physical mind. Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and are often based on personal interactions with pets and other animals they have observed.",
"title": "Outside human adults"
},
{
"paragraph_id": 84,
"text": "Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled What Is it Like to Be a Bat?. He said that an organism is conscious \"if and only if there is something that it is like to be that organism—something it is like for the organism\"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself. Other thinkers, such as Douglas Hofstadter, dismiss this argument as incoherent. Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—Donald Griffin's 2001 book Animal Minds reviews a substantial portion of the evidence.",
"title": "Outside human adults"
},
{
"paragraph_id": 85,
"text": "On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the University of Cambridge to celebrate the Francis Crick Memorial Conference, which deals with consciousness in humans and pre-linguistic consciousness in nonhuman animals. After the conference, they signed in the presence of Stephen Hawking, the 'Cambridge Declaration on Consciousness', which summarizes the most important findings of the survey:",
"title": "Outside human adults"
},
{
"paragraph_id": 86,
"text": "\"We decided to reach a consensus and make a statement directed to the public that is not scientific. It's obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society.\"",
"title": "Outside human adults"
},
{
"paragraph_id": 87,
"text": "\"Convergent evidence indicates that non-human animals ..., including all mammals and birds, and other creatures, ... have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors.\"",
"title": "Outside human adults"
},
{
"paragraph_id": 88,
"text": "The idea of an artifact made conscious is an ancient theme of mythology, appearing for example in the Greek myth of Pygmalion, who carved a statue that was magically brought to life, and in medieval Jewish stories of the Golem, a magically animated homunculus built of clay. However, the possibility of actually constructing a conscious machine was probably first discussed by Ada Lovelace, in a set of notes written in 1842 about the Analytical Engine invented by Charles Babbage, a precursor (never built) to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way. She wrote:",
"title": "Outside human adults"
},
{
"paragraph_id": 89,
"text": "It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. ... The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.",
"title": "Outside human adults"
},
{
"paragraph_id": 90,
"text": "One of the most influential contributions to this question was an essay written in 1950 by pioneering computer scientist Alan Turing, titled Computing Machinery and Intelligence. Turing disavowed any interest in terminology, saying that even \"Can machines think?\" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the Turing test. To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay Turing discussed a variety of possible objections, and presented a counterargument to each of them. The Turing test is commonly cited in discussions of artificial intelligence as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious, while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious. A third group of scholars have argued that with technological growth once machines begin to display any substantial signs of human-like behavior then the dichotomy (of human consciousness compared to human-like consciousness) becomes passé and issues of machine autonomy begin to prevail even as observed in its nascent form within contemporary industry and technology. Jürgen Schmidhuber argues that consciousness is the result of compression. As an agent sees representation of itself recurring in the environment, the compression of this representation can be called consciousness.",
"title": "Outside human adults"
},
{
"paragraph_id": 91,
"text": "In a lively exchange over what has come to be referred to as \"the Chinese room argument\", John Searle sought to refute the claim of proponents of what he calls \"strong artificial intelligence (AI)\" that a computer program can be conscious, though he does agree with advocates of \"weak AI\" that computer programs can be formatted to \"simulate\" conscious states. His own view is that consciousness has subjective, first-person causal powers by being essentially intentional due to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational the way computer programs are. To make a Turing machine that speaks Chinese, Searle imagines a room with one monolingual English speaker (Searle himself, in fact), a book that designates a combination of Chinese symbols to be output paired with Chinese symbol input, and boxes filled with Chinese symbols. In this case, the English speaker is acting as a computer and the rulebook as a program. Searle argues that with such a machine, he would be able to process the inputs to outputs perfectly without having any understanding of Chinese, nor having any idea what the questions and answers could possibly mean. If the experiment were done in English, since Searle knows English, he would be able to take questions and give answers without any algorithms for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test of answering the questions in both languages, but he is only conscious of what he is doing when he speaks English. Another way of putting the argument is to say that computer programs can pass the Turing test for processing the syntax of a language, but that the syntax cannot lead to semantic meaning in the way strong AI advocates hoped.",
"title": "Outside human adults"
},
{
"paragraph_id": 92,
"text": "In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated. Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was \"causal powers\" of the sort that the brain has and that computers lack. But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test, but the robotic version, which requires grounding the robot's words in the robot's sensorimotor capacity to categorize and interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research on embodied cognition and situated cognition.",
"title": "Outside human adults"
},
{
"paragraph_id": 93,
"text": "In 2014, Victor Argonov has suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, but not refute the existence of consciousness. A positive result proves that a machine is conscious but a negative result proves nothing. For example, absence of philosophical judgments may be caused by lack of the machine's intellect, not by absence of consciousness.",
"title": "Outside human adults"
},
{
"paragraph_id": 94,
"text": "William James is usually credited with popularizing the idea that human consciousness flows like a stream, in his Principles of Psychology of 1890.",
"title": "Stream of consciousness"
},
{
"paragraph_id": 95,
"text": "According to James, the \"stream of thought\" is governed by five characteristics:",
"title": "Stream of consciousness"
},
{
"paragraph_id": 96,
"text": "A similar concept appears in Buddhist philosophy, expressed by the Sanskrit term Citta-saṃtāna, which is usually translated as mindstream or \"mental continuum\". Buddhist teachings describe that consciousness manifests moment to moment as sense impressions and mental phenomena that are continuously changing. The teachings list six triggers that can result in the generation of different mental events. These triggers are input from the five senses (seeing, hearing, smelling, tasting or touch sensations), or a thought (relating to the past, present or the future) that happen to arise in the mind. The mental events generated as a result of these triggers are: feelings, perceptions and intentions/behaviour. The moment-by-moment manifestation of the mind-stream is said to happen in every person all the time. It even happens in a scientist who analyzes various phenomena in the world, or analyzes the material body including the organ brain. The manifestation of the mindstream is also described as being influenced by physical laws, biological laws, psychological laws, volitional laws, and universal laws. The purpose of the Buddhist practice of mindfulness is to understand the inherent nature of the consciousness and its characteristics.",
"title": "Stream of consciousness"
},
{
"paragraph_id": 97,
"text": "In the West, the primary impact of the idea has been on literature rather than science: \"stream of consciousness as a narrative mode\" means writing in a way that attempts to portray the moment-to-moment thoughts and experiences of a character. This technique perhaps had its beginnings in the monologs of Shakespeare's plays and reached its fullest development in the novels of James Joyce and Virginia Woolf, although it has also been used by many other noted writers.",
"title": "Stream of consciousness"
},
{
"paragraph_id": 98,
"text": "Here, for example, is a passage from Joyce's Ulysses about the thoughts of Molly Bloom:",
"title": "Stream of consciousness"
},
{
"paragraph_id": 99,
"text": "Yes because he never did a thing like that before as ask to get his breakfast in bed with a couple of eggs since the City Arms hotel when he used to be pretending to be laid up with a sick voice doing his highness to make himself interesting for that old faggot Mrs Riordan that he thought he had a great leg of and she never left us a farthing all for masses for herself and her soul greatest miser ever was actually afraid to lay out 4d for her methylated spirit telling me all her ailments she had too much old chat in her about politics and earthquakes and the end of the world let us have a bit of fun first God help the world if all the women were her sort down on bathingsuits and lownecks of course nobody wanted her to wear them I suppose she was pious because no man would look at her twice I hope Ill never be like her a wonder she didnt want us to cover our faces but she was a welleducated woman certainly and her gabby talk about Mr Riordan here and Mr Riordan there I suppose he was glad to get shut of her.",
"title": "Stream of consciousness"
},
{
"paragraph_id": 100,
"text": "To most philosophers, the word \"consciousness\" connotes the relationship between the mind and the world. To writers on spiritual or religious topics, it frequently connotes the relationship between the mind and God, or the relationship between the mind and deeper truths that are thought to be more fundamental than the physical world.",
"title": "Spiritual approaches"
},
{
"paragraph_id": 101,
"text": "The Canadian psychiatrist Richard Maurice Bucke, author of the 1901 book Cosmic Consciousness: A Study in the Evolution of the Human Mind, distinguished between three types of consciousness: 'Simple Consciousness', awareness of the body, possessed by many animals; 'Self Consciousness', awareness of being aware, possessed only by humans; and 'Cosmic Consciousness', awareness of the life and order of the universe, possessed only by humans who have attained \"intellectual enlightenment or illumination\".",
"title": "Spiritual approaches"
},
{
"paragraph_id": 102,
"text": "Another thorough account of the spiritual approach is Ken Wilber's 1977 book The Spectrum of Consciousness, a comparison of western and eastern ways of thinking about the mind. Wilber described consciousness as a spectrum with ordinary awareness at one end, and more profound types of awareness at higher levels.",
"title": "Spiritual approaches"
},
{
"paragraph_id": 103,
"text": "Other examples include the various levels of spiritual consciousness presented by Prem Saran Satsangi and Stuart Hameroff.",
"title": "Spiritual approaches"
}
] | Consciousness, at its simplest, is awareness of internal and external existence. However, its nature has led to millennia of analyses, explanations and debate among philosophers, theologians, and scientists. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition. Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness, either continuously changing or not. The disparate range of research, notions and speculations raises the question of whether the right questions are being asked. Examples of the range of descriptions, definitions or explanations are: simple wakefulness, one's sense of selfhood or soul explored by "looking within"; being a metaphorical "stream" of contents, or being a mental state, mental event or mental process of the brain. | 2001-11-04T10:08:54Z | 2023-12-31T18:00:37Z | [
"Template:Spoken Wikipedia",
"Template:Quote box",
"Template:Efn",
"Template:Center",
"Template:Colend",
"Template:Reflist",
"Template:Cite AV media",
"Template:Div col",
"Template:Cite IEP",
"Template:Short description",
"Template:About",
"Template:Lang",
"Template:Em",
"Template:Main",
"Template:Notelist",
"Template:Wikiquote",
"Template:Authority control",
"Template:Citation needed",
"Template:Portal",
"Template:Cols",
"Template:Spirituality-related topics",
"Template:Further",
"Template:Third-party inline",
"Template:Dead link",
"Template:Wikibooks inline",
"Template:Philosophy of mind",
"Template:Distinguish",
"Template:See also",
"Template:Cite web",
"Template:Webarchive",
"Template:Wiktionary-inline",
"Template:R",
"Template:Failed verification",
"Template:Cite encyclopedia",
"Template:Cite book",
"Template:Library resources box",
"Template:Consciousness",
"Template:Cite journal",
"Template:ISBN",
"Template:Div col end",
"Template:Commons category",
"Template:Cite SEP",
"Template:Footer Neuropsychology",
"Template:Mental processes",
"Template:Good article",
"Template:Use American English",
"Template:Blockquote",
"Template:Rp",
"Template:Self-published source",
"Template:Cbignore"
] | https://en.wikipedia.org/wiki/Consciousness |
5,665 | Currency | A currency is a standardization of money in any form, in use or circulation as a medium of exchange, for example banknotes and coins. A more general definition is that a currency is a system of money in common use within a specific environment over time, especially for people in a nation state. Under this definition, the British Pound sterling (£), euros (€), Japanese yen (¥), and U.S. dollars (US$) are examples of (government-issued) fiat currencies. Currencies may act as stores of value and be traded between nations in foreign exchange markets, which determine the relative values of the different currencies. Currencies in this sense are either chosen by users or decreed by governments, and each type has limited boundaries of acceptance; i.e., legal tender laws may require a particular unit of account for payments to government agencies.
Other definitions of the term currency appear in the respective synonymous articles: banknote, coin, and money. This article uses the definition which focuses on the currency systems of countries.
One can classify currencies into three monetary systems: fiat money, commodity money, and representative money, depending on what guarantees a currency's value (the economy at large vs. the government's precious metal reserves). Some currencies function as legal tender in certain jurisdictions, or for specific purposes, such as payment to a government (taxes), or government agencies (fees, fines). Others simply get traded for their economic value.
The concept of a digital currency has arisen in recent years. Whether government-backed digital notes and coins (such as the digital renminbi in China) will be successfully developed and implemented remains unknown. Digital currencies that are not issued by a government monetary authority, such as cryptocurrencies like Bitcoin, are different because their value is market-dependent and has no safety net. Various countries have expressed concern about the opportunities that cryptocurrencies create for illegal activities such as scams, ransomware (extortion), money laundering and terrorism. In 2014, the United States IRS advised that virtual currency is treated as property for federal income-tax purposes, and it provides examples of how long-standing tax principles applicable to transactions involving property apply to virtual currency.
Originally, currency was a form of receipt, representing grain stored in temple granaries in Sumer in ancient Mesopotamia and in Ancient Egypt.
In this first stage of currency, metals were used as symbols to represent value stored in the form of commodities. This formed the basis of trade in the Fertile Crescent for over 1500 years. However, the collapse of the Near Eastern trading system pointed to a flaw: in an era where there was no place that was safe to store value, the value of a circulating medium could only be as sound as the forces that defended that store. A trade could only reach as far as the credibility of that military. By the late Bronze Age, however, a series of treaties had established safe passage for merchants around the Eastern Mediterranean, spreading from Minoan Crete and Mycenae in the northwest to Elam and Bahrain in the southeast. It is not known what was used as a currency for these exchanges, but it is thought that oxhide-shaped ingots of copper, produced in Cyprus, may have functioned as a currency.
It is thought that the increase in piracy and raiding associated with the Bronze Age collapse, possibly produced by the Peoples of the Sea, brought the trading system of oxhide ingots to an end. It was only the recovery of Phoenician trade in the 10th and 9th centuries BC that led to a return to prosperity, and the appearance of real coinage, possibly first in Anatolia with Croesus of Lydia and subsequently with the Greeks and Persians. In Africa, many forms of value store have been used, including beads, ingots, ivory, various forms of weapons, livestock, the manilla currency, and ochre and other earth oxides. The manilla rings of West Africa were one of the currencies used from the 15th century onwards to sell slaves. African currency is still notable for its variety, and in many places, various forms of barter still apply.
The prevalence of metal coins possibly led to the metal itself being the store of value: first copper, then both silver and gold, and at one point also bronze. Today other non-precious metals are used for coins. Metals were mined, weighed, and stamped into coins. This was to assure the individual accepting the coin that he was getting a certain known weight of precious metal. Coins could be counterfeited, but the existence of standard coins also created a new unit of account, which helped lead to banking. Archimedes' principle provided the next link: coins could now be easily tested for their fine metal content, and thus the value of a coin could be determined even if it had been shaved, debased or otherwise tampered with (see Numismatics).
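To make the Archimedes test concrete: weighing the coin gives its mass, the water it displaces when submerged gives its volume, and mass divided by volume gives a density that can be compared against the pure metal. The sketch below shows the arithmetic; the densities of gold and silver are standard reference values, while the function name, tolerance, and example coin are illustrative assumptions rather than historical data.

```python
GOLD_DENSITY = 19.3    # g/cm^3, standard reference value
SILVER_DENSITY = 10.5  # g/cm^3, for comparison

def looks_like_pure_gold(mass_g: float, displaced_cm3: float,
                         tolerance: float = 0.05) -> bool:
    """Compare a coin's measured density with that of pure gold.

    mass_g: the coin's weight in air, in grams.
    displaced_cm3: water displaced by the submerged coin, in cm^3.
    tolerance: accepted relative deviation from gold's density.
    """
    density = mass_g / displaced_cm3
    return abs(density - GOLD_DENSITY) / GOLD_DENSITY <= tolerance

# A genuine 10 g gold coin displaces about 10 / 19.3 ~= 0.52 cm^3:
print(looks_like_pure_gold(10.0, 0.52))  # True
# A coin of equal weight alloyed with silver is bulkier (~12.5 g/cm^3):
print(looks_like_pure_gold(10.0, 0.80))  # False
```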
Most major economies using coinage had several tiers of coins of different values, made of copper, silver, and gold. Gold coins were the most valuable and were used for large purchases, payment of the military, and backing of state activities. Units of account were often defined as the value of a particular type of gold coin. Silver coins were used for midsized transactions, and sometimes also defined a unit of account, while coins of copper or silver, or some mixture of them (see debasement), might be used for everyday transactions. This system had been used in ancient India since the time of the Mahajanapadas. The exact ratios between the values of the three metals varied greatly between different eras and places; for example, the opening of silver mines in the Harz mountains of central Europe made silver relatively less valuable, as did the flood of New World silver after the Spanish conquests. However, the rarity of gold consistently made it more valuable than silver, and likewise silver was consistently worth more than copper.
In premodern China, the need for lending and for a medium of exchange that was less physically cumbersome than large numbers of copper coins led to the introduction of paper money, i.e. banknotes. Their introduction was a gradual process that lasted from the late Tang dynasty (618–907) into the Song dynasty (960–1279). It began as a means for merchants to exchange heavy coinage for receipts of deposit issued as promissory notes by wholesalers' shops. These notes were valid for temporary use in a small regional territory. In the 10th century, the Song dynasty government began to circulate these notes amongst the traders in its monopolized salt industry. The Song government granted several shops the right to issue banknotes, and in the early 12th century the government finally took over these shops to produce state-issued currency. Yet the banknotes issued were still only locally and temporarily valid: it was not until the mid 13th century that a standard and uniform government issue of paper money became an acceptable nationwide currency. The already widespread methods of woodblock printing and then Bi Sheng's movable type printing by the 11th century were the impetus for the mass production of paper money in premodern China.
At around the same time in the medieval Islamic world, a vigorous monetary economy was created during the 7th–12th centuries on the basis of the expanding levels of circulation of a stable high-value currency (the dinar). Innovations introduced by Muslim economists, traders and merchants include the earliest uses of credit, cheques, promissory notes, savings accounts, transaction accounts, loaning, trusts, exchange rates, the transfer of credit and debt, and banking institutions for loans and deposits.
In Europe, paper currency was first introduced on a regular basis in Sweden in 1661 (although Washington Irving records an earlier emergency use of it, by the Spanish in a siege during the Conquest of Granada). As Sweden was rich in copper, many copper coins were in circulation, but its relatively low value necessitated extraordinarily big coins, often weighing several kilograms.
The advantages of paper currency were numerous: it reduced the need to transport gold and silver, which was risky; it facilitated loans of gold or silver at interest, since the underlying specie (money in the form of gold or silver coins rather than notes) never left the possession of the lender until someone else redeemed the note; and it allowed a division of currency into credit- and specie-backed forms. It also enabled the sale of investments in joint-stock companies and the redemption of those shares in paper.
But there were also disadvantages. First, since a note has no intrinsic value, there was nothing to stop issuing authorities from printing more notes than they had specie to back them with. Second, because this increased the money supply, it increased inflationary pressures, a fact observed by David Hume in the 18th century. Thus paper money would often lead to an inflationary bubble, which could collapse if people began demanding hard money, causing the demand for paper notes to fall to zero. The printing of paper money was also associated with wars and the financing of wars, and was therefore regarded as part of maintaining a standing army. For these reasons, paper currency was regarded with suspicion and hostility in Europe and America. It was also addictive, since the speculative profits of trade and capital creation were quite large. Major nations established mints to print money and mint coins, and branches of their treasury to collect taxes and hold gold and silver stock.
At that time, both silver and gold were considered legal tender and accepted by governments for taxes. However, the instability in the exchange rate between the two grew over the course of the 19th century, with the increases both in the supply of these metals, particularly silver, and in trade. The parallel use of both metals is called bimetallism, and the attempt to create a bimetallic standard, where both gold- and silver-backed currency remained in circulation, occupied the efforts of inflationists. Governments at this point could use currency as an instrument of policy, printing paper currency such as the United States greenback to pay for military expenditures. They could also set the terms at which they would redeem notes for specie, by limiting the amount of purchase, or the minimum amount that could be redeemed.
By 1900, most of the industrializing nations were on some form of gold standard, with paper notes and silver coins constituting the circulating medium. Private banks and governments across the world followed Gresham's law: keeping the gold and silver they received but paying out in notes. This did not happen all around the world at the same time, but occurred sporadically, generally in times of war or financial crisis, beginning in the early 20th century and continuing across the world until the late 20th century, when the regime of floating fiat currencies came into force. One of the last countries to break away from the gold standard was the United States in 1971, an action which was known as the Nixon shock. No country has an enforceable gold standard or silver standard currency system.
A banknote (or bill) is a type of currency commonly used as legal tender in many jurisdictions. Together with coins, banknotes make up the cash form of a currency. Banknotes were initially mostly paper, but Australia's Commonwealth Scientific and Industrial Research Organisation developed a polymer currency in the 1980s; it went into circulation on the nation's bicentenary in 1988. Polymer banknotes had already been introduced in the Isle of Man in 1983. As of 2016, polymer currency is used in over 20 countries (over 40 if counting commemorative issues); it dramatically increases the life span of banknotes and reduces counterfeiting.
The currency used is based on the concept of lex monetae: the principle that a sovereign state decides which currency it shall use. (See Fiat currency.)
In 1978 the International Organization for Standardization published a system of three-letter alphabetic codes (ISO 4217) to denote currencies. These codes are based on two initial letters allocated to a specific country and a final letter denoting a specific monetary unit of account.
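As a rough illustration of that pattern, the sketch below composes 4217-style codes from an ISO 3166 country prefix plus the initial of the monetary unit. It is a hypothetical helper, not an official ISO library, and supranational codes such as EUR do not follow the country-prefix rule.

```python
# Hypothetical sketch of the ISO 4217 naming pattern: two letters for the
# country (from ISO 3166) plus one letter for the monetary unit. Not an
# official library; supranational codes such as EUR are exceptions.
def build_code(country_alpha2: str, unit_name: str) -> str:
    """Compose a 4217-style code, e.g. ('US', 'Dollar') -> 'USD'."""
    return (country_alpha2 + unit_name[0]).upper()

examples = {
    ("US", "Dollar"): "USD",   # US dollar
    ("GB", "Pound"): "GBP",    # pound sterling
    ("JP", "Yen"): "JPY",      # Japanese yen
    ("CH", "Franc"): "CHF",    # Swiss franc
}

for (country, unit), expected in examples.items():
    assert build_code(country, unit) == expected
```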
Many currencies use a currency symbol. These are not subject to international standards and are not unique: the dollar sign in particular has many uses.
Distinct from centrally controlled government-issued currencies, private decentralized trust-reduced networks support alternative currencies such as Bitcoin and Ethereum's ether, which are classified as cryptocurrencies since transactions are assured through cryptographic signatures validated by all users. With few exceptions, these currencies are not asset-backed. The U.S. Commodity Futures Trading Commission has declared Bitcoin (and, by extension, similar products) to be a commodity under the Commodity Exchange Act.
There are also branded currencies, for example 'obligation'-based stores of value such as the quasi-regulated BarterCard, loyalty points (credit cards, airlines) or game credits (MMO games), which are based on the reputation of commercial products.
Historically, pseudo-currencies have also included company scrip, a form of wages that could only be exchanged in company stores owned by the employers. Modern token money, such as the tokens operated by local exchange trading systems (LETS), is a form of barter rather than being a true currency.
A currency may also be Internet-based and digital: Bitcoin, for instance, is not tied to any specific country, while the IMF's SDR is based on a basket of currencies (and the assets held).
Possession and sale of alternative forms of currencies is often outlawed by governments in order to preserve the legitimacy of the constitutional currency for the benefit of all citizens. For example, Article I, section 8, clause 5 of the United States Constitution delegates to Congress the power to coin money and to regulate the value thereof. This power was delegated to Congress in order to establish and preserve a uniform standard of value and to ensure a singular monetary system for all purchases and debts in the United States, public and private. Along with the power to coin money, the United States Congress has the concurrent power to restrain the circulation of money which is not issued under its own authority in order to protect and preserve the constitutional currency. It is a violation of federal law for individuals or organizations to create private coin or currency systems to compete with the official coinage and currency of the United States.
Commonly, a central bank has the exclusive power to issue all forms of currency, including coins and banknotes (fiat money), and to restrain the circulation of alternative currencies within its own area of circulation (a country or group of countries); it regulates the production of currency by banks (credit) through monetary policy.
An exchange rate is a price at which two currencies can be exchanged against each other. This is used for trade between the two currency zones. Exchange rates can be classified as either floating or fixed. In the former, day-to-day movements in exchange rates are determined by the market; in the latter, governments intervene in the market to buy or sell their currency to balance supply and demand at a static exchange rate.
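As a worked illustration (the quoted rate below is assumed for the example, not a live market price), conversion between two currency zones is a multiplication by the rate in one direction and a division in the other:

```python
# Illustration with an assumed (not live) quote: 1 USD buys 0.92 EUR.
EUR_PER_USD = 0.92

def usd_to_eur(amount_usd: float) -> float:
    """Convert US dollars to euros at the quoted rate."""
    return amount_usd * EUR_PER_USD

def eur_to_usd(amount_eur: float) -> float:
    """Convert euros back to US dollars at the same quote."""
    return amount_eur / EUR_PER_USD

print(usd_to_eur(100.0))  # 92.0
print(eur_to_usd(92.0))   # 100.0 (ignoring bid-ask spreads and fees)
```

Under a floating regime the quote changes continuously with the market; under a fixed regime the monetary authority intervenes to keep it at the pegged level.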
In cases where a country has control of its own currency, that control is exercised either by a central bank or by a Ministry of Finance. The institution that has control of monetary policy is referred to as the monetary authority. Monetary authorities have varying degrees of autonomy from the governments that create them. A monetary authority is created and supported by its sponsoring government, so independence can be reduced by the legislative or executive authority that creates it.
Several countries can use the same name for their own separate currencies (for example, a dollar in Australia, Canada, and the United States). By contrast, several countries can also use the same currency (for example, the euro or the CFA franc), or one country can declare the currency of another country to be legal tender. For example, Panama and El Salvador have declared US currency to be legal tender, and from 1791 to 1857, Spanish dollars were legal tender in the United States. At various times countries have either re-stamped foreign coins or used currency boards, issuing one note of currency for each note of a foreign government held, as Ecuador currently does.
Each currency typically has a main currency unit (the dollar, for example, or the euro) and a fractional unit, often defined as 1⁄100 of the main unit: 100 cents = 1 dollar, 100 centimes = 1 franc, 100 pence = 1 pound, although units of 1⁄10 or 1⁄1000 occasionally also occur. Some currencies do not have any smaller units at all, such as the Icelandic króna and the Japanese yen.
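Because of this 1⁄100 convention, financial software often stores amounts as integer counts of the minor unit and converts only for display. The sketch below is illustrative only; it assumes a decimal currency with 100 subunits (a yen-like currency with no subunit would simply use 1).

```python
from decimal import Decimal

# Illustrative only: keep money as an integer number of minor units
# (e.g. cents) to avoid binary floating-point rounding, and convert to
# the main unit for display. Assumes 100 subunits per main unit.
SUBUNITS_PER_UNIT = 100

def to_major(minor_units: int) -> Decimal:
    """1999 cents -> Decimal('19.99') dollars."""
    return Decimal(minor_units) / SUBUNITS_PER_UNIT

def to_minor(major: str) -> int:
    """'19.99' dollars -> 1999 cents."""
    return int(Decimal(major) * SUBUNITS_PER_UNIT)

assert to_major(1999) == Decimal("19.99")
assert to_minor("19.99") == 1999
```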
Mauritania and Madagascar are the only remaining countries whose theoretical fractional units are not based on the decimal system: the Mauritanian ouguiya is in theory divided into 5 khoums, while the Malagasy ariary is theoretically divided into 5 iraimbilanja. In these countries, words like dollar or pound "were simply names for given weights of gold". Due to inflation, khoums and iraimbilanja have in practice fallen into disuse. (See non-decimal currencies for other historic currencies with non-decimal divisions.)
Subject to variation around the world, local currency can be converted to another currency and vice versa, with or without central bank or government intervention. Such conversions take place in the foreign exchange market. Based on these restrictions or on free and ready conversion features, currencies are classified as fully convertible, partially convertible, or nonconvertible.
The supply-demand relationship between currencies, and hence their exchange ratio, is determined by three aspects: trade in goods and services, capital flows, and national policies.
Trade in goods and services
Through cost pass-through, goods and services circulating within a country (such as hotels, tourism, catering, advertising, and household services) indirectly affect the cost of traded goods and services and the prices of exports. Goods and services directly involved in international trade are therefore not the only influence on the exchange rate: the large numbers of international tourists and overseas students also move services and goods across borders. More broadly, the competitiveness of a country's goods and services in global markets directly affects the movement of its exchange rate.
Capital flows
National currencies are traded on international markets for investment purposes. Investment opportunities in each country attract investors from other countries, so foreign currencies accumulate as reserves in each country's central bank. The exchange rate mechanism, in which currencies are quoted continuously between countries, rests on foreign exchange markets in which currencies are invested in by individuals and traded or speculated on by central banks and investment institutions. In addition, changes in interest rates, capital market fluctuations, and shifts in investment opportunities affect global capital inflows and outflows, and exchange rates fluctuate accordingly.
National policies
A country's foreign trade, monetary, and fiscal policies all affect exchange rate fluctuations. Foreign trade policy includes measures such as tariffs and import standards for traded commodities. The impact of monetary policy on the quantity and yield of money directly determines changes in the international exchange rate. Fiscal policies, such as transfer payments and taxation ratios, shape the profitability of capital and the pace of economic development, while the ratio of national debt issuance to the deficit determines a country's repayment capacity and credit rating. Together, such policies determine the mechanism linking domestic and foreign currencies and therefore have a significant impact on how exchange rates are formed.
Currency convertibility is closely linked to economic development and finance. Countries must meet strict conditions to achieve currency convertibility, which is an effective way for them to improve their economies. The currencies of some countries or regions, such as the US dollar, the Australian dollar and the Japanese yen, are freely convertible. The requirements for currency convertibility can be roughly divided into four parts, described in turn below:
With a freely convertible currency, domestic firms must compete fiercely with their foreign counterparts, and how that competition develops affects how well convertibility works in practice. Sound microeconomic conditions are thus a prerequisite for sound macroeconomic conditions.
Since currency convertibility entails the cross-border flow of goods and capital, it has an impact on the macroeconomy. This requires that the national economy be in a normal and orderly state, that is, free of serious inflation and economic overheating. In addition, the government should use macroeconomic policy to make measured adjustments to the impact of currency conversion on the economy.
A sustainable balance of payments is the main indicator of a sound economic structure. Currency convertibility can strain the sustainability of the balance of payments and weakens the government's direct control over international economic transactions, so to forestall foreign exchange shortages the government needs adequate international reserves.
The level of the exchange rate is an important factor in maintaining exchange rate stability, both before and after currency convertibility. An exchange rate that is too high or too low can easily trigger speculation and undermine the stability of the macroeconomy and financial markets. A proper exchange rate regime is therefore crucial to maintaining an appropriate exchange rate level.
In economics, a local currency is a currency not backed by a national government and intended to trade only in a small area. Advocates such as Jane Jacobs argue that this enables an economically depressed region to pull itself up, by giving the people living there a medium of exchange that they can use to exchange services and locally produced goods (in a broader sense, this is the original purpose of all money). Opponents of this concept argue that local currency creates a barrier that can interfere with economies of scale and comparative advantage, and that in some cases it can serve as a means of tax evasion.
Local currencies can also come into being when there is economic turmoil involving the national currency. An example of this is the Argentinian economic crisis of 2002 in which IOUs issued by local governments quickly took on some of the characteristics of local currencies.
One of the best examples of a local currency is the original LETS currency, founded on Vancouver Island in the early 1980s. In 1982, the Bank of Canada's lending rates ran up to 14%, which drove chartered bank lending rates as high as 19%. The resulting currency and credit scarcity left island residents with few options other than to create a local currency.
The following table lists estimates of the 20 currencies most frequently used in world payments in October 2023, according to SWIFT.
"Template:Wikt",
"Template:Numismatics",
"Template:Further",
"Template:Col-end",
"Template:Cite news",
"Template:Frac",
"Template:Commons category-inline",
"Template:Authority control",
"Template:Short description",
"Template:Other uses",
"Template:Use mdy dates",
"Template:Efn",
"Template:Flagicon",
"Template:Portal",
"Template:Cite book",
"Template:For",
"Template:Notelist",
"Template:Wikidata property",
"Template:Curlie",
"Template:Means of Exchange",
"Template:As of",
"Template:Most traded currencies",
"Template:Cite journal",
"Template:More",
"Template:Refn",
"Template:Nowrap",
"Template:Col-begin",
"Template:Cite web",
"Template:Wikiquote-inline",
"Template:Economics",
"Template:Unreferenced section",
"Template:Refimprove",
"Template:Main",
"Template:Col-break",
"Template:NoteFoot",
"Template:Reflist"
] | https://en.wikipedia.org/wiki/Currency |
5,666 | Central bank | A central bank, reserve bank, or monetary authority is an institution that manages the currency and monetary policy of a country or monetary union. In contrast to a commercial bank, a central bank possesses a monopoly on increasing the monetary base. Many central banks also have supervisory or regulatory powers to ensure the stability of commercial banks in their jurisdiction, to prevent bank runs, and in some cases also to enforce policies on financial consumer protection and against bank fraud, money laundering, or terrorism financing.
Central banks in most developed nations are usually set up to be institutionally independent from political interference, even though governments typically have governance rights over them, legislative bodies exercise scrutiny, and central banks frequently do show responsiveness to politics.
Issues such as central bank independence, central bank policies, the rhetoric of central bank governors' discourse, and the premises of the state's macroeconomic policies (monetary and fiscal policy) are a focus of contention and criticism by some policymakers, researchers and specialized business, economics and finance media.
The notion of central banks as a separate category from other banks has emerged gradually, and only fully coalesced in the 20th century. In the aftermath of World War I, leading central bankers of the United Kingdom and the United States, respectively Montagu Norman and Benjamin Strong, agreed on a definition of central banks that was both positive and normative. Since that time, central banks have been generally distinguishable from other financial institutions, except in so-called single-tier communist systems such as Hungary's between 1950 and 1987, where the Hungarian National Bank operated alongside three other major state-owned banks. For earlier periods, which institutions do or do not count as central banks is often not clear-cut.
Correlatively, different scholars have held different views about the timeline of emergence of the first central banks. A widely held view in the second half of the 20th century has been that Stockholms Banco (est. 1657), as the original issuer of banknotes, counted as the oldest central bank, and that consequently its successor the Sveriges Riksbank was the oldest central bank in continuous operation, with the Bank of England as second-oldest and direct or indirect model for all subsequent central banks. That view has persisted in some early-21st-century publications. In more recent scholarship, however, the issuance of banknotes has often been viewed as just one of several techniques to provide central bank money, defined as financial money (in contrast to commodity money) of the highest quality. Under that definition, municipal banks of the late medieval and early modern periods, such as the Taula de canvi de Barcelona (est. 1401) or Bank of Amsterdam (est. 1609), issued central bank money and count as early central banks.
There is no universal terminology for the name of a central bank. Early central banks were often the only or principal formal financial institution in their jurisdiction, and were consequently often named "bank of" the relevant city's or country's name, e.g. the Bank of Amsterdam, Bank of Hamburg, Bank of England, or Wiener Stadtbank. Naming practices subsequently evolved as more central banks were established. They include, with references to the date when the bank acquired its current name:
In some cases, the local-language name is used in English-language practice, e.g. Sveriges Riksbank (est. 1668, current name in use since 1866), De Nederlandsche Bank (est. 1814), Deutsche Bundesbank (est. 1957), or Bangko Sentral ng Pilipinas (est. 1993).
Some commercial banks have names suggestive of central banks, even if they are not: examples are the State Bank of India and Central Bank of India, National Bank of Greece, Banco do Brasil, National Bank of Pakistan, Bank of China, Bank of Cyprus, or Bank of Ireland, as well as Deutsche Bank. Some but not all of these institutions had assumed central banking roles in the past.
The leading executive of a central bank is usually known as the Governor, President, or Chair.
The widespread adoption of central banking is a rather recent phenomenon. At the start of the 20th century, approximately two-thirds of sovereign states did not have a central bank. Waves of central bank adoption occurred in the interwar period and in the aftermath of World War II.
In the 20th century, central banks were often created with the intent to attract foreign capital, as bankers preferred to lend to countries with a central bank on the gold standard.
The use of money as a unit of account predates history. Government control of money is documented in the ancient Egyptian economy (2750–2150 BCE). The Egyptians measured the value of goods with a central unit called shat. Like many other currencies, the shat was linked to gold. The value of a shat in terms of goods was defined by government administrations. Other cultures in Asia Minor later materialized their currencies in the form of gold and silver coins.
The issuance of paper currency is not to be equated with central banking, even though paper currency is a form of financial money (i.e. not commodity money). The difference is that government-issued paper currency, as present e.g. in China during the Yuan dynasty, is typically not freely convertible and thus of inferior quality, occasionally leading to hyperinflation.
From the 12th century, a network of professional banks emerged primarily in Southern Europe (including Southern France, with the Cahorsins). Banks could use book money to create deposits for their customers. Thus, they had the possibility to issue, lend and transfer money autonomously without direct control from political authorities.
The Taula de canvi de Barcelona, established in 1401, is the first example of municipal, mostly public banks which pioneered central banking on a limited scale. It was soon emulated by the Bank of Saint George in the Republic of Genoa, first established in 1407, and significantly later by the Banco del Giro in the Republic of Venice and by a network of institutions in Naples that later consolidated into Banco di Napoli. Notable municipal central banks were established in the early 17th century in leading northwestern European commercial centers, namely the Bank of Amsterdam in 1609 and the Hamburger Bank in 1619. These institutions offered a public infrastructure for cashless international payments. They aimed to increase the efficiency of international trade and to safeguard monetary stability. These municipal public banks thus fulfilled comparable functions to modern central banks.
The Swedish central bank, known since 1866 as Sveriges Riksbank, was founded in Stockholm in 1668 from the remains of the failed Stockholms Banco and answered to the Riksdag of the Estates, Sweden's early modern parliament. One role of the Swedish central bank was lending money to the government.
The establishment of the Bank of England was devised by Charles Montagu, 1st Earl of Halifax, following a 1691 proposal by William Paterson. A royal charter was granted on 27 July 1694 through the passage of the Tonnage Act. The bank was given exclusive possession of the government's balances, and was the only limited-liability corporation allowed to issue banknotes. The early modern Bank of England, however, did not have all the functions of today's central banks, e.g. to regulate the value of the national currency, to finance the government, to be the sole authorized distributor of banknotes, or to function as a lender of last resort to banks suffering a liquidity crisis.
In the early 18th century, a major experiment in national central banking failed in France with John Law's Banque Royale in 1720–1721. Later in the century, France made further attempts with the Caisse d'Escompte, first created in 1767, and King Charles III established the Bank of Spain in 1782. The Russian Assignation Bank, established in 1769 by Catherine the Great, was an outlier from the general pattern of early national central banks in that it was directly owned by the Imperial Russian government, rather than private individual shareholders. In the nascent United States, Alexander Hamilton, as Secretary of the Treasury in the 1790s, set up the First Bank of the United States despite heavy opposition from Jeffersonian Republicans.
Central banks were established in many European countries during the 19th century. Napoleon created the Banque de France in 1800, in order to stabilize and develop the French economy and to improve the financing of his wars. The Bank of France remained the most important Continental European central bank throughout the 19th century. The Bank of Finland was founded in 1812, soon after Finland had been taken over from Sweden by Russia to become a grand duchy. Simultaneously, a quasi-central banking role was played by a small group of powerful family-run banking networks, typified by the House of Rothschild, with branches in major cities across Europe, as well as Hottinguer in Switzerland and Oppenheim in Germany.
The theory of central banking, even though the name was not yet widely used, evolved in the 19th century. Henry Thornton, an opponent of the real bills doctrine, was a defender of the bullionist position and a significant figure in monetary theory. Thornton's process of monetary expansion anticipated the theories of Knut Wicksell regarding the "cumulative process which restates the Quantity Theory in a theoretically coherent form". As a response to a currency crisis in 1797, Thornton wrote in 1802 An Enquiry into the Nature and Effects of the Paper Credit of Great Britain, in which he argued that the increase in paper credit did not cause the crisis. The book also gives a detailed account of the British monetary system as well as a detailed examination of the ways in which the Bank of England should act to counteract fluctuations in the value of the pound.
In the United Kingdom until the mid-nineteenth century, commercial banks were able to issue their own banknotes, and notes issued by provincial banking companies were commonly in circulation. Many consider the origins of the central bank to lie with the passage of the Bank Charter Act 1844. Under the 1844 Act, bullionism was institutionalized in Britain, creating a ratio between the gold reserves held by the Bank of England and the notes that the bank could issue. The Act also placed strict curbs on the issuance of notes by the country banks. The Bank of England took over a role of lender of last resort in the 1870s after criticism of its lacklustre response to the failure of Overend, Gurney and Company. The journalist Walter Bagehot wrote on the subject in Lombard Street: A Description of the Money Market, in which he advocated for the bank to officially become a lender of last resort during a credit crunch, sometimes referred to as "Bagehot's dictum".
In the 19th and early 20th centuries, central banks in most of Europe and Japan developed under the international gold standard. Free banking or currency boards were common at the time. Problems with collapses of banks during downturns, however, led to wider support for central banks in those nations which did not as yet possess them, for example in Australia. In the United States, the role of a central bank had been ended in the so-called Bank War of the 1830s by President Andrew Jackson. In 1913, the U.S. created the Federal Reserve System through the passing of the Federal Reserve Act.
Following World War I, the Economic and Financial Organization (EFO) of the League of Nations, influenced by the ideas of Montagu Norman and other leading policymakers and economists of the time, took an active role in promoting central bank independence, a key component of the economic orthodoxy the EFO fostered at the Brussels Conference (1920). The EFO thus directed the creation of the Oesterreichische Nationalbank in Austria, the Hungarian National Bank, the Bank of Danzig, and the Bank of Greece, as well as comprehensive reforms of the Bulgarian National Bank and the Bank of Estonia. Similar ideas were emulated in other newly independent European countries, e.g. for the National Bank of Czechoslovakia.
Brazil established a central bank in 1945, which was a precursor to the Central Bank of Brazil created twenty years later. After gaining independence, numerous African and Asian countries also established central banks or monetary unions. The Reserve Bank of India, which had been established during British colonial rule as a private company, was nationalized in 1949 following India's independence. By the early 21st century, most of the world's countries had a national central bank set up as a public sector institution, albeit with widely varying degrees of independence.
Before the near-generalized adoption of the model of national public-sector central banks, a number of economies relied on a central bank that was effectively or legally run from outside their territory. The first colonial central banks, such as the Bank of Java (est. 1828 in Batavia), the Banque de l'Algérie (est. 1851 in Algiers), or the Hongkong and Shanghai Banking Corporation (est. 1865 in Hong Kong), operated from the colony itself. Once transcontinental use of the electrical telegraph via submarine communications cable became widespread, however, new colonial banks were typically headquartered in the colonial metropolis; prominent examples included the Paris-based Banque de l'Indochine (est. 1875), Banque de l'Afrique Occidentale (est. 1901), and Banque de Madagascar (est. 1925). The Banque de l'Algérie's head office was relocated from Algiers to Paris in 1900.
In some cases, independent countries which did not have a strong domestic base of capital accumulation and were critically reliant on foreign funding found advantage in granting a central banking role to banks that were effectively or even legally foreign. A seminal case was the Imperial Ottoman Bank, established in 1863 as a French-British joint venture, and a particularly egregious one was the Paris-based National Bank of Haiti (est. 1881), which captured significant financial resources from the economically struggling albeit independent nation of Haiti. Other cases include the London-based Imperial Bank of Persia, established in 1885, and the Rome-based National Bank of Albania, established in 1925. The State Bank of Morocco was established in 1907 with international shareholding and headquarters functions distributed between Paris and Tangier, a half-decade before the country lost its independence. In other cases, organized currency unions have been managed by a national rather than a supranational central bank, such as the Belgium–Luxembourg Economic Union established in 1921, under which Luxembourg had no central bank of its own and monetary matters were handled by the National Bank of Belgium. The present-day Common Monetary Area of Southern Africa has comparable features.
Yet another pattern was set in countries where federated or otherwise sub-sovereign entities had wide policy autonomy that was echoed to varying degrees in the organization of the central bank itself. These included, for example, the Austro-Hungarian Bank from 1878 to 1918, the U.S. Federal Reserve in its first two decades, the Bank deutscher Länder between 1948 and 1957, or the National Bank of Yugoslavia between 1972 and 1993. Conversely, some countries that are politically organized as federations, such as today's Canada, Mexico, or Switzerland, rely on a unitary central bank.
In the second half of the 20th century, the dismantling of colonial systems left some groups of countries using the same currency even though they had achieved national independence. In contrast to the unraveling of Austria-Hungary and the Ottoman Empire after World War I, some of these countries decided to keep using a common currency, thus forming a monetary union, and to entrust its management to a common central bank. Examples include the Eastern Caribbean Currency Authority, the Central Bank of West African States, and the Bank of Central African States.
The concept of supranational central banking took on a globally significant dimension with the Economic and Monetary Union of the European Union and the establishment of the European Central Bank (ECB) in 1998. In 2014, the ECB took on the additional role of banking supervision as part of the newly established policy of European banking union.
The primary role of central banks is usually to maintain price stability, defined as a specific level of inflation. Inflation is defined either as the devaluation of a currency or, equivalently, a rise in prices relative to the currency. Most central banks currently have an inflation target close to 2%.
Since inflation lowers real wages, Keynesians view inflation as the solution to involuntary unemployment. However, "unanticipated" inflation leads to lender losses as the real interest rate will be lower than expected. Thus, Keynesian monetary policy aims for a steady rate of inflation.
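As a worked illustration of the lender-loss mechanism, the following sketch applies the Fisher relation (real rate ≈ nominal rate minus inflation) to hypothetical figures; none of the numbers come from the source.

    # Illustrative only: all figures are hypothetical.
    nominal_rate = 0.05        # nominal rate agreed when the loan was made
    expected_inflation = 0.02  # inflation anticipated at contract time
    actual_inflation = 0.04    # inflation turns out higher than expected

    expected_real = nominal_rate - expected_inflation  # ex-ante real rate
    realized_real = nominal_rate - actual_inflation    # ex-post real rate

    print(f"expected real rate: {expected_real:.1%}")  # 3.0%
    print(f"realized real rate: {realized_real:.1%}")  # 1.0%
    # The shortfall is the lender's loss from unanticipated inflation.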
Central banks as monetary authorities in representative states are intertwined through globalized financial markets. As the regulator of one of the most widespread currencies in the global economy, the US Federal Reserve plays an outsized role in the international monetary market. As the main supplier of US dollars and the setter of the US dollar base rate, the Federal Reserve implements a set of requirements to control inflation and unemployment in the US.
Frictional unemployment is the time period between jobs during which a worker is searching for a new job or transitioning from one job to another. Unemployment beyond frictional unemployment is classified as unintended unemployment.
For example, structural unemployment is a form of unemployment resulting from a mismatch between demand in the labour market and the skills and locations of the workers seeking employment. Macroeconomic policy generally aims to reduce unintended unemployment.
Keynes labeled any jobs that would be created by a rise in the price of wage-goods (i.e., a decrease in real wages) as involuntary unemployment.
Economic growth can be enhanced by investment in capital, such as more or better machinery. A low interest rate implies that firms can borrow money to invest in their capital stock and pay less interest for it. Lowering the interest rate is therefore considered to encourage economic growth and is often used to counter periods of low economic growth. On the other hand, raising the interest rate is often used in times of high economic growth as a counter-cyclical device to keep the economy from overheating and to avoid market bubbles.
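A minimal numeric sketch of why lower rates encourage investment, using made-up figures: the interest burden on a given capital loan scales linearly with the policy-influenced lending rate.

    # Hypothetical figures: annual interest on a loan used to buy machinery.
    loan = 1_000_000
    for rate in (0.02, 0.06):
        print(f"rate {rate:.0%}: annual interest {loan * rate:,.0f}")
    # rate 2%: annual interest 20,000
    # rate 6%: annual interest 60,000 -- a higher hurdle for the same project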
Further goals of monetary policy are stability of interest rates, of the financial market, and of the foreign exchange market. Goals frequently cannot be separated from each other and often conflict. Costs must therefore be carefully weighed before policy implementation.
In the aftermath of the Paris Agreement on climate change, a debate is now underway on whether central banks should also pursue environmental goals as part of their activities. In 2017, eight central banks formed the Network for Greening the Financial System (NGFS) to evaluate how central banks can use their regulatory and monetary policy tools to support climate change mitigation. Today more than 70 central banks are part of the NGFS.
In January 2020, the European Central Bank announced that it would take climate considerations into account when reviewing its monetary policy framework.
Proponents of "green monetary policy" are proposing that central banks include climate-related criteria in their collateral eligibility frameworks, when conducting asset purchases and also in their refinancing operations. But critics such as Jens Weidmann are arguing it is not central banks' role to conduct climate policy. China is among the most advanced central banks when it comes to green monetary policy. It has given green bonds preferential status to lower their yield and uses window policy to direct green lending.
Potential stranded assets in the economy highlight one example of the transition risk embedded in climate change, with potential cascade effects throughout the financial system. In response, four broad types of intervention (methodology development, investor encouragement, financial regulation, and policy toolkits) have been adopted by or suggested for central banks.
Achieving the 2°C threshold revolves in part around the development of climate-aligned financial regulations. A significant challenge lies in the lack of awareness among corporations and investors, driven by poor information flow and insufficient disclosure. To address this issue, regulators and central banks are promoting transparency, integrated reporting, and exposure specifications, with the goal of promoting long-term low-carbon emission goals rather than short-term financial objectives. These regulations aim to assess risk comprehensively, identifying carbon-intensive assets and increasing their capital requirements. This should make high-carbon assets less attractive while favoring low-carbon assets, which have historically been perceived as high-risk, low-volatility investment vehicles.
Quantitative easing is a potential measure that central banks could apply to achieve a low-carbon transition. Although there is a historical bias toward high-carbon companies, which are included in central banks' portfolios due to their high credit ratings, innovative approaches to quantitative easing could invert this trend to favor low-carbon assets.
Considering the potential impact of central banks on climate change, it is important to consider their mandates. The mandate of a central bank can be narrow, meaning only a few objectives are given, limiting the ability of the bank to include climate change in its policies. However, central bank mandates may not necessarily have to be modified to accommodate climate change-related activities. For example, the European Central Bank has incorporated carbon emissions into its asset purchase criteria, despite its relatively narrow mandate focusing on price stability.
The functions of a central bank may include implementing monetary policy, setting the official interest rate and managing the money supply, issuing banknotes, supervising commercial banks, providing banking services to the government and to commercial banks, and acting as a lender of last resort.
Central banks implement a country's chosen monetary policy.
At the most basic level, monetary policy involves establishing what form of currency the country may have, whether a fiat currency, a gold-backed currency (disallowed for countries in the International Monetary Fund), a currency board, or a currency union. When a country has its own national currency, this involves the issue of some form of standardized currency, which is essentially a form of promissory note: a promise to pay "money" under certain circumstances. Historically, this was often a promise to exchange the note for precious metals in some fixed amount. Now, with many currencies being fiat money, the "promise to pay" consists of the promise to accept that currency in payment of taxes.
A central bank may use another country's currency either directly in a currency union, or indirectly through a currency board. In the latter case, exemplified by the Bulgarian National Bank, Hong Kong, and Latvia (until 2014), the local currency is backed at a fixed rate by the central bank's holdings of a foreign currency. Similar to commercial banks, central banks hold assets (government bonds, foreign exchange, gold, and other financial assets) and incur liabilities (currency outstanding). Central banks create money by issuing banknotes and loaning them to the government in exchange for interest-bearing assets such as government bonds. When central banks decide to increase the money supply by an amount which is greater than the amount their national governments decide to borrow, the central banks may purchase private bonds or assets denominated in foreign currencies.
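The paragraph's accounting can be made concrete with a stylized balance-sheet sketch; the class and figures below are hypothetical, not a model of any actual central bank.

    # Stylized sketch: issuing banknotes against bond purchases expands
    # assets and liabilities by the same amount (hypothetical figures).
    class CentralBank:
        def __init__(self):
            self.assets = {"government_bonds": 0.0, "foreign_exchange": 0.0}
            self.liabilities = {"currency_outstanding": 0.0}

        def buy_bonds_with_new_notes(self, amount):
            """Issue banknotes and lend them against government bonds."""
            self.assets["government_bonds"] += amount
            self.liabilities["currency_outstanding"] += amount

    cb = CentralBank()
    cb.buy_bonds_with_new_notes(100.0)
    assert sum(cb.assets.values()) == sum(cb.liabilities.values())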
The European Central Bank remits its interest income to the central banks of the member countries of the European Union. The US Federal Reserve remits most of its profits to the U.S. Treasury. This income, derived from the power to issue currency, is referred to as seigniorage, and usually belongs to the national government. The state-sanctioned power to create currency is called the Right of Issuance. Throughout history, there have been disagreements over this power, since whoever controls the creation of currency controls the seigniorage income. The expression "monetary policy" may also refer more narrowly to the interest-rate targets and other active measures undertaken by the monetary authority.
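On the same stylized view, seigniorage can be sketched as the interest earned on assets held against non-interest-bearing currency; the figures below are purely illustrative.

    # Illustrative seigniorage arithmetic (hypothetical figures).
    currency_outstanding = 2_000  # non-interest-bearing liability
    bond_yield = 0.03             # yield on the assets held against it
    seigniorage = currency_outstanding * bond_yield
    print(seigniorage)  # 60.0 per year, typically remitted to the treasury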
The primary tools available to central banks are open market operations (including repurchase agreements), reserve requirements, interest rate policy (through control of the discount rate), and control of the money supply.
A central bank affects the monetary base through open market operations, if its country has a well-developed market for its government bonds. This entails managing the quantity of money in circulation through the buying and selling of various financial instruments, such as treasury bills, repurchase agreements or "repos", company bonds, or foreign currencies, in exchange for money on deposit at the central bank. Those deposits are convertible to currency, so all of these purchases or sales result in more or less base currency entering or leaving market circulation. For example, if the central bank wishes to decrease interest rates (executing expansionary monetary policy), it purchases government debt, thereby increasing the amount of cash in circulation or crediting banks' reserve accounts. Commercial banks then have more money to lend, so they reduce lending rates, making loans less expensive. Cheaper credit card interest rates increase consumer spending. Additionally, when business loans are more affordable, companies can expand to keep up with consumer demand. They ultimately hire more workers, whose incomes increase, which in turn also increases demand. This method is usually enough to stimulate demand and drive economic growth to a healthy rate. Usually, the short-term goal of open market operations is to achieve a specific short-term interest rate target. In other instances, monetary policy might instead entail the targeting of a specific exchange rate relative to some foreign currency or else relative to gold. For example, in the case of the United States the Federal Reserve targets the federal funds rate, the rate at which member banks lend to one another overnight; however, the monetary policy of China (since 2014) is to target the exchange rate between the Chinese renminbi and a basket of foreign currencies.
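The rate-targeting loop described here can be sketched as follows; the demand curve relating reserves to the overnight rate is a made-up stand-in, not an estimate of any actual market.

    # Sketch: bond purchases credit banks' reserve accounts; more reserves
    # push the overnight rate down along a hypothetical demand curve.
    def overnight_rate(reserves):
        """Made-up inverse relation between system reserves and the rate."""
        return max(0.0, 0.08 - 0.00002 * reserves)

    reserves = 2_000   # initial system reserves, arbitrary units
    target = 0.03      # policy target for the overnight rate
    while overnight_rate(reserves) > target:
        reserves += 50  # open market purchase of government debt
    print(reserves, f"{overnight_rate(reserves):.2%}")  # rate now at or below target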
If the open market operations do not lead to the desired effects, a second tool can be used: the central bank can increase or decrease the interest rate it charges on discounts or overdrafts (loans from the central bank to commercial banks, see discount window). If the interest rate on such transactions is sufficiently low, commercial banks can borrow from the central bank to meet reserve requirements and use the additional liquidity to expand their balance sheets, increasing the credit available to the economy.
A third alternative is to change the reserve requirements. The reserve requirement refers to the proportion of total liabilities that banks must keep on hand overnight, either in their vaults or at the central bank. Banks only maintain a small portion of their assets as cash available for immediate withdrawal; the rest is invested in illiquid assets like mortgages and loans. Lowering the reserve requirement frees up funds for banks to increase loans or buy other profitable assets. This is expansionary because it creates credit. However, even though this tool immediately increases liquidity, central banks rarely change the reserve requirement because frequent changes add uncertainty to banks' planning. The use of open market operations is therefore preferred.
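The expansionary effect can be illustrated with the textbook deposit-multiplier simplification, under the assumption (not stated in the source) that banks re-lend everything above the requirement.

    # Textbook simplification: new reserves support deposits of up to
    # reserves / reserve_ratio when banks re-lend all excess reserves.
    def max_deposit_expansion(new_reserves, reserve_ratio):
        return new_reserves / reserve_ratio

    for ratio in (0.10, 0.05):
        print(f"ratio {ratio:.0%}: {max_deposit_expansion(100, ratio):,.0f}")
    # ratio 10%: 1,000
    # ratio 5%: 2,000 -- halving the requirement doubles potential credit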
Other forms of monetary policy, used particularly when interest rates are at or near 0% and there are concerns about deflation or deflation is occurring, are referred to as unconventional monetary policy. These include credit easing, quantitative easing, forward guidance, and signalling. In credit easing, a central bank purchases private sector assets to improve liquidity and improve access to credit. Signalling can be used to lower market expectations of future interest rates. For example, during the credit crisis of 2008, the US Federal Reserve indicated rates would be low for an "extended period", and the Bank of Canada made a "conditional commitment" to keep rates at the lower bound of 25 basis points (0.25%) until the end of the second quarter of 2010.
Some have envisaged the use of what Milton Friedman once called "helicopter money" whereby the central bank would make direct transfers to citizens in order to lift inflation up to the central bank's intended target. Such policy option could be particularly effective at the zero lower bound.
Since 2017, the prospect of implementing a central bank digital currency (CBDC) has been under discussion. As of the end of 2018, at least 15 central banks were considering implementing a CBDC. Since 2014, the People's Bank of China has been working on a project to develop its own digital currency and electronic payment system.
In some countries a central bank, through its subsidiaries, controls and monitors the banking sector. In other countries banking supervision is carried out by a government department such as the UK Treasury, or by an independent government agency such as the UK's Financial Conduct Authority. The supervisor examines the banks' balance sheets and their behaviour and policies toward consumers. Apart from refinancing, the central bank also provides banks with services such as transfer of funds, bank notes and coins, or foreign currency. Thus it is often described as the "bank of banks".
Many countries monitor and control the banking sector through several different agencies and for different purposes. Bank regulation in the United States, for example, is highly fragmented, with three federal agencies, the Federal Deposit Insurance Corporation, the Federal Reserve Board, and the Office of the Comptroller of the Currency, plus numerous others at the state and private level. There is usually significant cooperation between the agencies. For example, money center banks, deposit-taking institutions, and other types of financial institutions may be subject to different (and occasionally overlapping) regulation. Some types of banking regulation may be delegated to other levels of government, such as state or provincial governments.
Any cartel of banks is particularly closely watched and controlled. Most countries control bank mergers and are wary of concentration in this industry due to the danger of groupthink and runaway lending bubbles based on a single point of failure: the credit culture of a few large banks.
Numerous governments have opted to make central banks independent. The economic logic behind central bank independence is that when governments delegate monetary policy to an independent central bank (with an anti-inflationary purpose) and away from elected politicians, monetary policy will not reflect the interests of the politicians. When governments control monetary policy, politicians may be tempted to boost economic activity in advance of an election to the detriment of the long-term health of the economy and the country. As a consequence, financial markets may not consider future commitments to low inflation to be credible when monetary policy is in the hands of elected officials, which increases the risk of capital flight. An alternative to central bank independence is to have fixed exchange rate regimes.
Governments generally have some degree of influence over even "independent" central banks; the aim of independence is primarily to prevent short-term interference. In 1951, the Bank deutscher Länder, the predecessor of the Deutsche Bundesbank, became the first central bank to be given full independence, leading this form of central banking to be referred to as the "Bundesbank model", as opposed, for instance, to the New Zealand model, in which the goal (i.e. the inflation target) is set by the government.
Central bank independence is usually guaranteed by legislation and the institutional framework governing the bank's relationship with elected officials, particularly the minister of finance. Central bank legislation will enshrine specific procedures for selecting and appointing the head of the central bank. Often the minister of finance will appoint the governor in consultation with the central bank's board and its incumbent governor. In addition, the legislation will specify the governor's term of appointment. The most independent central banks enjoy a fixed non-renewable term for the governor, in order to eliminate pressure on the governor to please the government in the hope of being re-appointed. Generally, independent central banks enjoy both goal and instrument independence.
Despite their independence, central banks are usually accountable at some level to government officials, either to the finance ministry or to parliament. For example, the members of the Board of Governors of the U.S. Federal Reserve are nominated by the U.S. president and confirmed by the Senate; the Federal Reserve publishes verbatim transcripts of its meetings, and its balance sheets are audited by the Government Accountability Office.
In the 1990s there was a trend towards increasing the independence of central banks as a way of improving long-term economic performance. While a large volume of economic research has been done to define the relationship between central bank independence and economic performance, the results are ambiguous.
The literature on central bank independence has defined a cumulative and complementary set of aspects, including legal independence, goal independence, operational (instrument) independence, and independence in the appointment and tenure of the bank's management.
There is very strong consensus among economists that an independent central bank can run a more credible monetary policy, making market expectations more responsive to signals from the central bank. Both the Bank of England (1997) and the European Central Bank have been made independent and follow a set of published inflation targets so that markets know what to expect. Even the People's Bank of China has been accorded great latitude, though in China the official role of the bank remains that of a national bank rather than a central bank, underlined by the official refusal to "unpeg" the yuan or to revalue it "under pressure". The fact that the Communist Party is not elected also relieves the pressure to please people, increasing its independence. Populism can reduce de facto central bank independence.
International organizations such as the World Bank, the Bank for International Settlements (BIS) and the International Monetary Fund (IMF) strongly support central bank independence. This results, in part, from a belief in the intrinsic merits of increased independence. The support also derives partly from the connection between increased independence for the central bank and increased transparency in the policy-making process. The IMF's Financial Sector Assessment Program (FSAP) self-assessment review, for example, includes a number of questions about central bank independence in the transparency section. An independent central bank will score higher in the review than one that is not independent.
Central bank independence indices allow a quantitative analysis of central bank independence for individual countries over time. One such index is the Garriga CBI, in which a higher value indicates greater central bank independence.
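As a sketch of how such an index aggregates statutory features into one number, consider the following; the component names, scores, and equal weighting are illustrative and are not Garriga's actual coding scheme.

    # Hypothetical aggregation in the spirit of CBI indices; components,
    # scores, and equal weights are illustrative, not Garriga's coding.
    components = {
        "governor_appointment_and_term": 0.8,   # each scored in [0, 1]
        "policy_formulation_independence": 0.7,
        "statutory_objectives": 0.9,
        "limits_on_lending_to_government": 0.6,
    }
    cbi = sum(components.values()) / len(components)
    print(f"CBI = {cbi:.2f}")  # 0.75; higher means more independent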
Collectively, central banks purchase less than 500 tonnes of gold each year, on average (out of an annual global production of 2,500–3,000 tonnes). In 2018, central banks collectively held over 33,000 metric tons of gold, about a fifth of all the gold ever mined, according to Bloomberg News.
In 2016, 75% of the world's central-bank assets were controlled by four centers in China, the United States, Japan, and the eurozone. The central banks of Brazil, Switzerland, Saudi Arabia, the U.K., India, and Russia each account for an average of 2.5 percent. The remaining 107 central banks hold less than 13 percent. According to data compiled by Bloomberg News, the top 10 largest central banks owned $21.4 trillion in assets, a 10 percent increase from 2015.
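A quick back-of-the-envelope check of the gold figures cited above, using only numbers from the text:

    # Consistency check of the cited figures (tonnes).
    holdings = 33_000                        # central banks' holdings, 2018
    implied_total_mined = holdings / (1 / 5)  # "about a fifth" of all gold
    print(implied_total_mined)                # 165,000 tonnes implied

    purchases, production = 500, 2_750  # annual averages from the text
    print(f"{purchases / production:.0%} of annual production")  # ~18%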
{
"paragraph_id": 0,
"text": "A central bank, reserve bank, or monetary authority is an institution that manages the currency and monetary policy of a country or monetary union. In contrast to a commercial bank, a central bank possesses a monopoly on increasing the monetary base. Many central banks also have supervisory or regulatory powers to ensure the stability of commercial banks in their jurisdiction, to prevent bank runs, and in some cases also to enforce policies on financial consumer protection and against bank fraud, money laundering, or terrorism financing.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Central banks in most developed nations are usually set up to be institutionally independent from political interference, even though governments typically have governance rights over them, legislative bodies exercise scrutiny, and central banks frequently do show responsiveness to politics.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Issues like central bank independence, central bank policies and rhetoric in central bank governors discourse or the premises of macroeconomic policies (monetary and fiscal policy) of the state are a focus of contention and criticism by some policymakers, researchers and specialized business, economics and finance media.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The notion of central banks as a separate category from other banks has emerged gradually, and only fully coalesced in the 20th century. In the aftermath of World War I, leading central bankers of the United Kingdom and the United States respectively, Montagu Norman and Benjamin Strong, agreed on a definition of central banks that was both positive and normative. Since that time, central banks have been generally distinguishable from other financial institutions, except in so-called single-tier communist systems such as Hungary's between 1950 and 1987, where the Hungarian National Bank operated alongside three other major state-owned banks. For earlier periods, what institutions do or do not count as central banks is often not univocal.",
"title": "Definition"
},
{
"paragraph_id": 4,
"text": "Correlatively, different scholars have held different views about the timeline of emergence of the first central banks. A widely held view in the second half of the 20th century has been that Stockholms Banco (est. 1657), as the original issuer of banknotes, counted as the oldest central bank, and that consequently its successor the Sveriges Riksbank was the oldest central bank in continuous operation, with the Bank of England as second-oldest and direct or indirect model for all subsequent central banks. That view has persisted in some early-21st-century publications. In more recent scholarship, however, the issuance of banknotes has often been viewed as just one of several techniques to provide central bank money, defined as financial money (in contrast to commodity money) of the highest quality. Under that definition, municipal banks of the late medieval and early modern periods, such as the Taula de canvi de Barcelona (est. 1401) or Bank of Amsterdam (est. 1609), issued central bank money and count as early central banks.",
"title": "Definition"
},
{
"paragraph_id": 5,
"text": "There is no universal terminology for the name of a central bank. Early central banks were often the only or principal formal financial institution in their jurisdiction, and were consequently often named \"bank of\" the relevant city's or country's name, e.g. the Bank of Amsterdam, Bank of Hamburg, Bank of England, or Wiener Stadtbank. Naming practices subsequently evolved as more central banks were established. They include, with references to the date when the bank acquired its current name:",
"title": "Naming"
},
{
"paragraph_id": 6,
"text": "In some cases, the local-language name is used in English-language practice, e.g. Sveriges Riksbank (est. 1668, current name in use since 1866), De Nederlandsche Bank (est. 1814), Deutsche Bundesbank (est. 1957), or Bangko Sentral ng Pilipinas (est. 1993).",
"title": "Naming"
},
{
"paragraph_id": 7,
"text": "Some commercial banks have names suggestive of central banks, even if they are not: examples are the State Bank of India and Central Bank of India, National Bank of Greece, Banco do Brasil, National Bank of Pakistan, Bank of China, Bank of Cyprus, or Bank of Ireland, as well as Deutsche Bank. Some but not all of these institutions had assumed central banking roles in the past.",
"title": "Naming"
},
{
"paragraph_id": 8,
"text": "The leading executive of a central bank is usually known as the Governor, President, or Chair.",
"title": "Naming"
},
{
"paragraph_id": 9,
"text": "The widespread adoption of central banking is a rather recent phenomenon. At the start of the 20th century, approximately two-thirds of sovereign states did not have a central bank. Waves of central bank adoption occurred in the interwar period and in the aftermath of World War II.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In the 20th century, central banks were often created with the intent to attract foreign capital, as bankers preferred to lend to countries with a central bank on the gold standard.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The use of money as a unit of account predates history. Government control of money is documented in the ancient Egyptian economy (2750–2150 BCE). The Egyptians measured the value of goods with a central unit called shat. Like many other currencies, the shat was linked to gold. The value of a shat in terms of goods was defined by government administrations. Other cultures in Asia Minor later materialized their currencies in the form of gold and silver coins.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The issuance of paper currency is not to be equated with central banking, even though paper currency is a form of financial money (i.e. not commodity money). The difference is that government-issued paper currency, as present e.g. in China during the Yuan dynasty, is typically not freely convertible and thus of inferior quality, occasionally leading to hyperinflation.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "From the 12th century, a network of professional banks emerged primarily in Southern Europe (including Southern France, with the Cahorsins). Banks could use book money to create deposits for their customers. Thus, they had the possibility to issue, lend and transfer money autonomously without direct control from political authorities.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The Taula de canvi de Barcelona, established in 1401, is the first example of municipal, mostly public banks which pioneered central banking on a limited scale. It was soon emulated by the Bank of Saint George in the Republic of Genoa, first established in 1407, and significantly later by the Banco del Giro in the Republic of Venice and by a network of institutions in Naples that later consolidated into Banco di Napoli. Notable municipal central banks were established in the early 17th century in leading northwestern European commercial centers, namely the Bank of Amsterdam in 1609 and the Hamburger Bank in 1619. These institutions offered a public infrastructure for cashless international payments. They aimed to increase the efficiency of international trade and to safeguard monetary stability. These municipal public banks thus fulfilled comparable functions to modern central banks.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The Swedish central bank, known since 1866 as Sveriges Riksbank, was founded in Stockholm in 1664 from the remains of the failed Stockholms Banco and answered to the Riksdag of the Estates, Sweden's early modern parliament. One role of the Swedish central bank was lending money to the government.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The establishment of the Bank of England was devised by Charles Montagu, 1st Earl of Halifax, following a 1691 proposal by William Paterson. A royal charter was granted on 27 July 1694 through the passage of the Tonnage Act. The bank was given exclusive possession of the government's balances, and was the only limited-liability corporation allowed to issue banknotes. The early modern Bank of England, however, did not have all the functions of a today's central banks, e.g. to regulate the value of the national currency, to finance the government, to be the sole authorized distributor of banknotes, or to function as a lender of last resort to banks suffering a liquidity crisis.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In the early 18th century, a major experiment in national central banking failed in France with John Law's Banque Royale in 1720-1721. Later in the century, France had other attempts with the Caisse d'Escompte first created in 1767, and King Charles III established the Bank of Spain in 1782. The Russian Assignation Bank, established in 1769 by Catherine the Great, was an outlier from the general pattern of early national central banks in that it was directly owned by the Imperial Russian government, rather than private individual shareholders. In the nascent United States, Alexander Hamilton, as Secretary of the Treasury in the 1790s, set up the First Bank of the United States despite heavy opposition from Jeffersonian Republicans.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Central banks were established in many European countries during the 19th century. Napoleon created the Banque de France in 1800, in order to stabilize and develop the French economy and to improve the financing of his wars. The Bank of France remained the most important Continental European central bank throughout the 19th century. The Bank of Finland was founded in 1812, soon after Finland had been taken over from Sweden by Russia to become a grand duchy. Simultaneously, a quasi-central banking role was played by a small group of powerful family-run banking networks, typified by the House of Rothschild, with branches in major cities across Europe, as well as Hottinguer in Switzerland and Oppenheim in Germany.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The theory of central banking, even though the name was not yet widely used, evolved in the 19th century. Henry Thornton, an opponent of the real bills doctrine, was a defender of the bullionist position and a significant figure in monetary theory. Thornton's process of monetary expansion anticipated the theories of Knut Wicksell regarding the \"cumulative process which restates the Quantity Theory in a theoretically coherent form\". As a response to a currency crisis in 1797, Thornton wrote in 1802 An Enquiry into the Nature and Effects of the Paper Credit of Great Britain, in which he argued that the increase in paper credit did not cause the crisis. The book also gives a detailed account of the British monetary system as well as a detailed examination of the ways in which the Bank of England should act to counteract fluctuations in the value of the pound.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In the United Kingdom until the mid-nineteenth century, commercial banks were able to issue their own banknotes, and notes issued by provincial banking companies were commonly in circulation. Many consider the origins of the central bank to lie with the passage of the Bank Charter Act 1844. Under the 1844 Act, bullionism was institutionalized in Britain, creating a ratio between the gold reserves held by the Bank of England and the notes that the bank could issue. The Act also placed strict curbs on the issuance of notes by the country banks. The Bank of England took over a role of lender of last resort in the 1870s after criticism of its lacklustre response to the failure of Overend, Gurney and Company. The journalist Walter Bagehot wrote on the subject in Lombard Street: A Description of the Money Market, in which he advocated for the bank to officially become a lender of last resort during a credit crunch, sometimes referred to as \"Bagehot's dictum\".",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The 19th and early 20th centuries central banks in most of Europe and Japan developed under the international gold standard. Free banking or currency boards were common at the time. Problems with collapses of banks during downturns, however, led to wider support for central banks in those nations which did not as yet possess them, for example in Australia. In the United States, the role of a central bank had been ended in the so-called Bank War of the 1830s by President Andrew Jackson. In 1913, the U.S. created the Federal Reserve System through the passing of The Federal Reserve Act.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Following World War I, the Economic and Financial Organization (EFO) of the League of Nations, influenced by the ideas of Montagu Norman and other leading policymakers and economists of the time, took an active role to promote the independence of central bank, a key component of the economic orthodoxy the EFO fostered at the Brussels Conference (1920). The EFO thus directed the creation of the Oesterreichische Nationalbank in Austria, Hungarian National Bank, Bank of Danzig, and Bank of Greece, as well as comprehensive reforms of the Bulgarian National Bank and Bank of Estonia. Similar ideas were emulated in other newly independent European countries, e.g. for the National Bank of Czechoslovakia.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Brazil established a central bank in 1945, which was a precursor to the Central Bank of Brazil created twenty years later. After gaining independence, numerous African and Asian countries also established central banks or monetary unions. The Reserve Bank of India, which had been established during British colonial rule as a private company, was nationalized in 1949 following India's independence. By the early 21st century, most of the world's countries had a national central bank set up as a public sector institution, albeit with widely varying degrees of independence.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Before the near-generalized adoption of the model of national public-sector central banks, a number of economies relied on a central bank that was effectively or legally run from outside their territory. The first colonial central banks, such as the Bank of Java (est. 1828 in Batavia), Banque de l'Algérie (est. 1851 in Algiers), or Hongkong and Shanghai Banking Corporation (est. 1865 in Hong Kong), operated from the colony itself. Following the generalization of the transcontinental use of the electrical telegraph using submarine communications cable, however, new colonial banks were typically headquartered in the colonial metropolis; prominent examples included the Paris-based Banque de l'Indochine (est. 1875), Banque de l'Afrique Occidentale (est. 1901), and Banque de Madagascar (est. 1925). The Banque de l'Algérie's head office was relocated from Algiers to Paris in 1900.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "In some cases, independent countries which did not have a strong domestic base of capital accumulation and were critically reliant on foreign funding found advantage in granting a central banking role to banks that were effectively or even legally foreign. A seminal case was the Imperial Ottoman Bank established in 1863 as a French-British joint venture, and a particularly egregious one was the Paris-based National Bank of Haiti (est. 1881) which captured significant financial resources from the economically struggling albeit independent nation of Haiti. Other cases include the London-based Imperial Bank of Persia, established in 1885, and the Rome-based National Bank of Albania, established in 1925. The State Bank of Morocco was established in 1907 with international shareholding and headquarters functions distributed between Paris and Tangier, a half-decade before the country lost its independence. In other cases, there have been organized currency unions such as the Belgium–Luxembourg Economic Union established in 1921, under which Luxembourg had no central bank, but that was managed by a national central bank (in that case the National Bank of Belgium) rather than a supranational one. The present-day Common Monetary Area of Southern Africa has comparable features.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Yet another pattern was set in countries where federated or otherwise sub-sovereign entities had wide policy autonomy that was echoed to varying degrees in the organization of the central bank itself. These included, for example, the Austro-Hungarian Bank from 1878 to 1918, the U.S. Federal Reserve in its first two decades, the Bank deutscher Länder between 1948 and 1957, or the National Bank of Yugoslavia between 1972 and 1993. Conversely, some countries that are politically organized as federations, such as today's Canada, Mexico, or Switzerland, rely on a unitary central bank.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "In the second half of the 20th century, the dismantling of colonial systems left some groups of countries using the same currency even though they had achieved national independence. In contrast to the unraveling of Austria-Hungary and the Ottoman Empire after World War I, some of these countries decided to keep using a common currency, thus forming a monetary union, and to entrust its management to a common central bank. Examples include the Eastern Caribbean Currency Authority, the Central Bank of West African States, and the Bank of Central African States.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "The concept of supranational central banking took a globally significant dimension with the Economic and Monetary Union of the European Union and the establishment of the European Central Bank (ECB) in 1998. In 2014, the ECB took an additional role of banking supervision as part of the newly established policy of European banking union.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "The primary role of central banks is usually to maintain price stability, as defined as a specific level of inflation. Inflation is defined either as the devaluation of a currency or equivalently the rise of prices relative to a currency. Most central banks currently have an inflation target close to 2%.",
"title": "Central bank mandates"
},
{
"paragraph_id": 30,
"text": "Since inflation lowers real wages, Keynesians view inflation as the solution to involuntary unemployment. However, \"unanticipated\" inflation leads to lender losses as the real interest rate will be lower than expected. Thus, Keynesian monetary policy aims for a steady rate of inflation.",
"title": "Central bank mandates"
},
{
"paragraph_id": 31,
"text": "Central banks as monetary authorities in representative states are intertwined through globalized financial markets. As a regulator of one of the most widespread currencies in the global economy, the US Federal Reserve plays an outsized role in the international monetary market. Being the main supplier and rate adjusted for US dollars, the Federal Reserve implements a set of requirements to control inflation and unemployment in the US.",
"title": "Central bank mandates"
},
{
"paragraph_id": 32,
"text": "Frictional unemployment is the time period between jobs when a worker is searching for, or transitioning from one job to another. Unemployment beyond frictional unemployment is classified as unintended unemployment.",
"title": "Central bank mandates"
},
{
"paragraph_id": 33,
"text": "For example, structural unemployment is a form of unemployment resulting from a mismatch between demand in the labour market and the skills and locations of the workers seeking employment. Macroeconomic policy generally aims to reduce unintended unemployment.",
"title": "Central bank mandates"
},
{
"paragraph_id": 34,
"text": "Keynes labeled any jobs that would be created by a rise in wage-goods (i.e., a decrease in real-wages) as involuntary unemployment:",
"title": "Central bank mandates"
},
{
"paragraph_id": 35,
"text": "Economic growth can be enhanced by investment in capital, such as more or better machinery. A low interest rate implies that firms can borrow money to invest in their capital stock and pay less interest for it. Lowering the interest is therefore considered to encourage economic growth and is often used to alleviate times of low economic growth. On the other hand, raising the interest rate is often used in times of high economic growth as a contra-cyclical device to keep the economy from overheating and avoid market bubbles.",
"title": "Central bank mandates"
},
{
"paragraph_id": 36,
"text": "Further goals of monetary policy are stability of interest rates, of the financial market, and of the foreign exchange market. Goals frequently cannot be separated from each other and often conflict. Costs must therefore be carefully weighed before policy implementation.",
"title": "Central bank mandates"
},
{
"paragraph_id": 37,
"text": "In the aftermath of the Paris agreement on climate change, a debate is now underway on whether central banks should also pursue environmental goals as part of their activities. In 2017, eight central banks formed the Network for Greening the Financial System (NGFS) to evaluate the way in which central banks can use their regulatory and monetary policy tools to support climate change mitigation. Today more than 70 central banks are part of the NGFS.",
"title": "Central bank mandates"
},
{
"paragraph_id": 38,
"text": "In January 2020, the European Central Bank has announced it will consider climate considerations when reviewing its monetary policy framework.",
"title": "Central bank mandates"
},
{
"paragraph_id": 39,
"text": "Proponents of \"green monetary policy\" are proposing that central banks include climate-related criteria in their collateral eligibility frameworks, when conducting asset purchases and also in their refinancing operations. But critics such as Jens Weidmann are arguing it is not central banks' role to conduct climate policy. China is among the most advanced central banks when it comes to green monetary policy. It has given green bonds preferential status to lower their yield and uses window policy to direct green lending.",
"title": "Central bank mandates"
},
{
"paragraph_id": 40,
"text": "The implications of potential stranded assets in the economy highlights one example of the embedded transition risk to climate change with potential cascade effects throughout the financial system. In response, four broad types of interventions including methodology development, investor encouragement, financial regulation and policy toolkits have been adopted by or suggested for central banks.",
"title": "Central bank mandates"
},
{
"paragraph_id": 41,
"text": "Achieving the 2°C threshold revolve in part around the development of climate-aligned financial regulations. A significant challenge lies in the lack of awareness among corporations and investors, driven by poor information flow and insufficient disclosure. To address this issue, regulators and central banks are promoting transparency, integrated reporting, and exposure specifications, with the goal of promoting long-term, low-carbon emission goals, rather than short-term financial objectives. These regulations aim to assess risk comprehensively, identifying carbon-intensive assets and increasing their capital requirements. This should result in high-carbon assets becoming less attractive while favoring low-carbon assets, which have historically been perceived as high-risk, and low volatility investment vehicles.",
"title": "Central bank mandates"
},
{
"paragraph_id": 42,
"text": "Quantitative easing is a potential measure that could be applied by Central banks to achieve a low-carbon transition. Although there is a historical bias toward high-carbon companies, included in Central banks portfolios due to their high credit ratings, innovative approaches to quantitative easing could invert this trend to favor low-carbon assets.",
"title": "Central bank mandates"
},
{
"paragraph_id": 43,
"text": "Considering the potential impact of central banks on climate change, it is important to consider the mandates of central banks. The mandate of a central bank can be narrow, meaning only a few objectives are given, limiting the ability of a central bank to include climate change in its policies. However, central bank mandates may not necessarily have to be modified to accommodate climate change-related activities. For example, the European Central Bank has incorporated carbon-emissions into its asset purchase criteria, despite its relatively narrow mandate that focuses on price stability.",
"title": "Central bank mandates"
},
{
"paragraph_id": 44,
"text": "The functions of a central bank may include:",
"title": "Central bank operations"
},
{
"paragraph_id": 45,
"text": "Central banks implement a country's chosen monetary policy.",
"title": "Central bank operations"
},
{
"paragraph_id": 46,
"text": "At the most basic level, monetary policy involves establishing what form of currency the country may have, whether a fiat currency, gold-backed currency (disallowed for countries in the International Monetary Fund), currency board or a currency union. When a country has its own national currency, this involves the issue of some form of standardized currency, which is essentially a form of promissory note: \"money\" under certain circumstances. Historically, this was often a promise to exchange the money for precious metals in some fixed amount. Now, when many currencies are fiat money, the \"promise to pay\" consists of the promise to accept that currency to pay for taxes.",
"title": "Central bank operations"
},
{
"paragraph_id": 47,
"text": "A central bank may use another country's currency either directly in a currency union, or indirectly on a currency board. In the latter case, exemplified by the Bulgarian National Bank, Hong Kong and Latvia (until 2014), the local currency is backed at a fixed rate by the central bank's holdings of a foreign currency. Similar to commercial banks, central banks hold assets (government bonds, foreign exchange, gold, and other financial assets) and incur liabilities (currency outstanding). Central banks create money by issuing banknotes and loaning them to the government in exchange for interest-bearing assets such as government bonds. When central banks decide to increase the money supply by an amount which is greater than the amount their national governments decide to borrow, the central banks may purchase private bonds or assets denominated in foreign currencies.",
"title": "Central bank operations"
},
{
"paragraph_id": 48,
"text": "The European Central Bank remits its interest income to the central banks of the member countries of the European Union. The US Federal Reserve remits most of its profits to the U.S. Treasury. This income, derived from the power to issue currency, is referred to as seigniorage, and usually belongs to the national government. The state-sanctioned power to create currency is called the Right of Issuance. Throughout history, there have been disagreements over this power, since whoever controls the creation of currency controls the seigniorage income. The expression \"monetary policy\" may also refer more narrowly to the interest-rate targets and other active measures undertaken by the monetary authority.",
"title": "Central bank operations"
},
{
"paragraph_id": 49,
"text": "The primary tools available to central banks are open market operations (including repurchase agreements), reserve requirements, interest rate policy (through control of the discount rate), and control of the money supply.",
"title": "Central bank operations"
},
{
"paragraph_id": 50,
"text": "A central bank affects the monetary base through open market operations, if its country has a well developed market for its government bonds. This entails managing the quantity of money in circulation through the buying and selling of various financial instruments, such as treasury bills, repurchase agreements or \"repos\", company bonds, or foreign currencies, in exchange for money on deposit at the central bank. Those deposits are convertible to currency, so all of these purchases or sales result in more or less base currency entering or leaving market circulation. For example, if the central bank wishes to decrease interest rates (executing expansionary monetary policy), it purchases government debt, thereby increasing the amount of cash in circulation or crediting banks' reserve accounts. Commercial banks then have more money to lend, so they reduce lending rates, making loans less expensive. Cheaper credit card interest rates increase consumer spending. Additionally, when business loans are more affordable, companies can expand to keep up with consumer demand. They ultimately hire more workers, whose incomes increase, which in its turn also increases the demand. This method is usually enough to stimulate demand and drive economic growth to a healthy rate. Usually, the short-term goal of open market operations is to achieve a specific short-term interest rate target. In other instances, monetary policy might instead entail the targeting of a specific exchange rate relative to some foreign currency or else relative to gold. For example, in the case of the United States the Federal Reserve targets the federal funds rate, the rate at which member banks lend to one another overnight; however, the monetary policy of China (since 2014) is to target the exchange rate between the Chinese renminbi and a basket of foreign currencies.",
"title": "Central bank operations"
},
{
"paragraph_id": 51,
"text": "If the open market operations do not lead to the desired effects, a second tool can be used: the central bank can increase or decrease the interest rate it charges on discounts or overdrafts (loans from the central bank to commercial banks, see discount window). If the interest rate on such transactions is sufficiently low, commercial banks can borrow from the central bank to meet reserve requirements and use the additional liquidity to expand their balance sheets, increasing the credit available to the economy.",
"title": "Central bank operations"
},
{
"paragraph_id": 52,
"text": "A third alternative is to change the reserve requirements. The reserve requirement refers to the proportion of total liabilities that banks must keep on hand overnight, either in its vaults or at the central bank. Banks only maintain a small portion of their assets as cash available for immediate withdrawal; the rest is invested in illiquid assets like mortgages and loans. Lowering the reserve requirement frees up funds for banks to increase loans or buy other profitable assets. This is expansionary because it creates credit. However, even though this tool immediately increases liquidity, central banks rarely change the reserve requirement because doing so frequently adds uncertainty to banks' planning. The use of open market operations is therefore preferred.",
"title": "Central bank operations"
},
{
"paragraph_id": 53,
"text": "Other forms of monetary policy, particularly used when interest rates are at or near 0% and there are concerns about deflation or deflation is occurring, are referred to as unconventional monetary policy. These include credit easing, quantitative easing, forward guidance, and signalling. In credit easing, a central bank purchases private sector assets to improve liquidity and improve access to credit. Signaling can be used to lower market expectations for lower interest rates in the future. For example, during the credit crisis of 2008, the US Federal Reserve indicated rates would be low for an \"extended period\", and the Bank of Canada made a \"conditional commitment\" to keep rates at the lower bound of 25 basis points (0.25%) until the end of the second quarter of 2010.",
"title": "Central bank operations"
},
{
"paragraph_id": 54,
"text": "Some have envisaged the use of what Milton Friedman once called \"helicopter money\" whereby the central bank would make direct transfers to citizens in order to lift inflation up to the central bank's intended target. Such policy option could be particularly effective at the zero lower bound.",
"title": "Central bank operations"
},
{
"paragraph_id": 55,
"text": "Since 2017, prospect of implementing Central Bank Digital Currency (CBDC) has been in discussion. As of the end of 2018, at least 15 central banks were considering to implementing CBDC. Since 2014, the People's Bank of China has been working on a project for digital currency to make its own digital currency and electronic payment systems.",
"title": "Central bank operations"
},
{
"paragraph_id": 56,
"text": "In some countries a central bank, through its subsidiaries, controls and monitors the banking sector. In other countries banking supervision is carried out by a government department such as the UK Treasury, or by an independent government agency, for example, UK's Financial Conduct Authority. It examines the banks' balance sheets and behaviour and policies toward consumers. Apart from refinancing, it also provides banks with services such as transfer of funds, bank notes and coins or foreign currency. Thus it is often described as the \"bank of banks\".",
"title": "Central bank operations"
},
{
"paragraph_id": 57,
"text": "Many countries will monitor and control the banking sector through several different agencies and for different purposes. The Bank regulation in the United States for example is highly fragmented with 3 federal agencies, the Federal Deposit Insurance Corporation, the Federal Reserve Board, or Office of the Comptroller of the Currency and numerous others on the state and the private level. There is usually significant cooperation between the agencies. For example, money center banks, deposit-taking institutions, and other types of financial institutions may be subject to different (and occasionally overlapping) regulation. Some types of banking regulation may be delegated to other levels of government, such as state or provincial governments.",
"title": "Central bank operations"
},
{
"paragraph_id": 58,
"text": "Any cartel of banks is particularly closely watched and controlled. Most countries control bank mergers and are wary of concentration in this industry due to the danger of groupthink and runaway lending bubbles based on a single point of failure, the credit culture of the few large banks.",
"title": "Central bank operations"
},
{
"paragraph_id": 59,
"text": "Numerous governments have opted to make central banks independent. The economic logic behind central bank independence is that when governments delegate monetary policy to an independent central bank (with an anti-inflationary purpose) and away from elected politicians, monetary policy will not reflect the interests of the politicians. When governments control monetary policy, politicians may be tempted to boost economic activity in advance of an election to the detriment of the long-term health of the economy and the country. As a consequence, financial markets may not consider future commitments to low inflation to be credible when monetary policy is in the hands of elected officials, which increases the risk of capital flight. An alternative to central bank independence is to have fixed exchange rate regimes.",
"title": "Central bank governance and independence"
},
{
"paragraph_id": 60,
"text": "Governments generally have some degree of influence over even \"independent\" central banks; the aim of independence is primarily to prevent short-term interference. In 1951, the Deutsche Bundesbank became the first central bank to be given full independence, leading this form of central bank to be referred to as the \"Bundesbank model\", as opposed, for instance, to the New Zealand model, which has a goal (i.e. inflation target) set by the government.",
"title": "Central bank governance and independence"
},
{
"paragraph_id": 61,
"text": "Central bank independence is usually guaranteed by legislation and the institutional framework governing the bank's relationship with elected officials, particularly the minister of finance. Central bank legislation will enshrine specific procedures for selecting and appointing the head of the central bank. Often the minister of finance will appoint the governor in consultation with the central bank's board and its incumbent governor. In addition, the legislation will specify banks governor's term of appointment. The most independent central banks enjoy a fixed non-renewable term for the governor in order to eliminate pressure on the governor to please the government in the hope of being re-appointed for a second term. Generally, independent central banks enjoy both goal and instrument independence.",
"title": "Central bank governance and independence"
},
{
"paragraph_id": 62,
"text": "Despite their independence, central banks are usually accountable at some level to government officials, either to the finance ministry or to parliament. For example, the Board of Governors of the U.S. Federal Reserve are nominated by the U.S. president and confirmed by the Senate, publishes verbatim transcripts, and balance sheets are audited by the Government Accountability Office.",
"title": "Central bank governance and independence"
},
{
"paragraph_id": 63,
"text": "In the 1990s there was a trend towards increasing the independence of central banks as a way of improving long-term economic performance. While a large volume of economic research has been done to define the relationship between central bank independence and economic performance, the results are ambiguous.",
"title": "Central bank governance and independence"
},
{
"paragraph_id": 64,
"text": "The literature on central bank independence has defined a cumulative and complementary number of aspects:",
"title": "Central bank governance and independence"
},
{
"paragraph_id": 65,
"text": "There is very strong consensus among economists that an independent central bank can run a more credible monetary policy, making market expectations more responsive to signals from the central bank. Both the Bank of England (1997) and the European Central Bank have been made independent and follow a set of published inflation targets so that markets know what to expect. Even the People's Bank of China has been accorded great latitude, though in China the official role of the bank remains that of a national bank rather than a central bank, underlined by the official refusal to \"unpeg\" the yuan or to revalue it \"under pressure\". The fact that the Communist Party is not elected also relieves the pressure to please people, increasing its independence. Populism can reduce de facto central bank independence.",
"title": "Central bank governance and independence"
},
{
"paragraph_id": 66,
"text": "International organizations such as the World Bank, the Bank for International Settlements (BIS) and the International Monetary Fund (IMF) strongly support central bank independence. This results, in part, from a belief in the intrinsic merits of increased independence. The support for independence from the international organizations also derives partly from the connection between increased independence for the central bank and increased transparency in the policy-making process. The IMF's Financial Services Action Plan (FSAP) review self-assessment, for example, includes a number of questions about central bank independence in the transparency section. An independent central bank will score higher in the review than one that is not independent.",
"title": "Central bank governance and independence"
},
{
"paragraph_id": 67,
"text": "Central bank independence indices allow a quantitative analysis of central bank independence for individual countries over time. One central bank independence index is the Garriga CBI, where a higher index indicates higher central bank independence, shown below for individual countries.",
"title": "Central bank governance and independence"
},
{
"paragraph_id": 68,
"text": "Collectively, central banks purchase less than 500 tonnes of gold each year, on average (out of an annual global production of 2,500-3,000 tonnes). In 2018, central banks collectively hold over 33,000 metric tons of the gold, about a fifth of all the gold ever mined, according to Bloomberg News.",
"title": "Statistics"
},
{
"paragraph_id": 69,
"text": "In 2016, 75% of the world's central-bank assets were controlled by four centers in China, the United States, Japan and the eurozone. The central banks of Brazil, Switzerland, Saudi Arabia, the U.K., India and Russia, each account for an average of 2.5 percent. The remaining 107 central banks hold less than 13 percent. According to data compiled by Bloomberg News, the top 10 largest central banks owned $21.4 trillion in assets, a 10 percent increase from 2015.",
"title": "Statistics"
}
] | A central bank, reserve bank, or monetary authority is an institution that manages the currency and monetary policy of a country or monetary union. In contrast to a commercial bank, a central bank possesses a monopoly on increasing the monetary base. Many central banks also have supervisory or regulatory powers to ensure the stability of commercial banks in their jurisdiction, to prevent bank runs, and in some cases also to enforce policies on financial consumer protection and against bank fraud, money laundering, or terrorism financing. Central banks in most developed nations are usually set up to be institutionally independent from political interference, even though governments typically have governance rights over them, legislative bodies exercise scrutiny, and central banks frequently do show responsiveness to politics. Issues like central bank independence, central bank policies and rhetoric in central bank governors discourse or the premises of macroeconomic policies of the state are a focus of contention and criticism by some policymakers, researchers and specialized business, economics and finance media. | 2001-09-21T06:32:30Z | 2023-12-29T05:07:40Z | [
"Template:Portal",
"Template:Economics",
"Template:Page needed",
"Template:Image frame",
"Template:Unbulleted list citebundle",
"Template:'\"",
"Template:Public finance",
"Template:Date",
"Template:Basel II",
"Template:Cite book",
"Template:Webarchive",
"Template:Cite news",
"Template:See also",
"Template:Cite press release",
"Template:Use dmy dates",
"Template:Banking",
"Template:Macroeconomics sidebar",
"Template:Div col end",
"Template:Main",
"Template:Clarify",
"Template:Small",
"Template:Federal Reserve System",
"Template:Div col",
"Template:Cite web",
"Template:Cite magazine",
"Template:Central Bank by country",
"Template:Means of Exchange",
"Template:Authority control",
"Template:Central banks",
"Template:Short description",
"Template:R",
"Template:Citation needed",
"Template:Flaglist",
"Template:Reflist",
"Template:Cite journal"
] | https://en.wikipedia.org/wiki/Central_bank |
5,667 | Chlorine | Chlorine is a chemical element; it has symbol Cl and atomic number 17. The second-lightest of the halogens, it appears between fluorine and bromine in the periodic table and its properties are mostly intermediate between them. Chlorine is a yellow-green gas at room temperature. It is an extremely reactive element and a strong oxidising agent: among the elements, it has the highest electron affinity and the third-highest electronegativity on the revised Pauling scale, behind only oxygen and fluorine.
Chlorine played an important role in the experiments conducted by medieval alchemists, which commonly involved the heating of chloride salts like ammonium chloride (sal ammoniac) and sodium chloride (common salt), producing various chemical substances containing chlorine such as hydrogen chloride, mercury(II) chloride (corrosive sublimate), and aqua regia. However, the nature of free chlorine gas as a separate substance was only recognised around 1630 by Jan Baptist van Helmont. Carl Wilhelm Scheele wrote a description of chlorine gas in 1774, supposing it to be an oxide of a new element. In 1809, chemists suggested that the gas might be a pure element, and this was confirmed by Sir Humphry Davy in 1810, who named it after the Ancient Greek χλωρός (khlōrós, "pale green") because of its colour.
Because of its great reactivity, all chlorine in the Earth's crust is in the form of ionic chloride compounds, which includes table salt. It is the second-most abundant halogen (after fluorine) and twenty-first most abundant chemical element in Earth's crust. These crustal deposits are nevertheless dwarfed by the huge reserves of chloride in seawater.
Elemental chlorine is commercially produced from brine by electrolysis, predominantly in the chlor-alkali process. The high oxidising potential of elemental chlorine led to the development of commercial bleaches and disinfectants, and to its use as a reagent for many processes in the chemical industry. Chlorine is used in the manufacture of a wide range of consumer products, about two-thirds of them organic chemicals such as polyvinyl chloride (PVC), many intermediates for the production of plastics, and other end products which do not contain the element. As a common disinfectant, elemental chlorine and chlorine-generating compounds are used more directly in swimming pools to keep them sanitary. Elemental chlorine at high concentration is extremely dangerous, and poisonous to most living organisms. As a chemical warfare agent, chlorine was first used in World War I as a poison gas weapon.
In the form of chloride ions, chlorine is necessary to all known species of life. Other types of chlorine compounds are rare in living organisms, and artificially produced chlorinated organics range from inert to toxic. In the upper atmosphere, chlorine-containing organic molecules such as chlorofluorocarbons have been implicated in ozone depletion. Small quantities of elemental chlorine are generated by oxidation of chloride ions in neutrophils as part of an immune system response against bacteria.
The most common compound of chlorine, sodium chloride, has been known since ancient times; archaeologists have found evidence that rock salt was used as early as 3000 BC and brine as early as 6000 BC.
Around 900, the authors of the Arabic writings attributed to Jabir ibn Hayyan (Latin: Geber) and the Persian physician and alchemist Abu Bakr al-Razi (c. 865–925, Latin: Rhazes) were experimenting with sal ammoniac (ammonium chloride), which when it was distilled together with vitriol (hydrated sulfates of various metals) produced hydrogen chloride. However, it appears that in these early experiments with chloride salts, the gaseous products were discarded, and hydrogen chloride may have been produced many times before it was discovered that it can be put to chemical use. One of the first such uses was the synthesis of mercury(II) chloride (corrosive sublimate), whose production from the heating of mercury either with alum and ammonium chloride or with vitriol and sodium chloride was first described in the De aluminibus et salibus ("On Alums and Salts", an eleventh- or twelfth-century Arabic text falsely attributed to Abu Bakr al-Razi and translated into Latin in the second half of the twelfth century by Gerard of Cremona, 1144–1187). Another important development was the discovery by pseudo-Geber (in the De inventione veritatis, "On the Discovery of Truth", after c. 1300) that by adding ammonium chloride to nitric acid, a strong solvent capable of dissolving gold (i.e., aqua regia) could be produced. Although aqua regia is an unstable mixture that continually gives off fumes containing free chlorine gas, this chlorine gas appears to have been ignored until c. 1630, when its nature as a separate gaseous substance was recognised by the Brabantian chemist and physician Jan Baptist van Helmont.
The element was first studied in detail in 1774 by Swedish chemist Carl Wilhelm Scheele, and he is credited with the discovery. Scheele produced chlorine by reacting MnO2 (as the mineral pyrolusite) with HCl:
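4 HCl + MnO2 → MnCl2 + 2 H2O + Cl2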
Scheele observed several of the properties of chlorine: the bleaching effect on litmus, the deadly effect on insects, the yellow-green color, and the smell similar to aqua regia. He called it "dephlogisticated muriatic acid air" since it is a gas (then called "airs") and it came from hydrochloric acid (then known as "muriatic acid"). He failed to establish chlorine as an element.
Common chemical theory at that time held that an acid is a compound that contains oxygen (remnants of this survive in the German and Dutch names of oxygen: Sauerstoff or zuurstof, both translating into English as acid substance), so a number of chemists, including Claude Berthollet, suggested that Scheele's dephlogisticated muriatic acid air must be a combination of oxygen and the yet undiscovered element, muriaticum.
In 1809, Joseph Louis Gay-Lussac and Louis-Jacques Thénard tried to decompose dephlogisticated muriatic acid air by reacting it with charcoal to release the free element muriaticum (and carbon dioxide). They did not succeed and published a report in which they considered the possibility that dephlogisticated muriatic acid air is an element, but were not convinced.
In 1810, Sir Humphry Davy tried the same experiment again, and concluded that the substance was an element, and not a compound. He announced his results to the Royal Society on 15 November that year. At that time, he named this new element "chlorine", from the Greek word χλωρος (chlōros, "green-yellow"), in reference to its color. The name "halogen", meaning "salt producer", was originally used for chlorine in 1811 by Johann Salomo Christoph Schweigger. This term was later used as a generic term to describe all the elements in the chlorine family (fluorine, bromine, iodine), after a suggestion by Jöns Jakob Berzelius in 1826. In 1823, Michael Faraday liquefied chlorine for the first time, and demonstrated that what was then known as "solid chlorine" had a structure of chlorine hydrate (Cl2·H2O).
Chlorine gas was first used by French chemist Claude Berthollet to bleach textiles in 1785. Modern bleaches resulted from further work by Berthollet, who first produced sodium hypochlorite in 1789 in his laboratory in the town of Javel (now part of Paris, France), by passing chlorine gas through a solution of sodium carbonate. The resulting liquid, known as "Eau de Javel" ("Javel water"), was a weak solution of sodium hypochlorite. This process was not very efficient, and alternative production methods were sought. Scottish chemist and industrialist Charles Tennant first produced a solution of calcium hypochlorite ("chlorinated lime"), then solid calcium hypochlorite (bleaching powder). These compounds produced low levels of elemental chlorine and could be more efficiently transported than sodium hypochlorite, which remained as dilute solutions because when purified to eliminate water, it became a dangerously powerful and unstable oxidizer. Near the end of the nineteenth century, E. S. Smith patented a method of sodium hypochlorite production involving electrolysis of brine to produce sodium hydroxide and chlorine gas, which then mixed to form sodium hypochlorite. This is known as the chloralkali process, first introduced on an industrial scale in 1892, and now the source of most elemental chlorine and sodium hydroxide. In 1884 Chemische Fabrik Griesheim of Germany developed another chloralkali process which entered commercial production in 1888.
Elemental chlorine solutions dissolved in chemically basic water (sodium and calcium hypochlorite) were first used as anti-putrefaction agents and disinfectants in the 1820s, in France, long before the establishment of the germ theory of disease. This practice was pioneered by Antoine-Germain Labarraque, who adapted Berthollet's "Javel water" bleach and other chlorine preparations. Elemental chlorine has since served a continuous function in topical antisepsis (wound irrigation solutions and the like) and public sanitation, particularly in swimming and drinking water.
Chlorine gas was first used as a weapon on April 22, 1915 at the Second Battle of Ypres by the German Army. The effect on the Allies was devastating because the existing gas masks were difficult to deploy and had not been broadly distributed.
Chlorine is the second halogen, being a nonmetal in group 17 of the periodic table. Its properties are thus similar to fluorine, bromine, and iodine, and are largely intermediate between those of the first two. Chlorine has the electron configuration [Ne]3s23p5, with the seven electrons in the third and outermost shell acting as its valence electrons. Like all halogens, it is thus one electron short of a full octet, and is hence a strong oxidising agent, reacting with many elements in order to complete its outer shell. Corresponding to periodic trends, it is intermediate in electronegativity between fluorine and bromine (F: 3.98, Cl: 3.16, Br: 2.96, I: 2.66), and is less reactive than fluorine and more reactive than bromine. It is also a weaker oxidising agent than fluorine, but a stronger one than bromine. Conversely, the chloride ion is a weaker reducing agent than bromide, but a stronger one than fluoride. It is intermediate in atomic radius between fluorine and bromine, and this leads to many of its atomic properties similarly continuing the trend from iodine to bromine upward, such as first ionisation energy, electron affinity, enthalpy of dissociation of the X2 molecule (X = Cl, Br, I), ionic radius, and X–X bond length. (Fluorine is anomalous due to its small size.)
All four stable halogens experience intermolecular van der Waals forces of attraction, and their strength increases together with the number of electrons among all homonuclear diatomic halogen molecules. Thus, the melting and boiling points of chlorine are intermediate between those of fluorine and bromine: chlorine melts at −101.0 °C and boils at −34.0 °C. As a result of the increasing molecular weight of the halogens down the group, the density and heats of fusion and vaporisation of chlorine are again intermediate between those of bromine and fluorine, although all their heats of vaporisation are fairly low (leading to high volatility) thanks to their diatomic molecular structure. The halogens darken in colour as the group is descended: thus, while fluorine is a pale yellow gas, chlorine is distinctly yellow-green. This trend occurs because the wavelengths of visible light absorbed by the halogens increase down the group. Specifically, the colour of a halogen, such as chlorine, results from the electron transition between the highest occupied antibonding πg molecular orbital and the lowest vacant antibonding σu molecular orbital. The colour fades at low temperatures, so that solid chlorine at −195 °C is almost colourless.
Like solid bromine and iodine, solid chlorine crystallises in the orthorhombic crystal system, in a layered lattice of Cl2 molecules. The Cl–Cl distance is 198 pm (close to the gaseous Cl–Cl distance of 199 pm) and the Cl···Cl distance between molecules is 332 pm within a layer and 382 pm between layers (compare the van der Waals radius of chlorine, 180 pm). This structure means that chlorine is a very poor conductor of electricity, and indeed its conductivity is so low as to be practically unmeasurable.
Chlorine has two stable isotopes, 35Cl and 37Cl. These are its only two natural isotopes occurring in quantity, with 35Cl making up 76% of natural chlorine and 37Cl making up the remaining 24%. Both are synthesised in stars in the oxygen-burning and silicon-burning processes. Both have nuclear spin 3/2+ and thus may be used for nuclear magnetic resonance, although the spin magnitude being greater than 1/2 results in non-spherical nuclear charge distribution and thus resonance broadening as a result of a nonzero nuclear quadrupole moment and resultant quadrupolar relaxation. The other chlorine isotopes are all radioactive, with half-lives too short to occur in nature primordially. Of these, the most commonly used in the laboratory are 36Cl (t1/2 = 3.0×10^5 y) and 38Cl (t1/2 = 37.2 min), which may be produced from the neutron activation of natural chlorine.
The most stable chlorine radioisotope is 36Cl. The primary decay mode of isotopes lighter than 35Cl is electron capture to isotopes of sulfur; that of isotopes heavier than 37Cl is beta decay to isotopes of argon; and 36Cl may decay by either mode to stable 36S or 36Ar. 36Cl occurs in trace quantities in nature as a cosmogenic nuclide in a ratio of about (7–10) × 10^−13 to 1 with stable chlorine isotopes: it is produced in the atmosphere by spallation of 36Ar by interactions with cosmic ray protons. In the top meter of the lithosphere, 36Cl is generated primarily by thermal neutron activation of 35Cl and spallation of 39K and 40Ca. In the subsurface environment, muon capture by 40Ca becomes more important as a way to generate 36Cl.
Chlorine is intermediate in reactivity between fluorine and bromine, and is one of the most reactive elements. Chlorine is a weaker oxidising agent than fluorine but a stronger one than bromine or iodine. This can be seen from the standard electrode potentials of the X2/X− couples (F, +2.866 V; Cl, +1.395 V; Br, +1.087 V; I, +0.615 V; At, approximately +0.3 V). However, this trend is not shown in the bond energies because fluorine is singular due to its small size, low polarisability, and inability to show hypervalence. As another difference, chlorine has a significant chemistry in positive oxidation states while fluorine does not. Chlorination often leads to higher oxidation states than bromination or iodination but lower oxidation states than fluorination. Chlorine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Cl bonds.
Given that E°(1/2O2/H2O) = +1.229 V, which is less than +1.395 V, it would be expected that chlorine should be able to oxidise water to oxygen and hydrochloric acid. However, the kinetics of this reaction are unfavorable, and there is also a bubble overpotential effect to consider, so that electrolysis of aqueous chloride solutions evolves chlorine gas and not oxygen gas, a fact that is very useful for the industrial production of chlorine.
The simplest chlorine compound is hydrogen chloride, HCl, a major chemical in industry as well as in the laboratory, both as a gas and dissolved in water as hydrochloric acid. It is often produced by burning hydrogen gas in chlorine gas, or as a byproduct of chlorinating hydrocarbons. Another approach is to treat sodium chloride with concentrated sulfuric acid to produce hydrochloric acid, also known as the "salt-cake" process:
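NaCl + H2SO4 → NaHSO4 + HCl (at about 150 °C)
NaCl + NaHSO4 → Na2SO4 + HCl (at about 540–600 °C)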
In the laboratory, hydrogen chloride gas may be made by drying the acid with concentrated sulfuric acid. Deuterium chloride, DCl, may be produced by reacting benzoyl chloride with heavy water (D2O).
At room temperature, hydrogen chloride is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the larger electronegative chlorine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen chloride at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Hydrochloric acid is a strong acid (pKa = −7) because the hydrogen bonds to chlorine are too weak to inhibit dissociation. The HCl/H2O system has many hydrates HCl·nH2O for n = 1, 2, 3, 4, and 6. Beyond a 1:1 mixture of HCl and H2O, the system separates completely into two separate liquid phases. Hydrochloric acid forms an azeotrope with boiling point 108.58 °C at 20.22 g HCl per 100 g solution; thus hydrochloric acid cannot be concentrated beyond this point by distillation.
Unlike hydrogen fluoride, anhydrous liquid hydrogen chloride is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2Cl+ and HCl2− ions – the latter, in any case, are much less stable than the bifluoride ions (HF2−) due to the very weak hydrogen bonding between hydrogen and chlorine, though its salts with very large and weakly polarising cations such as Cs+ and NR4+ (R = Me, Et, Bu) may still be isolated. Anhydrous hydrogen chloride is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides. It readily protonates electrophiles containing lone-pairs or π bonds. Solvolysis, ligand replacement reactions, and oxidations are well-characterised in hydrogen chloride solution:
Nearly all elements in the periodic table form binary chlorides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the highly unstable XeCl2 and XeCl4); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than chlorine's (oxygen and fluorine) so that the resultant binary compounds are formally not chlorides but rather oxides or fluorides of chlorine. Even though the nitrogen in NCl3 bears a negative charge, the compound is usually called nitrogen trichloride.
Chlorination of metals with Cl2 usually leads to a higher oxidation state than bromination with Br2 when multiple oxidation states are available, such as in MoCl5 and MoBr3. Chlorides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrochloric acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen chloride gas. These methods work best when the chloride product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative chlorination of the element with chlorine or hydrogen chloride, high-temperature chlorination of a metal oxide or other halide by chlorine, a volatile metal chloride, carbon tetrachloride, or an organic chloride. For instance, zirconium dioxide reacts with chlorine at standard conditions to produce zirconium tetrachloride, and uranium trioxide reacts with hexachloropropene when heated under reflux to give uranium tetrachloride. The second example also involves a reduction in oxidation state, which can also be achieved by reducing a higher chloride using hydrogen or a metal as a reducing agent. This may also be achieved by thermal decomposition or disproportionation as follows:
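For example, gold(III) chloride decomposes thermally to gold(I) chloride, which in turn disproportionates on further heating:
2 AuCl3 → 2 AuCl + 2 Cl2 (at about 160 °C)
3 AuCl → AuCl3 + 2 Au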
Most metal chlorides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular chlorides, as do metals in high oxidation states from +3 and above. Both ionic and covalent chlorides are known for metals in oxidation state +3 (e.g. scandium chloride is mostly ionic, but aluminium chloride is not). Silver chloride is very insoluble in water and is thus often used as a qualitative test for chloride ions.
Although dichlorine is a strong oxidising agent with a high first ionisation energy, it may be oxidised under extreme conditions to form the [Cl2]+ cation. This is very unstable and has only been characterised by its electronic band spectrum when produced in a low-pressure discharge tube. The yellow [Cl3]+ cation is more stable and may be produced as follows:
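Cl2 + ClF + AsF5 → [Cl3]+[AsF6]− (at −78 °C)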
This reaction is conducted in the oxidising solvent arsenic pentafluoride. The trichloride anion, [Cl3]−, has also been characterised; it is analogous to triiodide.
The three fluorides of chlorine form a subset of the interhalogen compounds, all of which are diamagnetic. Some cationic and anionic derivatives are known, such as ClF2−, ClF4−, ClF2+, and Cl2F+. Some pseudohalides of chlorine are also known, such as cyanogen chloride (ClCN, linear), chlorine cyanate (ClNCO), chlorine thiocyanate (ClSCN, unlike its oxygen counterpart), and chlorine azide (ClN3).
Chlorine monofluoride (ClF) is extremely thermally stable, and is sold commercially in 500-gram steel lecture bottles. It is a colourless gas that melts at −155.6 °C and boils at −100.1 °C. It may be produced by the reaction of its elements at 225 °C, though it must then be separated and purified from chlorine trifluoride and its reactants. Its properties are mostly intermediate between those of chlorine and fluorine. It will react with many metals and nonmetals from room temperature and above, fluorinating them and liberating chlorine. It will also act as a chlorofluorinating agent, adding chlorine and fluorine across a multiple bond or by oxidation: for example, it will attack carbon monoxide to form carbonyl chlorofluoride, COFCl. It will react analogously with hexafluoroacetone, (CF3)2CO, with a potassium fluoride catalyst to produce heptafluoroisopropyl hypochlorite, (CF3)2CFOCl; with nitriles RCN to produce RCF2NCl2; and with the sulfur oxides SO2 and SO3 to produce ClSO2F and ClOSO2F respectively. It will also react exothermically with compounds containing –OH and –NH groups, such as water:
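H2O + 2 ClF → 2 HF + Cl2O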
Chlorine trifluoride (ClF3) is a volatile colourless molecular liquid which melts at −76.3 °C and boils at 11.8 °C. It may be formed by directly fluorinating gaseous chlorine or chlorine monofluoride at 200–300 °C. One of the most reactive chemical compounds known, the list of elements it sets on fire is diverse, containing hydrogen, potassium, phosphorus, arsenic, antimony, sulfur, selenium, tellurium, bromine, iodine, and powdered molybdenum, tungsten, rhodium, iridium, and iron. It will also ignite water, along with many substances which in ordinary circumstances would be considered chemically inert such as asbestos, concrete, glass, and sand. When heated, it will even corrode such noble metals as palladium, platinum, and gold, and even the noble gases xenon and radon do not escape fluorination. An impermeable fluoride layer is formed by sodium, magnesium, aluminium, zinc, tin, and silver, which may be removed by heating. Nickel, copper, and steel containers are usually used due to their great resistance to attack by chlorine trifluoride, stemming from the formation of an unreactive layer of metal fluoride. Its reaction with hydrazine to form hydrogen fluoride, nitrogen, and chlorine gases was used in experimental rocket engines, but has problems largely stemming from its extreme hypergolicity resulting in ignition without any measurable delay. Today, it is mostly used in nuclear fuel processing, to oxidise uranium to uranium hexafluoride for its enriching and to separate it from plutonium, as well as in the semiconductor industry, where it is used to clean chemical vapor deposition chambers. It can act as a fluoride ion donor or acceptor (Lewis base or acid), although it does not dissociate appreciably into ClF2+ and ClF4− ions.
Chlorine pentafluoride (ClF5) is made on a large scale by direct fluorination of chlorine with excess fluorine gas at 350 °C and 250 atm, and on a small scale by reacting metal chlorides with fluorine gas at 100–300 °C. It melts at −103 °C and boils at −13.1 °C. It is a very strong fluorinating agent, although it is still not as effective as chlorine trifluoride. Only a few specific stoichiometric reactions have been characterised. Arsenic pentafluoride and antimony pentafluoride form ionic adducts of the form [ClF4]+[MF6]− (M = As, Sb) and water reacts vigorously as follows:
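2 H2O + ClF5 → 4 HF + FClO2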
The product, chloryl fluoride, is one of the five known chlorine oxide fluorides. These range from the thermally unstable FClO to the chemically unreactive perchloryl fluoride (FClO3), the other three being FClO2, F3ClO, and F3ClO2. All five behave similarly to the chlorine fluorides, both structurally and chemically, and may act as Lewis acids or bases by gaining or losing fluoride ions respectively or as very strong oxidising and fluorinating agents.
The chlorine oxides are well-studied in spite of their instability (all of them are endothermic compounds). They are important because they are produced when chlorofluorocarbons undergo photolysis in the upper atmosphere and cause the destruction of the ozone layer. None of them can be made from directly reacting the elements.
Dichlorine monoxide (Cl2O) is a brownish-yellow gas (red-brown when solid or liquid) which may be obtained by reacting chlorine gas with yellow mercury(II) oxide. It is very soluble in water, in which it is in equilibrium with hypochlorous acid (HOCl), of which it is the anhydride. It is thus an effective bleach and is mostly used to make hypochlorites. It explodes on heating or sparking or in the presence of ammonia gas.
Chlorine dioxide (ClO2) was the first chlorine oxide to be discovered in 1811 by Humphry Davy. It is a yellow paramagnetic gas (deep-red as a solid or liquid), as expected from its having an odd number of electrons: it is stable towards dimerisation due to the delocalisation of the unpaired electron. It explodes above −40 °C as a liquid and under pressure as a gas and therefore must be made at low concentrations for wood-pulp bleaching and water treatment. It is usually prepared by reducing a chlorate as follows:
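ClO3− + Cl− + 2 H+ → ClO2 + 1/2 Cl2 + H2O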
Its production is thus intimately linked to the redox reactions of the chlorine oxoacids. It is a strong oxidising agent, reacting with sulfur, phosphorus, phosphorus halides, and potassium borohydride. It dissolves exothermically in water to form dark-green solutions that very slowly decompose in the dark. Crystalline clathrate hydrates ClO2·nH2O (n ≈ 6–10) separate out at low temperatures. However, in the presence of light, these solutions rapidly photodecompose to form a mixture of chloric and hydrochloric acids. Photolysis of individual ClO2 molecules results in the radicals ClO and ClOO, while at room temperature mostly chlorine, oxygen, and some ClO3 and Cl2O6 are produced. Cl2O3 is also produced when photolysing the solid at −78 °C: it is a dark brown solid that explodes below 0 °C. The ClO radical leads to the depletion of atmospheric ozone and is thus environmentally important as follows:
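Cl• + O3 → ClO• + O2
ClO• + O• → Cl• + O2
The net result is the conversion of ozone and atomic oxygen to molecular oxygen, with the chlorine radical regenerated to continue the cycle.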
Chlorine perchlorate (ClOClO3) is a pale yellow liquid that is less stable than ClO2 and decomposes at room temperature to form chlorine, oxygen, and dichlorine hexoxide (Cl2O6). Chlorine perchlorate may also be considered a chlorine derivative of perchloric acid (HOClO3), similar to the thermally unstable chlorine derivatives of other oxoacids: examples include chlorine nitrate (ClONO2, vigorously reactive and explosive), and chlorine fluorosulfate (ClOSO2F, more stable but still moisture-sensitive and highly reactive). Dichlorine hexoxide is a dark-red liquid that freezes to form a solid which turns yellow at −180 °C: it is usually made by reaction of chlorine dioxide with oxygen. Despite attempts to rationalise it as the dimer of ClO3, it reacts more as though it were chloryl perchlorate, [ClO2]+[ClO4]−, which has been confirmed to be the correct structure of the solid. It hydrolyses in water to give a mixture of chloric and perchloric acids: the analogous reaction with anhydrous hydrogen fluoride does not proceed to completion.
Dichlorine heptoxide (Cl2O7) is the anhydride of perchloric acid (HClO4) and can readily be obtained from it by dehydrating it with phosphoric acid at −10 °C and then distilling the product at −35 °C and 1 mmHg. It is a shock-sensitive, colourless oily liquid. It is the least reactive of the chlorine oxides, being the only one to not set organic materials on fire at room temperature. It may be dissolved in water to regenerate perchloric acid or in aqueous alkalis to regenerate perchlorates. However, it thermally decomposes explosively by breaking one of the central Cl–O bonds, producing the radicals ClO3 and ClO4 which immediately decompose to the elements through intermediate oxides.
Chlorine forms four oxoacids: hypochlorous acid (HOCl), chlorous acid (HOClO), chloric acid (HOClO2), and perchloric acid (HOClO3). As can be seen from the redox potentials given in the adjacent table, chlorine is much more stable towards disproportionation in acidic solutions than in alkaline solutions:
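Cl2 + H2O ⇌ HOCl + H+ + Cl− (in acidic solution)
Cl2 + 2 OH− ⇌ OCl− + Cl− + H2O (in cold alkaline solution)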
The hypochlorite ions also disproportionate further to produce chloride and chlorate (3 ClO− ⇌ 2 Cl− + ClO3−) but this reaction is quite slow at temperatures below 70 °C in spite of the very favourable equilibrium constant of 10^27. The chlorate ions may themselves disproportionate to form chloride and perchlorate (4 ClO3− ⇌ Cl− + 3 ClO4−) but this is still very slow even at 100 °C despite the very favourable equilibrium constant of 10^20. The rates of reaction for the chlorine oxyanions increase as the oxidation state of chlorine decreases. The strengths of the chlorine oxyacids increase very quickly as the oxidation state of chlorine increases due to the increasing delocalisation of charge over more and more oxygen atoms in their conjugate bases.
Most of the chlorine oxoacids may be produced by exploiting these disproportionation reactions. Hypochlorous acid (HOCl) is highly reactive and quite unstable; its salts are mostly used for their bleaching and sterilising abilities. They are very strong oxidising agents, transferring an oxygen atom to most inorganic species. Chlorous acid (HOClO) is even more unstable and cannot be isolated or concentrated without decomposition: it is known from the decomposition of aqueous chlorine dioxide. However, sodium chlorite is a stable salt and is useful for bleaching and stripping textiles, as an oxidising agent, and as a source of chlorine dioxide. Chloric acid (HOClO2) is a strong acid that is quite stable in cold water up to 30% concentration, but on warming gives chlorine and chlorine dioxide. Evaporation under reduced pressure allows it to be concentrated further to about 40%, but then it decomposes to perchloric acid, chlorine, oxygen, water, and chlorine dioxide. Its most important salt is sodium chlorate, mostly used to make chlorine dioxide to bleach paper pulp. The decomposition of chlorate to chloride and oxygen is a common way to produce oxygen in the laboratory on a small scale. Chloride and chlorate may comproportionate to form chlorine as follows:
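ClO3− + 5 Cl− + 6 H+ → 3 Cl2 + 3 H2O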
Perchlorates and perchloric acid (HOClO3) are the most stable oxo-compounds of chlorine, in keeping with the fact that chlorine compounds are most stable when the chlorine atom is in its lowest (−1) or highest (+7) possible oxidation states. Perchloric acid and aqueous perchlorates are vigorous and sometimes violent oxidising agents when heated, in stark contrast to their mostly inactive nature at room temperature due to the high activation energies for these reactions for kinetic reasons. Perchlorates are made by electrolytically oxidising sodium chlorate, and perchloric acid is made by reacting anhydrous sodium perchlorate or barium perchlorate with concentrated hydrochloric acid, filtering away the chloride precipitated and distilling the filtrate to concentrate it. Anhydrous perchloric acid is a colourless mobile liquid that is sensitive to shock and explodes on contact with most organic compounds, sets hydrogen iodide and thionyl chloride on fire, and even oxidises silver and gold. Although it is a weak ligand, weaker than water, a few compounds involving coordinated ClO4− are known. The table below presents typical oxidation states for chlorine as given in secondary schools or colleges. However, in university chemistry courses it should be pointed out that there are more complex chemical compounds whose structure can only be explained using modern quantum chemical methods, for example the cluster technetium chloride [(CH3)4N]3[Tc6Cl14], in which 6 of the 14 chlorine atoms are formally divalent and the oxidation states are fractional. In addition, all the above chemical regularities are valid for "normal" or close-to-normal conditions, while at ultra-high pressures (for example, in the cores of large planets), chlorine can exhibit an oxidation state of −3, forming a Na3Cl compound with sodium, which does not fit into traditional concepts of chemistry.
Like the other carbon–halogen bonds, the C–Cl bond is a common functional group that forms part of core organic chemistry. Formally, compounds with this functional group may be considered organic derivatives of the chloride anion. Due to the difference of electronegativity between chlorine (3.16) and carbon (2.55), the carbon in a C–Cl bond is electron-deficient and thus electrophilic. Chlorination modifies the physical properties of hydrocarbons in several ways: chlorocarbons are typically denser than water due to the higher atomic weight of chlorine versus hydrogen, and aliphatic organochlorides are alkylating agents because chloride is a leaving group.
Alkanes and aryl alkanes may be chlorinated under free-radical conditions, with UV light. However, the extent of chlorination is difficult to control: the reaction is not regioselective and often results in a mixture of various isomers with different degrees of chlorination, though this may be permissible if the products are easily separated. Aryl chlorides may be prepared by the Friedel-Crafts halogenation, using chlorine and a Lewis acid catalyst. The haloform reaction, using chlorine and sodium hydroxide, is also able to generate alkyl halides from methyl ketones, and related compounds. Chlorine adds to the multiple bonds on alkenes and alkynes as well, giving di- or tetrachloro compounds. However, due to the expense and reactivity of chlorine, organochlorine compounds are more commonly produced by using hydrogen chloride, or with chlorinating agents such as phosphorus pentachloride (PCl5) or thionyl chloride (SOCl2). The last is very convenient in the laboratory because all side products are gaseous and do not have to be distilled out.
Many organochlorine compounds have been isolated from natural sources ranging from bacteria to humans. Chlorinated organic compounds are found in nearly every class of biomolecules including alkaloids, terpenes, amino acids, flavonoids, steroids, and fatty acids. Organochlorides, including dioxins, are produced in the high temperature environment of forest fires, and dioxins have been found in the preserved ashes of lightning-ignited fires that predate synthetic dioxins. In addition, a variety of simple chlorinated hydrocarbons including dichloromethane, chloroform, and carbon tetrachloride have been isolated from marine algae. A majority of the chloromethane in the environment is produced naturally by biological decomposition, forest fires, and volcanoes.
Some types of organochlorides, though not all, have significant toxicity to plants or animals, including humans. Dioxins, produced when organic matter is burned in the presence of chlorine, and some insecticides, such as DDT, are persistent organic pollutants which pose dangers when they are released into the environment. For example, DDT, which was widely used to control insects in the mid 20th century, also accumulates in food chains, and causes reproductive problems (e.g., eggshell thinning) in certain bird species. Because the C–Cl bond undergoes ready homolytic fission to create chlorine radicals in the upper atmosphere, chlorofluorocarbons have been phased out because of the harm they do to the ozone layer.
Chlorine is too reactive to occur as the free element in nature but is very abundant in the form of its chloride salts. It is the twenty-first most abundant element in Earth's crust and makes up 126 parts per million of it, through the large deposits of chloride minerals, especially sodium chloride, that have been evaporated from water bodies. All of these pale in comparison to the reserves of chloride ions in seawater: smaller amounts at higher concentrations occur in some inland seas and underground brine wells, such as the Great Salt Lake in Utah and the Dead Sea in Israel.
Small batches of chlorine gas are prepared in the laboratory by combining hydrochloric acid and manganese dioxide, but the need rarely arises due to its ready availability. In industry, elemental chlorine is usually produced by the electrolysis of sodium chloride dissolved in water. This method, the chloralkali process industrialized in 1892, now provides most industrial chlorine gas. Along with chlorine, the method yields hydrogen gas and sodium hydroxide, which is the most valuable product. The process proceeds according to the following chemical equation:
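2 NaCl + 2 H2O → Cl2 + H2 + 2 NaOH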
The electrolysis of chloride solutions proceeds according to the following equations:
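Cathode: 2 H2O + 2 e− → H2 + 2 OH−
Anode: 2 Cl− → Cl2 + 2 e−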
In diaphragm cell electrolysis, an asbestos (or polymer-fiber) diaphragm separates a cathode and an anode, preventing the chlorine forming at the anode from re-mixing with the sodium hydroxide and the hydrogen formed at the cathode. The salt solution (brine) is continuously fed to the anode compartment and flows through the diaphragm to the cathode compartment, where the caustic alkali is produced and the brine is partially depleted. Diaphragm methods produce dilute and slightly impure alkali, but they are not burdened with the problem of mercury disposal and they are more energy efficient.
Membrane cell electrolysis employs a permeable membrane as an ion exchanger. Saturated sodium (or potassium) chloride solution is passed through the anode compartment, leaving at a lower concentration. This method also produces very pure sodium (or potassium) hydroxide but has the disadvantage of requiring very pure brine at high concentrations.
In the Deacon process, hydrogen chloride from the production of organochlorine compounds is recovered as chlorine. The process relies on oxidation using oxygen:
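4 HCl + O2 → 2 Cl2 + 2 H2O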
The reaction requires a catalyst. As introduced by Deacon, early catalysts were based on copper. Commercial processes, such as the Mitsui MT-Chlorine Process, have switched to chromium and ruthenium-based catalysts. The chlorine produced is available in cylinders from sizes ranging from 450 g to 70 kg, as well as drums (865 kg), tank wagons (15 tonnes on roads; 27–90 tonnes by rail), and barges (600–1200 tonnes).
Sodium chloride is the most common chlorine compound, and is the main source of chlorine for the demand by the chemical industry. About 15,000 chlorine-containing compounds are commercially traded, including such diverse compounds as chlorinated methanes and ethanes, vinyl chloride, polyvinyl chloride (PVC), aluminium trichloride for catalysis, and the chlorides of magnesium, titanium, zirconium, and hafnium, which are the precursors for producing the pure forms of those elements.
Quantitatively, of all elemental chlorine produced, about 63% is used in the manufacture of organic compounds, and 18% in the manufacture of inorganic chlorine compounds. About 15,000 chlorine compounds are used commercially. The remaining 19% of chlorine produced is used for bleaches and disinfection products. The most significant of organic compounds in terms of production volume are 1,2-dichloroethane and vinyl chloride, intermediates in the production of PVC. Other particularly important organochlorines are methyl chloride, methylene chloride, chloroform, vinylidene chloride, trichloroethylene, perchloroethylene, allyl chloride, epichlorohydrin, chlorobenzene, dichlorobenzenes, and trichlorobenzenes. The major inorganic compounds include HCl, Cl2O, HOCl, NaClO3, chlorinated isocyanurates, AlCl3, SiCl4, SnCl4, PCl3, PCl5, POCl3, AsCl3, SbCl3, SbCl5, BiCl3, and ZnCl2.
In France (as elsewhere), animal intestines were processed to make musical instrument strings, Goldbeater's skin and other products. This was done in "gut factories" (boyauderies), and it was an odiferous and unhealthy process. In or about 1820, the Société d'encouragement pour l'industrie nationale offered a prize for the discovery of a method, chemical or mechanical, for separating the peritoneal membrane of animal intestines without putrefaction. The prize was won by Antoine-Germain Labarraque, a 44-year-old French chemist and pharmacist who had discovered that Berthollet's chlorinated bleaching solutions ("Eau de Javel") not only destroyed the smell of putrefaction of animal tissue decomposition, but also actually retarded the decomposition.
Labarraque's research resulted in the use of chlorides and hypochlorites of lime (calcium hypochlorite) and of sodium (sodium hypochlorite) in the boyauderies. The same chemicals were found to be useful in the routine disinfection and deodorization of latrines, sewers, markets, abattoirs, anatomical theatres, and morgues. They were successful in hospitals, lazarets, prisons, infirmaries (both on land and at sea), magnaneries, stables, cattle-sheds, etc.; and they were beneficial during exhumations, embalming, outbreaks of epidemic disease, fever, and blackleg in cattle.
Labarraque's chlorinated lime and soda solutions have been advocated since 1828 to prevent infection (called "contagious infection", presumed to be transmitted by "miasmas"), and to treat putrefaction of existing wounds, including septic wounds. In his 1828 work, Labarraque recommended that doctors breathe chlorine, wash their hands in chlorinated lime, and even sprinkle chlorinated lime about the patients' beds in cases of "contagious infection". In 1828, the contagion of infections was well known, even though the agency of the microbe was not discovered until more than half a century later.
During the Paris cholera outbreak of 1832, large quantities of so-called chloride of lime were used to disinfect the capital. This was not simply modern calcium chloride, but chlorine gas dissolved in lime-water (dilute calcium hydroxide) to form calcium hypochlorite (chlorinated lime). Labarraque's discovery helped to remove the terrible stench of decay from hospitals and dissecting rooms, and by doing so, effectively deodorised the Latin Quarter of Paris. These "putrid miasmas" were thought by many to cause the spread of "contagion" and "infection" – both words used before the germ theory of infection. Chloride of lime was used for destroying odors and "putrid matter". One source claims chloride of lime was used by Dr. John Snow to disinfect water from the cholera-contaminated well that was feeding the Broad Street pump in 1854 London, though three other reputable sources that describe that famous cholera epidemic do not mention the incident. One reference makes it clear that chloride of lime was used to disinfect the offal and filth in the streets surrounding the Broad Street pump – a common practice in mid-nineteenth century England.
Perhaps the most famous application of Labarraque's chlorine and chemical base solutions was in 1847, when Ignaz Semmelweis used chlorine-water (chlorine dissolved in pure water, which was cheaper than chlorinated lime solutions) to disinfect the hands of Austrian doctors, which Semmelweis noticed still carried the stench of decomposition from the dissection rooms to the patient examination rooms. Long before the germ theory of disease, Semmelweis theorized that "cadaveric particles" were transmitting decay from fresh medical cadavers to living patients, and he used the well-known "Labarraque's solutions" as the only known method to remove the smell of decay and tissue decomposition (which he found that soap did not). The solutions proved to be far more effective antiseptics than soap (Semmelweis was also aware of their greater efficacy, but not the reason), and this resulted in Semmelweis's celebrated success in stopping the transmission of childbed fever ("puerperal fever") in the maternity wards of Vienna General Hospital in Austria in 1847.
Much later, during World War I in 1916, a standardized and diluted modification of Labarraque's solution containing hypochlorite (0.5%) and boric acid as an acidic stabilizer was developed by Henry Drysdale Dakin (who gave full credit to Labarraque's prior work in this area). Called Dakin's solution, the method of wound irrigation with chlorinated solutions allowed antiseptic treatment of a wide variety of open wounds, long before the modern antibiotic era. A modified version of this solution continues to be employed in wound irrigation in modern times, where it remains effective against bacteria that are resistant to multiple antibiotics (see Century Pharmaceuticals).
The first continuous application of chlorination to U.S. drinking water was installed in Jersey City, New Jersey, in 1908. By 1918, the US Department of Treasury called for all drinking water to be disinfected with chlorine. Chlorine is presently an important chemical for water purification (such as in water treatment plants), in disinfectants, and in bleach. Even small water supplies are now routinely chlorinated.
Chlorine is usually used (in the form of hypochlorous acid) to kill bacteria and other microbes in drinking water supplies and public swimming pools. In most private swimming pools, chlorine itself is not used, but rather sodium hypochlorite, formed from chlorine and sodium hydroxide, or solid tablets of chlorinated isocyanurates. The drawback of using chlorine in swimming pools is that the chlorine reacts with the amino acids in proteins in human hair and skin. Contrary to popular belief, the distinctive "chlorine aroma" associated with swimming pools is not the result of elemental chlorine itself, but of chloramine, a chemical compound produced by the reaction of free dissolved chlorine with amines in organic substances including those in urine and sweat. As a disinfectant in water, chlorine is more than three times as effective against Escherichia coli as bromine, and more than six times as effective as iodine. Increasingly, monochloramine itself is being directly added to drinking water for purposes of disinfection, a process known as chloramination.
It is often impractical to store and use poisonous chlorine gas for water treatment, so alternative methods of adding chlorine are used. These include hypochlorite solutions, which gradually release chlorine into the water, and compounds like sodium dichloro-s-triazinetrione (dihydrate or anhydrous), sometimes referred to as "dichlor", and trichloro-s-triazinetrione, sometimes referred to as "trichlor". These compounds are stable while solid and may be used in powdered, granular, or tablet form. When added in small amounts to pool water or industrial water systems, the chlorine atoms hydrolyze from the rest of the molecule, forming hypochlorous acid (HOCl), which acts as a general biocide, killing germs, microorganisms, algae, and so on.
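To make the dosing arithmetic concrete, here is a minimal Python sketch. The "available chlorine" fractions are nominal, product-dependent values assumed only for illustration, and the pool volume and 2 ppm target are likewise arbitrary examples, not recommendations.

# Hypothetical dosing sketch: grams of product needed to raise free chlorine
# by a target amount in a given volume of water.
# "Available chlorine" fractions are nominal, product-dependent assumptions.

AVAILABLE_CHLORINE = {
    "sodium hypochlorite (liquid, ~12%)": 0.12,
    "calcium hypochlorite": 0.65,
    "dichlor": 0.56,
    "trichlor": 0.90,
}

def dose_grams(volume_litres: float, target_ppm: float, available_fraction: float) -> float:
    """Return grams of product to add.

    1 ppm of free chlorine is ~1 mg of available chlorine per litre, so
    required mass = volume (L) * target (mg/L) / available fraction.
    """
    milligrams = volume_litres * target_ppm / available_fraction
    return milligrams / 1000.0

if __name__ == "__main__":
    pool_litres = 50_000  # assumed size of a small residential pool
    for product, fraction in AVAILABLE_CHLORINE.items():
        grams = dose_grams(pool_litres, target_ppm=2.0, available_fraction=fraction)
        print(f"{product}: {grams:.0f} g to raise free chlorine by 2 ppm")

The same mass of product goes much further the higher its available-chlorine fraction, which is why the stable solid triazinetriones are convenient for routine dosing.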
Chlorine gas, also known as bertholite, was first used as a weapon in World War I by Germany on April 22, 1915, in the Second Battle of Ypres. As described by the soldiers, it had the distinctive smell of a mixture of pepper and pineapple. It also tasted metallic and stung the back of the throat and chest. Chlorine reacts with water in the mucosa of the lungs to form hydrochloric acid, destructive to living tissue and potentially lethal. Human respiratory systems can be protected from chlorine gas by gas masks with activated charcoal or other filters, which makes chlorine gas much less lethal than other chemical weapons. Its use as a weapon was pioneered by Fritz Haber of the Kaiser Wilhelm Institute in Berlin, a German scientist who later became a Nobel laureate, working with the German chemical industry to develop methods for discharging chlorine gas against an entrenched enemy. After its first use, both sides in the conflict used chlorine as a chemical weapon, but it was soon replaced by the more deadly phosgene and mustard gas.
Chlorine gas was also used during the Iraq War in Anbar Province in 2007, with insurgents packing truck bombs with mortar shells and chlorine tanks. The attacks killed two people from the explosives and sickened more than 350. Most of the deaths were caused by the force of the explosions rather than the effects of chlorine since the toxic gas is readily dispersed and diluted in the atmosphere by the blast. In some bombings, over a hundred civilians were hospitalized due to breathing difficulties. The Iraqi authorities tightened security for elemental chlorine, which is essential for providing safe drinking water to the population.
On 23 October 2014, it was reported that the Islamic State of Iraq and the Levant had used chlorine gas in the town of Duluiyah, Iraq. Laboratory analysis of clothing and soil samples confirmed the use of chlorine gas against Kurdish Peshmerga Forces in a vehicle-borne improvised explosive device attack on 23 January 2015 at the Highway 47 Kiske Junction near Mosul.
Syria has also used chlorine as a chemical weapon, delivered from barrel bombs and rockets. In 2016, the OPCW-UN Joint Investigative Mechanism concluded that the Syrian government had used chlorine as a chemical weapon in three separate attacks. Later investigations by the OPCW's Investigation and Identification Team concluded that the Syrian Air Force was responsible for chlorine attacks in 2017 and 2018.
The chloride anion is an essential nutrient for metabolism. Chlorine is needed for the production of hydrochloric acid in the stomach and in cellular pump functions. The main dietary source is table salt (sodium chloride). Overly low or high concentrations of chloride in the blood are examples of electrolyte disturbances. Hypochloremia (having too little chloride) rarely occurs in the absence of other abnormalities; it is sometimes associated with hypoventilation and can accompany chronic respiratory acidosis. Hyperchloremia (having too much chloride) usually does not produce symptoms; when symptoms do occur, they tend to resemble those of hypernatremia (having too much sodium). Reduction in blood chloride leads to cerebral dehydration; symptoms are most often caused by rapid rehydration, which results in cerebral edema. Hyperchloremia can affect oxygen transport.
Chlorine is a toxic gas that attacks the respiratory system, eyes, and skin. Because it is denser than air, it tends to accumulate at the bottom of poorly ventilated spaces. Chlorine gas is a strong oxidizer, which may react with flammable materials.
Chlorine is detectable with measuring devices in concentrations as low as 0.2 parts per million (ppm), and by smell at 3 ppm. Coughing and vomiting may occur at 30 ppm and lung damage at 60 ppm. About 1000 ppm can be fatal after a few deep breaths of the gas. The IDLH (immediately dangerous to life and health) concentration is 10 ppm. Breathing lower concentrations can aggravate the respiratory system and exposure to the gas can irritate the eyes. When chlorine is inhaled at concentrations greater than 30 ppm, it reacts with water within the lungs, producing hydrochloric acid (HCl) and hypochlorous acid (HOCl).
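The figures above can be encoded directly; the following Python sketch simply bands a measured concentration against the thresholds quoted in this paragraph. The banding logic is illustrative only, not a regulatory scheme.

# Illustrative banding of chlorine gas concentrations using the figures
# quoted in the text above; not a substitute for any regulatory standard.

THRESHOLDS_PPM = [
    (0.2, "detectable by instruments"),
    (3.0, "detectable by smell"),
    (10.0, "IDLH (immediately dangerous to life and health)"),
    (30.0, "coughing and vomiting may occur"),
    (60.0, "lung damage possible"),
    (1000.0, "potentially fatal after a few breaths"),
]

def describe(ppm: float) -> str:
    """Return the highest threshold description that the concentration reaches."""
    label = "below instrumental detection"
    for limit, description in THRESHOLDS_PPM:
        if ppm >= limit:
            label = description
    return label

print(describe(5.0))   # -> detectable by smell
print(describe(45.0))  # -> coughing and vomiting may occur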
When used at specified levels for water disinfection, the reaction of chlorine with water is not a major concern for human health. However, other materials present in the water may react with the chlorine to generate disinfection by-products, such as trihalomethanes, that are associated with negative effects on human health.
In the United States, the Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit for elemental chlorine at 1 ppm, or 3 mg/m³. The National Institute for Occupational Safety and Health has designated a recommended exposure limit of 0.5 ppm over 15 minutes.
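The ppm and mg/m³ figures are related through the molar volume of a gas. A quick check, assuming ideal-gas behaviour at 25 °C and 1 atm (about 24.45 L/mol), reproduces the roughly 3 mg/m³ quoted for the 1 ppm limit:

# ppm (by volume) to mg/m3 conversion for chlorine gas.
# Assumes ideal-gas behaviour at 25 degrees C and 1 atm.

MOLAR_MASS_CL2 = 70.90  # g/mol
MOLAR_VOLUME = 24.45    # L/mol at the assumed conditions

def ppm_to_mg_per_m3(ppm: float) -> float:
    # 1 ppm by volume = 1 uL of gas per litre of air, so
    # mg/m3 = ppm * molar mass / molar volume.
    return ppm * MOLAR_MASS_CL2 / MOLAR_VOLUME

print(round(ppm_to_mg_per_m3(1.0), 2))  # ~2.9, consistent with the 3 mg/m3 figure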
In the home, accidents occur when hypochlorite bleach solutions come into contact with certain acidic drain-cleaners to produce chlorine gas. Hypochlorite bleach (a popular laundry additive) combined with ammonia (another popular laundry additive) produces chloramines, another toxic group of chemicals.
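The commonly cited overall reactions behind these two household hazards, written with idealised stoichiometry (real mixtures are messier and can yield further products), are:

\[
\mathrm{NaOCl + 2\,HCl \rightarrow Cl_2\uparrow + NaCl + H_2O}
\]
\[
\mathrm{NaOCl + NH_3 \rightarrow NH_2Cl + NaOH}
\]

With excess hypochlorite, monochloramine can be chlorinated further to dichloramine (NHCl2) and the volatile nitrogen trichloride (NCl3).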
Chlorine is widely used for purifying water, especially potable water supplies and water used in swimming pools. Several catastrophic collapses of swimming pool ceilings have occurred from chlorine-induced stress corrosion cracking of stainless steel suspension rods. Some polymers are also sensitive to attack, including acetal resin and polybutene. Both materials were used in hot and cold water domestic plumbing, and stress corrosion cracking caused widespread failures in the US in the 1980s and 1990s.
The element iron can combine with chlorine at high temperatures in a strong exothermic reaction, creating a chlorine-iron fire. Chlorine-iron fires are a risk in chemical process plants, where much of the pipework that carries chlorine gas is made of steel. | [
{
"paragraph_id": 0,
"text": "Chlorine is a chemical element; it has symbol Cl and atomic number 17. The second-lightest of the halogens, it appears between fluorine and bromine in the periodic table and its properties are mostly intermediate between them. Chlorine is a yellow-green gas at room temperature. It is an extremely reactive element and a strong oxidising agent: among the elements, it has the highest electron affinity and the third-highest electronegativity on the revised Pauling scale, behind only oxygen and fluorine.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Chlorine played an important role in the experiments conducted by medieval alchemists, which commonly involved the heating of chloride salts like ammonium chloride (sal ammoniac) and sodium chloride (common salt), producing various chemical substances containing chlorine such as hydrogen chloride, mercury(II) chloride (corrosive sublimate), and aqua regia. However, the nature of free chlorine gas as a separate substance was only recognised around 1630 by Jan Baptist van Helmont. Carl Wilhelm Scheele wrote a description of chlorine gas in 1774, supposing it to be an oxide of a new element. In 1809, chemists suggested that the gas might be a pure element, and this was confirmed by Sir Humphry Davy in 1810, who named it after the Ancient Greek χλωρός (khlōrós, \"pale green\") because of its colour.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Because of its great reactivity, all chlorine in the Earth's crust is in the form of ionic chloride compounds, which includes table salt. It is the second-most abundant halogen (after fluorine) and twenty-first most abundant chemical element in Earth's crust. These crustal deposits are nevertheless dwarfed by the huge reserves of chloride in seawater.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Elemental chlorine is commercially produced from brine by electrolysis, predominantly in the chlor-alkali process. The high oxidising potential of elemental chlorine led to the development of commercial bleaches and disinfectants, and a reagent for many processes in the chemical industry. Chlorine is used in the manufacture of a wide range of consumer products, about two-thirds of them organic chemicals such as polyvinyl chloride (PVC), many intermediates for the production of plastics, and other end products which do not contain the element. As a common disinfectant, elemental chlorine and chlorine-generating compounds are used more directly in swimming pools to keep them sanitary. Elemental chlorine at high concentration is extremely dangerous, and poisonous to most living organisms. As a chemical warfare agent, chlorine was first used in World War I as a poison gas weapon.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In the form of chloride ions, chlorine is necessary to all known species of life. Other types of chlorine compounds are rare in living organisms, and artificially produced chlorinated organics range from inert to toxic. In the upper atmosphere, chlorine-containing organic molecules such as chlorofluorocarbons have been implicated in ozone depletion. Small quantities of elemental chlorine are generated by oxidation of chloride ions in neutrophils as part of an immune system response against bacteria.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The most common compound of chlorine, sodium chloride, has been known since ancient times; archaeologists have found evidence that rock salt was used as early as 3000 BC and brine as early as 6000 BC.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Around 900, the authors of the Arabic writings attributed to Jabir ibn Hayyan (Latin: Geber) and the Persian physician and alchemist Abu Bakr al-Razi (c. 865–925, Latin: Rhazes) were experimenting with sal ammoniac (ammonium chloride), which when it was distilled together with vitriol (hydrated sulfates of various metals) produced hydrogen chloride. However, it appears that in these early experiments with chloride salts, the gaseous products were discarded, and hydrogen chloride may have been produced many times before it was discovered that it can be put to chemical use. One of the first such uses was the synthesis of mercury(II) chloride (corrosive sublimate), whose production from the heating of mercury either with alum and ammonium chloride or with vitriol and sodium chloride was first described in the De aluminibus et salibus (\"On Alums and Salts\", an eleventh- or twelfth century Arabic text falsely attributed to Abu Bakr al-Razi and translated into Latin in the second half of the twelfth century by Gerard of Cremona, 1144–1187). Another important development was the discovery by pseudo-Geber (in the De inventione veritatis, \"On the Discovery of Truth\", after c. 1300) that by adding ammonium chloride to nitric acid, a strong solvent capable of dissolving gold (i.e., aqua regia) could be produced. Although aqua regia is an unstable mixture that continually gives off fumes containing free chlorine gas, this chlorine gas appears to have been ignored until c. 1630, when its nature as a separate gaseous substance was recognised by the Brabantian chemist and physician Jan Baptist van Helmont.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The element was first studied in detail in 1774 by Swedish chemist Carl Wilhelm Scheele, and he is credited with the discovery. Scheele produced chlorine by reacting MnO2 (as the mineral pyrolusite) with HCl:",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Scheele observed several of the properties of chlorine: the bleaching effect on litmus, the deadly effect on insects, the yellow-green color, and the smell similar to aqua regia. He called it \"dephlogisticated muriatic acid air\" since it is a gas (then called \"airs\") and it came from hydrochloric acid (then known as \"muriatic acid\"). He failed to establish chlorine as an element.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Common chemical theory at that time held that an acid is a compound that contains oxygen (remnants of this survive in the German and Dutch names of oxygen: sauerstoff or zuurstof, both translating into English as acid substance), so a number of chemists, including Claude Berthollet, suggested that Scheele's dephlogisticated muriatic acid air must be a combination of oxygen and the yet undiscovered element, muriaticum.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In 1809, Joseph Louis Gay-Lussac and Louis-Jacques Thénard tried to decompose dephlogisticated muriatic acid air by reacting it with charcoal to release the free element muriaticum (and carbon dioxide). They did not succeed and published a report in which they considered the possibility that dephlogisticated muriatic acid air is an element, but were not convinced.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In 1810, Sir Humphry Davy tried the same experiment again, and concluded that the substance was an element, and not a compound. He announced his results to the Royal Society on 15 November that year. At that time, he named this new element \"chlorine\", from the Greek word χλωρος (chlōros, \"green-yellow\"), in reference to its color. The name \"halogen\", meaning \"salt producer\", was originally used for chlorine in 1811 by Johann Salomo Christoph Schweigger. This term was later used as a generic term to describe all the elements in the chlorine family (fluorine, bromine, iodine), after a suggestion by Jöns Jakob Berzelius in 1826. In 1823, Michael Faraday liquefied chlorine for the first time, and demonstrated that what was then known as \"solid chlorine\" had a structure of chlorine hydrate (Cl2·H2O).",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Chlorine gas was first used by French chemist Claude Berthollet to bleach textiles in 1785. Modern bleaches resulted from further work by Berthollet, who first produced sodium hypochlorite in 1789 in his laboratory in the town of Javel (now part of Paris, France), by passing chlorine gas through a solution of sodium carbonate. The resulting liquid, known as \"Eau de Javel\" (\"Javel water\"), was a weak solution of sodium hypochlorite. This process was not very efficient, and alternative production methods were sought. Scottish chemist and industrialist Charles Tennant first produced a solution of calcium hypochlorite (\"chlorinated lime\"), then solid calcium hypochlorite (bleaching powder). These compounds produced low levels of elemental chlorine and could be more efficiently transported than sodium hypochlorite, which remained as dilute solutions because when purified to eliminate water, it became a dangerously powerful and unstable oxidizer. Near the end of the nineteenth century, E. S. Smith patented a method of sodium hypochlorite production involving electrolysis of brine to produce sodium hydroxide and chlorine gas, which then mixed to form sodium hypochlorite. This is known as the chloralkali process, first introduced on an industrial scale in 1892, and now the source of most elemental chlorine and sodium hydroxide. In 1884 Chemischen Fabrik Griesheim of Germany developed another chloralkali process which entered commercial production in 1888.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Elemental chlorine solutions dissolved in chemically basic water (sodium and calcium hypochlorite) were first used as anti-putrefaction agents and disinfectants in the 1820s, in France, long before the establishment of the germ theory of disease. This practice was pioneered by Antoine-Germain Labarraque, who adapted Berthollet's \"Javel water\" bleach and other chlorine preparations. Elemental chlorine has since served a continuous function in topical antisepsis (wound irrigation solutions and the like) and public sanitation, particularly in swimming and drinking water.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Chlorine gas was first used as a weapon on April 22, 1915 at the Second Battle of Ypres by the German Army. The effect on the allies was devastating because the existing gas masks were difficult to deploy and had not been broadly distributed.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Chlorine is the second halogen, being a nonmetal in group 17 of the periodic table. Its properties are thus similar to fluorine, bromine, and iodine, and are largely intermediate between those of the first two. Chlorine has the electron configuration [Ne]3s3p, with the seven electrons in the third and outermost shell acting as its valence electrons. Like all halogens, it is thus one electron short of a full octet, and is hence a strong oxidising agent, reacting with many elements in order to complete its outer shell. Corresponding to periodic trends, it is intermediate in electronegativity between fluorine and bromine (F: 3.98, Cl: 3.16, Br: 2.96, I: 2.66), and is less reactive than fluorine and more reactive than bromine. It is also a weaker oxidising agent than fluorine, but a stronger one than bromine. Conversely, the chloride ion is a weaker reducing agent than bromide, but a stronger one than fluoride. It is intermediate in atomic radius between fluorine and bromine, and this leads to many of its atomic properties similarly continuing the trend from iodine to bromine upward, such as first ionisation energy, electron affinity, enthalpy of dissociation of the X2 molecule (X = Cl, Br, I), ionic radius, and X–X bond length. (Fluorine is anomalous due to its small size.)",
"title": "Properties"
},
{
"paragraph_id": 16,
"text": "All four stable halogens experience intermolecular van der Waals forces of attraction, and their strength increases together with the number of electrons among all homonuclear diatomic halogen molecules. Thus, the melting and boiling points of chlorine are intermediate between those of fluorine and bromine: chlorine melts at −101.0 °C and boils at −34.0 °C. As a result of the increasing molecular weight of the halogens down the group, the density and heats of fusion and vaporisation of chlorine are again intermediate between those of bromine and fluorine, although all their heats of vaporisation are fairly low (leading to high volatility) thanks to their diatomic molecular structure. The halogens darken in colour as the group is descended: thus, while fluorine is a pale yellow gas, chlorine is distinctly yellow-green. This trend occurs because the wavelengths of visible light absorbed by the halogens increase down the group. Specifically, the colour of a halogen, such as chlorine, results from the electron transition between the highest occupied antibonding πg molecular orbital and the lowest vacant antibonding σu molecular orbital. The colour fades at low temperatures, so that solid chlorine at −195 °C is almost colourless.",
"title": "Properties"
},
{
"paragraph_id": 17,
"text": "Like solid bromine and iodine, solid chlorine crystallises in the orthorhombic crystal system, in a layered lattice of Cl2 molecules. The Cl–Cl distance is 198 pm (close to the gaseous Cl–Cl distance of 199 pm) and the Cl···Cl distance between molecules is 332 pm within a layer and 382 pm between layers (compare the van der Waals radius of chlorine, 180 pm). This structure means that chlorine is a very poor conductor of electricity, and indeed its conductivity is so low as to be practically unmeasurable.",
"title": "Properties"
},
{
"paragraph_id": 18,
"text": "Chlorine has two stable isotopes, Cl and Cl. These are its only two natural isotopes occurring in quantity, with Cl making up 76% of natural chlorine and Cl making up the remaining 24%. Both are synthesised in stars in the oxygen-burning and silicon-burning processes. Both have nuclear spin 3/2+ and thus may be used for nuclear magnetic resonance, although the spin magnitude being greater than 1/2 results in non-spherical nuclear charge distribution and thus resonance broadening as a result of a nonzero nuclear quadrupole moment and resultant quadrupolar relaxation. The other chlorine isotopes are all radioactive, with half-lives too short to occur in nature primordially. Of these, the most commonly used in the laboratory are Cl (t1/2 = 3.0×10 y) and Cl (t1/2 = 37.2 min), which may be produced from the neutron activation of natural chlorine.",
"title": "Properties"
},
{
"paragraph_id": 19,
"text": "The most stable chlorine radioisotope is Cl. The primary decay mode of isotopes lighter than Cl is electron capture to isotopes of sulfur; that of isotopes heavier than Cl is beta decay to isotopes of argon; and Cl may decay by either mode to stable S or Ar. Cl occurs in trace quantities in nature as a cosmogenic nuclide in a ratio of about (7–10) × 10 to 1 with stable chlorine isotopes: it is produced in the atmosphere by spallation of Ar by interactions with cosmic ray protons. In the top meter of the lithosphere, Cl is generated primarily by thermal neutron activation of Cl and spallation of K and Ca. In the subsurface environment, muon capture by Ca becomes more important as a way to generate Cl.",
"title": "Properties"
},
{
"paragraph_id": 20,
"text": "Chlorine is intermediate in reactivity between fluorine and bromine, and is one of the most reactive elements. Chlorine is a weaker oxidising agent than fluorine but a stronger one than bromine or iodine. This can be seen from the standard electrode potentials of the X2/X couples (F, +2.866 V; Cl, +1.395 V; Br, +1.087 V; I, +0.615 V; At, approximately +0.3 V). However, this trend is not shown in the bond energies because fluorine is singular due to its small size, low polarisability, and inability to show hypervalence. As another difference, chlorine has a significant chemistry in positive oxidation states while fluorine does not. Chlorination often leads to higher oxidation states than bromination or iodination but lower oxidation states than fluorination. Chlorine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Cl bonds.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 21,
"text": "Given that E°(1/2O2/H2O) = +1.229 V, which is less than +1.395 V, it would be expected that chlorine should be able to oxidise water to oxygen and hydrochloric acid. However, the kinetics of this reaction are unfavorable, and there is also a bubble overpotential effect to consider, so that electrolysis of aqueous chloride solutions evolves chlorine gas and not oxygen gas, a fact that is very useful for the industrial production of chlorine.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 22,
"text": "The simplest chlorine compound is hydrogen chloride, HCl, a major chemical in industry as well as in the laboratory, both as a gas and dissolved in water as hydrochloric acid. It is often produced by burning hydrogen gas in chlorine gas, or as a byproduct of chlorinating hydrocarbons. Another approach is to treat sodium chloride with concentrated sulfuric acid to produce hydrochloric acid, also known as the \"salt-cake\" process:",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 23,
"text": "In the laboratory, hydrogen chloride gas may be made by drying the acid with concentrated sulfuric acid. Deuterium chloride, DCl, may be produced by reacting benzoyl chloride with heavy water (D2O).",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 24,
"text": "At room temperature, hydrogen chloride is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the larger electronegative chlorine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen chloride at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Hydrochloric acid is a strong acid (pKa = −7) because the hydrogen bonds to chlorine are too weak to inhibit dissociation. The HCl/H2O system has many hydrates HCl·nH2O for n = 1, 2, 3, 4, and 6. Beyond a 1:1 mixture of HCl and H2O, the system separates completely into two separate liquid phases. Hydrochloric acid forms an azeotrope with boiling point 108.58 °C at 20.22 g HCl per 100 g solution; thus hydrochloric acid cannot be concentrated beyond this point by distillation.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 25,
"text": "Unlike hydrogen fluoride, anhydrous liquid hydrogen chloride is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2Cl and HCl2 ions – the latter, in any case, are much less stable than the bifluoride ions (HF2) due to the very weak hydrogen bonding between hydrogen and chlorine, though its salts with very large and weakly polarising cations such as Cs and NR4 (R = Me, Et, Bu) may still be isolated. Anhydrous hydrogen chloride is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides. It readily protonates electrophiles containing lone-pairs or π bonds. Solvolysis, ligand replacement reactions, and oxidations are well-characterised in hydrogen chloride solution:",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 26,
"text": "Nearly all elements in the periodic table form binary chlorides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the highly unstable XeCl2 and XeCl4); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than chlorine's (oxygen and fluorine) so that the resultant binary compounds are formally not chlorides but rather oxides or fluorides of chlorine. Even though nitrogen in NCl3 is bearing a negative charge, the compound is usually called nitrogen trichloride.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 27,
"text": "Chlorination of metals with Cl2 usually leads to a higher oxidation state than bromination with Br2 when multiple oxidation states are available, such as in MoCl5 and MoBr3. Chlorides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrochloric acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen chloride gas. These methods work best when the chloride product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative chlorination of the element with chlorine or hydrogen chloride, high-temperature chlorination of a metal oxide or other halide by chlorine, a volatile metal chloride, carbon tetrachloride, or an organic chloride. For instance, zirconium dioxide reacts with chlorine at standard conditions to produce zirconium tetrachloride, and uranium trioxide reacts with hexachloropropene when heated under reflux to give uranium tetrachloride. The second example also involves a reduction in oxidation state, which can also be achieved by reducing a higher chloride using hydrogen or a metal as a reducing agent. This may also be achieved by thermal decomposition or disproportionation as follows:",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 28,
"text": "Most metal chlorides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular chlorides, as do metals in high oxidation states from +3 and above. Both ionic and covalent chlorides are known for metals in oxidation state +3 (e.g. scandium chloride is mostly ionic, but aluminium chloride is not). Silver chloride is very insoluble in water and is thus often used as a qualitative test for chlorine.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 29,
"text": "Although dichlorine is a strong oxidising agent with a high first ionisation energy, it may be oxidised under extreme conditions to form the [Cl2] cation. This is very unstable and has only been characterised by its electronic band spectrum when produced in a low-pressure discharge tube. The yellow [Cl3] cation is more stable and may be produced as follows:",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 30,
"text": "This reaction is conducted in the oxidising solvent arsenic pentafluoride. The trichloride anion, [Cl3], has also been characterised; it is analogous to triiodide.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 31,
"text": "The three fluorides of chlorine form a subset of the interhalogen compounds, all of which are diamagnetic. Some cationic and anionic derivatives are known, such as ClF2, ClF4, ClF2, and Cl2F. Some pseudohalides of chlorine are also known, such as cyanogen chloride (ClCN, linear), chlorine cyanate (ClNCO), chlorine thiocyanate (ClSCN, unlike its oxygen counterpart), and chlorine azide (ClN3).",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 32,
"text": "Chlorine monofluoride (ClF) is extremely thermally stable, and is sold commercially in 500-gram steel lecture bottles. It is a colourless gas that melts at −155.6 °C and boils at −100.1 °C. It may be produced by the reaction of its elements at 225 °C, though it must then be separated and purified from chlorine trifluoride and its reactants. Its properties are mostly intermediate between those of chlorine and fluorine. It will react with many metals and nonmetals from room temperature and above, fluorinating them and liberating chlorine. It will also act as a chlorofluorinating agent, adding chlorine and fluorine across a multiple bond or by oxidation: for example, it will attack carbon monoxide to form carbonyl chlorofluoride, COFCl. It will react analogously with hexafluoroacetone, (CF3)2CO, with a potassium fluoride catalyst to produce heptafluoroisopropyl hypochlorite, (CF3)2CFOCl; with nitriles RCN to produce RCF2NCl2; and with the sulfur oxides SO2 and SO3 to produce ClSO2F and ClOSO2F respectively. It will also react exothermically with compounds containing –OH and –NH groups, such as water:",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 33,
"text": "Chlorine trifluoride (ClF3) is a volatile colourless molecular liquid which melts at −76.3 °C and boils at 11.8 °C. It may be formed by directly fluorinating gaseous chlorine or chlorine monofluoride at 200–300 °C. One of the most reactive chemical compounds known, the list of elements it sets on fire is diverse, containing hydrogen, potassium, phosphorus, arsenic, antimony, sulfur, selenium, tellurium, bromine, iodine, and powdered molybdenum, tungsten, rhodium, iridium, and iron. It will also ignite water, along with many substances which in ordinary circumstances would be considered chemically inert such as asbestos, concrete, glass, and sand. When heated, it will even corrode noble metals as palladium, platinum, and gold, and even the noble gases xenon and radon do not escape fluorination. An impermeable fluoride layer is formed by sodium, magnesium, aluminium, zinc, tin, and silver, which may be removed by heating. Nickel, copper, and steel containers are usually used due to their great resistance to attack by chlorine trifluoride, stemming from the formation of an unreactive layer of metal fluoride. Its reaction with hydrazine to form hydrogen fluoride, nitrogen, and chlorine gases was used in experimental rocket engine, but has problems largely stemming from its extreme hypergolicity resulting in ignition without any measurable delay. Today, it is mostly used in nuclear fuel processing, to oxidise uranium to uranium hexafluoride for its enriching and to separate it from plutonium, as well as in the semiconductor industry, where it is used to clean chemical vapor deposition chambers. It can act as a fluoride ion donor or acceptor (Lewis base or acid), although it does not dissociate appreciably into ClF2 and ClF4 ions.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 34,
"text": "Chlorine pentafluoride (ClF5) is made on a large scale by direct fluorination of chlorine with excess fluorine gas at 350 °C and 250 atm, and on a small scale by reacting metal chlorides with fluorine gas at 100–300 °C. It melts at −103 °C and boils at −13.1 °C. It is a very strong fluorinating agent, although it is still not as effective as chlorine trifluoride. Only a few specific stoichiometric reactions have been characterised. Arsenic pentafluoride and antimony pentafluoride form ionic adducts of the form [ClF4][MF6] (M = As, Sb) and water reacts vigorously as follows:",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 35,
"text": "The product, chloryl fluoride, is one of the five known chlorine oxide fluorides. These range from the thermally unstable FClO to the chemically unreactive perchloryl fluoride (FClO3), the other three being FClO2, F3ClO, and F3ClO2. All five behave similarly to the chlorine fluorides, both structurally and chemically, and may act as Lewis acids or bases by gaining or losing fluoride ions respectively or as very strong oxidising and fluorinating agents.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 36,
"text": "The chlorine oxides are well-studied in spite of their instability (all of them are endothermic compounds). They are important because they are produced when chlorofluorocarbons undergo photolysis in the upper atmosphere and cause the destruction of the ozone layer. None of them can be made from directly reacting the elements.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 37,
"text": "Dichlorine monoxide (Cl2O) is a brownish-yellow gas (red-brown when solid or liquid) which may be obtained by reacting chlorine gas with yellow mercury(II) oxide. It is very soluble in water, in which it is in equilibrium with hypochlorous acid (HOCl), of which it is the anhydride. It is thus an effective bleach and is mostly used to make hypochlorites. It explodes on heating or sparking or in the presence of ammonia gas.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 38,
"text": "Chlorine dioxide (ClO2) was the first chlorine oxide to be discovered in 1811 by Humphry Davy. It is a yellow paramagnetic gas (deep-red as a solid or liquid), as expected from its having an odd number of electrons: it is stable towards dimerisation due to the delocalisation of the unpaired electron. It explodes above −40 °C as a liquid and under pressure as a gas and therefore must be made at low concentrations for wood-pulp bleaching and water treatment. It is usually prepared by reducing a chlorate as follows:",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 39,
"text": "Its production is thus intimately linked to the redox reactions of the chlorine oxoacids. It is a strong oxidising agent, reacting with sulfur, phosphorus, phosphorus halides, and potassium borohydride. It dissolves exothermically in water to form dark-green solutions that very slowly decompose in the dark. Crystalline clathrate hydrates ClO2·nH2O (n ≈ 6–10) separate out at low temperatures. However, in the presence of light, these solutions rapidly photodecompose to form a mixture of chloric and hydrochloric acids. Photolysis of individual ClO2 molecules result in the radicals ClO and ClOO, while at room temperature mostly chlorine, oxygen, and some ClO3 and Cl2O6 are produced. Cl2O3 is also produced when photolysing the solid at −78 °C: it is a dark brown solid that explodes below 0 °C. The ClO radical leads to the depletion of atmospheric ozone and is thus environmentally important as follows:",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 40,
"text": "Chlorine perchlorate (ClOClO3) is a pale yellow liquid that is less stable than ClO2 and decomposes at room temperature to form chlorine, oxygen, and dichlorine hexoxide (Cl2O6). Chlorine perchlorate may also be considered a chlorine derivative of perchloric acid (HOClO3), similar to the thermally unstable chlorine derivatives of other oxoacids: examples include chlorine nitrate (ClONO2, vigorously reactive and explosive), and chlorine fluorosulfate (ClOSO2F, more stable but still moisture-sensitive and highly reactive). Dichlorine hexoxide is a dark-red liquid that freezes to form a solid which turns yellow at −180 °C: it is usually made by reaction of chlorine dioxide with oxygen. Despite attempts to rationalise it as the dimer of ClO3, it reacts more as though it were chloryl perchlorate, [ClO2][ClO4], which has been confirmed to be the correct structure of the solid. It hydrolyses in water to give a mixture of chloric and perchloric acids: the analogous reaction with anhydrous hydrogen fluoride does not proceed to completion.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 41,
"text": "Dichlorine heptoxide (Cl2O7) is the anhydride of perchloric acid (HClO4) and can readily be obtained from it by dehydrating it with phosphoric acid at −10 °C and then distilling the product at −35 °C and 1 mmHg. It is a shock-sensitive, colourless oily liquid. It is the least reactive of the chlorine oxides, being the only one to not set organic materials on fire at room temperature. It may be dissolved in water to regenerate perchloric acid or in aqueous alkalis to regenerate perchlorates. However, it thermally decomposes explosively by breaking one of the central Cl–O bonds, producing the radicals ClO3 and ClO4 which immediately decompose to the elements through intermediate oxides.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 42,
"text": "Chlorine forms four oxoacids: hypochlorous acid (HOCl), chlorous acid (HOClO), chloric acid (HOClO2), and perchloric acid (HOClO3). As can be seen from the redox potentials given in the adjacent table, chlorine is much more stable towards disproportionation in acidic solutions than in alkaline solutions:",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 43,
"text": "The hypochlorite ions also disproportionate further to produce chloride and chlorate (3 ClO ⇌ 2 Cl + ClO3) but this reaction is quite slow at temperatures below 70 °C in spite of the very favourable equilibrium constant of 10. The chlorate ions may themselves disproportionate to form chloride and perchlorate (4 ClO3 ⇌ Cl + 3 ClO4) but this is still very slow even at 100 °C despite the very favourable equilibrium constant of 10. The rates of reaction for the chlorine oxyanions increases as the oxidation state of chlorine decreases. The strengths of the chlorine oxyacids increase very quickly as the oxidation state of chlorine increases due to the increasing delocalisation of charge over more and more oxygen atoms in their conjugate bases.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 44,
"text": "Most of the chlorine oxoacids may be produced by exploiting these disproportionation reactions. Hypochlorous acid (HOCl) is highly reactive and quite unstable; its salts are mostly used for their bleaching and sterilising abilities. They are very strong oxidising agents, transferring an oxygen atom to most inorganic species. Chlorous acid (HOClO) is even more unstable and cannot be isolated or concentrated without decomposition: it is known from the decomposition of aqueous chlorine dioxide. However, sodium chlorite is a stable salt and is useful for bleaching and stripping textiles, as an oxidising agent, and as a source of chlorine dioxide. Chloric acid (HOClO2) is a strong acid that is quite stable in cold water up to 30% concentration, but on warming gives chlorine and chlorine dioxide. Evaporation under reduced pressure allows it to be concentrated further to about 40%, but then it decomposes to perchloric acid, chlorine, oxygen, water, and chlorine dioxide. Its most important salt is sodium chlorate, mostly used to make chlorine dioxide to bleach paper pulp. The decomposition of chlorate to chloride and oxygen is a common way to produce oxygen in the laboratory on a small scale. Chloride and chlorate may comproportionate to form chlorine as follows:",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 45,
"text": "Perchlorates and perchloric acid (HOClO3) are the most stable oxo-compounds of chlorine, in keeping with the fact that chlorine compounds are most stable when the chlorine atom is in its lowest (−1) or highest (+7) possible oxidation states. Perchloric acid and aqueous perchlorates are vigorous and sometimes violent oxidising agents when heated, in stark contrast to their mostly inactive nature at room temperature due to the high activation energies for these reactions for kinetic reasons. Perchlorates are made by electrolytically oxidising sodium chlorate, and perchloric acid is made by reacting anhydrous sodium perchlorate or barium perchlorate with concentrated hydrochloric acid, filtering away the chloride precipitated and distilling the filtrate to concentrate it. Anhydrous perchloric acid is a colourless mobile liquid that is sensitive to shock that explodes on contact with most organic compounds, sets hydrogen iodide and thionyl chloride on fire and even oxidises silver and gold. Although it is a weak ligand, weaker than water, a few compounds involving coordinated ClO4 are known. The Table below presents typical oxidation states for chlorine element as given in the secondary schools or colleges. Anyhow in university chemistry courses it should be pointed out that there are more complex chemical compounds, the structure of which can only be explained using modern quantum chemical methods, for example, cluster technetium chloride [(CH3)4N]3[Tc6Cl14], in which 6 of the 14 chlorine atoms are formally divalent, and oxidation states are fractional . In addition, all the above chemical regularities are valid for \"normal\" or close to normal conditions, while at ultra-high pressures (for example, in the cores of large planets), chlorine can exhibit an oxidation state of -3, forming a Na3Cl compound with sodium, which does not fit into traditional concepts of chemistry.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 46,
"text": "Like the other carbon–halogen bonds, the C–Cl bond is a common functional group that forms part of core organic chemistry. Formally, compounds with this functional group may be considered organic derivatives of the chloride anion. Due to the difference of electronegativity between chlorine (3.16) and carbon (2.55), the carbon in a C–Cl bond is electron-deficient and thus electrophilic. Chlorination modifies the physical properties of hydrocarbons in several ways: chlorocarbons are typically denser than water due to the higher atomic weight of chlorine versus hydrogen, and aliphatic organochlorides are alkylating agents because chloride is a leaving group.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 47,
"text": "Alkanes and aryl alkanes may be chlorinated under free-radical conditions, with UV light. However, the extent of chlorination is difficult to control: the reaction is not regioselective and often results in a mixture of various isomers with different degrees of chlorination, though this may be permissible if the products are easily separated. Aryl chlorides may be prepared by the Friedel-Crafts halogenation, using chlorine and a Lewis acid catalyst. The haloform reaction, using chlorine and sodium hydroxide, is also able to generate alkyl halides from methyl ketones, and related compounds. Chlorine adds to the multiple bonds on alkenes and alkynes as well, giving di- or tetrachloro compounds. However, due to the expense and reactivity of chlorine, organochlorine compounds are more commonly produced by using hydrogen chloride, or with chlorinating agents such as phosphorus pentachloride (PCl5) or thionyl chloride (SOCl2). The last is very convenient in the laboratory because all side products are gaseous and do not have to be distilled out.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 48,
"text": "Many organochlorine compounds have been isolated from natural sources ranging from bacteria to humans. Chlorinated organic compounds are found in nearly every class of biomolecules including alkaloids, terpenes, amino acids, flavonoids, steroids, and fatty acids. Organochlorides, including dioxins, are produced in the high temperature environment of forest fires, and dioxins have been found in the preserved ashes of lightning-ignited fires that predate synthetic dioxins. In addition, a variety of simple chlorinated hydrocarbons including dichloromethane, chloroform, and carbon tetrachloride have been isolated from marine algae. A majority of the chloromethane in the environment is produced naturally by biological decomposition, forest fires, and volcanoes.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 49,
"text": "Some types of organochlorides, though not all, have significant toxicity to plants or animals, including humans. Dioxins, produced when organic matter is burned in the presence of chlorine, and some insecticides, such as DDT, are persistent organic pollutants which pose dangers when they are released into the environment. For example, DDT, which was widely used to control insects in the mid 20th century, also accumulates in food chains, and causes reproductive problems (e.g., eggshell thinning) in certain bird species. Due to the ready homolytic fission of the C–Cl bond to create chlorine radicals in the upper atmosphere, chlorofluorocarbons have been phased out due to the harm they do to the ozone layer.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 50,
"text": "Chlorine is too reactive to occur as the free element in nature but is very abundant in the form of its chloride salts. It is the twenty-first most abundant element in Earth's crust and makes up 126 parts per million of it, through the large deposits of chloride minerals, especially sodium chloride, that have been evaporated from water bodies. All of these pale in comparison to the reserves of chloride ions in seawater: smaller amounts at higher concentrations occur in some inland seas and underground brine wells, such as the Great Salt Lake in Utah and the Dead Sea in Israel.",
"title": "Occurrence and production"
},
{
"paragraph_id": 51,
"text": "Small batches of chlorine gas are prepared in the laboratory by combining hydrochloric acid and manganese dioxide, but the need rarely arises due to its ready availability. In industry, elemental chlorine is usually produced by the electrolysis of sodium chloride dissolved in water. This method, the chloralkali process industrialized in 1892, now provides most industrial chlorine gas. Along with chlorine, the method yields hydrogen gas and sodium hydroxide, which is the most valuable product. The process proceeds according to the following chemical equation:",
"title": "Occurrence and production"
},
{
"paragraph_id": 52,
"text": "The electrolysis of chloride solutions all proceed according to the following equations:",
"title": "Occurrence and production"
},
{
"paragraph_id": 53,
"text": "In diaphragm cell electrolysis, an asbestos (or polymer-fiber) diaphragm separates a cathode and an anode, preventing the chlorine forming at the anode from re-mixing with the sodium hydroxide and the hydrogen formed at the cathode. The salt solution (brine) is continuously fed to the anode compartment and flows through the diaphragm to the cathode compartment, where the caustic alkali is produced and the brine is partially depleted. Diaphragm methods produce dilute and slightly impure alkali, but they are not burdened with the problem of mercury disposal and they are more energy efficient.",
"title": "Occurrence and production"
},
{
"paragraph_id": 54,
"text": "Membrane cell electrolysis employs permeable membrane as an ion exchanger. Saturated sodium (or potassium) chloride solution is passed through the anode compartment, leaving at a lower concentration. This method also produces very pure sodium (or potassium) hydroxide but has the disadvantage of requiring very pure brine at high concentrations.",
"title": "Occurrence and production"
},
{
"paragraph_id": 55,
"text": "In the Deacon process, hydrogen chloride recovered from the production of organochlorine compounds is recovered as chlorine. The process relies on oxidation using oxygen:",
"title": "Occurrence and production"
},
{
"paragraph_id": 56,
"text": "The reaction requires a catalyst. As introduced by Deacon, early catalysts were based on copper. Commercial processes, such as the Mitsui MT-Chlorine Process, have switched to chromium and ruthenium-based catalysts. The chlorine produced is available in cylinders from sizes ranging from 450 g to 70 kg, as well as drums (865 kg), tank wagons (15 tonnes on roads; 27–90 tonnes by rail), and barges (600–1200 tonnes).",
"title": "Occurrence and production"
},
{
"paragraph_id": 57,
"text": "Sodium chloride is the most common chlorine compound, and is the main source of chlorine for the demand by the chemical industry. About 15000 chlorine-containing compounds are commercially traded, including such diverse compounds as chlorinated methane, ethanes, vinyl chloride, polyvinyl chloride (PVC), aluminium trichloride for catalysis, the chlorides of magnesium, titanium, zirconium, and hafnium which are the precursors for producing the pure form of those elements.",
"title": "Applications"
},
{
"paragraph_id": 58,
"text": "Quantitatively, of all elemental chlorine produced, about 63% is used in the manufacture of organic compounds, and 18% in the manufacture of inorganic chlorine compounds. About 15,000 chlorine compounds are used commercially. The remaining 19% of chlorine produced is used for bleaches and disinfection products. The most significant of organic compounds in terms of production volume are 1,2-dichloroethane and vinyl chloride, intermediates in the production of PVC. Other particularly important organochlorines are methyl chloride, methylene chloride, chloroform, vinylidene chloride, trichloroethylene, perchloroethylene, allyl chloride, epichlorohydrin, chlorobenzene, dichlorobenzenes, and trichlorobenzenes. The major inorganic compounds include HCl, Cl2O, HOCl, NaClO3, chlorinated isocyanurates, AlCl3, SiCl4, SnCl4, PCl3, PCl5, POCl3, AsCl3, SbCl3, SbCl5, BiCl3, and ZnCl2.",
"title": "Applications"
},
{
"paragraph_id": 59,
"text": "In France (as elsewhere), animal intestines were processed to make musical instrument strings, Goldbeater's skin and other products. This was done in \"gut factories\" (boyauderies), and it was an odiferous and unhealthy process. In or about 1820, the Société d'encouragement pour l'industrie nationale offered a prize for the discovery of a method, chemical or mechanical, for separating the peritoneal membrane of animal intestines without putrefaction. The prize was won by Antoine-Germain Labarraque, a 44-year-old French chemist and pharmacist who had discovered that Berthollet's chlorinated bleaching solutions (\"Eau de Javel\") not only destroyed the smell of putrefaction of animal tissue decomposition, but also actually retarded the decomposition.",
"title": "Applications"
},
{
"paragraph_id": 60,
"text": "Labarraque's research resulted in the use of chlorides and hypochlorites of lime (calcium hypochlorite) and of sodium (sodium hypochlorite) in the boyauderies. The same chemicals were found to be useful in the routine disinfection and deodorization of latrines, sewers, markets, abattoirs, anatomical theatres, and morgues. They were successful in hospitals, lazarets, prisons, infirmaries (both on land and at sea), magnaneries, stables, cattle-sheds, etc.; and they were beneficial during exhumations, embalming, outbreaks of epidemic disease, fever, and blackleg in cattle.",
"title": "Applications"
},
{
"paragraph_id": 61,
"text": "Labarraque's chlorinated lime and soda solutions have been advocated since 1828 to prevent infection (called \"contagious infection\", presumed to be transmitted by \"miasmas\"), and to treat putrefaction of existing wounds, including septic wounds. In his 1828 work, Labarraque recommended that doctors breathe chlorine, wash their hands in chlorinated lime, and even sprinkle chlorinated lime about the patients' beds in cases of \"contagious infection\". In 1828, the contagion of infections was well known, even though the agency of the microbe was not discovered until more than half a century later.",
"title": "Applications"
},
{
"paragraph_id": 62,
"text": "During the Paris cholera outbreak of 1832, large quantities of so-called chloride of lime were used to disinfect the capital. This was not simply modern calcium chloride, but chlorine gas dissolved in lime-water (dilute calcium hydroxide) to form calcium hypochlorite (chlorinated lime). Labarraque's discovery helped to remove the terrible stench of decay from hospitals and dissecting rooms, and by doing so, effectively deodorised the Latin Quarter of Paris. These \"putrid miasmas\" were thought by many to cause the spread of \"contagion\" and \"infection\" – both words used before the germ theory of infection. Chloride of lime was used for destroying odors and \"putrid matter\". One source claims chloride of lime was used by Dr. John Snow to disinfect water from the cholera-contaminated well that was feeding the Broad Street pump in 1854 London, though three other reputable sources that describe that famous cholera epidemic do not mention the incident. One reference makes it clear that chloride of lime was used to disinfect the offal and filth in the streets surrounding the Broad Street pump – a common practice in mid-nineteenth century England.",
"title": "Applications"
},
{
"paragraph_id": 63,
"text": "Perhaps the most famous application of Labarraque's chlorine and chemical base solutions was in 1847, when Ignaz Semmelweis used chlorine-water (chlorine dissolved in pure water, which was cheaper than chlorinated lime solutions) to disinfect the hands of Austrian doctors, which Semmelweis noticed still carried the stench of decomposition from the dissection rooms to the patient examination rooms. Long before the germ theory of disease, Semmelweis theorized that \"cadaveric particles\" were transmitting decay from fresh medical cadavers to living patients, and he used the well-known \"Labarraque's solutions\" as the only known method to remove the smell of decay and tissue decomposition (which he found that soap did not). The solutions proved to be far more effective antiseptics than soap (Semmelweis was also aware of their greater efficacy, but not the reason), and this resulted in Semmelweis's celebrated success in stopping the transmission of childbed fever (\"puerperal fever\") in the maternity wards of Vienna General Hospital in Austria in 1847.",
"title": "Applications"
},
{
"paragraph_id": 64,
"text": "Much later, during World War I in 1916, a standardized and diluted modification of Labarraque's solution containing hypochlorite (0.5%) and boric acid as an acidic stabilizer was developed by Henry Drysdale Dakin (who gave full credit to Labarraque's prior work in this area). Called Dakin's solution, the method of wound irrigation with chlorinated solutions allowed antiseptic treatment of a wide variety of open wounds, long before the modern antibiotic era. A modified version of this solution continues to be employed in wound irrigation in modern times, where it remains effective against bacteria that are resistant to multiple antibiotics (see Century Pharmaceuticals).",
"title": "Applications"
},
{
"paragraph_id": 65,
"text": "The first continuous application of chlorination to drinking U.S. water was installed in Jersey City, New Jersey, in 1908. By 1918, the US Department of Treasury called for all drinking water to be disinfected with chlorine. Chlorine is presently an important chemical for water purification (such as in water treatment plants), in disinfectants, and in bleach. Even small water supplies are now routinely chlorinated.",
"title": "Applications"
},
{
"paragraph_id": 66,
"text": "Chlorine is usually used (in the form of hypochlorous acid) to kill bacteria and other microbes in drinking water supplies and public swimming pools. In most private swimming pools, chlorine itself is not used, but rather sodium hypochlorite, formed from chlorine and sodium hydroxide, or solid tablets of chlorinated isocyanurates. The drawback of using chlorine in swimming pools is that the chlorine reacts with the amino acids in proteins in human hair and skin. Contrary to popular belief, the distinctive \"chlorine aroma\" associated with swimming pools is not the result of elemental chlorine itself, but of chloramine, a chemical compound produced by the reaction of free dissolved chlorine with amines in organic substances including those in urine and sweat. As a disinfectant in water, chlorine is more than three times as effective against Escherichia coli as bromine, and more than six times as effective as iodine. Increasingly, monochloramine itself is being directly added to drinking water for purposes of disinfection, a process known as chloramination.",
"title": "Applications"
},
{
"paragraph_id": 67,
"text": "It is often impractical to store and use poisonous chlorine gas for water treatment, so alternative methods of adding chlorine are used. These include hypochlorite solutions, which gradually release chlorine into the water, and compounds like sodium dichloro-s-triazinetrione (dihydrate or anhydrous), sometimes referred to as \"dichlor\", and trichloro-s-triazinetrione, sometimes referred to as \"trichlor\". These compounds are stable while solid and may be used in powdered, granular, or tablet form. When added in small amounts to pool water or industrial water systems, the chlorine atoms hydrolyze from the rest of the molecule, forming hypochlorous acid (HOCl), which acts as a general biocide, killing germs, microorganisms, algae, and so on.",
"title": "Applications"
},
{
"paragraph_id": 68,
"text": "Chlorine gas, also known as bertholite, was first used as a weapon in World War I by Germany on April 22, 1915, in the Second Battle of Ypres. As described by the soldiers, it had the distinctive smell of a mixture of pepper and pineapple. It also tasted metallic and stung the back of the throat and chest. Chlorine reacts with water in the mucosa of the lungs to form hydrochloric acid, destructive to living tissue and potentially lethal. Human respiratory systems can be protected from chlorine gas by gas masks with activated charcoal or other filters, which makes chlorine gas much less lethal than other chemical weapons. It was pioneered by a German scientist later to be a Nobel laureate, Fritz Haber of the Kaiser Wilhelm Institute in Berlin, in collaboration with the German chemical conglomerate IG Farben, which developed methods for discharging chlorine gas against an entrenched enemy. After its first use, both sides in the conflict used chlorine as a chemical weapon, but it was soon replaced by the more deadly phosgene and mustard gas.",
"title": "Applications"
},
{
"paragraph_id": 69,
"text": "Chlorine gas was also used during the Iraq War in Anbar Province in 2007, with insurgents packing truck bombs with mortar shells and chlorine tanks. The attacks killed two people from the explosives and sickened more than 350. Most of the deaths were caused by the force of the explosions rather than the effects of chlorine since the toxic gas is readily dispersed and diluted in the atmosphere by the blast. In some bombings, over a hundred civilians were hospitalized due to breathing difficulties. The Iraqi authorities tightened security for elemental chlorine, which is essential for providing safe drinking water to the population.",
"title": "Applications"
},
{
"paragraph_id": 70,
"text": "On 23 October 2014, it was reported that the Islamic State of Iraq and the Levant had used chlorine gas in the town of Duluiyah, Iraq. Laboratory analysis of clothing and soil samples confirmed the use of chlorine gas against Kurdish Peshmerga Forces in a vehicle-borne improvised explosive device attack on 23 January 2015 at the Highway 47 Kiske Junction near Mosul.",
"title": "Applications"
},
{
"paragraph_id": 71,
"text": "Another country in the middle east, Syria, has used chlorine as a chemical weapon delivered from barrel bombs and rockets. In 2016, the OPCW-UN Joint Investigative Mechanism concluded that the Syrian government used chlorine as a chemical weapon in three separate attacks. Later investigations from the OPCW's Investigation and Identification Team concluded that the Syrian Air Force was responsible for chlorine attacks in 2017 and 2018.",
"title": "Applications"
},
{
"paragraph_id": 72,
"text": "The chloride anion is an essential nutrient for metabolism. Chlorine is needed for the production of hydrochloric acid in the stomach and in cellular pump functions. The main dietary source is table salt, or sodium chloride. Overly low or high concentrations of chloride in the blood are examples of electrolyte disturbances. Hypochloremia (having too little chloride) rarely occurs in the absence of other abnormalities. It is sometimes associated with hypoventilation. It can be associated with chronic respiratory acidosis. Hyperchloremia (having too much chloride) usually does not produce symptoms. When symptoms do occur, they tend to resemble those of hypernatremia (having too much sodium). Reduction in blood chloride leads to cerebral dehydration; symptoms are most often caused by rapid rehydration which results in cerebral edema. Hyperchloremia can affect oxygen transport.",
"title": "Biological role"
},
{
"paragraph_id": 73,
"text": "Chlorine is a toxic gas that attacks the respiratory system, eyes, and skin. Because it is denser than air, it tends to accumulate at the bottom of poorly ventilated spaces. Chlorine gas is a strong oxidizer, which may react with flammable materials.",
"title": "Hazards"
},
{
"paragraph_id": 74,
"text": "Chlorine is detectable with measuring devices in concentrations as low as 0.2 parts per million (ppm), and by smell at 3 ppm. Coughing and vomiting may occur at 30 ppm and lung damage at 60 ppm. About 1000 ppm can be fatal after a few deep breaths of the gas. The IDLH (immediately dangerous to life and health) concentration is 10 ppm. Breathing lower concentrations can aggravate the respiratory system and exposure to the gas can irritate the eyes. When chlorine is inhaled at concentrations greater than 30 ppm, it reacts with water within the lungs, producing hydrochloric acid (HCl) and hypochlorous acid (HOCl).",
"title": "Hazards"
},
{
"paragraph_id": 75,
"text": "When used at specified levels for water disinfection, the reaction of chlorine with water is not a major concern for human health. Other materials present in the water may generate disinfection by-products that are associated with negative effects on human health.",
"title": "Hazards"
},
{
"paragraph_id": 76,
"text": "In the United States, the Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit for elemental chlorine at 1 ppm, or 3 mg/m. The National Institute for Occupational Safety and Health has designated a recommended exposure limit of 0.5 ppm over 15 minutes.",
"title": "Hazards"
},
{
"paragraph_id": 77,
"text": "In the home, accidents occur when hypochlorite bleach solutions come into contact with certain acidic drain-cleaners to produce chlorine gas. Hypochlorite bleach (a popular laundry additive) combined with ammonia (another popular laundry additive) produces chloramines, another toxic group of chemicals.",
"title": "Hazards"
},
{
"paragraph_id": 78,
"text": "Chlorine is widely used for purifying water, especially potable water supplies and water used in swimming pools. Several catastrophic collapses of swimming pool ceilings have occurred from chlorine-induced stress corrosion cracking of stainless steel suspension rods. Some polymers are also sensitive to attack, including acetal resin and polybutene. Both materials were used in hot and cold water domestic plumbing, and stress corrosion cracking caused widespread failures in the US in the 1980s and 1990s.",
"title": "Hazards"
},
{
"paragraph_id": 79,
"text": "The element iron can combine with chlorine at high temperatures in a strong exothermic reaction, creating a chlorine-iron fire. Chlorine-iron fires are a risk in chemical process plants, where much of the pipework that carries chlorine gas is made of steel.",
"title": "Hazards"
}
] | Chlorine is a chemical element; it has symbol Cl and atomic number 17. The second-lightest of the halogens, it appears between fluorine and bromine in the periodic table and its properties are mostly intermediate between them. Chlorine is a yellow-green gas at room temperature. It is an extremely reactive element and a strong oxidising agent: among the elements, it has the highest electron affinity and the third-highest electronegativity on the revised Pauling scale, behind only oxygen and fluorine. Chlorine played an important role in the experiments conducted by medieval alchemists, which commonly involved the heating of chloride salts like ammonium chloride and sodium chloride, producing various chemical substances containing chlorine such as hydrogen chloride, mercury(II) chloride, and aqua regia. However, the nature of free chlorine gas as a separate substance was only recognised around 1630 by Jan Baptist van Helmont. Carl Wilhelm Scheele wrote a description of chlorine gas in 1774, supposing it to be an oxide of a new element. In 1809, chemists suggested that the gas might be a pure element, and this was confirmed by Sir Humphry Davy in 1810, who named it after the Ancient Greek χλωρός because of its colour. Because of its great reactivity, all chlorine in the Earth's crust is in the form of ionic chloride compounds, which includes table salt. It is the second-most abundant halogen and twenty-first most abundant chemical element in Earth's crust. These crustal deposits are nevertheless dwarfed by the huge reserves of chloride in seawater. Elemental chlorine is commercially produced from brine by electrolysis, predominantly in the chlor-alkali process. The high oxidising potential of elemental chlorine led to the development of commercial bleaches and disinfectants, and a reagent for many processes in the chemical industry. Chlorine is used in the manufacture of a wide range of consumer products, about two-thirds of them organic chemicals such as polyvinyl chloride (PVC), many intermediates for the production of plastics, and other end products which do not contain the element. As a common disinfectant, elemental chlorine and chlorine-generating compounds are used more directly in swimming pools to keep them sanitary. Elemental chlorine at high concentration is extremely dangerous, and poisonous to most living organisms. As a chemical warfare agent, chlorine was first used in World War I as a poison gas weapon. In the form of chloride ions, chlorine is necessary to all known species of life. Other types of chlorine compounds are rare in living organisms, and artificially produced chlorinated organics range from inert to toxic. In the upper atmosphere, chlorine-containing organic molecules such as chlorofluorocarbons have been implicated in ozone depletion. Small quantities of elemental chlorine are generated by oxidation of chloride ions in neutrophils as part of an immune system response against bacteria. | 2001-05-17T12:54:59Z | 2023-12-30T13:41:17Z | [
"Template:Use British English",
"Template:Overset",
"Template:R",
"Template:Cite web",
"Template:Cite book",
"Template:Cite EB1911",
"Template:U.S. chemical weapons",
"Template:Eqm",
"Template:Sfn",
"Template:Circa",
"Template:Main",
"Template:Chem2",
"Template:Portal",
"Template:Webarchive",
"Template:Holleman&Wiberg",
"Template:Redirect2",
"Template:Pp-semi-indef",
"Template:PGCH",
"Template:Chem",
"Template:Chembox",
"Template:Chlorine compounds",
"Template:Good article",
"Template:Sfrac",
"Template:Cite journal",
"Template:Cite news",
"Template:Doi",
"Template:Greenwood&Earnshaw2nd",
"Template:Sister project links",
"Template:Lang",
"Template:Transl",
"Template:Nowrap",
"Template:Diatomicelements",
"Template:E number infobox 920-929",
"Template:Authority control",
"Template:Infobox chlorine",
"Template:What?",
"Template:Cn",
"Template:Reflist",
"Template:NUBASE 2003",
"Template:Chemical warfare",
"Template:Distinguish",
"Template:Clear",
"Template:Overunderset",
"Template:Rp",
"Template:Harvnb",
"Template:Cite encyclopedia",
"Template:Dead link",
"Template:Periodic table (navbox)",
"Template:About",
"Template:Pp-move-indef"
] | https://en.wikipedia.org/wiki/Chlorine |
5,668 | Calcium | Calcium is a chemical element; it has symbol Ca and atomic number 20. As an alkaline earth metal, calcium is a reactive metal that forms a dark oxide-nitride layer when exposed to air. Its physical and chemical properties are most similar to its heavier homologues strontium and barium. It is the fifth most abundant element in Earth's crust, and the third most abundant metal, after iron and aluminium. The most common calcium compound on Earth is calcium carbonate, found in limestone and the fossilised remnants of early sea life; gypsum, anhydrite, fluorite, and apatite are also sources of calcium. The name derives from Latin calx "lime", which was obtained from heating limestone.
Some calcium compounds were known to the ancients, though their chemistry was unknown until the seventeenth century. Pure calcium was isolated in 1808 via electrolysis of its oxide by Humphry Davy, who named the element. Calcium compounds are widely used in many industries: in foods and pharmaceuticals for calcium supplementation, in the paper industry as bleaches, as components in cement and electrical insulators, and in the manufacture of soaps. On the other hand, the metal in pure form has few applications due to its high reactivity; still, in small quantities it is often used as an alloying component in steelmaking, and sometimes, as a calcium–lead alloy, in making automotive batteries.
Calcium is the most abundant metal and the fifth-most abundant element in the human body. As electrolytes, calcium ions (Ca²⁺) play a vital role in the physiological and biochemical processes of organisms and cells: in signal transduction pathways where they act as a second messenger; in neurotransmitter release from neurons; in contraction of all muscle cell types; as cofactors in many enzymes; and in fertilization. Calcium ions outside cells are important for maintaining the potential difference across excitable cell membranes, protein synthesis, and bone formation.
Calcium is a very ductile silvery metal (sometimes described as pale yellow) whose properties are very similar to the heavier elements in its group, strontium, barium, and radium. A calcium atom has twenty electrons, arranged in the electron configuration [Ar]4s². Like the other elements placed in group 2 of the periodic table, calcium has two valence electrons in the outermost s-orbital, which are very easily lost in chemical reactions to form a dipositive ion with the stable electron configuration of a noble gas, in this case argon.
Hence, calcium is almost always divalent in its compounds, which are usually ionic. Hypothetical univalent salts of calcium would be stable with respect to their elements, but not to disproportionation to the divalent salts and calcium metal, because the enthalpy of formation of MX2 is much higher than those of the hypothetical MX. This occurs because of the much greater lattice energy afforded by the more highly charged Ca²⁺ cation compared to the hypothetical Ca⁺ cation.
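As a compact check (a sketch in standard thermochemical notation, with X standing for a halide and no specific numerical values assumed), the instability of a univalent salt follows from

\[
2\,\mathrm{CaX(s)} \longrightarrow \mathrm{Ca(s)} + \mathrm{CaX_2(s)}, \qquad \Delta H_{\mathrm{disp}} = \Delta H_f(\mathrm{CaX_2}) - 2\,\Delta H_f(\mathrm{CaX}) < 0,
\]

so the extra lattice energy released on forming CaX2 makes disproportionation energetically downhill even though CaX itself would be stable with respect to calcium metal and X2.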
Calcium, strontium, barium, and radium are always considered to be alkaline earth metals; the lighter beryllium and magnesium, also in group 2 of the periodic table, are often included as well. Nevertheless, beryllium and magnesium differ significantly from the other members of the group in their physical and chemical behaviour: they behave more like aluminium and zinc respectively and have some of the weaker metallic character of the post-transition metals, which is why the traditional definition of the term "alkaline earth metal" excludes them.
Calcium metal melts at 842 °C and boils at 1494 °C; these values are higher than those for magnesium and strontium, the neighbouring group 2 metals. It crystallises in the face-centered cubic arrangement like strontium; above 450 °C, it changes to an anisotropic hexagonal close-packed arrangement like magnesium. Its density of 1.55 g/cm³ is the lowest in its group.
Calcium is harder than lead but can be cut with a knife with effort. While calcium is a poorer conductor of electricity than copper or aluminium by volume, it is a better conductor by mass than both due to its very low density. While calcium is infeasible as a conductor for most terrestrial applications as it reacts quickly with atmospheric oxygen, its use as such in space has been considered.
The chemistry of calcium is that of a typical heavy alkaline earth metal. For example, calcium spontaneously reacts with water more quickly than magnesium and less quickly than strontium to produce calcium hydroxide and hydrogen gas. It also reacts with the oxygen and nitrogen in the air to form a mixture of calcium oxide and calcium nitride. When finely divided, it spontaneously burns in air to produce the nitride. In bulk, calcium is less reactive: it quickly forms a hydration coating in moist air, but below 30% relative humidity it may be stored indefinitely at room temperature.
Besides the simple oxide CaO, the peroxide CaO2 can be made by direct oxidation of calcium metal under a high pressure of oxygen, and there is some evidence for a yellow superoxide Ca(O2)2. Calcium hydroxide, Ca(OH)2, is a strong base, though it is not as strong as the hydroxides of strontium, barium or the alkali metals. All four dihalides of calcium are known. Calcium carbonate (CaCO3) and calcium sulfate (CaSO4) are particularly abundant minerals. Like strontium and barium, as well as the alkali metals and the divalent lanthanides europium and ytterbium, calcium metal dissolves directly in liquid ammonia to give a dark blue solution.
Due to the large size of the calcium ion (Ca²⁺), high coordination numbers are common, up to 24 in some intermetallic compounds such as CaZn13. Calcium is readily complexed by oxygen chelates such as EDTA and polyphosphates, which are useful in analytic chemistry and removing calcium ions from hard water. In the absence of steric hindrance, smaller group 2 cations tend to form stronger complexes, but when large polydentate macrocycles are involved the trend is reversed.
Although calcium is in the same group as magnesium and organomagnesium compounds are very commonly used throughout chemistry, organocalcium compounds are not similarly widespread because they are more difficult to make and more reactive, although they have recently been investigated as possible catalysts. Organocalcium compounds tend to be more similar to organoytterbium compounds due to the similar ionic radii of Yb²⁺ (102 pm) and Ca²⁺ (100 pm).
Most of these compounds can only be prepared at low temperatures; bulky ligands tend to favor stability. For example, calcium dicyclopentadienyl, Ca(C5H5)2, must be made by directly reacting calcium metal with mercurocene or cyclopentadiene itself; replacing the C5H5 ligand with the bulkier C5(CH3)5 ligand on the other hand increases the compound's solubility, volatility, and kinetic stability.
Natural calcium is a mixture of five stable isotopes (⁴⁰Ca, ⁴²Ca, ⁴³Ca, ⁴⁴Ca, and ⁴⁶Ca) and one isotope with a half-life so long that it can be considered stable for all practical purposes (⁴⁸Ca, with a half-life of about 4.3 × 10¹⁹ years). Calcium is the first (lightest) element to have six naturally occurring isotopes.
By far the most common isotope of calcium in nature is ⁴⁰Ca, which makes up 96.941% of all natural calcium. It is produced in the silicon-burning process from fusion of alpha particles and is the heaviest stable nuclide with equal proton and neutron numbers; its occurrence is also supplemented slowly by the decay of primordial ⁴⁰K. Adding another alpha particle leads to unstable ⁴⁴Ti, which quickly decays via two successive electron captures to stable ⁴⁴Ca; this makes up 2.806% of all natural calcium and is the second-most common isotope.
The other four natural isotopes, ⁴²Ca, ⁴³Ca, ⁴⁶Ca, and ⁴⁸Ca, are significantly rarer, each comprising less than 1% of all natural calcium. The four lighter isotopes are mainly products of the oxygen-burning and silicon-burning processes, leaving the two heavier ones to be produced via neutron capture processes. ⁴⁶Ca is mostly produced in a "hot" s-process, as its formation requires a rather high neutron flux to allow short-lived ⁴⁵Ca to capture a neutron. ⁴⁸Ca is produced by electron capture in the r-process in type Ia supernovae, where high neutron excess and low enough entropy ensures its survival.
⁴⁶Ca and ⁴⁸Ca are the first "classically stable" nuclides with a six-neutron or eight-neutron excess respectively. Although extremely neutron-rich for such a light element, ⁴⁸Ca is very stable because it is a doubly magic nucleus, having 20 protons and 28 neutrons arranged in closed shells. Its beta decay to ⁴⁸Sc is very hindered because of the gross mismatch of nuclear spin: ⁴⁸Ca has zero nuclear spin, being even–even, while ⁴⁸Sc has spin 6+, so the decay is forbidden by the conservation of angular momentum. While two excited states of ⁴⁸Sc are available for decay as well, they are also forbidden due to their high spins. As a result, when ⁴⁸Ca does decay, it does so by double beta decay to ⁴⁸Ti instead, being the lightest nuclide known to undergo double beta decay.
The heavy isotope ⁴⁶Ca can also theoretically undergo double beta decay to ⁴⁶Ti as well, but this has never been observed. The lightest and most common isotope ⁴⁰Ca is also doubly magic and could undergo double electron capture to ⁴⁰Ar, but this has likewise never been observed. Calcium is the only element to have two primordial doubly magic isotopes. The experimental lower limits for the half-lives of ⁴⁰Ca and ⁴⁶Ca are 5.9 × 10²¹ years and 2.8 × 10¹⁵ years respectively.
Apart from the practically stable ⁴⁸Ca, the longest lived radioisotope of calcium is ⁴¹Ca. It decays by electron capture to stable ⁴¹K with a half-life of about a hundred thousand years. Its existence in the early Solar System as an extinct radionuclide has been inferred from excesses of ⁴¹K: traces of ⁴¹Ca also still exist today, as it is a cosmogenic nuclide, continuously reformed through neutron activation of natural ⁴⁰Ca.
Many other calcium radioisotopes are known, ranging from ³⁵Ca to ⁶⁰Ca. They are all much shorter-lived than ⁴¹Ca, the most stable among them being ⁴⁵Ca (half-life 163 days) and ⁴⁷Ca (half-life 4.54 days). The isotopes lighter than ⁴²Ca usually undergo beta plus decay to isotopes of potassium, and those heavier than ⁴⁴Ca usually undergo beta minus decay to isotopes of scandium, although near the nuclear drip lines, proton emission and neutron emission begin to be significant decay modes as well.
Like other elements, a variety of processes alter the relative abundance of calcium isotopes. The best studied of these processes is the mass-dependent fractionation of calcium isotopes that accompanies the precipitation of calcium minerals such as calcite, aragonite and apatite from solution. Lighter isotopes are preferentially incorporated into these minerals, leaving the surrounding solution enriched in heavier isotopes at a magnitude of roughly 0.025% per atomic mass unit (amu) at room temperature. Mass-dependent differences in calcium isotope composition are conventionally expressed by the ratio of two isotopes (usually ⁴⁴Ca/⁴⁰Ca) in a sample compared to the same ratio in a standard reference material. ⁴⁴Ca/⁴⁰Ca varies by about 1% among common earth materials.
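In delta notation, the standard way such comparisons are reported, a sample ratio is expressed as a per-mil deviation from the reference ratio. A minimal sketch in Python (the function name and the numerical ratios are illustrative, not measured values):

def delta_ca44_40(r_sample, r_standard):
    """Per-mil (‰) deviation of a sample's ⁴⁴Ca/⁴⁰Ca ratio from a standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Hypothetical ratios: a carbonate depleted in the heavy isotope relative
# to a seawater-like reference, consistent with the ~1% (10 ‰) spread
# among common earth materials noted above.
print(delta_ca44_40(0.02155, 0.02175))  # ≈ -9.2 ‰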
Calcium compounds were known for millennia, although their chemical makeup was not understood until the 17th century. Lime as a building material and as plaster for statues was used as far back as around 7000 BC. The first dated lime kiln dates back to 2500 BC and was found in Khafajah, Mesopotamia.
At about the same time, gypsum (CaSO4·2H2O) was being used in the Great Pyramid of Giza. This material would later be used for the plaster in the tomb of Tutankhamun. The ancient Romans instead used lime mortars made by heating limestone (CaCO3). The name "calcium" itself derives from the Latin word calx "lime".
Vitruvius noted that the lime that resulted was lighter than the original limestone, attributing this to the boiling of the water. In 1755, Joseph Black proved that this was due to the loss of carbon dioxide, which as a gas had not been recognised by the ancient Romans.
In 1789, Antoine Lavoisier suspected that lime might be an oxide of a fundamental chemical element. In his table of the elements, Lavoisier listed five "salifiable earths" (i.e., ores that could be made to react with acids to produce salts (salis = salt, in Latin): chaux (calcium oxide), magnésie (magnesia, magnesium oxide), baryte (barium sulfate), alumine (alumina, aluminium oxide), and silice (silica, silicon dioxide)). About these "elements", Lavoisier reasoned:
We are probably only acquainted as yet with a part of the metallic substances existing in nature, as all those which have a stronger affinity to oxygen than carbon possesses, are incapable, hitherto, of being reduced to a metallic state, and consequently, being only presented to our observation under the form of oxyds, are confounded with earths. It is extremely probable that barytes, which we have just now arranged with earths, is in this situation; for in many experiments it exhibits properties nearly approaching to those of metallic bodies. It is even possible that all the substances we call earths may be only metallic oxyds, irreducible by any hitherto known process.
Calcium, along with its congeners magnesium, strontium, and barium, was first isolated by Humphry Davy in 1808. Following the work of Jöns Jakob Berzelius and Magnus Martin af Pontin on electrolysis, Davy isolated calcium and magnesium by putting a mixture of the respective metal oxides with mercury(II) oxide on a platinum plate which was used as the anode, the cathode being a platinum wire partially submerged into mercury. Electrolysis then gave calcium–mercury and magnesium–mercury amalgams, and distilling off the mercury gave the metal. However, pure calcium cannot be prepared in bulk by this method and a workable commercial process for its production was not found until over a century later.
At 3%, calcium is the fifth most abundant element in the Earth's crust, and the third most abundant metal behind aluminium and iron. It is also the fourth most abundant element in the lunar highlands. Sedimentary calcium carbonate deposits pervade the Earth's surface as fossilized remains of past marine life; they occur in two forms, the rhombohedral calcite (more common) and the orthorhombic aragonite (forming in more temperate seas). Minerals of the first type include limestone, dolomite, marble, chalk, and Iceland spar; aragonite beds make up the Bahamas, the Florida Keys, and the Red Sea basins. Corals, sea shells, and pearls are mostly made up of calcium carbonate. Among the other important minerals of calcium are gypsum (CaSO4·2H2O), anhydrite (CaSO4), fluorite (CaF2), and apatite ([Ca5(PO4)3X], X = OH⁻, Cl⁻, or F⁻).
The major producers of calcium are China (about 10000 to 12000 tonnes per year), Russia (about 6000 to 8000 tonnes per year), and the United States (about 2000 to 4000 tonnes per year). Canada and France are also among the minor producers. In 2005, about 24000 tonnes of calcium were produced; about half of the world's extracted calcium is used by the United States, with about 80% of the output used each year.
In Russia and China, Davy's method of electrolysis is still used, but is instead applied to molten calcium chloride. Since calcium is less reactive than strontium or barium, the oxide–nitride coating that results in air is stable and lathe machining and other standard metallurgical techniques are suitable for calcium. In the United States and Canada, calcium is instead produced by reducing lime with aluminium at high temperatures.
Calcium cycling provides a link between tectonics, climate, and the carbon cycle. In the simplest terms, uplift of mountains exposes calcium-bearing rocks such as some granites to chemical weathering and releases Ca²⁺ into surface water. These ions are transported to the ocean where they react with dissolved CO2 to form limestone (CaCO3), which in turn settles to the sea floor where it is incorporated into new rocks. Dissolved CO2, along with carbonate and bicarbonate ions, are termed "dissolved inorganic carbon" (DIC).
The actual reaction is more complicated and involves the bicarbonate ion (HCO3⁻) that forms when CO2 reacts with water at seawater pH:
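CO2 + H2O ⇌ H2CO3 ⇌ H⁺ + HCO3⁻
Ca²⁺ + 2 HCO3⁻ ⇌ CaCO3 + CO2 + H2O

(The second equilibrium, written here in outline, is the limestone-forming step: of the two bicarbonate ions consumed, one carbon atom is buried as carbonate and the other is returned as CO2.)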
At seawater pH, most of the CO2 is immediately converted back into HCO3⁻. The reaction results in a net transport of one molecule of CO2 from the ocean/atmosphere into the lithosphere. The result is that each Ca²⁺ ion released by chemical weathering ultimately removes one CO2 molecule from the surficial system (atmosphere, ocean, soils and living organisms), storing it in carbonate rocks where it is likely to stay for hundreds of millions of years. The weathering of calcium from rocks thus scrubs CO2 from the ocean and atmosphere, exerting a strong long-term effect on climate.
The largest use of metallic calcium is in steelmaking, due to its strong chemical affinity for oxygen and sulfur. Its oxides and sulfides, once formed, give liquid lime aluminate and sulfide inclusions in steel which float out; on treatment, these inclusions disperse throughout the steel and become small and spherical, improving castability, cleanliness and general mechanical properties. Calcium is also used in maintenance-free automotive batteries, in which the use of 0.1% calcium–lead alloys instead of the usual antimony–lead alloys leads to lower water loss and lower self-discharging.
Due to the risk of expansion and cracking, aluminium is sometimes also incorporated into these alloys. These lead–calcium alloys are also used in casting, replacing lead–antimony alloys. Calcium is also used to strengthen aluminium alloys used for bearings, for the control of graphitic carbon in cast iron, and to remove bismuth impurities from lead. Calcium metal is found in some drain cleaners, where it functions to generate heat and calcium hydroxide that saponifies the fats and liquefies the proteins (for example, those in hair) that block drains.
Besides metallurgy, the reactivity of calcium is exploited to remove nitrogen from high-purity argon gas and as a getter for oxygen and nitrogen. It is also used as a reducing agent in the production of chromium, zirconium, thorium, and uranium. It can also be used to store hydrogen gas, as it reacts with hydrogen to form solid calcium hydride, from which the hydrogen can easily be re-extracted.
Calcium isotope fractionation during mineral formation has led to several applications of calcium isotopes. In particular, the 1997 observation by Skulan and DePaolo that calcium minerals are isotopically lighter than the solutions from which the minerals precipitate is the basis of analogous applications in medicine and in paleoceanography. In animals with skeletons mineralized with calcium, the calcium isotopic composition of soft tissues reflects the relative rate of formation and dissolution of skeletal mineral.
In humans, changes in the calcium isotopic composition of urine have been shown to be related to changes in bone mineral balance. When the rate of bone formation exceeds the rate of bone resorption, the ⁴⁴Ca/⁴⁰Ca ratio in soft tissue rises and vice versa. Because of this relationship, calcium isotopic measurements of urine or blood may be useful in the early detection of metabolic bone diseases like osteoporosis.
A similar system exists in seawater, where ⁴⁴Ca/⁴⁰Ca tends to rise when the rate of removal of Ca²⁺ by mineral precipitation exceeds the input of new calcium into the ocean. In 1997, Skulan and DePaolo presented the first evidence of change in seawater ⁴⁴Ca/⁴⁰Ca over geologic time, along with a theoretical explanation of these changes. More recent papers have confirmed this observation, demonstrating that seawater Ca²⁺ concentration is not constant, and that the ocean is never in a "steady state" with respect to calcium input and output. This has important climatological implications, as the marine calcium cycle is closely tied to the carbon cycle.
Many calcium compounds are used in food, as pharmaceuticals, and in medicine, among others. For example, calcium and phosphorus are supplemented in foods through the addition of calcium lactate, calcium diphosphate, and tricalcium phosphate. The last is also used as a polishing agent in toothpaste and in antacids. Calcium lactobionate is a white powder that is used as a suspending agent for pharmaceuticals. In baking, calcium phosphate is used as a leavening agent. Calcium sulfite is used as a bleach in papermaking and as a disinfectant, calcium silicate is used as a reinforcing agent in rubber, and calcium acetate is a component of liming rosin and is used to make metallic soaps and synthetic resins.
Calcium is on the World Health Organization's List of Essential Medicines.
Foods rich in calcium include dairy products, such as yogurt and cheese, sardines, salmon, soy products, kale, and fortified breakfast cereals.
Because of concerns for long-term adverse side effects, including calcification of arteries and kidney stones, both the U.S. Institute of Medicine (IOM) and the European Food Safety Authority (EFSA) set Tolerable Upper Intake Levels (ULs) for combined dietary and supplemental calcium. From the IOM, people of ages 9–18 years are not to exceed 3 g/day combined intake; for ages 19–50, not to exceed 2.5 g/day; for ages 51 and older, not to exceed 2 g/day. EFSA set the UL for all adults at 2.5 g/day, but decided the information for children and adolescents was not sufficient to determine ULs.
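Expressed as a small lookup (a sketch in Python that encodes only the IOM figures quoted above; the function name and the use of grams per day are illustrative):

def iom_calcium_ul_g_per_day(age_years: int):
    """Tolerable Upper Intake Level (combined dietary and supplemental
    calcium) in g/day, per the IOM figures above; None where no figure
    is quoted."""
    if 9 <= age_years <= 18:
        return 3.0
    if 19 <= age_years <= 50:
        return 2.5
    if age_years >= 51:
        return 2.0
    return None  # ages under 9 are not covered by the figures quoted here

print(iom_calcium_ul_g_per_day(45))  # 2.5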
Calcium is an essential element needed in large quantities. The Ca²⁺ ion acts as an electrolyte and is vital to the health of the muscular, circulatory, and digestive systems; is indispensable to the building of bone; and supports synthesis and function of blood cells. For example, it regulates the contraction of muscles, nerve conduction, and the clotting of blood. As a result, intra- and extracellular calcium levels are tightly regulated by the body. Calcium can play this role because the Ca²⁺ ion forms stable coordination complexes with many organic compounds, especially proteins; it also forms compounds with a wide range of solubilities, enabling the formation of the skeleton.
Calcium ions may be complexed by proteins through binding the carboxyl groups of glutamic acid or aspartic acid residues; through interacting with phosphorylated serine, tyrosine, or threonine residues; or by being chelated by γ-carboxylated amino acid residues. Trypsin, a digestive enzyme, uses the first method; osteocalcin, a bone matrix protein, uses the third.
Some other bone matrix proteins such as osteopontin and bone sialoprotein use both the first and the second. Direct activation of enzymes by binding calcium is common; some other enzymes are activated by noncovalent association with direct calcium-binding enzymes. Calcium also binds to the phospholipid layer of the cell membrane, anchoring proteins associated with the cell surface.
As an example of the wide range of solubility of calcium compounds, monocalcium phosphate is very soluble in water, 85% of extracellular calcium is as dicalcium phosphate with a solubility of 2.00 mM, and the hydroxyapatite of bones in an organic matrix is tricalcium phosphate with a solubility of 1000 μM.
Calcium is a common constituent of multivitamin dietary supplements, but the composition of calcium complexes in supplements may affect its bioavailability which varies by solubility of the salt involved: calcium citrate, malate, and lactate are highly bioavailable, while the oxalate is less. Other calcium preparations include calcium carbonate, calcium citrate malate, and calcium gluconate. The intestine absorbs about one-third of calcium eaten as the free ion, and plasma calcium level is then regulated by the kidneys.
Parathyroid hormone and vitamin D promote the formation of bone by allowing and enhancing the deposition of calcium ions there, allowing rapid bone turnover without affecting bone mass or mineral content. When plasma calcium levels fall, cell surface receptors are activated and the secretion of parathyroid hormone occurs; it then proceeds to stimulate the entry of calcium into the plasma pool by taking it from targeted kidney, gut, and bone cells, with the bone-forming action of parathyroid hormone being antagonised by calcitonin, whose secretion increases with increasing plasma calcium levels.
Excess intake of calcium may cause hypercalcemia. However, because calcium is absorbed rather inefficiently by the intestines, high serum calcium is more likely caused by excessive secretion of parathyroid hormone (PTH) or possibly by excessive intake of vitamin D, both of which facilitate calcium absorption. All these conditions result in excess calcium salts being deposited in the heart, blood vessels, or kidneys. Symptoms include anorexia, nausea, vomiting, memory loss, confusion, muscle weakness, increased urination, dehydration, and metabolic bone disease.
Chronic hypercalcaemia typically leads to calcification of soft tissue and its serious consequences: for example, calcification can cause loss of elasticity of vascular walls and disruption of laminar blood flow—and thence to plaque rupture and thrombosis. Conversely, inadequate calcium or vitamin D intakes may result in hypocalcemia, often caused also by inadequate secretion of parathyroid hormone or defective PTH receptors in cells. Symptoms include neuromuscular excitability, which potentially causes tetany and disruption of conductivity in cardiac tissue.
As calcium is required for bone development, many bone diseases can be traced to the organic matrix or the hydroxyapatite in molecular structure or organization of bone. Osteoporosis is a reduction in mineral content of bone per unit volume, and can be treated by supplementation of calcium, vitamin D, and bisphosphonates. Inadequate amounts of calcium, vitamin D, or phosphates can lead to softening of bones, called osteomalacia.
Because calcium reacts exothermically with water and acids, calcium metal coming into contact with bodily moisture results in severe corrosive irritation. When swallowed, calcium metal has the same effect on the mouth, oesophagus, and stomach, and can be fatal. However, long-term exposure is not known to have distinct adverse effects. | [
{
"paragraph_id": 0,
"text": "Calcium is a chemical element; it has symbol Ca and atomic number 20. As an alkaline earth metal, calcium is a reactive metal that forms a dark oxide-nitride layer when exposed to air. Its physical and chemical properties are most similar to its heavier homologues strontium and barium. It is the fifth most abundant element in Earth's crust, and the third most abundant metal, after iron and aluminium. The most common calcium compound on Earth is calcium carbonate, found in limestone and the fossilised remnants of early sea life; gypsum, anhydrite, fluorite, and apatite are also sources of calcium. The name derives from Latin calx \"lime\", which was obtained from heating limestone.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Some calcium compounds were known to the ancients, though their chemistry was unknown until the seventeenth century. Pure calcium was isolated in 1808 via electrolysis of its oxide by Humphry Davy, who named the element. Calcium compounds are widely used in many industries: in foods and pharmaceuticals for calcium supplementation, in the paper industry as bleaches, as components in cement and electrical insulators, and in the manufacture of soaps. On the other hand, the metal in pure form has few applications due to its high reactivity; still, in small quantities it is often used as an alloying component in steelmaking, and sometimes, as a calcium–lead alloy, in making automotive batteries.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Calcium is the most abundant metal and the fifth-most abundant element in the human body. As electrolytes, calcium ions (Ca) play a vital role in the physiological and biochemical processes of organisms and cells: in signal transduction pathways where they act as a second messenger; in neurotransmitter release from neurons; in contraction of all muscle cell types; as cofactors in many enzymes; and in fertilization. Calcium ions outside cells are important for maintaining the potential difference across excitable cell membranes, protein synthesis, and bone formation.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Calcium is a very ductile silvery metal (sometimes described as pale yellow) whose properties are very similar to the heavier elements in its group, strontium, barium, and radium. A calcium atom has twenty electrons, arranged in the electron configuration [Ar]4s. Like the other elements placed in group 2 of the periodic table, calcium has two valence electrons in the outermost s-orbital, which are very easily lost in chemical reactions to form a dipositive ion with the stable electron configuration of a noble gas, in this case argon.",
"title": "Characteristics"
},
{
"paragraph_id": 4,
"text": "Hence, calcium is almost always divalent in its compounds, which are usually ionic. Hypothetical univalent salts of calcium would be stable with respect to their elements, but not to disproportionation to the divalent salts and calcium metal, because the enthalpy of formation of MX2 is much higher than those of the hypothetical MX. This occurs because of the much greater lattice energy afforded by the more highly charged Ca cation compared to the hypothetical Ca cation.",
"title": "Characteristics"
},
{
"paragraph_id": 5,
"text": "Calcium, strontium, barium, and radium are always considered to be alkaline earth metals; the lighter beryllium and magnesium, also in group 2 of the periodic table, are often included as well. Nevertheless, beryllium and magnesium differ significantly from the other members of the group in their physical and chemical behaviour: they behave more like aluminium and zinc respectively and have some of the weaker metallic character of the post-transition metals, which is why the traditional definition of the term \"alkaline earth metal\" excludes them.",
"title": "Characteristics"
},
{
"paragraph_id": 6,
"text": "Calcium metal melts at 842 °C and boils at 1494 °C; these values are higher than those for magnesium and strontium, the neighbouring group 2 metals. It crystallises in the face-centered cubic arrangement like strontium; above 450 °C, it changes to an anisotropic hexagonal close-packed arrangement like magnesium. Its density of 1.55 g/cm is the lowest in its group.",
"title": "Characteristics"
},
{
"paragraph_id": 7,
"text": "Calcium is harder than lead but can be cut with a knife with effort. While calcium is a poorer conductor of electricity than copper or aluminium by volume, it is a better conductor by mass than both due to its very low density. While calcium is infeasible as a conductor for most terrestrial applications as it reacts quickly with atmospheric oxygen, its use as such in space has been considered.",
"title": "Characteristics"
},
{
"paragraph_id": 8,
"text": "The chemistry of calcium is that of a typical heavy alkaline earth metal. For example, calcium spontaneously reacts with water more quickly than magnesium and less quickly than strontium to produce calcium hydroxide and hydrogen gas. It also reacts with the oxygen and nitrogen in the air to form a mixture of calcium oxide and calcium nitride. When finely divided, it spontaneously burns in air to produce the nitride. In bulk, calcium is less reactive: it quickly forms a hydration coating in moist air, but below 30% relative humidity it may be stored indefinitely at room temperature.",
"title": "Characteristics"
},
{
"paragraph_id": 9,
"text": "Besides the simple oxide CaO, the peroxide CaO2 can be made by direct oxidation of calcium metal under a high pressure of oxygen, and there is some evidence for a yellow superoxide Ca(O2)2. Calcium hydroxide, Ca(OH)2, is a strong base, though it is not as strong as the hydroxides of strontium, barium or the alkali metals. All four dihalides of calcium are known. Calcium carbonate (CaCO3) and calcium sulfate (CaSO4) are particularly abundant minerals. Like strontium and barium, as well as the alkali metals and the divalent lanthanides europium and ytterbium, calcium metal dissolves directly in liquid ammonia to give a dark blue solution.",
"title": "Characteristics"
},
{
"paragraph_id": 10,
"text": "Due to the large size of the calcium ion (Ca), high coordination numbers are common, up to 24 in some intermetallic compounds such as CaZn13. Calcium is readily complexed by oxygen chelates such as EDTA and polyphosphates, which are useful in analytic chemistry and removing calcium ions from hard water. In the absence of steric hindrance, smaller group 2 cations tend to form stronger complexes, but when large polydentate macrocycles are involved the trend is reversed.",
"title": "Characteristics"
},
{
"paragraph_id": 11,
"text": "Although calcium is in the same group as magnesium and organomagnesium compounds are very commonly used throughout chemistry, organocalcium compounds are not similarly widespread because they are more difficult to make and more reactive, although they have recently been investigated as possible catalysts. Organocalcium compounds tend to be more similar to organoytterbium compounds due to the similar ionic radii of Yb (102 pm) and Ca (100 pm).",
"title": "Characteristics"
},
{
"paragraph_id": 12,
"text": "Most of these compounds can only be prepared at low temperatures; bulky ligands tend to favor stability. For example, calcium dicyclopentadienyl, Ca(C5H5)2, must be made by directly reacting calcium metal with mercurocene or cyclopentadiene itself; replacing the C5H5 ligand with the bulkier C5(CH3)5 ligand on the other hand increases the compound's solubility, volatility, and kinetic stability.",
"title": "Characteristics"
},
{
"paragraph_id": 13,
"text": "Natural calcium is a mixture of five stable isotopes (Ca, Ca, Ca, Ca, and Ca) and one isotope with a half-life so long that it can be considered stable for all practical purposes (Ca, with a half-life of about 4.3 × 10 years). Calcium is the first (lightest) element to have six naturally occurring isotopes.",
"title": "Characteristics"
},
{
"paragraph_id": 14,
"text": "By far the most common isotope of calcium in nature is Ca, which makes up 96.941% of all natural calcium. It is produced in the silicon-burning process from fusion of alpha particles and is the heaviest stable nuclide with equal proton and neutron numbers; its occurrence is also supplemented slowly by the decay of primordial K. Adding another alpha particle leads to unstable Ti, which quickly decays via two successive electron captures to stable Ca; this makes up 2.806% of all natural calcium and is the second-most common isotope.",
"title": "Characteristics"
},
{
"paragraph_id": 15,
"text": "The other four natural isotopes, Ca, Ca, Ca, and Ca, are significantly rarer, each comprising less than 1% of all natural calcium. The four lighter isotopes are mainly products of the oxygen-burning and silicon-burning processes, leaving the two heavier ones to be produced via neutron capture processes. Ca is mostly produced in a \"hot\" s-process, as its formation requires a rather high neutron flux to allow short-lived Ca to capture a neutron. Ca is produced by electron capture in the r-process in type Ia supernovae, where high neutron excess and low enough entropy ensures its survival.",
"title": "Characteristics"
},
{
"paragraph_id": 16,
"text": "Ca and Ca are the first \"classically stable\" nuclides with a six-neutron or eight-neutron excess respectively. Although extremely neutron-rich for such a light element, Ca is very stable because it is a doubly magic nucleus, having 20 protons and 28 neutrons arranged in closed shells. Its beta decay to Sc is very hindered because of the gross mismatch of nuclear spin: Ca has zero nuclear spin, being even–even, while Sc has spin 6+, so the decay is forbidden by the conservation of angular momentum. While two excited states of Sc are available for decay as well, they are also forbidden due to their high spins. As a result, when Ca does decay, it does so by double beta decay to Ti instead, being the lightest nuclide known to undergo double beta decay.",
"title": "Characteristics"
},
{
"paragraph_id": 17,
"text": "The heavy isotope Ca can also theoretically undergo double beta decay to Ti as well, but this has never been observed. The lightest and most common isotope Ca is also doubly magic and could undergo double electron capture to Ar, but this has likewise never been observed. Calcium is the only element to have two primordial doubly magic isotopes. The experimental lower limits for the half-lives of Ca and Ca are 5.9 × 10 years and 2.8 × 10 years respectively.",
"title": "Characteristics"
},
{
"paragraph_id": 18,
"text": "Apart from the practically stable Ca, the longest lived radioisotope of calcium is Ca. It decays by electron capture to stable K with a half-life of about a hundred thousand years. Its existence in the early Solar System as an extinct radionuclide has been inferred from excesses of K: traces of Ca also still exist today, as it is a cosmogenic nuclide, continuously reformed through neutron activation of natural Ca.",
"title": "Characteristics"
},
{
"paragraph_id": 19,
"text": "Many other calcium radioisotopes are known, ranging from Ca to Ca. They are all much shorter-lived than Ca, the most stable among them being Ca (half-life 163 days) and Ca (half-life 4.54 days). The isotopes lighter than Ca usually undergo beta plus decay to isotopes of potassium, and those heavier than Ca usually undergo beta minus decay to isotopes of scandium, although near the nuclear drip lines, proton emission and neutron emission begin to be significant decay modes as well.",
"title": "Characteristics"
},
{
"paragraph_id": 20,
"text": "Like other elements, a variety of processes alter the relative abundance of calcium isotopes. The best studied of these processes is the mass-dependent fractionation of calcium isotopes that accompanies the precipitation of calcium minerals such as calcite, aragonite and apatite from solution. Lighter isotopes are preferentially incorporated into these minerals, leaving the surrounding solution enriched in heavier isotopes at a magnitude of roughly 0.025% per atomic mass unit (amu) at room temperature. Mass-dependent differences in calcium isotope composition are conventionally expressed by the ratio of two isotopes (usually Ca/Ca) in a sample compared to the same ratio in a standard reference material. Ca/Ca varies by about 1% among common earth materials.",
"title": "Characteristics"
},
{
"paragraph_id": 21,
"text": "Calcium compounds were known for millennia, although their chemical makeup was not understood until the 17th century. Lime as a building material and as plaster for statues was used as far back as around 7000 BC. The first dated lime kiln dates back to 2500 BC and was found in Khafajah, Mesopotamia.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "At about the same time, dehydrated gypsum (CaSO4·2H2O) was being used in the Great Pyramid of Giza. This material would later be used for the plaster in the tomb of Tutankhamun. The ancient Romans instead used lime mortars made by heating limestone (CaCO3). The name \"calcium\" itself derives from the Latin word calx \"lime\".",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Vitruvius noted that the lime that resulted was lighter than the original limestone, attributing this to the boiling of the water. In 1755, Joseph Black proved that this was due to the loss of carbon dioxide, which as a gas had not been recognised by the ancient Romans.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In 1789, Antoine Lavoisier suspected that lime might be an oxide of a fundamental chemical element. In his table of the elements, Lavoisier listed five \"salifiable earths\" (i.e., ores that could be made to react with acids to produce salts (salis = salt, in Latin): chaux (calcium oxide), magnésie (magnesia, magnesium oxide), baryte (barium sulfate), alumine (alumina, aluminium oxide), and silice (silica, silicon dioxide)). About these \"elements\", Lavoisier reasoned:",
"title": "History"
},
{
"paragraph_id": 25,
"text": "We are probably only acquainted as yet with a part of the metallic substances existing in nature, as all those which have a stronger affinity to oxygen than carbon possesses, are incapable, hitherto, of being reduced to a metallic state, and consequently, being only presented to our observation under the form of oxyds, are confounded with earths. It is extremely probable that barytes, which we have just now arranged with earths, is in this situation; for in many experiments it exhibits properties nearly approaching to those of metallic bodies. It is even possible that all the substances we call earths may be only metallic oxyds, irreducible by any hitherto known process.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Calcium, along with its congeners magnesium, strontium, and barium, was first isolated by Humphry Davy in 1808. Following the work of Jöns Jakob Berzelius and Magnus Martin af Pontin on electrolysis, Davy isolated calcium and magnesium by putting a mixture of the respective metal oxides with mercury(II) oxide on a platinum plate which was used as the anode, the cathode being a platinum wire partially submerged into mercury. Electrolysis then gave calcium–mercury and magnesium–mercury amalgams, and distilling off the mercury gave the metal. However, pure calcium cannot be prepared in bulk by this method and a workable commercial process for its production was not found until over a century later.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "At 3%, calcium is the fifth most abundant element in the Earth's crust, and the third most abundant metal behind aluminium and iron. It is also the fourth most abundant element in the lunar highlands. Sedimentary calcium carbonate deposits pervade the Earth's surface as fossilized remains of past marine life; they occur in two forms, the rhombohedral calcite (more common) and the orthorhombic aragonite (forming in more temperate seas). Minerals of the first type include limestone, dolomite, marble, chalk, and iceland spar; aragonite beds make up the Bahamas, the Florida Keys, and the Red Sea basins. Corals, sea shells, and pearls are mostly made up of calcium carbonate. Among the other important minerals of calcium are gypsum (CaSO4·2H2O), anhydrite (CaSO4), fluorite (CaF2), and apatite ([Ca5(PO4)3X], X = OH, Cl, or F).",
"title": "Occurrence and production"
},
{
"paragraph_id": 28,
"text": "The major producers of calcium are China (about 10000 to 12000 tonnes per year), Russia (about 6000 to 8000 tonnes per year), and the United States (about 2000 to 4000 tonnes per year). Canada and France are also among the minor producers. In 2005, about 24000 tonnes of calcium were produced; about half of the world's extracted calcium is used by the United States, with about 80% of the output used each year.",
"title": "Occurrence and production"
},
{
"paragraph_id": 29,
"text": "In Russia and China, Davy's method of electrolysis is still used, but is instead applied to molten calcium chloride. Since calcium is less reactive than strontium or barium, the oxide–nitride coating that results in air is stable and lathe machining and other standard metallurgical techniques are suitable for calcium. In the United States and Canada, calcium is instead produced by reducing lime with aluminium at high temperatures.",
"title": "Occurrence and production"
},
{
"paragraph_id": 30,
"text": "Calcium cycling provides a link between tectonics, climate, and the carbon cycle. In the simplest terms, uplift of mountains exposes calcium-bearing rocks such as some granites to chemical weathering and releases Ca into surface water. These ions are transported to the ocean where they react with dissolved CO2 to form limestone (CaCO3), which in turn settles to the sea floor where it is incorporated into new rocks. Dissolved CO2, along with carbonate and bicarbonate ions, are termed \"dissolved inorganic carbon\" (DIC).",
"title": "Occurrence and production"
},
{
"paragraph_id": 31,
"text": "The actual reaction is more complicated and involves the bicarbonate ion (HCO3) that forms when CO2 reacts with water at seawater pH:",
"title": "Occurrence and production"
},
{
"paragraph_id": 32,
"text": "At seawater pH, most of the CO2 is immediately converted back into HCO3. The reaction results in a net transport of one molecule of CO2 from the ocean/atmosphere into the lithosphere. The result is that each Ca ion released by chemical weathering ultimately removes one CO2 molecule from the surficial system (atmosphere, ocean, soils and living organisms), storing it in carbonate rocks where it is likely to stay for hundreds of millions of years. The weathering of calcium from rocks thus scrubs CO2 from the ocean and atmosphere, exerting a strong long-term effect on climate.",
"title": "Occurrence and production"
},
{
"paragraph_id": 33,
"text": "The largest use of metallic calcium is in steelmaking, due to its strong chemical affinity for oxygen and sulfur. Its oxides and sulfides, once formed, give liquid lime aluminate and sulfide inclusions in steel which float out; on treatment, these inclusions disperse throughout the steel and become small and spherical, improving castability, cleanliness and general mechanical properties. Calcium is also used in maintenance-free automotive batteries, in which the use of 0.1% calcium–lead alloys instead of the usual antimony–lead alloys leads to lower water loss and lower self-discharging.",
"title": "Uses"
},
{
"paragraph_id": 34,
"text": "Due to the risk of expansion and cracking, aluminium is sometimes also incorporated into these alloys. These lead–calcium alloys are also used in casting, replacing lead–antimony alloys. Calcium is also used to strengthen aluminium alloys used for bearings, for the control of graphitic carbon in cast iron, and to remove bismuth impurities from lead. Calcium metal is found in some drain cleaners, where it functions to generate heat and calcium hydroxide that saponifies the fats and liquefies the proteins (for example, those in hair) that block drains.",
"title": "Uses"
},
{
"paragraph_id": 35,
"text": "Besides metallurgy, the reactivity of calcium is exploited to remove nitrogen from high-purity argon gas and as a getter for oxygen and nitrogen. It is also used as a reducing agent in the production of chromium, zirconium, thorium, and uranium. It can also be used to store hydrogen gas, as it reacts with hydrogen to form solid calcium hydride, from which the hydrogen can easily be re-extracted.",
"title": "Uses"
},
{
"paragraph_id": 36,
"text": "Calcium isotope fractionation during mineral formation has led to several applications of calcium isotopes. In particular, the 1997 observation by Skulan and DePaolo that calcium minerals are isotopically lighter than the solutions from which the minerals precipitate is the basis of analogous applications in medicine and in paleoceanography. In animals with skeletons mineralized with calcium, the calcium isotopic composition of soft tissues reflects the relative rate of formation and dissolution of skeletal mineral.",
"title": "Uses"
},
{
"paragraph_id": 37,
"text": "In humans, changes in the calcium isotopic composition of urine have been shown to be related to changes in bone mineral balance. When the rate of bone formation exceeds the rate of bone resorption, the Ca/Ca ratio in soft tissue rises and vice versa. Because of this relationship, calcium isotopic measurements of urine or blood may be useful in the early detection of metabolic bone diseases like osteoporosis.",
"title": "Uses"
},
{
"paragraph_id": 38,
"text": "A similar system exists in seawater, where Ca/Ca tends to rise when the rate of removal of Ca by mineral precipitation exceeds the input of new calcium into the ocean. In 1997, Skulan and DePaolo presented the first evidence of change in seawater Ca/Ca over geologic time, along with a theoretical explanation of these changes. More recent papers have confirmed this observation, demonstrating that seawater Ca concentration is not constant, and that the ocean is never in a \"steady state\" with respect to calcium input and output. This has important climatological implications, as the marine calcium cycle is closely tied to the carbon cycle.",
"title": "Uses"
},
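Calcium isotope data of this kind are conventionally reported in per-mil delta notation relative to a standard; the sketch below shows only that arithmetic. The function name and the example ratios are the editor's, and the reference ratio is a placeholder, not a measured value from the text:

```python
def delta44_ca(sample_ratio: float, standard_ratio: float) -> float:
    """Per-mil delta notation for a 44Ca/40Ca ratio relative to a standard."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

# Hypothetical numbers purely for illustration: a sample whose 44Ca/40Ca
# ratio is 0.1% higher than the standard gives delta44Ca = +1.0 per mil.
standard = 0.0212            # placeholder reference ratio
sample = standard * 1.001
print(f"delta44Ca = {delta44_ca(sample, standard):+.2f} per mil")
```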
{
"paragraph_id": 39,
"text": "Many calcium compounds are used in food, as pharmaceuticals, and in medicine, among others. For example, calcium and phosphorus are supplemented in foods through the addition of calcium lactate, calcium diphosphate, and tricalcium phosphate. The last is also used as a polishing agent in toothpaste and in antacids. Calcium lactobionate is a white powder that is used as a suspending agent for pharmaceuticals. In baking, calcium phosphate is used as a leavening agent. Calcium sulfite is used as a bleach in papermaking and as a disinfectant, calcium silicate is used as a reinforcing agent in rubber, and calcium acetate is a component of liming rosin and is used to make metallic soaps and synthetic resins.",
"title": "Uses"
},
{
"paragraph_id": 40,
"text": "Calcium is on the World Health Organization's List of Essential Medicines.",
"title": "Uses"
},
{
"paragraph_id": 41,
"text": "Foods rich in calcium include dairy products, such as yogurt and cheese, sardines, salmon, soy products, kale, and fortified breakfast cereals.",
"title": "Food sources"
},
{
"paragraph_id": 42,
"text": "Because of concerns for long-term adverse side effects, including calcification of arteries and kidney stones, both the U.S. Institute of Medicine (IOM) and the European Food Safety Authority (EFSA) set Tolerable Upper Intake Levels (ULs) for combined dietary and supplemental calcium. From the IOM, people of ages 9–18 years are not to exceed 3 g/day combined intake; for ages 19–50, not to exceed 2.5 g/day; for ages 51 and older, not to exceed 2 g/day. EFSA set the UL for all adults at 2.5 g/day, but decided the information for children and adolescents was not sufficient to determine ULs.",
"title": "Food sources"
},
{
"paragraph_id": 43,
"text": "Calcium is an essential element needed in large quantities. The Ca ion acts as an electrolyte and is vital to the health of the muscular, circulatory, and digestive systems; is indispensable to the building of bone; and supports synthesis and function of blood cells. For example, it regulates the contraction of muscles, nerve conduction, and the clotting of blood. As a result, intra- and extracellular calcium levels are tightly regulated by the body. Calcium can play this role because the Ca ion forms stable coordination complexes with many organic compounds, especially proteins; it also forms compounds with a wide range of solubilities, enabling the formation of the skeleton.",
"title": "Biological and pathological role"
},
{
"paragraph_id": 44,
"text": "Calcium ions may be complexed by proteins through binding the carboxyl groups of glutamic acid or aspartic acid residues; through interacting with phosphorylated serine, tyrosine, or threonine residues; or by being chelated by γ-carboxylated amino acid residues. Trypsin, a digestive enzyme, uses the first method; osteocalcin, a bone matrix protein, uses the third.",
"title": "Biological and pathological role"
},
{
"paragraph_id": 45,
"text": "Some other bone matrix proteins such as osteopontin and bone sialoprotein use both the first and the second. Direct activation of enzymes by binding calcium is common; some other enzymes are activated by noncovalent association with direct calcium-binding enzymes. Calcium also binds to the phospholipid layer of the cell membrane, anchoring proteins associated with the cell surface.",
"title": "Biological and pathological role"
},
{
"paragraph_id": 46,
"text": "As an example of the wide range of solubility of calcium compounds, monocalcium phosphate is very soluble in water, 85% of extracellular calcium is as dicalcium phosphate with a solubility of 2.00 mM, and the hydroxyapatite of bones in an organic matrix is tricalcium phosphate with a solubility of 1000 μM.",
"title": "Biological and pathological role"
},
{
"paragraph_id": 47,
"text": "Calcium is a common constituent of multivitamin dietary supplements, but the composition of calcium complexes in supplements may affect its bioavailability which varies by solubility of the salt involved: calcium citrate, malate, and lactate are highly bioavailable, while the oxalate is less. Other calcium preparations include calcium carbonate, calcium citrate malate, and calcium gluconate. The intestine absorbs about one-third of calcium eaten as the free ion, and plasma calcium level is then regulated by the kidneys.",
"title": "Biological and pathological role"
},
{
"paragraph_id": 48,
"text": "Parathyroid hormone and vitamin D promote the formation of bone by allowing and enhancing the deposition of calcium ions there, allowing rapid bone turnover without affecting bone mass or mineral content. When plasma calcium levels fall, cell surface receptors are activated and the secretion of parathyroid hormone occurs; it then proceeds to stimulate the entry of calcium into the plasma pool by taking it from targeted kidney, gut, and bone cells, with the bone-forming action of parathyroid hormone being antagonised by calcitonin, whose secretion increases with increasing plasma calcium levels.",
"title": "Biological and pathological role"
},
{
"paragraph_id": 49,
"text": "Excess intake of calcium may cause hypercalcemia. However, because calcium is absorbed rather inefficiently by the intestines, high serum calcium is more likely caused by excessive secretion of parathyroid hormone (PTH) or possibly by excessive intake of vitamin D, both of which facilitate calcium absorption. All these conditions result in excess calcium salts being deposited in the heart, blood vessels, or kidneys. Symptoms include anorexia, nausea, vomiting, memory loss, confusion, muscle weakness, increased urination, dehydration, and metabolic bone disease.",
"title": "Biological and pathological role"
},
{
"paragraph_id": 50,
"text": "Chronic hypercalcaemia typically leads to calcification of soft tissue and its serious consequences: for example, calcification can cause loss of elasticity of vascular walls and disruption of laminar blood flow—and thence to plaque rupture and thrombosis. Conversely, inadequate calcium or vitamin D intakes may result in hypocalcemia, often caused also by inadequate secretion of parathyroid hormone or defective PTH receptors in cells. Symptoms include neuromuscular excitability, which potentially causes tetany and disruption of conductivity in cardiac tissue.",
"title": "Biological and pathological role"
},
{
"paragraph_id": 51,
"text": "As calcium is required for bone development, many bone diseases can be traced to the organic matrix or the hydroxyapatite in molecular structure or organization of bone. Osteoporosis is a reduction in mineral content of bone per unit volume, and can be treated by supplementation of calcium, vitamin D, and bisphosphonates. Inadequate amounts of calcium, vitamin D, or phosphates can lead to softening of bones, called osteomalacia.",
"title": "Biological and pathological role"
},
{
"paragraph_id": 52,
"text": "Because calcium reacts exothermically with water and acids, calcium metal coming into contact with bodily moisture results in severe corrosive irritation. When swallowed, calcium metal has the same effect on the mouth, oesophagus, and stomach, and can be fatal. However, long-term exposure is not known to have distinct adverse effects.",
"title": "Safety"
}
] | Calcium is a chemical element; it has symbol Ca and atomic number 20. As an alkaline earth metal, calcium is a reactive metal that forms a dark oxide-nitride layer when exposed to air. Its physical and chemical properties are most similar to its heavier homologues strontium and barium. It is the fifth most abundant element in Earth's crust, and the third most abundant metal, after iron and aluminium. The most common calcium compound on Earth is calcium carbonate, found in limestone and the fossilised remnants of early sea life; gypsum, anhydrite, fluorite, and apatite are also sources of calcium. The name derives from Latin calx "lime", which was obtained from heating limestone. Some calcium compounds were known to the ancients, though their chemistry was unknown until the seventeenth century. Pure calcium was isolated in 1808 via electrolysis of its oxide by Humphry Davy, who named the element. Calcium compounds are widely used in many industries: in foods and pharmaceuticals for calcium supplementation, in the paper industry as bleaches, as components in cement and electrical insulators, and in the manufacture of soaps. On the other hand, the metal in pure form has few applications due to its high reactivity; still, in small quantities it is often used as an alloying component in steelmaking, and sometimes, as a calcium–lead alloy, in making automotive batteries. Calcium is the most abundant metal and the fifth-most abundant element in the human body. As electrolytes, calcium ions (Ca2+) play a vital role in the physiological and biochemical processes of organisms and cells: in signal transduction pathways where they act as a second messenger; in neurotransmitter release from neurons; in contraction of all muscle cell types; as cofactors in many enzymes; and in fertilization. Calcium ions outside cells are important for maintaining the potential difference across excitable cell membranes, protein synthesis, and bone formation. | 2001-05-17T13:00:00Z | 2023-12-17T10:44:29Z | [
"Template:Div col end",
"Template:Reflist",
"Template:Citation",
"Template:Ullmann",
"Template:Portal bar",
"Template:Infobox calcium",
"Template:Cite journal",
"Template:Doi",
"Template:Subject bar",
"Template:Pp-move-indef",
"Template:Pp-semi-vandalism",
"Template:Blockquote",
"Template:Legend",
"Template:Cite web",
"Template:Cite book",
"Template:About",
"Template:NUBASE2016",
"Template:Chembox",
"Template:Alkaline earth metals",
"Template:Authority control",
"Template:Greenwood&Earnshaw2nd",
"Template:Periodic table (navbox)",
"Template:Calcium compounds",
"Template:RubberBible86th",
"Template:Dietary supplements",
"Template:Use British English",
"Template:See also",
"Template:Good article",
"Template:Main",
"Template:Chem",
"Template:Su",
"Template:Div col"
] | https://en.wikipedia.org/wiki/Calcium |
5,669 | Chromium | Chromium is a chemical element; it has symbol Cr and atomic number 24. It is the first element in group 6. It is a steely-grey, lustrous, hard, and brittle transition metal.
Chromium metal is valued for its high corrosion resistance and hardness. A major development in steel production was the discovery that steel could be made highly resistant to corrosion and discoloration by adding metallic chromium to form stainless steel. Stainless steel and chrome plating (electroplating with chromium) together comprise 85% of the commercial use. Chromium is also greatly valued as a metal that is able to be highly polished while resisting tarnishing. Polished chromium reflects almost 70% of the visible spectrum, and almost 90% of infrared light. The name of the element is derived from the Greek word χρῶμα, chrōma, meaning color, because many chromium compounds are intensely colored.
Industrial production of chromium proceeds from chromite ore (mostly FeCr2O4) to produce ferrochromium, an iron-chromium alloy, by means of aluminothermic or silicothermic reactions. Ferrochromium is then used to produce alloys such as stainless steel. Pure chromium metal is produced by a different process: roasting and leaching of chromite to separate it from iron, followed by reduction with carbon and then aluminium.
In the United States, trivalent chromium (Cr(III)) ion is considered an essential nutrient in humans for insulin, sugar, and lipid metabolism. However, in 2014, the European Food Safety Authority, acting for the European Union, concluded that there was insufficient evidence for chromium to be recognized as essential.
While chromium metal and Cr(III) ions are considered non-toxic, hexavalent chromium, Cr(VI), is toxic and carcinogenic. According to the European Chemicals Agency (ECHA), chromium trioxide that is used in industrial electroplating processes is a "substance of very high concern" (SVHC).
Abandoned chromium production sites often require environmental cleanup.
Chromium is the fourth transition metal found on the periodic table, and has an electron configuration of [Ar] 3d5 4s1. It is also the first element in the periodic table whose ground-state electron configuration violates the Aufbau principle. This occurs again later in the periodic table with other elements and their electron configurations, such as copper, niobium, and molybdenum. The anomaly arises because electrons in the same orbital repel each other due to their like charges. In the previous elements, the energetic cost of promoting an electron to the next higher energy level is too great to compensate for that released by lessening inter-electronic repulsion. However, in the 3d transition metals, the energy gap between the 3d and the next-higher 4s subshell is very small, and because the 3d subshell is more compact than the 4s subshell, inter-electron repulsion is smaller between 4s electrons than between 3d electrons. This lowers the energetic cost of promotion and increases the energy released by it, so that the promotion becomes energetically feasible and one or even two electrons are always promoted to the 4s subshell. (Similar promotions happen for every transition metal atom but one, palladium.)
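For reference, the naive orbital-filling prediction versus the observed ground state, with copper as the other familiar first-row exception (a summary of standard configurations, added by the editor):

```latex
% Aufbau prediction vs observed ground-state configuration
\mathrm{Cr:\ [Ar]\,3d^{4}\,4s^{2}\ (predicted)\quad\longrightarrow\quad [Ar]\,3d^{5}\,4s^{1}\ (observed)}
\mathrm{Cu:\ [Ar]\,3d^{9}\,4s^{2}\ (predicted)\quad\longrightarrow\quad [Ar]\,3d^{10}\,4s^{1}\ (observed)}
```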
Chromium is the first element in the 3d series where the 3d electrons start to sink into the core; they thus contribute less to metallic bonding, and hence the melting and boiling points and the enthalpy of atomisation of chromium are lower than those of the preceding element vanadium. Chromium(VI) is a strong oxidising agent in contrast to the molybdenum(VI) and tungsten(VI) oxides.
Chromium is extremely hard, and is the third hardest element behind carbon (diamond) and boron. Its Mohs hardness is 8.5, which means that it can scratch samples of quartz and topaz, but can be scratched by corundum. Chromium is highly resistant to tarnishing, which makes it useful as a metal that preserves its outermost layer from corroding, unlike other metals such as copper, magnesium, and aluminium.
Chromium has a melting point of 1907 °C (3465 °F), which is relatively low compared to the majority of transition metals. However, it still has the second highest melting point out of all the Period 4 elements, being topped by vanadium by 3 °C (5 °F) at 1910 °C (3470 °F). The boiling point of 2671 °C (4840 °F), however, is comparatively lower, having the fourth lowest boiling point out of the Period 4 transition metals alone behind copper, manganese and zinc. The electrical resistivity of chromium at 20 °C is 125 nanoohm-meters.
Chromium has a high specular reflection in comparison to other transition metals. It has a maximum reflectance of about 72% at 425 nm, falling to a minimum of 62% at 750 nm before rising again to 90% at 4000 nm in the infrared. When chromium is used in stainless steel alloys and polished, the specular reflection decreases with the inclusion of additional metals, yet is still high in comparison with other alloys. Between 40% and 60% of the visible spectrum is reflected from polished stainless steel. Chromium's high reflectance, especially the 90% figure in the infrared, can be attributed to its magnetic properties: chromium is the only elemental solid that shows antiferromagnetic ordering at room temperature and below. Above 38 °C, it becomes paramagnetic. The antiferromagnetic ordering arises because the magnetic structure of the body-centered cubic lattice is incommensurate with the lattice periodicity, owing to the magnetic moments at the cube's corners and the unequal, but antiparallel, moments at the cube's centers. From here, the frequency-dependent relative permittivity of chromium, deriving from Maxwell's equations and chromium's antiferromagnetism, leaves chromium with a high infrared and visible light reflectance.
Chromium metal left standing in air is passivated: it forms a thin, protective surface layer of oxide. This layer has a spinel structure a few atomic layers thick; it is very dense and inhibits the diffusion of oxygen into the underlying metal. In contrast, iron forms a more porous oxide through which oxygen can migrate, causing continued rusting. Passivation can be enhanced by short contact with oxidizing acids like nitric acid. Passivated chromium is stable against acids. Passivation can be removed with a strong reducing agent that destroys the protective oxide layer on the metal. Chromium metal treated in this way readily dissolves in weak acids.
Chromium, unlike iron and nickel, does not suffer from hydrogen embrittlement. However, it does suffer from nitrogen embrittlement, reacting with nitrogen from air and forming brittle nitrides at the high temperatures necessary to work the metal parts.
Naturally occurring chromium is composed of four stable isotopes: 50Cr, 52Cr, 53Cr and 54Cr, with 52Cr being the most abundant (83.789% natural abundance). 50Cr is observationally stable, as it is theoretically capable of decaying to 50Ti via double electron capture with a half-life of no less than 1.3×10^18 years. Twenty-five radioisotopes have been characterized, ranging from 42Cr to 67Cr; the most stable radioisotope is 51Cr with a half-life of 27.7 days. All of the remaining radioactive isotopes have half-lives that are less than 24 hours and the majority less than 1 minute. Chromium also has two metastable nuclear isomers. The primary decay mode before the most abundant stable isotope, 52Cr, is electron capture and the primary mode after is beta decay.
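As a quick check on what the 27.7-day half-life of 51Cr means in practice, a minimal first-order decay calculation (the time points are the editor's, chosen for illustration):

```python
# Fraction of 51Cr remaining after t days, given its 27.7-day half-life.
HALF_LIFE_DAYS = 27.7

def fraction_remaining(t_days: float) -> float:
    return 0.5 ** (t_days / HALF_LIFE_DAYS)

for t in (27.7, 90, 365):
    print(f"after {t:6.1f} days: {fraction_remaining(t):.4f} remains")
# After one half-life exactly half remains; after a year, roughly 0.01% remains.
```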
53Cr is the radiogenic decay product of 53Mn (half-life 3.74 million years). Chromium isotopes are typically collocated (and compounded) with manganese isotopes. This circumstance is useful in isotope geology. Manganese-chromium isotope ratios reinforce the evidence from 26Al and 107Pd concerning the early history of the Solar System. Variations in 53Cr/52Cr and Mn/Cr ratios from several meteorites indicate an initial 53Mn/55Mn ratio that suggests Mn-Cr isotopic composition must result from in-situ decay of 53Mn in differentiated planetary bodies. Hence 53Cr provides additional evidence for nucleosynthetic processes immediately before coalescence of the Solar System. 53Cr has been posited as a proxy for atmospheric oxygen concentration.
Chromium is a member of group 6 of the transition metals. The +3 and +6 oxidation states occur most commonly within chromium compounds, followed by +2; the +1, +4 and +5 states are rare but do nevertheless occasionally occur.
Many Cr(0) complexes are known. Bis(benzene)chromium and chromium hexacarbonyl are highlights in organochromium chemistry.
Chromium(II) compounds are uncommon, in part because they readily oxidize to chromium(III) derivatives in air. Water-stable chromium(II) chloride, CrCl2, can be made by reducing chromium(III) chloride with zinc. The resulting bright blue solution created from dissolving chromium(II) chloride is stable at neutral pH. Some other notable chromium(II) compounds include chromium(II) oxide CrO, and chromium(II) sulfate CrSO4. Many chromium(II) carboxylates are known. The red chromium(II) acetate (Cr2(O2CCH3)4) is somewhat famous. It features a Cr-Cr quadruple bond.
A large number of chromium(III) compounds are known, such as chromium(III) nitrate, chromium(III) acetate, and chromium(III) oxide. Chromium(III) can be obtained by dissolving elemental chromium in acids like hydrochloric acid or sulfuric acid, but it can also be formed through the reduction of chromium(VI) by cytochrome c7. The Cr3+ ion has a similar radius (63 pm) to Al3+ (radius 50 pm), and they can replace each other in some compounds, such as in chrome alum and alum.
Chromium(III) tends to form octahedral complexes. Commercially available chromium(III) chloride hydrate is the dark green complex [CrCl2(H2O)4]Cl. Closely related compounds are the pale green [CrCl(H2O)5]Cl2 and violet [Cr(H2O)6]Cl3. If anhydrous violet chromium(III) chloride is dissolved in water, the violet solution turns green after some time as the chloride in the inner coordination sphere is replaced by water. This kind of reaction is also observed with solutions of chrome alum and other water-soluble chromium(III) salts. A tetrahedral coordination of chromium(III) has been reported for the Cr-centered Keggin anion [α-CrW12O40]^5−.
Chromium(III) hydroxide (Cr(OH)3) is amphoteric, dissolving in acidic solutions to form [Cr(H2O)6]3+, and in basic solutions to form [Cr(OH)6]3−. It is dehydrated by heating to form the green chromium(III) oxide (Cr2O3), a stable oxide with a crystal structure identical to that of corundum.
Chromium(VI) compounds are oxidants at low or neutral pH. Chromate (CrO4^2−) and dichromate (Cr2O7^2−) anions are the principal ions at this oxidation state. They exist in an equilibrium determined by pH: 2 CrO4^2− + 2 H+ ⇌ Cr2O7^2− + H2O.
Chromium(VI) oxyhalides are known also and include chromyl fluoride (CrO2F2) and chromyl chloride (CrO2Cl2). However, despite several erroneous claims, chromium hexafluoride (as well as all higher hexahalides) remains unknown, as of 2020.
Sodium chromate is produced industrially by the oxidative roasting of chromite ore with sodium carbonate. The shift in the equilibrium is visible as a change from yellow (chromate) to orange (dichromate), such as when an acid is added to a neutral solution of potassium chromate. At yet lower pH values, further condensation to more complex oxyanions of chromium is possible.
Both the chromate and dichromate anions are strong oxidizing reagents at low pH: Cr2O7^2− + 14 H+ + 6 e− → 2 Cr3+ + 7 H2O (E° = +1.33 V).
They are, however, only moderately oxidizing at high pH: CrO4^2− + 4 H2O + 3 e− → Cr(OH)3 + 5 OH− (E° = −0.13 V).
Chromium(VI) compounds in solution can be detected by adding an acidic hydrogen peroxide solution. The unstable dark blue chromium(VI) peroxide (CrO5) is formed, which can be stabilized as an ether adduct CrO5·OR2.
Chromic acid has the hypothetical formula H2CrO4. It is a vaguely described chemical, despite many well-defined chromates and dichromates being known. The dark red chromium(VI) oxide CrO3, the acid anhydride of chromic acid, is sold industrially as "chromic acid". It can be produced by mixing sulfuric acid with dichromate and is a strong oxidizing agent.
Compounds of chromium(V) are rather rare; the +5 oxidation state is realized in only a few compounds, though such species are intermediates in many reactions involving oxidations by chromate. The only binary compound is the volatile chromium(V) fluoride (CrF5). This red solid has a melting point of 30 °C and a boiling point of 117 °C. It can be prepared by treating chromium metal with fluorine at 400 °C and 200 bar pressure. The peroxochromate(V) is another example of the +5 oxidation state. Potassium peroxochromate (K3[Cr(O2)4]) is made by reacting potassium chromate with hydrogen peroxide at low temperatures. This red brown compound is stable at room temperature but decomposes spontaneously at 150–170 °C.
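The direct fluorination described above corresponds to a simple balance (balanced by the editor; temperature and pressure as given in the text):

```latex
% Fluorination of chromium metal at 400 °C and 200 bar
\mathrm{2\,Cr + 5\,F_2 \longrightarrow 2\,CrF_5}
```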
Compounds of chromium(IV) are slightly more common than those of chromium(V). The tetrahalides, CrF4, CrCl4, and CrBr4, can be produced by treating the trihalides (CrX3) with the corresponding halogen at elevated temperatures. Such compounds are susceptible to disproportionation reactions and are not stable in water. Compounds of Cr(IV) with organic ligands, such as chromium tetra-tert-butoxide, are also known.
Most chromium(I) compounds are obtained solely by oxidation of electron-rich, octahedral chromium(0) complexes. Other chromium(I) complexes contain cyclopentadienyl ligands. As verified by X-ray diffraction, a Cr-Cr quintuple bond (length 183.51(4) pm) has also been described. Extremely bulky monodentate ligands stabilize this compound by shielding the quintuple bond from further reactions.
Chromium is the 21st most abundant element in Earth's crust with an average concentration of 100 ppm. Chromium compounds are found in the environment from the erosion of chromium-containing rocks, and can be redistributed by volcanic eruptions. Typical background concentrations of chromium in environmental media are: atmosphere <10 ng/m3; soil <500 mg/kg; vegetation <0.5 mg/kg; freshwater <10 μg/L; seawater <1 μg/L; sediment <80 mg/kg. Chromium is mined as chromite (FeCr2O4) ore.
About two-fifths of the chromite ores and concentrates in the world are produced in South Africa, about a third in Kazakhstan, while India, Russia, and Turkey are also substantial producers. Untapped chromite deposits are plentiful, but geographically concentrated in Kazakhstan and southern Africa. Although rare, deposits of native chromium exist. The Udachnaya Pipe in Russia produces samples of the native metal. This mine is a kimberlite pipe, rich in diamonds, and the reducing environment helped produce both elemental chromium and diamonds.
The relation between Cr(III) and Cr(VI) strongly depends on pH and oxidative properties of the location. In most cases, Cr(III) is the dominating species, but in some areas, the ground water can contain up to 39 µg/L of total chromium, of which 30 µg/L is Cr(VI).
Chromium minerals as pigments came to the attention of the west in the eighteenth century. On 26 July 1761, Johann Gottlob Lehmann found an orange-red mineral in the Beryozovskoye mines in the Ural Mountains which he named Siberian red lead. Though misidentified as a lead compound with selenium and iron components, the mineral was in fact crocoite with a formula of PbCrO4. In 1770, Peter Simon Pallas visited the same site as Lehmann and found a red lead mineral that was discovered to possess useful properties as a pigment in paints. After Pallas, the use of Siberian red lead as a paint pigment began to develop rapidly throughout the region. Crocoite would be the principal source of chromium in pigments until the discovery of chromite many years later.
In 1794, Louis Nicolas Vauquelin received samples of crocoite ore. He produced chromium trioxide (CrO3) by mixing crocoite with hydrochloric acid. In 1797, Vauquelin discovered that he could isolate metallic chromium by heating the oxide in a charcoal oven, for which he is credited as the one who truly discovered the element. Vauquelin was also able to detect traces of chromium in precious gemstones, such as ruby and emerald.
During the nineteenth century, chromium was primarily used not only as a component of paints, but in tanning salts as well. For quite some time, the crocoite found in Russia was the main source for such tanning materials. In 1827, a larger chromite deposit was discovered near Baltimore, United States, which met the demand for tanning salts far more adequately than the crocoite used previously. This made the United States the largest producer of chromium products until 1848, when larger deposits of chromite were uncovered near the city of Bursa, Turkey. With the development of metallurgy and chemical industries in the Western world, the need for chromium increased.
Chromium is also famous for its reflective, metallic luster when polished. It is used as a protective and decorative coating on car parts, plumbing fixtures, furniture parts and many other items, usually applied by electroplating. Chromium was used for electroplating as early as 1848, but this use only became widespread with the development of an improved process in 1924.
Approximately 28.8 million metric tons (Mt) of marketable chromite ore was produced in 2013, and converted into 7.5 Mt of ferrochromium. According to John F. Papp, writing for the USGS, "Ferrochromium is the leading end use of chromite ore, [and] stainless steel is the leading end use of ferrochromium."
The largest producers of chromium ore in 2013 were South Africa (48%), Kazakhstan (13%), Turkey (11%), and India (10%), with several other countries producing the remaining 18% or so of world production.
The two main products of chromium ore refining are ferrochromium and metallic chromium. For those products the ore smelting process differs considerably. For the production of ferrochromium, the chromite ore (FeCr2O4) is reduced on a large scale in electric arc furnaces, or in smaller smelters with either aluminium or silicon in an aluminothermic reaction.
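An illustrative aluminothermic balance for the ferrochromium route (a textbook stoichiometry assumed by the editor, not quoted from the source):

```latex
% Aluminothermic reduction of chromite to ferrochromium constituents
\mathrm{3\,FeCr_2O_4 + 8\,Al \longrightarrow 3\,Fe + 6\,Cr + 4\,Al_2O_3}
```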
For the production of pure chromium, the iron must be separated from the chromium in a two-step roasting and leaching process. The chromite ore is heated with a mixture of calcium carbonate and sodium carbonate in the presence of air. The chromium is oxidized to the hexavalent form, while the iron forms the stable Fe2O3. The subsequent leaching at elevated temperatures dissolves the chromates and leaves the insoluble iron oxide. The chromate is converted by sulfuric acid into the dichromate.
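The roasting, leaching and acidification steps can be written out as follows (standard process chemistry consistent with the description above):

```latex
% Oxidative roasting of chromite with soda ash in air
\mathrm{4\,FeCr_2O_4 + 8\,Na_2CO_3 + 7\,O_2 \longrightarrow 8\,Na_2CrO_4 + 2\,Fe_2O_3 + 8\,CO_2}
% Conversion of the leached chromate to dichromate with sulfuric acid
\mathrm{2\,Na_2CrO_4 + H_2SO_4 \longrightarrow Na_2Cr_2O_7 + Na_2SO_4 + H_2O}
```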
The dichromate is converted to the chromium(III) oxide by reduction with carbon and then reduced in an aluminothermic reaction to chromium.
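And the final two reduction steps to the metal (again standard balances):

```latex
% Carbon reduction of sodium dichromate to chromium(III) oxide
\mathrm{Na_2Cr_2O_7 + 2\,C \longrightarrow Cr_2O_3 + Na_2CO_3 + CO}
% Aluminothermic reduction of the oxide to chromium metal
\mathrm{Cr_2O_3 + 2\,Al \longrightarrow 2\,Cr + Al_2O_3}
```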
The creation of metal alloys accounts for 85% of chromium use. The remainder of chromium is used in the chemical, refractory, and foundry industries.
The strengthening effect of forming stable metal carbides at grain boundaries, and the strong increase in corrosion resistance, made chromium an important alloying material for steel. High-speed tool steels contain between 3 and 5% chromium. Stainless steel, the primary corrosion-resistant metal alloy, is formed when chromium is introduced to iron in concentrations above 11%. For stainless steel's formation, ferrochromium is added to the molten iron. Also, nickel-based alloys have increased strength due to the formation of discrete, stable metal carbide particles at the grain boundaries. For example, Inconel 718 contains 18.6% chromium. Because of the excellent high-temperature properties of these nickel superalloys, they are used in jet engines and gas turbines in lieu of common structural materials. ASTM B163 relies on chromium for condenser and heat-exchanger tubes, while castings with high strength at elevated temperatures that contain chromium are standardised with ASTM A567. AISI type 332 is used where high temperature would normally cause carburization, oxidation or corrosion. Incoloy 800 "is capable of remaining stable and maintaining its austenitic structure even after long time exposures to high temperatures". Nichrome is used as resistance wire for heating elements in things like toasters and space heaters. These uses make chromium a strategic material. Consequently, during World War II, U.S. road engineers were instructed to avoid chromium in yellow road paint, as it "may become a critical material during the emergency." The United States likewise considered chromium "essential for the German war industry" and made intense diplomatic efforts to keep it out of the hands of Nazi Germany.
The high hardness and corrosion resistance of unalloyed chromium make it a reliable metal for surface coating; it is still the most popular metal for sheet coating, owing to its above-average durability compared to other coating metals. A layer of chromium is deposited on pretreated metallic surfaces by electroplating techniques. There are two deposition methods: thin and thick. Thin deposition involves a layer of chromium below 1 µm thickness deposited by chrome plating, and is used for decorative surfaces. Thicker chromium layers are deposited if wear-resistant surfaces are needed. Both methods use acidic chromate or dichromate solutions. To prevent the energy-consuming change in oxidation state, the use of chromium(III) sulfate is under development; for most applications of chromium, the previously established process is used.
In the chromate conversion coating process, the strong oxidative properties of chromates are used to deposit a protective oxide layer on metals like aluminium, zinc, and cadmium. This passivation and the self-healing properties of the chromate stored in the chromate conversion coating, which is able to migrate to local defects, are the benefits of this coating method. Because of environmental and health regulations on chromates, alternative coating methods are under development.
Chromic acid anodizing (or Type I anodizing) of aluminium is another electrochemical process that does not lead to the deposition of chromium, but uses chromic acid as an electrolyte in the solution. During anodization, an oxide layer is formed on the aluminium. The use of chromic acid, instead of the normally used sulfuric acid, leads to a slight difference of these oxide layers. The high toxicity of Cr(VI) compounds, used in the established chromium electroplating process, and the strengthening of safety and environmental regulations demand a search for substitutes for chromium, or at least a change to less toxic chromium(III) compounds.
The mineral crocoite (which is also lead chromate PbCrO4) was used as a yellow pigment shortly after its discovery. After a synthesis method became available starting from the more abundant chromite, chrome yellow was, together with cadmium yellow, one of the most used yellow pigments. The pigment does not photodegrade, but it tends to darken due to the formation of chromium(III) oxide. It has a strong color, and was used for school buses in the United States and for the postal services (for example, the Deutsche Post) in Europe. The use of chrome yellow has since declined due to environmental and safety concerns and was replaced by organic pigments or other alternatives that are free from lead and chromium. Other pigments based on chromium include chrome red, a deep red pigment that is simply lead chromate with lead(II) hydroxide (PbCrO4·Pb(OH)2). A very important chromate pigment, which was used widely in metal primer formulations, was zinc chromate, now replaced by zinc phosphate. A wash primer was formulated to replace the dangerous practice of pre-treating aluminium aircraft bodies with a phosphoric acid solution. This used zinc tetroxychromate dispersed in a solution of polyvinyl butyral. An 8% solution of phosphoric acid in solvent was added just before application. It was found that an easily oxidized alcohol was an essential ingredient. A thin layer of about 10–15 µm was applied, which turned from yellow to dark green when it was cured. There is still a question as to the correct mechanism. Chrome green is a mixture of Prussian blue and chrome yellow, while the chrome oxide green is chromium(III) oxide.
Chromium oxides are also used as a green pigment in the field of glassmaking and also as a glaze for ceramics. Green chromium oxide is extremely lightfast and as such is used in cladding coatings. It is also the main ingredient in infrared reflecting paints, used by the armed forces to paint vehicles and to give them the same infrared reflectance as green leaves.
Chromium(III) ions present in corundum crystals (aluminium oxide) cause them to be colored red; when corundum appears as such, it is known as a ruby. If the corundum is lacking in chromium(III) ions, it is known as a sapphire. A red-colored artificial ruby may also be achieved by doping chromium(III) into artificial corundum crystals, thus making chromium a requirement for making synthetic rubies. Such a synthetic ruby crystal was the basis for the first laser, produced in 1960, which relied on stimulated emission of light from the chromium atoms in such a crystal. Ruby has a laser transition at 694.3 nanometers, in a deep red color.
Because of their toxicity, chromium(VI) salts are used for the preservation of wood. For example, chromated copper arsenate (CCA) is used in timber treatment to protect wood from decay fungi, wood-attacking insects, including termites, and marine borers. The formulations contain chromium based on the oxide CrO3 between 35.3% and 65.5%. In the United States, 65,300 metric tons of CCA solution were used in 1996.
Chromium(III) salts, especially chrome alum and chromium(III) sulfate, are used in the tanning of leather. The chromium(III) stabilizes the leather by cross linking the collagen fibers. Chromium tanned leather can contain between 4 and 5% of chromium, which is tightly bound to the proteins. Although the form of chromium used for tanning is not the toxic hexavalent variety, there remains interest in management of chromium in the tanning industry. Recovery and reuse, direct/indirect recycling, and "chrome-less" or "chrome-free" tanning are practiced to better manage chromium usage.
The high heat resistivity and high melting point make chromite and chromium(III) oxide a material for high temperature refractory applications, like blast furnaces, cement kilns, molds for the firing of bricks and as foundry sands for the casting of metals. In these applications, the refractory materials are made from mixtures of chromite and magnesite. The use is declining because of the environmental regulations due to the possibility of the formation of chromium(VI).
Several chromium compounds are used as catalysts for processing hydrocarbons. For example, the Phillips catalyst, prepared from chromium oxides, is used for the production of about half the world's polyethylene. Fe-Cr mixed oxides are employed as high-temperature catalysts for the water gas shift reaction. Copper chromite is a useful hydrogenation catalyst.
Metal chromates are used in humistors.
The biologically beneficial effects of chromium(III) are debated. Chromium is accepted by the U.S. National Institutes of Health as a trace element for its roles in the action of insulin, a hormone that mediates the metabolism and storage of carbohydrate, fat, and protein. The mechanism of its actions in the body, however, has not been defined, leaving in question the essentiality of chromium.
In contrast, hexavalent chromium (Cr(VI) or Cr6+) is highly toxic and mutagenic. Ingestion of chromium(VI) in water has been linked to stomach tumors, and it may also cause allergic contact dermatitis (ACD).
"Chromium deficiency", involving a lack of Cr(III) in the body, or perhaps some complex of it, such as glucose tolerance factor, is controversial. Some studies suggest that the biologically active form of chromium(III) is transported in the body via an oligopeptide called low-molecular-weight chromium-binding substance (LMWCr), which might play a role in the insulin signaling pathway.
The chromium content of common foods is generally low (1–13 micrograms per serving). The chromium content of food varies widely, due to differences in soil mineral content, growing season, plant cultivar, and contamination during processing. Chromium (and nickel) leach into food cooked in stainless steel, with the effect being largest when the cookware is new. Acidic foods that are cooked for many hours also exacerbate this effect.
There is disagreement on chromium's status as an essential nutrient. Governmental departments from Australia, New Zealand, India, Japan, and the United States consider chromium essential while the European Food Safety Authority (EFSA) of the European Union does not.
The U.S. National Academy of Medicine (NAM) updated the Estimated Average Requirements (EARs) and the Recommended Dietary Allowances (RDAs) for chromium in 2001. For chromium, there was insufficient information to set EARs and RDAs, so its needs are described as estimates for Adequate Intakes (AIs). The current AIs of chromium for women ages 14 through 50 are 25 μg/day, and the AIs for women ages 50 and above are 20 μg/day. The AIs for women who are pregnant are 30 μg/day, and for women who are lactating, the set AIs are 45 μg/day. The AIs for men ages 14 through 50 are 35 μg/day, and the AIs for men ages 50 and above are 30 μg/day. For children ages 1 through 13, the AIs increase with age from 0.2 μg/day up to 25 μg/day. As for safety, the NAM sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when the evidence is sufficient. In the case of chromium, there is not yet enough information, hence no UL has been established. Collectively, the EARs, RDAs, AIs, and ULs are the parameters for the nutrition recommendation system known as Dietary Reference Intake (DRI). Australia and New Zealand consider chromium to be an essential nutrient, with an AI of 35 μg/day for men, 25 μg/day for women, 30 μg/day for women who are pregnant, and 45 μg/day for women who are lactating. A UL has not been set due to the lack of sufficient data. India considers chromium to be an essential nutrient, with an adult recommended intake of 33 μg/day. Japan also considers chromium to be an essential nutrient, with an AI of 10 μg/day for adults, including women who are pregnant or lactating. A UL has not been set. The EFSA of the European Union, however, does not consider chromium to be an essential nutrient; chromium is the only mineral for which the United States and the European Union disagree.
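The U.S. adult AI values quoted above lend themselves to a simple lookup table; the sketch below encodes only the figures stated in this paragraph (the function-free structure, key names, and grouping are the editor's choices):

```python
# U.S. Adequate Intakes (micrograms/day) for chromium, from the NAM values
# quoted in the text. Keys are (sex, life_stage) pairs chosen for this sketch.
ADEQUATE_INTAKE_UG = {
    ("female", "14-50"): 25,
    ("female", "51+"): 20,
    ("female", "pregnant"): 30,
    ("female", "lactating"): 45,
    ("male", "14-50"): 35,
    ("male", "51+"): 30,
}

print(ADEQUATE_INTAKE_UG[("female", "pregnant")])  # -> 30
```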
For U.S. food and dietary supplement labeling purposes, the amount of the substance in a serving is expressed as a percent of the Daily Value (%DV). For chromium labeling purposes, 100% of the Daily Value was 120 μg. As of May 27, 2016, it was revised to 35 μg to bring the chromium intake into consensus with the official Recommended Dietary Allowance. A table of the old and new adult daily values is provided at Reference Daily Intake.
Food composition databases such as those maintained by the U.S. Department of Agriculture do not contain information on the chromium content of foods. A wide variety of animal and vegetable foods contain chromium. Content per serving is influenced by the chromium content of the soil in which the plants are grown, by foodstuffs fed to animals, and by processing methods, as chromium is leached into foods if processed or cooked in stainless steel equipment. One diet analysis study conducted in Mexico reported an average daily chromium intake of 30 micrograms. An estimated 31% of adults in the United States consume multi-vitamin/mineral dietary supplements, which often contain 25 to 60 micrograms of chromium.
Chromium is an ingredient in total parenteral nutrition (TPN), because deficiency can occur after months of intravenous feeding with chromium-free TPN. It is also added to nutritional products for preterm infants. Although the mechanism of action in biological roles for chromium is unclear, in the United States chromium-containing products are sold as non-prescription dietary supplements in amounts ranging from 50 to 1,000 μg. Lower amounts of chromium are also often incorporated into multi-vitamin/mineral supplements consumed by an estimated 31% of adults in the United States. Chemical compounds used in dietary supplements include chromium chloride, chromium citrate, chromium(III) picolinate, chromium(III) polynicotinate, and other chemical compositions. The benefit of supplements has not been proven.
In 2005, the U.S. Food and Drug Administration had approved a qualified health claim for chromium picolinate with a requirement for very specific label wording: "One small study suggests that chromium picolinate may reduce the risk of insulin resistance, and therefore possibly may reduce the risk of type 2 diabetes. FDA concludes, however, that the existence of such a relationship between chromium picolinate and either insulin resistance or type 2 diabetes is highly uncertain." At the same time, in answer to other parts of the petition, the FDA rejected claims for chromium picolinate and cardiovascular disease, retinopathy or kidney disease caused by abnormally high blood sugar levels. In 2010, chromium(III) picolinate was approved by Health Canada to be used in dietary supplements. Approved labeling statements include: a factor in the maintenance of good health, provides support for healthy glucose metabolism, helps the body to metabolize carbohydrates and helps the body to metabolize fats. The European Food Safety Authority (EFSA) approved claims in 2010 that chromium contributed to normal macronutrient metabolism and maintenance of normal blood glucose concentration, but rejected claims for maintenance or achievement of a normal body weight, or reduction of tiredness or fatigue.
Given the evidence for chromium deficiency causing problems with glucose management in the context of intravenous nutrition products formulated without chromium, research interest turned to whether chromium supplementation would benefit people who have type 2 diabetes but are not chromium deficient. Looking at the results from four meta-analyses, one reported a statistically significant decrease in fasting plasma glucose levels (FPG) and a non-significant trend in lower hemoglobin A1C. A second reported the same, a third reported significant decreases for both measures, while a fourth reported no benefit for either. A review published in 2016 listed 53 randomized clinical trials that were included in one or more of six meta-analyses. It concluded that whereas there may be modest decreases in FPG and/or HbA1C that achieve statistical significance in some of these meta-analyses, few of the trials achieved decreases large enough to be expected to be relevant to clinical outcome.
Two systematic reviews looked at chromium supplements as a means of managing body weight in overweight and obese people. One, limited to chromium picolinate, a popular supplement ingredient, reported a statistically significant −1.1 kg (2.4 lb) weight loss in trials longer than 12 weeks. The other included all chromium compounds and reported a statistically significant −0.50 kg (1.1 lb) weight change. Change in percent body fat did not reach statistical significance. Authors of both reviews considered the clinical relevance of this modest weight loss as uncertain/unreliable. The European Food Safety Authority reviewed the literature and concluded that there was insufficient evidence to support a claim.
Chromium is promoted as a sports performance dietary supplement, based on the theory that it potentiates insulin activity, with anticipated results of increased muscle mass, and faster recovery of glycogen storage during post-exercise recovery. A review of clinical trials reported that chromium supplementation did not improve exercise performance or increase muscle strength. The International Olympic Committee reviewed dietary supplements for high-performance athletes in 2018 and concluded there was no need to increase chromium intake for athletes, nor support for claims of losing body fat.
Chromium is naturally present in the environment in trace amounts, but industrial use in rubber and stainless steel manufacturing, chrome plating, dyes for textiles, tanneries and other uses contaminates aquatic systems. In Bangladesh, rivers in or downstream from industrialized areas exhibit heavy metal contamination. Irrigation water standards for chromium are 0.1 mg/L, but some rivers are more than five times that amount. The standard for fish for human consumption is less than 1 mg/kg, but many tested samples were more than five times that amount. Chromium, especially hexavalent chromium, is highly toxic to fish because it is easily absorbed across the gills, readily enters blood circulation, crosses cell membranes and bioconcentrates up the food chain. In contrast, the toxicity of trivalent chromium is very low, attributed to poor membrane permeability and little biomagnification.
Acute and chronic exposure to chromium(VI) affects fish behavior, physiology, reproduction and survival. Hyperactivity and erratic swimming have been reported in contaminated environments. Egg hatching and fingerling survival are affected. In adult fish there are reports of histopathological damage to liver, kidney, muscle, intestines, and gills. Mechanisms include mutagenic gene damage and disruptions of enzyme functions.
There is evidence that fish may not require chromium, but benefit from a measured amount in diet. In one study, juvenile fish gained weight on a zero chromium diet, but the addition of 500 μg of chromium in the form of chromium chloride or other supplement types, per kilogram of food (dry weight), increased weight gain. At 2,000 μg/kg the weight gain was no better than with the zero chromium diet, and there were increased DNA strand breaks.
Water-insoluble chromium(III) compounds and chromium metal are not considered a health hazard, while the toxicity and carcinogenic properties of chromium(VI) have been known for a long time. Because of the specific transport mechanisms, only limited amounts of chromium(III) enter the cells. Acute oral toxicity ranges between 50 and 150 mg/kg. A 2008 review suggested that moderate uptake of chromium(III) through dietary supplements poses no genetic-toxic risk. In the US, the Occupational Safety and Health Administration (OSHA) has designated an air permissible exposure limit (PEL) in the workplace as a time-weighted average (TWA) of 1 mg/m3. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.5 mg/m3, time-weighted average. The IDLH (immediately dangerous to life and health) value is 250 mg/m3.
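A time-weighted average over an 8-hour shift is simply an exposure-weighted mean; a minimal sketch of that arithmetic follows (the sampled concentrations are invented for illustration):

```python
# 8-hour time-weighted average (TWA) exposure, compared against the
# 1 mg/m3 PEL cited above. The (hours, mg/m3) sample pairs are hypothetical.
exposures = [(2.0, 0.8), (4.0, 0.3), (2.0, 1.2)]

twa = sum(hours * conc for hours, conc in exposures) / 8.0
print(f"TWA = {twa:.2f} mg/m3, PEL exceeded: {twa > 1.0}")
```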
The acute oral toxicity for chromium(VI) ranges between 1.5 and 3.3 mg/kg. In the body, chromium(VI) is reduced to chromium(III) by several mechanisms in the blood, before it enters the cells. The chromium(III) is excreted from the body, whereas the chromate ion is transferred into the cell by a transport mechanism by which sulfate and phosphate ions also enter the cell. The acute toxicity of chromium(VI) is due to its strong oxidant properties. After it reaches the blood stream, it damages the kidneys, the liver and blood cells through oxidation reactions. Hemolysis, renal, and liver failure result. Aggressive dialysis can be therapeutic.
The carcinogenicity of chromate dust has been known for a long time, and in 1890 the first publication described the elevated cancer risk of workers in a chromate dye company. Three mechanisms have been proposed to describe the genotoxicity of chromium(VI). The first mechanism includes highly reactive hydroxyl radicals and other reactive radicals which are by-products of the reduction of chromium(VI) to chromium(III). The second process includes the direct binding of chromium(V), produced by reduction in the cell, and chromium(IV) compounds to the DNA. The last mechanism attributes the genotoxicity to the binding to the DNA of chromium(III), the end product of the chromium(VI) reduction.
Chromium salts (chromates) are also the cause of allergic reactions in some people. Chromates are often used to manufacture, amongst other things, leather products, paints, cement, mortar and anti-corrosives. Contact with products containing chromates can lead to allergic contact dermatitis and irritant dermatitis, resulting in ulceration of the skin, sometimes referred to as "chrome ulcers". This condition is often found in workers who have been exposed to strong chromate solutions in the electroplating, tanning and chrome-producing industries.
Because chromium compounds were used in dyes, paints, and leather tanning compounds, these compounds are often found in soil and groundwater at active and abandoned industrial sites, needing environmental cleanup and remediation. Primer paint containing hexavalent chromium is still widely used for aerospace and automobile refinishing applications.
In 2010, the Environmental Working Group studied the drinking water in 35 American cities in the first nationwide study. The study found measurable hexavalent chromium in the tap water of 31 of the cities sampled, with Norman, Oklahoma, at the top of list; 25 cities had levels that exceeded California's proposed limit.
The more toxic hexavalent chromium form can be reduced to the less soluble trivalent oxidation state in soils by organic matter, ferrous iron, sulfides, and other reducing agents, with the rates of such reduction being faster under more acidic conditions than under more alkaline ones. In contrast, trivalent chromium can be oxidized to hexavalent chromium in soils by manganese oxides, such as Mn(III) and Mn(IV) compounds. Since the solubility and toxicity of chromium(VI) are greater than those of chromium(III), the oxidation-reduction conversions between the two oxidation states have implications for movement and bioavailability of chromium in soils, groundwater, and plants. | [
{
"paragraph_id": 0,
"text": "Chromium is a chemical element; it has symbol Cr and atomic number 24. It is the first element in group 6. It is a steely-grey, lustrous, hard, and brittle transition metal.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Chromium metal is valued for its high corrosion resistance and hardness. A major development in steel production was the discovery that steel could be made highly resistant to corrosion and discoloration by adding metallic chromium to form stainless steel. Stainless steel and chrome plating (electroplating with chromium) together comprise 85% of the commercial use. Chromium is also greatly valued as a metal that is able to be highly polished while resisting tarnishing. Polished chromium reflects almost 70% of the visible spectrum, and almost 90% of infrared light. The name of the element is derived from the Greek word χρῶμα, chrōma, meaning color, because many chromium compounds are intensely colored.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Industrial production of chromium proceeds from chromite ore (mostly FeCr2O4) to produce ferrochromium, an iron-chromium alloy, by means of aluminothermic or silicothermic reactions. Ferrochromium is then used to produce alloys such as stainless steel. Pure chromium metal is produced by a different process: roasting and leaching of chromite to separate it from iron, followed by reduction with carbon and then aluminium.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In the United States, trivalent chromium (Cr(III)) ion is considered an essential nutrient in humans for insulin, sugar, and lipid metabolism. However, in 2014, the European Food Safety Authority, acting for the European Union, concluded that there was insufficient evidence for chromium to be recognized as essential.",
"title": ""
},
{
"paragraph_id": 4,
"text": "While chromium metal and Cr(III) ions are considered non-toxic, hexavalent chromium, Cr(VI), is toxic and carcinogenic. According to the European Chemicals Agency (ECHA), chromium trioxide that is used in industrial electroplating processes is a \"substance of very high concern\" (SVHC).",
"title": ""
},
{
"paragraph_id": 5,
"text": "Abandoned chromium production sites often require environmental cleanup.",
"title": ""
},
{
"paragraph_id": 6,
"text": "Chromium is the fourth transition metal found on the periodic table, and has an configuration of [Ar] 3d 4s. It is also the first element in the periodic table whose ground-state electron configuration violates the Aufbau principle. This occurs again later in the periodic table with other elements and their electron configurations, such as copper, niobium, and molybdenum. This occurs because electrons in the same orbital repel each other due to their like charges. In the previous elements, the energetic cost of promoting an electron to the next higher energy level is too great to compensate for that released by lessening inter-electronic repulsion. However, in the 3d transition metals, the energy gap between the 3d and the next-higher 4s subshell is very small, and because the 3d subshell is more compact than the 4s subshell, inter-electron repulsion is smaller between 4s electrons than between 3d electrons. This lowers the energetic cost of promotion and increases the energy released by it, so that the promotion becomes energetically feasible and one or even two electrons are always promoted to the 4s subshell. (Similar promotions happen for every transition metal atom but one, palladium.)",
"title": "Physical properties"
},
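For concreteness, the violation can be written out explicitly; this is the standard textbook comparison rather than additional data from the source:

\[ \text{Aufbau prediction: } [\mathrm{Ar}]\,3d^4\,4s^2 \qquad \text{observed ground state: } [\mathrm{Ar}]\,3d^5\,4s^1 \]

The half-filled 3d⁵ arrangement with a single 4s electron is the configuration that minimizes inter-electronic repulsion, as the paragraph above explains.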
{
"paragraph_id": 7,
"text": "Chromium is the first element in the 3d series where the 3d electrons start to sink into the core; they thus contribute less to metallic bonding, and hence the melting and boiling points and the enthalpy of atomisation of chromium are lower than those of the preceding element vanadium. Chromium(VI) is a strong oxidising agent in contrast to the molybdenum(VI) and tungsten(VI) oxides.",
"title": "Physical properties"
},
{
"paragraph_id": 8,
"text": "Chromium is extremely hard, and is the third hardest element behind carbon (diamond) and boron. Its Mohs hardness is 8.5, which means that it can scratch samples of quartz and topaz, but can be scratched by corundum. Chromium is highly resistant to tarnishing, which makes it useful as a metal that preserves its outermost layer from corroding, unlike other metals such as copper, magnesium, and aluminium.",
"title": "Physical properties"
},
{
"paragraph_id": 9,
"text": "Chromium has a melting point of 1907 °C (3465 °F), which is relatively low compared to the majority of transition metals. However, it still has the second highest melting point out of all the Period 4 elements, being topped by vanadium by 3 °C (5 °F) at 1910 °C (3470 °F). The boiling point of 2671 °C (4840 °F), however, is comparatively lower, having the fourth lowest boiling point out of the Period 4 transition metals alone behind copper, manganese and zinc. The electrical resistivity of chromium at 20 °C is 125 nanoohm-meters.",
"title": "Physical properties"
},
{
"paragraph_id": 10,
"text": "Chromium has a high specular reflection in comparison to other transition metals. In infrared, at 425 μm, chromium has a maximum reflectance of about 72%, reducing to a minimum of 62% at 750 μm before rising again to 90% at 4000 μm. When chromium is used in stainless steel alloys and polished, the specular reflection decreases with the inclusion of additional metals, yet is still high in comparison with other alloys. Between 40% and 60% of the visible spectrum is reflected from polished stainless steel. The explanation on why chromium displays such a high turnout of reflected photon waves in general, especially the 90% in infrared, can be attributed to chromium's magnetic properties. Chromium has unique magnetic properties - chromium is the only elemental solid that shows antiferromagnetic ordering at room temperature and below. Above 38 °C, its magnetic ordering becomes paramagnetic. The antiferromagnetic properties, which cause the chromium atoms to temporarily ionize and bond with themselves, are present because the body-centric cubic's magnetic properties are disproportionate to the lattice periodicity. This is due to the magnetic moments at the cube's corners and the unequal, but antiparallel, cube centers. From here, the frequency-dependent relative permittivity of chromium, deriving from Maxwell's equations and chromium's antiferromagnetism, leaves chromium with a high infrared and visible light reflectance.",
"title": "Physical properties"
},
{
"paragraph_id": 11,
"text": "Chromium metal left standing in air is passivated - it forms a thin, protective, surface layer of oxide. This layer has a spinel structure a few atomic layers thick; it is very dense and inhibits the diffusion of oxygen into the underlying metal. In contrast, iron forms a more porous oxide through which oxygen can migrate, causing continued rusting. Passivation can be enhanced by short contact with oxidizing acids like nitric acid. Passivated chromium is stable against acids. Passivation can be removed with a strong reducing agent that destroys the protective oxide layer on the metal. Chromium metal treated in this way readily dissolves in weak acids.",
"title": "Physical properties"
},
{
"paragraph_id": 12,
"text": "Chromium, unlike iron and nickel, does not suffer from hydrogen embrittlement. However, it does suffer from nitrogen embrittlement, reacting with nitrogen from air and forming brittle nitrides at the high temperatures necessary to work the metal parts.",
"title": "Physical properties"
},
{
"paragraph_id": 13,
"text": "Naturally occurring chromium is composed of four stable isotopes; Cr, Cr, Cr and Cr, with Cr being the most abundant (83.789% natural abundance). Cr is observationally stable, as it is theoretically capable of decaying to Ti via double electron capture with a half-life of no less than 1.3×10 years. Twenty-five radioisotopes have been characterized, ranging from Cr to Cr; the most stable radioisotope is Cr with a half-life of 27.7 days. All of the remaining radioactive isotopes have half-lives that are less than 24 hours and the majority less than 1 minute. Chromium also has two metastable nuclear isomers. The primary decay mode before the most abundant stable isotope, Cr, is electron capture and the primary mode after is beta decay.",
"title": "Physical properties"
},
{
"paragraph_id": 14,
"text": "Cr is the radiogenic decay product of Mn (half-life 3.74 million years). Chromium isotopes are typically collocated (and compounded) with manganese isotopes. This circumstance is useful in isotope geology. Manganese-chromium isotope ratios reinforce the evidence from Al and Pd concerning the early history of the Solar System. Variations in Cr/Cr and Mn/Cr ratios from several meteorites indicate an initial Mn/Mn ratio that suggests Mn-Cr isotopic composition must result from in-situ decay of Mn in differentiated planetary bodies. Hence Cr provides additional evidence for nucleosynthetic processes immediately before coalescence of the Solar System. Cr has been posited as a proxy for atmospheric oxygen concentration.",
"title": "Physical properties"
},
{
"paragraph_id": 15,
"text": "Chromium is a member of group 6, of the transition metals. The +3 and +6 states occur most commonly within chromium compounds, followed by +2; charges of +1, +4 and +5 for chromium are rare, but do nevertheless occasionally exist.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 16,
"text": "Many Cr(0) complexes are known. Bis(benzene)chromium and chromium hexacarbonyl are highlights in organochromium chemistry.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 17,
"text": "Chromium(II) compounds are uncommon, in part because they readily oxidize to chromium(III) derivatives in air. Water-stable chromium(II) chloride CrCl2 that can be made by reducing chromium(III) chloride with zinc. The resulting bright blue solution created from dissolving chromium(II) chloride is stable at neutral pH. Some other notable chromium(II) compounds include chromium(II) oxide CrO, and chromium(II) sulfate CrSO4. Many chromium(II) carboxylates are known. The red chromium(II) acetate (Cr2(O2CCH3)4) is somewhat famous. It features a Cr-Cr quadruple bond.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 18,
"text": "A large number of chromium(III) compounds are known, such as chromium(III) nitrate, chromium(III) acetate, and chromium(III) oxide. Chromium(III) can be obtained by dissolving elemental chromium in acids like hydrochloric acid or sulfuric acid, but it can also be formed through the reduction of chromium(VI) by cytochrome c7. The Cr ion has a similar radius (63 pm) to Al (radius 50 pm), and they can replace each other in some compounds, such as in chrome alum and alum.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 19,
"text": "Chromium(III) tends to form octahedral complexes. Commercially available chromium(III) chloride hydrate is the dark green complex [CrCl2(H2O)4]Cl. Closely related compounds are the pale green [CrCl(H2O)5]Cl2 and violet [Cr(H2O)6]Cl3. If anhydrous violet chromium(III) chloride is dissolved in water, the violet solution turns green after some time as the chloride in the inner coordination sphere is replaced by water. This kind of reaction is also observed with solutions of chrome alum and other water-soluble chromium(III) salts. A tetrahedral coordination of chromium(III) has been reported for the Cr-centered Keggin anion [α-CrW12O40].",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 20,
"text": "Chromium(III) hydroxide (Cr(OH)3) is amphoteric, dissolving in acidic solutions to form [Cr(H2O)6], and in basic solutions to form [Cr(OH)6]. It is dehydrated by heating to form the green chromium(III) oxide (Cr2O3), a stable oxide with a crystal structure identical to that of corundum.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 21,
"text": "Chromium(VI) compounds are oxidants at low or neutral pH. Chromate anions (CrO4) and dichromate (Cr2O7) anions are the principal ions at this oxidation state. They exist at an equilibrium, determined by pH:",
"title": "Chemistry and compounds"
},
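The pH-dependent equilibrium that the paragraph above introduces appears to have been dropped during extraction; it is the standard chromate-dichromate interconversion:

\[ 2\,\mathrm{CrO_4^{2-}} + 2\,\mathrm{H^+} \;\rightleftharpoons\; \mathrm{Cr_2O_7^{2-}} + \mathrm{H_2O} \]

Adding acid drives the equilibrium to the right (orange dichromate); adding base drives it back to the left (yellow chromate), consistent with the color change described two paragraphs below.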
{
"paragraph_id": 22,
"text": "Chromium(VI) oxyhalides are known also and include chromyl fluoride (CrO2F2) and chromyl chloride (CrO2Cl2). However, despite several erroneous claims, chromium hexafluoride (as well as all higher hexahalides) remains unknown, as of 2020.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 23,
"text": "Sodium chromate is produced industrially by the oxidative roasting of chromite ore with sodium carbonate. The change in equilibrium is visible by a change from yellow (chromate) to orange (dichromate), such as when an acid is added to a neutral solution of potassium chromate. At yet lower pH values, further condensation to more complex oxyanions of chromium is possible.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 24,
"text": "Both the chromate and dichromate anions are strong oxidizing reagents at low pH:",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 25,
"text": "They are, however, only moderately oxidizing at high pH:",
"title": "Chemistry and compounds"
},
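The half-reactions referred to in the two paragraphs above were likewise lost in extraction; the ones below are the standard textbook reactions, with the usual standard potentials:

At low pH:
\[ \mathrm{Cr_2O_7^{2-}} + 14\,\mathrm{H^+} + 6\,e^- \;\rightarrow\; 2\,\mathrm{Cr^{3+}} + 7\,\mathrm{H_2O} \qquad E^\circ = +1.33\ \mathrm{V} \]

At high pH:
\[ \mathrm{CrO_4^{2-}} + 4\,\mathrm{H_2O} + 3\,e^- \;\rightarrow\; \mathrm{Cr(OH)_3} + 5\,\mathrm{OH^-} \qquad E^\circ = -0.13\ \mathrm{V} \]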
{
"paragraph_id": 26,
"text": "Chromium(VI) compounds in solution can be detected by adding an acidic hydrogen peroxide solution. The unstable dark blue chromium(VI) peroxide (CrO5) is formed, which can be stabilized as an ether adduct CrO5·OR2.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 27,
"text": "Chromic acid has the hypothetical formula H2CrO4. It is a vaguely described chemical, despite many well-defined chromates and dichromates being known. The dark red chromium(VI) oxide CrO3, the acid anhydride of chromic acid, is sold industrially as \"chromic acid\". It can be produced by mixing sulfuric acid with dichromate and is a strong oxidizing agent.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 28,
"text": "Compounds of chromium(V) are rather rare; the oxidation state +5 is only realized in few compounds but are intermediates in many reactions involving oxidations by chromate. The only binary compound is the volatile chromium(V) fluoride (CrF5). This red solid has a melting point of 30 °C and a boiling point of 117 °C. It can be prepared by treating chromium metal with fluorine at 400 °C and 200 bar pressure. The peroxochromate(V) is another example of the +5 oxidation state. Potassium peroxochromate (K3[Cr(O2)4]) is made by reacting potassium chromate with hydrogen peroxide at low temperatures. This red brown compound is stable at room temperature but decomposes spontaneously at 150–170 °C.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 29,
"text": "Compounds of chromium(IV) are slightly more common than those of chromium(V). The tetrahalides, CrF4, CrCl4, and CrBr4, can be produced by treating the trihalides (CrX3) with the corresponding halogen at elevated temperatures. Such compounds are susceptible to disproportionation reactions and are not stable in water. Organic compounds containing Cr(IV) state such as chromium tetra t-butoxide are also known.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 30,
"text": "Most chromium(I) compounds are obtained solely by oxidation of electron-rich, octahedral chromium(0) complexes. Other chromium(I) complexes contain cyclopentadienyl ligands. As verified by X-ray diffraction, a Cr-Cr quintuple bond (length 183.51(4) pm) has also been described. Extremely bulky monodentate ligands stabilize this compound by shielding the quintuple bond from further reactions.",
"title": "Chemistry and compounds"
},
{
"paragraph_id": 31,
"text": "Chromium is the 21st most abundant element in Earth's crust with an average concentration of 100 ppm. Chromium compounds are found in the environment from the erosion of chromium-containing rocks, and can be redistributed by volcanic eruptions. Typical background concentrations of chromium in environmental media are: atmosphere <10 ng/m; soil <500 mg/kg; vegetation <0.5 mg/kg; freshwater <10 μg/L; seawater <1 μg/L; sediment <80 mg/kg. Chromium is mined as chromite (FeCr2O4) ore.",
"title": "Occurrence"
},
{
"paragraph_id": 32,
"text": "About two-fifths of the chromite ores and concentrates in the world are produced in South Africa, about a third in Kazakhstan, while India, Russia, and Turkey are also substantial producers. Untapped chromite deposits are plentiful, but geographically concentrated in Kazakhstan and southern Africa. Although rare, deposits of native chromium exist. The Udachnaya Pipe in Russia produces samples of the native metal. This mine is a kimberlite pipe, rich in diamonds, and the reducing environment helped produce both elemental chromium and diamonds.",
"title": "Occurrence"
},
{
"paragraph_id": 33,
"text": "The relation between Cr(III) and Cr(VI) strongly depends on pH and oxidative properties of the location. In most cases, Cr(III) is the dominating species, but in some areas, the ground water can contain up to 39 µg/L of total chromium, of which 30 µg/L is Cr(VI).",
"title": "Occurrence"
},
{
"paragraph_id": 34,
"text": "Chromium minerals as pigments came to the attention of the west in the eighteenth century. On 26 July 1761, Johann Gottlob Lehmann found an orange-red mineral in the Beryozovskoye mines in the Ural Mountains which he named Siberian red lead. Though misidentified as a lead compound with selenium and iron components, the mineral was in fact crocoite with a formula of PbCrO4. In 1770, Peter Simon Pallas visited the same site as Lehmann and found a red lead mineral that was discovered to possess useful properties as a pigment in paints. After Pallas, the use of Siberian red lead as a paint pigment began to develop rapidly throughout the region. Crocoite would be the principal source of chromium in pigments until the discovery of chromite many years later.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "In 1794, Louis Nicolas Vauquelin received samples of crocoite ore. He produced chromium trioxide (CrO3) by mixing crocoite with hydrochloric acid. In 1797, Vauquelin discovered that he could isolate metallic chromium by heating the oxide in a charcoal oven, for which he is credited as the one who truly discovered the element. Vauquelin was also able to detect traces of chromium in precious gemstones, such as ruby and emerald.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "During the nineteenth century, chromium was primarily used not only as a component of paints, but in tanning salts as well. For quite some time, the crocoite found in Russia was the main source for such tanning materials. In 1827, a larger chromite deposit was discovered near Baltimore, United States, which quickly met the demand for tanning salts much more adequately than the crocoite that had been used previously. This made the United States the largest producer of chromium products until the year 1848, when larger deposits of chromite were uncovered near the city of Bursa, Turkey. With the development of metallurgy and chemical industries in the Western world, the need for chromium increased.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "Chromium is also famous for its reflective, metallic luster when polished. It is used as a protective and decorative coating on car parts, plumbing fixtures, furniture parts and many other items, usually applied by electroplating. Chromium was used for electroplating as early as 1848, but this use only became widespread with the development of an improved process in 1924.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "Approximately 28.8 million metric tons (Mt) of marketable chromite ore was produced in 2013, and converted into 7.5 Mt of ferrochromium. According to John F. Papp, writing for the USGS, \"Ferrochromium is the leading end use of chromite ore, [and] stainless steel is the leading end use of ferrochromium.\"",
"title": "Production"
},
{
"paragraph_id": 39,
"text": "The largest producers of chromium ore in 2013 have been South Africa (48%), Kazakhstan (13%), Turkey (11%), and India (10%), with several other countries producing the rest of about 18% of the world production.",
"title": "Production"
},
{
"paragraph_id": 40,
"text": "The two main products of chromium ore refining are ferrochromium and metallic chromium. For those products the ore smelter process differs considerably. For the production of ferrochromium, the chromite ore (FeCr2O4) is reduced in large scale in electric arc furnace or in smaller smelters with either aluminium or silicon in an aluminothermic reaction.",
"title": "Production"
},
{
"paragraph_id": 41,
"text": "For the production of pure chromium, the iron must be separated from the chromium in a two step roasting and leaching process. The chromite ore is heated with a mixture of calcium carbonate and sodium carbonate in the presence of air. The chromium is oxidized to the hexavalent form, while the iron forms the stable Fe2O3. The subsequent leaching at higher elevated temperatures dissolves the chromates and leaves the insoluble iron oxide. The chromate is converted by sulfuric acid into the dichromate.",
"title": "Production"
},
{
"paragraph_id": 42,
"text": "The dichromate is converted to the chromium(III) oxide by reduction with carbon and then reduced in an aluminothermic reaction to chromium.",
"title": "Production"
},
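A sketch of the overall reaction sequence for pure chromium described in the two paragraphs above; the source gives only the qualitative steps, so the balanced equations below are the standard textbook stoichiometries rather than figures taken from the source:

\[ 4\,\mathrm{FeCr_2O_4} + 8\,\mathrm{Na_2CO_3} + 7\,\mathrm{O_2} \rightarrow 8\,\mathrm{Na_2CrO_4} + 2\,\mathrm{Fe_2O_3} + 8\,\mathrm{CO_2} \quad \text{(oxidative roasting)} \]
\[ 2\,\mathrm{Na_2CrO_4} + \mathrm{H_2SO_4} \rightarrow \mathrm{Na_2Cr_2O_7} + \mathrm{Na_2SO_4} + \mathrm{H_2O} \quad \text{(conversion to dichromate)} \]
\[ \mathrm{Na_2Cr_2O_7} + 2\,\mathrm{C} \rightarrow \mathrm{Cr_2O_3} + \mathrm{Na_2CO_3} + \mathrm{CO} \quad \text{(reduction with carbon)} \]
\[ \mathrm{Cr_2O_3} + 2\,\mathrm{Al} \rightarrow \mathrm{Al_2O_3} + 2\,\mathrm{Cr} \quad \text{(aluminothermic reduction)} \]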
{
"paragraph_id": 43,
"text": "The creation of metal alloys account for 85% of the available chromium's usage. The remainder of chromium is used in the chemical, refractory, and foundry industries.",
"title": "Applications"
},
{
"paragraph_id": 44,
"text": "The strengthening effect of forming stable metal carbides at grain boundaries, and the strong increase in corrosion resistance made chromium an important alloying material for steel. High-speed tool steels contain between 3 and 5% chromium. Stainless steel, the primary corrosion-resistant metal alloy, is formed when chromium is introduced to iron in concentrations above 11%. For stainless steel's formation, ferrochromium is added to the molten iron. Also, nickel-based alloys have increased strength due to the formation of discrete, stable, metal, carbide particles at the grain boundaries. For example, Inconel 718 contains 18.6% chromium. Because of the excellent high-temperature properties of these nickel superalloys, they are used in jet engines and gas turbines in lieu of common structural materials. ASTM B163 relies on Chromium for condenser and heat-exchanger tubes, while castings with high strength at elevated temperatures that contain Chromium are standardised with ASTM A567. AISI type 332 is used where high temperature would normally cause carburization, oxidation or corrosion. Incoloy 800 \"is capable of remaining stable and maintaining its austenitic structure even after long time exposures to high temperatures\". Nichrome is used as resistance wire for heating elements in things like toasters and space heaters. These uses make chromium a strategic material. Consequently, during World War II, U.S. road engineers were instructed to avoid chromium in yellow road paint, as it \"may become a critical material during the emergency.\" The United States likewise considered chromium \"essential for the German war industry\" and made intense diplomatic efforts to keep it out of the hands of Nazi Germany.",
"title": "Applications"
},
{
"paragraph_id": 45,
"text": "The high hardness and corrosion resistance of unalloyed chromium makes it a reliable metal for surface coating; it is still the most popular metal for sheet coating, with its above-average durability, compared to other coating metals. A layer of chromium is deposited on pretreated metallic surfaces by electroplating techniques. There are two deposition methods: thin, and thick. Thin deposition involves a layer of chromium below 1 µm thickness deposited by chrome plating, and is used for decorative surfaces. Thicker chromium layers are deposited if wear-resistant surfaces are needed. Both methods use acidic chromate or dichromate solutions. To prevent the energy-consuming change in oxidation state, the use of chromium(III) sulfate is under development; for most applications of chromium, the previously established process is used.",
"title": "Applications"
},
{
"paragraph_id": 46,
"text": "In the chromate conversion coating process, the strong oxidative properties of chromates are used to deposit a protective oxide layer on metals like aluminium, zinc, and cadmium. This passivation and the self-healing properties of the chromate stored in the chromate conversion coating, which is able to migrate to local defects, are the benefits of this coating method. Because of environmental and health regulations on chromates, alternative coating methods are under development.",
"title": "Applications"
},
{
"paragraph_id": 47,
"text": "Chromic acid anodizing (or Type I anodizing) of aluminium is another electrochemical process that does not lead to the deposition of chromium, but uses chromic acid as an electrolyte in the solution. During anodization, an oxide layer is formed on the aluminium. The use of chromic acid, instead of the normally used sulfuric acid, leads to a slight difference of these oxide layers. The high toxicity of Cr(VI) compounds, used in the established chromium electroplating process, and the strengthening of safety and environmental regulations demand a search for substitutes for chromium, or at least a change to less toxic chromium(III) compounds.",
"title": "Applications"
},
{
"paragraph_id": 48,
"text": "The mineral crocoite (which is also lead chromate PbCrO4) was used as a yellow pigment shortly after its discovery. After a synthesis method became available starting from the more abundant chromite, chrome yellow was, together with cadmium yellow, one of the most used yellow pigments. The pigment does not photodegrade, but it tends to darken due to the formation of chromium(III) oxide. It has a strong color, and was used for school buses in the United States and for the postal services (for example, the Deutsche Post) in Europe. The use of chrome yellow has since declined due to environmental and safety concerns and was replaced by organic pigments or other alternatives that are free from lead and chromium. Other pigments that are based around chromium are, for example, the deep shade of red pigment chrome red, which is simply lead chromate with lead(II) hydroxide (PbCrO4·Pb(OH)2). A very important chromate pigment, which was used widely in metal primer formulations, was zinc chromate, now replaced by zinc phosphate. A wash primer was formulated to replace the dangerous practice of pre-treating aluminium aircraft bodies with a phosphoric acid solution. This used zinc tetroxychromate dispersed in a solution of polyvinyl butyral. An 8% solution of phosphoric acid in solvent was added just before application. It was found that an easily oxidized alcohol was an essential ingredient. A thin layer of about 10–15 µm was applied, which turned from yellow to dark green when it was cured. There is still a question as to the correct mechanism. Chrome green is a mixture of Prussian blue and chrome yellow, while the chrome oxide green is chromium(III) oxide.",
"title": "Applications"
},
{
"paragraph_id": 49,
"text": "Chromium oxides are also used as a green pigment in the field of glassmaking and also as a glaze for ceramics. Green chromium oxide is extremely lightfast and as such is used in cladding coatings. It is also the main ingredient in infrared reflecting paints, used by the armed forces to paint vehicles and to give them the same infrared reflectance as green leaves.",
"title": "Applications"
},
{
"paragraph_id": 50,
"text": "Chromium(III) ions present in corundum crystals (aluminium oxide) cause them to be colored red; when corundum appears as such, it is known as a ruby. If the corundum is lacking in chromium(III) ions, it is known as a sapphire. A red-colored artificial ruby may also be achieved by doping chromium(III) into artificial corundum crystals, thus making chromium a requirement for making synthetic rubies. Such a synthetic ruby crystal was the basis for the first laser, produced in 1960, which relied on stimulated emission of light from the chromium atoms in such a crystal. Ruby has a laser transition at 694.3 nanometers, in a deep red color.",
"title": "Applications"
},
{
"paragraph_id": 51,
"text": "Because of their toxicity, chromium(VI) salts are used for the preservation of wood. For example, chromated copper arsenate (CCA) is used in timber treatment to protect wood from decay fungi, wood-attacking insects, including termites, and marine borers. The formulations contain chromium based on the oxide CrO3 between 35.3% and 65.5%. In the United States, 65,300 metric tons of CCA solution were used in 1996.",
"title": "Applications"
},
{
"paragraph_id": 52,
"text": "Chromium(III) salts, especially chrome alum and chromium(III) sulfate, are used in the tanning of leather. The chromium(III) stabilizes the leather by cross linking the collagen fibers. Chromium tanned leather can contain between 4 and 5% of chromium, which is tightly bound to the proteins. Although the form of chromium used for tanning is not the toxic hexavalent variety, there remains interest in management of chromium in the tanning industry. Recovery and reuse, direct/indirect recycling, and \"chrome-less\" or \"chrome-free\" tanning are practiced to better manage chromium usage.",
"title": "Applications"
},
{
"paragraph_id": 53,
"text": "The high heat resistivity and high melting point makes chromite and chromium(III) oxide a material for high temperature refractory applications, like blast furnaces, cement kilns, molds for the firing of bricks and as foundry sands for the casting of metals. In these applications, the refractory materials are made from mixtures of chromite and magnesite. The use is declining because of the environmental regulations due to the possibility of the formation of chromium(VI).",
"title": "Applications"
},
{
"paragraph_id": 54,
"text": "Several chromium compounds are used as catalysts for processing hydrocarbons. For example, the Phillips catalyst, prepared from chromium oxides, is used for the production of about half the world's polyethylene. Fe-Cr mixed oxides are employed as high-temperature catalysts for the water gas shift reaction. Copper chromite is a useful hydrogenation catalyst.",
"title": "Applications"
},
{
"paragraph_id": 55,
"text": "Chromates of metals are used in humistor.",
"title": "Applications"
},
{
"paragraph_id": 56,
"text": "The biologically beneficial effects of chromium(III) are debated. Chromium is accepted by the U.S. National Institutes of Health as a trace element for its roles in the action of insulin, a hormone that mediates the metabolism and storage of carbohydrate, fat, and protein. The mechanism of its actions in the body, however, have not been defined, leaving in question the essentiality of chromium.",
"title": "Biological role"
},
{
"paragraph_id": 57,
"text": "In contrast, hexavalent chromium (Cr(VI) or Cr) is highly toxic and mutagenic. Ingestion of chromium(VI) in water has been linked to stomach tumors, and it may also cause allergic contact dermatitis (ACD).",
"title": "Biological role"
},
{
"paragraph_id": 58,
"text": "\"Chromium deficiency\", involving a lack of Cr(III) in the body, or perhaps some complex of it, such as glucose tolerance factor, is controversial. Some studies suggest that the biologically active form of chromium(III) is transported in the body via an oligopeptide called low-molecular-weight chromium-binding substance (LMWCr), which might play a role in the insulin signaling pathway.",
"title": "Biological role"
},
{
"paragraph_id": 59,
"text": "The chromium content of common foods is generally low (1–13 micrograms per serving). The chromium content of food varies widely, due to differences in soil mineral content, growing season, plant cultivar, and contamination during processing. Chromium (and nickel) leach into food cooked in stainless steel, with the effect being largest when the cookware is new. Acidic foods that are cooked for many hours also exacerbate this effect.",
"title": "Biological role"
},
{
"paragraph_id": 60,
"text": "",
"title": "Biological role"
},
{
"paragraph_id": 61,
"text": "There is disagreement on chromium's status as an essential nutrient. Governmental departments from Australia, New Zealand, India, Japan, and the United States consider chromium essential while the European Food Safety Authority (EFSA) of the European Union does not.",
"title": "Biological role"
},
{
"paragraph_id": 62,
"text": "The U.S. National Academy of Medicine (NAM) updated the Estimated Average Requirements (EARs) and the Recommended Dietary Allowances (RDAs) for chromium in 2001. For chromium, there was insufficient information to set EARs and RDAs, so its needs are described as estimates for Adequate Intakes (AIs). The current AIs of chromium for women ages 14 through 50 is 25 μg/day, and the AIs for women ages 50 and above is 20 μg/day. The AIs for women who are pregnant are 30 μg/day, and for women who are lactating, the set AIs are 45 μg/day. The AIs for men ages 14 through 50 are 35 μg/day, and the AIs for men ages 50 and above are 30 μg/day. For children ages 1 through 13, the AIs increase with age from 0.2 μg/day up to 25 μg/day. As for safety, the NAM sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when the evidence is sufficient. In the case of chromium, there is not yet enough information, hence no UL has been established. Collectively, the EARs, RDAs, AIs, and ULs are the parameters for the nutrition recommendation system known as Dietary Reference Intake (DRI). Australia and New Zealand consider chromium to be an essential nutrient, with an AI of 35 μg/day for men, 25 μg/day for women, 30 μg/day for women who are pregnant, and 45 μg/day for women who are lactating. A UL has not been set due to the lack of sufficient data. India considers chromium to be an essential nutrient, with an adult recommended intake of 33 μg/day. Japan also considers chromium to be an essential nutrient, with an AI of 10 μg/day for adults, including women who are pregnant or lactating. A UL has not been set. The EFSA of the European Union however, does not consider chromium to be an essential nutrient; chromium is the only mineral for which the United States and the European Union disagree.",
"title": "Biological role"
},
{
"paragraph_id": 63,
"text": "For U.S. food and dietary supplement labeling purposes, the amount of the substance in a serving is expressed as a percent of the Daily Value (%DV). For chromium labeling purposes, 100% of the Daily Value was 120 μg. As of May 27, 2016, the percentage of daily value was revised to 35 μg to bring the chromium intake into a consensus with the official Recommended Dietary Allowance. A table of the old and new adult daily values is provided at Reference Daily Intake.",
"title": "Biological role"
},
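A worked example of the labeling arithmetic under the revised 35 μg Daily Value; the 25 μg serving size is hypothetical, chosen only for illustration:

\[ \%\mathrm{DV} = \frac{25\ \mu\mathrm{g}}{35\ \mu\mathrm{g}} \times 100 \approx 71\% \]

Under the old 120 μg value the same serving would have been labeled as about 21% DV, which is why the 2016 revision roughly tripled the stated percentages.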
{
"paragraph_id": 64,
"text": "Food composition databases such as those maintained by the U.S. Department of Agriculture do not contain information on the chromium content of foods. A wide variety of animal and vegetable foods contain chromium. Content per serving is influenced by the chromium content of the soil in which the plants are grown, by foodstuffs fed to animals, and by processing methods, as chromium is leached into foods if processed or cooked in stainless steel equipment. One diet analysis study conducted in Mexico reported an average daily chromium intake of 30 micrograms. An estimated 31% of adults in the United States consume multi-vitamin/mineral dietary supplements, which often contain 25 to 60 micrograms of chromium.",
"title": "Biological role"
},
{
"paragraph_id": 65,
"text": "Chromium is an ingredient in total parenteral nutrition (TPN), because deficiency can occur after months of intravenous feeding with chromium-free TPN. It is also added to nutritional products for preterm infants. Although the mechanism of action in biological roles for chromium is unclear, in the United States chromium-containing products are sold as non-prescription dietary supplements in amounts ranging from 50 to 1,000 μg. Lower amounts of chromium are also often incorporated into multi-vitamin/mineral supplements consumed by an estimated 31% of adults in the United States. Chemical compounds used in dietary supplements include chromium chloride, chromium citrate, chromium(III) picolinate, chromium(III) polynicotinate, and other chemical compositions. The benefit of supplements has not been proven.",
"title": "Biological role"
},
{
"paragraph_id": 66,
"text": "In 2005, the U.S. Food and Drug Administration had approved a qualified health claim for chromium picolinate with a requirement for very specific label wording: \"One small study suggests that chromium picolinate may reduce the risk of insulin resistance, and therefore possibly may reduce the risk of type 2 diabetes. FDA concludes, however, that the existence of such a relationship between chromium picolinate and either insulin resistance or type 2 diabetes is highly uncertain.\" At the same time, in answer to other parts of the petition, the FDA rejected claims for chromium picolinate and cardiovascular disease, retinopathy or kidney disease caused by abnormally high blood sugar levels. In 2010, chromium(III) picolinate was approved by Health Canada to be used in dietary supplements. Approved labeling statements include: a factor in the maintenance of good health, provides support for healthy glucose metabolism, helps the body to metabolize carbohydrates and helps the body to metabolize fats. The European Food Safety Authority (EFSA) approved claims in 2010 that chromium contributed to normal macronutrient metabolism and maintenance of normal blood glucose concentration, but rejected claims for maintenance or achievement of a normal body weight, or reduction of tiredness or fatigue.",
"title": "Biological role"
},
{
"paragraph_id": 67,
"text": "Given the evidence for chromium deficiency causing problems with glucose management in the context of intravenous nutrition products formulated without chromium, research interest turned to whether chromium supplementation would benefit people who have type 2 diabetes but are not chromium deficient. Looking at the results from four meta-analyses, one reported a statistically significant decrease in fasting plasma glucose levels (FPG) and a non-significant trend in lower hemoglobin A1C. A second reported the same, a third reported significant decreases for both measures, while a fourth reported no benefit for either. A review published in 2016 listed 53 randomized clinical trials that were included in one or more of six meta-analyses. It concluded that whereas there may be modest decreases in FPG and/or HbA1C that achieve statistical significance in some of these meta-analyses, few of the trials achieved decreases large enough to be expected to be relevant to clinical outcome.",
"title": "Biological role"
},
{
"paragraph_id": 68,
"text": "Two systematic reviews looked at chromium supplements as a mean of managing body weight in overweight and obese people. One, limited to chromium picolinate, a popular supplement ingredient, reported a statistically significant −1.1 kg (2.4 lb) weight loss in trials longer than 12 weeks. The other included all chromium compounds and reported a statistically significant −0.50 kg (1.1 lb) weight change. Change in percent body fat did not reach statistical significance. Authors of both reviews considered the clinical relevance of this modest weight loss as uncertain/unreliable. The European Food Safety Authority reviewed the literature and concluded that there was insufficient evidence to support a claim.",
"title": "Biological role"
},
{
"paragraph_id": 69,
"text": "Chromium is promoted as a sports performance dietary supplement, based on the theory that it potentiates insulin activity, with anticipated results of increased muscle mass, and faster recovery of glycogen storage during post-exercise recovery. A review of clinical trials reported that chromium supplementation did not improve exercise performance or increase muscle strength. The International Olympic Committee reviewed dietary supplements for high-performance athletes in 2018 and concluded there was no need to increase chromium intake for athletes, nor support for claims of losing body fat.",
"title": "Biological role"
},
{
"paragraph_id": 70,
"text": "Chromium is naturally present in the environment in trace amounts, but industrial use in rubber and stainless steel manufacturing, chrome plating, dyes for textiles, tanneries and other uses contaminates aquatic systems. In Bangladesh, rivers in or downstream from industrialized areas exhibit heavy metal contamination. Irrigation water standards for chromium are 0.1 mg/L, but some rivers are more than five times that amount. The standard for fish for human consumption is less than 1 mg/kg, but many tested samples were more than five times that amount. Chromium, especially hexavalent chromium, is highly toxic to fish because it is easily absorbed across the gills, readily enters blood circulation, crosses cell membranes and bioconcentrates up the food chain. In contrast, the toxicity of trivalent chromium is very low, attributed to poor membrane permeability and little biomagnification.",
"title": "Biological role"
},
{
"paragraph_id": 71,
"text": "Acute and chronic exposure to chromium(VI) affects fish behavior, physiology, reproduction and survival. Hyperactivity and erratic swimming have been reported in contaminated environments. Egg hatching and fingerling survival are affected. In adult fish there are reports of histopathological damage to liver, kidney, muscle, intestines, and gills. Mechanisms include mutagenic gene damage and disruptions of enzyme functions.",
"title": "Biological role"
},
{
"paragraph_id": 72,
"text": "There is evidence that fish may not require chromium, but benefit from a measured amount in diet. In one study, juvenile fish gained weight on a zero chromium diet, but the addition of 500 μg of chromium in the form of chromium chloride or other supplement types, per kilogram of food (dry weight), increased weight gain. At 2,000 μg/kg the weight gain was no better than with the zero chromium diet, and there were increased DNA strand breaks.",
"title": "Biological role"
},
{
"paragraph_id": 73,
"text": "Water-insoluble chromium(III) compounds and chromium metal are not considered a health hazard, while the toxicity and carcinogenic properties of chromium(VI) have been known for a long time. Because of the specific transport mechanisms, only limited amounts of chromium(III) enter the cells. Acute oral toxicity ranges between 50 and 150 mg/kg. A 2008 review suggested that moderate uptake of chromium(III) through dietary supplements poses no genetic-toxic risk. In the US, the Occupational Safety and Health Administration (OSHA) has designated an air permissible exposure limit (PEL) in the workplace as a time-weighted average (TWA) of 1 mg/m. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.5 mg/m, time-weighted average. The IDLH (immediately dangerous to life and health) value is 250 mg/m.",
"title": "Precautions"
},
{
"paragraph_id": 74,
"text": "The acute oral toxicity for chromium(VI) ranges between 1.5 and 3.3 mg/kg. In the body, chromium(VI) is reduced by several mechanisms to chromium(III) already in the blood before it enters the cells. The chromium(III) is excreted from the body, whereas the chromate ion is transferred into the cell by a transport mechanism, by which also sulfate and phosphate ions enter the cell. The acute toxicity of chromium(VI) is due to its strong oxidant properties. After it reaches the blood stream, it damages the kidneys, the liver and blood cells through oxidation reactions. Hemolysis, renal, and liver failure result. Aggressive dialysis can be therapeutic.",
"title": "Precautions"
},
{
"paragraph_id": 75,
"text": "The carcinogenity of chromate dust has been known for a long time, and in 1890 the first publication described the elevated cancer risk of workers in a chromate dye company. Three mechanisms have been proposed to describe the genotoxicity of chromium(VI). The first mechanism includes highly reactive hydroxyl radicals and other reactive radicals which are by products of the reduction of chromium(VI) to chromium(III). The second process includes the direct binding of chromium(V), produced by reduction in the cell, and chromium(IV) compounds to the DNA. The last mechanism attributed the genotoxicity to the binding to the DNA of the end product of the chromium(III) reduction.",
"title": "Precautions"
},
{
"paragraph_id": 76,
"text": "Chromium salts (chromates) are also the cause of allergic reactions in some people. Chromates are often used to manufacture, amongst other things, leather products, paints, cement, mortar and anti-corrosives. Contact with products containing chromates can lead to allergic contact dermatitis and irritant dermatitis, resulting in ulceration of the skin, sometimes referred to as \"chrome ulcers\". This condition is often found in workers that have been exposed to strong chromate solutions in electroplating, tanning and chrome-producing manufacturers.",
"title": "Precautions"
},
{
"paragraph_id": 77,
"text": "Because chromium compounds were used in dyes, paints, and leather tanning compounds, these compounds are often found in soil and groundwater at active and abandoned industrial sites, needing environmental cleanup and remediation. Primer paint containing hexavalent chromium is still widely used for aerospace and automobile refinishing applications.",
"title": "Precautions"
},
{
"paragraph_id": 78,
"text": "In 2010, the Environmental Working Group studied the drinking water in 35 American cities in the first nationwide study. The study found measurable hexavalent chromium in the tap water of 31 of the cities sampled, with Norman, Oklahoma, at the top of list; 25 cities had levels that exceeded California's proposed limit.",
"title": "Precautions"
},
{
"paragraph_id": 79,
"text": "The more toxic hexavalent chromium form can be reduced to the less soluble trivalent oxidation state in soils by organic matter, ferrous iron, sulfides, and other reducing agents, with the rates of such reduction being faster under more acidic conditions than under more alkaline ones. In contrast, trivalent chromium can be oxidized to hexavalent chromium in soils by manganese oxides, such as Mn(III) and Mn(IV) compounds. Since the solubility and toxicity of chromium (VI) are greater that those of chromium (III), the oxidation-reduction conversions between the two oxidation states have implications for movement and bioavailability of chromium in soils, groundwater, and plants.",
"title": "Precautions"
},
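Representative balanced reactions for the soil interconversions described above; the source gives only the qualitative picture, so these standard redox equations are supplied for illustration. Reduction of Cr(VI) by ferrous iron under acidic conditions:

\[ \mathrm{Cr_2O_7^{2-}} + 6\,\mathrm{Fe^{2+}} + 14\,\mathrm{H^+} \rightarrow 2\,\mathrm{Cr^{3+}} + 6\,\mathrm{Fe^{3+}} + 7\,\mathrm{H_2O} \]

Oxidation of Cr(III) by manganese(IV) oxide:

\[ 2\,\mathrm{Cr^{3+}} + 3\,\mathrm{MnO_2} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{CrO_4^{2-}} + 3\,\mathrm{Mn^{2+}} + 4\,\mathrm{H^+} \]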
{
"paragraph_id": 80,
"text": "",
"title": "External links"
}
] | Chromium is a chemical element; it has symbol Cr and atomic number 24. It is the first element in group 6. It is a steely-grey, lustrous, hard, and brittle transition metal. Chromium metal is valued for its high corrosion resistance and hardness. A major development in steel production was the discovery that steel could be made highly resistant to corrosion and discoloration by adding metallic chromium to form stainless steel. Stainless steel and chrome plating (electroplating with chromium) together comprise 85% of the commercial use. Chromium is also greatly valued as a metal that is able to be highly polished while resisting tarnishing. Polished chromium reflects almost 70% of the visible spectrum, and almost 90% of infrared light. The name of the element is derived from the Greek word χρῶμα, chrōma, meaning color, because many chromium compounds are intensely colored. Industrial production of chromium proceeds from chromite ore (mostly FeCr2O4) to produce ferrochromium, an iron-chromium alloy, by means of aluminothermic or silicothermic reactions. Ferrochromium is then used to produce alloys such as stainless steel. Pure chromium metal is produced by a different process: roasting and leaching of chromite to separate it from iron, followed by reduction with carbon and then aluminium. In the United States, trivalent chromium (Cr(III)) ion is considered an essential nutrient in humans for insulin, sugar, and lipid metabolism. However, in 2014, the European Food Safety Authority, acting for the European Union, concluded that there was insufficient evidence for chromium to be recognized as essential. While chromium metal and Cr(III) ions are considered non-toxic, hexavalent chromium, Cr(VI), is toxic and carcinogenic. According to the European Chemicals Agency (ECHA), chromium trioxide that is used in industrial electroplating processes is a "substance of very high concern" (SVHC). Abandoned chromium production sites often require environmental cleanup. | 2001-05-17T13:05:14Z | 2023-12-31T08:18:36Z | [
"Template:About",
"Template:Eqm",
"Template:Clear left",
"Template:Anchor",
"Template:Reflist",
"Template:Cite web",
"Template:Wiktionary",
"Template:Use dmy dates",
"Template:E",
"Template:Category see also",
"Template:Commons",
"Template:Doi",
"Template:Good article",
"Template:Periodic table (navbox)",
"Template:Infobox chromium",
"Template:Cite thesis",
"Template:Cite news",
"Template:Main",
"Template:Chem",
"Template:Cite journal",
"Template:Citation",
"Template:Cite EB1911",
"Template:Authority control",
"Template:NUBASE2020",
"Template:Cite book",
"Template:Webarchive",
"Template:OrgSynth",
"Template:See also",
"Template:Greenwood&Earnshaw2nd",
"Template:PGCH",
"Template:Chromium compounds"
] | https://en.wikipedia.org/wiki/Chromium |
5,671 | Cymbal | A cymbal is a common percussion instrument. Often used in pairs, cymbals consist of thin, normally round plates of various alloys. The majority of cymbals are of indefinite pitch, although small disc-shaped cymbals based on ancient designs sound a definite note (such as crotales). Cymbals are used in many ensembles ranging from the orchestra, percussion ensembles, jazz bands, heavy metal bands, and marching groups. Drum kits usually incorporate at least a crash, ride, or crash/ride, and a pair of hi-hat cymbals. A player of cymbals is known as a cymbalist.
The word cymbal is derived from the Latin cymbalum, which is the latinisation of the Greek word κύμβαλον kymbalon, "cymbal", which in turn derives from κύμβη kymbē, "cup, bowl".
In orchestral scores, cymbals may be indicated by the French cymbales; German Becken, Schellbecken, Teller, or Tschinellen; Italian piatti or cinelli; and Spanish platillos. Many of these derive from the word for plates.
Cymbals have existed since ancient times. Representations of cymbals may be found in reliefs and paintings from the Armenian Highlands (7th century BC), Larsa, Babylon, Assyria, ancient Egypt, ancient Greece, and ancient Rome. References to cymbals also appear throughout the Bible, through many Psalms and songs of praise to God. Cymbals may have been introduced to China from Central Asia in the 3rd or 4th century AD.
In India, cymbals have been in use since ancient times and are still used across almost all major temples and Buddhist sites. Gigantic aartis along the Ganges, which are revered by Hindus all over the world, are incomplete without large cymbals.
The Shahnameh (circa 977 to 1010 CE) mentions the use of cymbals at least 14 times in its text, mostly in the context of creating a loud din in war, to frighten the enemy or to celebrate. The Persian word is sanj or senj (Persian سنج), but the Shahnameh does not claim these to be Persian in origin. Several times it calls them "Indian cymbals." Other adjectives to describe them include "golden" and "brass," and to play them is to "clash" them.
A different form is called sanj angshati (سنج انگشتی) or finger cymbals. These are zill.
Besides the original use in war, another use in Persian culture was the Ashura ceremony. Originally in the ceremony, two pieces of stone were beaten on the sides of the mourner with special movements accompanied by a lamentation song. This has been replaced by beating Karbzani or Karebzani and playing sanj and ratchets. Cities where this has been performed include Lahijan and Aran of Kashan, as well as Semnan and Sabzevar.
All theories about the etymology of the word sanj identify it as a Pahlavi word. By some accounts it means "weight", and it is possible that the original term was sanjkūb, meaning "striking weights" [against each other]. By other accounts the word is a reformed version of "zang" (bell), referring to the instrument's bell-shaped plate.
Cymbals were employed by Turkish janissaries in the 14th century or earlier. By the 17th century, such cymbals were used in European music, and more commonly played in military bands and orchestras by the mid 18th century. Since the 19th century, some composers have called for larger roles for cymbals in musical works, and a variety of cymbal shapes, techniques, and hardware have been developed in response.
The anatomy of the cymbal plays a large part in the sound it creates. A hole is drilled in the center of the cymbal, which is used to either mount the cymbal on a stand or for tying straps through (for hand playing). The bell, dome, or cup is the raised section immediately surrounding the hole. The bell produces a higher "pinging" pitch than the rest of the cymbal. The bow is the rest of the surface surrounding the bell. The bow is sometimes described in two areas: the ride and crash area. The ride area is the thicker section closer to the bell while the crash area is the thinner tapering section near the edge. The edge or rim is the immediate circumference of the cymbal.
Cymbals are measured by their diameter either in inches or centimeters. The size of the cymbal affects its sound, larger cymbals usually being louder and having longer sustain. The weight describes how thick the cymbal is. Cymbal weights are important to the sound they produce and how they play. Heavier cymbals have a louder volume, more cut, and better stick articulation (when using drum sticks). Thin cymbals have a fuller sound, lower pitch, and faster response.
The profile of the cymbal is the vertical distance of the bow from the bottom of the bell to the cymbal edge (higher profile cymbals are more bowl-shaped). The profile affects the pitch of the cymbal: higher profile cymbals have higher pitch.
Cymbals offer a composer nearly endless amounts of color and effect. Their unique timbre allows them to project even against a full orchestra and through the heaviest of orchestrations, enhancing articulation at nearly any dynamic. Cymbals have been utilized historically to suggest frenzy, fury or bacchanalian revels, as seen in the Venus music in Wagner's Tannhäuser, Grieg's Peer Gynt suite, and Osmin's aria "O wie will ich triumphieren" from Mozart's Die Entführung aus dem Serail.
Orchestral clash cymbals are traditionally used in pairs, each one having a strap set in the bell of the cymbal by which they are held. Such a pair is known as clash cymbals, crash cymbals, hand cymbals, or plates. Certain sounds can be obtained by rubbing their edges together in a sliding movement for a "sizzle", striking them against each other in what is called a "crash", tapping the edge of one against the body of the other in what is called a "tap-crash", scraping the edge of one from the inside of the bell to the edge for a "scrape" or "zischen", or shutting the cymbals together and choking the sound in what is called a "hi-hat" or "crush". A skilled percussionist can obtain an enormous dynamic range from such cymbals. For example, in Beethoven's Symphony No. 9, the percussionist is employed to first play cymbals pianissimo, adding a touch of colour rather than a loud crash.
Crash cymbals are usually damped by pressing them against the percussionist's body. A composer may write laissez vibrer, or, "let vibrate" (usually abbreviated l.v.), secco (dry), or equivalent indications on the score; more usually, the percussionist must judge when to damp based on the written duration of a crash and the context in which it occurs. Crash cymbals have traditionally been accompanied by the bass drum playing an identical part. This combination, played loudly, is an effective way to accentuate a note since it contributes to both very low and very high-frequency ranges and provides a satisfying "crash-bang-wallop". In older music the composer sometimes provided one part for this pair of instruments, writing senza piatti or piatti soli (Italian: "without cymbals" or "cymbals only") if only one is needed. This came from the common practice of having one percussionist play using one cymbal mounted to the shell of the bass drum. The percussionist would crash the cymbals with the left hand and use a mallet to strike the bass drum with the right. This method is nowadays often employed in pit orchestras and called for specifically by composers who desire a certain effect. Stravinsky calls for this in his ballet Petrushka, and Mahler calls for this in his Titan Symphony. The modern convention is for the instruments to have independent parts. However, in kit drumming, a cymbal crash is still most often accompanied by a simultaneous kick to the bass drum, which provides a musical effect and support to the crash.
Crash cymbals evolved into the low-sock and from this to the modern hi-hat. Even in a modern drum kit, they remain paired with the bass drum as the two instruments which are played with the player's feet. However, hi-hat cymbals tend to be heavy with little taper, more similar to a ride cymbal than to a clash cymbal as found in a drum kit, and perform a ride rather than a crash function.
Another use of cymbals is the suspended cymbal. This instrument takes its name from the traditional method of suspending the cymbal by means of a leather strap or rope, thus allowing the cymbal to vibrate as freely as possible for maximum musical effect. Early jazz drumming pioneers borrowed this style of cymbal mounting during the early 1900s and later drummers further developed this instrument into the horizontally or nearly horizontally mounted "crash" cymbals of a modern drum kit instead of a leather strap suspension system. Many modern drum kits use a mount with felt or otherwise dampening fabric to act as a barrier to hold the cymbals between metal clamps: thus forming the modern-day ride cymbal. Suspended cymbals can be played with yarn-, sponge-, or cord-wrapped mallets. The first known instance of using a sponge-headed mallet on a cymbal is the final chord of Hector Berlioz' Symphonie Fantastique. Composers sometimes specifically request other types of mallets like felt mallets or timpani mallets for different attack and sustain qualities. Suspended cymbals can produce bright and slicing tones when forcefully struck, and give an eerie transparent "windy" sound when played quietly. A tremolo, or roll (played with two mallets alternately striking on opposing sides of the cymbal) can build in volume from almost inaudible to an overwhelming climax in a satisfyingly smooth manner (as in Ravel's Mother Goose Suite). The edge of a suspended cymbal may be hit with the shoulder of a drum stick to obtain a sound somewhat akin to that of clash cymbals. Other methods of playing include scraping a coin or triangle beater rapidly across the ridges on the top of the cymbal, giving a "zing" sound (as some percussionists do in the fourth movement of Dvořák's Symphony No. 9). Other effects that can be used include drawing a bass bow across the edge of the cymbal for a sound like squealing car brakes.
Ancient, antique, or tuned cymbals are much more rarely called for. Their timbre is entirely different, more like that of small hand-bells or of the notes of the keyed harmonica. They are not struck full against each other, but by one of their edges, and the note given out by them is higher in proportion as they are thicker and smaller. Berlioz's Romeo and Juliet calls for two pairs of cymbals, modeled on some old Pompeian instruments no larger than the hand (some are no larger than a large coin), and tuned to F and B flat. The modern instruments descended from this line are the crotales.
Cymbal types include clash, crash, ride, hi-hat, suspended, and ancient (tuned) cymbals, among others.
5,672 | Cadmium | Cadmium is a chemical element; it has symbol Cd and atomic number 48. This soft, silvery-white metal is chemically similar to the two other stable metals in group 12, zinc and mercury. Like zinc, it demonstrates oxidation state +2 in most of its compounds, and like mercury, it has a lower melting point than the transition metals in groups 3 through 11. Cadmium and its congeners in group 12 are often not considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. The average concentration of cadmium in Earth's crust is between 0.1 and 0.5 parts per million (ppm). It was discovered in 1817 simultaneously by Stromeyer and Hermann, both in Germany, as an impurity in zinc carbonate.
Cadmium occurs as a minor component in most zinc ores and is a byproduct of zinc production. Cadmium was used for a long time as a corrosion-resistant plating on steel, and cadmium compounds are used as red, orange, and yellow pigments, to color glass, and to stabilize plastic. Cadmium use is generally decreasing because it is toxic (it is specifically listed in the European Restriction of Hazardous Substances Directive) and nickel–cadmium batteries have been replaced with nickel–metal hydride and lithium-ion batteries. One of its few new uses is in cadmium telluride solar panels.
Although cadmium has no known biological function in higher organisms, a cadmium-dependent carbonic anhydrase has been found in marine diatoms.
Cadmium is a soft, malleable, ductile, silvery-white divalent metal. It is similar in many respects to zinc but forms complex compounds. Unlike most other metals, cadmium is resistant to corrosion and is used as a protective plate on other metals. As a bulk metal, cadmium is insoluble in water and is not flammable; however, in its powdered form it may burn and release toxic fumes.
Although cadmium usually has an oxidation state of +2, it also exists in the +1 state. Cadmium and its congeners are not always considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. Cadmium burns in air to form brown amorphous cadmium oxide (CdO); the crystalline form of this compound is dark red and changes color when heated, similar to zinc oxide. Hydrochloric acid, sulfuric acid, and nitric acid dissolve cadmium by forming cadmium chloride (CdCl2), cadmium sulfate (CdSO4), or cadmium nitrate (Cd(NO3)2). The oxidation state +1 can be produced by dissolving cadmium in a mixture of cadmium chloride and aluminium chloride, forming the Cd2^2+ cation, which is similar to the Hg2^2+ cation in mercury(I) chloride.
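The salt formulas above lend themselves to a quick formula-weight check. The following is an illustrative sketch (not from the article), using rounded standard atomic weights as reference values:

```python
# Formula weights of the cadmium salts mentioned above, computed from
# rounded standard atomic weights (reference values, not from this article).

ATOMIC_WEIGHT = {"Cd": 112.41, "O": 16.00, "Cl": 35.45, "S": 32.06, "N": 14.01}

def formula_weight(composition: dict) -> float:
    """Sum of atomic weight times atom count for each element, in g/mol."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())

print(f"CdO      : {formula_weight({'Cd': 1, 'O': 1}):7.2f} g/mol")          # ~128.41
print(f"CdCl2    : {formula_weight({'Cd': 1, 'Cl': 2}):7.2f} g/mol")         # ~183.31
print(f"CdSO4    : {formula_weight({'Cd': 1, 'S': 1, 'O': 4}):7.2f} g/mol")  # ~208.47
print(f"Cd(NO3)2 : {formula_weight({'Cd': 1, 'N': 2, 'O': 6}):7.2f} g/mol")  # ~236.43
```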
The structures of many cadmium complexes with nucleobases, amino acids, and vitamins have been determined.
Naturally occurring cadmium is composed of eight isotopes. Two of them are radioactive, and three are expected to decay but have not measurably done so under laboratory conditions. The two natural radioactive isotopes are 113Cd (beta decay, half-life 7.7×10^15 y) and 116Cd (two-neutrino double beta decay, half-life 2.9×10^19 y). The other three are 106Cd, 108Cd (both double electron capture), and 114Cd (double beta decay); only lower limits on these half-lives have been determined. At least three isotopes – 110Cd, 111Cd, and 112Cd – are stable. Among the isotopes that do not occur naturally, the most long-lived are 109Cd with a half-life of 462.6 days, and 115Cd with a half-life of 53.46 hours. All of the remaining radioactive isotopes have half-lives of less than 2.5 hours, and the majority have half-lives of less than 5 minutes. Cadmium has 8 known meta states, with the most stable being 113mCd (t½ = 14.1 years), 115mCd (t½ = 44.6 days), and 117mCd (t½ = 3.36 hours).
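These half-lives plug directly into the standard exponential decay law N(t) = N0 · 2^(−t/t½). A minimal illustrative sketch (mine, not the article's), using the 462.6-day half-life of 109Cd quoted above:

```python
# Exponential decay: fraction of a sample remaining after time t,
# given its half-life (both in the same time units).

def remaining_fraction(t: float, t_half: float) -> float:
    return 2.0 ** (-t / t_half)

CD109_HALF_LIFE_DAYS = 462.6  # figure from the text above

for days in (100, 462.6, 1000, 2000):
    frac = remaining_fraction(days, CD109_HALF_LIFE_DAYS)
    print(f"after {days:6.1f} days: {frac:.3f} of the 109Cd remains")
```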
The known isotopes of cadmium range in atomic mass from 94.950 u (95Cd) to 131.946 u (132Cd). For isotopes lighter than 112 u, the primary decay mode is electron capture and the dominant decay product is element 47 (silver). Heavier isotopes decay mostly through beta emission producing element 49 (indium).
One isotope of cadmium, 113Cd, absorbs neutrons with high selectivity: With very high probability, neutrons with energy below the cadmium cut-off will be absorbed; those higher than the cut-off will be transmitted. The cadmium cut-off is about 0.5 eV, and neutrons below that level are deemed slow neutrons, distinct from intermediate and fast neutrons.
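A small sketch of this cut-off behaviour, treating it as an ideal step (real absorption cross-sections fall off smoothly) and using the non-relativistic relation E = ½mv² to convert neutron energy to speed. The physical constants are standard reference values, not from this article:

```python
import math

EV_TO_JOULE = 1.602176634e-19        # exact SI conversion
NEUTRON_MASS_KG = 1.67492749804e-27  # CODATA reference value
CADMIUM_CUTOFF_EV = 0.5              # figure from the text above

def neutron_speed_m_per_s(energy_ev: float) -> float:
    """Non-relativistic speed from kinetic energy, E = 1/2 m v^2."""
    return math.sqrt(2.0 * energy_ev * EV_TO_JOULE / NEUTRON_MASS_KG)

def absorbed_by_cd113(energy_ev: float) -> bool:
    """Idealized cut-off: neutrons below ~0.5 eV are absorbed."""
    return energy_ev < CADMIUM_CUTOFF_EV

for e in (0.025, 0.5, 1.0):  # 0.025 eV is a typical thermal neutron
    print(f"{e:5.3f} eV: {neutron_speed_m_per_s(e):7.0f} m/s, "
          f"absorbed: {absorbed_by_cd113(e)}")
```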
Cadmium is created via the s-process in low- to medium-mass stars with masses of 0.6 to 10 solar masses, over thousands of years. In that process, a silver atom captures a neutron and then undergoes beta decay.
Cadmium (Latin cadmia, Greek καδμεία meaning "calamine", a cadmium-bearing mixture of minerals that was named after the Greek mythological character Κάδμος, Cadmus, the founder of Thebes) was discovered in contaminated zinc compounds sold in pharmacies in Germany in 1817 by Friedrich Stromeyer. Karl Samuel Leberecht Hermann simultaneously investigated the discoloration in zinc oxide and found an impurity, first suspected to be arsenic because of the yellow precipitate it formed with hydrogen sulfide. Additionally, Stromeyer discovered that one supplier sold zinc carbonate instead of zinc oxide. Stromeyer found the new element as an impurity in zinc carbonate (calamine), and, for 100 years, Germany remained the only important producer of the metal. The metal was named after the Latin word for calamine, because it was found in this zinc ore. Stromeyer noted that some impure samples of calamine changed color when heated but pure calamine did not. He was persistent in studying these results and eventually isolated cadmium metal by roasting and reducing the sulfide. The potential for cadmium yellow as a pigment was recognized in the 1840s, but the lack of cadmium limited this application.
Even though cadmium and its compounds are toxic in certain forms and concentrations, the British Pharmaceutical Codex from 1907 states that cadmium iodide was used as a medication to treat "enlarged joints, scrofulous glands, and chilblains".
In 1907, the International Astronomical Union defined the international ångström in terms of a red cadmium spectral line (1 wavelength = 6438.46963 Å). This was adopted by the 7th General Conference on Weights and Measures in 1927. In 1960, the definitions of both the metre and ångström were changed to use krypton.
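As a unit-bookkeeping sketch (mine, not the article's), the defining wavelength converts directly into modern units; the speed of light used below is the exact SI value:

```python
ANGSTROM_IN_METRES = 1e-10
SPEED_OF_LIGHT = 299_792_458         # m/s, exact SI value

red_cd_line_angstrom = 6438.46963    # figure from the text above
wavelength_m = red_cd_line_angstrom * ANGSTROM_IN_METRES

print(f"wavelength: {wavelength_m * 1e9:.6f} nm")             # 643.846963 nm
print(f"frequency:  {SPEED_OF_LIGHT / wavelength_m:.4e} Hz")  # ~4.66e14 Hz
```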
After the industrial scale production of cadmium started in the 1930s and 1940s, the major application of cadmium was the coating of iron and steel to prevent corrosion; in 1944, 62% and in 1956, 59% of the cadmium in the United States was used for plating. In 1956, 24% of the cadmium in the United States was used for a second application in red, orange and yellow pigments from sulfides and selenides of cadmium.
The stabilizing effect of cadmium chemicals like the carboxylates cadmium laurate and cadmium stearate on PVC led to an increased use of those compounds in the 1970s and 1980s. The demand for cadmium in pigments, coatings, stabilizers, and alloys declined as a result of environmental and health regulations in the 1980s and 1990s; in 2006, only 7% of total cadmium consumption was used for plating, and only 10% was used for pigments. At the same time, these decreases in consumption were compensated by a growing demand for cadmium for nickel–cadmium batteries, which accounted for 81% of the cadmium consumption in the United States in 2006.
Cadmium makes up about 0.1 ppm of Earth's crust. It is much rarer than zinc, which makes up about 65 ppm. No significant deposits of cadmium-containing ores are known. The only cadmium mineral of importance, greenockite (CdS), is nearly always associated with sphalerite (ZnS). This association is caused by geochemical similarity between zinc and cadmium, with no geological process likely to separate them. Thus, cadmium is produced mainly as a byproduct of mining, smelting, and refining sulfidic ores of zinc, and, to a lesser degree, lead and copper. Small amounts of cadmium, about 10% of consumption, are produced from secondary sources, mainly from dust generated by recycling iron and steel scrap. Production in the United States began in 1907, but wide use began after World War I.
Metallic cadmium can be found in the Vilyuy River basin in Siberia.
Rocks mined for phosphate fertilizers contain varying amounts of cadmium, resulting in a cadmium concentration of as much as 300 mg/kg in the fertilizers and a high cadmium content in agricultural soils. Coal can contain significant amounts of cadmium, which ends up mostly in coal fly ash.
Cadmium in soil can be absorbed by crops such as rice and cocoa. In 2002, the Chinese Ministry of Agriculture found that 28% of the rice it sampled had excess lead and 10% had cadmium above legally defined limits. Consumer Reports tested 28 brands of dark chocolate sold in the United States in 2022, and found cadmium in all of them, with 13 exceeding the California Maximum Allowable Dose level.
Some plants such as willow trees and poplars have been found to clean both lead and cadmium from soil.
Typical background concentrations of cadmium do not exceed 5 ng/m^3 in the atmosphere; 2 mg/kg in soil; 1 μg/L in freshwater and 50 ng/L in seawater. Concentrations of cadmium above 10 μg/L may be stable in water having low total solute concentrations and pH, and can be difficult to remove by conventional water treatment processes.
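A purely illustrative lookup (not from the source) encoding these background figures, so a measurement can be checked against the typical level for its medium:

```python
# Typical background levels quoted above, keyed by medium and unit.
BACKGROUND_LEVELS = {
    "air (ng/m^3)": 5.0,
    "soil (mg/kg)": 2.0,
    "freshwater (ug/L)": 1.0,
    "seawater (ng/L)": 50.0,
}

def exceeds_background(medium: str, measured: float) -> bool:
    """True when a measured value is above the typical background level."""
    return measured > BACKGROUND_LEVELS[medium]

# e.g. the >10 ug/L freshwater case discussed above:
print(exceeds_background("freshwater (ug/L)", 10.0))  # True
```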
Cadmium is a common impurity in zinc ores, and it is most often isolated during the production of zinc. Some zinc ore concentrates from sulfidic zinc ores contain up to 1.4% cadmium. In the 1970s, the output of cadmium was 6.5 pounds (2.9 kg) per ton of zinc. Zinc sulfide ores are roasted in the presence of oxygen, converting the zinc sulfide to the oxide. Zinc metal is produced either by smelting the oxide with carbon or by electrolysis in sulfuric acid. Cadmium is isolated from the zinc metal by vacuum distillation if the zinc is smelted, or cadmium sulfate is precipitated out of the electrolysis solution.
The British Geological Survey reports that in 2001, China was the top producer of cadmium with almost one-sixth of the world's production, closely followed by South Korea and Japan.
Cadmium is a common component of electric batteries, pigments, coatings, and electroplating.
In 2009, 86% of cadmium was used in batteries, predominantly in rechargeable nickel–cadmium batteries. Nickel–cadmium cells have a nominal cell potential of 1.2 V. The cell consists of a positive nickel hydroxide electrode and a negative cadmium electrode plate separated by an alkaline electrolyte (potassium hydroxide). The European Union put a limit on cadmium in electronics in 2004 of 0.01%, with some exceptions, and in 2006 reduced the limit on cadmium content to 0.002%. Another type of battery based on cadmium is the silver–cadmium battery.
Cadmium electroplating, consuming 6% of the global production, is used in the aircraft industry to reduce corrosion of steel components. This coating is passivated by chromate salts. A limitation of cadmium plating is hydrogen embrittlement of high-strength steels from the electroplating process. Therefore, steel parts heat-treated to tensile strength above 1300 MPa (200 ksi) should be coated by an alternative method (such as special low-embrittlement cadmium electroplating processes or physical vapor deposition).
Titanium embrittlement from cadmium-plated tool residues resulted in banishment of those tools (and the implementation of routine tool testing to detect cadmium contamination) in the A-12/SR-71, U-2, and subsequent aircraft programs that use titanium.
Cadmium is used in the control rods of nuclear reactors, acting as a very effective neutron poison to control neutron flux in nuclear fission. When cadmium rods are inserted in the core of a nuclear reactor, cadmium absorbs neutrons, preventing them from creating additional fission events, thus controlling the amount of reactivity. The pressurized water reactor designed by Westinghouse Electric Company uses an alloy consisting of 80% silver, 15% indium, and 5% cadmium.
QLED TVs have started to include cadmium in their construction. Some companies are looking to reduce human exposure to the material and the pollution it causes during television production.
Complexes based on heavy metals have great potential for the treatment of a wide variety of cancers, but their use is often limited by toxic side effects. However, research is advancing, and promising new cadmium complexes with reduced toxicity have been discovered.
Cadmium oxide was used in black and white television phosphors and in the blue and green phosphors of color television cathode ray tubes. Cadmium sulfide (CdS) is used as a photoconductive surface coating for photocopier drums.
Various cadmium salts are used in paint pigments, with CdS as a yellow pigment being the most common. Cadmium selenide is a red pigment, commonly called cadmium red. To painters who work with the pigment, cadmium provides the most brilliant and durable yellows, oranges, and reds – so much so that during production, these colors are significantly toned down before they are ground with oils and binders or blended into watercolors, gouaches, acrylics, and other paint and pigment formulations. Because these pigments are potentially toxic, users should use a barrier cream on the hands to prevent absorption through the skin even though the amount of cadmium absorbed into the body through the skin is reported to be less than 1%.
In PVC, cadmium was used as a heat, light, and weathering stabilizer. Cadmium stabilizers have now been completely replaced by barium-zinc, calcium-zinc, and organotin stabilizers. Cadmium is used in many kinds of solder and bearing alloys because it has a low coefficient of friction and good fatigue resistance. It is also found in some of the lowest-melting alloys, such as Wood's metal.
Cadmium is an element in some semiconductor materials. Cadmium sulfide, cadmium selenide, and cadmium telluride are used in some photodetectors and solar cells. HgCdTe detectors are sensitive to mid-infrared light and used in some motion detectors.
Helium–cadmium lasers are a common source of blue or ultraviolet laser light. Lasers at wavelengths of 325, 354 and 442 nm are made using this gain medium; some models can switch between these wavelengths. They are notably used in fluorescence microscopy as well as various laboratory uses requiring laser light at these wavelengths.
Cadmium selenide quantum dots emit bright luminescence under UV excitation (He–Cd laser, for example). The color of this luminescence can be green, yellow or red depending on the particle size. Colloidal solutions of those particles are used for imaging of biological tissues and solutions with a fluorescence microscope.
In molecular biology, cadmium is used to block voltage-dependent calcium channels from fluxing calcium ions, as well as in hypoxia research to stimulate proteasome-dependent degradation of Hif-1α.
Cadmium-selective sensors based on the fluorophore BODIPY have been developed for imaging and sensing of cadmium in cells. One powerful method for monitoring cadmium in aqueous environments involves electrochemistry: by employing a self-assembled monolayer, one can obtain a cadmium-selective electrode with ppt-level sensitivity.
Cadmium has no known function in higher organisms and is considered toxic. It is regarded as an environmental pollutant that poses a health hazard to living organisms. Administration of cadmium to cells causes oxidative stress and increases the levels of antioxidants produced by cells to protect against macromolecular damage.
However, a cadmium-dependent carbonic anhydrase has been found in some marine diatoms. The diatoms live in environments with very low zinc concentrations, and cadmium performs the function normally carried out by zinc in other anhydrases. This was discovered with X-ray absorption near edge structure (XANES) spectroscopy.
Cadmium is preferentially absorbed in the kidneys of humans. Up to about 30 mg of cadmium is commonly inhaled throughout human childhood and adolescence. Cadmium is under research regarding its toxicity in humans, potentially elevating risks of cancer, cardiovascular disease, and osteoporosis.
The biogeochemistry of cadmium and its release to the environment has been the subject of review, as has the speciation of cadmium in the environment.
Reviews have also examined the bioinorganic aspects of cadmium's toxicity. The most dangerous form of occupational exposure to cadmium is inhalation of fine dust and fumes, or ingestion of highly soluble cadmium compounds. Inhalation of cadmium fumes can result initially in metal fume fever, but may progress to chemical pneumonitis, pulmonary edema, and death.
Cadmium is also an environmental hazard. Human exposure is primarily from fossil fuel combustion, phosphate fertilizers, natural sources, iron and steel production, cement production and related activities, nonferrous metals production, and municipal solid waste incineration. Other sources of cadmium include bread, root crops, and vegetables.
There have been a few instances of general population poisoning as the result of long-term exposure to cadmium in contaminated food and water. As of 2012, research is ongoing into whether estrogen mimicry by cadmium may induce breast cancer. In the decades leading up to World War II, mining operations contaminated the Jinzū River in Japan with cadmium and traces of other toxic metals. As a consequence, cadmium accumulated in the rice crops along the riverbanks downstream of the mines. Some members of the local agricultural communities consumed the contaminated rice and developed itai-itai disease and renal abnormalities, including proteinuria and glucosuria. The victims of this poisoning were almost exclusively post-menopausal women with low iron and low body stores of other minerals. Similar general population cadmium exposures in other parts of the world have not resulted in the same health problems because those populations maintained sufficient iron and other mineral levels. Thus, although cadmium is a major factor in the itai-itai disease in Japan, most researchers have concluded that it was one of several factors.
Cadmium is one of six substances banned by the European Union's Restriction of Hazardous Substances (RoHS) directive, which regulates hazardous substances in electrical and electronic equipment, but allows for certain exemptions and exclusions from the scope of the law.
The International Agency for Research on Cancer has classified cadmium and cadmium compounds as carcinogenic to humans. Although occupational exposure to cadmium is linked to lung and prostate cancer, there is still uncertainty about the carcinogenicity of cadmium in low environmental exposure. Recent data from epidemiological studies suggest that intake of cadmium through diet is associated with a higher risk of endometrial, breast, and prostate cancer as well as with osteoporosis in humans. A recent study has demonstrated that endometrial tissue is characterized by higher levels of cadmium in current and former smoking females.
Cadmium exposure is associated with a large number of illnesses including kidney disease, early atherosclerosis, hypertension, and cardiovascular diseases. Although studies show a significant correlation between cadmium exposure and occurrence of disease in human populations, a molecular mechanism has not yet been identified. One hypothesis holds that cadmium is an endocrine disruptor and some experimental studies have shown that it can interact with different hormonal signaling pathways. For example, cadmium can bind to the estrogen receptor alpha, and affect signal transduction along the estrogen and MAPK signaling pathways at low doses.
The tobacco plant absorbs and accumulates heavy metals such as cadmium from the surrounding soil into its leaves, and these are readily absorbed into the body of users following tobacco smoke inhalation. Tobacco smoking is the most important single source of cadmium exposure in the general population. An estimated 10% of the cadmium content of a cigarette is inhaled through smoking. Absorption of cadmium through the lungs is more effective than through the gut, and as much as 50% of the cadmium inhaled in cigarette smoke may be absorbed. On average, cadmium concentrations in the blood of smokers are 4 to 5 times greater than in non-smokers, and concentrations in the kidney are 2–3 times greater. Despite the high cadmium content in cigarette smoke, there seems to be little exposure to cadmium from passive smoking.
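A back-of-envelope sketch of these figures. The 10% inhaled fraction and the up-to-50% absorption are from the text; the cadmium content per cigarette is an assumed illustrative value (figures on the order of 1–2 μg are often cited), not from this article:

```python
CD_PER_CIGARETTE_UG = 1.5  # ASSUMED illustrative value, not from the source
INHALED_FRACTION = 0.10    # from the text above
ABSORBED_FRACTION = 0.50   # upper bound from the text above

def absorbed_ug_per_day(cigarettes_per_day: int) -> float:
    """Rough upper-bound estimate of cadmium absorbed per day (ug)."""
    return (cigarettes_per_day * CD_PER_CIGARETTE_UG
            * INHALED_FRACTION * ABSORBED_FRACTION)

print(f"20 cigarettes/day: up to {absorbed_ug_per_day(20):.2f} ug absorbed")
```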
In a non-smoking population, food is the greatest source of exposure. High quantities of cadmium can be found in crustaceans, mollusks, offal, frog legs, cocoa solids, bitter and semi-bitter chocolate, seaweed, fungi, and algae products. However, grains, vegetables, and starchy roots and tubers are consumed in much greater quantity in the U.S., and are the source of the greatest dietary exposure there. Most plants bio-accumulate metal toxins such as cadmium; when composted to form organic fertilizers, they yield a product that can often contain high amounts (e.g., over 0.5 mg) of metal toxins for every kilogram of fertilizer. Fertilizers made from animal dung (e.g., cow dung) or urban waste can contain similar amounts of cadmium. The cadmium added to the soil from fertilizers (rock phosphates or organic fertilizers) becomes bioavailable and toxic only if the soil pH is low (i.e., acidic soils).
Zinc, copper, calcium, and iron ions, and selenium with vitamin C are used to treat cadmium intoxication, though it is not easily reversed.
Because of the adverse effects of cadmium on the environment and human health, the supply and use of cadmium is restricted in Europe under the REACH Regulation.
The EFSA Panel on Contaminants in the Food Chain specifies that 2.5 μg/kg body weight is a tolerable weekly intake for humans. The Joint FAO/WHO Expert Committee on Food Additives has declared 7 μg/kg body weight to be the provisional tolerable weekly intake level. The state of California requires a food label to carry a warning about potential exposure to cadmium on products such as cocoa powder.
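Both limits are expressed per kilogram of body weight per week, so the absolute allowance scales linearly with body weight. A minimal sketch of that arithmetic, using the figures quoted above:

```python
EFSA_TWI_UG_PER_KG_WEEK = 2.5    # tolerable weekly intake, from the text
JECFA_PTWI_UG_PER_KG_WEEK = 7.0  # provisional tolerable weekly intake, from the text

def weekly_allowance_ug(body_weight_kg: float, limit_ug_per_kg: float) -> float:
    """Absolute tolerable weekly cadmium intake for a given body weight."""
    return body_weight_kg * limit_ug_per_kg

for kg in (60, 70, 80):
    efsa = weekly_allowance_ug(kg, EFSA_TWI_UG_PER_KG_WEEK)
    jecfa = weekly_allowance_ug(kg, JECFA_PTWI_UG_PER_KG_WEEK)
    print(f"{kg} kg adult: EFSA {efsa:.0f} ug/week, JECFA {jecfa:.0f} ug/week")
```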
The U.S. Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit (PEL) for cadmium at a time-weighted average (TWA) of 0.005 ppm. The National Institute for Occupational Safety and Health (NIOSH) has not set a recommended exposure limit (REL) and has designated cadmium as a known human carcinogen. The IDLH (immediately dangerous to life and health) level for cadmium is 9 mg/m^3.
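A time-weighted average weights each exposure level by its duration over the 8-hour shift. A minimal sketch of that computation (the shift segments below are invented for illustration, not from the source):

```python
PEL_TWA_PPM = 0.005  # OSHA limit quoted above

def twa_8h(segments: list[tuple[float, float]]) -> float:
    """8-hour TWA from (concentration, hours) segments."""
    return sum(conc * hours for conc, hours in segments) / 8.0

# Hypothetical shift: 4 h at 0.004 ppm, 2 h at 0.008 ppm, 2 h in clean air.
shift = [(0.004, 4.0), (0.008, 2.0), (0.0, 2.0)]
twa = twa_8h(shift)
print(f"TWA = {twa:.4f} ppm, within PEL: {twa <= PEL_TWA_PPM}")  # 0.0040, True
```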
In addition to mercury, the presence of cadmium in some batteries has led to the requirement of proper disposal (or recycling) of batteries.
In May 2006, a sale of the seats from Arsenal F.C.'s old stadium, Highbury in London, England was cancelled when the seats were discovered to contain trace amounts of cadmium. Reports of high levels of cadmium use in children's jewelry in 2010 led to a US Consumer Product Safety Commission investigation. The U.S. CPSC issued specific recall notices for cadmium content in jewelry sold by Claire's and Wal-Mart stores.
In June 2010, McDonald's voluntarily recalled more than 12 million promotional Shrek Forever After 3D Collectible Drinking Glasses because of the cadmium levels in paint pigments on the glassware. The glasses were manufactured by Arc International, of Millville, New Jersey, USA.
"title": "Safety"
},
{
"paragraph_id": 49,
"text": "In a non-smoking population, food is the greatest source of exposure. High quantities of cadmium can be found in crustaceans, mollusks, offal, frog legs, cocoa solids, bitter and semi-bitter chocolate, seaweed, fungi and algae products. However, grains, vegetables, and starchy roots and tubers are consumed in much greater quantity in the U.S., and are the source of the greatest dietary exposure there. Most plants bio-accumulate metal toxins such as cadmium and when composted to form organic fertilizers, yield a product that often can contain high amounts (e.g., over 0.5 mg) of metal toxins for every kilogram of fertilizer. Fertilizers made from animal dung (e.g., cow dung) or urban waste can contain similar amounts of cadmium. The cadmium added to the soil from fertilizers (rock phosphates or organic fertilizers) become bio-available and toxic only if the soil pH is low (i.e., acidic soils).",
"title": "Safety"
},
{
"paragraph_id": 50,
"text": "Zinc, copper, calcium, and iron ions, and selenium with vitamin C are used to treat cadmium intoxication, though it is not easily reversed.",
"title": "Safety"
},
{
"paragraph_id": 51,
"text": "Because of the adverse effects of cadmium on the environment and human health, the supply and use of cadmium is restricted in Europe under the REACH Regulation.",
"title": "Safety"
},
{
"paragraph_id": 52,
"text": "The EFSA Panel on Contaminants in the Food Chain specifies that 2.5 μg/kg body weight is a tolerable weekly intake for humans. The Joint FAO/WHO Expert Committee on Food Additives has declared 7 μg/kg body weight to be the provisional tolerable weekly intake level. The state of California requires a food label to carry a warning about potential exposure to cadmium on products such as cocoa powder.",
"title": "Safety"
},
{
"paragraph_id": 53,
"text": "The U.S. Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit (PEL) for cadmium at a time-weighted average (TWA) of 0.005 ppm. The National Institute for Occupational Safety and Health (NIOSH) has not set a recommended exposure limit (REL) and has designated cadmium as a known human carcinogen. The IDLH (immediately dangerous to life and health) level for cadmium is 9 mg/m.",
"title": "Safety"
},
{
"paragraph_id": 54,
"text": "In addition to mercury, the presence of cadmium in some batteries has led to the requirement of proper disposal (or recycling) of batteries.",
"title": "Safety"
},
{
"paragraph_id": 55,
"text": "In May 2006, a sale of the seats from Arsenal F.C.'s old stadium, Highbury in London, England was cancelled when the seats were discovered to contain trace amounts of cadmium. Reports of high levels of cadmium use in children's jewelry in 2010 led to a US Consumer Product Safety Commission investigation. The U.S. CPSC issued specific recall notices for cadmium content in jewelry sold by Claire's and Wal-Mart stores.",
"title": "Safety"
},
{
"paragraph_id": 56,
"text": "In June 2010, McDonald's voluntarily recalled more than 12 million promotional Shrek Forever After 3D Collectible Drinking Glasses because of the cadmium levels in paint pigments on the glassware. The glasses were manufactured by Arc International, of Millville, New Jersey, USA.",
"title": "Safety"
},
{
"paragraph_id": 57,
"text": "",
"title": "External links"
}
] | Cadmium is a chemical element; it has symbol Cd and atomic number 48. This soft, silvery-white metal is chemically similar to the two other stable metals in group 12, zinc and mercury. Like zinc, it demonstrates oxidation state +2 in most of its compounds, and like mercury, it has a lower melting point than the transition metals in groups 3 through 11. Cadmium and its congeners in group 12 are often not considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. The average concentration of cadmium in Earth's crust is between 0.1 and 0.5 parts per million (ppm). It was discovered in 1817 simultaneously by Stromeyer and Hermann, both in Germany, as an impurity in zinc carbonate. Cadmium occurs as a minor component in most zinc ores and is a byproduct of zinc production. Cadmium was used for a long time as a corrosion-resistant plating on steel, and cadmium compounds are used as red, orange, and yellow pigments, to color glass, and to stabilize plastic. Cadmium use is generally decreasing because it is toxic and nickel–cadmium batteries have been replaced with nickel–metal hydride and lithium-ion batteries. One of its few new uses is in cadmium telluride solar panels. Although cadmium has no known biological function in higher organisms, a cadmium-dependent carbonic anhydrase has been found in marine diatoms. | 2001-09-07T14:50:40Z | 2023-12-09T19:27:38Z | [
"Template:Periodic table (navbox)",
"Template:Commons",
"Template:Clear",
"Template:Dead link",
"Template:Use dmy dates",
"Template:Cite web",
"Template:Convert",
"Template:Authority control",
"Template:IDLH",
"Template:Wiktionary",
"Template:Wikisource1911Enc",
"Template:Cadmium compounds",
"Template:Infobox cadmium",
"Template:Cite book",
"Template:Chembox",
"Template:Good article",
"Template:Other uses",
"Template:Main",
"Template:Cite journal",
"Template:PGCH",
"Template:See also",
"Template:As of",
"Template:Reflist",
"Template:Cite news",
"Template:Category see also",
"Template:Val",
"Template:NUBASE 2003",
"Template:Webarchive"
] | https://en.wikipedia.org/wiki/Cadmium |
5,675 | Curium | Curium is a synthetic chemical element; it has symbol Cm and atomic number 96. This transuranic actinide element was named after eminent scientists Marie and Pierre Curie, both known for their research on radioactivity. Curium was first intentionally made by the team of Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso in 1944, using the cyclotron at Berkeley. They bombarded the newly discovered element plutonium (the isotope ²³⁹Pu) with alpha particles. This was then sent to the Metallurgical Laboratory at University of Chicago, where a tiny sample of curium was eventually separated and identified. The discovery was kept secret until after the end of World War II; the news was released to the public in November 1945. Most curium is produced by bombarding uranium or plutonium with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains ~20 grams of curium.
Curium is a hard, dense, silvery metal with a high melting and boiling point for an actinide. It is paramagnetic at ambient conditions, but becomes antiferromagnetic upon cooling, and other magnetic transitions are also seen in many curium compounds. In compounds, curium usually has valence +3 and sometimes +4; the +3 valence is predominant in solutions. Curium readily oxidizes, and its oxides are a dominant form of this element. It forms strongly fluorescent complexes with various organic compounds, but there is no evidence of its incorporation into bacteria and archaea. If it gets into the human body, curium accumulates in bones, lungs, and liver, where it promotes cancer.
All known isotopes of curium are radioactive and have a small critical mass for a nuclear chain reaction. They mostly emit α-particles; radioisotope thermoelectric generators can use the heat from this process, but this is hindered by the rarity and high cost of curium. Curium is used in making heavier actinides and the ²³⁸Pu radionuclide for power sources in artificial cardiac pacemakers and RTGs for spacecraft. It served as the α-source in the alpha particle X-ray spectrometers of several space probes, including the Sojourner, Spirit, Opportunity, and Curiosity Mars rovers and the Philae lander on comet 67P/Churyumov–Gerasimenko, to analyze the composition and structure of the surface.
Though curium had likely been produced in previous nuclear experiments as well as the natural nuclear fission reactor at Oklo, Gabon, it was first intentionally synthesized, isolated and identified in 1944, at University of California, Berkeley, by Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso. In their experiments, they used a 60-inch (150 cm) cyclotron.
Curium was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory), University of Chicago. It was the third transuranium element to be discovered even though it is the fourth in the series – the lighter element americium was still unknown.
The sample was prepared as follows: first plutonium nitrate solution was coated on a platinum foil of ~0.5 cm² area, the solution was evaporated and the residue was converted into plutonium(IV) oxide (PuO2) by annealing. Following cyclotron irradiation of the oxide, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid, and further separation was done by ion exchange to yield a certain isotope of curium. The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness).
Curium-242 was made in July–August 1944 by bombarding ²³⁹Pu with α-particles to produce ²⁴²Cm with the release of a neutron:
Curium-242 was unambiguously identified by the characteristic energy of the α-particles emitted during the decay:
The half-life of this alpha decay was first measured as 150 days and then corrected to 162.8 days.
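The relationship between a half-life and the activity of a sample is a standard conversion, sketched below in Python for illustration (the function names are ours, and treating the sample as pure curium-242 with mass number 242 is an assumption):

import math

AVOGADRO = 6.02214076e23  # atoms per mole

def decay_constant(half_life_s):
    # lambda = ln(2) / T_half, in 1/s
    return math.log(2) / half_life_s

def specific_activity_bq_per_g(half_life_s, mass_number):
    # Activity per gram of a pure isotope: lambda * (atoms per gram)
    return decay_constant(half_life_s) * AVOGADRO / mass_number

# Curium-242, using the corrected 162.8-day half-life from the text:
print(specific_activity_bq_per_g(162.8 * 86400, 242))  # ~1.2e14 Bq/g

Since activity scales inversely with the half-life, the correction from 150 to 162.8 days lowered the inferred activity of a given sample by about 8%.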
Another isotope, ²⁴⁰Cm, was produced in a similar reaction in March 1945:
The α-decay half-life of ²⁴⁰Cm was correctly determined as 26.7 days.
The discovery of curium and americium in 1944 was closely related to the Manhattan Project, so the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children, the Quiz Kids, five days before the official presentation at an American Chemical Society meeting on November 11, 1945, when one listener asked if any new transuranic element beside plutonium and neptunium had been discovered during the war. The discovery of curium (²⁴²Cm and ²⁴⁰Cm), its production, and its compounds was later patented listing only Seaborg as the inventor.
The element was named after Marie Curie and her husband Pierre Curie, who are known for discovering radium and for their work in radioactivity. It followed the example of gadolinium, a lanthanide element above curium in the periodic table, which was named after the explorer of rare-earth elements Johan Gadolin:
The first curium samples were barely visible, and were identified by their radioactivity. Louis Werner and Isadore Perlman made the first substantial sample of 30 µg curium-242 hydroxide at University of California, Berkeley in 1947 by bombarding americium-241 with neutrons. Macroscopic amounts of curium(III) fluoride were obtained in 1950 by W. W. T. Crane, J. C. Wallmann and B. B. Cunningham. Its magnetic susceptibility was very close to that of GdF3 providing the first experimental evidence for the +3 valence of curium in its compounds. Curium metal was produced only in 1951 by reduction of CmF3 with barium.
A synthetic, radioactive element, curium is a hard, dense metal with a silvery-white appearance and physical and chemical properties resembling gadolinium. Its melting point of 1344 °C is significantly higher than that of the previous elements neptunium (637 °C), plutonium (639 °C) and americium (1176 °C). In comparison, gadolinium melts at 1312 °C. Curium boils at 3556 °C. With a density of 13.52 g/cm³, curium is lighter than neptunium (20.45 g/cm³) and plutonium (19.8 g/cm³), but heavier than most other metals. Of two crystalline forms of curium, α-Cm is more stable at ambient conditions. It has a hexagonal symmetry, space group P63/mmc, lattice parameters a = 365 pm and c = 1182 pm, and four formula units per unit cell. The crystal consists of double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum. At pressure >23 GPa, at room temperature, α-Cm becomes β-Cm, which has face-centered cubic symmetry, space group Fm3m and lattice constant a = 493 pm. On further compression to 43 GPa, curium becomes an orthorhombic γ-Cm structure similar to α-uranium, with no further transitions observed up to 52 GPa. These three curium phases are also called Cm I, II and III.
Curium has peculiar magnetic properties. Its neighbor element americium shows no deviation from Curie-Weiss paramagnetism in the entire temperature range, but α-Cm transforms to an antiferromagnetic state upon cooling to 65–52 K, and β-Cm exhibits a ferrimagnetic transition at ~205 K. Curium pnictides show ferromagnetic transitions upon cooling: CmN and CmAs at 109 K, CmP at 73 K and CmSb at 162 K. The lanthanide analog of curium, gadolinium, and its pnictides, also show magnetic transitions upon cooling, but the transition character is somewhat different: Gd and GdN become ferromagnetic, and GdP, GdAs and GdSb show antiferromagnetic ordering.
In accordance with magnetic data, the electrical resistivity of curium increases with temperature – roughly doubling between 4 and 60 K – and then stays nearly constant up to room temperature. There is a significant increase in resistivity over time (~10 µΩ·cm/h) due to self-damage of the crystal lattice by alpha decay, which makes the true resistivity of curium uncertain (~125 µΩ·cm). Curium's resistivity is similar to that of gadolinium, and of the actinides plutonium and neptunium, but significantly higher than that of americium, uranium, polonium and thorium.
Under ultraviolet illumination, curium(III) ions show strong and stable yellow-orange fluorescence with a maximum in the range of 590–640 nm, depending on their environment. The fluorescence originates from transitions between the first excited state ⁶D7/2 and the ground state ⁸S7/2. Analysis of this fluorescence allows monitoring interactions between Cm(III) ions in organic and inorganic complexes.
Curium in solution almost always has the +3 oxidation state, the most stable oxidation state for curium. The +4 oxidation state is seen mainly in a few solid phases, such as CmO2 and CmF4. Aqueous curium(IV) is only known in the presence of strong oxidizers such as potassium persulfate, and is easily reduced to curium(III) by radiolysis and even by water itself. The chemical behavior of curium is different from that of the actinides thorium and uranium, and is similar to that of americium and many lanthanides. In aqueous solution, the Cm³⁺ ion is colorless to pale green, and the Cm⁴⁺ ion is pale yellow. The optical absorption of the Cm³⁺ ion contains three sharp peaks at 375.4, 381.2 and 396.5 nm, and their strength can be directly converted into the concentration of the ions. The +6 oxidation state has only been reported once in solution, in 1978, as the curyl ion (CmO2²⁺): this was prepared from beta decay of americium-242 in the americium(V) ion AmO2⁺. Failure to get Cm(VI) from oxidation of Cm(III) and Cm(IV) may be due to the high Cm⁴⁺/Cm³⁺ ionization potential and the instability of Cm(V).
Curium ions are hard Lewis acids and thus form most stable complexes with hard bases. The bonding is mostly ionic, with a small covalent component. Curium in its complexes commonly exhibits a 9-fold coordination environment, with a tricapped trigonal prismatic molecular geometry.
About 19 radioisotopes and 7 nuclear isomers, from ²³³Cm to ²⁵¹Cm, are known; none are stable. The longest half-lives are 15.6 million years (²⁴⁷Cm) and 348,000 years (²⁴⁸Cm). Other long-lived ones are ²⁴⁵Cm (8500 years), ²⁵⁰Cm (8300 years) and ²⁴⁶Cm (4760 years). Curium-250 is unusual: it mostly (~86%) decays by spontaneous fission. The most commonly used isotopes are ²⁴²Cm and ²⁴⁴Cm, with half-lives of 162.8 days and 18.1 years, respectively.
All isotopes from ²⁴²Cm to ²⁴⁸Cm, as well as ²⁵⁰Cm, undergo a self-sustaining nuclear chain reaction and thus in principle can act as a nuclear fuel in a reactor. As in most transuranic elements, the nuclear fission cross section is especially high for the odd-mass curium isotopes ²⁴³Cm, ²⁴⁵Cm and ²⁴⁷Cm. These can be used in thermal-neutron reactors, whereas a mixture of curium isotopes is only suitable for fast breeder reactors, since the even-mass isotopes are not fissile in a thermal reactor and accumulate as burn-up increases. The mixed-oxide (MOX) fuel, which is to be used in power reactors, should contain little or no curium because neutron activation of ²⁴⁸Cm will create californium. Californium is a strong neutron emitter, and would pollute the back end of the fuel cycle and increase the dose to reactor personnel. Hence, if minor actinides are to be used as fuel in a thermal neutron reactor, the curium should be excluded from the fuel or placed in special fuel rods where it is the only actinide present.
The adjacent table lists the critical masses for curium isotopes for a sphere, without moderator or reflector. With a metal reflector (30 cm of steel), the critical masses of the odd isotopes are about 3–4 kg. When using water (thickness ~20–30 cm) as the reflector, the critical mass can be as small as 59 grams for ²⁴⁵Cm, 155 grams for ²⁴³Cm and 1550 grams for ²⁴⁷Cm. There is significant uncertainty in these critical mass values. While it is usually on the order of 20%, the values for ²⁴²Cm and ²⁴⁶Cm were listed as large as 371 kg and 70.1 kg, respectively, by some research groups.
Curium is not currently used as nuclear fuel due to its low availability and high price. ²⁴⁵Cm and ²⁴⁷Cm have very small critical masses and so could be used in tactical nuclear weapons, but none are known to have been made. Curium-243 is not suitable for this purpose due to its short half-life and strong α emission, which would cause excessive heat. Curium-247 would be highly suitable due to its long half-life, which is 647 times longer than that of plutonium-239 (used in many existing nuclear weapons).
The longest-lived isotope, ²⁴⁷Cm, has a half-life of 15.6 million years; so any primordial curium, that is, curium present on Earth when it formed, should have decayed by now. Its past presence as an extinct radionuclide is detectable as an excess of its primordial, long-lived daughter ²³⁵U. Traces of curium may occur naturally in uranium minerals due to neutron capture and beta decay, though this has not been confirmed. Traces of ²⁴⁷Cm are also probably brought to Earth in cosmic rays, but again this has not been confirmed.
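A back-of-the-envelope check makes the extinction of primordial curium concrete. The sketch below assumes an Earth age of about 4.5 billion years (a value not given in the text) and the 15.6-million-year half-life quoted above:

EARTH_AGE_Y = 4.5e9       # assumed age of Earth, in years
HALF_LIFE_Y = 15.6e6      # half-life of curium-247, from the text

elapsed_half_lives = EARTH_AGE_Y / HALF_LIFE_Y   # about 288
surviving_fraction = 2 ** (-elapsed_half_lives)  # about 1e-87
print(elapsed_half_lives, surviving_fraction)

After roughly 288 half-lives, the surviving fraction is about 10⁻⁸⁷, i.e. not a single atom of any plausible initial inventory remains.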
Curium is made artificially in small amounts for research purposes. It also occurs as one of the waste products in spent nuclear fuel. Curium is present in nature in some areas used for nuclear weapons testing. Analysis of the debris at the test site of the United States' first thermonuclear weapon, Ivy Mike, (1 November 1952, Enewetak Atoll), besides einsteinium, fermium, plutonium and americium also revealed isotopes of berkelium, californium and curium, in particular ²⁴⁵Cm and ²⁴⁶Cm, and smaller quantities of ²⁴⁷Cm, ²⁴⁸Cm and ²⁴⁹Cm.
Atmospheric curium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed an about 4,000 times higher concentration of curium in sandy soil particles than in the water present in the soil pores; an even higher ratio, of about 18,000, was measured in loam soils.
The transuranium elements from americium to fermium, including curium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Curium, along with other non-primordial actinides, has also been suspected to exist in the spectrum of Przybylski's Star.
Curium is made in small amounts in nuclear reactors, and by now only kilograms of ²⁴²Cm and ²⁴⁴Cm have been accumulated, and grams or even milligrams for heavier isotopes. Hence the high price of curium, which has been quoted at 160–185 USD per milligram, with a more recent estimate at US$2,000/g for ²⁴²Cm and US$170/g for ²⁴⁴Cm. In nuclear reactors, curium is formed from ²³⁸U in a series of nuclear reactions. In the first chain, ²³⁸U captures a neutron and converts into ²³⁹U, which via β decay transforms into ²³⁹Np and then ²³⁹Pu.
Further neutron capture followed by β-decay gives americium (²⁴¹Am), which further becomes ²⁴²Cm:
For research purposes, curium is obtained by irradiating not uranium but plutonium, which is available in large amounts from spent nuclear fuel. A much higher neutron flux is used for the irradiation, resulting in a different reaction chain and the formation of ²⁴⁴Cm:
Curium-244 alpha decays to ²⁴⁰Pu, but it also absorbs neutrons, hence a small amount of heavier curium isotopes is formed. Of those, ²⁴⁷Cm and ²⁴⁸Cm are popular in scientific research due to their long half-lives. But the production rate of ²⁴⁷Cm in thermal neutron reactors is low because it is prone to fission induced by thermal neutrons. Synthesis of ²⁵⁰Cm by neutron capture is unlikely due to the short half-life of the intermediate ²⁴⁹Cm (64 min), which β decays to the berkelium isotope ²⁴⁹Bk.
The above cascade of (n,γ) reactions gives a mix of different curium isotopes. Their post-synthesis separation is cumbersome, so a selective synthesis is desired. Curium-248 is favored for research purposes due to its long half-life. The most efficient way to prepare this isotope is by α-decay of the californium isotope ²⁵²Cf, which is available in relatively large amounts due to its long half-life (2.65 years). About 35–50 mg of ²⁴⁸Cm is produced this way per year. This decay route yields ²⁴⁸Cm with an isotopic purity of 97%.
Another isotope, ²⁴⁵Cm, can be obtained for research from the α-decay of ²⁴⁹Cf; the latter isotope is produced in small amounts from the β-decay of ²⁴⁹Bk.
Most synthesis routines yield a mix of actinide isotopes as oxides, from which a given isotope of curium needs to be separated. An example procedure could be to dissolve spent reactor fuel (e.g. MOX fuel) in nitric acid, and remove the bulk of the uranium and plutonium using a PUREX (Plutonium – URanium EXtraction) type extraction with tributyl phosphate in a hydrocarbon. The lanthanides and the remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. A curium compound is then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A bis-triazinyl bipyridine complex has recently been proposed as such a reagent, as it is highly selective to curium. Separation of curium from the very chemically similar americium can also be done by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperature. Both americium and curium are present in solutions mostly in the +3 valence state; americium oxidizes to soluble Am(IV) complexes, but curium stays unchanged and so can be isolated by repeated centrifugation.
Metallic curium is obtained by reduction of its compounds. Initially, curium(III) fluoride was used for this purpose. The reaction was done in an environment free of water and oxygen, in an apparatus made of tantalum and tungsten, using elemental barium or lithium as reducing agents.
Another possibility is reduction of curium(IV) oxide using a magnesium-zinc alloy in a melt of magnesium chloride and magnesium fluoride.
Curium readily reacts with oxygen forming mostly Cm2O3 and CmO2 oxides, but the divalent oxide CmO is also known. Black CmO2 can be obtained by burning curium oxalate (Cm2(C2O4)3), nitrate (Cm(NO3)3), or hydroxide in pure oxygen. Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3:
Or, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen:
Also, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal, such as barium.
Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to give a volatile form of CmO2 and the volatile trioxide CmO3, one of two known examples of the very rare +6 state for curium. Another observed species was reported to behave similarly to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state; but new experiments seem to indicate that CmO4 does not exist, and have cast doubt on the existence of PuO4 as well.
The colorless curium(III) fluoride (CmF3) can be made by adding fluoride ions into curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4) on the other hand is only obtained by reacting curium(III) fluoride with molecular fluorine:
A series of ternary fluorides are known of the form A7Cm6F31 (A = alkali metal).
The colorless curium(III) chloride (CmCl3) is made by reacting curium hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas. It can be further turned into other halides, such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonium salt of the corresponding halide at temperatures of ~400–450 °C:
Or, one can heat curium oxide to ~600°C with the corresponding acid (such as hydrobromic for curium bromide). Vapor phase hydrolysis of curium(III) chloride gives curium oxychloride:
Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature. Curium pnictides of the type CmX are known for nitrogen, phosphorus, arsenic and antimony. They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperature.
Organometallic complexes analogous to uranocene are also known for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable "curocene" complex (η⁸-C8H8)2Cm, but it has not been reported experimentally yet.
Formation of complexes of the type Cm(n-C3H7-BTP)3 (BTP = 2,6-di(1,2,4-triazin-3-yl)pyridine) in solutions containing n-C3H7-BTP and Cm³⁺ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and thus are useful for separating it from lanthanides and other actinides. Dissolved Cm³⁺ ions bind with many organic compounds, such as hydroxamic acid, urea, fluorescein and adenosine triphosphate. Many of these compounds are related to the biological activity of various microorganisms. The resulting complexes show strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying interactions between the Cm³⁺ ion and the ligands via changes in the half-life (of the order of ~0.1 ms) and spectrum of the fluorescence.
Curium has no biological significance. There are a few reports on biosorption of Cm by bacteria and archaea, but no evidence for incorporation of curium into them.
Curium is one of the most radioactive isolable elements. Its two most common isotopes, ²⁴²Cm and ²⁴⁴Cm, are strong alpha emitters (energy ~6 MeV); they have fairly short half-lives, 162.8 days and 18.1 years, and give as much as 120 W/g and 3 W/g of heat, respectively. Therefore, curium can be used in its common oxide form in radioisotope thermoelectric generators like those in spacecraft. This application has been studied for the ²⁴⁴Cm isotope, while ²⁴²Cm was abandoned due to its prohibitive price, around 2000 USD/g. ²⁴³Cm, with a ~30-year half-life and good energy yield of ~1.6 W/g, could be a suitable fuel, but it gives significant amounts of harmful gamma and beta rays from its radioactive decay products. As an α-emitter, ²⁴⁴Cm needs much less radiation shielding, but it has a high spontaneous fission rate, and thus notable neutron and gamma radiation. Compared to a competing thermoelectric generator isotope such as ²³⁸Pu, ²⁴⁴Cm emits 500 times more neutrons, and its higher gamma emission requires a shield that is 20 times thicker—2 inches (51 mm) of lead for a 1 kW source, compared to 0.1 inches (2.5 mm) for ²³⁸Pu. Therefore, this use of curium is currently considered impractical.
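The quoted specific powers follow directly from the half-lives and the ~6 MeV decay energy; the Python sketch below reproduces them, with the exact per-decay energies (6.1 and 5.8 MeV) being assumed round values rather than figures from the text:

import math

AVOGADRO = 6.02214076e23    # atoms per mole
MEV_TO_J = 1.602176634e-13  # joules per MeV

def decay_heat_w_per_g(half_life_s, mass_number, energy_mev):
    # Specific power of a pure alpha emitter: P = lambda * N * E
    lam = math.log(2) / half_life_s
    atoms_per_gram = AVOGADRO / mass_number
    return lam * atoms_per_gram * energy_mev * MEV_TO_J

print(decay_heat_w_per_g(162.8 * 86400, 242, 6.1))          # ~120 W/g for Cm-242
print(decay_heat_w_per_g(18.1 * 365.25 * 86400, 244, 5.8))  # ~3 W/g for Cm-244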
A more promising use of ²⁴²Cm is for making ²³⁸Pu, a better radioisotope for thermoelectric generators such as in heart pacemakers. The alternative routes to ²³⁸Pu use the (n,γ) reaction of ²³⁷Np, or deuteron bombardment of uranium, though both reactions always produce ²³⁶Pu as an undesired by-product, since the latter decays to ²³²U with strong gamma emission. Curium is a common starting material for making higher transuranic and superheavy elements. Thus, bombarding ²⁴⁸Cm with neon (²²Ne), magnesium (²⁶Mg), or calcium (⁴⁸Ca) yields isotopes of seaborgium (²⁶⁵Sg), hassium (²⁶⁹Hs and ²⁷⁰Hs), and livermorium (²⁹²Lv, ²⁹³Lv, and possibly ²⁹⁴Lv). Californium was discovered when a microgram-sized target of curium-242 was irradiated with 35 MeV alpha particles using the 60-inch (150 cm) cyclotron at Berkeley:
Only about 5,000 atoms of californium were produced in this experiment.
The odd-mass curium isotopes Cm, Cm, and Cm are all highly fissile and can release additional energy in a thermal spectrum nuclear reactor. All curium isotopes are fissionable in fast-neutron reactors. This is one of the motives for minor actinide separation and transmutation in the nuclear fuel cycle, helping to reduce the long-term radiotoxicity of used, or spent nuclear fuel.
The most practical application of ²⁴⁴Cm—though rather limited in total volume—is as an α-particle source in alpha particle X-ray spectrometers (APXS). These instruments were installed on the Sojourner rover, Mars 96, the Mars Exploration Rovers and the Philae comet lander, as well as the Mars Science Laboratory, to analyze the composition and structure of the rocks on the surface of Mars. APXS was also used in the Surveyor 5–7 Moon probes, but with a ²⁴²Cm source.
An elaborate APXS setup has a sensor head containing six curium sources with a total decay rate of several tens of millicuries (roughly one gigabecquerel). The sources are collimated on a sample, and the energy spectra of the alpha particles and protons scattered from the sample are analyzed (proton analysis is done only in some spectrometers). These spectra contain quantitative information on all major elements in the sample except for hydrogen, helium and lithium.
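For scale, the amount of curium in such a sensor head is minute. A rough estimate, assuming a 50 mCi curium-244 source and the ~3×10¹² Bq/g specific activity implied by its 18.1-year half-life (both assumed values consistent with the figures above):

MCI_TO_BQ = 3.7e7                    # becquerels per millicurie
source_activity_bq = 50 * MCI_TO_BQ  # several tens of millicuries
SPECIFIC_ACTIVITY = 3.0e12           # approx. Bq per gram of curium-244
print(source_activity_bq / SPECIFIC_ACTIVITY)  # ~6e-4 g, i.e. under a milligram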
Due to its radioactivity, curium and its compounds must be handled in appropriate laboratories under special arrangements. While curium itself mostly emits α-particles, which are absorbed by thin layers of common materials, some of its decay products emit significant fractions of beta and gamma rays, which require more elaborate protection. If ingested, curium is excreted within a few days and only 0.05% is absorbed into the blood. From there, ~45% goes to the liver, 45% to the bones, and the remaining 10% is excreted. In bone, curium accumulates on the inside of the interfaces to the bone marrow and does not significantly redistribute with time; its radiation destroys bone marrow and thus stops red blood cell creation. The biological half-life of curium is about 20 years in the liver and 50 years in the bones. Curium is absorbed into the body much more strongly via inhalation, and the allowed total dose of ²⁴⁴Cm in soluble form is 0.3 μCi. Intravenous injection of ²⁴²Cm- and ²⁴⁴Cm-containing solutions to rats increased the incidence of bone tumors, and inhalation promoted lung and liver cancer.
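The retention times quoted above combine with radioactive decay in the standard effective-half-life formula; the sketch below applies it to curium-244 (using its 18.1-year physical half-life, an assumption since the text does not specify the isotope here):

def effective_half_life(t_phys, t_bio):
    # 1/T_eff = 1/T_phys + 1/T_bio; both inputs in the same units
    return 1 / (1 / t_phys + 1 / t_bio)

print(effective_half_life(18.1, 20))  # liver: ~9.5 years
print(effective_half_life(18.1, 50))  # bone: ~13.3 years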
Curium isotopes are inevitably present in spent nuclear fuel (about 20 g/tonne). The isotopes ²⁴⁵Cm–²⁴⁸Cm have half-lives of thousands of years or more and must be removed to neutralize the fuel for disposal. Such a procedure involves several steps, where curium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure, nuclear transmutation, while well documented for other elements, is still being developed for curium.
{
"paragraph_id": 0,
"text": "Curium is a synthetic chemical element; it has symbol Cm and atomic number 96. This transuranic actinide element was named after eminent scientists Marie and Pierre Curie, both known for their research on radioactivity. Curium was first intentionally made by the team of Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso in 1944, using the cyclotron at Berkeley. They bombarded the newly discovered element plutonium (the isotope Pu) with alpha particles. This was then sent to the Metallurgical Laboratory at University of Chicago where a tiny sample of curium was eventually separated and identified. The discovery was kept secret until after the end of World War II. The news was released to the public in November 1947. Most curium is produced by bombarding uranium or plutonium with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains ~20 grams of curium.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Curium is a hard, dense, silvery metal with a high melting and boiling point for an actinide. It is paramagnetic at ambient conditions, but becomes antiferromagnetic upon cooling, and other magnetic transitions are also seen in many curium compounds. In compounds, curium usually has valence +3 and sometimes +4; the +3 valence is predominant in solutions. Curium readily oxidizes, and its oxides are a dominant form of this element. It forms strongly fluorescent complexes with various organic compounds, but there is no evidence of its incorporation into bacteria and archaea. If it gets into the human body, curium accumulates in bones, lungs, and liver, where it promotes cancer.",
"title": ""
},
{
"paragraph_id": 2,
"text": "All known isotopes of curium are radioactive and have small critical mass for a nuclear chain reaction. They mostly emit α-particles; radioisotope thermoelectric generators can use the heat from this process, but this is hindered by the rarity and high cost of curium. Curium is used in making heavier actinides and the Pu radionuclide for power sources in artificial cardiac pacemakers and RTGs for spacecraft. It served as the α-source in the alpha particle X-ray spectrometers of several space probes, including the Sojourner, Spirit, Opportunity, and Curiosity Mars rovers and the Philae lander on comet 67P/Churyumov–Gerasimenko, to analyze the composition and structure of the surface.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Though curium had likely been produced in previous nuclear experiments as well as the natural nuclear fission reactor at Oklo, Gabon, it was first intentionally synthesized, isolated and identified in 1944, at University of California, Berkeley, by Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso. In their experiments, they used a 60-inch (150 cm) cyclotron.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Curium was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory), University of Chicago. It was the third transuranium element to be discovered even though it is the fourth in the series – the lighter element americium was still unknown.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The sample was prepared as follows: first plutonium nitrate solution was coated on a platinum foil of ~0.5 cm area, the solution was evaporated and the residue was converted into plutonium(IV) oxide (PuO2) by annealing. Following cyclotron irradiation of the oxide, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid, and further separation was done by ion exchange to yield a certain isotope of curium. The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness).",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Curium-242 was made in July–August 1944 by bombarding Pu with α-particles to produce curium with the release of a neutron:",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Curium-242 was unambiguously identified by the characteristic energy of the α-particles emitted during the decay:",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The half-life of this alpha decay was first measured as 150 days and then corrected to 162.8 days.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Another isotope Cm was produced in a similar reaction in March 1945:",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The α-decay half-life of Cm was correctly determined as 26.7 days.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The discovery of curium and americium in 1944 was closely related to the Manhattan Project, so the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children, the Quiz Kids, five days before the official presentation at an American Chemical Society meeting on November 11, 1945, when one listener asked if any new transuranic element beside plutonium and neptunium had been discovered during the war. The discovery of curium (Cm and Cm), its production, and its compounds was later patented listing only Seaborg as the inventor.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The element was named after Marie Curie and her husband Pierre Curie, who are known for discovering radium and for their work in radioactivity. It followed the example of gadolinium, a lanthanide element above curium in the periodic table, which was named after the explorer of rare-earth elements Johan Gadolin:",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The first curium samples were barely visible, and were identified by their radioactivity. Louis Werner and Isadore Perlman made the first substantial sample of 30 µg curium-242 hydroxide at University of California, Berkeley in 1947 by bombarding americium-241 with neutrons. Macroscopic amounts of curium(III) fluoride were obtained in 1950 by W. W. T. Crane, J. C. Wallmann and B. B. Cunningham. Its magnetic susceptibility was very close to that of GdF3 providing the first experimental evidence for the +3 valence of curium in its compounds. Curium metal was produced only in 1951 by reduction of CmF3 with barium.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "A synthetic, radioactive element, curium is a hard, dense metal with a silvery-white appearance and physical and chemical properties resembling gadolinium. Its melting point of 1344 °C is significantly higher than that of the previous elements neptunium (637 °C), plutonium (639 °C) and americium (1176 °C). In comparison, gadolinium melts at 1312 °C. Curium boils at 3556 °C. With a density of 13.52 g/cm, curium is lighter than neptunium (20.45 g/cm) and plutonium (19.8 g/cm), but heavier than most other metals. Of two crystalline forms of curium, α-Cm is more stable at ambient conditions. It has a hexagonal symmetry, space group P63/mmc, lattice parameters a = 365 pm and c = 1182 pm, and four formula units per unit cell. The crystal consists of double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum. At pressure >23 GPa, at room temperature, α-Cm becomes β-Cm, which has face-centered cubic symmetry, space group Fm3m and lattice constant a = 493 pm. On further compression to 43 GPa, curium becomes an orthorhombic γ-Cm structure similar to α-uranium, with no further transitions observed up to 52 GPa. These three curium phases are also called Cm I, II and III.",
"title": "Characteristics"
},
{
"paragraph_id": 15,
"text": "Curium has peculiar magnetic properties. Its neighbor element americium shows no deviation from Curie-Weiss paramagnetism in the entire temperature range, but α-Cm transforms to an antiferromagnetic state upon cooling to 65–52 K, and β-Cm exhibits a ferrimagnetic transition at ~205 K. Curium pnictides show ferromagnetic transitions upon cooling: CmN and CmAs at 109 K, CmP at 73 K and CmSb at 162 K. The lanthanide analog of curium, gadolinium, and its pnictides, also show magnetic transitions upon cooling, but the transition character is somewhat different: Gd and GdN become ferromagnetic, and GdP, GdAs and GdSb show antiferromagnetic ordering.",
"title": "Characteristics"
},
{
"paragraph_id": 16,
"text": "In accordance with magnetic data, electrical resistivity of curium increases with temperature – about twice between 4 and 60 K – and then is nearly constant up to room temperature. There is a significant increase in resistivity over time (~10 µΩ·cm/h) due to self-damage of the crystal lattice by alpha decay. This makes uncertain the true resistivity of curium (~125 µΩ·cm). Curium's resistivity is similar to that of gadolinium, and the actinides plutonium and neptunium, but significantly higher than that of americium, uranium, polonium and thorium.",
"title": "Characteristics"
},
{
"paragraph_id": 17,
"text": "Under ultraviolet illumination, curium(III) ions show strong and stable yellow-orange fluorescence with a maximum in the range of 590–640 nm depending on their environment. The fluorescence originates from the transitions from the first excited state D7/2 and the ground state S7/2. Analysis of this fluorescence allows monitoring interactions between Cm(III) ions in organic and inorganic complexes.",
"title": "Characteristics"
},
{
"paragraph_id": 18,
"text": "Curium ion in solution almost always has a +3 oxidation state, the most stable oxidation state for curium. A +4 oxidation state is seen mainly in a few solid phases, such as CmO2 and CmF4. Aqueous curium(IV) is only known in the presence of strong oxidizers such as potassium persulfate, and is easily reduced to curium(III) by radiolysis and even by water itself. Chemical behavior of curium is different from the actinides thorium and uranium, and is similar to americium and many lanthanides. In aqueous solution, the Cm ion is colorless to pale green; Cm ion is pale yellow. The optical absorption of Cm ion contains three sharp peaks at 375.4, 381.2 and 396.5 nm and their strength can be directly converted into the concentration of the ions. The +6 oxidation state has only been reported once in solution in 1978, as the curyl ion (CmO2): this was prepared from beta decay of americium-242 in the americium(V) ion AmO2. Failure to get Cm(VI) from oxidation of Cm(III) and Cm(IV) may be due to the high Cm/Cm ionization potential and the instability of Cm(V).",
"title": "Characteristics"
},
{
"paragraph_id": 19,
"text": "Curium ions are hard Lewis acids and thus form most stable complexes with hard bases. The bonding is mostly ionic, with a small covalent component. Curium in its complexes commonly exhibits a 9-fold coordination environment, with a tricapped trigonal prismatic molecular geometry.",
"title": "Characteristics"
},
{
"paragraph_id": 20,
"text": "About 19 radioisotopes and 7 nuclear isomers, Cm to Cm, are known; none are stable. The longest half-lives are 15.6 million years (Cm) and 348,000 years (Cm). Other long-lived ones are Cm (8500 years), Cm (8300 years) and Cm (4760 years). Curium-250 is unusual: it mostly (~86%) decays by spontaneous fission. The most commonly used isotopes are Cm and Cm with the half-lives 162.8 days and 18.1 years, respectively.",
"title": "Characteristics"
},
{
"paragraph_id": 21,
"text": "All isotopes ranging from Cm to Cm, as well as Cm, undergo a self-sustaining nuclear chain reaction and thus in principle can be a nuclear fuel in a reactor. As in most transuranic elements, nuclear fission cross section is especially high for the odd-mass curium isotopes Cm, Cm and Cm. These can be used in thermal-neutron reactors, whereas a mixture of curium isotopes is only suitable for fast breeder reactors since the even-mass isotopes are not fissile in a thermal reactor and accumulate as burn-up increases. The mixed-oxide (MOX) fuel, which is to be used in power reactors, should contain little or no curium because neutron activation of Cm will create californium. Californium is a strong neutron emitter, and would pollute the back end of the fuel cycle and increase the dose to reactor personnel. Hence, if minor actinides are to be used as fuel in a thermal neutron reactor, the curium should be excluded from the fuel or placed in special fuel rods where it is the only actinide present.",
"title": "Characteristics"
},
{
"paragraph_id": 22,
"text": "The adjacent table lists the critical masses for curium isotopes for a sphere, without moderator or reflector. With a metal reflector (30 cm of steel), the critical masses of the odd isotopes are about 3–4 kg. When using water (thickness ~20–30 cm) as the reflector, the critical mass can be as small as 59 grams for Cm, 155 grams for Cm and 1550 grams for Cm. There is significant uncertainty in these critical mass values. While it is usually on the order of 20%, the values for Cm and Cm were listed as large as 371 kg and 70.1 kg, respectively, by some research groups.",
"title": "Characteristics"
},
{
"paragraph_id": 23,
"text": "Curium is not currently used as nuclear fuel due to its low availability and high price. Cm and Cm have very small critical mass and so could be used in tactical nuclear weapons, but none are known to have been made. Curium-243 is not suitable for such, due to its short half-life and strong α emission, which would cause excessive heat. Curium-247 would be highly suitable due to its long half-life, which is 647 times longer than plutonium-239 (used in many existing nuclear weapons).",
"title": "Characteristics"
},
{
"paragraph_id": 24,
"text": "The longest-lived isotope, Cm, has half-life 15.6 million years; so any primordial curium, that is, present on Earth when it formed, should have decayed by now. Its past presence as an extinct radionuclide is detectable as an excess of its primordial, long-lived daughter U. Traces of curium may occur naturally in uranium minerals due to neutron capture and beta decay, though this has not been confirmed. Traces of Cm are also probably brought to Earth in cosmic rays, but again this has not been confirmed.",
"title": "Characteristics"
},
{
"paragraph_id": 25,
"text": "Curium is made artificially in small amounts for research purposes. It also occurs as one of the waste products in spent nuclear fuel. Curium is present in nature in some areas used for nuclear weapons testing. Analysis of the debris at the test site of the United States' first thermonuclear weapon, Ivy Mike, (1 November 1952, Enewetak Atoll), besides einsteinium, fermium, plutonium and americium also revealed isotopes of berkelium, californium and curium, in particular Cm, Cm and smaller quantities of Cm, Cm and Cm.",
"title": "Characteristics"
},
{
"paragraph_id": 26,
"text": "Atmospheric curium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 4,000 times higher concentration of curium at the sandy soil particles than in water present in the soil pores. An even higher ratio of about 18,000 was measured in loam soils.",
"title": "Characteristics"
},
{
"paragraph_id": 27,
"text": "The transuranium elements from americium to fermium, including curium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.",
"title": "Characteristics"
},
{
"paragraph_id": 28,
"text": "Curium, and other non-primordial actinides, have also been suspected to exist in the spectrum of Przybylski's Star.",
"title": "Characteristics"
},
{
"paragraph_id": 29,
"text": "Curium is made in small amounts in nuclear reactors, and by now only kilograms of Cm and Cm have been accumulated, and grams or even milligrams for heavier isotopes. Hence the high price of curium, which has been quoted at 160–185 USD per milligram, with a more recent estimate at US$2,000/g for Cm and US$170/g for Cm. In nuclear reactors, curium is formed from U in a series of nuclear reactions. In the first chain, U captures a neutron and converts into U, which via β decay transforms into Np and Pu.",
"title": "Synthesis"
},
{
"paragraph_id": 30,
"text": "Further neutron capture followed by β-decay gives americium (Am) which further becomes Cm:",
"title": "Synthesis"
},
{
"paragraph_id": 31,
"text": "For research purposes, curium is obtained by irradiating not uranium but plutonium, which is available in large amounts from spent nuclear fuel. A much higher neutron flux is used for the irradiation that results in a different reaction chain and formation of Cm:",
"title": "Synthesis"
},
{
"paragraph_id": 32,
"text": "Curium-244 alpha decays to Pu, but it also absorbs neutrons, hence a small amount of heavier curium isotopes. Of those, Cm and Cm are popular in scientific research due to their long half-lives. But the production rate of Cm in thermal neutron reactors is low because it is prone to fission due to thermal neutrons. Synthesis of Cm by neutron capture is unlikely due to the short half-life of the intermediate Cm (64 min), which β decays to the berkelium isotope Bk.",
"title": "Synthesis"
},
{
"paragraph_id": 33,
"text": "The above cascade of (n,γ) reactions gives a mix of different curium isotopes. Their post-synthesis separation is cumbersome, so a selective synthesis is desired. Curium-248 is favored for research purposes due to its long half-life. The most efficient way to prepare this isotope is by α-decay of the californium isotope Cf, which is available in relatively large amounts due to its long half-life (2.65 years). About 35–50 mg of Cm is produced thus, per year. The associated reaction produces Cm with isotopic purity of 97%.",
"title": "Synthesis"
},
{
"paragraph_id": 34,
"text": "Another isotope, Cm, can be obtained for research, from α-decay of Cf; the latter isotope is produced in small amounts from β-decay of Bk.",
"title": "Synthesis"
},
{
"paragraph_id": 35,
"text": "Most synthesis routines yield a mix of actinide isotopes as oxides, from which a given isotope of curium needs to be separated. An example procedure could be to dissolve spent reactor fuel (e.g. MOX fuel) in nitric acid, and remove the bulk of the uranium and plutonium using a PUREX (Plutonium – URanium EXtraction) type extraction with tributyl phosphate in a hydrocarbon. The lanthanides and the remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. A curium compound is then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. Bis-triazinyl bipyridine complex has been recently proposed as such reagent which is highly selective to curium. Separation of curium from the very chemically similar americium can also be done by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperature. Both americium and curium are present in solutions mostly in the +3 valence state; americium oxidizes to soluble Am(IV) complexes, but curium stays unchanged and so can be isolated by repeated centrifugation.",
"title": "Synthesis"
},
{
"paragraph_id": 36,
"text": "Metallic curium is obtained by reduction of its compounds. Initially, curium(III) fluoride was used for this purpose. The reaction was done in an environment free of water and oxygen, in an apparatus made of tantalum and tungsten, using elemental barium or lithium as reducing agents.",
"title": "Synthesis"
},
{
"paragraph_id": 37,
"text": "Another possibility is reduction of curium(IV) oxide using a magnesium-zinc alloy in a melt of magnesium chloride and magnesium fluoride.",
"title": "Synthesis"
},
{
"paragraph_id": 38,
"text": "Curium readily reacts with oxygen forming mostly Cm2O3 and CmO2 oxides, but the divalent oxide CmO is also known. Black CmO2 can be obtained by burning curium oxalate (Cm2(C2O4)3), nitrate (Cm(NO3)3), or hydroxide in pure oxygen. Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3:",
"title": "Compounds and reactions"
},
{
"paragraph_id": 39,
"text": "Or, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen:",
"title": "Compounds and reactions"
},
{
"paragraph_id": 40,
"text": "Also, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal, such as barium.",
"title": "Compounds and reactions"
},
{
"paragraph_id": 41,
"text": "Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to give a volatile form of CmO2 and the volatile trioxide CmO3, one of two known examples of the very rare +6 state for curium. Another observed species was reported to behave similar to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state; but new experiments seem to indicate that CmO4 does not exist, and have cast doubt on the existence of PuO4 as well.",
"title": "Compounds and reactions"
},
{
"paragraph_id": 42,
"text": "The colorless curium(III) fluoride (CmF3) can be made by adding fluoride ions into curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4) on the other hand is only obtained by reacting curium(III) fluoride with molecular fluorine:",
"title": "Compounds and reactions"
},
{
"paragraph_id": 43,
"text": "A series of ternary fluorides are known of the form A7Cm6F31 (A = alkali metal).",
"title": "Compounds and reactions"
},
{
"paragraph_id": 44,
"text": "The colorless curium(III) chloride (CmCl3) is made by reacting curium hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas. It can be further turned into other halides such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonia salt of the corresponding halide at temperatures of ~400–450°C:",
"title": "Compounds and reactions"
},
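Taking the iodide as an example, the halide exchange described above balances as (a reconstruction; the bromide case is analogous with NH4Br):

$$\mathrm{CmCl_3} + 3\,\mathrm{NH_4I}\ \xrightarrow{\ 400{-}450\,^{\circ}\mathrm{C}\ }\ \mathrm{CmI_3} + 3\,\mathrm{NH_4Cl}$$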
{
"paragraph_id": 45,
"text": "Or, one can heat curium oxide to ~600°C with the corresponding acid (such as hydrobromic for curium bromide). Vapor phase hydrolysis of curium(III) chloride gives curium oxychloride:",
"title": "Compounds and reactions"
},
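The hydrolysis balances as (reconstructed, not quoted from the source):

$$\mathrm{CmCl_3} + \mathrm{H_2O}\ \longrightarrow\ \mathrm{CmOCl} + 2\,\mathrm{HCl}$$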
{
"paragraph_id": 46,
"text": "Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature. Curium pnictides of the type CmX are known for nitrogen, phosphorus, arsenic and antimony. They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperature.",
"title": "Compounds and reactions"
},
{
"paragraph_id": 47,
"text": "Organometallic complexes analogous to uranocene are known also for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable \"curocene\" complex (η-C8H8)2Cm, but it has not been reported experimentally yet.",
"title": "Compounds and reactions"
},
{
"paragraph_id": 48,
"text": "Formation of the complexes of the type Cm(n-C3H7-BTP)3 (BTP = 2,6-di(1,2,4-triazin-3-yl)pyridine), in solutions containing n-C3H7-BTP and Cm ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and thus are useful for separating it from lanthanides and another actinides. Dissolved Cm ions bind with many organic compounds, such as hydroxamic acid, urea, fluorescein and adenosine triphosphate. Many of these compounds are related to biological activity of various microorganisms. The resulting complexes show strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying interactions between the Cm ion and the ligands via changes in the half-life (of the order ~0.1 ms) and spectrum of the fluorescence.",
"title": "Compounds and reactions"
},
{
"paragraph_id": 49,
"text": "Curium has no biological significance. There are a few reports on biosorption of Cm by bacteria and archaea, but no evidence for incorporation of curium into them.",
"title": "Compounds and reactions"
},
{
"paragraph_id": 50,
"text": "Curium is one of the most radioactive isolable elements. Its two most common isotopes Cm and Cm are strong alpha emitters (energy 6 MeV); they have fairly short half-lives, 162.8 days and 18.1 years, and give as much as 120 W/g and 3 W/g of heat, respectively. Therefore, curium can be used in its common oxide form in radioisotope thermoelectric generators like those in spacecraft. This application has been studied for the Cm isotope, while Cm was abandoned due to its prohibitive price, around 2000 USD/g. Cm with a ~30-year half-life and good energy yield of ~1.6 W/g could be a suitable fuel, but it gives significant amounts of harmful gamma and beta rays from radioactive decay products. As an α-emitter, Cm needs much less radiation shielding, but it has a high spontaneous fission rate, and thus a lot of neutron and gamma radiation. Compared to a competing thermoelectric generator isotope such as Pu, Cm emits 500 times more neutrons, and its higher gamma emission requires a shield that is 20 times thicker—2 inches (51 mm) of lead for a 1 kW source, compared to 0.1 inches (2.5 mm) for Pu. Therefore, this use of curium is currently considered impractical.",
"title": "Applications"
},
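The quoted specific powers follow from the half-lives via P = λNE. A minimal sketch in Python, assuming per-decay energies of about 6.1 MeV for 242Cm and 5.9 MeV for 244Cm (assumed values; the text only states ~6 MeV):

```python
from math import log

AVOGADRO = 6.022e23   # atoms per mole
MEV_TO_J = 1.602e-13  # joules per MeV
YEAR_S = 3.156e7      # seconds per year

def specific_power(half_life_s, molar_mass_g, decay_energy_mev):
    """Decay heat of a pure alpha emitter, P = lambda * N * E, in W/g."""
    decay_constant = log(2) / half_life_s    # 1/s
    atoms_per_gram = AVOGADRO / molar_mass_g
    return decay_constant * atoms_per_gram * decay_energy_mev * MEV_TO_J

print(specific_power(162.8 * 86400, 242, 6.1))  # Cm-242: ~120 W/g
print(specific_power(18.1 * YEAR_S, 244, 5.9))  # Cm-244: ~2.8 W/g
```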
{
"paragraph_id": 51,
"text": "A more promising use of Cm is for making Pu, a better radioisotope for thermoelectric generators such as in heart pacemakers. The alternate routes to Pu use the (n,γ) reaction of Np, or deuteron bombardment of uranium, though both reactions always produce Pu as an undesired by-product since the latter decays to U with strong gamma emission. Curium is a common starting material for making higher transuranic and superheavy elements. Thus, bombarding Cm with neon (Ne), magnesium (Mg), or calcium (Ca) yields isotopes of seaborgium (Sg), hassium (Hs and Hs), and livermorium (Lv, Lv, and possibly Lv). Californium was discovered when a microgram-sized target of curium-242 was irradiated with 35 MeV alpha particles using the 60-inch (150 cm) cyclotron at Berkeley:",
"title": "Applications"
},
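Written as a nuclear equation, with mass and charge balanced (standard notation, reconstructed rather than quoted from the source):

$$^{242}_{\ 96}\mathrm{Cm} + {}^{4}_{2}\mathrm{He}\ \longrightarrow\ {}^{245}_{\ 98}\mathrm{Cf} + {}^{1}_{0}\mathrm{n}$$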
{
"paragraph_id": 52,
"text": "Only about 5,000 atoms of californium were produced in this experiment.",
"title": "Applications"
},
{
"paragraph_id": 53,
"text": "The odd-mass curium isotopes Cm, Cm, and Cm are all highly fissile and can release additional energy in a thermal spectrum nuclear reactor. All curium isotopes are fissionable in fast-neutron reactors. This is one of the motives for minor actinide separation and transmutation in the nuclear fuel cycle, helping to reduce the long-term radiotoxicity of used, or spent nuclear fuel.",
"title": "Applications"
},
{
"paragraph_id": 54,
"text": "The most practical application of Cm—though rather limited in total volume—is as α-particle source in alpha particle X-ray spectrometers (APXS). These instruments were installed on the Sojourner, Mars, Mars 96, Mars Exploration Rovers and Philae comet lander, as well as the Mars Science Laboratory to analyze the composition and structure of the rocks on the surface of planet Mars. APXS was also used in the Surveyor 5–7 moon probes but with a Cm source.",
"title": "Applications"
},
{
"paragraph_id": 55,
"text": "An elaborate APXS setup has a sensor head containing six curium sources with a total decay rate of several tens of millicuries (roughly one gigabecquerel). The sources are collimated on a sample, and the energy spectra of the alpha particles and protons scattered from the sample are analyzed (proton analysis is done only in some spectrometers). These spectra contain quantitative information on all major elements in the sample except for hydrogen, helium and lithium.",
"title": "Applications"
},
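The parenthetical unit conversion checks out; taking, say, 30 mCi as an illustrative figure within "several tens of millicuries":

$$30\ \mathrm{mCi}\times 3.7\times 10^{10}\ \mathrm{Bq/Ci} = 1.11\times 10^{9}\ \mathrm{Bq}\approx 1\ \mathrm{GBq}$$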
{
"paragraph_id": 56,
"text": "Due to its radioactivity, curium and its compounds must be handled in appropriate labs under special arrangements. While curium itself mostly emits α-particles which are absorbed by thin layers of common materials, some of its decay products emit significant fractions of beta and gamma rays, which require a more elaborate protection. If consumed, curium is excreted within a few days and only 0.05% is absorbed in the blood. From there, ~45% goes to the liver, 45% to the bones, and the remaining 10% is excreted. In bone, curium accumulates on the inside of the interfaces to the bone marrow and does not significantly redistribute with time; its radiation destroys bone marrow and thus stops red blood cell creation. The biological half-life of curium is about 20 years in the liver and 50 years in the bones. Curium is absorbed in the body much more strongly via inhalation, and the allowed total dose of Cm in soluble form is 0.3 μCi. Intravenous injection of Cm- and Cm-containing solutions to rats increased the incidence of bone tumor, and inhalation promoted lung and liver cancer.",
"title": "Safety"
},
{
"paragraph_id": 57,
"text": "Curium isotopes are inevitably present in spent nuclear fuel (about 20 g/tonne). The isotopes Cm–Cm have decay times of thousands of years and must be removed to neutralize the fuel for disposal. Such a procedure involves several steps, where curium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure, nuclear transmutation, while well documented for other elements, is still being developed for curium.",
"title": "Safety"
}
] | Curium is a synthetic chemical element; it has symbol Cm and atomic number 96. This transuranic actinide element was named after eminent scientists Marie and Pierre Curie, both known for their research on radioactivity. Curium was first intentionally made by the team of Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso in 1944, using the cyclotron at Berkeley. They bombarded the newly discovered element plutonium with alpha particles. This was then sent to the Metallurgical Laboratory at University of Chicago where a tiny sample of curium was eventually separated and identified. The discovery was kept secret until after the end of World War II. The news was released to the public in November 1947. Most curium is produced by bombarding uranium or plutonium with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains ~20 grams of curium. Curium is a hard, dense, silvery metal with a high melting and boiling point for an actinide. It is paramagnetic at ambient conditions, but becomes antiferromagnetic upon cooling, and other magnetic transitions are also seen in many curium compounds. In compounds, curium usually has valence +3 and sometimes +4; the +3 valence is predominant in solutions. Curium readily oxidizes, and its oxides are a dominant form of this element. It forms strongly fluorescent complexes with various organic compounds, but there is no evidence of its incorporation into bacteria and archaea. If it gets into the human body, curium accumulates in bones, lungs, and liver, where it promotes cancer. All known isotopes of curium are radioactive and have small critical mass for a nuclear chain reaction. They mostly emit α-particles; radioisotope thermoelectric generators can use the heat from this process, but this is hindered by the rarity and high cost of curium. Curium is used in making heavier actinides and the 238Pu radionuclide for power sources in artificial cardiac pacemakers and RTGs for spacecraft. It served as the α-source in the alpha particle X-ray spectrometers of several space probes, including the Sojourner, Spirit, Opportunity, and Curiosity Mars rovers and the Philae lander on comet 67P/Churyumov–Gerasimenko, to analyze the composition and structure of the surface. | 2001-05-17T14:30:53Z | 2023-12-19T12:36:17Z | [
"Template:Authority control",
"Template:Marie & Pierre Curie",
"Template:Distinguish",
"Template:Infobox curium",
"Template:NUBASE 1997",
"Template:Cite news",
"Template:US patent",
"Template:Webarchive",
"Template:Clear",
"Template:Nuclide",
"Template:Cite book",
"Template:Good article",
"Template:Val",
"Template:See also",
"Template:NumBlk",
"Template:Periodic table (navbox)",
"Template:Convert",
"Template:Multiple image",
"Template:RubberBible86th",
"Template:Greenwood&Earnshaw2nd",
"Template:About",
"Template:Cite journal",
"Template:ISBN",
"Template:Cite web",
"Template:Overline",
"Template:E",
"Template:Category see also",
"Template:Curium compounds",
"Template:Nuclear Technology",
"Template:Chem",
"Template:Reflist",
"Template:Commons",
"Template:Wiktionary"
] | https://en.wikipedia.org/wiki/Curium |
5,676 | Californium | Californium is a synthetic chemical element; it has symbol Cf and atomic number 98. The element was first synthesized in 1950 at Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory), by bombarding curium with alpha particles (helium-4 ions). It is an actinide element, the sixth transuranium element to be synthesized, and has the second-highest atomic mass of all elements that have been produced in amounts large enough to see with the naked eye (after einsteinium). The element was named after the university and the U.S. state of California.
Two crystalline forms exist for californium at normal pressure: one above and one below 900 °C (1,650 °F). A third form exists at high pressure. Californium slowly tarnishes in air at room temperature. Californium compounds are dominated by the +3 oxidation state. The most stable of californium's twenty known isotopes is californium-251, with a half-life of 898 years. This short half-life means the element is not found in significant quantities in the Earth's crust. 252Cf, with a half-life of about 2.645 years, is the most common isotope used and is produced at Oak Ridge National Laboratory in the United States and Research Institute of Atomic Reactors in Russia.
Californium is one of the few transuranium elements with practical applications. Most of these applications exploit the property of certain radioactive isotopes of californium to emit neutrons. For example, californium can be used to help start up nuclear reactors, and it is employed as a source of neutrons when studying materials using neutron diffraction and neutron spectroscopy. Californium can also be used in nuclear synthesis of higher mass elements; oganesson (element 118) was synthesized by bombarding californium-249 atoms with calcium-48 ions. Users of californium must take into account radiological concerns and the element's ability to disrupt the formation of red blood cells by bioaccumulating in skeletal tissue.
Californium is a silvery-white actinide metal with a melting point of 900 ± 30 °C (1,650 ± 50 °F) and an estimated boiling point of 1,743 K (1,470 °C; 2,680 °F). The pure metal is malleable and is easily cut with a razor blade. Californium metal starts to vaporize above 300 °C (570 °F) when exposed to a vacuum. Below 51 K (−222 °C; −368 °F) californium metal is either ferromagnetic or ferrimagnetic (it acts like a magnet), between 48 and 66 K it is antiferromagnetic (an intermediate state), and above 160 K (−113 °C; −172 °F) it is paramagnetic (external magnetic fields can make it magnetic). It forms alloys with lanthanide metals but little is known about the resulting materials.
The element has two crystalline forms at standard atmospheric pressure: a double-hexagonal close-packed form dubbed alpha (α) and a face-centered cubic form designated beta (β). The α form exists below 600–800 °C with a density of 15.10 g/cm³ and the β form exists above 600–800 °C with a density of 8.74 g/cm³. At 48 GPa of pressure the β form changes into an orthorhombic crystal system due to delocalization of the atom's 5f electrons, which frees them to bond.
The bulk modulus of a material is a measure of its resistance to uniform pressure. Californium's bulk modulus is 50±5 GPa, which is similar to trivalent lanthanide metals but smaller than more familiar metals, such as aluminium (70 GPa).
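For reference, the bulk modulus has the standard definition (textbook form, not quoted from this article):

$$B = -V\,\frac{\partial P}{\partial V}$$

so a larger B means the volume shrinks less under a given pressure increase; by this measure californium (50 GPa) is more compressible than aluminium (70 GPa).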
Californium exhibits oxidation states of 4, 3, or 2. It typically forms eight or nine bonds to surrounding atoms or ions. Its chemical properties are predicted to be similar to other primarily 3+ valence actinide elements and the element dysprosium, which is the lanthanide above californium in the periodic table. Compounds in the +4 oxidation state are strong oxidizing agents and those in the +2 state are strong reducing agents.
The element slowly tarnishes in air at room temperature, with the rate increasing when moisture is added. Californium reacts when heated with hydrogen, nitrogen, or a chalcogen (oxygen family element); reactions with dry hydrogen and aqueous mineral acids are rapid.
Californium is only water-soluble as the californium(III) cation. Attempts to reduce or oxidize the +3 ion in solution have failed. The element forms a water-soluble chloride, nitrate, perchlorate, and sulfate and is precipitated as a fluoride, oxalate, or hydroxide. Californium is the heaviest actinide to exhibit covalent properties, as is observed in the californium borate.
Twenty isotopes of californium are known (mass number ranging from 237 to 256); the most stable are 251Cf with half-life 898 years, 249Cf with half-life 351 years, 250Cf with half-life 13.08 years, and 252Cf with half-life 2.645 years. All other isotopes have half-lives shorter than a year, and most of these have half-lives less than 20 minutes.
249Cf is formed from beta decay of berkelium-249, and most other californium isotopes are made by subjecting berkelium to intense neutron radiation in a nuclear reactor. Though californium-251 has the longest half-life, its production yield is only 10% due to its tendency to collect neutrons (high neutron capture) and its tendency to interact with other particles (high neutron cross section).
Californium-252 is a very strong neutron emitter, which makes it extremely radioactive and harmful. 252Cf, 96.9% of the time, alpha decays to curium-248; the other 3.1% of decays are spontaneous fission. One microgram (μg) of 252Cf emits 2.3 million neutrons per second, an average of 3.7 neutrons per spontaneous fission. Most other isotopes of californium alpha decay to curium (atomic number 96).
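The neutron figure can be reproduced from the half-life and branching quoted above; a minimal sketch in Python, using only numbers stated in this article:

```python
from math import log

AVOGADRO = 6.022e23  # atoms per mole
YEAR_S = 3.156e7     # seconds per year

atoms_per_ug = 1e-6 / 252 * AVOGADRO          # atoms in 1 μg of Cf-252
decays_per_s = log(2) / (2.645 * YEAR_S) * atoms_per_ug

sf_fraction = 0.031                           # 3.1% of decays are spontaneous fission
neutrons = decays_per_s * sf_fraction * 3.7   # 3.7 neutrons per fission
print(f"{neutrons:.2e} neutrons/s per μg")    # ≈ 2.3e6, matching the text
```

Multiplying by 60 gives ≈1.4×10⁸ neutrons per microgram per minute, consistent with the 139 million figure quoted under Applications.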
Californium was first made at the University of California Radiation Laboratory in Berkeley, by physics researchers Stanley Gerald Thompson, Kenneth Street Jr., Albert Ghiorso, and Glenn T. Seaborg, about February 9, 1950. It was the sixth transuranium element to be discovered; the team announced its discovery on March 17, 1950.
To produce californium, a microgram-size target of curium-242 (242Cm) was bombarded with 35 MeV alpha particles (4He) in the 60-inch-diameter (1.52 m) cyclotron at Berkeley, which produced californium-245 (245Cf) plus one free neutron (n).
To identify and separate out the element, ion exchange and adsorption methods were undertaken. Only about 5,000 atoms of californium were produced in this experiment, and these atoms had a half-life of 44 minutes.
The discoverers named the new element after the university and the state. This was a break from the convention used for elements 95 to 97, which drew inspiration from how the elements directly above them in the periodic table were named. However, the element directly above #98 in the periodic table, dysprosium, has a name that means "hard to get at", so the researchers decided to set aside the informal naming convention. They added that "the best we can do is to point out [that] ... searchers a century ago found it difficult to get to California".
Weighable amounts of californium were first produced by the irradiation of plutonium targets at the Materials Testing Reactor at the National Reactor Testing Station in eastern Idaho; these findings were reported in 1954. The high spontaneous fission rate of californium-252 was observed in these samples. The first experiment with californium in concentrated form occurred in 1958. The isotopes 249Cf to 252Cf were isolated that same year from a sample of plutonium-239 that had been irradiated with neutrons in a nuclear reactor for five years. Two years later, in 1960, Burris Cunningham and James Wallman of the Lawrence Radiation Laboratory of the University of California created the first californium compounds—californium trichloride, californium(III) oxychloride, and californium oxide—by treating californium with steam and hydrochloric acid.
The High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, started producing small batches of californium in the 1960s. By 1995, HFIR nominally produced 500 milligrams (0.018 oz) of californium annually. Plutonium supplied by the United Kingdom to the United States under the 1958 US–UK Mutual Defence Agreement was used for making californium.
The Atomic Energy Commission sold 252Cf to industrial and academic customers in the early 1970s for $10 per microgram, and an average of 150 mg (0.0053 oz) of 252Cf were shipped each year from 1970 to 1990. Californium metal was first prepared in 1974 by Haire and Baybarz, who reduced californium(III) oxide with lanthanum metal to obtain microgram amounts of sub-micrometer thick films.
Traces of californium can be found near facilities that use the element in mineral prospecting and in medical treatments. The element is fairly insoluble in water, but it adheres well to ordinary soil; and concentrations of it in the soil can be 500 times higher than in the water surrounding the soil particles.
Nuclear fallout from atmospheric nuclear weapons testing prior to 1980 contributed a small amount of californium to the environment. Californium isotopes with mass numbers 249, 252, 253, and 254 have been observed in the radioactive dust collected from the air after a nuclear explosion. Californium is not a major radionuclide at United States Department of Energy legacy sites since it was not produced in large quantities.
Californium was once believed to be produced in supernovas, as the decay of their light curves matches the 60-day half-life of 254Cf. However, subsequent studies failed to demonstrate any californium spectra, and supernova light curves are now thought to follow the decay of nickel-56.
The transuranium elements from americium to fermium, including californium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Spectral lines of californium, along with those of several other non-primordial elements, were detected in Przybylski's Star in 2008.
Californium is produced in nuclear reactors and particle accelerators. Californium-250 is made by bombarding berkelium-249 (249Bk) with neutrons, forming berkelium-250 (250Bk) via neutron capture (n,γ) which, in turn, quickly beta decays (β−) to californium-250 (250Cf) in the following reaction:
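In nuclear shorthand, the production route reads (reconstructed from the description above, not quoted from the source):

$$^{249}_{\ 97}\mathrm{Bk}\ (n,\gamma)\ ^{250}_{\ 97}\mathrm{Bk}\ \xrightarrow{\ \beta^{-}\ }\ ^{250}_{\ 98}\mathrm{Cf}$$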
Bombardment of californium-250 with neutrons produces californium-251 and californium-252.
Prolonged irradiation of americium, curium, and plutonium with neutrons produces milligram amounts of californium-252 and microgram amounts of californium-249. As of 2006, curium isotopes 244 to 248 are irradiated by neutrons in special reactors to produce primarily californium-252 with lesser amounts of isotopes 249 to 255.
Microgram quantities of californium-252 are available for commercial use through the U.S. Nuclear Regulatory Commission. Only two sites produce californium-252: the Oak Ridge National Laboratory in the United States, and the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. As of 2003, the two sites produce 0.25 grams and 0.025 grams of californium-252 per year, respectively.
Three californium isotopes with significant half-lives are produced, requiring a total of 15 neutron captures by uranium-238 without nuclear fission or alpha decay occurring during the process. Californium-253 is at the end of a production chain that starts with uranium-238, includes several isotopes of plutonium, americium, curium, berkelium, and the californium isotopes 249 to 253 (see diagram).
Californium-252 has a number of specialized uses as a strong neutron emitter; it produces 139 million neutrons per microgram per minute. This property makes it useful as a startup neutron source for some nuclear reactors and as a portable (non-reactor based) neutron source for neutron activation analysis to detect trace amounts of elements in samples. Neutrons from californium are used as a treatment of certain cervical and brain cancers where other radiation therapy is ineffective. It has been used in educational applications since 1969, when the Georgia Institute of Technology received a loan of 119 μg of 252Cf from the Savannah River Site. It is also used with online elemental coal analyzers and bulk material analyzers in the coal and cement industries.
Neutron penetration into materials makes californium useful in detection instruments such as fuel rod scanners; in neutron radiography of aircraft and weapons components to detect corrosion, bad welds, cracks and trapped moisture; and in portable metal detectors. Neutron moisture gauges use 252Cf to find water and petroleum layers in oil wells, as a portable neutron source for gold and silver prospecting for on-the-spot analysis, and to detect ground water movement. The main uses of 252Cf in 1982 were reactor start-up (48.3%), fuel rod scanning (25.3%), and activation analysis (19.4%). By 1994, most 252Cf was used in neutron radiography (77.4%), with fuel rod scanning (12.1%) and reactor start-up (6.9%) as important but secondary uses. In 2021, fast neutrons from 252Cf were used for wireless data transmission.
251Cf has a very small calculated critical mass of about 5 kg (11 lb), high lethality, and a relatively short period of toxic environmental irradiation. The low critical mass of californium led to some exaggerated claims about possible uses for the element.
In October 2006, researchers announced that three atoms of oganesson (element 118) had been identified at the Joint Institute for Nuclear Research in Dubna, Russia, from bombarding 249Cf with calcium-48, making it the heaviest element ever made. The target contained about 10 mg of 249Cf deposited on a titanium foil of 32 cm² area. Californium has also been used to produce other transuranium elements; for example, lawrencium was first synthesized in 1961 by bombarding californium with boron nuclei.
Californium that bioaccumulates in skeletal tissue releases radiation that disrupts the body's ability to form red blood cells. The element plays no natural biological role in any organism due to its intense radioactivity and low concentration in the environment.
Californium can enter the body from ingesting contaminated food or drinks or by breathing air with suspended particles of the element. Once in the body, only 0.05% of the californium will reach the bloodstream. About 65% of that californium will be deposited in the skeleton, 25% in the liver, and the rest in other organs, or excreted, mainly in urine. Half of the californium deposited in the skeleton and liver is gone in 50 and 20 years, respectively. Californium in the skeleton adheres to bone surfaces before slowly migrating throughout the bone.
The element is most dangerous if taken into the body. In addition, californium-249 and californium-251 can cause tissue damage externally, through gamma ray emission. Ionizing radiation emitted by californium on bone and in the liver can cause cancer.
Media related to Californium at Wikimedia Commons | [
{
"paragraph_id": 0,
"text": "Californium is a synthetic chemical element; it has symbol Cf and atomic number 98. The element was first synthesized in 1950 at Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory), by bombarding curium with alpha particles (helium-4 ions). It is an actinide element, the sixth transuranium element to be synthesized, and has the second-highest atomic mass of all elements that have been produced in amounts large enough to see with the naked eye (after einsteinium). The element was named after the university and the U.S. state of California.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Two crystalline forms exist for californium at normal pressure: one above and one below 900 °C (1,650 °F). A third form exists at high pressure. Californium slowly tarnishes in air at room temperature. Californium compounds are dominated by the +3 oxidation state. The most stable of californium's twenty known isotopes is californium-251, with a half-life of 898 years. This short half-life means the element is not found in significant quantities in the Earth's crust. Cf, with a half-life of about 2.645 years, is the most common isotope used and is produced at Oak Ridge National Laboratory in the United States and Research Institute of Atomic Reactors in Russia.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Californium is one of the few transuranium elements with practical applications. Most of these applications exploit the property of certain radioactive isotopes of californium to emit neutrons. For example, californium can be used to help start up nuclear reactors, and it is employed as a source of neutrons when studying materials using neutron diffraction and neutron spectroscopy. Californium can also be used in nuclear synthesis of higher mass elements; oganesson (element 118) was synthesized by bombarding californium-249 atoms with calcium-48 ions. Users of californium must take into account radiological concerns and the element's ability to disrupt the formation of red blood cells by bioaccumulating in skeletal tissue.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Californium is a silvery-white actinide metal with a melting point of 900 ± 30 °C (1,650 ± 50 °F) and an estimated boiling point of 1,743 K (1,470 °C; 2,680 °F). The pure metal is malleable and is easily cut with a razor blade. Californium metal starts to vaporize above 300 °C (570 °F) when exposed to a vacuum. Below 51 K (−222 °C; −368 °F) californium metal is either ferromagnetic or ferrimagnetic (it acts like a magnet), between 48 and 66 K it is antiferromagnetic (an intermediate state), and above 160 K (−113 °C; −172 °F) it is paramagnetic (external magnetic fields can make it magnetic). It forms alloys with lanthanide metals but little is known about the resulting materials.",
"title": "Characteristics"
},
{
"paragraph_id": 4,
"text": "The element has two crystalline forms at standard atmospheric pressure: a double-hexagonal close-packed form dubbed alpha (α) and a face-centered cubic form designated beta (β). The α form exists below 600–800 °C with a density of 15.10 g/cm and the β form exists above 600–800 °C with a density of 8.74 g/cm. At 48 GPa of pressure the β form changes into an orthorhombic crystal system due to delocalization of the atom's 5f electrons, which frees them to bond.",
"title": "Characteristics"
},
{
"paragraph_id": 5,
"text": "The bulk modulus of a material is a measure of its resistance to uniform pressure. Californium's bulk modulus is 50±5 GPa, which is similar to trivalent lanthanide metals but smaller than more familiar metals, such as aluminium (70 GPa).",
"title": "Characteristics"
},
{
"paragraph_id": 6,
"text": "Californium exhibits oxidation states of 4, 3, or 2. It typically forms eight or nine bonds to surrounding atoms or ions. Its chemical properties are predicted to be similar to other primarily 3+ valence actinide elements and the element dysprosium, which is the lanthanide above californium in the periodic table. Compounds in the +4 oxidation state are strong oxidizing agents and those in the +2 state are strong reducing agents.",
"title": "Characteristics"
},
{
"paragraph_id": 7,
"text": "The element slowly tarnishes in air at room temperature, with the rate increasing when moisture is added. Californium reacts when heated with hydrogen, nitrogen, or a chalcogen (oxygen family element); reactions with dry hydrogen and aqueous mineral acids are rapid.",
"title": "Characteristics"
},
{
"paragraph_id": 8,
"text": "Californium is only water-soluble as the californium(III) cation. Attempts to reduce or oxidize the +3 ion in solution have failed. The element forms a water-soluble chloride, nitrate, perchlorate, and sulfate and is precipitated as a fluoride, oxalate, or hydroxide. Californium is the heaviest actinide to exhibit covalent properties, as is observed in the californium borate.",
"title": "Characteristics"
},
{
"paragraph_id": 9,
"text": "Twenty isotopes of californium are known (mass number ranging from 237 to 256); the most stable are Cf with half-life 898 years, Cf with half-life 351 years, Cf with half-life 13.08 years, and Cf with half-life 2.645 years. All other isotopes have half-life shorter than a year, and most of these have half-lives less than 20 minutes.",
"title": "Characteristics"
},
{
"paragraph_id": 10,
"text": "Cf is formed from beta decay of berkelium-249, and most other californium isotopes are made by subjecting berkelium to intense neutron radiation in a nuclear reactor. Though californium-251 has the longest half-life, its production yield is only 10% due to its tendency to collect neutrons (high neutron capture) and its tendency to interact with other particles (high neutron cross section).",
"title": "Characteristics"
},
{
"paragraph_id": 11,
"text": "Californium-252 is a very strong neutron emitter, which makes it extremely radioactive and harmful. Cf, 96.9% of the time, alpha decays to curium-248; the other 3.1% of decays are spontaneous fission. One microgram (μg) of Cf emits 2.3 million neutrons per second, an average of 3.7 neutrons per spontaneous fission. Most other isotopes of californium, alpha decay to curium (atomic number 96).",
"title": "Characteristics"
},
{
"paragraph_id": 12,
"text": "Californium was first made at University of California Radiation Laboratory, Berkeley, by physics researchers Stanley Gerald Thompson, Kenneth Street Jr., Albert Ghiorso, and Glenn T. Seaborg, about February 9, 1950. It was the sixth transuranium element to be discovered; the team announced its discovery on March 17, 1950.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "To produce californium, a microgram-size target of curium-242 (96Cm) was bombarded with 35 MeV alpha particles (2He) in the 60-inch-diameter (1.52 m) cyclotron at Berkeley, which produced californium-245 (98Cf) plus one free neutron (n).",
"title": "History"
},
{
"paragraph_id": 14,
"text": "To identify and separate out the element, ion exchange and adsorsion methods were undertaken. Only about 5,000 atoms of californium were produced in this experiment, and these atoms had a half-life of 44 minutes.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The discoverers named the new element after the university and the state. This was a break from the convention used for elements 95 to 97, which drew inspiration from how the elements directly above them in the periodic table were named. However, the element directly above #98 in the periodic table, dysprosium, has a name that means \"hard to get at\", so the researchers decided to set aside the informal naming convention. They added that \"the best we can do is to point out [that] ... searchers a century ago found it difficult to get to California\".",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Weighable amounts of californium were first produced by the irradiation of plutonium targets at Materials Testing Reactor at National Reactor Testing Station, eastern Idaho; these findings were reported in 1954. The high spontaneous fission rate of californium-252 was observed in these samples. The first experiment with californium in concentrated form occurred in 1958. The isotopes Cf to Cf were isolated that same year from a sample of plutonium-239 that had been irradiated with neutrons in a nuclear reactor for five years. Two years later, in 1960, Burris Cunningham and James Wallman of Lawrence Radiation Laboratory of the University of California created the first californium compounds—californium trichloride, californium(III) oxychloride, and californium oxide—by treating californium with steam and hydrochloric acid.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "The High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, started producing small batches of californium in the 1960s. By 1995, HFIR nominally produced 500 milligrams (0.018 oz) of californium annually. Plutonium supplied by the United Kingdom to the United States under the 1958 US–UK Mutual Defence Agreement was used for making californium.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The Atomic Energy Commission sold Cf to industrial and academic customers in the early 1970s for $10 per microgram, and an average of 150 mg (0.0053 oz) of Cf were shipped each year from 1970 to 1990. Californium metal was first prepared in 1974 by Haire and Baybarz, who reduced californium(III) oxide with lanthanum metal to obtain microgram amounts of sub-micrometer thick films.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Traces of californium can be found near facilities that use the element in mineral prospecting and in medical treatments. The element is fairly insoluble in water, but it adheres well to ordinary soil; and concentrations of it in the soil can be 500 times higher than in the water surrounding the soil particles.",
"title": "Occurrence"
},
{
"paragraph_id": 20,
"text": "Nuclear fallout from atmospheric nuclear weapons testing prior to 1980 contributed a small amount of californium to the environment. Californium isotopes with mass numbers 249, 252, 253, and 254 have been observed in the radioactive dust collected from the air after a nuclear explosion. Californium is not a major radionuclide at United States Department of Energy legacy sites since it was not produced in large quantities.",
"title": "Occurrence"
},
{
"paragraph_id": 21,
"text": "Californium was once believed to be produced in supernovas, as their decay matches the 60-day half-life of Cf. However, subsequent studies failed to demonstrate any californium spectra, and supernova light curves are now thought to follow the decay of nickel-56.",
"title": "Occurrence"
},
{
"paragraph_id": 22,
"text": "The transuranium elements from americium to fermium, including californium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.",
"title": "Occurrence"
},
{
"paragraph_id": 23,
"text": "Spectral lines of californium, along with those of several other non-primordial elements, were detected in Przybylski's Star in 2008.",
"title": "Occurrence"
},
{
"paragraph_id": 24,
"text": "Californium is produced in nuclear reactors and particle accelerators. Californium-250 is made by bombarding berkelium-249 (97Bk) with neutrons, forming berkelium-250 (97Bk) via neutron capture (n,γ) which, in turn, quickly beta decays (β) to californium-250 (98Cf) in the following reaction:",
"title": "Production"
},
{
"paragraph_id": 25,
"text": "Bombardment of californium-250 with neutrons produces californium-251 and californium-252.",
"title": "Production"
},
{
"paragraph_id": 26,
"text": "Prolonged irradiation of americium, curium, and plutonium with neutrons produces milligram amounts of californium-252 and microgram amounts of californium-249. As of 2006, curium isotopes 244 to 248 are irradiated by neutrons in special reactors to produce primarily californium-252 with lesser amounts of isotopes 249 to 255.",
"title": "Production"
},
{
"paragraph_id": 27,
"text": "Microgram quantities of californium-252 are available for commercial use through the U.S. Nuclear Regulatory Commission. Only two sites produce californium-252: the Oak Ridge National Laboratory in the United States, and the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. As of 2003, the two sites produce 0.25 grams and 0.025 grams of californium-252 per year, respectively.",
"title": "Production"
},
{
"paragraph_id": 28,
"text": "Three californium isotopes with significant half-lives are produced, requiring a total of 15 neutron captures by uranium-238 without nuclear fission or alpha decay occurring during the process. Californium-253 is at the end of a production chain that starts with uranium-238, includes several isotopes of plutonium, americium, curium, berkelium, and the californium isotopes 249 to 253 (see diagram).",
"title": "Production"
},
{
"paragraph_id": 29,
"text": "Californium-252 has a number of specialized uses as a strong neutron emitter; it produces 139 million neutrons per microgram per minute. This property makes it useful as a startup neutron source for some nuclear reactors and as a portable (non-reactor based) neutron source for neutron activation analysis to detect trace amounts of elements in samples. Neutrons from californium are used as a treatment of certain cervical and brain cancers where other radiation therapy is ineffective. It has been used in educational applications since 1969 when Georgia Institute of Technology got a loan of 119 μg of Cf from the Savannah River Site. It is also used with online elemental coal analyzers and bulk material analyzers in the coal and cement industries.",
"title": "Applications"
},
{
"paragraph_id": 30,
"text": "Neutron penetration into materials makes californium useful in detection instruments such as fuel rod scanners; neutron radiography of aircraft and weapons components to detect corrosion, bad welds, cracks and trapped moisture; and in portable metal detectors. Neutron moisture gauges use Cf to find water and petroleum layers in oil wells, as a portable neutron source for gold and silver prospecting for on-the-spot analysis, and to detect ground water movement. The main uses of Cf in 1982 were, reactor start-up (48.3%), fuel rod scanning (25.3%), and activation analysis (19.4%). By 1994, most Cf was used in neutron radiography (77.4%), with fuel rod scanning (12.1%) and reactor start-up (6.9%) as important but secondary uses. In 2021, fast neutrons from Cf were used for wireless data transmission.",
"title": "Applications"
},
{
"paragraph_id": 31,
"text": "Cf has a very small calculated critical mass of about 5 kg (11 lb), high lethality, and a relatively short period of toxic environmental irradiation. The low critical mass of californium led to some exaggerated claims about possible uses for the element.",
"title": "Applications"
},
{
"paragraph_id": 32,
"text": "In October 2006, researchers announced that three atoms of oganesson (element 118) had been identified at Joint Institute for Nuclear Research in Dubna, Russia, from bombarding Cf with calcium-48, making it the heaviest element ever made. The target contained about 10 mg of Cf deposited on a titanium foil of 32 cm area. Californium has also been used to produce other transuranium elements; for example, lawrencium was first synthesized in 1961 by bombarding californium with boron nuclei.",
"title": "Applications"
},
{
"paragraph_id": 33,
"text": "Californium that bioaccumulates in skeletal tissue releases radiation that disrupts the body's ability to form red blood cells. The element plays no natural biological role in any organism due to its intense radioactivity and low concentration in the environment.",
"title": "Precautions"
},
{
"paragraph_id": 34,
"text": "Californium can enter the body from ingesting contaminated food or drinks or by breathing air with suspended particles of the element. Once in the body, only 0.05% of the californium will reach the bloodstream. About 65% of that californium will be deposited in the skeleton, 25% in the liver, and the rest in other organs, or excreted, mainly in urine. Half of the californium deposited in the skeleton and liver are gone in 50 and 20 years, respectively. Californium in the skeleton adheres to bone surfaces before slowly migrating throughout the bone.",
"title": "Precautions"
},
{
"paragraph_id": 35,
"text": "The element is most dangerous if taken into the body. In addition, californium-249 and californium-251 can cause tissue damage externally, through gamma ray emission. Ionizing radiation emitted by californium on bone and in the liver can cause cancer.",
"title": "Precautions"
},
{
"paragraph_id": 36,
"text": "Media related to Californium at Wikimedia Commons",
"title": "External links"
}
] | Californium is a synthetic chemical element; it has symbol Cf and atomic number 98. The element was first synthesized in 1950 at Lawrence Berkeley National Laboratory, by bombarding curium with alpha particles. It is an actinide element, the sixth transuranium element to be synthesized, and has the second-highest atomic mass of all elements that have been produced in amounts large enough to see with the naked eye. The element was named after the university and the U.S. state of California. Two crystalline forms exist for californium at normal pressure: one above and one below 900 °C (1,650 °F). A third form exists at high pressure. Californium slowly tarnishes in air at room temperature. Californium compounds are dominated by the +3 oxidation state. The most stable of californium's twenty known isotopes is californium-251, with a half-life of 898 years. This short half-life means the element is not found in significant quantities in the Earth's crust. 252Cf, with a half-life of about 2.645 years, is the most common isotope used and is produced at Oak Ridge National Laboratory in the United States and Research Institute of Atomic Reactors in Russia. Californium is one of the few transuranium elements with practical applications. Most of these applications exploit the property of certain radioactive isotopes of californium to emit neutrons. For example, californium can be used to help start up nuclear reactors, and it is employed as a source of neutrons when studying materials using neutron diffraction and neutron spectroscopy. Californium can also be used in nuclear synthesis of higher mass elements; oganesson was synthesized by bombarding californium-249 atoms with calcium-48 ions. Users of californium must take into account radiological concerns and the element's ability to disrupt the formation of red blood cells by bioaccumulating in skeletal tissue. | 2001-05-17T14:32:17Z | 2023-12-06T02:06:57Z | [
"Template:SubatomicParticle",
"Template:Infobox californium",
"Template:Notes",
"Template:Cite journal",
"Template:Cite conference",
"Template:Californium compounds",
"Template:Further",
"Template:Nuclide",
"Template:Convert",
"Template:Val",
"Template:Su",
"Template:Cite book",
"Template:Commons category-inline",
"Template:Cite encyclopedia",
"Template:Commons",
"Template:About",
"Template:Efn",
"Template:Sfn",
"Template:Main",
"Template:See also",
"Template:Clear",
"Template:Featured article",
"Template:Reflist",
"Template:Cite web",
"Template:Wiktionary",
"Template:Use mdy dates",
"Template:Periodic table (navbox)",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Californium |
5,679 | Christian Social Union in Bavaria | The Christian Social Union in Bavaria (German: Christlich-Soziale Union in Bayern, CSU) is a Christian democratic and conservative political party in Germany. Having a regionalist identity, the CSU operates only in Bavaria while its larger counterpart, the Christian Democratic Union (CDU), operates in the other fifteen states of Germany. It differs from the CDU by being somewhat more conservative in social matters, following Catholic social teaching. The CSU is considered the de facto successor of the Weimar-era Catholic Bavarian People's Party.
At the federal level, the CSU forms a common faction in the Bundestag with the CDU which is frequently referred to as the Union Faction (die Unionsfraktion) or simply CDU/CSU. The CSU has 45 seats in the Bundestag since the 2021 federal election, making it currently the second smallest of the seven parties represented. The CSU is a member of the European People's Party and the International Democrat Union.
Party leader Markus Söder serves as Minister-President of Bavaria, a position that CSU representatives have held from 1946 to 1954 and again since 1957.
Franz Josef Strauß (1915–1988) left behind the strongest legacy as a leader of the party, which he led from 1961 until his death in 1988. His political career in the federal cabinet was unique in that he served in four ministerial posts between 1953 and 1969. From 1978 until his death in 1988, Strauß served as the Minister-President of Bavaria. Strauß was the first leader of the CSU to be a candidate for the German chancellery in 1980. In the 1980 federal election, Strauß ran against the incumbent Helmut Schmidt of the Social Democratic Party of Germany (SPD) but lost, as the SPD and the Free Democratic Party (FDP) together secured an absolute majority and formed a social-liberal coalition.
The CSU has led the Bavarian state government since it came into existence in 1946, save from 1954 to 1957 when the SPD formed a state government in coalition with the Bavaria Party and the state branches of the GB/BHE and FDP.
Initially, the separatist Bavaria Party (BP) successfully competed for the same electorate as the CSU, as both parties saw and presented themselves as successors to the BVP. The CSU was ultimately able to win this power struggle for itself. Among other things, the BP was involved in the "casino affair" under dubious circumstances by the CSU at the end of the 1950s and lost considerable prestige and votes. In the 1966 state election, the BP finally left the state parliament.
Before the 2008 elections in Bavaria, the CSU perennially achieved absolute majorities at the state level by itself. This level of dominance is unique among Germany's 16 states. Edmund Stoiber took over the CSU leadership in 1999. He ran for Chancellor of Germany in 2002, but his preferred CDU/CSU–FDP coalition lost against the SPD candidate Gerhard Schröder's SPD–Green alliance.
In the 2003 Bavarian state election, the CSU won 60.7% of the vote and 124 of 180 seats in the state parliament. This was the first time any party had won a two-thirds majority in a German state parliament. The Economist later suggested that this exceptional result was due to a backlash against Schröder's government in Berlin. The CSU's popularity declined in subsequent years. Stoiber stepped down from the posts of Minister-President and CSU chairman in September 2007. A year later, the CSU lost its majority in the 2008 Bavarian state election, with its vote share dropping from 60.7% to 43.4%. The CSU remained in power by forming a coalition with the FDP. In the 2009 general election, the CSU received only 42.5% of the vote in Bavaria, which at the time was its weakest showing in the party's history.
The CSU made gains in the 2013 Bavarian state election and the 2013 federal election, which were held a week apart in September 2013. The CSU regained their majority in the Bavarian Landtag and remained in government in Berlin. They had three ministers in the Fourth Merkel cabinet, namely Horst Seehofer (Minister of the Interior, Building and Community), Andreas Scheuer (Minister of Transport and Digital Infrastructure) and Gerd Müller (Minister for Economic Cooperation and Development).
The 2018 Bavarian state election, in which Markus Söder was the CSU's top candidate, yielded the party's worst state election result since 1950, with 37.2% of votes, a decline of over ten percentage points compared to its 2013 result. After that, the CSU had to form a new coalition government with the minor partner Free Voters of Bavaria.
The 2021 German federal election saw the worst election result ever for the Union. The CSU also had a weak showing with 5.2% of votes nationally and 31.7% of the total in Bavaria.
The CSU is the sister party of the Christian Democratic Union (CDU). Together, they are called the Union. The CSU operates only within Bavaria, and the CDU operates in all states other than Bavaria. While virtually independent, at the federal level the parties form a common CDU/CSU faction. No Chancellor has ever come from the CSU, although Strauß and Edmund Stoiber were CDU/CSU candidates for Chancellor in the 1980 federal election and the 2002 federal election, respectively, which were both won by the Social Democratic Party of Germany (SPD). Below the federal level, the parties are entirely independent.
Since its formation, the CSU has been more conservative than the CDU. CSU and the state of Bavaria decided not to sign the Grundgesetz of the Federal Republic of Germany as they could not agree with the division of Germany into two states after World War II. Although Bavaria like all German states has a separate police and justice system (distinctive and non-federal), the CSU has actively participated in all political affairs of the German Parliament, the German government, the German Bundesrat, the parliamentary elections of the German President, the European Parliament and meetings with Mikhail Gorbachev in Russia.
Like the CDU, the CSU is pro-European, although some Eurosceptic tendencies were shown in the past.
The CSU has contributed eleven of the twelve Ministers-President of Bavaria since 1945, with only Wilhelm Hoegner (1945–1946, 1954–1957) of the SPD also holding the office. | [
{
"paragraph_id": 0,
"text": "The Christian Social Union in Bavaria (German: Christlich-Soziale Union in Bayern, CSU) is a Christian democratic and conservative political party in Germany. Having a regionalist identity, the CSU operates only in Bavaria while its larger counterpart, the Christian Democratic Union (CDU), operates in the other fifteen states of Germany. It differs from the CDU by being somewhat more conservative in social matters, following Catholic social teaching. The CSU is considered the de facto successor of the Weimar-era Catholic Bavarian People's Party.",
"title": ""
},
{
"paragraph_id": 1,
"text": "At the federal level, the CSU forms a common faction in the Bundestag with the CDU which is frequently referred to as the Union Faction (die Unionsfraktion) or simply CDU/CSU. The CSU has 45 seats in the Bundestag since the 2021 federal election, making it currently the second smallest of the seven parties represented. The CSU is a member of the European People's Party and the International Democrat Union.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Party leader Markus Söder serves as Minister-President of Bavaria, a position that CSU representatives have held from 1946 to 1954 and again since 1957.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Franz Josef Strauß (1915–1988) had left behind the strongest legacy as a leader of the party, having led the party from 1961 until his death in 1988. His political career in the federal cabinet was unique in that he had served in four ministerial posts in the years between 1953 and 1969. From 1978 until his death in 1988, Strauß served as the Minister-President of Bavaria. Strauß was the first leader of the CSU to be a candidate for the German chancellery in 1980. In the 1980 federal election, Strauß ran against the incumbent Helmut Schmidt of the Social Democratic Party of Germany (SPD) but lost thereafter as the SPD and the Free Democratic Party (FDP) managed to secure an absolute majority together, forming a social-liberal coalition.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The CSU has led the Bavarian state government since it came into existence in 1946, save from 1954 to 1957 when the SPD formed a state government in coalition with the Bavaria Party and the state branches of the GB/BHE and FDP.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Initially, the separatist Bavaria Party (BP) successfully competed for the same electorate as the CSU, as both parties saw and presented themselves as successors to the BVP. The CSU was ultimately able to win this power struggle for itself. Among other things, the BP was involved in the \"casino affair\" under dubious circumstances by the CSU at the end of the 1950s and lost considerable prestige and votes. In the 1966 state election, the BP finally left the state parliament.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Before the 2008 elections in Bavaria, the CSU perennially achieved absolute majorities at the state level by itself. This level of dominance is unique among Germany's 16 states. Edmund Stoiber took over the CSU leadership in 1999. He ran for Chancellor of Germany in 2002, but his preferred CDU/CSU–FDP coalition lost against the SPD candidate Gerhard Schröder's SPD–Green alliance.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In the 2003 Bavarian state election, the CSU won 60.7% of the vote and 124 of 180 seats in the state parliament. This was the first time any party had won a two-thirds majority in a German state parliament. The Economist later suggested that this exceptional result was due to a backlash against Schröder's government in Berlin. The CSU's popularity declined in subsequent years. Stoiber stepped down from the posts of Minister-President and CSU chairman in September 2007. A year later, the CSU lost its majority in the 2008 Bavarian state election, with its vote share dropping from 60.7% to 43.4%. The CSU remained in power by forming a coalition with the FDP. In the 2009 general election, the CSU received only 42.5% of the vote in Bavaria in the 2009 election, which by then constituted its weakest showing in the party's history.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The CSU made gains in the 2013 Bavarian state election and the 2013 federal election, which were held a week apart in September 2013. The CSU regained their majority in the Bavarian Landtag and remained in government in Berlin. They had three ministers in the Fourth Merkel cabinet, namely Horst Seehofer (Minister of the Interior, Building and Community), Andreas Scheuer (Minister of Transport and Digital Infrastructure) and Gerd Müller (Minister for Economic Cooperation and Development).",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The 2018 Bavarian state election yielded the worst result for the CSU in the state elections (top candidate Markus Söder) since 1950 with 37.2% of votes, a decline of over ten percentage points compared to the last result in 2013. After that, the CSU had to form a new coalition government with the minor partner Free Voters of Bavaria.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The 2021 German federal election saw the worst election result ever for the Union. The CSU also had a weak showing with 5.2% of votes nationally and 31.7% of the total in Bavaria.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The CSU is the sister party of the Christian Democratic Union (CDU). Together, they are called the Union. The CSU operates only within Bavaria, and the CDU operates in all states other than Bavaria. While virtually independent, at the federal level the parties form a common CDU/CSU faction. No Chancellor has ever come from the CSU, although Strauß and Edmund Stoiber were CDU/CSU candidates for Chancellor in the 1980 federal election and the 2002 federal election, respectively, which were both won by the Social Democratic Party of Germany (SPD). Below the federal level, the parties are entirely independent.",
"title": "Relationship with the CDU"
},
{
"paragraph_id": 12,
"text": "Since its formation, the CSU has been more conservative than the CDU. CSU and the state of Bavaria decided not to sign the Grundgesetz of the Federal Republic of Germany as they could not agree with the division of Germany into two states after World War II. Although Bavaria like all German states has a separate police and justice system (distinctive and non-federal), the CSU has actively participated in all political affairs of the German Parliament, the German government, the German Bundesrat, the parliamentary elections of the German President, the European Parliament and meetings with Mikhail Gorbachev in Russia.",
"title": "Relationship with the CDU"
},
{
"paragraph_id": 13,
"text": "Like the CDU, the CSU is pro-European, although some Eurosceptic tendencies were shown in the past.",
"title": "Relationship with the CDU"
},
{
"paragraph_id": 14,
"text": "The CSU has contributed eleven of the twelve Ministers-President of Bavaria since 1945, with only Wilhelm Hoegner (1945–1946, 1954–1957) of the SPD also holding the office.",
"title": "Leaders"
}
] | The Christian Social Union in Bavaria is a Christian democratic and conservative political party in Germany. Having a regionalist identity, the CSU operates only in Bavaria while its larger counterpart, the Christian Democratic Union (CDU), operates in the other fifteen states of Germany. It differs from the CDU by being somewhat more conservative in social matters, following Catholic social teaching. The CSU is considered the de facto successor of the Weimar-era Catholic Bavarian People's Party. At the federal level, the CSU forms a common faction in the Bundestag with the CDU, which is frequently referred to as the Union Faction or simply CDU/CSU. The CSU has held 45 seats in the Bundestag since the 2021 federal election, making it currently the second smallest of the seven parties represented. The CSU is a member of the European People's Party and the International Democrat Union. Party leader Markus Söder serves as Minister-President of Bavaria, a position that CSU representatives have held from 1946 to 1954 and again since 1957. | 2001-05-17T17:33:20Z | 2023-12-20T17:41:23Z | [
"Template:Conservatism in Germany",
"Template:See also",
"Template:Cite journal",
"Template:Cite news",
"Template:International Democrat Union",
"Template:Audio",
"Template:Example needed",
"Template:Yes2",
"Template:No2",
"Template:Webarchive",
"Template:In lang",
"Template:Short description",
"Template:Use dmy dates",
"Template:Infobox political party",
"Template:Composition bar",
"Template:Cite web",
"Template:Reflist",
"Template:ISBN",
"Template:Christian Social Union in Bavaria",
"Template:European People's Party",
"Template:Parties of Germany",
"Template:Politics of Bavaria",
"Template:Decrease",
"Template:Steady",
"Template:Authority control",
"Template:Increase",
"Template:Portal",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Christian_Social_Union_in_Bavaria |
5,681 | Corporate title | Corporate titles or business titles are given to corporate officers to show what duties and responsibilities they have in the organization. Such titles are used by publicly and privately held for-profit corporations, cooperatives, non-profit organizations, educational institutions, partnerships, and sole proprietorships that also confer corporate titles.
There are considerable variations in the composition and responsibilities of corporate titles.
Within the corporate office or corporate center of a corporation, some corporations have a chairman and chief executive officer (CEO) as the top-ranking executive, while the number two is the president and chief operating officer (COO); other corporations have a president and CEO but no official deputy. Typically, senior managers are "higher" than vice presidents, although many times a senior officer may also hold a vice president title, such as executive vice president and chief financial officer (CFO). The board of directors is technically not part of management itself, although its chairman may be considered part of the corporate office if he or she is an executive chairman.
A corporation often consists of different businesses, whose senior executives report directly to the CEO or COO, but that depends on the form of the business. If organized as a division then the top manager is often known as an executive vice president (EVP). If that business is a subsidiary which has considerably more independence, then the title might be chairman and CEO.
In many countries, particularly in Europe and Asia, there is a separate executive board for day-to-day business and supervisory board (elected by shareholders) for control purposes. In these countries, the CEO presides over the executive board and the chairman presides over the supervisory board, and these two roles will always be held by different people. This ensures a distinction between management by the executive board and governance by the supervisory board. This seemingly allows for clear lines of authority. There is a strong parallel here with the structure of government, which tends to separate the political cabinet from the management civil service.
In the United States and other countries that follow a single-board corporate structure, the board of directors (elected by the shareholders) is often equivalent to the European or Asian supervisory board, while the functions of the executive board may be vested either in the board of directors or in a separate committee, which may be called an operating committee (J.P. Morgan Chase), management committee (Goldman Sachs), executive committee (Lehman Brothers), executive council (Hewlett-Packard), or executive board (HeiG) composed of the division/subsidiary heads and senior officers that report directly to the CEO.
State laws in the United States traditionally required certain positions to be created within every corporation, such as president, secretary and treasurer. Today, the approach under the Model Business Corporation Act, which is employed in many states, is to grant corporations discretion in determining which titles to have, with the only mandated organ being the board of directors.
Some states that do not employ the MBCA continue to require that certain offices be established. Under the law of Delaware, where most large US corporations are established, stock certificates must be signed by two officers with titles specified by law (e.g. a president and secretary or a president and treasurer). Every corporation incorporated in California must have a chairman of the board or a president (or both), as well as a secretary and a chief financial officer.
Limited liability company (LLC)-structured companies are generally run directly by their members, but the members can agree to appoint officers such as a CEO or to appoint "managers" to operate the company.
American companies are generally led by a CEO. In some companies, the CEO also has the title of "president". In other companies, a president is a different person, and the primary duties of the two positions are defined in the company's bylaws (or the laws of the governing legal jurisdiction). Many companies also have a CFO, a chief operating officer (COO) and other senior positions such as chief legal officer (CLO), chief strategy officer (CSO), chief marketing officer (CMO), etc. that report to the president and CEO. The next level, which does not comprise executive positions, is middle management; these managers may be titled "vice presidents", "directors", or "managers", depending on the size and required managerial depth of the company.
In British English, the title of managing director is generally synonymous with that of chief executive officer. Managing directors do not have any particular authority under the Companies Act in the UK, but do have implied authority based on the general understanding of what their position entails, as well as any authority expressly delegated by the board of directors.
In Japan, corporate titles are roughly standardized across companies and organizations; although there is variation from company to company, corporate titles within a company are always consistent, and the large companies in Japan generally follow the same outline. These titles are the formal titles that are used on business cards. Korean corporate titles are similar to those of Japan.
Legally, Japanese and Korean companies are only required to have a board of directors with at least one representative director. In Japanese, a company director is called a torishimariyaku (取締役) and a representative director is called a daihyō torishimariyaku (代表取締役). The equivalent Korean titles are isa (이사, 理事) and daepyo-isa (대표이사, 代表理事). These titles are often combined with lower titles, e.g. senmu torishimariyaku or jōmu torishimariyaku for Japanese executives who are also board members. Most Japanese companies also have statutory auditors, who operate alongside the board of directors in supervisory roles.
Under the commercial code in Japan, Jugyōin (従業員), meaning "employee", is distinct from Shain (社員), which in its legal sense denotes a company's members, i.e. its equity holders; the colloquial Kaishain (会社員) means "company employee", not "stockholder".
The typical structure of executive titles in large companies includes the following:
The top management group, comprising jomu/sangmu and above, is often referred to collectively as "cadre" or "senior management" (幹部 or 重役; kambu or juyaku in Japanese; ganbu or jungyŏk in Korean).
Some Japanese and Korean companies have also adopted American-style titles, but these are not yet widespread and their usage varies. For example, although there is a Korean translation for "chief operating officer" (최고운영책임자, choego unyŏng chaegimja), not many companies have yet adopted it with the exception of a few multi-national companies such as Samsung and CJ (a spin-off from Samsung), while the CFO title is often used alongside other titles such as bu-sajang (SEVP) or Jŏnmu (EVP).
Since the late 1990s, many Japanese companies have introduced the title of shikkō yakuin (執行役員) or 'officer', seeking to emulate the separation of directors and officers found in American companies. In 2002, the statutory title of shikkō yaku (執行役) was introduced for use in companies that introduced a three-committee structure in their board of directors. The titles are frequently given to buchō and higher-level personnel. Although the two titles are very similar in intent and usage, there are several legal distinctions: shikkō yaku make their own decisions in the course of performing work delegated to them by the board of directors, and are considered managers of the company rather than employees, with a legal status similar to that of directors. Shikkō yakuin are considered employees of the company that follow the decisions of the board of directors, although in some cases directors may have the shikkō yakuin title as well.
The highest-level executives in senior management usually have titles beginning with "chief" and ending with "officer", forming what is often called the "C-suite", or "CxO", where "x" is a variable that could be any functional area (not to be confused with CXO). The traditional three such officers are CEO, COO, and CFO. Depending on the management structure, titles may exist instead of, or be blended/overlapped with, other traditional executive titles, such as president, various designations of vice presidents (e.g. VP of marketing), and general managers or directors of various divisions (such as director of marketing); the latter may or may not imply membership of the board of directors.
Certain other prominent positions have emerged, some of which are sector-specific. For example, chief audit executive (CAE), chief procurement officer (CPO) and chief risk officer (CRO) positions are often found in many types of financial services companies. Technology companies of all sorts now tend to have a chief technology officer (CTO) to manage technology development. A chief information officer (CIO) oversees information technology (IT) matters, either in companies that specialize in IT or in any kind of company that relies on it for supporting infrastructure.
Many companies now also have a chief marketing officer (CMO), particularly mature companies in competitive sectors, where brand management is a high priority. A chief value officer (CVO) is introduced in companies where business processes and organizational entities are focused on the creation and maximization of value. Approximately 50% of the S&P 500 companies have created a chief strategy officer (CSO) in their top management team to lead strategic planning and manage inorganic growth, which provides a long range perspective versus the tactical view of the COO or CFO. This function often replaces a COO on the C-Suite team, in cases where the company wants to focus on growth rather than efficiency and cost containment. A chief administrative officer (CAO) may be found in many large complex organizations that have various departments or divisions. Additionally, many companies now call their top diversity leadership position the chief diversity officer (CDO). However, this and many other nontraditional and lower-ranking titles are not universally recognized as corporate officers, and they tend to be specific to particular organizational cultures or the preferences of employees.
Chairman of the board – presiding officer of the corporate board of directors. The chairman influences the board of directors, which in turn elects and removes the officers of a corporation and oversees the human, financial, environmental and technical operations of a corporation. | [
{
"paragraph_id": 0,
"text": "Corporate titles or business titles are given to corporate officers to show what duties and responsibilities they have in the organization. Such titles are used by publicly and privately held for-profit corporations, cooperatives, non-profit organizations, educational institutions, partnerships, and sole proprietorships that also confer corporate titles.",
"title": ""
},
{
"paragraph_id": 1,
"text": "There are considerable variations in the composition and responsibilities of corporate title.",
"title": "Variations"
},
{
"paragraph_id": 2,
"text": "Within the corporate office or corporate center of a corporation, some corporations have a chairman and chief executive officer (CEO) as the top-ranking executive, while the number two is the president and chief operating officer (COO); other corporations have a president and CEO but no official deputy. Typically, senior managers are \"higher\" than vice presidents, although many times a senior officer may also hold a vice president title, such as executive vice president and chief financial officer (CFO). The board of directors is technically not part of management itself, although its chairman may be considered part of the corporate office if he or she is an executive chairman.",
"title": "Variations"
},
{
"paragraph_id": 3,
"text": "A corporation often consists of different businesses, whose senior executives report directly to the CEO or COO, but that depends on the form of the business. If organized as a division then the top manager is often known as an executive vice president (EVP). If that business is a subsidiary which has considerably more independence, then the title might be chairman and CEO.",
"title": "Variations"
},
{
"paragraph_id": 4,
"text": "In many countries, particularly in Europe and Asia, there is a separate executive board for day-to-day business and supervisory board (elected by shareholders) for control purposes. In these countries, the CEO presides over the executive board and the chairman presides over the supervisory board, and these two roles will always be held by different people. This ensures a distinction between management by the executive board and governance by the supervisory board. This seemingly allows for clear lines of authority. There is a strong parallel here with the structure of government, which tends to separate the political cabinet from the management civil service.",
"title": "Variations"
},
{
"paragraph_id": 5,
"text": "In the United States and other countries that follow a single-board corporate structure, the board of directors (elected by the shareholders) is often equivalent to the European or Asian supervisory board, while the functions of the executive board may be vested either in the board of directors or in a separate committee, which may be called an operating committee (J.P. Morgan Chase), management committee (Goldman Sachs), executive committee (Lehman Brothers), executive council (Hewlett-Packard), or executive board (HeiG) composed of the division/subsidiary heads and senior officers that report directly to the CEO.",
"title": "Variations"
},
{
"paragraph_id": 6,
"text": "State laws in the United States traditionally required certain positions to be created within every corporation, such as president, secretary and treasurer. Today, the approach under the Model Business Corporation Act, which is employed in many states, is to grant corporations discretion in determining which titles to have, with the only mandated organ being the board of directors.",
"title": "Variations"
},
{
"paragraph_id": 7,
"text": "Some states that do not employ the MBCA continue to require that certain offices be established. Under the law of Delaware, where most large US corporations are established, stock certificates must be signed by two officers with titles specified by law (e.g. a president and secretary or a president and treasurer). Every corporation incorporated in California must have a chairman of the board or a president (or both), as well as a secretary and a chief financial officer.",
"title": "Variations"
},
{
"paragraph_id": 8,
"text": "Limited liability company (LLC)-structured companies are generally run directly by their members, but the members can agree to appoint officers such as a CEO or to appoint \"managers\" to operate the company.",
"title": "Variations"
},
{
"paragraph_id": 9,
"text": "American companies are generally led by a CEO. In some companies, the CEO also has the title of \"president\". In other companies, a president is a different person, and the primary duties of the two positions are defined in the company's bylaws (or the laws of the governing legal jurisdiction). Many companies also have a CFO, a chief operating officer (COO) and other senior positions such as chief legal officer (CLO), chief strategy officer (CSO), chief marketing officer (CMO), etc. that report to the president and CEO. The next level, which are not executive positions, is middle management and may be called \"vice presidents\", \"directors\" or \"managers\", depending on the size and required managerial depth of the company.",
"title": "Variations"
},
{
"paragraph_id": 10,
"text": "In British English, the title of managing director is generally synonymous with that of chief executive officer. Managing directors do not have any particular authority under the Companies Act in the UK, but do have implied authority based on the general understanding of what their position entails, as well as any authority expressly delegated by the board of directors.",
"title": "Variations"
},
{
"paragraph_id": 11,
"text": "In Japan, corporate titles are roughly standardized across companies and organizations; although there is variation from company to company, corporate titles within a company are always consistent, and the large companies in Japan generally follow the same outline. These titles are the formal titles that are used on business cards. Korean corporate titles are similar to those of Japan.",
"title": "Variations"
},
{
"paragraph_id": 12,
"text": "Legally, Japanese and Korean companies are only required to have a board of directors with at least one representative director. In Japanese, a company director is called a torishimariyaku (取締役) and a representative director is called a daihyō torishimariyaku (代表取締役). The equivalent Korean titles are isa (이사, 理事) and daepyo-isa (대표이사, 代表理事). These titles are often combined with lower titles, e.g. senmu torishimariyaku or jōmu torishimariyaku for Japanese executives who are also board members. Most Japanese companies also have statutory auditors, who operate alongside the board of directors in supervisory roles.",
"title": "Variations"
},
{
"paragraph_id": 13,
"text": "Under the commercial code in Japan, Jugyōin (従業員) meaning the \"employee\", is different from Kaishain (会社員), meaning the \"stockholders\".",
"title": "Variations"
},
{
"paragraph_id": 14,
"text": "The typical structure of executive titles in large companies includes the following:",
"title": "Variations"
},
{
"paragraph_id": 15,
"text": "The top management group, comprising jomu/sangmu and above, is often referred to collectively as \"cadre\" or \"senior management\" (幹部 or 重役; kambu or juyaku in Japanese; ganbu or jungyŏk in Korean).",
"title": "Variations"
},
{
"paragraph_id": 16,
"text": "Some Japanese and Korean companies have also adopted American-style titles, but these are not yet widespread and their usage varies. For example, although there is a Korean translation for \"chief operating officer\" (최고운영책임자, choego unyŏng chaegimja), not many companies have yet adopted it with the exception of a few multi-national companies such as Samsung and CJ (a spin-off from Samsung), while the CFO title is often used alongside other titles such as bu-sajang (SEVP) or Jŏnmu (EVP).",
"title": "Variations"
},
{
"paragraph_id": 17,
"text": "Since the late 1990s, many Japanese companies have introduced the title of shikkō yakuin (執行役員) or 'officer', seeking to emulate the separation of directors and officers found in American companies. In 2002, the statutory title of shikkō yaku (執行役) was introduced for use in companies that introduced a three-committee structure in their board of directors. The titles are frequently given to buchō and higher-level personnel. Although the two titles are very similar in intent and usage, there are several legal distinctions: shikkō yaku make their own decisions in the course of performing work delegated to them by the board of directors, and are considered managers of the company rather than employees, with a legal status similar to that of directors. Shikkō yakuin are considered employees of the company that follow the decisions of the board of directors, although in some cases directors may have the shikkō yakuin title as well.",
"title": "Variations"
},
{
"paragraph_id": 18,
"text": "The highest-level executives in senior management usually have titles beginning with \"chief\" and ending with \"officer\", forming what is often called the \"C-suite\", or \"CxO\", where \"x\" is a variable that could be any functional area (not to be confused with CXO). The traditional three such officers are CEO, COO, and CFO. Depending on the management structure, titles may exist instead of, or be blended/overlapped with, other traditional executive titles, such as president, various designations of vice presidents (e.g. VP of marketing), and general managers or directors of various divisions (such as director of marketing); the latter may or may not imply membership of the board of directors.",
"title": "Senior management"
},
{
"paragraph_id": 19,
"text": "Certain other prominent positions have emerged, some of which are sector-specific. For example, chief audit executive (CAE), chief procurement officer (CPO) and chief risk officer (CRO) positions are often found in many types of financial services companies. Technology companies of all sorts now tend to have a chief technology officer (CTO) to manage technology development. A chief information officer (CIO) oversees information technology (IT) matters, either in companies that specialize in IT or in any kind of company that relies on it for supporting infrastructure.",
"title": "Senior management"
},
{
"paragraph_id": 20,
"text": "Many companies now also have a chief marketing officer (CMO), particularly mature companies in competitive sectors, where brand management is a high priority. A chief value officer (CVO) is introduced in companies where business processes and organizational entities are focused on the creation and maximization of value. Approximately 50% of the S&P 500 companies have created a chief strategy officer (CSO) in their top management team to lead strategic planning and manage inorganic growth, which provides a long range perspective versus the tactical view of the COO or CFO. This function often replaces a COO on the C-Suite team, in cases where the company wants to focus on growth rather than efficiency and cost containment. A chief administrative officer (CAO) may be found in many large complex organizations that have various departments or divisions. Additionally, many companies now call their top diversity leadership position the chief diversity officer (CDO). However, this and many other nontraditional and lower-ranking titles are not universally recognized as corporate officers, and they tend to be specific to particular organizational cultures or the preferences of employees.",
"title": "Senior management"
},
{
"paragraph_id": 21,
"text": "Chairman of the board – presiding officer of the corporate board of directors. The chairman influences the board of directors, which in turn elects and removes the officers of a corporation and oversees the human, financial, environmental and technical operations of a corporation.",
"title": "Senior management"
}
] | Corporate titles or business titles are given to corporate officers to show what duties and responsibilities they have in the organization. Such titles are used by publicly and privately held for-profit corporations, cooperatives, non-profit organizations, educational institutions, partnerships, and sole proprietorships that also confer corporate titles. | 2001-05-17T18:22:24Z | 2023-12-24T08:26:02Z | [
"Template:Unreferenced section",
"Template:Anchor",
"Template:Reflist",
"Template:Short description",
"Template:Multiple issues",
"Template:Portal",
"Template:Cite web",
"Template:Cite book",
"Template:Aspects of corporations",
"Template:Business administration",
"Template:Citation needed",
"Template:Cite news",
"Template:Cbignore",
"Template:Corporate titles"
] | https://en.wikipedia.org/wiki/Corporate_title |
5,685 | Cambridge, Massachusetts | Cambridge (/ˈkeɪmbrɪdʒ/ KAYM-brij) is a city in Middlesex County, Massachusetts, in the United States. It is a suburb in the Greater Boston metropolitan area, located directly across the Charles River from Boston. The city's population as of the 2020 U.S. census was 118,403, making it the most populous city in the county, the fourth-largest in Massachusetts, behind Boston, Worcester, and Springfield, and ninth-largest in New England. The city was named in honor of the University of Cambridge in Cambridge, England, which was an important center of the Puritan theology that was embraced by the town's founders.
Harvard University, an Ivy League university founded in Cambridge in 1636, is the oldest institution of higher learning in the United States. Massachusetts Institute of Technology (MIT), Lesley University, and Hult International Business School also are based in Cambridge. Radcliffe College, a women's liberal arts college, was based in Cambridge from its 1879 founding until its assimilation into Harvard in 1999.
Kendall Square, near MIT in the eastern part of Cambridge, has been called "the most innovative square mile on the planet" due to the high concentration of startup companies that have emerged there since 2010.
Founded in December 1630 during the colonial era, Cambridge was one of the first cities established in the Thirteen Colonies, and it went on to play a historic role during the American Revolution.
In May 1775, approximately 16,000 American patriots assembled in Cambridge Common to begin organizing a military retaliation against British troops following the Battles of Lexington and Concord. On July 2, 1775, two weeks after the Second Continental Congress in Philadelphia formally established the Continental Army and appointed George Washington its commander, Washington arrived at Cambridge Common to take command of the Patriot soldiers camped there, many of whom supported Washington's successful Siege of Boston, which prevented the garrisoned British troops from moving by land and ultimately forced them to abandon the city. Cambridge Common is celebrated as the birthplace of the Continental Army.
The Massachusett tribe inhabited the area that would become Cambridge for thousands of years prior to European colonization of the Americas, the site most recently bearing the name Anmoughcawgen. At the time of European contact and exploration, the area was inhabited by the Naumkeag or Pawtucket to the north and the Massachusett to the south, and may have been inhabited by other groups, such as the Totant, not well described in later European narratives. The contact period introduced a number of European infectious diseases, which decimated native populations in virgin soil epidemics, leaving the area uncontested upon the arrival of large groups of English settlers in 1630.
In December 1630, the site of present-day Cambridge was chosen for settlement because it was safely upriver from Boston Harbor, making it easily defensible from attacks by enemy ships. The city was founded by Thomas Dudley, his daughter Anne Bradstreet, and his son-in-law Simon Bradstreet. The first houses were built in the spring of 1631. The settlement was initially referred to as "the newe towne". Official Massachusetts records show the name rendered as Newe Towne by 1632, and as Newtowne by 1638.
Located at the first convenient Charles River crossing west of Boston, Newtowne was one of several towns, including Boston, Dorchester, Watertown, and Weymouth, founded by the 700 original Puritan colonists of the Massachusetts Bay Colony under Governor John Winthrop. Its first preacher was Thomas Hooker, who led many of its original inhabitants west in 1636 to found Hartford and the Connecticut Colony; before leaving, they sold their plots to more recent immigrants from England. The original village site is now within Harvard Square. The marketplace where farmers sold crops from surrounding towns at the edge of a salt marsh (since filled) remains within a small park at the corner of John F. Kennedy and Winthrop Streets.
In 1636, Newe College, later renamed Harvard College after benefactor John Harvard, was founded as North America's first institution of higher learning. Its initial purpose was training ministers. According to Cotton Mather, Newtowne was chosen for the site of the college by the Great and General Court, then the legislature of Massachusetts Bay Colony, primarily for its proximity to the popular and highly respected Puritan preacher Thomas Shepard. In May 1638, the settlement's name was changed to Cambridge in honor of the University of Cambridge in Cambridge, England.
In 1639, the Massachusetts General Court purchased the land that became present-day Cambridge from the Naumkeag Squaw Sachem of Mistick.
The town comprised a much larger area than the present city, with various outlying parts becoming independent towns over the years: Cambridge Village (later Newtown and now Newton) in 1688, Cambridge Farms (now Lexington) in 1712 or 1713, and Little or South Cambridge (now Brighton) and Menotomy or West Cambridge (now Arlington) in 1807. In the late 19th century, various schemes for annexing Cambridge to Boston were pursued and rejected.
Newtowne's ministers, Hooker and Shepard, the college's first president (Henry Dunster), the college's major benefactor (John Harvard), and the first schoolmaster, Nathaniel Eaton, were all Cambridge alumni, as was the colony's governor John Winthrop. In 1629, Winthrop had led the signing of the founding document of the city of Boston, which was known as the Cambridge Agreement, after the university. In 1650, Governor Thomas Dudley signed the charter creating the corporation that still governs Harvard College.
Cambridge grew slowly as an agricultural village eight miles (13 km) by road from Boston, the colony's capital. By the American Revolution, most residents lived near the Common and Harvard College, with most of the town comprising farms and estates. Most inhabitants were descendants of the original Puritan colonists, but there was also a small elite of Anglican "worthies" who were not involved in village life, made their livings from estates, investments, and trade, and lived in mansions along "the Road to Watertown", present-day Brattle Street, which is still known as Tory Row.
Having traveled north from Philadelphia after his appointment, George Washington took command of the force of Patriot soldiers camped on Cambridge Common on July 3, 1775; the Common is now considered the birthplace of the Continental Army.
On January 24, 1776, Henry Knox arrived with an artillery train captured from Fort Ticonderoga, which allowed Washington to force the British Army to evacuate Boston. Most of the Loyalist estates in Cambridge were confiscated after the Revolutionary War.
Between 1790 and 1840, Cambridge grew rapidly with the construction of West Boston Bridge in 1792 connecting Cambridge directly to Boston, making it no longer necessary to travel eight miles (13 km) through the Boston Neck, Roxbury, and Brookline to cross the Charles River. A second bridge, the Canal Bridge, opened in 1809 alongside the new Middlesex Canal. The new bridges and roads made what were formerly estates and marshland into prime industrial and residential districts.
In the mid-19th century, Cambridge was the center of a literary revolution. It was home to some of the famous Fireside poets, named because their poems would often be read aloud by families in front of their evening fires. The Fireside poets, including Henry Wadsworth Longfellow, James Russell Lowell, and Oliver Wendell Holmes, were highly popular and influential in this era.
Soon after, turnpikes were built: the Cambridge and Concord Turnpike (today's Broadway and Concord Ave.), the Middlesex Turnpike (Hampshire St. and Massachusetts Ave. northwest of Porter Square), and what are today's Cambridge, Main, and Harvard Streets connected various areas of Cambridge to the bridges. In addition, the town was connected to the Boston & Maine Railroad, leading to the development of Porter Square as well as the creation of neighboring Somerville from the formerly rural parts of Charlestown.
Cambridge was incorporated as a city in 1846. The city's commercial center began to shift from Harvard Square to Central Square, which became the city's downtown around that time.
Between 1850 and 1900, Cambridge took on much of its present character, featuring streetcar suburban development along the turnpikes, working-class and industrial neighborhoods focused on East Cambridge, comfortable middle-class housing on the old estates of Cambridgeport and Mid-Cambridge, and upper-class enclaves near Harvard University and on the minor hills. The arrival of the railroad in North Cambridge and Northwest Cambridge led to three changes: the development of massive brickyards and brickworks between Massachusetts Avenue, Concord Avenue, and Alewife Brook; the ice-cutting industry launched by Frederic Tudor on Fresh Pond; and the carving up of the last estates into residential subdivisions to house the thousands of immigrants who arrived to work in the new industries.
For much of the 19th and early 20th centuries, the city's largest employer was the New England Glass Company, founded in 1818. By the middle of the 19th century, it was the world's largest and most modern glassworks. In 1888, Edward Drummond Libbey moved all production to Toledo, Ohio, where it continues today under the name Owens-Illinois. The company's flint glassware with heavy lead content is prized by antique glass collectors, and the Toledo Museum of Art has a large collection. The Museum of Fine Arts in Boston and the Sandwich Glass Museum on Cape Cod also house several pieces.
In 1895, Edwin Ginn, founder of Ginn and Company, built the Athenaeum Press Building for his publishing textbook empire.
By 1920, Cambridge was one of New England's main industrial cities, with nearly 120,000 residents. Among the largest businesses in Cambridge during the period of industrialization was Carter's Ink Company, whose neon sign long adorned the Charles River and which was for many years the world's largest ink manufacturer. Next door was the Athenaeum Press. Confectionery and snack manufacturers in the Cambridgeport-Area 4-Kendall corridor included Kennedy Biscuit Factory, later part of Nabisco and originator of the Fig Newton, Necco, Squirrel Brands, George Close Company (1861–1930s), Page & Shaw, Daggett Chocolate (1892–1960s, recipes bought by Necco), Fox Cross Company (1920–1980, originator of the Charleston Chew, and now part of Tootsie Roll Industries), Kendall Confectionery Company, and James O. Welch (1927–1963, originator of Junior Mints, Sugar Daddies, Sugar Mamas, and Sugar Babies, now part of Tootsie Roll Industries). Main Street was nicknamed "Confectioner's Row".
Only the Cambridge Brands subsidiary of Tootsie Roll Industries remains in town, still manufacturing Junior Mints in the old Welch factory on Main Street. The Blake and Knowles Steam Pump Company (1886), the Kendall Boiler and Tank Company (1880, now in Chelmsford, Massachusetts), and the New England Glass Company (1818–1878) were among the industrial manufacturers in what are now Kendall Square and East Cambridge.
In 1935, the Cambridge Housing Authority and the Public Works Administration demolished an integrated low-income tenement neighborhood with African Americans and European immigrants. In its place, it built the whites-only "Newtowne Court" public housing development and the adjoining, blacks-only "Washington Elms" project in 1940; the city required segregation in its other public housing projects as well.
As industry in New England began to decline during the Great Depression and after World War II, Cambridge lost much of its industrial base. It also began to become an intellectual, rather than an industrial, center. Harvard University, which had always been important as both a landowner and an institution, began to play a more dominant role in the city's life and culture. When Radcliffe College was established in 1879, the town became a mecca for some of the nation's most academically talented female students. MIT's move from Boston to Cambridge in 1916 reinforced Cambridge's status as an intellectual center of the United States.
After the 1950s, the city's population began to decline slowly as families tended to be replaced by single people and young couples. In Cambridge Highlands, the technology company Bolt, Beranek, & Newman produced the first network router in 1969 and hosted the invention of computer-to-computer email in 1971. The 1980s brought a wave of high technology startups. Those selling advanced minicomputers were overtaken by the microcomputer. Cambridge-based VisiCorp made the first spreadsheet software for personal computers, VisiCalc, and helped propel the Apple II to consumer success. It was overtaken and purchased by Cambridge-based Lotus Development, maker of Lotus 1-2-3 (which was, in turn, replaced by Microsoft Excel).
The city continues to be home to many startups. Kendall Square was a software hub through the dot-com boom and today hosts offices of such technology companies as Google, Microsoft, and Amazon. The Square also now houses the headquarters of Akamai.
In 1976, Harvard's plans to start experiments with recombinant DNA led to a three-month moratorium and a citizen review panel. In the end, Cambridge decided to allow such experiments but passed safety regulations in 1977. This led to regulatory certainty and acceptance when Biogen opened a lab in 1982, in contrast to the hostility that caused Genetics Institute, a Harvard spinoff, to abandon Somerville and Boston for Cambridge. The biotech and pharmaceutical industries have since thrived in Cambridge, which now includes headquarters for Biogen and Genzyme; laboratories for Novartis, Teva, Takeda, Alnylam, Ironwood, Catabasis, Moderna Therapeutics, and Editas Medicine; support companies such as Cytel; and many smaller companies.
By the end of the 20th century, Cambridge had one of the most costly housing markets in the Northeastern United States. While considerable class, race, and age diversity existed, it became more challenging for those who grew up in the city to afford to remain. The end of rent control in 1994 prompted many Cambridge renters to move to more affordable housing in Somerville and other Massachusetts cities and towns.
Cambridge's mix of amenities and proximity to Boston kept housing prices relatively stable despite the bursting of the United States housing bubble in 2008 and 2009. Cambridge has been a sanctuary city since 1985 and reaffirmed its status as such in 2006.
According to the U.S. Census Bureau, Cambridge has a total area of 7.1 square miles (18 km²), 6.4 square miles (17 km²) of which is land and 0.7 square miles (1.8 km²) (9.82%) of which is water.
Cambridge is located in eastern Massachusetts, bordered by:
The border between Cambridge and the neighboring city of Somerville passes through densely populated neighborhoods, which are connected by the MBTA Red Line. Some of the main squares, Inman, Porter, and to a lesser extent, Harvard and Lechmere, are very close to the city line, as are Somerville's Union and Davis Squares.
Through the City of Cambridge's exclusive municipal water system, the city further controls two exclave areas, one being Payson Park Reservoir and Gatehouse, a 2009 listed American Water Landmark located roughly one mile west of Fresh Pond and surrounded by the town of Belmont. The second area is the larger Hobbs Brook and Stony Brook watersheds, which share borders with neighboring towns and cities including Lexington, Lincoln, Waltham and Weston.
Cambridge has been called the "City of Squares", as most of its commercial districts are major street intersections known as squares. Each square acts as a neighborhood center.
Kendall Square, formed by the junction of Broadway, Main Street, and Third Street, has been called "the most innovative square mile on the planet" owing to the high concentration of entrepreneurial start-ups that have emerged in its vicinity since 2010. Technology Square is an office and laboratory building cluster in this neighborhood. Just over the Longfellow Bridge from Boston, at the eastern end of the MIT campus, it is served by the Kendall/MIT station on the MBTA Red Line subway. Most of Cambridge's large office towers are located in the Square. A biotech industry has developed in this area. The Cambridge Innovation Center, a large co-working space, is in Kendall Square at 1 Broadway. The Cambridge Center office complex is in Kendall Square, but not at the actual center of Cambridge. The "One Kendall Square" complex is nearby, but not actually in Kendall Square.
Central Square is formed by the junction of Massachusetts Avenue, Prospect Street, and Western Avenue. Containing a variety of ethnic restaurants, it was economically depressed as recently as the late 1990s; it underwent gentrification in recent years (in conjunction with the development of the nearby University Park at MIT), and continues to grow more costly. It is served by the Central Station stop on the MBTA Red Line subway. Lafayette Square, formed by the junction of Massachusetts Avenue, Columbia Street, Sidney Street, and Main Street, is considered part of the Central Square area. Cambridgeport is south of Central Square, and bordered by MIT, the Charles River, Massachusetts Avenue, and River Street.
Harvard Square is formed by the junction of Massachusetts Avenue, Brattle Street, Dunster Street, and JFK Street. This is the primary site of Harvard University and a major Cambridge shopping area. It is served by a Red Line station. Harvard Square was originally the Red Line's northwestern terminus and a major transfer point to streetcars that also operated in a short tunnel, which is still a major bus terminal, although the area under the Square was reconfigured dramatically in the 1980s when the Red Line was extended. A short distance from the square lies Cambridge Common, while the neighborhood north of Harvard and east of Massachusetts Avenue is known as Baldwin, in honor of Maria L. Baldwin, the first Black principal of Cambridge public schools. Because the neighborhood was only renamed "Baldwin" in 2021, some know the area better by its former name, Agassiz, after the famed scientist Louis Agassiz.
Porter Square is about a mile north on Massachusetts Avenue from Harvard Square, at the junction of Massachusetts and Somerville Avenues. It includes part of the city of Somerville and is served by the Porter Square Station, a complex housing a Red Line stop and a Fitchburg Line commuter rail stop. Lesley University's University Hall and Porter campus are in Porter Square.
Inman Square is at the junction of Cambridge and Hampshire streets in mid-Cambridge. It is home to restaurants, bars, music venues, and boutiques. Victorian streetlights, benches, and bus stops were added to the streets in the 2000s, and a new city park was installed.
Lechmere Square is at the junction of Cambridge and First streets, adjacent to the CambridgeSide Galleria shopping mall. It is served by Lechmere station on the MBTA Green Line.
Cambridge's residential neighborhoods border but are not defined by the squares.
In the Köppen-Geiger classification, Cambridge has a hot-summer humid continental climate (Dfa), as is typical of the southern end of New England's interior, with hot summers and cold winters. Abundant precipitation falls on the city year-round, often as snow in winter; it has no dry season. The average January temperature is 26.6 °F (−3 °C), placing Cambridge in Group D regardless of which isotherm (0 °C or −3 °C) is used as the continental boundary. There are four well-defined seasons.
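To make the isotherm remark concrete, the following is a minimal sketch of the Group D test only, not a full Köppen classifier; the 23 °C warmest-month value is an illustrative assumption, not a figure from this article:

def is_group_d(coldest_month_c, warmest_month_c, isotherm_c):
    # Köppen Group D (continental): coldest month at or below the chosen
    # isotherm and warmest month above 10 °C. Cambridge's January mean of
    # −3 °C qualifies under both the 0 °C and the −3 °C conventions.
    return coldest_month_c <= isotherm_c and warmest_month_c > 10

print(is_group_d(-3, 23, 0), is_group_d(-3, 23, -3))  # True True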
As of the census of 2010, there were 105,162 people, 44,032 households, and 17,420 families residing in the city. The population density was 16,354.9 inhabitants per square mile (6,314.7/km²). There were 47,291 housing units at an average density of 7,354.7 per square mile (2,839.7/km²). The racial makeup of the city was 66.60% White, 11.70% Black or African American, 0.20% Native American, 15.10% Asian (3.7% Chinese, 1.4% Asian Indian, 1.2% Korean, 1.0% Japanese), 0.01% Pacific Islander, 2.10% from other races, and 4.30% from two or more races. 7.60% of the population were Hispanic or Latino of any race (1.6% Puerto Rican, 1.4% Mexican, 0.6% Dominican, 0.5% Colombian & Salvadoran, 0.4% Spaniard). Non-Hispanic Whites were 62.1% of the population in 2010, down from 89.7% in 1970. An individual resident of Cambridge is known as a Cantabrigian.
In 2010, there were 44,032 households, out of which 16.9% had children under the age of 18 living with them, 28.9% were married couples living together, 8.4% had a female householder with no husband present, and 60.4% were non-families. 40.7% of all households were made up of individuals, and 9.6% had someone living alone who was 65 years of age or older. The average household size was 2.00 and the average family size was 2.76.
In the city, the population was spread out, with 13.3% of the population under the age of 18, 21.2% from 18 to 24, 38.6% from 25 to 44, 17.8% from 45 to 64, and 9.2% who were 65 years of age or older. The median age was 30.5 years. For every 100 females, there were 96.1 males. For every 100 females age 18 and over, there were 94.7 males.
The median income for a household in the city was $47,979, and the median income for a family was $59,423 (these figures had risen to $58,457 and $79,533 respectively as of a 2007 estimate). Males had a median income of $43,825 versus $38,489 for females. The per capita income for the city was $31,156. About 8.7% of families and 12.9% of the population were below the poverty line, including 15.1% of those under age 18 and 12.9% of those age 65 or over.
Cambridge has been ranked as one of the most liberal cities in America, and locals living in and near the city jokingly refer to it as "The People's Republic of Cambridge". For 2016, the residential property tax rate in Cambridge was $6.99 per $1,000 of assessed value. Cambridge enjoys the highest possible bond credit rating, AAA, with all three Wall Street rating agencies.
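A rate quoted per $1,000 of assessed value means the annual bill is simply (assessed value ÷ 1,000) × 6.99. A minimal sketch of that arithmetic follows; the home value used is a hypothetical illustration, not a figure from this article:

def property_tax(assessed_value_usd, rate_per_1000=6.99):
    # Annual tax at a rate quoted per $1,000 of assessed value
    # (Cambridge's 2016 residential rate, per the paragraph above).
    return assessed_value_usd / 1000 * rate_per_1000

print(property_tax(750_000))  # hypothetical $750,000 home -> 5242.5, i.e. $5,242.50/year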
In 2000, 11.0% of city residents were of Irish ancestry; 7.2% were of English, 6.9% Italian, 5.5% West Indian and 5.3% German ancestry. 69.4% spoke only English at home, while 6.9% spoke Spanish, 3.2% Chinese or Mandarin, 3.0% Portuguese, 2.9% French Creole, 2.3% French, 1.5% Korean, and 1.0% Italian.
Data is from the 2009–2013 American Community Survey 5-Year Estimates.
Manufacturing was an important part of Cambridge's economy in the late 19th and early 20th century, but educational institutions are its biggest employers today. Harvard and MIT together employ about 20,000. As a cradle of technological innovation, Cambridge was home to technology firms Analog Devices, Akamai, Bolt, Beranek, and Newman (BBN Technologies) (now part of Raytheon), General Radio (later GenRad), Lotus Development Corporation (now part of IBM), Polaroid, Symbolics, and Thinking Machines.
In 1996, Polaroid, Arthur D. Little, and Lotus were Cambridge's top employers, with over 1,000 employees, but they faded out a few years later. Health care and biotechnology firms such as Genzyme, Biogen Idec, bluebird bio, Millennium Pharmaceuticals, Sanofi, Pfizer and Novartis have significant presences in the city. Though headquartered in Switzerland, Novartis continues to expand its operations in Cambridge.
Other major biotech and pharmaceutical firms expanding their presence in Cambridge include GlaxoSmithKline, AstraZeneca, Shire, and Pfizer. Most of Cambridge's biotech firms are in Kendall Square and East Cambridge, which decades ago were the city's center of manufacturing. Some others are in University Park at MIT, a new development in another former manufacturing area.
None of the high technology firms that once dominated the economy was among the 25 largest employers in 2005, but by 2008 Akamai and ITA Software were. Google, IBM Research, Microsoft Research, and Philips Research maintain offices in Cambridge. In late January 2012, less than a year after acquiring the Billerica-based analytic database management company Vertica, Hewlett-Packard announced it would also be opening its first offices in Cambridge. Around that time, the e-commerce giants Staples and Amazon.com said they would be opening research and innovation centers in Kendall Square, and LabCentral now provides a shared laboratory facility for approximately 25 emerging biotech companies.
The proximity of Cambridge's universities has also made the city a center for nonprofit groups and think tanks, including the National Bureau of Economic Research, the Smithsonian Astrophysical Observatory, the Lincoln Institute of Land Policy, Cultural Survival, and One Laptop per Child.
In September 2011, Cambridge launched its Entrepreneur Walk of Fame initiative, recognizing people who have made contributions to innovation in global business.
In 2021, Cambridge was one of approximately 27 US cities to receive a AAA rating from each of the three major credit rating agencies in the nation, Moody's Investors Service, Standard & Poor's and Fitch Ratings. 2021 marked the 22nd consecutive year that Cambridge had retained this distinction.
As of 2019, the city's ten largest employers are:
Cambridge has a large and varied collection of permanent public art, both on city property, managed by the Cambridge Arts Council and the Community Art Center, and on the Harvard and MIT campuses. Temporary public artworks are displayed as part of the annual Cambridge River Festival on the banks of the Charles River, during winter celebrations in Harvard and Central Squares, and at Harvard University campus sites. Experimental forms of public artistic and cultural expression include the Central Square World's Fair, the annual Somerville-based Honk! Festival, and If This House Could Talk, a neighborhood art and history event.
Street musicians and other performers entertain tourists and locals in Harvard Square during the warmer months. The performances are coordinated through a public process that has been developed collaboratively by the performers, city administrators, private organizations and business groups. The Cambridge public library contains four Works Progress Administration murals completed in 1935 by Elizabeth Tracy Montminy: Religion, Fine Arts, History of Books and Paper, and The Development of the Printing Press.
Despite intensive urbanization during the late 19th century and the 20th century, Cambridge has several historic buildings, including some from the 17th century. The city also has abundant contemporary architecture, largely built by Harvard and MIT.
Notable historic buildings in the city include:
Contemporary architecture:
The city has an active music scene, from classical performances to the latest popular bands. Beyond its colleges and universities, Cambridge has many music venues, including The Middle East, Club Passim, The Plough and Stars, The Lizard Lounge and the Nameless Coffeehouse.
Consisting largely of densely built residential space, Cambridge lacks significant tracts of public parkland. Easily accessible open space on the university campuses, including Harvard Yard, Radcliffe Yard, and MIT's Great Lawn, as well as the considerable open space of Mount Auburn Cemetery and Fresh Pond Reservation, partly compensates for this. At Cambridge's western edge, the cemetery is known as a garden cemetery because of its landscaping (the oldest planned landscape in the country) and arboretum. Although known as a Cambridge landmark, much of the cemetery lies within Watertown. It is also an Important Bird Area (IBA) in the Greater Boston area. Fresh Pond Reservation is the largest open green space in Cambridge, with 162 acres (656,000 m²) of land around a 155-acre (627,000 m²) kettle hole lake. This land includes a 2.25-mile (3.6 km) walking trail around the reservoir and a public 9-hole golf course.
Public parkland includes the esplanade along the Charles River, which mirrors its Boston counterpart, Cambridge Common, Danehy Park, and Alewife Brook Reservation.
Cambridge is split between Massachusetts's 5th and 7th U.S. congressional districts. The 5th district seat is held by Democrat Katherine Clark, who replaced now-Senator Ed Markey in a 2013 special election; the 7th is represented by Democrat Ayanna Pressley, elected in 2018. The state's senior United States senator is Democrat Elizabeth Warren, elected in 2012, who lives in Cambridge. The governor of Massachusetts is Democrat Maura Healey, elected in 2022.
Cambridge is represented in six districts in the Massachusetts House of Representatives: the 24th Middlesex (which includes parts of Belmont and Arlington), the 25th and 26th Middlesex (the latter of which includes a portion of Somerville), the 29th Middlesex (which includes a small part of Watertown), and the Eighth and Ninth Suffolk (both including parts of the City of Boston). The city is represented in the Massachusetts Senate as a part of the 2nd Middlesex, Middlesex and Suffolk, and 1st Suffolk and Middlesex districts.
From 1860 to 1880, Republicans Abraham Lincoln, Ulysses S. Grant, Rutherford B. Hayes, and James Garfield each won Cambridge, Grant doing so by margins of over 20 points in both of his campaigns. Following that, from 1884 to 1892, Grover Cleveland won Cambridge in all three of his presidential campaigns, by less than ten points each time.
Then from 1896 to 1924, Cambridge became something of a swing city with a slight Republican lean. Republican candidates carried the city in five of the eight presidential elections during that time, with five of the elections resulting in either a plurality or a margin of victory of fewer than ten points.
In modern times, however, Cambridge has been reliably Democratic. Democratic presidential candidates have carried the city in each of the last 23 presidential elections, dating back to the nomination of Al Smith in 1928, and every Democratic nominee since Massachusetts native John F. Kennedy in 1960 has received at least 70% of the vote, except for Jimmy Carter in 1976 and 1980. Since 1928, the only Republican nominee to come within ten points of carrying Cambridge is Dwight Eisenhower in his 1956 reelection bid.
Cambridge has a city government led by a mayor and a nine-member city council. There is also a six-member school committee that functions alongside the superintendent of public schools. The councilors and school committee members are elected every two years using proportional representation.
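Cambridge's proportional representation is a ranked-ballot, single-transferable-vote (STV) system. The sketch below shows only the core counting idea under simplifying assumptions (a Droop quota and a crude choice of which surplus ballots to transfer); the city's actual count, often called the Cincinnati method, differs in such details, so treat this as illustrative only:

from collections import deque

def stv(ballots, seats):
    # ballots: lists of candidate names in preference order.
    piles = {c: deque() for b in ballots for c in b}
    for b in ballots:
        piles[b[0]].append(deque(b))
    quota = len(ballots) // (seats + 1) + 1  # Droop quota
    elected, eliminated = [], set()

    def transfer(ballot):
        # Pass a ballot on to its next continuing choice, if any.
        while ballot:
            if ballot[0] not in elected and ballot[0] not in eliminated:
                piles[ballot[0]].append(ballot)
                return
            ballot.popleft()

    while len(elected) < seats and piles:
        if len(elected) + len(piles) <= seats:
            elected.extend(piles)  # everyone still standing is seated
            break
        reached = [c for c in piles if len(piles[c]) >= quota]
        if reached:
            for c in sorted(reached, key=lambda c: -len(piles[c])):
                if len(elected) == seats:
                    break
                elected.append(c)
                for b in list(piles.pop(c))[quota:]:  # crude surplus pick
                    b.popleft()
                    transfer(b)
        else:
            loser = min(piles, key=lambda c: len(piles[c]))
            eliminated.add(loser)
            for b in piles.pop(loser):
                b.popleft()
                transfer(b)
    return elected

print(stv([["A", "B"], ["A", "C"], ["B", "A"], ["C", "B"]], seats=2))  # -> ['A', 'C']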
The mayor is elected by the city councilors from among themselves and serves as the chair of city council meetings. The mayor also sits on the school committee. The mayor is not the city's chief executive. Rather, the city manager, who is appointed by the city council, serves in that capacity.
Under the city's Plan E form of government, the city council does not have the power to appoint or remove city officials who are under the direction of the city manager. The city council and its members are also forbidden from giving orders to any subordinate of the city manager.
Yi-An Huang is the City Manager as of September 6, 2022, succeeding Owen O'Riordan (now the Deputy City Manager), who briefly served as the Acting City Manager after Louis DePasquale resigned on July 5, 2022, after six years in office.
* = current mayor ** = former mayor
On March 8, 2021, Cambridge City Council voted to recognize polyamorous domestic partnerships, becoming the second city in the United States following neighboring Somerville, which had done so in 2020.
Cambridge was a county seat of Middlesex County, along with Lowell, until the abolition of county government. Though the county government was abolished in 1997, the county still exists as a geographical and political region. The employees of Middlesex County courts, jails, registries, and other county agencies now work directly for the state. The county's registries of deeds and probate remain in Cambridge, but the Superior Court and District Attorney have had their operations transferred to Woburn. Third District Court has shifted operations to Medford, and the county Sheriff's office awaits near-term relocation.
Cambridge is perhaps best known as an academic and intellectual center. Its colleges and universities include:
At least 258 of the world's 962 Nobel Prize winners have at some point in their careers been affiliated with universities in Cambridge.
Cambridge College is named for Cambridge and was based in Cambridge until 2017, when it moved to a new headquarters in neighboring Boston.
The American Academy of Arts and Sciences, founded in 1780 and one of the nation's oldest learned societies, is based in Cambridge.
The city's schools constitute the Cambridge Public School District. Schools include:
Five upper schools offer grades 6–8 in some of the same buildings as the elementary schools:
Cambridge has three district public high school programs, including Cambridge Rindge and Latin School (CRLS).
Public charter schools include Benjamin Banneker Charter School, which serves grades K–6; Community Charter School of Cambridge in Kendall Square, which serves grades 7–12; and Prospect Hill Academy, a charter school whose upper school is in Central Square, though it is not a part of the Cambridge Public School District.
Cambridge also has several private schools, including:
Cambridge is served by a single online newspaper, Cambridge Day. The last physical newspaper in the city, Cambridge Chronicle, ceased publication in 2022 and today only cross-posts regional stories from other Gannett properties.
Cambridge is home to the following radio stations, including both commercially licensed and student-run stations:
Cambridge Community Television (CCTV) has served the city since its inception in 1988. CCTV operates Cambridge's public access television facility and three television channels, 8, 9, and 96, on the Cambridge cable system (Comcast). The city has invited tenders from other cable providers, but Comcast remains its only fixed television and broadband utility, though services from American satellite TV providers are available. In October 2014, Cambridge City Manager Richard Rossi appointed a citizen Broadband Task Force to "examine options to increase competition, reduce pricing, and improve speed, reliability and customer service for both residents and businesses."
Cambridge obtains water from Hobbs Brook (in Lincoln and Waltham) and Stony Brook (Waltham and Weston), as well as an emergency connection to the Massachusetts Water Resources Authority. The city owns over 1,200 acres (486 ha) of land in other towns that includes these reservoirs and portions of their watershed. Water from these reservoirs flows by gravity through an aqueduct to Fresh Pond in Cambridge. It is then treated in an adjacent plant and pumped uphill to an elevation of 176 feet (54 m) above sea level at the Payson Park Reservoir (Belmont). The water is then redistributed downhill via gravity to individual users in the city. A new water treatment plant opened in 2001.
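A minimal worked example of this gravity distribution, assuming an illustrative tap elevation of about 6 m (20 ft) against the reservoir's stated 54 m (176 ft); the figures are a sketch, not from the source:

\[
P = \rho g \,\Delta h \approx 1000\ \mathrm{kg/m^3} \times 9.81\ \mathrm{m/s^2} \times (54 - 6)\ \mathrm{m} \approx 4.7 \times 10^{5}\ \mathrm{Pa} \approx 68\ \mathrm{psi}
\]

A static head of roughly 48 m thus yields a delivery pressure near the upper end of the 40–80 psi range common in municipal systems, consistent with pumping the water once to the hilltop reservoir and then distributing it by gravity alone.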
In October 2016, the city announced that, owing to drought conditions, it would begin buying water from the MWRA. On January 3, 2017, Cambridge announced that "As a result of continued rainfall each month since October 2016, we have been able to significantly reduce the need to use MWRA water. We have not purchased any MWRA water since December 12, 2016 and if 'average' rainfall continues this could continue for several months."
Cambridge is served by several major roads, including Route 2, Route 16, and Route 28. The Massachusetts Turnpike does not pass through Cambridge but is accessible by an exit in nearby Allston. Both U.S. Route 1 and Interstate 93 provide additional access at the eastern end of Cambridge via Leverett Circle in Boston. Route 2A runs the length of the city, chiefly along Massachusetts Avenue. The Charles River forms the southern border of Cambridge and is crossed by 11 bridges connecting Cambridge to Boston, eight of which are open to motorized road traffic, including the Longfellow Bridge and the Harvard Bridge.
Cambridge has an irregular street network because many of the roads date from the colonial era. Contrary to popular belief, the road system did not evolve from longstanding cow-paths. Roads connected various village settlements with each other and nearby towns and were shaped by geographic features, most notably streams, hills, and swampy areas. Today, the major "squares" are typically connected by long, mostly straight roads, such as Massachusetts Avenue between Harvard Square and Central Square or Hampshire Street between Kendall Square and Inman Square.
On October 25, 2022, Cambridge City Council voted 8–1 to eliminate parking minimums from the city code, citing declining car ownership, with the aim of promoting housing construction.
Cambridge is served by the Massachusetts Bay Transportation Authority, including Porter station on the regional Commuter Rail, Lechmere station on the Green Line, and Alewife, Porter, Harvard, Central, and Kendall Square/MIT stations on the Red Line. Alewife station, the terminus of the Red Line, has a large multi-story parking garage.
The Harvard bus tunnel under Harvard Square connects to the Red Line underground. This tunnel was originally opened for streetcars in 1912 and served trackless trolleys, trolleybuses, and buses as the routes were converted; four lines of the MBTA trolleybus system continued to use it until their conversion to diesel in 2022. The tunnel was partially reconfigured when the Red Line was extended to Alewife in the early 1980s.
Both Union Square station in Somerville on the Green Line and Community College station in Charlestown on the Orange Line are located just outside of Cambridge.
Besides the state-owned transit agency, the city is also served by the Charles River Transportation Management Agency (CRTMA) shuttles, which are supported by some of the largest companies operating in the city, in addition to the municipal government itself.
Cambridge has several bike paths, including one along the Charles River, and the Linear Park connecting the Minuteman Bikeway at Alewife with the Somerville Community Path. A connection to Watertown opened in 2022. Bike parking is common and there are bike lanes on many streets, although concerns have been expressed regarding the suitability of many of the lanes. On several central MIT streets, bike lanes move onto the sidewalk. Cambridge bans cycling on certain sections of sidewalk where pedestrian traffic is heavy.
In 2006, Bicycling Magazine rated Boston one of the worst cities in the nation for bicycling, but it gave Cambridge an honorable mention as one of the best and called the city "Boston's great hope". Boston has since followed Cambridge's example and made considerable efforts to improve bicycling safety and convenience.
Walking is a popular activity in Cambridge. In 2000, among U.S. cities with more than 100,000 residents, Cambridge had the highest percentage of commuters who walked to work. Cambridge's major historic squares have changed into modern walking neighborhoods, including traffic-calming features based on the needs of pedestrians rather than of motorists.
The intercity bus and train stations at South Station in Boston, and Logan International Airport in East Boston, are both accessible by subway. The Fitchburg Line rail service from Porter Square connects to some western suburbs. Since October 2010, there has also been intercity bus service between Alewife Station (Cambridge) and New York City.
In addition to the Cambridge Police Department, the city is patrolled by the Fifth (Brighton) Barracks of Troop H of the Massachusetts State Police. Owing to proximity, the city also practices functional cooperation with the Fourth (Boston) Barracks of Troop H. The campuses of Harvard and MIT are patrolled by the Harvard University Police Department and MIT Police Department, respectively.
The city of Cambridge is protected by the Cambridge Fire Department. Established in 1832, the CFD operates eight engine companies, four ladder companies, one rescue company, and three paramedic squad companies from eight fire stations located throughout the city. The Acting Chief is Thomas F. Cahill Jr.
The city of Cambridge receives emergency medical services from PRO EMS, a privately contracted ambulance service.
Further educational services are provided at the Cambridge Public Library. The large modern main building was built in 2009 and connects to the restored 1888 Richardson Romanesque building. The library was founded as the private Cambridge Athenaeum in 1849, was acquired by the city in 1858, and became the Dana Library. The 1888 building was a donation of Frederick H. Rindge.
Cambridge's sister cities with active relationships are:
Cambridge has ten additional inactive sister city relationships: | [
{
"paragraph_id": 0,
"text": "Cambridge (/ˈkeɪmbrɪdʒ/ KAYM-brij) is a city in Middlesex County, Massachusetts, in the United States. It is a suburb in the Greater Boston metropolitan area, located directly across the Charles River from Boston. The city's population as of the 2020 U.S. census was 118,403, making it the most populous city in the county, the fourth-largest in Massachusetts, behind Boston, Worcester, and Springfield, and ninth-largest in New England. The city was named in honor of the University of Cambridge in Cambridge, England, which was an important center of the Puritan theology that was embraced by the town's founders.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Harvard University, an Ivy League university founded in Cambridge in 1636, is the oldest institution of higher learning in the United States. Massachusetts Institute of Technology (MIT), Lesley University, and Hult International Business School also are based in Cambridge. Radcliffe College, a women's liberal arts college, was based in Cambridge from its 1879 founding until its assimilation into Harvard in 1999.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Kendall Square, near MIT in the eastern part of Cambridge, has been called \"the most innovative square mile on the planet\" due to the high concentration of startup companies that have emerged there since 2010.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Founded in December 1630 during the colonial era, Cambridge was one among the first cities established in the Thirteen Colonies, and it went on to play a historic role during the American Revolution.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In May 1775, approximately 16,000 American patriots assembled in Cambridge Common to begin organizing a military retaliation against British troops following the Battles of Lexington and Concord. On July 2, 1775, two weeks after the Second Continental Congress in Philadelphia formally established the Continental Army and appointed George Washington commander of it, Washington arrived at Cambridge Common to take command of the Patriot soldiers camped there, many of whom played a role in supporting Washington's successful Siege of Boston, which trapped garrisoned British troops from moving by land, forcing the British to ultimately abandon Boston. Cambridge Common is celebrated as the birthplace of the Continental Army.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Massachusett Tribe inhabited the area that would become Cambridge for thousands of years prior to European colonization of the Americas, most recently under the name Anmoughcawgen. At the time of European contact and exploration, the area was inhabited by Naumkeag or Pawtucket to the north and Massachusett to the south, and may have been inhabited by other groups such as the Totant not well described in later European narratives. The contact period introduced a number of European infectious diseases which would decimate native populations in virgin soil epidemics, leaving the area uncontested upon the arrival of large groups of English settlers in 1630.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In December 1630, the site of present-day Cambridge was chosen for settlement because it was safely upriver from Boston Harbor, making it easily defensible from attacks by enemy ships. The city was founded by Thomas Dudley, his daughter Anne Bradstreet, and his son-in-law Simon Bradstreet. The first houses were built in the spring of 1631. The settlement was initially referred to as \"the newe towne\". Official Massachusetts records show the name rendered as Newe Towne by 1632, and as Newtowne by 1638.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Located at the first convenient Charles River crossing west of Boston, Newtowne was one of several towns, including Boston, Dorchester, Watertown, and Weymouth, founded by the 700 original Puritan colonists of the Massachusetts Bay Colony under Governor John Winthrop. Its first preacher was Thomas Hooker, who led many of its original inhabitants west in 1636 to found Hartford and the Connecticut Colony; before leaving, they sold their plots to more recent immigrants from England. The original village site is now within Harvard Square. The marketplace where farmers sold crops from surrounding towns at the edge of a salt marsh (since filled) remains within a small park at the corner of John F. Kennedy and Winthrop Streets.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In 1636, Newe College, later renamed Harvard College after benefactor John Harvard, was founded as North America's first institution of higher learning. Its initial purpose was training ministers. According to Cotton Mather, Newtowne was chosen for the site of the college by the Great and General Court, then the legislature of Massachusetts Bay Colony, primarily for its proximity to the popular and highly respected Puritan preacher Thomas Shepard. In May 1638, the settlement's name was changed to Cambridge in honor of the University of Cambridge in Cambridge, England.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In 1639, the Massachusetts General Court purchased the land that became present-day Cambridge from the Naumkeag Squaw Sachem of Mistick.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The town comprised a much larger area than the present city, with various outlying parts becoming independent towns over the years: Cambridge Village (later Newtown and now Newton) in 1688, Cambridge Farms (now Lexington) in 1712 or 1713, and Little or South Cambridge (now Brighton) and Menotomy or West Cambridge (now Arlington) in 1807. In the late 19th century, various schemes for annexing Cambridge to Boston were pursued and rejected.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Newtowne's ministers, Hooker and Shepard, the college's first president, the college's major benefactor, and the first schoolmaster Nathaniel Eaton were all Cambridge alumni, as was the colony's governor John Winthrop. In 1629, Winthrop had led the signing of the founding document of the city of Boston, which was known as the Cambridge Agreement, after the university. In 1650, Governor Thomas Dudley signed the charter creating the corporation that still governs Harvard College.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Cambridge grew slowly as an agricultural village eight miles (13 km) by road from Boston, the colony's capital. By the American Revolution, most residents lived near the Common and Harvard College, with most of the town comprising farms and estates. Most inhabitants were descendants of the original Puritan colonists, but there was also a small elite of Anglican \"worthies\" who were not involved in village life, made their livings from estates, investments, and trade, and lived in mansions along \"the Road to Watertown\", present-day Brattle Street, which is still known as Tory Row.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Coming south from Virginia, George Washington took command of the force of Patriot soldiers camped on Cambridge Common on July 3, 1775, which is now considered the birthplace of the Continental Army.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "On January 24, 1776, Henry Knox arrived with an artillery train captured from Fort Ticonderoga, which allowed Washington to force the British Army to evacuate Boston. Most of the Loyalist estates in Cambridge were confiscated after the Revolutionary War.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Between 1790 and 1840, Cambridge grew rapidly with the construction of West Boston Bridge in 1792 connecting Cambridge directly to Boston, making it no longer necessary to travel eight miles (13 km) through the Boston Neck, Roxbury, and Brookline to cross the Charles River. A second bridge, the Canal Bridge, opened in 1809 alongside the new Middlesex Canal. The new bridges and roads made what were formerly estates and marshland into prime industrial and residential districts.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In the mid-19th century, Cambridge was the center of a literary revolution. It was home to some of the famous Fireside poets, named because their poems would often be read aloud by families in front of their evening fires. The Fireside poets, including Henry Wadsworth Longfellow, James Russell Lowell, and Oliver Wendell Holmes, were highly popular and influential in this era.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Soon after, turnpikes were built: the Cambridge and Concord Turnpike (today's Broadway and Concord Ave.), the Middlesex Turnpike (Hampshire St. and Massachusetts Ave. northwest of Porter Square), and what are today's Cambridge, Main, and Harvard Streets connected various areas of Cambridge to the bridges. In addition, the town was connected to the Boston & Maine Railroad, leading to the development of Porter Square as well as the creation of neighboring Somerville from the formerly rural parts of Charlestown.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Cambridge was incorporated as a city in 1846. The city's commercial center began to shift from Harvard Square to Central Square, which became the city's downtown around that time.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Between 1850 and 1900, Cambridge took on much of its present character, featuring streetcar suburban development along the turnpikes and working class and industrial neighborhoods focused on East Cambridge, comfortable middle-class housing on the old Cambridgeport, and Mid-Cambridge estates and upper-class enclaves near Harvard University and on the minor hills. The arrival of the railroad in North Cambridge and Northwest Cambridge led to three changes: the development of massive brickyards and brickworks between Massachusetts Avenue, Concord Avenue, and Alewife Brook; the ice-cutting industry launched by Frederic Tudor on Fresh Pond; and the carving up of the last estates into residential subdivisions to house the thousands of immigrants who arrived to work in the new industries.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "For much of the 19th and early 20th centuries, the city's largest employer was the New England Glass Company, founded in 1818. By the middle of the 19th century, it was the world's largest and most modern glassworks. In 1888, Edward Drummond Libbey moved all production to Toledo, Ohio, where it continues today under the name Owens-Illinois. The company's flint glassware with heavy lead content is prized by antique glass collectors, and the Toledo Museum of Art has a large collection. The Museum of Fine Arts in Boston and the Sandwich Glass Museum on Cape Cod also house several pieces.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In 1895, Edwin Ginn, founder of Ginn and Company, built the Athenaeum Press Building for his publishing textbook empire.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "By 1920, Cambridge was one of New England's main industrial cities, with nearly 120,000 residents. Among the largest businesses in Cambridge during the period of industrialization was Carter's Ink Company, whose neon sign long adorned the Charles River and which was for many years the world's largest ink manufacturer. Next door was the Athenaeum Press. Confectionery and snack manufacturers in the Cambridgeport-Area 4-Kendall corridor included Kennedy Biscuit Factory, later part of Nabisco and originator of the Fig Newton, Necco, Squirrel Brands, George Close Company (1861–1930s), Page & Shaw, Daggett Chocolate (1892–1960s, recipes bought by Necco), Fox Cross Company (1920–1980, originator of the Charleston Chew, and now part of Tootsie Roll Industries), Kendall Confectionery Company, and James O. Welch (1927–1963, originator of Junior Mints, Sugar Daddies, Sugar Mamas, and Sugar Babies, now part of Tootsie Roll Industries). Main Street was nicknamed \"Confectioner's Row\".",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Only the Cambridge Brands subsidiary of Tootsie Roll Industries remains in town, still manufacturing Junior Mints in the old Welch factory on Main Street. The Blake and Knowles Steam Pump Company (1886), the Kendall Boiler and Tank Company (1880, now in Chelmsford, Massachusetts), and the New England Glass Company (1818–1878) were among the industrial manufacturers in what are now Kendall Square and East Cambridge.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In 1935, the Cambridge Housing Authority and the Public Works Administration demolished an integrated low-income tenement neighborhood with African Americans and European immigrants. In its place, it built the whites-only \"Newtowne Court\" public housing development and the adjoining, blacks-only \"Washington Elms\" project in 1940; the city required segregation in its other public housing projects as well.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "As industry in New England began to decline during the Great Depression and after World War II, Cambridge lost much of its industrial base. It also began to become an intellectual, rather than an industrial, center. Harvard University, which had always been important as both a landowner and an institution, began to play a more dominant role in the city's life and culture. When Radcliffe College was established in 1879, the town became a mecca for some of the nation's most academically talented female students. MIT's move from Boston to Cambridge in 1916 reinforced Cambridge's status as an intellectual center of the United States.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "After the 1950s, the city's population began to decline slowly as families tended to be replaced by single people and young couples. In Cambridge Highlands, the technology company Bolt, Beranek, & Newman produced the first network router in 1969 and hosted the invention of computer-to-computer email in 1971. The 1980s brought a wave of high technology startups. Those selling advanced minicomputers were overtaken by the microcomputer. Cambridge-based VisiCorp made the first spreadsheet software for personal computers, VisiCalc, and helped propel the Apple II to consumer success. It was overtaken and purchased by Cambridge-based Lotus Development, maker of Lotus 1-2-3 (which was, in turn, replaced in by Microsoft Excel).",
"title": "History"
},
{
"paragraph_id": 27,
"text": "The city continues to be home to many startups. Kendall Square was a software hub through the dot-com boom and today hosts offices of such technology companies as Google, Microsoft, and Amazon. The Square also now houses the headquarters of Akamai.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In 1976, Harvard's plans to start experiments with recombinant DNA led to a three-month moratorium and a citizen review panel. In the end, Cambridge decided to allow such experiments but passed safety regulations in 1977. This led to regulatory certainty and acceptance when Biogen opened a lab in 1982, in contrast to the hostility that caused the Genetic Institute, a Harvard spinoff, to abandon Somerville and Boston for Cambridge. The biotech and pharmaceutical industries have since thrived in Cambridge, which now includes headquarters for Biogen and Genzyme; laboratories for Novartis, Teva, Takeda, Alnylam, Ironwood, Catabasis, Moderna Therapeutics, Editas Medicine; support companies such as Cytel; and many smaller companies.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "By the end of the 20th century, Cambridge had one of the most costly housing markets in the Northeastern United States. While considerable class, race, and age diversity existed, it became more challenging for those who grew up in the city to afford to remain. The end of rent control in 1994 prompted many Cambridge renters to move to more affordable housing in Somerville and other Massachusetts cities and towns.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Cambridge's mix of amenities and proximity to Boston kept housing prices relatively stable despite the bursting of the United States housing bubble in 2008 and 2009. Cambridge has been a sanctuary city since 1985 and reaffirmed its status as such in 2006.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "According to the U.S. Census Bureau, Cambridge has a total area of 7.1 square miles (18 km), 6.4 square miles (17 km) of which is land and 0.7 square miles (1.8 km) (9.82%) of which is water.",
"title": "Geography"
},
{
"paragraph_id": 32,
"text": "Cambridge is located in eastern Massachusetts, bordered by:",
"title": "Geography"
},
{
"paragraph_id": 33,
"text": "The border between Cambridge and the neighboring city of Somerville passes through densely populated neighborhoods, which are connected by the MBTA Red Line. Some of the main squares, Inman, Porter, and to a lesser extent, Harvard and Lechmere, are very close to the city line, as are Somerville's Union and Davis Squares.",
"title": "Geography"
},
{
"paragraph_id": 34,
"text": "Through the City of Cambridge's exclusive municipal water system, the city further controls two exclave areas, one being Payson Park Reservoir and Gatehouse, a 2009 listed American Water Landmark located roughly one mile west of Fresh Pond and surrounded by the town of Belmont. The second area is the larger Hobbs Brook and Stony Brook watersheds, which share borders with neighboring towns and cities including Lexington, Lincoln, Waltham and Weston.",
"title": "Geography"
},
{
"paragraph_id": 35,
"text": "Cambridge has been called the \"City of Squares\", as most of its commercial districts are major street intersections known as squares. Each square acts as a neighborhood center.",
"title": "Geography"
},
{
"paragraph_id": 36,
"text": "Kendall Square, formed by the junction of Broadway, Main Street, and Third Street, has been called \"the most innovative square mile on the planet\", owing to its high concentration of entrepreneurial start-ups and quality of innovation which have emerged in the vicinity of the square since 2010. Technology Square is an office and laboratory building cluster in this neighborhood. Just over the Longfellow Bridge from Boston, at the eastern end of the MIT campus, it is served by the Kendall/MIT station on the MBTA Red Line subway. Most of Cambridge's large office towers are located in the Square. A biotech industry has developed in this area. The Cambridge Innovation Center, a large co-working space, is in Kendall Square at 1 Broadway. The Cambridge Center office complex is in Kendall Square, and not at the actual center of Cambridge. The \"One Kendall Square\" complex is nearby, but not actually in Kendall Square.",
"title": "Geography"
},
{
"paragraph_id": 37,
"text": "Central Square is formed by the junction of Massachusetts Avenue, Prospect Street, and Western Avenue. Containing a variety of ethnic restaurants, it was economically depressed as recently as the late 1990s; it underwent gentrification in recent years (in conjunction with the development of the nearby University Park at MIT), and continues to grow more costly. It is served by the Central Station stop on the MBTA Red Line subway. Lafayette Square, formed by the junction of Massachusetts Avenue, Columbia Street, Sidney Street, and Main Street, is considered part of the Central Square area. Cambridgeport is south of Central Square, and bordered by MIT, the Charles River, Massachusetts Avenue, and River Street.",
"title": "Geography"
},
{
"paragraph_id": 38,
"text": "Harvard Square is formed by the junction of Massachusetts Avenue, Brattle Street, Dunster Street, and JFK Street. This is the primary site of Harvard University and a major Cambridge shopping area. It is served by a Red Line station. Harvard Square was originally the Red Line's northwestern terminus and a major transfer point to streetcars that also operated in a short tunnel—which is still a major bus terminal, although the area under the Square was reconfigured dramatically in the 1980s when the Red Line was extended. A short distance away from the square lies the Cambridge Common, while the neighborhood north of Harvard and east of Massachusetts Avenue is known as Baldwin, in honor of the first Black principal of Cambridge public schools, Maria L. Baldwin. It was renamed \"Baldwin\" in 2021, and so some know the area better by its former name, Agassiz, after the famed scientist Louis Agassiz.",
"title": "Geography"
},
{
"paragraph_id": 39,
"text": "Porter Square is about a mile north on Massachusetts Avenue from Harvard Square, at the junction of Massachusetts and Somerville Avenues. It includes part of the city of Somerville and is served by the Porter Square Station, a complex housing a Red Line stop and a Fitchburg Line commuter rail stop. Lesley University's University Hall and Porter campus are in Porter Square.",
"title": "Geography"
},
{
"paragraph_id": 40,
"text": "Inman Square is at the junction of Cambridge and Hampshire streets in mid-Cambridge. It is home to restaurants, bars, music venues, and boutiques. Victorian streetlights, benches, and bus stops were added to the streets in the 2000s, and a new city park was installed.",
"title": "Geography"
},
{
"paragraph_id": 41,
"text": "Lechmere Square is at the junction of Cambridge and First streets, adjacent to the CambridgeSide Galleria shopping mall. It is served by Lechmere station on the MBTA Green Line.",
"title": "Geography"
},
{
"paragraph_id": 42,
"text": "Cambridge's residential neighborhoods border but are not defined by the squares.",
"title": "Geography"
},
{
"paragraph_id": 43,
"text": "",
"title": "Geography"
},
{
"paragraph_id": 44,
"text": "In the Köppen-Geiger classification, Cambridge has a hot-summer humid continental climate (Dfa) with hot summers and cold winters, that can appear in the southern end of New England's interior. Abundant rain falls on the city (and in the winter often as snow); it has no dry season. The average January temperature is 26.6 °F (−3 °C), making Cambridge part of Group D, independent of the isotherm. There are four well-defined seasons.",
"title": "Geography"
},
{
"paragraph_id": 45,
"text": "As of the census of 2010, there were 105,162 people, 44,032 households, and 17,420 families residing in the city. The population density was 16,354.9 inhabitants per square mile (6,314.7/km). There were 47,291 housing units at an average density of 7,354.7 per square mile (2,839.7/km). The racial makeup of the city was 66.60% White, 11.70% Black or African American, 0.20% Native American, 15.10% Asian (3.7% Chinese, 1.4% Asian Indian, 1.2% Korean, 1.0% Japanese), 0.01% Pacific Islander, 2.10% from other races, and 4.30% from two or more races. 7.60% of the population were Hispanic or Latino of any race (1.6% Puerto Rican, 1.4% Mexican, 0.6% Dominican, 0.5% Colombian & Salvadoran, 0.4% Spaniard). Non-Hispanic Whites were 62.1% of the population in 2010, down from 89.7% in 1970. An individual resident of Cambridge is known as a Cantabrigian.",
"title": "Demographics"
},
{
"paragraph_id": 46,
"text": "In 2010, there were 44,032 households, out of which 16.9% had children under the age of 18 living with them, 28.9% were married couples living together, 8.4% had a female householder with no husband present, and 60.4% were non-families. 40.7% of all households were made up of individuals, and 9.6% had someone living alone who was 65 years of age or older. The average household size was 2.00 and the average family size was 2.76.",
"title": "Demographics"
},
{
"paragraph_id": 47,
"text": "In the city, the population was spread out, with 13.3% of the population under the age of 18, 21.2% from 18 to 24, 38.6% from 25 to 44, 17.8% from 45 to 64, and 9.2% who were 65 years of age or older. The median age was 30.5 years. For every 100 females, there were 96.1 males. For every 100 females age 18 and over, there were 94.7 males.",
"title": "Demographics"
},
{
"paragraph_id": 48,
"text": "The median income for a household in the city was $47,979, and the median income for a family was $59,423 (these figures had risen to $58,457 and $79,533 respectively as of a 2007 estimate). Males had a median income of $43,825 versus $38,489 for females. The per capita income for the city was $31,156. About 8.7% of families and 12.9% of the population were below the poverty line, including 15.1% of those under age 18 and 12.9% of those age 65 or over.",
"title": "Demographics"
},
{
"paragraph_id": 49,
"text": "Cambridge has been ranked as one of the most liberal cities in America. Locals living in and near the city jokingly refer to it as \"The People's Republic of Cambridge\". For 2016, the residential property tax rate in Cambridge was $6.99 per $1,000. Cambridge enjoys the highest possible bond credit rating, AAA, with all three Wall Street rating agencies.",
"title": "Demographics"
},
{
"paragraph_id": 50,
"text": "In 2000, 11.0% of city residents were of Irish ancestry; 7.2% were of English, 6.9% Italian, 5.5% West Indian and 5.3% German ancestry. 69.4% spoke only English at home, while 6.9% spoke Spanish, 3.2% Chinese or Mandarin, 3.0% Portuguese, 2.9% French Creole, 2.3% French, 1.5% Korean, and 1.0% Italian.",
"title": "Demographics"
},
{
"paragraph_id": 51,
"text": "Data is from the 2009–2013 American Community Survey 5-Year Estimates.",
"title": "Demographics"
},
{
"paragraph_id": 52,
"text": "Manufacturing was an important part of Cambridge's economy in the late 19th and early 20th century, but educational institutions are its biggest employers today. Harvard and MIT together employ about 20,000. As a cradle of technological innovation, Cambridge was home to technology firms Analog Devices, Akamai, Bolt, Beranek, and Newman (BBN Technologies) (now part of Raytheon), General Radio (later GenRad), Lotus Development Corporation (now part of IBM), Polaroid, Symbolics, and Thinking Machines.",
"title": "Economy"
},
{
"paragraph_id": 53,
"text": "In 1996, Polaroid, Arthur D. Little, and Lotus were Cambridge's top employers, with over 1,000 employees, but they faded out a few years later. Health care and biotechnology firms such as Genzyme, Biogen Idec, bluebird bio, Millennium Pharmaceuticals, Sanofi, Pfizer and Novartis have significant presences in the city. Though headquartered in Switzerland, Novartis continues to expand its operations in Cambridge.",
"title": "Economy"
},
{
"paragraph_id": 54,
"text": "Other major biotech and pharmaceutical firms expanding their presence in Cambridge include GlaxoSmithKline, AstraZeneca, Shire, and Pfizer. Most of Cambridge's biotech firms are in Kendall Square and East Cambridge, which decades ago were the city's center of manufacturing. Some others are in University Park at MIT, a new development in another former manufacturing area.",
"title": "Economy"
},
{
"paragraph_id": 55,
"text": "None of the high technology firms that once dominated the economy was among the 25 largest employers in 2005, but by 2008 Akamai and ITA Software were. Google, IBM Research, Microsoft Research, and Philips Research maintain offices in Cambridge. In late January 2012—less than a year after acquiring Billerica-based analytic database management company, Vertica—Hewlett-Packard announced it would also be opening its first offices in Cambridge. Also around that time, e-commerce giants Staples and Amazon.com said they would be opening research and innovation centers in Kendall Square. And LabCentral provides a shared laboratory facility for approximately 25 emerging biotech companies.",
"title": "Economy"
},
{
"paragraph_id": 56,
"text": "The proximity of Cambridge's universities has also made the city a center for nonprofit groups and think tanks, including the National Bureau of Economic Research, the Smithsonian Astrophysical Observatory, the Lincoln Institute of Land Policy, Cultural Survival, and One Laptop per Child.",
"title": "Economy"
},
{
"paragraph_id": 57,
"text": "In September 2011, Cambridge launched its Entrepreneur Walk of Fame initiative, recognizing people who have made contributions to innovation in global business.",
"title": "Economy"
},
{
"paragraph_id": 58,
"text": "In 2021, Cambridge was one of approximately 27 US cities to receive a AAA rating from each of the three major credit rating agencies in the nation, Moody's Investors Service, Standard & Poor's and Fitch Ratings. 2021 marked the 22nd consecutive year that Cambridge had retained this distinction.",
"title": "Economy"
},
{
"paragraph_id": 59,
"text": "As of 2019, the city's ten largest employers are:",
"title": "Economy"
},
{
"paragraph_id": 60,
"text": "Cambridge has a large and varied collection of permanent public art, on both city property, managed by the Cambridge Arts Council, Community Art Center, and the Harvard and MIT campuses. Temporary public artworks are displayed as part of the annual Cambridge River Festival on the banks of the Charles River during winter celebrations in Harvard and Central Squares and at Harvard University campus sites. Experimental forms of public artistic and cultural expression include the Central Square World's Fair, the annual Somerville-based Honk! Festival, and If This House Could Talk, a neighborhood art and history event.",
"title": "Arts and culture"
},
{
"paragraph_id": 61,
"text": "Street musicians and other performers entertain tourists and locals in Harvard Square during the warmer months. The performances are coordinated through a public process that has been developed collaboratively by the performers, city administrators, private organizations and business groups. The Cambridge public library contains four Works Progress Administration murals completed in 1935 by Elizabeth Tracy Montminy: Religion, Fine Arts, History of Books and Paper, and The Development of the Printing Press.",
"title": "Arts and culture"
},
{
"paragraph_id": 62,
"text": "Despite intensive urbanization during the late 19th century and the 20th century, Cambridge has several historic buildings, including some from the 17th century. The city also has abundant contemporary architecture, largely built by Harvard and MIT.",
"title": "Arts and culture"
},
{
"paragraph_id": 63,
"text": "Notable historic buildings in the city include:",
"title": "Arts and culture"
},
{
"paragraph_id": 64,
"text": "Contemporary architecture:",
"title": "Arts and culture"
},
{
"paragraph_id": 65,
"text": "The city has an active music scene, from classical performances to the latest popular bands. Beyond its colleges and universities, Cambridge has many music venues, including The Middle East, Club Passim, The Plough and Stars, The Lizard Lounge and the Nameless Coffeehouse.",
"title": "Arts and culture"
},
{
"paragraph_id": 66,
"text": "Consisting largely of densely built residential space, Cambridge lacks significant tracts of public parkland. Easily accessible open space on the university campuses, including Harvard Yard, Radcliffe Yard, and MIT's Great Lawn, as well as the considerable open space of Mount Auburn Cemetery and Fresh Pond Reservation, partly compensates for this. At Cambridge's western edge, the cemetery is known as a garden cemetery because of its landscaping (the oldest planned landscape in the country) and arboretum. Although known as a Cambridge landmark, much of the cemetery lies within Watertown. It is also an Important Bird Area (IBA) in the Greater Boston area. Fresh Pond Reservation is the largest open green space in Cambridge with 162 acres (656,000 m) of land around a 155-acre (627,000 m) kettle hole lake. This land includes a 2.25-mile walking trail around the reservoir and a public 9-hole golf course.",
"title": "Arts and culture"
},
{
"paragraph_id": 67,
"text": "Public parkland includes the esplanade along the Charles River, which mirrors its Boston counterpart, Cambridge Common, Danehy Park, and Alewife Brook Reservation.",
"title": "Arts and culture"
},
{
"paragraph_id": 68,
"text": "Cambridge is split between Massachusetts's 5th and 7th U.S. congressional districts. The 5th district seat is held by Democrat Katherine Clark, who replaced now-Senator Ed Markey in a 2013 special election; the 7th is represented by Democrat Ayanna Pressley, elected in 2018. The state's senior United States senator is Democrat Elizabeth Warren, elected in 2012, who lives in Cambridge. The governor of Massachusetts is Democrat Maura Healey, elected in 2022.",
"title": "Government"
},
{
"paragraph_id": 69,
"text": "Cambridge is represented in six districts in the Massachusetts House of Representatives: the 24th Middlesex (which includes parts of Belmont and Arlington), the 25th and 26th Middlesex (the latter of which includes a portion of Somerville), the 29th Middlesex (which includes a small part of Watertown), and the Eighth and Ninth Suffolk (both including parts of the City of Boston). The city is represented in the Massachusetts Senate as a part of the 2nd Middlesex, Middlesex and Suffolk, and 1st Suffolk and Middlesex districts.",
"title": "Government"
},
{
"paragraph_id": 70,
"text": "From 1860 to 1880, Republicans Abraham Lincoln, Ulysses S. Grant, Rutherford B. Hayes, and James Garfield each won Cambridge, Grant doing so by margins of over 20 points in both of his campaigns. Following that, from 1884 to 1892, Grover Cleveland won Cambridge in all three of his presidential campaigns, by less than ten points each time.",
"title": "Government"
},
{
"paragraph_id": 71,
"text": "Then from 1896 to 1924, Cambridge became something of a swing city with a slight Republican lean. Republican candidates carried the city in five of the eight presidential elections during that time, with five of the elections resulting in either a plurality or a margin of victory of fewer than ten points.",
"title": "Government"
},
{
"paragraph_id": 72,
"text": "In modern times, however, Cambridge has been largely Democratic. In the last 23 presidential elections, dating back to the nomination of Al Smith in the 1928 presidential election, Democratic presidential candidates have won Cambridge with every Democratic nominee since Massachusetts native John F. Kennedy in 1960 receiving at least 70% of the vote, except for Jimmy Carter in 1976 and 1980. Since 1928, the only Republican nominee to come within ten points of carrying Cambridge is Dwight Eisenhower in his 1956 reelection bid.",
"title": "Government"
},
{
"paragraph_id": 73,
"text": "",
"title": "Government"
},
{
"paragraph_id": 74,
"text": "Cambridge has a city government led by a mayor and a nine-member city council. There is also a six-member school committee that functions alongside the superintendent of public schools. The councilors and school committee members are elected every two years using proportional representation.",
"title": "Government"
},
{
"paragraph_id": 75,
"text": "The mayor is elected by the city councilors from among themselves and serves as the chair of city council meetings. The mayor also sits on the school committee. The mayor is not the city's chief executive. Rather, the city manager, who is appointed by the city council, serves in that capacity.",
"title": "Government"
},
{
"paragraph_id": 76,
"text": "Under the city's Plan E form of government, the city council does not have the power to appoint or remove city officials who are under the direction of the city manager. The city council and its members are also forbidden from giving orders to any subordinate of the city manager.",
"title": "Government"
},
{
"paragraph_id": 77,
"text": "Yi-An Huang is the City Manager as of September 6, 2022, succeeding Owen O'Riordan (now the Deputy City Manager) who briefly served as the Acting City Manager after Louis DePasquale resigned on July 5, 2022, after six years in office.",
"title": "Government"
},
{
"paragraph_id": 78,
"text": "* = current mayor ** = former mayor",
"title": "Government"
},
{
"paragraph_id": 79,
"text": "On March 8, 2021, Cambridge City Council voted to recognize polyamorous domestic partnerships, becoming the second city in the United States following neighboring Somerville, which had done so in 2020.",
"title": "Government"
},
{
"paragraph_id": 80,
"text": "Cambridge was a county seat of Middlesex County, along with Lowell, until the abolition of county government. Though the county government was abolished in 1997, the county still exists as a geographical and political region. The employees of Middlesex County courts, jails, registries, and other county agencies now work directly for the state. The county's registrars of Deeds and Probate remain in Cambridge, but the Superior Court and District Attorney have had their operations transferred to Woburn. Third District Court has shifted operations to Medford, and the county Sheriff's office awaits near-term relocation.",
"title": "Government"
},
{
"paragraph_id": 81,
"text": "Cambridge is perhaps best known as an academic and intellectual center. Its colleges and universities include:",
"title": "Education"
},
{
"paragraph_id": 82,
"text": "At least 258 of the world's total 962 Nobel Prize winners have at some point in their careers been affiliated with universities in Cambridge.",
"title": "Education"
},
{
"paragraph_id": 83,
"text": "Cambridge College is named for Cambridge and was based in Cambridge until 2017, when it consolidated to a new headquarters in neighboring Boston.",
"title": "Education"
},
{
"paragraph_id": 84,
"text": "The American Academy of Arts and Sciences, one of the nation's oldest learned societies founded in 1780, is based in Cambridge.",
"title": "Education"
},
{
"paragraph_id": 85,
"text": "The city's schools constitute the Cambridge Public School District. Schools include:",
"title": "Education"
},
{
"paragraph_id": 86,
"text": "Five upper schools offer grades 6–8 in some of the same buildings as the elementary schools:",
"title": "Education"
},
{
"paragraph_id": 87,
"text": "Cambridge has three district public high school programs, including Cambridge Rindge and Latin School (CRLS).",
"title": "Education"
},
{
"paragraph_id": 88,
"text": "Other public charter schools include Benjamin Banneker Charter School, which serves grades K–6; Community Charter School of Cambridge in Kendall Square, which serves grades 7–12; and Prospect Hill Academy, a charter school whose upper school is in Central Square though it is not a part of the Cambridge Public School District.",
"title": "Education"
},
{
"paragraph_id": 89,
"text": "Cambridge also has several private schools, including:",
"title": "Education"
},
{
"paragraph_id": 90,
"text": "Cambridge is served by a single online newspaper, Cambridge Day. The last physical newspaper in the city, Cambridge Chronicle, ceased publication in 2022 and today only cross-posts regional stories from other Gannett properties.",
"title": "Media"
},
{
"paragraph_id": 91,
"text": "Cambridge is home to the following radio stations, including both commercially-licensed and student-run stations:",
"title": "Media"
},
{
"paragraph_id": 92,
"text": "Cambridge Community Television (CCTV) has served the city since its inception in 1988. CCTV operates Cambridge's public access television facility and three television channels, 8, 9, and 96, on the Cambridge cable system (Comcast). The city has invited tenders from other cable providers, but Comcast remains its only fixed television and broadband utility, though services from American satellite TV providers are available. In October 2014, Cambridge City Manager Richard Rossi appointed a citizen Broadband Task Force to \"examine options to increase competition, reduce pricing, and improve speed, reliability and customer service for both residents and businesses.\"",
"title": "Media"
},
{
"paragraph_id": 93,
"text": "Cambridge obtains water from Hobbs Brook (in Lincoln and Waltham) and Stony Brook (Waltham and Weston), as well as an emergency connection to the Massachusetts Water Resources Authority. The city owns over 1,200 acres (486 ha) of land in other towns that includes these reservoirs and portions of their watershed. Water from these reservoirs flows by gravity through an aqueduct to Fresh Pond in Cambridge. It is then treated in an adjacent plant and pumped uphill to an elevation of 176 feet (54 m) above sea level at the Payson Park Reservoir (Belmont). The water is then redistributed downhill via gravity to individual users in the city. A new water treatment plant opened in 2001.",
"title": "Infrastructure"
},
{
"paragraph_id": 94,
"text": "In October 2016, the city announced that, owing to drought conditions, they would begin buying water from the MWRA. On January 3, 2017, Cambridge announced that \"As a result of continued rainfall each month since October 2016, we have been able to significantly reduce the need to use MWRA water. We have not purchased any MWRA water since December 12, 2016 and if 'average' rainfall continues this could continue for several months.\"",
"title": "Infrastructure"
},
{
"paragraph_id": 95,
"text": "Cambridge is served by several major roads, including Route 2, Route 16, and the Route 28. The Massachusetts Turnpike does not pass through Cambridge but is accessible by an exit in nearby Allston. Both U.S. Route 1 and Interstate 93 provide additional access at the eastern end of Cambridge via Leverett Circle in Boston. Route 2A runs the length of the city, chiefly along Massachusetts Avenue. The Charles River forms the southern border of Cambridge and is crossed by 11 bridges connecting Cambridge to Boston, eight of which are open to motorized road traffic, including the Longfellow Bridge and the Harvard Bridge.",
"title": "Infrastructure"
},
{
"paragraph_id": 96,
"text": "Cambridge has an irregular street network because many of the roads date from the colonial era. Contrary to popular belief, the road system did not evolve from longstanding cow-paths. Roads connected various village settlements with each other and nearby towns and were shaped by geographic features, most notably streams, hills, and swampy areas. Today, the major \"squares\" are typically connected by long, mostly straight roads, such as Massachusetts Avenue between Harvard Square and Central Square or Hampshire Street between Kendall Square and Inman Square.",
"title": "Infrastructure"
},
{
"paragraph_id": 97,
"text": "On October 25, 2022, Cambridge City Council voted 8–1 to eliminate parking minimums from the city code, citing declining car ownership, with the aim of promoting housing construction.",
"title": "Infrastructure"
},
{
"paragraph_id": 98,
"text": "Cambridge is served by the Massachusetts Bay Transportation Authority, including Porter station on the regional Commuter Rail, Lechmere station on the Green Line, and Alewife, Porter, Harvard, Central, and Kendall Square/MIT stations on the Red Line. Alewife station, the terminus of the Red Line, has a large multi-story parking garage.",
"title": "Infrastructure"
},
{
"paragraph_id": 99,
"text": "The Harvard bus tunnel under Harvard Square connects to the Red Line underground. This tunnel was originally opened for streetcars in 1912 and served trackless trolleys, trolleybuses, and buses as the routes were converted; four lines of the MBTA trolleybus system continued to use it until their conversion to diesel in 2022. The tunnel was partially reconfigured when the Red Line was extended to Alewife in the early 1980s.",
"title": "Infrastructure"
},
{
"paragraph_id": 100,
"text": "Both Union Square station in Somerville on the Green Line and Community College station in Charlestown on the Orange Line are located just outside of Cambridge.",
"title": "Infrastructure"
},
{
"paragraph_id": 101,
"text": "Besides the state-owned transit agency, the city is also served by the Charles River Transportation Management Agency (CRTMA) shuttles which are supported by some of the largest companies operating in the city, in addition to the municipal government itself.",
"title": "Infrastructure"
},
{
"paragraph_id": 102,
"text": "Cambridge has several bike paths, including one along the Charles River, and the Linear Park connecting the Minuteman Bikeway at Alewife with the Somerville Community Path. A connection to Watertown opened in 2022. Bike parking is common and there are bike lanes on many streets, although concerns have been expressed regarding the suitability of many of the lanes. On several central MIT streets, bike lanes transfer onto the sidewalk. Cambridge bans cycling on certain sections of sidewalk where pedestrian traffic is heavy.",
"title": "Infrastructure"
},
{
"paragraph_id": 103,
"text": "Bicycling Magazine in 2006 rated Boston as one of the worst cities in the nation for bicycling, but it has given Cambridge honorable mention as one of the best and was called \"Boston's great hope\" by the magazine. Boston has since then followed the example of Cambridge and made considerable efforts to improve bicycling safety and convenience.",
"title": "Infrastructure"
},
{
"paragraph_id": 104,
"text": "Walking is a popular activity in Cambridge. In 2000, among U.S. cities with more than 100,000 residents, Cambridge had the highest percentage of commuters who walked to work. Cambridge's major historic squares have changed into modern walking neighborhoods, including traffic calming features based on the needs of pedestrians rather than of motorists.",
"title": "Infrastructure"
},
{
"paragraph_id": 105,
"text": "The Boston intercity bus and train stations at South Station in Boston, and Logan International Airport in East Boston, both of which are accessible by subway. The Fitchburg Line rail service from Porter Square connects to some western suburbs. Since October 2010, there has also been intercity bus service between Alewife Station (Cambridge) and New York City.",
"title": "Infrastructure"
},
{
"paragraph_id": 106,
"text": "In addition to the Cambridge Police Department, the city is patrolled by the Fifth (Brighton) Barracks of Troop H of the Massachusetts State Police. Owing, however, to proximity, the city also practices functional cooperation with the Fourth (Boston) Barracks of Troop H, as well. The campuses of Harvard and MIT are patrolled by the Harvard University Police Department and MIT Police Department, respectively.",
"title": "Infrastructure"
},
{
"paragraph_id": 107,
"text": "The city of Cambridge is protected by the Cambridge Fire Department. Established in 1832, the CFD operates eight engine companies, four ladder companies, one rescue company, and three paramedic squad companies from eight fire stations located throughout the city. The Acting Chief is Thomas F. Cahill Jr.",
"title": "Infrastructure"
},
{
"paragraph_id": 108,
"text": "The city of Cambridge receives emergency medical services from PRO EMS, a privately contracted ambulance service.",
"title": "Infrastructure"
},
{
"paragraph_id": 109,
"text": "Further educational services are provided at the Cambridge Public Library. The large modern main building was built in 2009, and connects to the restored 1888 Richardson Romanesque building. It was founded as the private Cambridge Athenaeum in 1849 and was acquired by the city in 1858, and became the Dana Library. The 1888 building was a donation of Frederick H. Rindge.",
"title": "Infrastructure"
},
{
"paragraph_id": 110,
"text": "Cambridge's sister cities with active relationships are:",
"title": "Sister cities and twin towns"
},
{
"paragraph_id": 111,
"text": "Cambridge has ten additional inactive sister city relationships:",
"title": "Sister cities and twin towns"
}
] | Cambridge is a city in Middlesex County, Massachusetts, in the United States. It is a suburb in the Greater Boston metropolitan area, located directly across the Charles River from Boston. The city's population as of the 2020 U.S. census was 118,403, making it the most populous city in the county, the fourth-largest in Massachusetts, behind Boston, Worcester, and Springfield, and ninth-largest in New England. The city was named in honor of the University of Cambridge in Cambridge, England, which was an important center of the Puritan theology embraced by the town's founders. Harvard University, an Ivy League university founded in Cambridge in 1636, is the oldest institution of higher learning in the United States. Massachusetts Institute of Technology (MIT), Lesley University, and Hult International Business School are also based in Cambridge. Radcliffe College, a women's liberal arts college, was based in Cambridge from its 1879 founding until its assimilation into Harvard in 1999. Kendall Square, near MIT in the eastern part of Cambridge, has been called "the most innovative square mile on the planet" due to the high concentration of startup companies that have emerged there since 2010. Founded in December 1630 during the colonial era, Cambridge was among the first cities established in the Thirteen Colonies, and it went on to play a historic role during the American Revolution. In May 1775, approximately 16,000 American patriots assembled in Cambridge Common to begin organizing a military retaliation against British troops following the Battles of Lexington and Concord. On July 2, 1775, two weeks after the Second Continental Congress in Philadelphia formally established the Continental Army and appointed George Washington its commander, Washington arrived at Cambridge Common to take command of the Patriot soldiers camped there, many of whom played a role in supporting Washington's successful Siege of Boston, which prevented the garrisoned British troops from moving by land and ultimately forced the British to abandon Boston. Cambridge Common is celebrated as the birthplace of the Continental Army. | 2001-05-20T04:48:48Z | 2023-12-27T03:07:35Z | [
"Template:Convert",
"Template:Cite journal",
"Template:Webarchive",
"Template:-",
"Template:Authority control",
"Template:Wide image",
"Template:SemiBareRefNeedsTitle",
"Template:Cambridge, Massachusetts",
"Template:Div col",
"Template:Reflist",
"Template:Efn",
"Template:More citations needed section",
"Template:Cite web",
"Template:EB1911 Poster",
"Template:Citation",
"Template:Flagicon",
"Template:Cbignore",
"Template:As of",
"Template:Rp",
"Template:For timeline",
"Template:Anchor",
"Template:Cite book",
"Template:Osmrelation-inline",
"Template:Cite EB1911",
"Template:Commons category",
"Template:Official website",
"Template:Further",
"Template:Refend",
"Template:ISBN",
"Template:Cn",
"Template:Cite Collins Dictionary",
"Template:Historical populations",
"Template:Div col end",
"Template:Respell",
"Template:Citation needed",
"Template:Cite EB9",
"Template:Curlie",
"Template:Sfnp",
"Template:Refn",
"Template:Prose",
"Template:Use mdy dates",
"Template:IPAc-en",
"Template:Refbegin",
"Template:Unreliable source?",
"Template:Party color cell",
"Template:Dead link",
"Template:See also",
"Template:Main",
"Template:Notelist",
"Template:Short description",
"Template:Infobox settlement",
"Template:Cite news",
"Template:Navboxes",
"Template:Weather box",
"Template:Wikivoyage"
] | https://en.wikipedia.org/wiki/Cambridge,_Massachusetts |
5,686 | Cambridge (disambiguation) | Cambridge is a city and the county town of Cambridgeshire, United Kingdom, famous for being the location of the University of Cambridge.
Cambridge may also refer to: | [
{
"paragraph_id": 0,
"text": "Cambridge is a city and the county town of Cambridgeshire, United Kingdom, famous for being the location of the University of Cambridge.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cambridge may also refer to:",
"title": ""
}
] | Cambridge is a city and the county town of Cambridgeshire, United Kingdom, famous for being the location of the University of Cambridge. Cambridge may also refer to: | 2002-02-25T15:43:11Z | 2023-09-23T23:05:59Z | [
"Template:TOC right",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Cambridge_(disambiguation) |
5,688 | Colin Dexter | Norman Colin Dexter OBE (29 September 1930 – 21 March 2017) was an English crime writer known for his Inspector Morse series of novels, which were written between 1975 and 1999 and adapted as an ITV television series, Inspector Morse, from 1987 to 2000. His characters have spawned a sequel series, Lewis from 2006 to 2015, and a prequel series, Endeavour from 2012 to 2023.
Dexter was born in Stamford, Lincolnshire, to Alfred and Dorothy Dexter. He had an elder brother, John, a fellow classicist, who taught Classics at The King's School, Peterborough, and a sister, Avril. Alfred ran a small garage and taxi company from premises in Scotgate, Stamford. Dexter was educated at St John's Infants School and Bluecoat Junior School, from which he gained a scholarship to Stamford School, a boys' grammar school, where a younger contemporary was England cricket captain and England rugby player M. J. K. Smith.
After leaving school, Dexter completed his national service with the Royal Corps of Signals and then read Classics at Christ's College, Cambridge, graduating in 1953 and receiving a master's degree in 1958.
In 1954, Dexter began his teaching career as assistant Classics master at Wyggeston Grammar School for Boys in Leicester. There he helped the school's Christian Union. However, in 2000 he stated that he shared the same views on politics and religion as Inspector Morse, who was portrayed in the final Morse novel, The Remorseful Day, as an atheist. A post at Loughborough Grammar School followed in 1957, then he took up the position of senior Classics teacher at Corby Grammar School, Northamptonshire, in 1959.
In 1966, he was forced by the onset of deafness to retire from teaching and took up the post of senior assistant secretary at the University of Oxford Delegacy of Local Examinations (UODLE) in Oxford, a job he held until his retirement in 1988.
In November 2008, Dexter featured prominently in the BBC Four programme "How to Solve a Cryptic Crossword" as part of the Timeshift series, in which he recounted some of the crossword clues solved by Morse.
The initial books written by Dexter were general studies textbooks. He began writing mysteries in 1972 during a family holiday. Last Bus to Woodstock was published in 1975 and introduced the character of Inspector Morse, the irascible detective whose penchants for cryptic crosswords, English literature, cask ale, and music by Wagner reflected Dexter's own enthusiasms. Dexter's plots used false leads and other red herrings, "presenting Morse, and his readers, with fiendishly difficult puzzles to solve".
The success of the 33 two-hour episodes of the ITV television series Inspector Morse, produced between 1987 and 2000, brought further attention to Dexter's writings. The show featured Inspector Morse, played by John Thaw, and his assistant Sergeant Robert Lewis, played by Kevin Whately. In the manner of Alfred Hitchcock, Dexter made a cameo appearance in almost all episodes.
From 2006 to 2015, Morse's assistant Lewis was featured in a 33-episode ITV series titled Lewis (Inspector Lewis in the United States). Lewis is assisted by DS James Hathaway, played by Laurence Fox. A prequel series, Endeavour, features a young Morse and stars Shaun Evans and Roger Allam. Endeavour was first broadcast on the ITV network in 2012, ending with the ninth series in 2023, taking young Morse's career into 1972. Dexter was a consultant for Lewis and the first few years of Endeavour. As with Morse, Dexter occasionally made cameo appearances in both Lewis and Endeavour.
Although Dexter's military service was as a Morse code operator in the Royal Corps of Signals, the character was named after his friend Sir Jeremy Morse, a crossword devotee like Dexter. The music for the television series, written by Barrington Pheloung, used a motif based on the Morse code for Morse's name.
Dexter received several Crime Writers' Association awards: two Silver Daggers for Service of All the Dead in 1979 and The Dead of Jericho in 1981; two Gold Daggers for The Wench is Dead in 1989 and The Way Through the Woods in 1992; and a Cartier Diamond Dagger for lifetime achievement in 1997. In 1996, Dexter received a Macavity Award for his short story "Evans Tries an O-Level". In 1980, he was elected a member of the by-invitation-only Detection Club. In 2005 Dexter became a Fellow by Special Election of St Cross College, Oxford.
In the 2000 Birthday Honours Dexter was appointed an Officer of the Order of the British Empire for services to literature. In 2001 he was awarded the Freedom of the City of Oxford. In September 2011, the University of Lincoln awarded Dexter an honorary Doctor of Letters degree.
In 1956 he married Dorothy Cooper. They had a daughter, Sally, and a son, Jeremy.
On 21 March 2017 Dexter's publisher, Macmillan, said in a statement "With immense sadness, Macmillan announces the death of Colin Dexter who died peacefully at his home in Oxford this morning." | [
{
"paragraph_id": 0,
"text": "Norman Colin Dexter OBE (29 September 1930 – 21 March 2017) was an English crime writer known for his Inspector Morse series of novels, which were written between 1975 and 1999 and adapted as an ITV television series, Inspector Morse, from 1987 to 2000. His characters have spawned a sequel series, Lewis from 2006 to 2015, and a prequel series, Endeavour from 2012 to 2023.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Dexter was born in Stamford, Lincolnshire, to Alfred and Dorothy Dexter. He had an elder brother, John, a fellow classicist, who taught Classics at The King's School, Peterborough, and a sister, Avril. Alfred ran a small garage and taxi company from premises in Scotgate, Stamford. Dexter was educated at St John's Infants School and Bluecoat Junior School, from which he gained a scholarship to Stamford School, a boys' grammar school, where a younger contemporary was England cricket captain and England rugby player M. J. K. Smith.",
"title": "Early life and career"
},
{
"paragraph_id": 2,
"text": "After leaving school, Dexter completed his national service with the Royal Corps of Signals and then read Classics at Christ's College, Cambridge, graduating in 1953 and receiving a master's degree in 1958.",
"title": "Early life and career"
},
{
"paragraph_id": 3,
"text": "In 1954, Dexter began his teaching career as assistant Classics master at Wyggeston Grammar School for Boys in Leicester. There he helped the school's Christian Union. However, in 2000 he stated that he shared the same views on politics and religion as Inspector Morse, who was portrayed in the final Morse novel, The Remorseful Day, as an atheist. A post at Loughborough Grammar School followed in 1957, then he took up the position of senior Classics teacher at Corby Grammar School, Northamptonshire, in 1959.",
"title": "Early life and career"
},
{
"paragraph_id": 4,
"text": "In 1966, he was forced by the onset of deafness to retire from teaching and took up the post of senior assistant secretary at the University of Oxford Delegacy of Local Examinations (UODLE) in Oxford, a job he held until his retirement in 1988.",
"title": "Early life and career"
},
{
"paragraph_id": 5,
"text": "In November 2008, Dexter featured prominently in the BBC Four programme \"How to Solve a Cryptic Crossword\" as part of the Timeshift series, in which he recounted some of the crossword clues solved by Morse.",
"title": "Early life and career"
},
{
"paragraph_id": 6,
"text": "The initial books written by Dexter were general studies textbooks. He began writing mysteries in 1972 during a family holiday. Last Bus to Woodstock was published in 1975 and introduced the character of Inspector Morse, the irascible detective whose penchants for cryptic crosswords, English literature, cask ale, and music by Wagner reflected Dexter's own enthusiasms. Dexter's plots used false leads and other red herrings, \"presenting Morse, and his readers, with fiendishly difficult puzzles to solve\".",
"title": "Writing career"
},
{
"paragraph_id": 7,
"text": "The success of the 33 two-hour episodes of the ITV television series Inspector Morse, produced between 1987 and 2000, brought further attention to Dexter's writings. The show featured Inspector Morse, played by John Thaw, and his assistant Sergeant Robert Lewis, played by Kevin Whately. In the manner of Alfred Hitchcock, Dexter made a cameo appearance in almost all episodes.",
"title": "Writing career"
},
{
"paragraph_id": 8,
"text": "From 2006 to 2015, Morse's assistant Lewis was featured in a 33-episode ITV series titled Lewis (Inspector Lewis in the United States). Lewis is assisted by DS James Hathaway, played by Laurence Fox. A prequel series, Endeavour, features a young Morse and stars Shaun Evans and Roger Allam. Endeavour was first broadcast on the ITV network in 2012, ending with the ninth series in 2023, taking young Morse's career into 1972. Dexter was a consultant for Lewis and the first few years of Endeavour. As with Morse, Dexter occasionally made cameo appearances in both Lewis and Endeavour.",
"title": "Writing career"
},
{
"paragraph_id": 9,
"text": "Although Dexter's military service was as a Morse code operator in the Royal Corps of Signals, the character was named after his friend Sir Jeremy Morse, a crossword devotee like Dexter. The music for the television series, written by Barrington Pheloung, used a motif based on the Morse code for Morse's name.",
"title": "Writing career"
},
{
"paragraph_id": 10,
"text": "Dexter received several Crime Writers' Association awards: two Silver Daggers for Service of All the Dead in 1979 and The Dead of Jericho in 1981; two Gold Daggers for The Wench is Dead in 1989 and The Way Through the Woods in 1992; and a Cartier Diamond Dagger for lifetime achievement in 1997. In 1996, Dexter received a Macavity Award for his short story \"Evans Tries an O-Level\". In 1980, he was elected a member of the by-invitation-only Detection Club. In 2005 Dexter became a Fellow by Special Election of St Cross College, Oxford.",
"title": "Awards and honours"
},
{
"paragraph_id": 11,
"text": "In the 2000 Birthday Honours Dexter was appointed an Officer of the Order of the British Empire for services to literature. In 2001 he was awarded the Freedom of the City of Oxford. In September 2011, the University of Lincoln awarded Dexter an honorary Doctor of Letters degree.",
"title": "Awards and honours"
},
{
"paragraph_id": 12,
"text": "In 1956 he married Dorothy Cooper. They had a daughter, Sally, and a son, Jeremy.",
"title": "Personal life"
},
{
"paragraph_id": 13,
"text": "On 21 March 2017 Dexter's publisher, Macmillan, said in a statement \"With immense sadness, Macmillan announces the death of Colin Dexter who died peacefully at his home in Oxford this morning.\"",
"title": "Death"
}
] | Norman Colin Dexter was an English crime writer known for his Inspector Morse series of novels, which were written between 1975 and 1999 and adapted as an ITV television series, Inspector Morse, from 1987 to 2000. His characters have spawned a sequel series, Lewis from 2006 to 2015, and a prequel series, Endeavour from 2012 to 2023. | 2001-05-19T22:50:41Z | 2023-10-09T12:01:16Z | [
"Template:Infobox writer",
"Template:Post-nom",
"Template:Cite journal",
"Template:Cite web",
"Template:Cite book",
"Template:IMDb name",
"Template:NPG name",
"Template:Short description",
"Template:Use dmy dates",
"Template:Reflist",
"Template:Dead link",
"Template:Cbignore",
"Template:Authority control",
"Template:Cite news",
"Template:Cite magazine",
"Template:OL author",
"Template:InspectorMorse"
] | https://en.wikipedia.org/wiki/Colin_Dexter |
5,689 | College | A college (Latin: collegium) is an educational institution or a constituent part of one. A college may be a degree-awarding tertiary educational institution, a part of a collegiate or federal university, an institution offering vocational education, a further education institution, or a secondary school.
In most of the world, a college may be a high school or secondary school, a college of further education, a training institution that awards trade qualifications, a higher-education provider that does not have university status (often without its own degree-awarding powers), or a constituent part of a university. In the United States, a college may offer undergraduate programs – either as an independent institution or as the undergraduate program of a university – or it may be a residential college of a university or a community college, referring to (primarily public) higher education institutions that aim to provide affordable and accessible education, usually limited to two-year associate degrees. The word is generally also used as a synonym for a university in the US. Colleges in countries such as France, Belgium, and Switzerland provide secondary education.
The word "college" is from the Latin verb lego, legere, legi, lectum, "to collect, gather together, pick", plus the preposition cum, "with", thus meaning "selected together". Thus "colleagues" are literally "persons who have been selected to work together". In ancient Rome a collegium was a "body, guild, corporation united in colleagueship; of magistrates, praetors, tribunes, priests, augurs; a political club or trade guild". Thus a college was a form of corporation or corporate body, an artificial legal person (body/corpus) with its own legal personality, with the capacity to enter into legal contracts, to sue and be sued. In mediaeval England there were colleges of priests, for example in chantry chapels; modern survivals include the Royal College of Surgeons in England (originally the Guild of Surgeons Within the City of London), the College of Arms in London (a body of heralds enforcing heraldic law), an electoral college (to elect representatives); all groups of persons "selected in common" to perform a specified function and appointed by a monarch, founder or other person in authority. As for the modern "college of education", it was a body created for that purpose, for example Eton College was founded in 1440 by letters patent of King Henry VI for the constitution of a college of Fellows, priests, clerks, choristers, poor scholars, and old poor men, with one master or governor, whose duty it shall be to instruct these scholars and any others who may resort thither from any part of England in the knowledge of letters, and especially of grammar, without payment".
Within higher education, the term can be used to refer to:
A sixth form college or college of further education is an educational institution in England, Wales, Northern Ireland, Belize, the Caribbean, Malta, Norway, Brunei, and Southern Africa, among others, where students aged 16 to 19 typically study for advanced school-level qualifications, such as A-levels, BTEC, HND or its equivalent and the International Baccalaureate Diploma, or school-level qualifications such as GCSEs. In Singapore and India, this is known as a junior college. The municipal government of the city of Paris uses the phrase "sixth form college" as the English name for a lycée.
In some national education systems, secondary schools may be called "colleges" or have "college" as part of their title.
In Australia the term "college" is applied to any private or independent (non-government) primary and, especially, secondary school as distinct from a state school. Melbourne Grammar School, Cranbrook School, Sydney and The King's School, Parramatta are considered colleges.
There has also been a recent trend to rename or create government secondary schools as "colleges". In the state of Victoria, some state high schools are referred to as secondary colleges, although the pre-eminent government secondary school for boys in Melbourne is still named Melbourne High School. In Western Australia, South Australia and the Northern Territory, "college" is used in the name of all state high schools built since the late 1990s, and also some older ones. In New South Wales, some high schools, especially multi-campus schools resulting from mergers, are known as "secondary colleges". In Queensland some newer schools which accept primary and high school students are styled state college, but state schools offering only secondary education are called "State High School". In Tasmania and the Australian Capital Territory, "college" refers to the final two years of high school (years 11 and 12), and the institutions which provide this. In this context, "college" is a system independent of the other years of high school. Here, the expression is a shorter version of matriculation college.
In a number of Canadian cities, many government-run secondary schools are called "collegiates" or "collegiate institutes" (C.I.), a complicated form of the word "college" which avoids the usual "post-secondary" connotation. This is because these secondary schools have traditionally focused on academic, rather than vocational, subjects and ability levels (for example, collegiates offered Latin while vocational schools offered technical courses). Some private secondary schools (such as Upper Canada College, Vancouver College) choose to use the word "college" in their names nevertheless. Some secondary schools elsewhere in the country, particularly ones within the separate school system, may also use the word "college" or "collegiate" in their names.
In New Zealand the word "college" normally refers to a secondary school for ages 13 to 17 and "college" appears as part of the name especially of private or integrated schools. "Colleges" most frequently appear in the North Island, whereas "high schools" are more common in the South Island.
In the Netherlands, "college" is equivalent to HBO (Higher professional education). It is oriented towards professional training with clear occupational outlook, unlike universities which are scientifically oriented.
In South Africa, some secondary schools, especially private schools on the English public school model, have "college" in their title, including six of South Africa's Elite Seven high schools. A typical example of this category would be St John's College.
Private schools that specialize in improving children's marks through intensive focus on examination needs are informally called "cram-colleges".
In Sri Lanka the word "college" (known as Vidyalaya in Sinhala) normally refers to a secondary school, which usually signifies above the 5th standard. During the British colonial period a limited number of exclusive secondary schools were established based on English public school model (Royal College Colombo, S. Thomas' College, Mount Lavinia, Trinity College, Kandy) these along with several Catholic schools (St. Joseph's College, Colombo, St Anthony's College) traditionally carry their name as colleges. Following the start of free education in 1931 large group of central colleges were established to educate the rural masses. Since Sri Lanka gained Independence in 1948, many schools that have been established have been named as "college".
As well as an educational institution, the term, in accordance with its etymology, may also refer to any formal group of colleagues set up under statute or regulation; often under a Royal Charter. Examples include an electoral college, the College of Arms, a college of canons, and the College of Cardinals. Other collegiate bodies include professional associations, particularly in medicine and allied professions. In the UK these include the Royal College of Nursing and the Royal College of Physicians. Examples in the United States include the American College of Physicians, the American College of Surgeons, and the American College of Dentists. An example in Australia is the Royal Australian College of General Practitioners.
The different ways in which the term "College" is used to describe educational institutions in various regions of the world is listed below:
In Canadian English, the term "college" usually refers to a trades school, applied arts/science/technology/business/health school or community college. These are post-secondary institutions granting certificates, diplomas, associate degrees and (in some cases) bachelor's degrees. The French acronym specific to public institutions within Quebec's particular system of pre-university and technical education is CEGEP (Collège d'enseignement général et professionnel, "college of general and professional education"). They are collegiate-level institutions that a student typically enrols in if they wish to continue onto university in the Quebec education system, or to learn a trade. In Ontario and Alberta, there are also institutions that are designated university colleges, which only grant undergraduate degrees. This is to differentiate between universities, which have both undergraduate and graduate programs, and institutions that do not.
In Canada, there is a strong distinction between "college" and "university". In conversation, one would specifically say either "they are going to university" (i.e., studying for a three- or four-year degree at a university) or "they are going to college" (i.e., studying at a technical or career-training institution).
The term college also applies to distinct entities that formally act as an affiliated institution of the university, formally referred to as federated college, or affiliated colleges. A university may also formally include several constituent colleges, forming a collegiate university. Examples of collegiate universities in Canada include Trent University, and the University of Toronto. These types of institutions act independently, maintaining their own endowments, and properties. However, they remain either affiliated, or federated with the overarching university, with the overarching university being the institution that formally grants the degrees. For example, Trinity College was once an independent institution, but later became federated with the University of Toronto. Several centralized universities in Canada have mimicked the collegiate university model; although constituent colleges in a centralized university remains under the authority of the central administration. Centralized universities that have adopted the collegiate model to a degree includes the University of British Columbia, with Green College and St. John's College; and the Memorial University of Newfoundland, with Sir Wilfred Grenfell College.
Occasionally, "college" refers to a subject specific faculty within a university that, while distinct, are neither federated nor affiliated—College of Education, College of Medicine, College of Dentistry, College of Biological Science among others.
The Royal Military College of Canada is a military college which trains officers for the Canadian Armed Forces. The institution is a full-fledged university with the authority to issue graduate degrees, although it continues to use the term college in its name. The institution's sister school, Royal Military College Saint-Jean, also uses the term college in its name, although its academic offering is akin to that of a CEGEP institution in Quebec. A number of post-secondary art schools in Canada formerly used the word college in their names, despite formally being universities. However, most of these institutions were renamed or re-branded in the early 21st century, omitting the word college from their names.
The word college continues to be used in the names of public separate secondary schools in Ontario. A number of independent schools across Canada also use the word college in their names.
Public secular school boards in Ontario also refer to their secondary schools as collegiate institutes. However, usage of the term collegiate institute varies between school boards. Collegiate institute is the predominant name for secondary schools in the Lakehead District School Board and the Toronto District School Board, although most school boards in Ontario use collegiate institute alongside high school and secondary school in the names of their institutions. Similarly, secondary schools in Regina and Saskatoon are referred to as collegiates.
Since 2009, the Pontifical Catholic University of Chile has officially used the term "college" as the name of a tertiary education program leading to a bachelor's degree. The program offers a Bachelor of Natural Sciences and Mathematics, a Bachelor of Social Science and a Bachelor of Arts and Humanities. It follows the same system as American universities: it combines majors and minors, and it lets students continue to a higher degree at the same university once the program is completed.
In Chile, however, the term "college" is not usually used for tertiary education; it appears mainly in the names of some private bilingual schools, corresponding to levels 0, 1 and 2 of the ISCED 2011. Examples include Santiago College and Saint George's College, among others.
In the United States, there were 5,916 post-secondary institutions (universities and colleges) as of 2020–21, having peaked at 7,253 in 2012–13 and fallen every year since. A "college" in the US can refer to a constituent part of a university (which can be a residential college, the sub-division of the university offering undergraduate courses, or a school of the university offering particular specialized courses), an independent institution offering bachelor's-level courses, or an institution offering instruction in a particular professional, technical or vocational field. In popular usage, the word "college" is the generic term for any post-secondary undergraduate education. Americans "go to college" after high school, regardless of whether the specific institution is formally a college or a university. Some students choose to dual-enroll, by taking college classes while still in high school. The word and its derivatives are the standard terms used to describe the institutions and experiences associated with American post-secondary undergraduate education.
Students must pay for college before taking classes. Some borrow the money via loans, and some students fund their educations with cash, scholarships, grants, or some combination of these payment methods. In 2011, the state or federal government subsidized $8,000 to $100,000 for each undergraduate degree. For state-owned schools (called "public" universities), the subsidy was given to the college, with the student benefiting from lower tuition. The state subsidized on average 50% of public university tuition.
Colleges vary in terms of size, degree, and length of stay. Two-year colleges, also known as junior or community colleges, usually offer an associate degree, and four-year colleges usually offer a bachelor's degree. Often, these are entirely undergraduate institutions, although some have graduate school programs.
Four-year institutions in the U.S. that emphasize a liberal arts curriculum are known as liberal arts colleges. Until the 20th century, liberal arts, law, medicine, theology, and divinity were about the only form of higher education available in the United States. These schools have traditionally emphasized instruction at the undergraduate level, although advanced research may still occur at these institutions.
While there is no national standard in the United States, the term "university" primarily designates institutions that provide undergraduate and graduate education. A university typically has as its core and its largest internal division an undergraduate college teaching a liberal arts curriculum, also culminating in a bachelor's degree. What often distinguishes a university is having, in addition, one or more graduate schools engaged both in teaching graduate classes and in research. Often these would be called a School of Law or School of Medicine (but may also be called a college of law or a faculty of law). An exception is Vincennes University, Indiana, which is styled and chartered as a "university" even though almost all of its academic programs lead only to two-year associate degrees. Some institutions, such as Dartmouth College and The College of William & Mary, have retained the term "college" in their names for historical reasons. In one unique case, Boston College and Boston University, the former located in Chestnut Hill, Massachusetts and the latter located in Boston, Massachusetts, are completely separate institutions.
Usage of the terms varies among the states. In 1996, for example, Georgia changed all of its four-year institutions previously designated as colleges to universities, and all of its vocational technology schools to technical colleges.
The terms "university" and "college" do not exhaust all possible titles for an American institution of higher education. Other options include "institute" (Worcester Polytechnic Institute and Massachusetts Institute of Technology), "academy" (United States Military Academy), "union" (Cooper Union), "conservatory" (New England Conservatory), and "school" (Juilliard School). In colloquial use, they are still referred to as "college" when referring to their undergraduate studies.
The term college is also, as in the United Kingdom, used for a constituent semi-autonomous part of a larger university but generally organized on academic rather than residential lines. For example, at many institutions, the undergraduate portion of the university can be briefly referred to as the college (such as The College of the University of Chicago, Harvard College at Harvard, or Columbia College at Columbia) while at others, such as the University of California, Berkeley, "colleges" are collections of academic programs and other units that share some common characteristics, mission, or disciplinary focus (the "college of engineering", the "college of nursing", and so forth). There exist other variants for historical reasons, including some uses that exist because of mergers and acquisitions; for example, Duke University, which was called Trinity College until the 1920s, still calls its main undergraduate subdivision Trinity College of Arts and Sciences.
Some American universities, such as Princeton, Rice, and Yale have established residential colleges (sometimes, as at Harvard, the first to establish such a system in the 1930s, known as houses) along the lines of Oxford or Cambridge. Unlike the Oxbridge colleges, but similarly to Durham, these residential colleges are not autonomous legal entities nor are they typically much involved in education itself, being primarily concerned with room, board, and social life. At the University of Michigan, University of California, San Diego and the University of California, Santa Cruz, each residential college teaches its own core writing courses and has its own distinctive set of graduation requirements.
Many U.S. universities have placed increased emphasis on their residential colleges in recent years. This is exemplified by the creation of new colleges at Ivy League schools such as Yale University and Princeton University, and efforts to strengthen the contribution of the residential colleges to student education, including through a 2016 taskforce at Princeton on residential colleges.
The founders of the first institutions of higher education in the United States were graduates of the University of Oxford and the University of Cambridge. The small institutions they founded would not have seemed to them like universities – they were tiny and did not offer the higher degrees in medicine and theology. Furthermore, they were not composed of several small colleges. Instead, the new institutions felt like the Oxford and Cambridge colleges they were used to – small communities, housing and feeding their students, with instruction from residential tutors (as in the United Kingdom, described above). When the first students graduated, these "colleges" assumed the right to confer degrees upon them, usually with authority—for example, The College of William & Mary has a royal charter from the British monarchy allowing it to confer degrees while Dartmouth College has a charter permitting it to award degrees "as are usually granted in either of the universities, or any other college in our realm of Great Britain."
The leaders of Harvard College (which granted America's first degrees in 1642) might have thought of their college as the first of many residential colleges that would grow up into a New Cambridge university. However, over time, few new colleges were founded there, and Harvard grew and added higher faculties. Eventually, it changed its title to university, but the term "college" had stuck and "colleges" have arisen across the United States.
In U.S. usage, the word "college" not only embodies a particular type of school, but has historically been used to refer to the general concept of higher education when it is not necessary to specify a school, as in "going to college" or "college savings accounts" offered by banks.
In a survey of more than 2,000 college students in 33 states and 156 different campuses, the U.S. Public Interest Research Group found the average student spends as much as $1,200 each year on textbooks and supplies alone. By comparison, the group says that's the equivalent of 39 percent of tuition and fees at a community college, and 14 percent of tuition and fees at a four-year public university.
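Taken at face value, those percentages imply annual tuition and fees of roughly $1,200 / 0.39 ≈ $3,080 at a community college and $1,200 / 0.14 ≈ $8,570 at a four-year public university (a back-of-envelope inference from the survey's own figures, not amounts reported by the group).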
In addition to private colleges and universities, the U.S. also has a system of government funded, public universities. Many were founded under the Morrill Land-Grant Colleges Act of 1862. A movement had arisen to bring a form of more practical higher education to the masses, as "...many politicians and educators wanted to make it possible for all young Americans to receive some sort of advanced education." The Morrill Act "...made it possible for the new western states to establish colleges for the citizens." Its goal was to make higher education more easily accessible to the citizenry of the country, specifically to improve agricultural systems by providing training and scholarship in the production and sales of agricultural products, and to provide formal education in "...agriculture, home economics, mechanical arts, and other professions that seemed practical at the time."
The act was later extended to allow all states that had remained with the Union during the American Civil War, and eventually all states, to establish such institutions. Most of the colleges established under the Morrill Act have since become full universities, and some are among the elite of the world.
Selection of a four-year college as compared to a two-year junior college, even by marginal students such as those with a C+ grade average in high school and SAT scores in the mid 800s, increases the probability of graduation and confers substantial economic and social benefits.
In Bangladesh, educational institutions offering higher secondary (11th–12th grade) education are known as colleges.
In Hong Kong, the term 'college' is used by tertiary institutions as either part of their names or to refer to a constituent part of the university, such as the colleges in the collegiate The Chinese University of Hong Kong; or to a residence hall of a university, such as St. John's College, University of Hong Kong. Many older secondary schools have the term 'college' as part of their names.
The modern system of education was heavily influenced by the British starting in 1835.
In India, the term "college" is commonly reserved for institutions that offer high school diplomas at year 12 ("Junior College", similar to American high schools), and those that offer the bachelor's degree; some colleges, however, offer programmes up to PhD level. Generally, colleges are located in different parts of a state and all of them are affiliated to a regional university. The colleges offer programmes leading to degrees of that university. Colleges may be either Autonomous or non-autonomous. Autonomous Colleges are empowered to establish their own syllabus, and conduct and assess their own examinations; in non-autonomous colleges, examinations are conducted by the university, at the same time for all colleges under its affiliation. There are several hundred universities and each university has affiliated colleges, often a large number.
The first liberal arts and sciences college in India was "Cottayam College", or the "Syrian College", established in Kerala in 1815. The first inter-linguistic residential educational institution in Asia was started at this college; at present it is a theological seminary, popularly known as the Orthodox Theological Seminary or Old Seminary. It was followed by CMS College, Kottayam, established in 1817, and Presidency College, Kolkata, also established in 1817 and initially known as Hindu College. The first college for the study of Christian theology and ecumenical enquiry was Serampore College (1818). The first missionary institution to impart Western-style education in India was the Scottish Church College, Calcutta (1830). The first commerce and economics college in India was Sydenham College, Mumbai (1913).
In India a new category has been introduced: autonomous institutes and colleges. An autonomous college must still be affiliated with a university. Such colleges can set their own admission procedures, examination syllabi, fee structures, and so on; however, on completion of a course they cannot issue their own degrees or diplomas. The final degree or diploma is issued by the affiliating university. Significant changes under the National Education Policy 2020 (NEP) may also affect the present guidelines for universities and colleges.
In Israel, any non-university higher-learning facility is called a college. Institutions accredited by the Council for Higher Education in Israel (CHE) to confer a bachelor's degree are called "Academic Colleges" (Hebrew: מִכְלָלָה, romanized: Mikhlala; plural Hebrew: מכללות, romanized: Mikhlalot). These colleges (at least four as of 2012) may also offer master's degrees and act as research facilities. There are also over twenty teacher-training colleges or seminaries, most of which may award only a Bachelor of Education (BEd) degree.
Following the Portuguese usage, the term "college" (colégio) in Macau has traditionally been used in the names for private (and non-governmental) pre-university educational institutions, which correspond to form one to form six level tiers. Such schools are usually run by the Roman Catholic church or missionaries in Macau. Examples include Chan Sui Ki Perpetual Help College, Yuet Wah College, and Sacred Heart Canossian College.
In the Philippines, colleges usually refer to institutions of learning that grant degrees but whose scholastic fields are not as diverse as that of a university (University of Santo Tomas, University of the Philippines, Ateneo de Manila University, De La Salle University, Far Eastern University, and AMA University), such as the San Beda College which specializes in law, AMA Computer College whose campuses are spread all over the Philippines which specializes in information and computing technologies, and the Mapúa Institute of Technology which specializes in engineering, or to component units within universities that do not grant degrees but rather facilitate the instruction of a particular field, such as a College of Science and College of Engineering, among many other colleges of the University of the Philippines.
A state college may not have the word "college" on its name, but may have several component colleges, or departments. Thus, the Eulogio Amang Rodriguez Institute of Science and Technology is a state college by classification.
Usually, the term "college" is also thought of as a hierarchical demarcation between the term "university", and quite a number of colleges seek to be recognized as universities as a sign of improvement in academic standards (Colegio de San Juan de Letran, San Beda College), and increase in the diversity of the offered degree programs (called "courses"). For private colleges, this may be done through a survey and evaluation by the Commission on Higher Education and accrediting organizations, as was the case of Urios College which is now the Fr. Saturnino Urios University. For state colleges, it is usually done by a legislation by the Congress or Senate. In common usage, "going to college" simply means attending school for an undergraduate degree, whether it's from an institution recognized as a college or a university.
When referring to the level of education, "college" is the term more commonly used as a synonym for tertiary or higher education. A student pursuing or having completed an undergraduate degree at an institution with either "college" or "university" in its name is considered to be going to, or to have gone to, college.
The term "college" in Singapore is generally only used for pre-university educational institutions called "Junior Colleges", which provide the final two years of secondary education (equivalent to sixth form in British terms or grades 11–12 in the American system). Since 1 January 2005, the term also refers to the three campuses of the Institute of Technical Education with the introduction of the "collegiate system", in which the three institutions are called ITE College East, ITE College Central, and ITE College West respectively.
The term "university" is used to describe higher-education institutions offering locally conferred degrees. Institutions offering diplomas are called "polytechnics", while other institutions are often referred to as "institutes" and so forth.
There are several professional and vocational institutions that offer post-secondary education without granting degrees that are referred to as "colleges". This includes the Sri Lanka Law College, the many Technical Colleges and Teaching Colleges.
In Turkey, the term "kolej" (college) refers to a private high school, typically preceded by one year of preparatory language education. Notable Turkish colleges include Robert College, Uskudar American Academy, American Collegiate Institute and Tarsus American College.
Although the term "college" is hardly used in any context at any university in South Africa, some non-university tertiary institutions call themselves colleges. These include teacher training colleges, business colleges and wildlife management colleges. See: List of universities in South Africa#Private colleges and universities; List of post secondary institutions in South Africa.
The term college is mainly used by private or independent secondary schools with Advanced Level (Upper 6th formers) and also by polytechnic colleges, which confer diplomas only. A student can complete secondary education (International General Certificate of Secondary Education, IGCSE) at 16 years and proceed straight to a polytechnic college, or can proceed to Advanced Level (16 to 19 years) and obtain a General Certificate of Education (GCE) certificate, which enables enrolment at a university, provided the grades are good. Alternatively, with lower grades, GCE certificate holders have an added advantage over their GCSE counterparts if they choose to enroll at a polytechnic college. Some schools in Zimbabwe choose to offer International Baccalaureate studies as an alternative to the IGCSE and GCE.
Kollegio (in Greek Κολλέγιο) refers to the Centers of Post-Lyceum Education (in Greek Κέντρο Μεταλυκειακής Εκπαίδευσης, abbreviated as KEME), which are principally private and belong to the Greek post-secondary education system. Some of them have links to EU or US higher education institutions or accreditation organizations, such as the NEASC. Kollegio (or Kollegia in plural) may also refer to private non-tertiary schools, such as the Athens College.
In Ireland the term "college" is normally used to describe an institution of tertiary education. University students often say they attend "college" rather than "university". Until 1989, no university provided teaching or research directly; they were formally offered by a constituent college of the university.
There are a number of secondary education institutions that traditionally used the word "college" in their names: these are either older, private schools (such as Belvedere College, Gonzaga College, Castleknock College, and St. Michael's College) or what were formerly a particular kind of secondary school. These secondary schools, formerly known as "technical colleges," were renamed "community colleges," but remain secondary schools.
The country's only ancient university is the University of Dublin. Created during the reign of Elizabeth I, it is modelled on the collegiate universities of Cambridge and Oxford. However, only one constituent college was ever founded, hence the curious position of Trinity College Dublin today; although both are usually considered one and the same, the university and college are completely distinct corporate entities with separate and parallel governing structures.
Among more modern foundations, the National University of Ireland, founded in 1908, consisted of constituent colleges and recognised colleges until 1997. The former are now referred to as constituent universities – institutions that are essentially universities in their own right. The National University can trace its existence back to 1850 and the creation of the Queen's University of Ireland and the creation of the Catholic University of Ireland in 1854. From 1880, the degree awarding roles of these two universities were taken over by the Royal University of Ireland, which remained until the creation in 1908 of the National University and Queen's University Belfast.
The state's two new universities, Dublin City University and University of Limerick, were initially National Institute for Higher Education institutions. These institutions offered university level academic degrees and research from the start of their existence and were awarded university status in 1989 in recognition of this.
Third level technical education in the state has been carried out in the Institutes of Technology, which were established from the 1970s as Regional Technical Colleges. These institutions have delegated authority which entitles them to give degrees and diplomas from Quality and Qualifications Ireland (QQI) in their own names.
A number of private colleges exist such as Dublin Business School, providing undergraduate and postgraduate courses validated by QQI and in some cases by other universities.
Other types of college include colleges of education, such as the Church of Ireland College of Education. These are specialist institutions, often linked to a university, which provide both undergraduate and postgraduate academic degrees for people who want to train as teachers.
A number of state-funded further education colleges exist – which offer vocational education and training in a range of areas from business studies and information and communications technology to sports injury therapy. These courses are usually one, two or less often three years in duration and are validated by QQI at Levels 5 or 6, or for the BTEC Higher National Diploma award, which is a Level 6/7 qualification, validated by Edexcel. There are numerous private colleges (particularly in Dublin and Limerick) which offer both further and higher education qualifications. These degrees and diplomas are often certified by foreign universities/international awarding bodies and are aligned to the National Framework of Qualifications at Levels 6, 7 and 8.
In the Netherlands there are 3 main educational routes after high school.
HBO graduates can be awarded two titles, which are Baccalaureus (bc.) and Ingenieur (ing.). At a WO institution, many more bachelor's and master's titles can be awarded. Bachelor's degrees: Bachelor of Arts (BA), Bachelor of Science (BSc) and Bachelor of Laws (LLB). Master's degrees: Master of Arts (MA), Master of Laws (LLM) and Master of Science (MSc). The PhD title is a research degree awarded upon completion and defense of a doctoral thesis.
Presently in Portugal, the term colégio (college) is normally used as a generic reference to a private (non-government) school that provides from basic to secondary education. Many of the private schools include the term colégio in their name. Some special public schools – usually of the boarding school type – also include the term in their name, with a notable example being the Colégio Militar (Military College). The term colégio interno (literally "internal college") is used specifically as a generic reference to a boarding school.
Until the 19th century, a colégio was usually a secondary or pre-university school, of public or religious nature, where the students usually lived together. A model for these colleges was the Royal College of Arts and Humanities, founded in Coimbra by King John III of Portugal in 1542.
Further education (FE) colleges and sixth form colleges are institutions providing further education to students over 16. Some of these also provide higher education courses (see below). In the context of secondary education, 'college' is used in the names of some private schools, e.g. Eton College and Winchester College.
In higher education, a college is normally a provider that does not hold university status, although it can also refer to a constituent part of a collegiate or federal university or a grouping of academic faculties or departments within a university. Traditionally the distinction between colleges and universities was that colleges did not award degrees while universities did, but this is no longer the case with NCG having gained taught degree awarding powers (the same as some universities) on behalf of its colleges, and many of the colleges of the University of London holding full degree awarding powers and being effectively universities. Most colleges, however, do not hold their own degree awarding powers and continue to offer higher education courses that are validated by universities or other institutions that can award degrees.
In England, as of August 2016, over 60% of the higher education providers directly funded by HEFCE (208/340) are sixth-form or further education colleges, often termed colleges of further and higher education, along with 17 colleges of the University of London, one university college, 100 universities, and 14 other providers (six of which use 'college' in their name). Overall, this means over two-thirds of state-supported higher education providers in England are colleges of one form or another. Many private providers are also called colleges, e.g. the New College of the Humanities and St Patrick's College, London.
Colleges within universities vary immensely in their responsibilities. The large constituent colleges of the University of London are effectively universities in their own right; colleges in some universities, including those of the University of the Arts London and smaller colleges of the University of London, run their own degree courses but do not award degrees; those at the University of Roehampton provide accommodation and pastoral care as well as delivering the teaching on university courses; those at Oxford and Cambridge deliver some teaching on university courses as well as providing accommodation and pastoral care; and those in Durham, Kent, Lancaster and York provide accommodation and pastoral care but do not normally participate in formal teaching. The legal status of these colleges also varies widely, with University of London colleges being independent corporations and recognised bodies, Oxbridge colleges, colleges of the University of the Highlands and Islands (UHI) and some Durham colleges being independent corporations and listed bodies, most Durham colleges being owned by the university but still listed bodies, and those of other collegiate universities not having formal recognition. When applying for undergraduate courses through UCAS, University of London colleges are treated as independent providers, colleges of Oxford, Cambridge, Durham and UHI are treated as locations within the universities that can be selected by specifying a 'campus code' in addition to selecting the university, and colleges of other universities are not recognised.
The UHI and the University of Wales Trinity Saint David (UWTSD) both include further education colleges. However, while the UHI colleges integrate FE and HE provision, UWTSD maintains a separation between the university campuses (Lampeter, Carmarthen and Swansea) and the two colleges (Coleg Sir Gâr and Coleg Ceredigion; n.b. coleg is Welsh for college), which although part of the same group are treated as separate institutions rather than colleges within the university.
A university college is an independent institution with the power to award taught degrees, but which has not been granted university status. University College is a protected title that can only be used with permission. Note, however, that University College London, University College, Oxford and University College, Durham are colleges within their respective universities and not university colleges (in the case of UCL, it holds full degree-awarding powers that set it above a university college), while University College Birmingham is a university in its own right and also not a university college.
In Australia a college may be an institution of tertiary education that is smaller than a university, run independently or as part of a university. Following a reform in the 1980s, many of the formerly independent colleges now belong to larger universities.
Within universities, there are residential colleges, called university colleges, which provide residence for both undergraduate and postgraduate students. These colleges often provide additional tutorial assistance, and some host theological study. Many colleges have strong traditions and rituals, combining dormitory-style accommodation with a fraternity or sorority culture.
Most technical and further education institutions (TAFEs), which offer certificate and diploma vocational courses, are styled "TAFE colleges" or "Colleges of TAFE". In some places, such as Tasmania, college refers to a type of school for Year 11 and 12 students, e.g. Don College.
The constituent colleges of the former University of New Zealand (such as Canterbury University College) have become independent universities. Some halls of residence associated with New Zealand universities retain the name of "college", particularly at the University of Otago (which, although brought under the umbrella of the University of New Zealand, already possessed university status and degree-awarding powers). The institutions formerly known as "teacher-training colleges" now style themselves "colleges of education".
Some universities, such as the University of Canterbury, have divided their university into constituent administrative "Colleges" – the College of Arts containing departments that teach Arts, Humanities and Social Sciences, the College of Science containing Science departments, and so on. This largely follows the Cambridge model, discussed above.
As in the United Kingdom, some professional bodies in New Zealand style themselves as "colleges", for example the Royal Australasian College of Surgeons and the Royal Australasian College of Physicians.
In some parts of the country, secondary schools are often referred to as colleges, and the term is used interchangeably with "high school". This sometimes confuses people from other parts of New Zealand. But in all parts of the country many secondary schools have "College" in their name, such as Rangitoto College, New Zealand's largest secondary school.
https://en.wikipedia.org/wiki/College
5,690 | Chalmers University of Technology | Chalmers University of Technology (Swedish: Chalmers tekniska högskola, commonly referred to as Chalmers) is a private research university located in Gothenburg, Sweden. Chalmers focuses on engineering and science, but more broadly it also conducts research and offers education in shipping, architecture and management. The university has approximately 3,100 employees and 10,000 students.
Since 2012, Chalmers has continuously held the titles of both the best-known and the best-reputed university in Sweden, according to annual public surveys. It also enjoys a strong international reputation, recognized for excellence in engineering education and research. Chalmers is consistently ranked among the world's top 100 universities in engineering and technology, and is considered one of Europe's leading technical universities.
Chalmers is coordinating the Graphene Flagship, the European Union's biggest research initiative to bring graphene innovation out of the lab and into commercial applications, and leading the development of a Swedish quantum computer.
The university is a co-founder of the CDIO Initiative, a member of the UNITECH International program, the IDEA League, the Nordic Five Tech, and the ENHANCE alliances as well as the EURECOM consortium and the CESAER network.
Chalmers was founded in 1829 following a donation by William Chalmers, a director of the Swedish East India Company, who left part of his fortune for the establishment of an "industrial school". The university was run as a private institution until 1937, when it became Sweden's second state-owned technical university. In 1994 the government of Sweden reorganised Chalmers into a private company (aktiebolag) owned by a government-controlled foundation. Chalmers is one of only three universities in Sweden named after a person, the other two being Karolinska Institutet and Linnaeus University.
Chalmers University of Technology is organised into 13 departments.
Furthermore, Chalmers is home to six Areas of Advance and six national competence centers in key fields such as materials, mathematical modelling, environmental science, and vehicle safety.
Chalmers University of Technology's research infrastructure ranges from advanced physical and virtual laboratories to large databases, computing capacity for large-scale calculations, and other research facilities.
Since 2012, Chalmers has held the highest reputation among Swedish universities in Kantar Sifo's Reputation Index. According to the survey, Chalmers is the most well-known university in Sweden and is regarded as a successful, competitive, high-class institution that makes a large contribution to society and has credibility in the media.
Moreover, the European Commission has recognized Chalmers as one of Europe's top universities, while, based on U-Multirank 2022, Chalmers was characterized as a top-performing university across various indicators (i.e., teaching & learning, research, knowledge transfer and international orientation), with the highest number of 'A' (very good) scores at the institutional level in Sweden.
Additionally, in 2018, a benchmarking report from MIT ranked Chalmers among the world's top 10 in engineering education, while in 2020 the World University Research Rankings placed Chalmers 12th in the world based on the evaluation of three key research aspects, namely research multi-disciplinarity, research impact, and research cooperativeness.
Finally, in the 2011 International Professional Ranking of Higher Education Institutions, which is established on the basis of the number of alumni holding a post of Chief Executive Officer (CEO) or equivalent in one of the Fortune Global 500 companies, Chalmers ranked 38th in the world, 1st in Sweden and 15th in Europe.
Chalmers is a member of the IDEA League network, a strategic alliance between five leading European universities of science and technology. The scope of the network is to provide the environment for students, researchers and staff to share knowledge, experience and resources.
Moreover, Chalmers is a partner of UNITECH International, an organization consisting of distinguished technical universities and multinational companies across Europe. UNITECH helps bridge the gap between the industrial and academic worlds, offering exchange programs consisting of studies as well as an integrated internship at one of the corporate partners.
Chalmers is also a member of the Nordic Five Tech network, a strategic alliance of the five leading technical universities in Denmark, Finland, Norway and Sweden. The Nordic Five Tech universities are amongst the top international technical universities with the goal of creating synergies within education, research and innovation.
Additionally, Chalmers is a member of ENHANCE, an alliance of ten leading universities of technology shaping the future of Europe and driving transformation in science and society. The partner institutions have a history of solid cooperation in EU programmes and joint research projects.
Furthermore, Chalmers is a member of CESAER, a European association of universities of science and technology. Among the requirements for a university to be a member of CESAER are providing excellent science and technology research, education and innovation, and holding a leading position in its region, its country and beyond.
Additionally, Chalmers has established formal agreements with three leading materials science centers: University of California, Santa Barbara, ETH Zurich and Stanford University. Within the framework of the agreements, a yearly bilateral workshop is organized, and exchange of researchers is supported.
Chalmers has general exchange agreements with many European and U.S. universities and maintains a special exchange program agreement with National Chiao Tung University (NCTU) in Taiwan, where exchange students from the two universities maintain offices to, among other things, help local students apply and prepare for an exchange year and act as representatives.
Finally, Chalmers has strong partnerships with major industries such as Ericsson, Volvo, Saab AB and AstraZeneca.
Approximately 40% of Sweden's graduate engineers and architects are educated at Chalmers. Each year, around 250 postgraduate degrees and 850 graduate degrees are awarded. About 1,000 post-graduate students attend programmes at the university, and many students take Master of Science engineering programmes and the Master of Architecture programme. Since 2007, all master's programmes have been taught in English for both national and international students. This was a result of the adaptation to the Bologna process, which started at Chalmers in 2004 (the first technical university in Sweden to do so).
Currently, about 10% of all students at Chalmers come from countries outside Sweden to enrol in a master's or PhD program.
Around 2,700 students also attend Bachelor of Science engineering programmes, merchant marine and other undergraduate courses at Campus Lindholmen. Chalmers also shares some students with Gothenburg University in the joint IT University project. The IT University focuses exclusively on information technology and offers bachelor's and master's programmes with degrees issued from either Chalmers or Gothenburg University, depending on the programme.
Chalmers confers honorary doctoral degrees to people outside the university who have shown great merit in their research or in society.
Chalmers is an aktiebolag with 100 shares à 1,000 SEK, all of which are owned by the Chalmers University of Technology Foundation, a private foundation, which appoints the university board and the president. The foundation's members are appointed as follows: the Swedish government appoints 4 to 8 seats, the departments appoint one member, the student union appoints one member, and the president automatically holds one seat. Each department is led by a department head, usually a member of the faculty of that department. The faculty senate represents members of the faculty when decisions are taken.
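To make the ownership and appointment figures above concrete, here is a minimal Python sketch; the share count, share value and seat numbers come straight from the text, while the function names and the printed example are purely illustrative assumptions:

```python
# Governance arithmetic for Chalmers as described above.
SHARES = 100             # aktiebolag shares, all owned by the foundation
SHARE_VALUE_SEK = 1_000  # nominal value per share

def share_capital_sek() -> int:
    """Total nominal share capital: 100 shares at 1,000 SEK each."""
    return SHARES * SHARE_VALUE_SEK

def foundation_size(government_seats: int) -> int:
    """Foundation membership: 4 to 8 government seats plus one departmental
    member, one student-union member and the president."""
    if not 4 <= government_seats <= 8:
        raise ValueError("the government appoints 4 to 8 seats")
    return government_seats + 3

print(share_capital_sek())                     # 100000 SEK
print(foundation_size(4), foundation_size(8))  # 7 11 members
```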
In 1937, the school moved from the city centre to the new Gibraltar Campus, named after the mansion which owned the grounds, where it is now located. The Lindholmen College Campus was created in the early 1990s and is located on the island Hisingen. Campus Johanneberg and Campus Lindholmen, as they are now called, are connected by bus lines.
Traditions include the graduation ceremony and the Cortège procession, an annual public event.
Although the official Swedish title for the head is "rektor", the university now uses "President" as the English translation.
57°41′18″N 11°58′36″E / 57.68833°N 11.97667°E / 57.68833; 11.97667 | [
{
"paragraph_id": 0,
"text": "Chalmers University of Technology (Swedish: Chalmers tekniska högskola, commonly referred to as Chalmers) is a private research university located in Gothenburg, Sweden. Chalmers focuses on engineering and science, but more broadly it also conducts research and offers education in shipping, architecture and management. The university has approximately 3100 employees and 10,000 students.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Since 2012, Chalmers has continuously held the titles of both the most well-known and the best-reputed university in Sweden, according to annual public surveys. Moreover, it is highly reputable internationally, recognized for its excellence in engineering education and research. Chalmers is consistently ranked among the world's top 100 universities in engineering and technology, and is considered one of Europe's leading technical universities.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Chalmers is coordinating the Graphene Flagship, the European Union's biggest research initiative to bring graphene innovation out of the lab and into commercial applications, and leading the development of a Swedish quantum computer.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The university is a co-founder of the CDIO Initiative, a member of the UNITECH International program, the IDEA League, the Nordic Five Tech, and the ENHANCE alliances as well as the EURECOM consortium and the CESAER network.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Chalmers was founded in 1829 following a donation by William Chalmers, a director of the Swedish East India Company. He donated part of his fortune for the establishment of an \"industrial school\". The university was run as a private institution until 1937 when it became the second state-owned technical university. In 1994 the government of Sweden reorganised Chalmers into a private company (aktiebolag) owned by a government-controlled foundation. Chalmers is one of only three universities in Sweden which are named after a person, the other two being Karolinska Institutet and Linnaeus University.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Chalmers University of Technology has the following 13 departments:",
"title": "Departments"
},
{
"paragraph_id": 6,
"text": "Furthermore, Chalmers is home to six Areas of Advance and six national competence centers in key fields such as materials, mathematical modelling, environmental science, and vehicle safety.",
"title": "Departments"
},
{
"paragraph_id": 7,
"text": "Chalmers University of Technology's research infrastructure includes everything from advanced real or virtual labs to large databases, computer capacity for large-scale calculations and research facilities.",
"title": "Research infrastructure"
},
{
"paragraph_id": 8,
"text": "Since 2012, Chalmers has achieved the highest reputation for Swedish Universities by the Kantar Sifo's Reputation Index. According to the survey, Chalmers is the most well-known university in Sweden regarded as a successful and competitive high-class institution with a large contribution to society and credibility in media.",
"title": "Rankings and reputation"
},
{
"paragraph_id": 9,
"text": "Moreover, the European Commission has recognized Chalmers as one of Europe's top universities, while based on the U-Multirank 2022, Chalmers characterized as a top performing university across various indicators (i.e., teaching & learning, research, knowledge transfer and international orientation) with the highest number of ‘A’ (very good) scores on the institutional level for Sweden.",
"title": "Rankings and reputation"
},
{
"paragraph_id": 10,
"text": "Additionally, in 2018, a benchmarking report from MIT ranked Chalmers top 10 in the world of engineering education, while in 2020, the World University Research Rankings placed Chalmers 12th in the world based on the evaluation of three key research aspects, namely research multi-disciplinarity, research impact, and research cooperativeness.",
"title": "Rankings and reputation"
},
{
"paragraph_id": 11,
"text": "Finally, in 2011, the International Professional Ranking of Higher Education Institutions, which is established on the basis of the number of alumni holding a post of Chief Executive Officer (CEO) or equivalent in one of the Fortune Global 500 companies, Chalmers ranked 38th in the world, ranking 1st in Sweden and 15th in Europe.",
"title": "Rankings and reputation"
},
{
"paragraph_id": 12,
"text": "Chalmers is a member of the IDEA League network, a strategic alliance between five leading European universities of science and technology. The scope of the network is to provide the environment for students, researchers and staff to share knowledge, experience and resources.",
"title": "Ties and partnerships"
},
{
"paragraph_id": 13,
"text": "Moreover, Chalmers is a partner of the UNITECH International, an organization consisting of distinguished technical universities and multinational companies across Europe. UNITECH helps bridge the gap between the industrial and academic world offering exchange programs consisting of studies as well as an integrated internship at one of the corporate partners.",
"title": "Ties and partnerships"
},
{
"paragraph_id": 14,
"text": "Chalmers is also a member of the Nordic Five Tech network, a strategic alliance of the five leading technical universities in Denmark, Finland, Norway and Sweden. The Nordic Five Tech universities are amongst the top international technical universities with the goal of creating synergies within education, research and innovation.",
"title": "Ties and partnerships"
},
{
"paragraph_id": 15,
"text": "Additionally, Chalmers is a member of the ENHANCE, an alliance of ten leading Universities of Technology shaping the future of Europe and driving transformation in science and society. The partner institutions have a history of solid cooperation in EU programmes and joint research projects.",
"title": "Ties and partnerships"
},
{
"paragraph_id": 16,
"text": "Furthermore, Chalmers is a member of CESAER, a European association of universities of science and technology. Among others, the requirements for a university to be a member of CESAER is to provide excellent science and technology research, education and innovation as well as to have a leading position in their region, their country and beyond.",
"title": "Ties and partnerships"
},
{
"paragraph_id": 17,
"text": "Additionally, Chalmers has established formal agreements with three leading materials science centers: University of California, Santa Barbara, ETH Zurich and Stanford University. Within the framework of the agreements, a yearly bilateral workshop is organized, and exchange of researchers is supported.",
"title": "Ties and partnerships"
},
{
"paragraph_id": 18,
"text": "Chalmers has general exchange agreements with many European and U.S. universities and maintains a special exchange program agreement with National Chiao Tung University (NCTU) in Taiwan where the exchange students from the two universities maintain offices for, among other things, helping local students with applying and preparing for an exchange year as well as acting as representatives.",
"title": "Ties and partnerships"
},
{
"paragraph_id": 19,
"text": "Finally, Chalmers has strong partnerships with major industries such as Ericsson, Volvo, Saab AB and AstraZeneca.",
"title": "Ties and partnerships"
},
{
"paragraph_id": 20,
"text": "Approximately 40% of Sweden's graduate engineers and architects are educated at Chalmers. Each year, around 250 postgraduate degrees are awarded as well as 850 graduate degrees. About 1,000 post-graduate students attend programmes at the university, and many students are taking Master of Science engineering programmes and the Master of Architecture programme. Since 2007, all master's programmes are taught in English for both national and international students. This was a result of the adaptation to the Bologna process that started in 2004 at Chalmers (as the first technical university in Sweden).",
"title": "Students"
},
{
"paragraph_id": 21,
"text": "Currently, about 10% of all students at Chalmers come from countries outside Sweden to enrol in a master's or PhD program.",
"title": "Students"
},
{
"paragraph_id": 22,
"text": "Around 2,700 students also attend Bachelor of Science engineering programmes, merchant marine and other undergraduate courses at Campus Lindholmen. Chalmers also shares some students with Gothenburg University in the joint IT University project. The IT University focuses exclusively on information technology and offers bachelor's and master's programmes with degrees issued from either Chalmers or Gothenburg University, depending on the programme.",
"title": "Students"
},
{
"paragraph_id": 23,
"text": "Chalmers confers honorary doctoral degrees to people outside the university who have shown great merit in their research or in society.",
"title": "Students"
},
{
"paragraph_id": 24,
"text": "Chalmers is an aktiebolag with 100 shares à 1,000 SEK, all of which are owned by the Chalmers University of Technology Foundation, a private foundation, which appoints the university board and the president. The foundation has its members appointed by the Swedish government (4 to 8 seats), the departments appoint one member, the student union appoints one member and the president automatically gains one chair. Each department is led by a department head, usually a member of the faculty of that department. The faculty senate represents members of the faculty when decisions are taken.",
"title": "Organization"
},
{
"paragraph_id": 25,
"text": "In 1937, the school moved from the city centre to the new Gibraltar Campus, named after the mansion which owned the grounds, where it is now located. The Lindholmen College Campus was created in the early 1990s and is located on the island Hisingen. Campus Johanneberg and Campus Lindholmen, as they are now called, are connected by bus lines.",
"title": "Campuses"
},
{
"paragraph_id": 26,
"text": "Traditions include the graduation ceremony and the Cortège procession, an annual public event.",
"title": "Student societies and traditions"
},
{
"paragraph_id": 27,
"text": "Although the official Swedish title for the head is \"rektor\", the university now uses \"President\" as the English translation.",
"title": "Presidents"
},
{
"paragraph_id": 28,
"text": "57°41′18″N 11°58′36″E / 57.68833°N 11.97667°E / 57.68833; 11.97667",
"title": "External links"
}
] | Chalmers University of Technology is a private research university located in Gothenburg, Sweden. Chalmers focuses on engineering and science, but more broadly it also conducts research and offers education in shipping, architecture and management. The university has approximately 3100 employees and 10,000 students. Since 2012, Chalmers has continuously held the titles of both the most well-known and the best-reputed university in Sweden, according to annual public surveys. Moreover, it is highly reputable internationally, recognized for its excellence in engineering education and research.
Chalmers is consistently ranked among the world's top 100 universities in engineering and technology, and is considered one of Europe's leading technical universities. Chalmers is coordinating the Graphene Flagship, the European Union's biggest research initiative to bring graphene innovation out of the lab and into commercial applications, and leading the development of a Swedish quantum computer. The university is a co-founder of the CDIO Initiative, a member of the UNITECH International program, the IDEA League, the Nordic Five Tech, and the ENHANCE alliances as well as the EURECOM consortium and the CESAER network. | 2002-02-25T15:51:15Z | 2023-12-31T21:14:41Z | [
"Template:Citation needed",
"Template:Webarchive",
"Template:IDEA League",
"Template:Coord",
"Template:Use dmy dates",
"Template:Infobox university",
"Template:Cite news",
"Template:Swedish universities",
"Template:Authority control",
"Template:Short description",
"Template:Cite web",
"Template:Top Industrial Managers for Europe",
"Template:CESAER",
"Template:Lang-sv",
"Template:Infobox university rankings",
"Template:Reflist",
"Template:Projects at Chalmers University of Technology",
"Template:CDIO"
] | https://en.wikipedia.org/wiki/Chalmers_University_of_Technology |
5,691 | Codex | The codex (pl.: codices /ˈkoʊdɪsiːz/) was the historical ancestor of the modern book. Instead of being composed of sheets of paper, it used sheets of vellum, papyrus, or other materials. The term codex is often used for ancient manuscript books, with handwritten contents. A codex, much like the modern book, is bound by stacking the pages and securing one set of edges by a variety of methods over the centuries, yet in a form analogous to modern bookbinding. Modern books are divided into paperback (or softback) and those bound with stiff boards, called hardbacks. Elaborate historical bindings are called treasure bindings. At least in the Western world, the main alternative to the paged codex format for a long document was the continuous scroll, which was the dominant form of document in the ancient world. Some codices are continuously folded like a concertina, in particular the Maya codices and Aztec codices, which are actually long sheets of paper or animal skin folded into pages. In Japan, concertina-style codices called orihon developed during the Heian period (794–1185) were made of paper.
The Ancient Romans developed the form from wax tablets. The gradual replacement of the scroll by the codex has been called the most important advance in book making before the invention of the printing press. The codex transformed the shape of the book itself, and offered a form that has lasted ever since. The spread of the codex is often associated with the rise of Christianity, which early on adopted the format for the Bible. First described in the 1st century of the Common Era, when the Roman poet Martial praised its convenient use, the codex achieved numerical parity with the scroll around 300 CE, and had completely replaced it throughout what was by then a Christianized Greco-Roman world by the 6th century.
The word codex comes from the Latin word caudex, meaning "trunk of a tree", "block of wood" or "book". The codex began to replace the scroll almost as soon as it was invented, although new finds add three centuries to its history (see below). In Egypt, by the fifth century, the codex outnumbered the scroll by ten to one based on surviving examples. By the sixth century, the scroll had almost vanished as a medium for literature. The change from rolls to codices roughly coincides with the transition from papyrus to parchment as the preferred writing material, but the two developments are unconnected. In fact, any combination of codices and scrolls with papyrus and parchment is technically feasible and common in the historical record.
Technically, even modern notebooks and paperbacks are codices, but publishers and scholars reserve the term for manuscript (hand-written) books produced from Late antiquity until the Middle Ages. The scholarly study of these manuscripts is sometimes called codicology. The study of ancient documents in general is called paleography.
The codex provided considerable advantages over other book formats, primarily its compactness, sturdiness, economic use of materials by using both sides (recto and verso), and ease of reference (a codex accommodates random access, as opposed to a scroll, which uses sequential access).
The Romans used precursors made of reusable wax-covered tablets of wood for taking notes and other informal writings. Two ancient polyptychs, a pentaptych and an octoptych excavated at Herculaneum, used a unique connecting system that presages later sewing on of thongs or cords. The first evidence of the use of papyrus in codex form comes from the Ptolemaic period in Egypt, as shown by a find at the University of Graz.
Julius Caesar may have been the first Roman to reduce scrolls to bound pages in the form of a note-book, possibly even as a papyrus codex. At the turn of the 1st century AD, a kind of folded parchment notebook called pugillares membranei in Latin became commonly used for writing in the Roman Empire. Theodore Cressy Skeat theorized that this form of notebook was invented in Rome and then spread rapidly to the Near East.
Codices are described in certain works by the Classical Latin poet, Martial. He wrote a series of five couplets meant to accompany gifts of literature that Romans exchanged during the festival of Saturnalia. Three of these books are specifically described by Martial as being in the form of a codex; the poet praises the compendiousness of the form (as opposed to the scroll), as well as the convenience with which such a book can be read on a journey. In another poem by Martial, the poet advertises a new edition of his works, specifically noting that it is produced as a codex, taking less space than a scroll and being more comfortable to hold in one hand. According to Theodore Cressy Skeat, this might be the first recorded known case of an entire edition of a literary work (not just a single copy) being published in codex form, though it was likely an isolated case and was not a common practice until a much later time.
In his discussion of one of the earliest parchment codices to survive from Oxyrhynchus in Egypt, Eric Turner seems to challenge Skeat's notion when stating, "its mere existence is evidence that this book form had a prehistory", and that "early experiments with this book form may well have taken place outside of Egypt." Early codices of parchment or papyrus appear to have been widely used as personal notebooks, for instance in recording copies of letters sent (Cicero Fam. 9.26.1). Early codices were not always cohesive. They often contained multiple languages, various topics and even multiple authors. "Such codices formed libraries in their own right." The parchment notebook pages were "more durable, and could withstand being folded and stitched to other sheets". Parchments whose writing was no longer needed were commonly washed or scraped for re-use, creating a palimpsest; the erased text, which can often be recovered, is older and usually more interesting than the newer text which replaced it. Consequently, writings in a codex were often considered informal and impermanent. Parchment (animal skin) was expensive, and therefore it was used primarily by the wealthy and powerful, who were also able to pay for textual design and color. "Official documents and deluxe manuscripts [in the late Middle Ages] were written in gold and silver ink on parchment...dyed or painted with costly purple pigments as an expression of imperial power and wealth."
As early as the early 2nd century, there is evidence that a codex—usually of papyrus—was the preferred format among Christians. In the library of the Villa of the Papyri, Herculaneum (buried in AD 79), all the texts (of Greek literature) are scrolls (see Herculaneum papyri). However, in the Nag Hammadi library, hidden about AD 390, all texts (Gnostic) are codices. Despite this comparison, a fragment of a non-Christian parchment codex of Demosthenes' De Falsa Legatione from Oxyrhynchus in Egypt demonstrates that the surviving evidence is insufficient to conclude whether Christians played a major or central role in the development of early codices—or if they simply adopted the format to distinguish themselves from Jews.
The earliest surviving fragments from codices come from Egypt, and are variously dated (always tentatively) towards the end of the 1st century or in the first half of the 2nd. This group includes the Rylands Library Papyrus P52, containing part of St John's Gospel, and perhaps dating from between 125 and 160.
In Western culture, the codex gradually replaced the scroll. Between the 4th century, when the codex gained wide acceptance, and the Carolingian Renaissance in the 8th century, many works that were not converted from scroll to codex were lost. The codex improved on the scroll in several ways. It could be opened flat at any page for easier reading, pages could be written on both front and back (recto and verso), and the protection of durable covers made it more compact and easier to transport.
The ancients stored codices with spines facing inward, and not always vertically. The spine could be used for the incipit, before the concept of a proper title developed in medieval times. Though most early codices were made of papyrus, papyrus was fragile and supplied from Egypt, the only place where papyrus grew. The more durable parchment and vellum gained favor, despite the cost.
The codices of pre-Columbian Mesoamerica (Mexico and Central America) had a similar appearance when closed to the European codex, but were instead made with long folded strips of either fig bark (amatl) or plant fibers, often with a layer of whitewash applied before writing. New World codices were written as late as the 16th century (see Maya codices and Aztec codices). Those written before the Spanish conquests seem all to have been single long sheets folded concertina-style, sometimes written on both sides of the amatl paper. There are significant codices produced in the colonial era, with pictorial and alphabetic texts in Spanish or an indigenous language such as Nahuatl.
In East Asia, the scroll remained standard for far longer than in the Mediterranean world. There were intermediate stages, such as scrolls folded concertina-style and pasted together at the back and books that were printed only on one side of the paper. This replaced traditional Chinese writing mediums such as bamboo and wooden slips, as well as silk and paper scrolls. The evolution of the codex in China began with folded-leaf pamphlets in the 9th century, during the late Tang dynasty (618–907), improved by the 'butterfly' bindings of the Song dynasty (960–1279), the wrapped back binding of the Yuan dynasty (1271–1368), the stitched binding of the Ming (1368–1644) and Qing dynasties (1644–1912), and finally the adoption of Western-style bookbinding in the 20th century. The initial phase of this evolution, the accordion-folded palm-leaf-style book, most likely came from India and was introduced to China via Buddhist missionaries and scriptures.
Judaism still retains the Torah scroll, at least for ceremonial use.
The first stage in creating a codex is to prepare the animal skin. The skin is washed with water and lime, but not together: it is soaked in the lime for a couple of days. The hair is then removed, and the skin is dried by attaching it to a frame, called a herse. The parchment maker attaches the skin at points around the circumference. The skin attaches to the herse by cords. To prevent it from being torn, the maker wraps the area of the skin attached to the cord around a pebble called a pippin. After completing that, the maker uses a crescent-shaped knife called a lunarium or lunellum to remove any remaining hairs. Once the skin completely dries, the maker gives it a deep clean and processes it into sheets. The number of sheets from a piece of skin depends on the size of the skin and the final product dimensions. For example, the average calfskin can provide three-and-a-half medium sheets of writing material, which can be doubled when they are folded into two conjoint leaves, also known as a bifolium. Historians have found evidence of manuscripts in which the scribe wrote down the medieval instructions now followed by modern membrane makers. Defects can often be found in the membrane, whether they are from the original animal, from human error during the preparation period, or from when the animal was killed. Defects can also appear during the writing process. Unless the manuscript is kept in perfect condition, defects can also appear later in its life.
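The yield arithmetic above can be made explicit. The following minimal Python sketch assumes only the figure quoted in the text (three-and-a-half medium sheets per average calfskin) plus the facts, stated elsewhere in this article, that a folded sheet gives a two-leaf bifolium and that both sides of a leaf (recto and verso) are written on; the rounding and the 400-page example are illustrative assumptions:

```python
import math

SHEETS_PER_CALFSKIN = 3.5  # average yield quoted above
LEAVES_PER_SHEET = 2       # one fold -> a bifolium of two conjoint leaves
PAGES_PER_LEAF = 2         # recto and verso are both written on

def pages_per_skin() -> float:
    """Writable pages obtainable from one average calfskin."""
    return SHEETS_PER_CALFSKIN * LEAVES_PER_SHEET * PAGES_PER_LEAF

def skins_needed(total_pages: int) -> int:
    """Whole calfskins required for a manuscript of the given page count."""
    return math.ceil(total_pages / pages_per_skin())

print(pages_per_skin())   # 14.0 pages per skin
print(skins_needed(400))  # a 400-page codex needs about 29 skins
```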
Firstly, the membrane must be prepared. The first step is to set up the quires. The quire is a group of several sheets put together. Raymond Clemens and Timothy Graham point out, in "Introduction to Manuscript Studies", that "the quire was the scribe's basic writing unit throughout the Middle Ages":
Pricking is the process of making holes in a sheet of parchment (or membrane) in preparation for ruling it. The lines were then made by ruling between the prick marks.... Ruling is the process of entering ruled lines on the page to serve as a guide for entering text. Most manuscripts were ruled with horizontal lines that served as the baselines on which the text was entered and with vertical bounding lines that marked the boundaries of the columns.
From the Carolingian period to the end of the Middle Ages, different styles of folding the quire came about. For example, in continental Europe throughout the Middle Ages, the quire was folded so that like sides of the membrane met: the hair side met the hair side, and the flesh side met the flesh side. This was not the style used in the British Isles, where the membrane was folded to produce an eight-leaf quire, with single leaves in the third and sixth positions. The next stage was tacking the quire. Tacking is when the scribe would hold the leaves of a quire together with thread. Once threaded together, the scribe would then sew a line of parchment up the "spine" of the manuscript to protect the tacking.
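The nesting of bifolia described above determines which leaves of a quire belong to the same physical sheet. A minimal Python sketch, assuming the simple case of fully nested bifolia (the insular eight-leaf quire, with singletons in the third and sixth positions, would break the (3, 6) pairing):

```python
def conjoint_pairs(bifolia: int = 4) -> list[tuple[int, int]]:
    """Leaves of a quire of n nested bifolia that share one physical sheet.
    With 2n leaves in all, leaf i is conjoint with leaf 2n + 1 - i."""
    leaves = 2 * bifolia
    return [(i, leaves + 1 - i) for i in range(1, bifolia + 1)]

# A quire of four nested bifolia (eight leaves):
print(conjoint_pairs(4))  # [(1, 8), (2, 7), (3, 6), (4, 5)]
```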
The materials a codex is made of are its support, and include papyrus, parchment (sometimes referred to as membrane or vellum), and paper. They are written and drawn on with metals, pigments and ink. The quality, size, and choice of support determine the status of a codex. Papyrus is found only in late antiquity and the early Middle Ages. Codices intended for display were bound with more durable materials than vellum. Parchment varied widely due to animal species and finish, and identification of the animals used to make it only began to be studied in the 21st century. How manufacturing influenced the final products, technique, and style is little understood. However, changes in style are underpinned more by variation in technique. Before the 14th and 15th centuries, paper was expensive, and its use may mark off a deluxe copy.
The structure of a codex includes its size, format/ordinatio (its quires or gatherings, consisting of sheets folded a number of times, often twice – a bifolio), sewing, bookbinding and rebinding. A quire consisted of a number of folded sheets inserted into one another – at least three, but most commonly four bifolia, that is, eight leaves and sixteen pages: Latin quaternio or Greek tetradion, which became a synonym for quires. Unless an exemplar (text to be copied) was copied exactly, format differed. In preparation for writing codices, ruling patterns were used that determined the layout of each page. Holes were pricked with a spiked lead wheel and a circle. Ruling was then applied separately on each page or once through the top folio. Ownership markings, decorations and illumination are also a part of it. They are specific to the scriptoria, or any production center, and libraries of codices.
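The quaternion arithmetic above generalizes directly: a quire of n nested bifolia has 2n leaves and 4n pages. A minimal Python sketch (the 400-page example is an illustrative assumption, not a figure from the text):

```python
import math

def pages_per_quire(bifolia: int = 4) -> int:
    """A quire of n nested bifolia has 2n leaves and 4n pages;
    the common quaternio (n = 4) therefore has 16 pages."""
    return 4 * bifolia

def quires_needed(total_pages: int, bifolia: int = 4) -> int:
    """Quires required to hold a text of the given page count."""
    return math.ceil(total_pages / pages_per_quire(bifolia))

print(pages_per_quire())   # 16 pages in a quaternion
print(quires_needed(400))  # 25 quaternions for a 400-page text
```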
Watermarks may provide, although often only approximate, dates for when the copying occurred. The layout – the size of the margins and the number of lines – is determined. There may be textual articulations, running heads, openings, chapters and paragraphs. Space was reserved for illustrations and decorated guide letters. The apparatus of books for scholars became more elaborate during the 13th and 14th centuries when chapter, verse, page numbering, marginalia finding guides, indexes, glossaries and tables of contents were developed.
By a close examination of the physical attributes of a codex, it is sometimes possible to match up long-separated elements originally from the same book. In 13th-century book publishing, due to secularization, stationers or libraires emerged. They would receive commissions for texts, which they would contract out to scribes, illustrators, and binders, to whom they supplied materials. Due to the systematic format used for assembly by the libraire, the structure can be used to reconstruct the original order of a manuscript. However, complications can arise in the study of a codex. Manuscripts were frequently rebound, and this resulted in a particular codex incorporating works of different dates and origins, thus different internal structures. Additionally, a binder could alter or unify these structures to ensure a better fit for the new binding. Completed quires or books of quires might constitute independent book units – booklets – which could be returned to the stationer, or combined with other texts to make anthologies or miscellanies. Exemplars were sometimes divided into quires for simultaneous copying and loaned out to students for study. To facilitate this, catchwords were used: a word at the end of a page providing the next page's first word. | [
{
"paragraph_id": 0,
"text": "The codex (pl.: codices /ˈkoʊdɪsiːz/) was the historical ancestor of the modern book. Instead of being composed of sheets of paper, it used sheets of vellum, papyrus, or other materials. The term codex is often used for ancient manuscript books, with handwritten contents. A codex, much like the modern book, is bound by stacking the pages and securing one set of edges by a variety of methods over the centuries, yet in a form analogous to modern bookbinding. Modern books are divided into paperback (or softback) and those bound with stiff boards, called hardbacks. Elaborate historical bindings are called treasure bindings. At least in the Western world, the main alternative to the paged codex format for a long document was the continuous scroll, which was the dominant form of document in the ancient world. Some codices are continuously folded like a concertina, in particular the Maya codices and Aztec codices, which are actually long sheets of paper or animal skin folded into pages. In Japan, concertina-style codices called orihon developed during the Heian period (794–1185) were made of paper.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Ancient Romans developed the form from wax tablets. The gradual replacement of the scroll by the codex has been called the most important advance in book making before the invention of the printing press. The codex transformed the shape of the book itself, and offered a form that has lasted ever since. The spread of the codex is often associated with the rise of Christianity, which early on adopted the format for the Bible. First described in the 1st century of the Common Era, when the Roman poet Martial praised its convenient use, the codex achieved numerical parity with the scroll around 300 CE, and had completely replaced it throughout what was by then a Christianized Greco-Roman world by the 6th century.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The word codex comes from the Latin word caudex, meaning \"trunk of a tree\", \"block of wood\" or \"book\". The codex began to replace the scroll almost as soon as it was invented, although new finds add three centuries to its history (see below). In Egypt, by the fifth century, the codex outnumbered the scroll by ten to one based on surviving examples. By the sixth century, the scroll had almost vanished as a medium for literature. The change from rolls to codices roughly coincides with the transition from papyrus to parchment as the preferred writing material, but the two developments are unconnected. In fact, any combination of codices and scrolls with papyrus and parchment is technically feasible and common in the historical record.",
"title": "Etymology and origins"
},
{
"paragraph_id": 3,
"text": "Technically, even modern notebooks and paperbacks are codices, but publishers and scholars reserve the term for manuscript (hand-written) books produced from Late antiquity until the Middle Ages. The scholarly study of these manuscripts is sometimes called codicology. The study of ancient documents in general is called paleography.",
"title": "Etymology and origins"
},
{
"paragraph_id": 4,
"text": "The codex provided considerable advantages over other book formats, primarily its compactness, sturdiness, economic use of materials by using both sides (recto and verso), and ease of reference (a codex accommodates random access, as opposed to a scroll, which uses sequential access).",
"title": "Etymology and origins"
},
{
"paragraph_id": 5,
"text": "The Romans used precursors made of reusable wax-covered tablets of wood for taking notes and other informal writings. Two ancient polyptychs, a pentaptych and octoptych excavated at Herculaneum, used a unique connecting system that presages later sewing on of thongs or cords. A first evidence of the use of papyrus in codex form comes from the Ptolemaic period in Egypt, as a find at the University of Graz shows.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Julius Caesar may have been the first Roman to reduce scrolls to bound pages in the form of a note-book, possibly even as a papyrus codex. At the turn of the 1st century AD, a kind of folded parchment notebook called pugillares membranei in Latin became commonly used for writing in the Roman Empire. Theodore Cressy Skeat theorized that this form of notebook was invented in Rome and then spread rapidly to the Near East.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Codices are described in certain works by the Classical Latin poet, Martial. He wrote a series of five couplets meant to accompany gifts of literature that Romans exchanged during the festival of Saturnalia. Three of these books are specifically described by Martial as being in the form of a codex; the poet praises the compendiousness of the form (as opposed to the scroll), as well as the convenience with which such a book can be read on a journey. In another poem by Martial, the poet advertises a new edition of his works, specifically noting that it is produced as a codex, taking less space than a scroll and being more comfortable to hold in one hand. According to Theodore Cressy Skeat, this might be the first recorded known case of an entire edition of a literary work (not just a single copy) being published in codex form, though it was likely an isolated case and was not a common practice until a much later time.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In his discussion of one of the earliest parchment codices to survive from Oxyrhynchus in Egypt, Eric Turner seems to challenge Skeat's notion when stating, \"its mere existence is evidence that this book form had a prehistory\", and that \"early experiments with this book form may well have taken place outside of Egypt.\" Early codices of parchment or papyrus appear to have been widely used as personal notebooks, for instance in recording copies of letters sent (Cicero Fam. 9.26.1). Early codices were not always cohesive. They often contained multiple languages, various topics and even multiple authors. \"Such codices formed libraries in their own right.\" The parchment notebook pages were \"more durable, and could withstand being folded and stitched to other sheets\". Parchments whose writing was no longer needed were commonly washed or scraped for re-use, creating a palimpsest; the erased text, which can often be recovered, is older and usually more interesting than the newer text which replaced it. Consequently, writings in a codex were often considered informal and impermanent. Parchment (animal skin) was expensive, and therefore it was used primarily by the wealthy and powerful, who were also able to pay for textual design and color. \"Official documents and deluxe manuscripts [in the late Middle Ages] were written in gold and silver ink on parchment...dyed or painted with costly purple pigments as an expression of imperial power and wealth.\"",
"title": "History"
},
{
"paragraph_id": 9,
"text": "As early as the early 2nd century, there is evidence that a codex—usually of papyrus—was the preferred format among Christians. In the library of the Villa of the Papyri, Herculaneum (buried in AD 79), all the texts (of Greek literature) are scrolls (see Herculaneum papyri). However, in the Nag Hammadi library, hidden about AD 390, all texts (Gnostic) are codices. Despite this comparison, a fragment of a non-Christian parchment codex of Demosthenes' De Falsa Legatione from Oxyrhynchus in Egypt demonstrates that the surviving evidence is insufficient to conclude whether Christians played a major or central role in the development of early codices—or if they simply adopted the format to distinguish themselves from Jews.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The earliest surviving fragments from codices come from Egypt, and are variously dated (always tentatively) towards the end of the 1st century or in the first half of the 2nd. This group includes the Rylands Library Papyrus P52, containing part of St John's Gospel, and perhaps dating from between 125 and 160.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In Western culture, the codex gradually replaced the scroll. Between the 4th century, when the codex gained wide acceptance, and the Carolingian Renaissance in the 8th century, many works that were not converted from scroll to codex were lost. The codex improved on the scroll in several ways. It could be opened flat at any page for easier reading, pages could be written on both front and back (recto and verso), and the protection of durable covers made it more compact and easier to transport.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The ancients stored codices with spines facing inward, and not always vertically. The spine could be used for the incipit, before the concept of a proper title developed in medieval times. Though most early codices were made of papyrus, papyrus was fragile and supplied from Egypt, the only place where papyrus grew. The more durable parchment and vellum gained favor, despite the cost.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The codices of pre-Columbian Mesoamerica (Mexico and Central America) had a similar appearance when closed to the European codex, but were instead made with long folded strips of either fig bark (amatl) or plant fibers, often with a layer of whitewash applied before writing. New World codices were written as late as the 16th century (see Maya codices and Aztec codices). Those written before the Spanish conquests seem all to have been single long sheets folded concertina-style, sometimes written on both sides of the amatl paper. There are significant codices produced in the colonial era, with pictorial and alphabetic texts in Spanish or an indigenous language such as Nahuatl.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In East Asia, the scroll remained standard for far longer than in the Mediterranean world. There were intermediate stages, such as scrolls folded concertina-style and pasted together at the back and books that were printed only on one side of the paper. This replaced traditional Chinese writing mediums such as bamboo and wooden slips, as well as silk and paper scrolls. The evolution of the codex in China began with folded-leaf pamphlets in the 9th century, during the late Tang dynasty (618–907), improved by the 'butterfly' bindings of the Song dynasty (960–1279), the wrapped back binding of the Yuan dynasty (1271–1368), the stitched binding of the Ming (1368–1644) and Qing dynasties (1644–1912), and finally the adoption of Western-style bookbinding in the 20th century. The initial phase of this evolution, the accordion-folded palm-leaf-style book, most likely came from India and was introduced to China via Buddhist missionaries and scriptures.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Judaism still retains the Torah scroll, at least for ceremonial use.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The first stage in creating a codex is to prepare the animal skin. The skin is washed with water and lime but not together. The skin is soaked in the lime for a couple of days. The hair is removed, and the skin is dried by attaching it to a frame, called a herse. The parchment maker attaches the skin at points around the circumference. The skin attaches to the herse by cords. To prevent it from being torn, the maker wraps the area of the skin attached to the cord around a pebble called a pippin. After completing that, the maker uses a crescent shaped knife called a lunarium or lunellum to remove any remaining hairs. Once the skin completely dries, the maker gives it a deep clean and processes it into sheets. The number of sheets from a piece of skin depends on the size of the skin and the final product dimensions. For example, the average calfskin can provide three-and-a-half medium sheets of writing material, which can be doubled when they are folded into two conjoint leaves, also known as a bifolium. Historians have found evidence of manuscripts in which the scribe wrote down the medieval instructions now followed by modern membrane makers. Defects can often be found in the membrane, whether they are from the original animal, human error during the preparation period, or from when the animal was killed. Defects can also appear during the writing process. Unless the manuscript is kept in perfect condition, defects can also appear later in its life.",
"title": "Preparation"
},
{
"paragraph_id": 17,
"text": "Firstly, the membrane must be prepared. The first step is to set up the quires. The quire is a group of several sheets put together. Raymond Clemens and Timothy Graham point out, in \"Introduction to Manuscript Studies\", that \"the quire was the scribe's basic writing unit throughout the Middle Ages\":",
"title": "Preparation"
},
{
"paragraph_id": 18,
"text": "Pricking is the process of making holes in a sheet of parchment (or membrane) in preparation of it ruling. The lines were then made by ruling between the prick marks.... The process of entering ruled lines on the page to serve as a guide for entering text. Most manuscripts were ruled with horizontal lines that served as the baselines on which the text was entered and with vertical bounding lines that marked the boundaries of the columns.",
"title": "Preparation"
},
{
"paragraph_id": 19,
"text": "From the Carolingian period to the end of the Middle Ages, different styles of folding the quire came about. For example, in continental Europe throughout the Middle Ages, the quire was put into a system in which each side folded on to the same style. The hair side met the hair side and the flesh side to the flesh side. This was not the same style used in the British Isles, where the membrane was folded so that it turned out an eight-leaf quire, with single leaves in the third and sixth positions. The next stage was tacking the quire. Tacking is when the scribe would hold together the leaves in quire with thread. Once threaded together, the scribe would then sew a line of parchment up the \"spine\" of the manuscript to protect the tacking.",
"title": "Preparation"
},
{
"paragraph_id": 20,
"text": "The materials codices are made with are their support, and include papyrus, parchment (sometimes referred to as membrane or vellum), and paper. They are written and drawn on with metals, pigments and ink. The quality, size, and choice of support determine the status of a codex. Papyrus is found only in late antiquity and the early Middle Ages. Codices intended for display were bound with more durable materials than vellum. Parchment varied widely due to animal species and finish, and identification of animals used to make it has only begun to be studied in the 21st century. How manufacturing influenced the final products, technique, and style, is little understood. However, changes in style are underpinned more by variation in technique. Before the 14th and 15th century, paper was expensive, and its use may mark off the deluxe copy.",
"title": "Preparation"
},
{
"paragraph_id": 21,
"text": "The structure of a codex includes its size, format/ordinatio(its quires or gatherings), consisting of sheets folded a number of times, often twice- a bifolio), sewing, bookbinding and rebinding. A quire consisted of a number of folded sheets inserting into one another- at least three, but most commonly four bifolia, that is eight sheets and sixteen pages: Latin quaternio or Greek tetradion, which became a synonym for quires. Unless an exemplar (text to be copied) was copied exactly, format differed. In preparation for writing codices, ruling patterns were used that determined the layout of each page. Holes were prickled with a spiked lead wheel and a circle. Ruling was then applied separately on each page or once through the top folio. Ownership markings, decorations and illumination are also a part of it. They are specific to the scriptoria, or any production center, and libraries of codices.",
"title": "Preparation"
},
{
"paragraph_id": 22,
"text": "Watermarks may provide, although often approximate, dates for when the copying occurred. The layout– size of the margin and the number of lines– is determined. There may be textual articulations, running heads, openings, chapters and paragraphs. Space was reserved for illustrations and decorated guide letters. The apparatus of books for scholars became more elaborate during the 13th and 14th centuries when chapter, verse, page numbering, marginalia finding guides, indexes, glossaries and tables of contents were developed.",
"title": "Preparation"
},
{
"paragraph_id": 23,
"text": "By a close examination of the physical attributes of a codex, it is sometimes possible to match up long-separated elements originally from the same book. In 13th-century book publishing, due to secularization, stationers or libraires emerged. They would receive commissions for texts, which they would contract out to scribes, illustrators, and binders, to whom they supplied materials. Due to the systematic format used for assembly by the libraire, the structure can be used to reconstruct the original order of a manuscript. However, complications can arise in the study of a codex. Manuscripts were frequently rebound, and this resulted in a particular codex incorporating works of different dates and origins, thus different internal structures. Additionally, a binder could alter or unify these structures to ensure a better fit for the new binding. Completed quires or books of quires might constitute independent book units- booklets, which could be returned to the stationer, or combined with other texts to make anthologies or miscellanies. Exemplars were sometimes divided into quires for simultaneous copying and loaned out to students for study. To facilitate this, catchwords were used- a word at the end of a page providing the next page's first word.",
"title": "Preparation"
}
] | The codex was the historical ancestor of the modern book. Instead of being composed of sheets of paper, it used sheets of vellum, papyrus, or other materials. The term codex is often used for ancient manuscript books, with handwritten contents. A codex, much like the modern book, is bound by stacking the pages and securing one set of edges by a variety of methods over the centuries, yet in a form analogous to modern bookbinding. Modern books are divided into paperback and those bound with stiff boards, called hardbacks. Elaborate historical bindings are called treasure bindings. At least in the Western world, the main alternative to the paged codex format for a long document was the continuous scroll, which was the dominant form of document in the ancient world. Some codices are continuously folded like a concertina, in particular the Maya codices and Aztec codices, which are actually long sheets of paper or animal skin folded into pages. In Japan, concertina-style codices called orihon developed during the Heian period (794–1185) were made of paper. The Ancient Romans developed the form from wax tablets. The gradual replacement of the scroll by the codex has been called the most important advance in book making before the invention of the printing press. The codex transformed the shape of the book itself, and offered a form that has lasted ever since. The spread of the codex is often associated with the rise of Christianity, which early on adopted the format for the Bible. First described in the 1st century of the Common Era, when the Roman poet Martial praised its convenient use, the codex achieved numerical parity with the scroll around 300 CE, and had completely replaced it throughout what was by then a Christianized Greco-Roman world by the 6th century. | 2001-11-08T19:17:15Z | 2023-12-06T07:33:23Z | [
"Template:Cite web",
"Template:Plural form",
"Template:Citation needed",
"Template:Page range too broad",
"Template:Circa",
"Template:See",
"Template:Rp",
"Template:What",
"Template:Webarchive",
"Template:Wiktionary",
"Template:Books",
"Template:IPAc-en",
"Template:Pn",
"Template:Cite book",
"Template:'\"",
"Template:Authority control",
"Template:Short description",
"Template:Sfn",
"Template:About",
"Template:Failed verification",
"Template:Reflist",
"Template:ISBN",
"Template:Harvnb"
] | https://en.wikipedia.org/wiki/Codex |
5,692 | Calf (animal) | A calf (pl.: calves) is a young domestic cow or bull. Calves are reared to become adult cattle or are slaughtered for their meat, called veal, and their hide.
The term calf is also used for some other species. See "Other animals" below.
"Calf" is the term used from birth to weaning, when it becomes known as a weaner or weaner calf, though in some areas the term "calf" may be used until the animal is a yearling. The birth of a calf is known as calving. A calf that has lost its mother is an orphan calf, also known as a poddy or poddy-calf in British. Bobby calves are young calves which are to be slaughtered for human consumption. A vealer is a calf weighing less than about 330 kg (730 lb) which is at about eight to nine months of age. A young female calf from birth until she has had a calf of her own is called a heifer (/ˈhɛfər/). In the American Old West, a motherless or small, runty calf was sometimes referred to as a dodie.
The term "calf" is also used for some other species. See "Other animals" below.
Calves may be produced by natural means, or by artificial breeding using artificial insemination or embryo transfer.
Calves are born after a gestation of about nine months. They usually stand within a few minutes of calving, and suckle within an hour. However, for the first few days they are not easily able to keep up with the rest of the herd, so young calves are often left hidden by their mothers, who visit them several times a day to suckle them. By a week old, the calf is able to follow its mother all the time.
Some calves are ear tagged soon after birth, especially those that are stud cattle in order to correctly identify their dams (mothers), or in areas (such as the EU) where tagging is a legal requirement for cattle. Typically when the calves are about two months old they are branded, ear marked, castrated and vaccinated.
The single suckler system of rearing calves is similar to that occurring naturally in wild cattle, where each calf is suckled by its own mother until it is weaned at about nine months old. This system is commonly used for rearing beef cattle throughout the world.
Cows kept on poor forage (as is typical in subsistence farming) produce a limited amount of milk. A calf left with such a mother all the time can easily drink all the milk, leaving none for human consumption. For dairy production under such circumstances, the calf's access to the cow must be limited, for example by penning the calf and bringing the mother to it once a day after partly milking her. The small amount of milk available for the calf under such systems may mean that it takes a longer time to rear, and in subsistence farming it is therefore common for cows to calve only in alternate years.
In more intensive dairy farming, cows can easily be bred and fed to produce far more milk than one calf can drink. In the multi-suckler system, several calves are fostered onto one cow in addition to her own, and these calves' mothers can then be used wholly for milk production. More commonly, calves of dairy cows are fed formula milk from soon after birth, usually from a bottle or bucket.
Purebred female calves of dairy cows are reared as replacement dairy cows. Most purebred dairy calves are produced by artificial insemination (AI). By this method each bull can serve many cows, so only a very few of the purebred dairy male calves are needed to provide bulls for breeding. The remainder of the male calves may be reared for beef or veal. Only a proportion of purebred heifers are needed to provide replacement cows, so often some of the cows in dairy herds are put to a beef bull to produce crossbred calves suitable for rearing as beef.
Veal calves may be reared entirely on milk formula and killed at about 18 or 20 weeks as "white" veal, or fed on grain and hay and killed at 22 to 35 weeks to produce red or pink veal.
A commercial steer or bull calf is expected to put on about 32 to 36 kg (71 to 79 lb) per month. A nine-month-old steer or bull is therefore expected to weigh about 250 to 270 kg (550 to 600 lb). Heifers will weigh at least 200 kg (440 lb) at eight months of age.
Calves are usually weaned at about eight to nine months of age, but depending on the season and condition of the dam, they might be weaned earlier. They may be paddock weaned, often next to their mothers, or weaned in stockyards. The latter system is preferred by some as it accustoms the weaners to the presence of people and they are trained to take feed other than grass. Small numbers may also be weaned with their dams with the use of weaning nose rings or nosebands which results in the mothers rejecting the calves' attempts to suckle. Many calves are also weaned when they are taken to the large weaner auction sales that are conducted in the south eastern states of Australia. Victoria and New South Wales have yardings (sale yard numbers) of up to 8,000 weaners (calves) for auction sale in one day. The best of these weaners may go to the butchers. Others will be purchased by re-stockers to grow out and fatten on grass or as potential breeders. In the United States these weaners may be known as feeders and would be placed directly into feedlots.
At about 12 months old a beef heifer reaches puberty if she is well grown.
Calves suffer from few congenital abnormalities but the Akabane virus is widely distributed in temperate to tropical regions of the world. The virus is a teratogenic pathogen which causes abortions, stillbirths, premature births and congenital abnormalities, but occurs only during some years.
Calves commonly face on-farm acquired diseases, often of an infectious nature. Preweaned calves most commonly experience conditions such as diarrhea, omphalitis, lameness and respiratory diseases. Diarrhea, omphalitis and lameness are most common in calves aged up to two weeks, while the frequency of respiratory diseases tends to increase with age. These conditions also display seasonal patterns, with omphalitis being more common in the summer months, and respiratory diseases and diarrhea occurring more frequently in the fall.
Calf meat for human consumption is called veal, and is usually produced from the male calves of dairy cattle. Also eaten are calf's brains and calf liver. The hide is used to make calfskin, or tanned into leather and called calf leather, or sometimes in the US "novillo", the Spanish term. The fourth compartment of the stomach of slaughtered milk-fed calves is the source of rennet. The intestine is used to make goldbeater's skin, and is the source of calf intestinal alkaline phosphatase (CIP).
Dairy cows can only produce milk after having calved, and dairy cows need to produce one calf each year in order to remain in production. Female calves may be reared as replacement dairy cows. Male dairy calves are generally reared for beef or veal; relatively few are kept for breeding purposes.
In English the term "calf" is used by extension for the young of various other large species of mammal. In addition to other bovid species (such as bison, yak and water buffalo), these include the young of camels, dolphins, elephants, giraffes, hippopotamuses, deer (such as moose, elk (wapiti) and red deer), rhinoceroses, porpoises, whales, walruses and larger seals. (Generally, the adult males of these same species are called "bulls" and the adult females "cows".) However, common domestic species tend to have their own specific names, such as lamb, foal used for all Equidae, or piglet used for all suidae. | [
| 2001-05-20T03:05:43Z | 2023-12-14T15:32:04Z | https://en.wikipedia.org/wiki/Calf_(animal)
5,693 | Claude Shannon | Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electrical engineer, computer scientist and cryptographer known as the "father of information theory". He is credited alongside George Boole for laying the foundations of the Information Age.
As a 21-year-old master's degree student at the Massachusetts Institute of Technology (MIT), he wrote his thesis demonstrating that electrical applications of Boolean algebra could construct any logical numerical relationship. Shannon contributed to the field of cryptanalysis for national defense of the United States during World War II, including his fundamental work on codebreaking and secure telecommunications, writing a paper which is considered one of the foundational pieces of modern cryptography.
His mathematical theory of information laid the foundations for the field of information theory, with his famous paper being called the "Magna Carta of the Information Age" by Scientific American. He also made contributions to artificial intelligence. His achievements are said to be on par with those of Albert Einstein and Alan Turing in their fields.
The Shannon family lived in Gaylord, Michigan, and Claude was born in a hospital in nearby Petoskey. His father, Claude Sr. (1862–1934), was a businessman and, for a while, a judge of probate in Gaylord. His mother, Mabel Wolf Shannon (1890–1945), was a language teacher, who also served as the principal of Gaylord High School. Claude Sr. was a descendant of New Jersey settlers, while Mabel was a child of German immigrants. Shannon's family was active in their Methodist Church during his youth.
Most of the first 16 years of Shannon's life were spent in Gaylord, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical things. His best subjects were science and mathematics. At home, he constructed such devices as models of planes, a radio-controlled model boat and a barbed-wire telegraph system to a friend's house a half-mile away. While growing up, he also worked as a messenger for the Western Union company.
Shannon's childhood hero was Thomas Edison, whom he later learned was a distant cousin. Both Shannon and Edison were descendants of John Ogden (1609–1682), a colonial leader and an ancestor of many distinguished people.
In 1932, Shannon entered the University of Michigan, where he was introduced to the work of George Boole. He graduated in 1936 with two bachelor's degrees: one in electrical engineering and the other in mathematics.
In 1936, Shannon began his graduate studies in electrical engineering at MIT, where he worked on Vannevar Bush's differential analyzer, an early analog computer. While studying the complicated ad hoc circuits of this analyzer, Shannon designed switching circuits based on Boole's concepts. In 1937, he wrote his master's degree thesis, A Symbolic Analysis of Relay and Switching Circuits. A paper from this thesis was published in 1938. In this work, Shannon proved that his switching circuits could be used to simplify the arrangement of the electromechanical relays that were used during that time in telephone call routing switches. Next, he expanded this concept, proving that these circuits could solve all problems that Boolean algebra could solve. In the last chapter, he presented diagrams of several circuits, including a 4-bit full adder.
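As a concrete illustration (a modern Python stand-in for relay hardware, not Shannon's own notation), a 4-bit ripple-carry adder can be built entirely from the Boolean operations AND, OR, and XOR, the kind of circuit the thesis showed could be designed and simplified algebraically:

```python
# Illustrative sketch: a 4-bit ripple-carry adder built purely from
# Boolean operations, the kind of relay logic Shannon's thesis showed
# could be designed and simplified with Boolean algebra.

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """One-bit full adder expressed with AND, OR, XOR."""
    s = a ^ b ^ carry_in                         # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry bit
    return s, carry_out

def add_4bit(x: list[int], y: list[int]) -> list[int]:
    """Add two 4-bit numbers given as [LSB, ..., MSB] bit lists."""
    result, carry = [], 0
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]                      # final carry as 5th bit

# 0b0101 (5) + 0b0011 (3) = 0b1000 (8)
print(add_4bit([1, 0, 1, 0], [1, 1, 0, 0]))     # [0, 0, 0, 1, 0]
```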
Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Shannon's work became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II. The theoretical rigor of Shannon's work superseded the ad hoc methods that had prevailed previously. Howard Gardner called Shannon's thesis "possibly the most important, and also the most noted, master's thesis of the century."
Shannon received his PhD in mathematics from MIT in 1940. Vannevar Bush had suggested that Shannon should work on his dissertation at the Cold Spring Harbor Laboratory, in order to develop a mathematical formulation for Mendelian genetics. This research resulted in Shannon's PhD thesis, called An Algebra for Theoretical Genetics.
In 1940, Shannon became a National Research Fellow at the Institute for Advanced Study in Princeton, New Jersey. In Princeton, Shannon had the opportunity to discuss his ideas with influential scientists and mathematicians such as Hermann Weyl and John von Neumann, and he also had occasional encounters with Albert Einstein and Kurt Gödel. Shannon worked freely across disciplines, and this ability may have contributed to his later development of mathematical information theory.
Shannon then joined Bell Labs to work on fire-control systems and cryptography during World War II, under a contract with section D-2 (Control Systems section) of the National Defense Research Committee (NDRC).
Shannon is credited with the invention of signal-flow graphs, in 1942. He discovered the topological gain formula while investigating the functional operation of an analog computer.
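The gain formula for signal-flow graphs is today usually quoted in the form later codified by Samuel Mason; the statement below is that standard form, given for orientation rather than as Shannon's original 1942 notation:

```latex
% Standard signal-flow-graph gain formula (Mason's form, shown for
% orientation; not Shannon's original 1942 notation):
%   P_k     : gain of the k-th forward path from input to output
%   \Delta  : graph determinant, 1 - (sum of loop gains)
%             + (sum of gain products of non-touching loop pairs) - ...
%   \Delta_k: \Delta computed with the loops touching path k removed
G = \frac{\sum_{k} P_k \, \Delta_k}{\Delta}
```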
For two months early in 1943, Shannon came into contact with the leading British mathematician Alan Turing. Turing had been posted to Washington to share with the U.S. Navy's cryptanalytic service the methods used by the British Government Code and Cypher School at Bletchley Park to break the cyphers used by the Kriegsmarine U-boats in the north Atlantic Ocean. He was also interested in the encipherment of speech and to this end spent time at Bell Labs. Shannon and Turing met at teatime in the cafeteria. Turing showed Shannon his 1936 paper that defined what is now known as the "universal Turing machine". This impressed Shannon, as many of its ideas complemented his own.
In 1945, as the war was coming to an end, the NDRC was issuing a summary of technical reports as a last step prior to its eventual closing down. Inside the volume on fire control, a special essay titled Data Smoothing and Prediction in Fire-Control Systems, coauthored by Shannon, Ralph Beebe Blackman, and Hendrik Wade Bode, formally treated the problem of smoothing the data in fire-control by analogy with "the problem of separating a signal from interfering noise in communications systems." In other words, it modeled the problem in terms of data and signal processing and thus heralded the coming of the Information Age.
Shannon's work on cryptography was even more closely related to his later publications on communication theory. At the close of the war, he prepared a classified memorandum for Bell Telephone Labs entitled "A Mathematical Theory of Cryptography", dated September 1945. A declassified version of this paper was published in 1949 as "Communication Theory of Secrecy Systems" in the Bell System Technical Journal. This paper incorporated many of the concepts and mathematical formulations that also appeared in his A Mathematical Theory of Communication. Shannon said that his wartime insights into communication theory and cryptography developed simultaneously, and that "they were so close together you couldn't separate them". In a footnote near the beginning of the classified report, Shannon announced his intention to "develop these results … in a forthcoming memorandum on the transmission of information."
While he was at Bell Labs, Shannon proved that the cryptographic one-time pad is unbreakable in his classified research that was later published in 1949. The same article also proved that any unbreakable system must have essentially the same characteristics as the one-time pad: the key must be truly random, as large as the plaintext, never reused in whole or part, and kept secret.
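A minimal sketch of the scheme (illustrative Python, not Bell Labs code): encryption and decryption are the same XOR operation, and the security argument rests on exactly the properties listed above, a truly random key as long as the message, used once, and kept secret.

```python
# One-time pad sketch: XOR with a truly random, message-length key.
import secrets

def otp_crypt(data: bytes, key: bytes) -> bytes:
    """Encrypts or decrypts: XOR is its own inverse."""
    assert len(key) == len(data), "key must match message length"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))       # truly random, same length
ciphertext = otp_crypt(message, key)
assert otp_crypt(ciphertext, key) == message  # decryption recovers plaintext
```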
In 1948, the promised memorandum appeared as "A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal. This work focuses on the problem of how best to encode the message a sender wants to transmit. Shannon developed information entropy as a measure of the information content in a message, which is a measure of uncertainty reduced by the message. In so doing, he essentially invented the field of information theory.
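In modern notation, the entropy of a source that emits symbol i with probability p_i, measured in bits per symbol, is:

```latex
% Shannon entropy of a discrete source, in bits per symbol:
H = -\sum_{i} p_i \log_2 p_i
% H is maximal for a uniform distribution and zero when one symbol is
% certain; it lower-bounds the average code length needed to transmit
% the source without loss.
```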
The book The Mathematical Theory of Communication reprints Shannon's 1948 article and Warren Weaver's popularization of it, which is accessible to the non-specialist. Weaver pointed out that the word "information" in communication theory is not related to what you do say, but to what you could say. That is, information is a measure of one's freedom of choice when one selects a message. Shannon's concepts were also popularized, subject to his own proofreading, in John Robinson Pierce's Symbols, Signals, and Noise.
Information theory's fundamental contribution to natural language processing and computational linguistics was further established in 1951, in his article "Prediction and Entropy of Printed English", showing upper and lower bounds of entropy on the statistics of English – giving a statistical foundation to language analysis. In addition, he showed that treating the space as the 27th letter of the alphabet actually lowers uncertainty in written language, providing a clear quantifiable link between cultural practice and probabilistic cognition.
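A rough sketch of the counting involved, assuming a stand-in sample text rather than Shannon's corpus: estimate the zeroth-order (single-symbol) entropy over a 26-letter alphabet, then over the 27-symbol alphabet that includes the space.

```python
# Zeroth-order entropy estimate over two alphabets; the sample text is a
# stand-in for illustration, not Shannon's data.
from collections import Counter
from math import log2

def zeroth_order_entropy(text: str, alphabet: str) -> float:
    counts = Counter(c for c in text.lower() if c in alphabet)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 50
letters = "abcdefghijklmnopqrstuvwxyz"
print(zeroth_order_entropy(sample, letters))        # bounded by log2(26) ~ 4.70
print(zeroth_order_entropy(sample, letters + " "))  # typically lower for
                                                    # English-like text
```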
Another notable paper published in 1949 is "Communication Theory of Secrecy Systems", a declassified version of his wartime work on the mathematical theory of cryptography, in which he proved that all theoretically unbreakable cyphers must have the same requirements as the one-time pad. He is also credited with the introduction of sampling theory, which is concerned with representing a continuous-time signal from a (uniform) discrete set of samples. This theory was essential in enabling telecommunications to move from analog to digital transmissions systems in the 1960s and later.
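In its modern statement (the Nyquist–Shannon sampling theorem), a signal containing no frequencies above B hertz is completely determined by uniform samples spaced T ≤ 1/(2B) apart, and can be reconstructed as:

```latex
% Whittaker-Shannon reconstruction from uniform samples x[n] = x(nT),
% valid when x(t) is band-limited to B hertz and T <= 1/(2B):
x(t) = \sum_{n=-\infty}^{\infty} x[n]\,
       \operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}
```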
He returned to MIT to hold an endowed chair in 1956.
In 1956 Shannon joined the MIT faculty to work in the Research Laboratory of Electronics (RLE). He continued to serve on the MIT faculty until 1978.
Shannon developed Alzheimer's disease and spent the last few years of his life in a nursing home; he died in 2001, survived by his wife, a son and daughter, and two granddaughters.
Outside of Shannon's academic pursuits, he was interested in juggling, unicycling, and chess. He also invented many devices, including a Roman numeral computer called THROBAC, and juggling machines. He built a device that could solve the Rubik's Cube puzzle.
Shannon designed the Minivac 601, a digital computer trainer to teach business people about how computers functioned. It was sold by the Scientific Development Corp starting in 1961.
He is also considered the co-inventor of the first wearable computer along with Edward O. Thorp. The device was used to improve the odds when playing roulette.
Shannon married Norma Levor, a wealthy, Jewish, left-wing intellectual in January 1940. The marriage ended in divorce after about a year. Levor later married Ben Barzman.
Shannon met his second wife, Betty Shannon (née Mary Elizabeth Moore), when she was a numerical analyst at Bell Labs. They were married in 1949. Betty assisted Claude in building some of his most famous inventions. They had three children.
Shannon presented himself as apolitical and an atheist.
There are six statues of Shannon sculpted by Eugene Daub: one at the University of Michigan; one at MIT in the Laboratory for Information and Decision Systems; one in Gaylord, Michigan; one at the University of California, San Diego; one at Bell Labs; and another at AT&T Shannon Labs. The statue in Gaylord is located in the Claude Shannon Memorial Park. After the breakup of the Bell System, the part of Bell Labs that remained with AT&T Corporation was named Shannon Labs in his honor.
According to Neil Sloane, an AT&T Fellow who co-edited Shannon's large collection of papers in 1993, the perspective introduced by Shannon's communication theory (now called information theory) is the foundation of the digital revolution, and every device containing a microprocessor or microcontroller is a conceptual descendant of Shannon's publication in 1948: "He's one of the great men of the century. Without him, none of the things we know today would exist. The whole digital revolution started with him." The cryptocurrency unit shannon (a synonym for gwei) is named after him.
A Mind at Play, a biography of Shannon written by Jimmy Soni and Rob Goodman, was published in 2017. They described Shannon as "the most important genius you’ve never heard of, a man whose intellect was on par with Albert Einstein and Isaac Newton".
On April 30, 2016, Shannon was honored with a Google Doodle to celebrate his life on what would have been his 100th birthday.
The Bit Player, a feature film about Shannon directed by Mark Levinson premiered at the World Science Festival in 2019. Drawn from interviews conducted with Shannon in his house in the 1980s, the film was released on Amazon Prime in August 2020.
Shannon's The Mathematical Theory of Communication begins with an interpretation of his own work by Warren Weaver. Although Shannon's entire work is about communication itself, Warren Weaver communicated his ideas in such a way that those not acclimated to complex theory and mathematics could comprehend the fundamental laws he put forth. The coupling of their unique communicational abilities and ideas generated the Shannon-Weaver model, although the mathematical and theoretical underpinnings emanate entirely from Shannon's work after Weaver's introduction. For the layman, Weaver's introduction better communicates The Mathematical Theory of Communication, but Shannon's subsequent logic, mathematics, and expressive precision were responsible for defining the problem itself.
"Theseus", created in 1950, was a mechanical mouse controlled by an electromechanical relay circuit that enabled it to move around a labyrinth of 25 squares. The maze configuration was flexible and it could be modified arbitrarily by rearranging movable partitions. The mouse was designed to search through the corridors until it found the target. Having travelled through the maze, the mouse could then be placed anywhere it had been before, and because of its prior experience it could go directly to the target. If placed in unfamiliar territory, it was programmed to search until it reached a known location and then it would proceed to the target, adding the new knowledge to its memory and learning new behavior. Shannon's mouse appears to have been the first artificial learning device of its kind.
In 1949 Shannon completed a paper (published in March 1950) estimating the game-tree complexity of chess at approximately 10^120. This number is now often referred to as the "Shannon number", and is still regarded today as an accurate estimate of the game's complexity. The number is often cited as one of the barriers to solving the game of chess using an exhaustive analysis (i.e. brute force analysis).
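The estimate is back-of-the-envelope arithmetic: roughly 10^3 possibilities for each pair of moves (one by White and one by Black), compounded over a typical game of about 40 such pairs:

```latex
% Shannon's game-tree estimate: ~10^3 move-pair possibilities,
% over ~40 White-Black move pairs:
\left(10^{3}\right)^{40} = 10^{120}
```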
On March 9, 1949, Shannon presented a paper called "Programming a Computer for playing Chess". The paper was presented at the National Institute for Radio Engineers Convention in New York. He described how to program a computer to play chess based on position scoring and move selection. He proposed basic strategies for restricting the number of possibilities to be considered in a game of chess. In March 1950 it was published in Philosophical Magazine, and is considered one of the first articles published on the topic of programming a computer for playing chess, and using a computer to solve the game.
His process for having the computer decide on which move to make was a minimax procedure, based on an evaluation function of a given chess position. Shannon gave a rough example of an evaluation function in which the value of the black position was subtracted from that of the white position. Material was counted according to the usual chess piece relative value (1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen). He considered some positional factors, subtracting ½ point for each doubled pawn, backward pawn, and isolated pawn; mobility was incorporated by adding 0.1 point for each legal move available.
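A minimal sketch of that scheme, assuming a toy position format (two piece lists) and a caller-supplied move generator rather than real chess rules; Shannon's fuller function also subtracted ½ point per doubled, backward, or isolated pawn and added 0.1 per legal move, which is omitted here for brevity.

```python
# Material-based evaluation (white minus black) combined with minimax,
# per Shannon's 1950 scheme; position format and move generator are
# toy stand-ins, not a chess engine.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Material score: white's piece values minus black's."""
    white, black = position            # e.g. (["Q", "P"], ["R"])
    return (sum(PIECE_VALUES[p] for p in white)
            - sum(PIECE_VALUES[p] for p in black))

def minimax(position, depth, white_to_move, children):
    """children(position) returns successor positions (toy move generator)."""
    succ = children(position)
    if depth == 0 or not succ:
        return evaluate(position)
    values = [minimax(s, depth - 1, not white_to_move, children) for s in succ]
    return max(values) if white_to_move else min(values)

print(evaluate((["Q", "R", "P", "P"], ["Q", "R", "R"])))  # 16 - 19 = -3
# With a move generator that offers no moves, minimax reduces to evaluate:
print(minimax((["Q"], ["R"]), 2, True, lambda p: []))     # 9 - 5 = 4
```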
Shannon formulated a version of Kerckhoffs' principle as "The enemy knows the system". In this form it is known as "Shannon's maxim".
The Shannon centenary, 2016, marked the life and influence of Claude Elwood Shannon on the hundredth anniversary of his birth on April 30, 1916. It was inspired in part by the Alan Turing Year. An ad hoc committee of the IEEE Information Theory Society including Christina Fragouli, Rüdiger Urbanke, Michelle Effros, Lav Varshney and Sergio Verdú, coordinated worldwide events. The initiative was announced in the History Panel at the 2015 IEEE Information Theory Workshop Jerusalem and the IEEE Information Theory Society newsletter.
A detailed listing of confirmed events was available on the website of the IEEE Information Theory Society.
Some of the planned activities included:
The Claude E. Shannon Award was established in his honor; he was also its first recipient, in 1972.
| 2001-08-13T19:42:05Z | 2023-12-05T19:57:59Z | https://en.wikipedia.org/wiki/Claude_Shannon
5,694 | Cracking | Cracking may refer to:
Cracking, the formation of a fracture or partial fracture in a solid material, studied as fracture mechanics
Performing a sternotomy
Fluid catalytic cracking, a catalytic process widely used in oil refineries for cracking large hydrocarbon molecules into smaller molecules
Cracking (chemistry), the decomposition of complex organic molecules into smaller ones
Cracking joints, the practice of manipulating one's bone joints to make a sharp sound
Cracking codes, see cryptanalysis
Whip cracking
Safe cracking
Crackin', band featuring Lester Abrams
Packing and cracking, a method of creating voting districts to give a political party an advantage
In computing:
Another name for security hacking; the practice of defeating computer security.
Password cracking, the process of discovering the plaintext of an encrypted computer password.
Software cracking, the defeating of software copy protection.
| 2001-05-20T04:39:13Z | 2023-09-14T09:43:44Z | https://en.wikipedia.org/wiki/Cracking
5,695 | Community | A community is a social unit (a group of living things) with a shared socially significant characteristic, such as place, set of norms, culture, religion, values, customs, or identity. Communities may share a sense of place situated in a given geographical area (e.g. a country, village, town, or neighbourhood) or in virtual space through communication platforms. Durable relations that extend beyond immediate genealogical ties also define a sense of community, which is important to participants' identity, practice, and roles in social institutions such as family, home, work, government, TV network, society, or humanity at large. Although communities are usually small relative to personal social ties, "community" may also refer to large group affiliations such as national communities, international communities, and virtual communities.
The English-language word "community" derives from the Old French comuneté (Modern French: communauté), which comes from the Latin communitas "community", "public spirit" (from Latin communis, "common").
Human communities may have intent, belief, resources, preferences, needs, and risks in common, affecting the identity of the participants and their degree of cohesiveness.
Archaeological studies of social communities use the term "community" in two ways, paralleling usage in other areas. The first is an informal definition of community as a place where people used to live. In this sense it is synonymous with the concept of an ancient settlement—whether a hamlet, village, town, or city. The second meaning resembles the usage of the term in other social sciences: a community is a group of people living near one another who interact socially. Social interaction on a small scale can be difficult to identify with archaeological data. Most reconstructions of social communities by archaeologists rely on the principle that social interaction in the past was conditioned by physical distance. Therefore, a small village settlement likely constituted a social community and spatial subdivisions of cities and other large settlements may have formed communities. Archaeologists typically use similarities in material culture—from house types to styles of pottery—to reconstruct communities in the past. This classification method relies on the assumption that people or households will share more similarities in the types and styles of their material goods with other members of a social community than they will with outsiders.
In ecology, a community is an assemblage of populations—potentially of different species—interacting with one another. Community ecology is the branch of ecology that studies interactions between and among species. It considers how such interactions, along with interactions between species and the abiotic environment, affect community structure and species richness, diversity and patterns of abundance. Species interact in three ways: competition, predation and mutualism:
The two main types of ecological communities are major communities, which are self-sustaining and self-regulating (such as a forest or a lake), and minor communities, which rely on other communities (like fungi decomposing a log) and are the building blocks of major communities. Moreover, we can establish other non-taxonomic subdivisions of biocenosis, such as guilds.
The concept of "community" often has a positive semantic connotation, exploited rhetorically by populist politicians and by advertisers to promote feelings and associations of mutual well-being, happiness and togetherness—veering towards an almost-achievable utopian community.
In contrast, the epidemiological term "community transmission" can have negative implications, and instead of a "criminal community" one often speaks of a "criminal underworld" or of the "criminal fraternity".
In Gemeinschaft und Gesellschaft (1887), German sociologist Ferdinand Tönnies described two types of human association: Gemeinschaft (usually translated as "community") and Gesellschaft ("society" or "association"). Tönnies proposed the Gemeinschaft–Gesellschaft dichotomy as a way to think about social ties. No group is exclusively one or the other. Gemeinschaft stresses personal social interactions, and the roles, values, and beliefs based on such interactions. Gesellschaft stresses indirect interactions, impersonal roles, formal values, and beliefs based on such interactions.
In a seminal 1986 study, McMillan and Chavis identify four elements of "sense of community":
A "sense of community index" (SCI) was developed by Chavis and colleagues, and revised and adapted by others. Although originally designed to assess sense of community in neighborhoods, the index has been adapted for use in schools, the workplace, and a variety of types of communities.
Studies conducted by the APPA indicate that young adults who feel a sense of belonging in a community, particularly small communities, develop fewer psychiatric and depressive disorders than those who do not have the feeling of love and belonging.
The process of learning to adopt the behavior patterns of the community is called socialization. The most fertile time of socialization is usually the early stages of life, during which individuals develop the skills and knowledge and learn the roles necessary to function within their culture and social environment. For some psychologists, especially those in the psychodynamic tradition, the most important period of socialization is between the ages of one and ten. But socialization also includes adults moving into a significantly different environment where they must learn a new set of behaviors.
Socialization is influenced primarily by the family, through which children first learn community norms. Other important influences include schools, peer groups, people, mass media, the workplace, and government. The degree to which the norms of a particular society or community are adopted determines one's willingness to engage with others. The norms of tolerance, reciprocity, and trust are important "habits of the heart", as de Tocqueville put it, in an individual's involvement in community.
Community development is often linked with community work or community planning, and may involve stakeholders, foundations, governments, or contracted entities including non-government organisations (NGOs), universities or government agencies to progress the social well-being of local, regional and, sometimes, national communities. More grassroots efforts, called community building or community organizing, seek to empower individuals and groups of people by providing them with the skills they need to effect change in their own communities. These skills often assist in building political power through the formation of large social groups working for a common agenda. Community development practitioners must understand both how to work with individuals and how to affect communities' positions within the context of larger social institutions. Public administrators, in contrast, need to understand community development in the context of rural and urban development, housing and economic development, and community, organizational and business development.
Formal accredited programs conducted by universities, as part of degree granting institutions, are often used to build a knowledge base to drive curricula in public administration, sociology and community studies. The General Social Survey from the National Opinion Research Center at the University of Chicago and the Saguaro Seminar at the Harvard Kennedy School are examples of national community development in the United States. The Maxwell School of Citizenship and Public Affairs at Syracuse University in New York State offers core courses in community and economic development, and in areas ranging from non-profit development to US budgeting (federal to local, community funds). In the United Kingdom, the University of Oxford has led in providing extensive research in the field through its Community Development Journal, used worldwide by sociologists and community development practitioners.
At the intersection between community development and community building are a number of programs and organizations with community development tools. One example of this is the program of the Asset Based Community Development Institute of Northwestern University. The institute makes available downloadable tools to assess community assets and make connections between non-profit groups and other organizations that can help in community building. The Institute focuses on helping communities develop by "mobilizing neighborhood assets" – building from the inside out rather than the outside in. In the disability field, community building was prevalent in the 1980s and 1990s with roots in John McKnight's approaches.
In The Different Drum: Community-Making and Peace (1987) Scott Peck argues that the almost accidental sense of community that exists at times of crisis can be consciously built. Peck believes that conscious community building is a process of deliberate design based on the knowledge and application of certain rules. He states that this process goes through four stages:
In 1991, Peck remarked that building a sense of community is easy but maintaining this sense of community is difficult in the modern world (interview with M. Scott Peck by Alan Atkisson, In Context #29, p. 26). The three basic types of community organizing are grassroots organizing, coalition building, and "institution-based community organizing" (also called "broad-based community organizing", an example of which is faith-based community organizing, or Congregation-based Community Organizing).
Community building can use a wide variety of practices, ranging from simple events (e.g., potlucks, small book clubs) to larger-scale efforts (e.g., mass festivals, construction projects that involve local participants rather than outside contractors).
Community building that is geared toward citizen action is usually termed "community organizing". In these cases, organized community groups seek accountability from elected officials and increased direct representation within decision-making bodies. Where good-faith negotiations fail, these constituency-led organizations seek to pressure the decision-makers through a variety of means, including picketing, boycotting, sit-ins, petitioning, and electoral politics.
Community organizing can focus on more than just resolving specific issues. Organizing often means building a widely accessible power structure, often with the end goal of distributing power equally throughout the community. Community organizers generally seek to build groups that are open and democratic in governance. Such groups facilitate and encourage consensus decision-making with a focus on the general health of the community rather than a specific interest group.
If communities are developed based on something they share in common, whether location or values, then one challenge for developing communities is how to incorporate individuality and differences. Rebekah Nathan suggests in her book, My Freshman Year, that we are drawn to developing communities totally based on sameness, despite stated commitments to diversity, such as those found on university websites.
A number of ways to categorize types of community have been proposed. One such breakdown is as follows:
The usual categorizations of community relations have a number of problems: (1) they tend to give the impression that a particular community can be defined as just this kind or another; (2) they tend to conflate modern and customary community relations; (3) they tend to take sociological categories such as ethnicity or race as given, forgetting that different ethnically defined persons live in different kinds of communities—grounded, interest-based, diasporic, etc.
In response to these problems, Paul James and his colleagues have developed a taxonomy that maps community relations, and recognizes that actual communities can be characterized by different kinds of relations at the same time:
In these terms, communities can be nested and/or intersecting; one community can contain another—for example a location-based community may contain a number of ethnic communities. Both lists above can be used in a cross-cutting matrix in relation to each other.
In general, virtual communities value knowledge and information as currency or a social resource. What differentiates virtual communities from their physical counterparts is the extent and impact of "weak ties", which are the relationships acquaintances or strangers form to acquire information through online networks. Relationships among members in a virtual community tend to focus on information exchange about specific topics. A survey conducted by the Pew Internet & American Life Project in 2001 found that those involved in entertainment, professional, and sports virtual groups focused their activities on obtaining information.
An epidemic of bullying and harassment has arisen from the exchange of information between strangers, especially among teenagers, in virtual communities. Despite attempts to implement anti-bullying policies, Sheri Bauman, professor of counselling at the University of Arizona, claims the "most effective strategies to prevent bullying" may cost companies revenue.
Virtual Internet-mediated communities can interact with offline real-life activity, potentially forming strong and tight-knit groups such as QAnon. | [
{
"paragraph_id": 0,
"text": "A community is a social unit (a group of living things) with a shared socially significant characteristic, such as place, set of norms, culture, religion, values, customs, or identity. Communities may share a sense of place situated in a given geographical area (e.g. a country, village, town, or neighbourhood) or in virtual space through communication platforms. Durable good relations that extend beyond immediate genealogical ties also define a sense of community, important to their identity, practice, and roles in social institutions such as family, home, work, government, TV network, society, or humanity at large. Although communities are usually small relative to personal social ties, \"community\" may also refer to large group affiliations such as national communities, international communities, and virtual communities.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The English-language word \"community\" derives from the Old French comuneté (Modern French: communauté), which comes from the Latin communitas \"community\", \"public spirit\" (from Latin communis, \"common\").",
"title": ""
},
{
"paragraph_id": 2,
"text": "Human communities may have intent, belief, resources, preferences, needs, and risks in common, affecting the identity of the participants and their degree of cohesiveness.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Archaeological studies of social communities use the term \"community\" in two ways, paralleling usage in other areas. The first is an informal definition of community as a place where people used to live. In this sense it is synonymous with the concept of an ancient settlement—whether a hamlet, village, town, or city. The second meaning resembles the usage of the term in other social sciences: a community is a group of people living near one another who interact socially. Social interaction on a small scale can be difficult to identify with archaeological data. Most reconstructions of social communities by archaeologists rely on the principle that social interaction in the past was conditioned by physical distance. Therefore, a small village settlement likely constituted a social community and spatial subdivisions of cities and other large settlements may have formed communities. Archaeologists typically use similarities in material culture—from house types to styles of pottery—to reconstruct communities in the past. This classification method relies on the assumption that people or households will share more similarities in the types and styles of their material goods with other members of a social community than they will with outsiders.",
"title": "Perspectives of various disciplines"
},
{
"paragraph_id": 4,
"text": "In ecology, a community is an assemblage of populations—potentially of different species—interacting with one another. Community ecology is the branch of ecology that studies interactions between and among species. It considers how such interactions, along with interactions between species and the abiotic environment, affect social structure and species richness, diversity and patterns of abundance. Species interact in three ways: competition, predation and mutualism:",
"title": "Perspectives of various disciplines"
},
{
"paragraph_id": 5,
"text": "The two main types of ecological communities are major communities, which are self-sustaining and self-regulating (such as a forest or a lake), and minor communities, which rely on other communities (like fungi decomposing a log) and are the building blocks of major communities. Moreover, we can establish other non-taxonomic subdivisions of biocenosis, such as guilds.",
"title": "Perspectives of various disciplines"
},
{
"paragraph_id": 6,
"text": "The concept of \"community\" often has a positive semantic connotation, exploited rhetorically by populist politicians and by advertisers to promote feelings and associations of mutual well-being, happiness and togetherness—veering towards an almost-achievable utopian community.",
"title": "Perspectives of various disciplines"
},
{
"paragraph_id": 7,
"text": "In contrast, the epidemiological term \"community transmission\" can have negative implications, and instead of a \"criminal community\" one often speaks of a \"criminal underworld\" or of the \"criminal fraternity\".",
"title": "Perspectives of various disciplines"
},
{
"paragraph_id": 8,
"text": "In Gemeinschaft und Gesellschaft (1887), German sociologist Ferdinand Tönnies described two types of human association: Gemeinschaft (usually translated as \"community\") and Gesellschaft (\"society\" or \"association\"). Tönnies proposed the Gemeinschaft–Gesellschaft dichotomy as a way to think about social ties. No group is exclusively one or the other. Gemeinschaft stress personal social interactions, and the roles, values, and beliefs based on such interactions. Gesellschaft stress indirect interactions, impersonal roles, formal values, and beliefs based on such interactions.",
"title": "Key concepts"
},
{
"paragraph_id": 9,
"text": "In a seminal 1986 study, McMillan and Chavis identify four elements of \"sense of community\":",
"title": "Key concepts"
},
{
"paragraph_id": 10,
"text": "A \"sense of community index\" (SCI) was developed by Chavis and colleagues, and revised and adapted by others. Although originally designed to assess sense of community in neighborhoods, the index has been adapted for use in schools, the workplace, and a variety of types of communities.",
"title": "Key concepts"
},
{
"paragraph_id": 11,
"text": "Studies conducted by the APPA indicate that young adults who feel a sense of belonging in a community, particularly small communities, develop fewer psychiatric and depressive disorders than those who do not have the feeling of love and belonging.",
"title": "Key concepts"
},
{
"paragraph_id": 12,
"text": "The process of learning to adopt the behavior patterns of the community is called socialization. The most fertile time of socialization is usually the early stages of life, during which individuals develop the skills and knowledge and learn the roles necessary to function within their culture and social environment. For some psychologists, especially those in the psychodynamic tradition, the most important period of socialization is between the ages of one and ten. But socialization also includes adults moving into a significantly different environment where they must learn a new set of behaviors.",
"title": "Key concepts"
},
{
"paragraph_id": 13,
"text": "Socialization is influenced primarily by the family, through which children first learn community norms. Other important influences include schools, peer groups, people, mass media, the workplace, and government. The degree to which the norms of a particular society or community are adopted determines one's willingness to engage with others. The norms of tolerance, reciprocity, and trust are important \"habits of the heart\", as de Tocqueville put it, in an individual's involvement in community.",
"title": "Key concepts"
},
{
"paragraph_id": 14,
"text": "Community development is often linked with community work or community planning, and may involve stakeholders, foundations, governments, or contracted entities including non-government organisations (NGOs), universities or government agencies to progress the social well-being of local, regional and, sometimes, national communities. More grassroots efforts, called community building or community organizing, seek to empower individuals and groups of people by providing them with the skills they need to effect change in their own communities. These skills often assist in building political power through the formation of large social groups working for a common agenda. Community development practitioners must understand both how to work with individuals and how to affect communities' positions within the context of larger social institutions. Public administrators, in contrast, need to understand community development in the context of rural and urban development, housing and economic development, and community, organizational and business development.",
"title": "Community development"
},
{
"paragraph_id": 15,
"text": "Formal accredited programs conducted by universities, as part of degree granting institutions, are often used to build a knowledge base to drive curricula in public administration, sociology and community studies. The General Social Survey from the National Opinion Research Center at the University of Chicago and the Saguaro Seminar at the Harvard Kennedy School are examples of national community development in the United States. The Maxwell School of Citizenship and Public Affairs at Syracuse University in New York State offers core courses in community and economic development, and in areas ranging from non-profit development to US budgeting (federal to local, community funds). In the United Kingdom, the University of Oxford has led in providing extensive research in the field through its Community Development Journal, used worldwide by sociologists and community development practitioners.",
"title": "Community development"
},
{
"paragraph_id": 16,
"text": "At the intersection between community development and community building are a number of programs and organizations with community development tools. One example of this is the program of the Asset Based Community Development Institute of Northwestern University. The institute makes available downloadable tools to assess community assets and make connections between non-profit groups and other organizations that can help in community building. The Institute focuses on helping communities develop by \"mobilizing neighborhood assets\" – building from the inside out rather than the outside in. In the disability field, community building was prevalent in the 1980s and 1990s with roots in John McKnight's approaches.",
"title": "Community development"
},
{
"paragraph_id": 17,
"text": "In The Different Drum: Community-Making and Peace (1987) Scott Peck argues that the almost accidental sense of community that exists at times of crisis can be consciously built. Peck believes that conscious community building is a process of deliberate design based on the knowledge and application of certain rules. He states that this process goes through four stages:",
"title": "Community development"
},
{
"paragraph_id": 18,
"text": "In 1991, Peck remarked that building a sense of community is easy but maintaining this sense of community is difficult in the modern world. An interview with M. Scott Peck by Alan Atkisson. In Context #29, p. 26. The three basic types of community organizing are grassroots organizing, coalition building, and \"institution-based community organizing\", (also called \"broad-based community organizing\", an example of which is faith-based community organizing, or Congregation-based Community Organizing).",
"title": "Community development"
},
{
"paragraph_id": 19,
"text": "Community building can use a wide variety of practices, ranging from simple events (e.g., potlucks, small book clubs) to larger-scale efforts (e.g., mass festivals, construction projects that involve local participants rather than outside contractors).",
"title": "Community development"
},
{
"paragraph_id": 20,
"text": "Community building that is geared toward citizen action is usually termed \"community organizing\". In these cases, organized community groups seek accountability from elected officials and increased direct representation within decision-making bodies. Where good-faith negotiations fail, these constituency-led organizations seek to pressure the decision-makers through a variety of means, including picketing, boycotting, sit-ins, petitioning, and electoral politics.",
"title": "Community development"
},
{
"paragraph_id": 21,
"text": "Community organizing can focus on more than just resolving specific issues. Organizing often means building a widely accessible power structure, often with the end goal of distributing power equally throughout the community. Community organizers generally seek to build groups that are open and democratic in governance. Such groups facilitate and encourage consensus decision-making with a focus on the general health of the community rather than a specific interest group.",
"title": "Community development"
},
{
"paragraph_id": 22,
"text": "If communities are developed based on something they share in common, whether location or values, then one challenge for developing communities is how to incorporate individuality and differences. Rebekah Nathan suggests in her book, My Freshman Year, we are drawn to developing communities totally based on sameness, despite stated commitments to diversity, such as those found on university websites.",
"title": "Community development"
},
{
"paragraph_id": 23,
"text": "A number of ways to categorize types of community have been proposed. One such breakdown is as follows:",
"title": "Types of community"
},
{
"paragraph_id": 24,
"text": "The usual categorizations of community relations have a number of problems: (1) they tend to give the impression that a particular community can be defined as just this kind or another; (2) they tend to conflate modern and customary community relations; (3) they tend to take sociological categories such as ethnicity or race as given, forgetting that different ethnically defined persons live in different kinds of communities—grounded, interest-based, diasporic, etc.",
"title": "Types of community"
},
{
"paragraph_id": 25,
"text": "In response to these problems, Paul James and his colleagues have developed a taxonomy that maps community relations, and recognizes that actual communities can be characterized by different kinds of relations at the same time:",
"title": "Types of community"
},
{
"paragraph_id": 26,
"text": "In these terms, communities can be nested and/or intersecting; one community can contain another—for example a location-based community may contain a number of ethnic communities. Both lists above can used in a cross-cutting matrix in relation to each other.",
"title": "Types of community"
},
{
"paragraph_id": 27,
"text": "In general, virtual communities value knowledge and information as currency or social resource. What differentiates virtual communities from their physical counterparts is the extent and impact of \"weak ties\", which are the relationships acquaintances or strangers form to acquire information through online networks. Relationships among members in a virtual community tend to focus on information exchange about specific topics. A survey conducted by Pew Internet and The American Life Project in 2001 found those involved in entertainment, professional, and sports virtual-groups focused their activities on obtaining information.",
"title": "Internet communities"
},
{
"paragraph_id": 28,
"text": "An epidemic of bullying and harassment has arisen from the exchange of information between strangers, especially among teenagers, in virtual communities. Despite attempts to implement anti-bullying policies, Sheri Bauman, professor of counselling at the University of Arizona, claims the \"most effective strategies to prevent bullying\" may cost companies revenue.",
"title": "Internet communities"
},
{
"paragraph_id": 29,
"text": "Virtual Internet-mediated communities can interact with offline real-life activity, potentially forming strong and tight-knit groups such as QAnon.",
"title": "Internet communities"
}
] | A community is a social unit with a shared socially significant characteristic, such as place, set of norms, culture, religion, values, customs, or identity. Communities may share a sense of place situated in a given geographical area or in virtual space through communication platforms. Durable good relations that extend beyond immediate genealogical ties also define a sense of community, important to their identity, practice, and roles in social institutions such as family, home, work, government, TV network, society, or humanity at large. Although communities are usually small relative to personal social ties, "community" may also refer to large group affiliations such as national communities, international communities, and virtual communities. The English-language word "community" derives from the Old French comuneté, which comes from the Latin communitas "community", "public spirit". Human communities may have intent, belief, resources, preferences, needs, and risks in common, affecting the identity of the participants and their degree of cohesiveness. | 2001-05-20T04:57:29Z | 2023-12-08T23:06:51Z | [
"Template:Who",
"Template:Reflist",
"Template:Main",
"Template:Cite book",
"Template:Doi",
"Template:Wiktionary",
"Template:Cite journal",
"Template:Authority control",
"Template:Short description",
"Template:Lang",
"Template:Empty section",
"Template:Cite web",
"Template:Webarchive",
"Template:ISBN",
"Template:Dead link",
"Template:For multi",
"Template:Community",
"Template:According to whom",
"Template:Cite news",
"Template:Page?",
"Template:PMID",
"Template:Commons category"
] | https://en.wikipedia.org/wiki/Community |
5,696 | Community college | A community college is a type of undergraduate higher education institution, generally leading to an associate degree, certificate, or diploma. The term can have different meanings in different countries: many community colleges have an "open enrollment" for students who have graduated from high school (also known as senior secondary school or upper secondary school). The term usually refers to a higher educational institution that provides workforce education and college transfer academic programs. Some institutions maintain athletic teams and dormitories similar to their university counterparts.
In Australia, the term "community college" refers to small private businesses running short (e.g. six weeks) courses generally of a self-improvement or hobbyist nature. Equivalent to the American notion of community colleges are Technical and Further Education colleges or TAFEs; these are institutions regulated mostly at state and territory level. There are also an increasing number of private providers colloquially called "colleges".
TAFEs and other providers carry on the tradition of adult education, which was established in Australia around the mid-19th century, when evening classes were held to help adults enhance their numeracy and literacy skills. Most Australian universities can also be traced back to such forerunners, although obtaining a university charter has always changed their nature. In TAFEs and colleges today, courses are designed for personal development of an individual or for employment outcomes. Educational programs cover a variety of topics such as arts, languages, business and lifestyle. They are usually scheduled to run two, three or four days of the week, depending on the level of the course undertaken. A Certificate I may only run for 4 hours twice a week for a term of 9 weeks. A full-time Diploma course might have classes 4 days per week for a year (36 weeks). Some courses may be offered in the evenings or weekends to accommodate people working full-time. Funding for colleges may come from government grants and course fees. Many are not-for-profit organisations. Such TAFEs are located in metropolitan, regional and rural locations of Australia.
Education offered by TAFEs and colleges has changed over the years. By the 1980s, many colleges had recognised a community need for computer training. Since then thousands of people have increased skills through IT courses. The majority of colleges by the late 20th century had also become Registered Training Organisations. They offer individuals a nurturing, non-traditional education venue to gain skills that better prepare them for the workplace and potential job openings. TAFEs and colleges have not traditionally offered bachelor's degrees, instead providing pathway arrangements with universities to continue towards degrees. The American innovation of the associate degree is being developed at some institutions. Certificate courses I to IV, diplomas and advanced diplomas are typically offered, the latter deemed equivalent to an undergraduate qualification, albeit typically in more vocational areas. Recently, some TAFE institutes (and private providers) have also become higher education providers in their own right and are now starting to offer bachelor's degree programs.
In Canada, colleges are adult educational institutions that provide higher education and tertiary education, and grant certificates and diplomas. Alternatively, Canadian colleges are often called "institutes" or "polytechnic institutes". As well, in Ontario, the 24 colleges of applied arts and technology have been mandated to offer their own stand-alone degrees as well as to offer joint degrees with universities through "articulation agreements" that often result in students emerging with both a diploma and a degree. Thus, for example, the University of Guelph "twins" with Humber College, and York University does the same with Seneca College. More recently, however, colleges have been offering a variety of their own degrees, often in business, technology, science, and other technical fields. Each province has its own educational system, as prescribed by the Canadian federalism model of governance. In the mid-1960s and early 1970s, most Canadian colleges began to provide practical education and training for the emerging and booming generation, and for immigrants from around the world who were entering Canada in increasing numbers at that time. A formative trend was the merging of the then separate vocational training and adult education (night school) institutions.
Canadian colleges are either publicly funded or private post-secondary institutions (run for profit).
In terms of academic pathways, Canadian colleges and universities collaborate with each other with the purpose of providing college students the opportunity to academically upgrade their education. Students can transfer their diplomas and earn transfer credits through their completed college credits towards undergraduate university degrees.
The term associate degree is used in western Canada to refer to a two-year college arts or science degree, similar to how the term is used in the United States. In other parts of Canada, the term advanced degree is used to indicate a three- or four-year college program.
In Quebec, three years is the norm for a university degree because a year of credit is earned in the CÉGEP (college) system. Even when speaking in English, people often refer to all colleges as Cégeps; however, the term is an acronym more correctly applied specifically to the French-language public system: Collège d'enseignement général et professionnel (CEGEP); in English: College of General and Vocational Education. The word "college" can also refer to a private high school in Quebec.
In India, 98 community colleges are recognized by the University Grants Commission. The courses offered by these colleges are diplomas, advanced diplomas and certificate courses. The duration of these courses usually ranges from six months to two years.
Community colleges in Malaysia are a network of educational institutions whereby vocational and technical skills training could be provided at all levels for school leavers before they entered the workforce. The community colleges also provide an infrastructure for rural communities to gain skills training through short courses as well as providing access to a post-secondary education.
At the moment, most community colleges award qualifications up to Level 3 in the Malaysian Qualifications Framework (Certificate 3) in both the Skills sector (Sijil Kemahiran Malaysia or the Malaysian Skills Certificate) as well as the Vocational and Training sector, but the number of community colleges that are starting to award Level 4 qualifications (Diploma) is increasing. This is two levels below a bachelor's degree (Level 6 in the MQF) and students within the system who intend to further their studies to that level will usually seek entry into Advanced Diploma programs in public universities, polytechnics or accredited private providers.
In the Philippines, a community school functions as an elementary or secondary school in the daytime and towards the end of the day converts into a community college. This type of institution offers night classes under the supervision of the same principal, and the same faculty members who are given a part-time college teaching load.
The concept of community college dates back to the time of the former Ministry of Education, Culture and Sports (MECS), which had under its wing the Bureaus of Elementary Education, Secondary Education, Higher Education and Vocational-Technical Education. MECS Secretary Cecilio Putong wrote in 1971 that a community school is a school established in the community, by the community, and for the community itself. Pedro T. Orata of Pangasinan shared the same idea, hence the establishment of a community college, now called the City College of Urdaneta.
A community college like the one in Abuyog, Leyte can operate with only a PHP 124,000 annual budget in a two-story structure housing more than 700 students.
Except for Scotland, this term is rarely used in the United Kingdom. When it is, a community college is a school which not only provides education for the school-age population (11–18) of the locality, but also additional services and education to adults and other members of the community. This education includes but is not limited to sports, adult literacy and lifestyle education. Usually when students finish their secondary school studies at age 16, they move on to a sixth form college where they study for their A-levels (although some secondary schools have integrated sixth forms). After the two-year A-level period, they may proceed to a college of further education or a university. The former is also known as a technical college.
In the United States, community colleges, sometimes called junior colleges, technical colleges, two-year colleges, or city colleges, are primarily public institutions providing tertiary education, also known as continuing education, that focuses on certificates, diplomas, and associate degrees. After graduating from a community college, some students transfer to a liberal arts college or university for two to three years to complete a bachelor's degree.
Before the 1970s, community colleges in the United States were more commonly referred to as junior colleges. That term is still used at some institutions. Public community colleges primarily attract and accept students from the local community and are usually supported by local tax revenue. They usually work with local and regional businesses to ensure students are being prepared for the local workforce.
Some research organizations and publications focus upon the activities of community college, junior college, and technical college institutions. Many of these institutions and organizations present the most current research and practical outcomes at annual community college conferences.
Several peer-reviewed journals extensively publish research on community colleges: | [
{
"paragraph_id": 0,
"text": "A community college is a type of undergraduate higher education institution, generally leading to an associate degree, certificate, or diploma. The term can have different meanings in different countries: many community colleges have an \"open enrollment\" for students who have graduated from high school (also known as senior secondary school or upper secondary school). The term usually refers to a higher educational institution that provides workforce education and college transfer academic programs. Some institutions maintain athletic teams and dormitories similar to their university counterparts.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In Australia, the term \"community college\" refers to small private businesses running short (e.g. six weeks) courses generally of a self-improvement or hobbyist nature. Equivalent to the American notion of community colleges are Technical and Further Education colleges or TAFEs; these are institutions regulated mostly at state and territory level. There are also an increasing number of private providers colloquially called \"colleges\".",
"title": "Australia"
},
{
"paragraph_id": 2,
"text": "TAFEs and other providers carry on the tradition of adult education, which was established in Australia around the mid-19th century, when evening classes were held to help adults enhance their numeracy and literacy skills. Most Australian universities can also be traced back to such forerunners, although obtaining a university charter has always changed their nature. In TAFEs and colleges today, courses are designed for personal development of an individual or for employment outcomes. Educational programs cover a variety of topics such as arts, languages, business and lifestyle. They usually are scheduled to run two, three or four days of the week, depending on the level of the course undertaken. A Certificate I may only run for 4 hours twice a week for a term of 9 weeks. A full-time Diploma course might have classes 4 days per week for a year (36 weeks). Some courses may be offered in the evenings or weekends to accommodate people working full-time. Funding for colleges may come from government grants and course fees. Many are not-for-profit organisations. Such TAFES are located in metropolitan, regional and rural locations of Australia.",
"title": "Australia"
},
{
"paragraph_id": 3,
"text": "Education offered by TAFEs and colleges has changed over the years. By the 1980s, many colleges had recognised a community need for computer training. Since then thousands of people have increased skills through IT courses. The majority of colleges by the late 20th century had also become Registered Training Organisations. They offer individuals a nurturing, non-traditional education venue to gain skills that better prepare them for the workplace and potential job openings. TAFEs and colleges have not traditionally offered bachelor's degrees, instead providing pathway arrangements with universities to continue towards degrees. The American innovation of the associate degree is being developed at some institutions. Certificate courses I to IV, diplomas and advanced diplomas are typically offered, the latter deemed equivalent to an undergraduate qualification, albeit typically in more vocational areas. Recently, some TAFE institutes (and private providers) have also become higher education providers in their own right and are now starting to offer bachelor's degree programs.",
"title": "Australia"
},
{
"paragraph_id": 4,
"text": "In Canada, colleges are adult educational institutions that provide higher education and tertiary education, and grant certificates and diplomas. Alternatively, Canadian colleges are often called \"institutes\" or \"polytechnic institutes\". As well, in Ontario, the 24 colleges of applied arts and technology have been mandated to offer their own stand-alone degrees as well as to offer joint degrees with universities through \"articulation agreements\" that often result in students emerging with both a diploma and a degree. Thus, for example, the University of Guelph \"twins\" with Humber College and York University does the same with Seneca College. More recently, however, colleges have been offering a variety of their own degrees, often in business, technology, science, and other technical fields. Each province has its own educational system, as prescribed by the Canadian federalism model of governance. In the mid-1960s and early 1970s, most Canadian colleges began to provide practical education and training for the emerging and booming generation, and for immigrants from around the world who were entering Canada in increasing numbers at that time. A formative trend was the merging of the then separate vocational training and adult education (night school) institutions.",
"title": "Canada"
},
{
"paragraph_id": 5,
"text": "Canadian colleges are either publicly funded or private post-secondary institutions (run for profit).",
"title": "Canada"
},
{
"paragraph_id": 6,
"text": "In terms of academic pathways, Canadian colleges and universities collaborate with each other with the purpose of providing college students the opportunity to academically upgrade their education. Students can transfer their diplomas and earn transfer credits through their completed college credits towards undergraduate university degrees.",
"title": "Canada"
},
{
"paragraph_id": 7,
"text": "The term associate degree is used in western Canada to refer to a two-year college arts or science degree, similar to how the term is used in the United States. In other parts of Canada, the term advanced degree is used to indicate a three- or four-year college program.",
"title": "Canada"
},
{
"paragraph_id": 8,
"text": "In Quebec, three years is the norm for a university degree because a year of credit is earned in the CÉGEP (college) system. Even when speaking in English, people often refer to all colleges as Cégeps; however, the term is an acronym more correctly applied specifically to the French-language public system: Collège d'enseignement général et professionnel (CEGEP); in English: College of General and Vocational Education. The word \"college\" can also refer to a private high school in Quebec.",
"title": "Canada"
},
{
"paragraph_id": 9,
"text": "In India, 98 community colleges are recognized by the University Grants Commission. The courses offered by these colleges are diplomas, advance diplomas and certificate courses. The duration of these courses usually ranges from six months to two years.",
"title": "India"
},
{
"paragraph_id": 10,
"text": "Community colleges in Malaysia are a network of educational institutions whereby vocational and technical skills training could be provided at all levels for school leavers before they entered the workforce. The community colleges also provide an infrastructure for rural communities to gain skills training through short courses as well as providing access to a post-secondary education.",
"title": "Malaysia"
},
{
"paragraph_id": 11,
"text": "At the moment, most community colleges award qualifications up to Level 3 in the Malaysian Qualifications Framework (Certificate 3) in both the Skills sector (Sijil Kemahiran Malaysia or the Malaysian Skills Certificate) as well as the Vocational and Training sector but the number of community colleges that are starting to award Level 4 qualifications (Diploma) are increasing. This is two levels below a bachelor's degree (Level 6 in the MQF) and students within the system who intend to further their studies to that level will usually seek entry into Advanced Diploma programs in public universities, polytechnics or accredited private providers.",
"title": "Malaysia"
},
{
"paragraph_id": 12,
"text": "In the Philippines, a community school functions as elementary or secondary school at daytime and towards the end of the day convert into a community college. This type of institution offers night classes under the supervision of the same principal, and the same faculty members who are given part-time college teaching load.",
"title": "Philippines"
},
{
"paragraph_id": 13,
"text": "The concept of community college dates back to the time of the former Minister of Education, Culture and Sports (MECS) that had under its wings the Bureaus of Elementary Education, Secondary Education, Higher Education and Vocational-Technical Education. MECS Secretary, Cecilio Putong, who in 1971 wrote that a community school is a school established in the community, by the community, and for the community itself. Pedro T. Orata of Pangasinan shared the same idea, hence the establishment of a community college, now called the City College of Urdaneta.",
"title": "Philippines"
},
{
"paragraph_id": 14,
"text": "A community college like the one in Abuyog, Leyte can operate with only a PHP 124,000 annual budget in a two-story structure housing more than 700 students.",
"title": "Philippines"
},
{
"paragraph_id": 15,
"text": "Except for Scotland, this term is rarely used in the United Kingdom. When it is, a community college is a school which not only provides education for the school-age population (11–18) of the locality, but also additional services and education to adults and other members of the community. This education includes but is not limited to sports, adult literacy and lifestyle education. Usually when students finish their secondary school studies at age 16, they move on to a sixth form college where they study for their A-levels (although some secondary schools have integrated sixth forms). After the two-year A-level period, they may proceed to a college of further education or a university. The former is also known as a technical college.",
"title": "United Kingdom"
},
{
"paragraph_id": 16,
"text": "In the United States, community colleges, sometimes called junior colleges, technical colleges, two-year colleges, or city colleges, are primarily public institutions providing tertiary education, also known as continuing education, that focuses on certificates, diplomas, and associate degrees. After graduating from a community college, some students transfer to a liberal arts college or university for two to three years to complete a bachelor's degree.",
"title": "United States"
},
{
"paragraph_id": 17,
"text": "Before the 1970s, community colleges in the United States were more commonly referred to as junior colleges. That term is still used at some institutions. Public community colleges primarily attract and accept students from the local community and are usually supported by local tax revenue. They usually work with local and regional businesses to ensure students are being prepared for the local workforce.",
"title": "United States"
},
{
"paragraph_id": 18,
"text": "Some research organizations and publications focus upon the activities of community college, junior college, and technical college institutions. Many of these institutions and organizations present the most current research and practical outcomes at annual community college conferences.",
"title": "Research"
},
{
"paragraph_id": 19,
"text": "Several peer-reviewed journals extensively publish research on community colleges:",
"title": "Research"
}
] | A community college is a type of undergraduate higher education institution, generally leading to an associate degree, certificate, or diploma. The term can have different meanings in different countries: many community colleges have an "open enrollment" for students who have graduated from high school. The term usually refers to a higher educational institution that provides workforce education and college transfer academic programs. Some institutions maintain athletic teams and dormitories similar to their university counterparts. | 2001-05-20T04:59:46Z | 2023-12-19T08:41:34Z | [
"Template:Further",
"Template:See also",
"Template:Reflist",
"Template:Schools",
"Template:Short description",
"Template:Hatnote group",
"Template:Main",
"Template:Cite web",
"Template:Webarchive",
"Template:ISBN",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Community_college |
5,697 | Civil Rights Memorial | The Civil Rights Memorial is an American memorial in Montgomery, Alabama, created by Maya Lin. The names of 41 people are inscribed on the granite fountain as martyrs who were killed in the civil rights movement. The memorial is sponsored by the Southern Poverty Law Center.
The names included in the memorial belong to those who were killed between 1955 and 1968. The dates chosen represent a time when legalized segregation was prominent. In 1954 the U.S. Supreme Court ruled in Brown v. Board of Education that racial segregation in schools was unlawful and 1968 is the year of the assassination of Martin Luther King Jr. The monument was created by Maya Lin, who also created the Vietnam Veterans Memorial in Washington, D.C. The Civil Rights Memorial was dedicated in 1989.
The concept of Lin's design is based on the soothing and healing effect of water. It was inspired by a passage from King's 1963 "I Have a Dream" speech: "...we will not be satisfied until justice rolls down like waters and righteousness like a mighty stream..." The quotation in the passage, which is inscribed on the memorial, is a direct paraphrase of Amos 5:24, as translated in the American Standard Version of the Bible. The memorial is a fountain in the form of an asymmetric inverted stone cone. A film of water flows over the base of the cone, which contains the 41 names included. It is possible to touch the smooth film of water and to alter it temporarily; the water quickly returns to smoothness. The memorial is designed in a timeline manner. It begins with Brown v. Board in 1954, and ends with Martin Luther King Jr.'s assassination in 1968.
Lin, Maya, "Civil Rights Memorial, 1989", Maya Lin Studio, retrieved October 6, 2023
The memorial is in downtown Montgomery, at 400 Washington Avenue, in an open plaza in front of the Civil Rights Memorial Center, which housed the offices of the Southern Poverty Law Center until the organization moved across the street into a new building in 2001. The memorial may be visited freely 24 hours a day, 7 days a week.
The Civil Rights Memorial Center offers guided group tours, lasting approximately one hour. Tours are available by appointment, Monday to Saturday.
The memorial is only a few blocks from other historic sites, including the Dexter Avenue King Memorial Baptist Church, the Alabama State Capitol, the Alabama Department of Archives and History, the corners where Claudette Colvin and Rosa Parks boarded buses in 1955 on which they would later refuse to give up their seats, and the Rosa Parks Library and Museum.
The 41 names included in the Civil Rights Memorial are those of:
"The Forgotten" are 74 people who are identified in a display at the Civil Rights Memorial Center. These names were not inscribed on the Memorial because there was insufficient information about their deaths at the time the Memorial was created. However, it is thought that these people were killed as a result of racially motivated violence between 1952 and 1968.
32°22′35″N 86°18′12″W / 32.37626°N 86.30325°W / 32.37626; -86.30325 | [
{
"paragraph_id": 0,
"text": "The Civil Rights Memorial is an American memorial in Montgomery, Alabama, created by Maya Lin. The names of 41 people are inscribed on the granite fountain as martyrs who were killed in the civil rights movement. The memorial is sponsored by the Southern Poverty Law Center.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The names included in the memorial belong to those who were killed between 1955 and 1968. The dates chosen represent a time when legalized segregation was prominent. In 1956 the U.S. Supreme Court ruled in Brown v. Board of Education that racial segregation in schools was unlawful and 1968 is the year of the assassination of Martin Luther King Jr. The monument was created by Maya Lin, who also created the Vietnam Veterans Memorial in Washington, D.C. The Civil Rights Memorial was dedicated in 1989.",
"title": "Design"
},
{
"paragraph_id": 2,
"text": "The concept of Lin's design is based on the soothing and healing effect of water. It was inspired by a passage from King's 1963 \"I Have a Dream\" speech \"...we will not be satisfied \"until justice rolls down like waters and righteousness like a mighty stream...\" The quotation in the passage, which is inscribed on the memorial, is a direct paraphrase of Amos 5:24, as translated in the American Standard Version of the Bible. The memorial is a fountain in the form of an asymmetric inverted stone cone. A film of water flows over the base of the cone, which contains the 41 names included. It is possible to touch the smooth film of water and to alter it temporarily, which quickly returns to smoothness. The memorial is designed in a timeline manner. It begins with Brown v. Board in 1954, and ends with Martin Luther King Jr.'s assassination in 1968.",
"title": "Design"
},
{
"paragraph_id": 3,
"text": "Lin, Maya, \"Civil Rights Memorial, 1989\", Maya Lin Studio, retrieved October 6, 2023",
"title": "Design"
},
{
"paragraph_id": 4,
"text": "The memorial is in downtown Montgomery, at 400 Washington Avenue, in an open plaza in front of the Civil Rights Memorial Center, which was the offices of the Southern Poverty Law Center until it moved across the street into a new building in 2001. The memorial may be visited freely 24 hours a day, 7 days a week.",
"title": "Tours and location"
},
{
"paragraph_id": 5,
"text": "The Civil Rights Memorial Center offers guided group tours, lasting approximately one hour. Tours are available by appointment, Monday to Saturday.",
"title": "Tours and location"
},
{
"paragraph_id": 6,
"text": "The memorial is only a few blocks from other historic sites, including the Dexter Avenue King Memorial Baptist Church, the Alabama State Capitol, the Alabama Department of Archives and History, the corners where Claudette Colvin and Rosa Parks boarded buses in 1955 on which they would later refuse to give up their seats, and the Rosa Parks Library and Museum.",
"title": "Tours and location"
},
{
"paragraph_id": 7,
"text": "The 41 names included in the Civil Rights Memorial are those of:",
"title": "Names included"
},
{
"paragraph_id": 8,
"text": "\"The Forgotten\" are 74 people who are identified in a display at the Civil Rights Memorial Center. These names were not inscribed on the Memorial because there was insufficient information about their deaths at the time the Memorial was created. However, it is thought that these people were killed as a result of racially motivated violence between 1952 and 1968.",
"title": "Names included"
},
{
"paragraph_id": 9,
"text": "32°22′35″N 86°18′12″W / 32.37626°N 86.30325°W / 32.37626; -86.30325",
"title": "External links"
}
] | The Civil Rights Memorial is an American memorial in Montgomery, Alabama, created by Maya Lin. The names of 41 people are inscribed on the granite fountain as martyrs who were killed in the civil rights movement. The memorial is sponsored by the Southern Poverty Law Center. | 2001-05-20T10:50:48Z | 2023-12-11T02:50:17Z | [
"Template:Short description",
"Template:Civil rights movement",
"Template:Infobox monument",
"Template:Citation",
"Template:Div col",
"Template:Div col end",
"Template:Reflist",
"Template:Cite news",
"Template:Cite web",
"Template:Coord",
"Template:Civil Rights Memorial",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Civil_Rights_Memorial |
Charles Babbage

Charles Babbage KH FRS (/ˈbæbɪdʒ/; 26 December 1791 – 18 October 1871) was an English polymath. A mathematician, philosopher, inventor and mechanical engineer, Babbage originated the concept of a digital programmable computer.
Babbage is considered by some to be the "father of the computer". He is credited with inventing the first mechanical computer, the Difference Engine, which eventually led to more complex electronic designs, though all the essential ideas of modern computers are to be found in his Analytical Engine, programmed using a principle openly borrowed from the Jacquard loom. Babbage had a broad range of interests in addition to his work on computers, some of which are covered in his 1832 book On the Economy of Machinery and Manufactures. His varied work in other fields has led him to be described as "pre-eminent" among the many polymaths of his century.
Babbage, who died before many of his designs, including the Difference Engine and the Analytical Engine, could be completely and successfully engineered, remains a prominent figure in the history of computing. Parts of Babbage's incomplete mechanisms are on display in the Science Museum in London. In 1991, a functioning difference engine was constructed from Babbage's original plans. Built to tolerances achievable in the 19th century, the success of the finished engine indicated that Babbage's machine would have worked.
Babbage's birthplace is disputed, but according to the Oxford Dictionary of National Biography he was most likely born at 44 Crosby Row, Walworth Road, London, England. A blue plaque on the junction of Larcom Street and Walworth Road commemorates the event.
His date of birth was given in his obituary in The Times as 26 December 1792; but then a nephew wrote to say that Babbage was born one year earlier, in 1791. The parish register of St. Mary's, Newington, London, shows that Babbage was baptised on 6 January 1792, supporting a birth year of 1791.
Babbage was one of four children of Benjamin Babbage and Betsy Plumleigh Teape. His father was a banking partner of William Praed in founding Praed's & Co. of Fleet Street, London, in 1801. In 1808, the Babbage family moved into the old Rowdens house in East Teignmouth. Around the age of eight, Babbage was sent to a country school in Alphington near Exeter to recover from a life-threatening fever. For a short time, he attended King Edward VI Grammar School in Totnes, South Devon, but his health forced him back to private tutors for a time.
Babbage then joined the 30-student Holmwood Academy, in Baker Street, Enfield, Middlesex, under the Reverend Stephen Freeman. The academy had a library that prompted Babbage's love of mathematics. He studied with two more private tutors after leaving the academy. The first was a clergyman near Cambridge; through him Babbage encountered Charles Simeon and his evangelical followers, but the tuition was not what he needed. He was brought home, to study at the Totnes school: this was at age 16 or 17. The second was an Oxford tutor, under whom Babbage reached a level in Classics sufficient to be accepted by the University of Cambridge.
Babbage arrived at Trinity College, Cambridge, in October 1810. He was already self-taught in some parts of contemporary mathematics; he had read Robert Woodhouse, Joseph Louis Lagrange, and Marie Agnesi. As a result, he was disappointed in the standard mathematical instruction available at the university.
Babbage, John Herschel, George Peacock, and several other friends formed the Analytical Society in 1812; they were also close to Edward Ryan. As a student, Babbage was also a member of other societies such as The Ghost Club, concerned with investigating supernatural phenomena, and the Extractors Club, dedicated to liberating its members from the madhouse, should any be committed to one.
In 1812, Babbage transferred to Peterhouse, Cambridge. He was the top mathematician there, but did not graduate with honours. He instead received a degree without examination in 1814. He had defended a thesis that was considered blasphemous in the preliminary public disputation, but it is not known whether this fact is related to his not sitting the examination.
Considering his reputation, Babbage quickly made progress. He lectured to the Royal Institution on astronomy in 1815, and was elected a Fellow of the Royal Society in 1816. After graduation, on the other hand, he applied for positions unsuccessfully, and had little in the way of a career. In 1816 he was a candidate for a teaching job at Haileybury College; he had recommendations from James Ivory and John Playfair, but lost out to Henry Walter. In 1819, Babbage and Herschel visited Paris and the Society of Arcueil, meeting leading French mathematicians and physicists. That year Babbage applied to be professor at the University of Edinburgh, with the recommendation of Pierre Simon Laplace; the post went to William Wallace.
With Herschel, Babbage worked on the electrodynamics of Arago's rotations, publishing in 1825. Their explanations were only transitional, being picked up and broadened by Michael Faraday. The phenomena are now part of the theory of eddy currents, and Babbage and Herschel missed some of the clues to unification of electromagnetic theory, staying close to Ampère's force law.
Babbage purchased the actuarial tables of George Barrett, who died in 1821 leaving unpublished work, and surveyed the field in 1826 in Comparative View of the Various Institutions for the Assurance of Lives. This interest followed a project to set up an insurance company, prompted by Francis Baily and mooted in 1824, but not carried out. Babbage did calculate actuarial tables for that scheme, using Equitable Society mortality data from 1762 onwards.
During this whole period, Babbage depended awkwardly on his father's support, given his father's attitude to his early marriage, of 1814: he and Edward Ryan wedded the Whitmore sisters. He made a home in Marylebone in London and established a large family. On his father's death in 1827, Babbage inherited a large estate (value around £100,000, equivalent to £9.21 million or $12.6 million today), making him independently wealthy. After his wife's death in the same year he spent time travelling. In Italy he met Leopold II, Grand Duke of Tuscany, foreshadowing a later visit to Piedmont. In April 1828 he was in Rome, and relying on Herschel to manage the difference engine project, when he heard that he had become a professor at Cambridge, a position he had three times failed to obtain (in 1820, 1823 and 1826).
Babbage was instrumental in founding the Royal Astronomical Society in 1820, initially known as the Astronomical Society of London. Its original aims were to reduce astronomical calculations to a more standard form, and to circulate data. These directions were closely connected with Babbage's ideas on computation, and in 1824 he won its Gold Medal, cited "for his invention of an engine for calculating mathematical and astronomical tables".
Babbage's motivation to overcome errors in tables by mechanisation had been a commonplace since Dionysius Lardner wrote about it in 1834 in the Edinburgh Review (under Babbage's guidance). The context of these developments is still debated. Babbage's own account of the origin of the difference engine begins with the Astronomical Society's wish to improve The Nautical Almanac. Babbage and Herschel were asked to oversee a trial project, to recalculate some part of those tables. With the results to hand, discrepancies were found. This was in 1821 or 1822, and was the occasion on which Babbage formulated his idea for mechanical computation. The issue of the Nautical Almanac is now described as a legacy of a polarisation in British science caused by attitudes to Sir Joseph Banks, who had died in 1820.
Babbage studied the requirements to establish a modern postal system, with his friend Thomas Frederick Colby, concluding that there should be a uniform rate; the idea was put into effect with the introduction of the Uniform Fourpenny Post in 1839, supplanted by the Uniform Penny Post in 1840. Colby was another of the founding group of the Society. He was also in charge of the Survey of Ireland. Herschel and Babbage were present at a celebrated operation of that survey, the remeasuring of the Lough Foyle baseline.
The Analytical Society had initially been no more than an undergraduate provocation. During this period it had some more substantial achievements. In 1816 Babbage, Herschel and Peacock published a translation from French of the lectures of Sylvestre Lacroix, which was then the state-of-the-art calculus textbook.
Reference to Lagrange in calculus terms marks out the application of what are now called formal power series. British mathematicians had used them from about 1730 to 1760. As re-introduced, they were not simply applied as notations in differential calculus. They opened up the fields of functional equations (including the difference equations fundamental to the difference engine) and operator (D-module) methods for differential equations. The analogy between difference and differential equations was expressed notationally by changing Δ to D, as a "finite" difference becomes "infinitesimal". These symbolic directions became popular as operational calculus, and were pushed to the point of diminishing returns. The Cauchy concept of limit was kept at bay. Woodhouse had already founded this second "British Lagrangian School" with its treatment of Taylor series as formal.
In this context function composition is complicated to express, because the chain rule is not simply applied to second and higher derivatives. This matter was known to Woodhouse by 1803, who took from Louis François Antoine Arbogast what is now called Faà di Bruno's formula. In essence it was known to Abraham De Moivre (1697). Herschel found the method impressive, Babbage knew of it, and it was later noted by Ada Lovelace as compatible with the analytical engine. In the period to 1820 Babbage worked intensively on functional equations in general, and resisted both conventional finite differences and Arbogast's approach (in which Δ and D were related by the simple additive case of the exponential map). But via Herschel he was influenced by Arbogast's ideas in the matter of iteration, i.e. composing a function with itself, possibly many times. Writing in a major paper on functional equations in the Philosophical Transactions (1815/6), Babbage said his starting point was work of Gaspard Monge.
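In modern notation (a standard summary, not Babbage's or Arbogast's own presentation), the relation between the forward difference Δ for step size h and the derivative operator D is, by Taylor's theorem treated as a formal power series,

    \Delta f(x) = f(x+h) - f(x) = \sum_{n \ge 1} \frac{(hD)^n}{n!}\, f(x) = \left( e^{hD} - 1 \right) f(x),

so that Δ = e^{hD} − 1: this is the exponential-map connection between Δ and D referred to above, with no convergence questions asked.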
From 1828 to 1839, Babbage was Lucasian Professor of Mathematics at Cambridge. Not a conventional resident don, and inattentive to his teaching responsibilities, he wrote three topical books during this period of his life. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1832. Babbage was out of sympathy with colleagues: George Biddell Airy, his predecessor as Lucasian Professor of Mathematics at Trinity College, Cambridge, thought an issue should be made of his lack of interest in lecturing. Babbage planned to lecture in 1831 on political economy. Babbage's reforming direction looked to see university education more inclusive, universities doing more for research, a broader syllabus and more interest in applications; but William Whewell found the programme unacceptable. A controversy Babbage had with Richard Jones lasted for six years. He never did give a lecture.
It was during this period that Babbage tried to enter politics. Simon Schaffer writes that his views of the 1830s included disestablishment of the Church of England, a broader political franchise, and inclusion of manufacturers as stakeholders. He twice stood for Parliament as a candidate for the borough of Finsbury. In 1832 he came in third among five candidates, missing out by some 500 votes in the two-member constituency when two other reformist candidates, Thomas Wakley and Christopher Temple, split the vote. In his memoirs Babbage related how this election brought him the friendship of Samuel Rogers: his brother Henry Rogers wished to support Babbage again, but died within days. In 1834 Babbage finished last among four. In 1832, Babbage, Herschel and Ivory were appointed Knights of the Royal Guelphic Order; however, they were not subsequently made knights bachelor to entitle them to the prefix Sir, which often came with appointments to that foreign order (though Herschel was later created a baronet).
Babbage now emerged as a polemicist. One of his biographers notes that all his books contain a "campaigning element". His Reflections on the Decline of Science and some of its Causes (1830) stands out, however, for its sharp attacks. It aimed to improve British science, and more particularly to oust Davies Gilbert as President of the Royal Society, which Babbage wished to reform. It was written out of pique, when Babbage hoped to become the junior secretary of the Royal Society, as Herschel was the senior, but failed because of his antagonism to Humphry Davy. Michael Faraday had a reply written, by Gerrit Moll, as On the Alleged Decline of Science in England (1831). On the front of the Royal Society Babbage had no impact, with the bland election of the Duke of Sussex to succeed Gilbert the same year. As a broad manifesto, on the other hand, his Decline led promptly to the formation in 1831 of the British Association for the Advancement of Science (BAAS).
The Mechanics' Magazine in 1831 identified as Declinarians the followers of Babbage. In an unsympathetic tone it pointed out David Brewster writing in the Quarterly Review as another leader; with the barb that both Babbage and Brewster had received public money.
In the debate of the period on statistics (qua data collection) and what is now statistical inference, the BAAS in its Statistical Section (which owed something also to Whewell) opted for data collection. This Section was the sixth, established in 1833 with Babbage as chairman and John Elliot Drinkwater as secretary. The foundation of the Statistical Society followed. Babbage was its public face, backed by Richard Jones and Robert Malthus.
Babbage published On the Economy of Machinery and Manufactures (1832), on the organisation of industrial production. It was an influential early work of operational research. John Rennie the Younger in addressing the Institution of Civil Engineers on manufacturing in 1846 mentioned mostly surveys in encyclopaedias, and Babbage's book was first an article in the Encyclopædia Metropolitana, the form in which Rennie noted it, in the company of related works by John Farey Jr., Peter Barlow and Andrew Ure. From An essay on the general principles which regulate the application of machinery to manufactures and the mechanical arts (1827), which became the Encyclopædia Metropolitana article of 1829, Babbage developed the schematic classification of machines that, combined with discussion of factories, made up the first part of the book. The second part considered the "domestic and political economy" of manufactures.
The book sold well, and quickly went to a fourth edition (1836). Babbage represented his work as largely a result of actual observations in factories, British and abroad. It was not, in its first edition, intended to address deeper questions of political economy; the second (late 1832) did, with three further chapters including one on piece rate. The book also contained ideas on rational design in factories, and profit sharing.
Economy of Machinery described what is now called the "Babbage principle", which points out the commercial advantages available through more careful division of labour. As Babbage himself noted, the idea had already appeared in the work of Melchiorre Gioia in 1815. The term was introduced in 1974 by Harry Braverman. Related formulations are the "principle of multiples" of Philip Sargant Florence, and the "balance of processes".
What Babbage remarked is that skilled workers typically spend parts of their time performing tasks that are below their skill level. If the labour process can be divided among several workers, labour costs may be cut by assigning only high-skill tasks to high-cost workers, restricting other tasks to lower-paid workers. He also pointed out that training or apprenticeship can be taken as fixed costs; but that returns to scale are available by his approach of standardisation of tasks, therefore again favouring the factory system. His view of human capital was restricted to minimising the time period for recovery of training costs.
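A minimal numerical sketch of the principle in Python; the wage rates and task hours below are invented for illustration and are not drawn from Babbage's book:

    # Hypothetical job: 2 hours of skilled work plus 6 hours of routine work.
    SKILLED_RATE = 12.0    # assumed wage per hour for the skilled worker
    ROUTINE_RATE = 4.0     # assumed wage per hour for the lower-paid worker
    skilled_hours, routine_hours = 2, 6

    # One craftsman performs every task, so all hours are paid at the high rate.
    undivided_cost = (skilled_hours + routine_hours) * SKILLED_RATE   # 96.0

    # Babbage principle: buy exactly the grade of skill each task requires.
    divided_cost = (skilled_hours * SKILLED_RATE
                    + routine_hours * ROUTINE_RATE)                   # 48.0

    print(undivided_cost, divided_cost)

The same total labour is performed in both cases; only the matching of skill to task changes the cost.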
Another aspect of the work was its detailed breakdown of the cost structure of book publishing. Babbage took the unpopular line, from the publishers' perspective, of exposing the trade's profitability. He went as far as to name the organisers of the trade's restrictive practices. Twenty years later he attended a meeting hosted by John Chapman to campaign against the Booksellers Association, still a cartel.
It has been written that "what Arthur Young was to agriculture, Charles Babbage was to the factory visit and machinery". Babbage's theories are said to have influenced the layout of the 1851 Great Exhibition, and his views had a strong effect on his contemporary George Julius Poulett Scrope. Karl Marx argued that the source of the productivity of the factory system was exactly the combination of the division of labour with machinery, building on Adam Smith, Babbage and Ure. Where Marx picked up on Babbage and disagreed with Smith was on the motivation for division of labour by the manufacturer: as Babbage did, he wrote that it was for the sake of profitability, rather than productivity, and identified an impact on the concept of a trade.
John Ruskin went further, to oppose completely what manufacturing in Babbage's sense stood for. Babbage also affected the economic thinking of John Stuart Mill. George Holyoake saw Babbage's detailed discussion of profit sharing as substantive, in the tradition of Robert Owen and Charles Fourier, if requiring the attentions of a benevolent captain of industry, and ignored at the time.
Works by Babbage and Ure were published in French translation in 1830; On the Economy of Machinery was translated in 1833 into French by Édouard Biot, and into German the same year by Gottfried Friedenberg. The French engineer and writer on industrial organisation Léon Lalanne was influenced by Babbage, but also by the economist Claude Lucien Bergery, in reducing the issues to "technology". William Jevons connected Babbage's "economy of labour" with his own labour experiments of 1870. The Babbage principle is an inherent assumption in Frederick Winslow Taylor's scientific management.
Mary Everest Boole claimed that there was profound influence – via her uncle George Everest – of Indian thought in general and Indian logic, in particular, on Babbage and on her husband George Boole, as well as on Augustus De Morgan:
Think what must have been the effect of the intense Hinduizing of three such men as Babbage, De Morgan, and George Boole on the mathematical atmosphere of 1830–65. What share had it in generating the Vector Analysis and the mathematics by which investigations in physical science are now conducted?
In 1837, responding to the series of eight Bridgewater Treatises, Babbage published his Ninth Bridgewater Treatise, under the title On the Power, Wisdom and Goodness of God, as manifested in the Creation. In this work Babbage weighed in on the side of uniformitarianism in a current debate. He preferred the conception of creation in which a God-given natural law dominated, removing the need for continuous "contrivance".
The book is a work of natural theology, and incorporates extracts from related correspondence of Herschel with Charles Lyell. Babbage put forward the thesis that God had the omnipotence and foresight to create as a divine legislator. In this book, Babbage dealt with relating interpretations between science and religion; on the one hand, he insisted that "there exists no fatal collision between the words of Scripture and the facts of nature;" on the other hand, he wrote that the Book of Genesis was not meant to be read literally in relation to scientific terms. Against those who said these were in conflict, he wrote "that the contradiction they have imagined can have no real existence, and that whilst the testimony of Moses remains unimpeached, we may also be permitted to confide in the testimony of our senses."
The Ninth Bridgewater Treatise was quoted extensively in Vestiges of the Natural History of Creation. The parallel with Babbage's computing machines is made explicit, as allowing plausibility to the theory that transmutation of species could be pre-programmed.
Jonar Ganeri, author of Indian Logic, believes Babbage may have been influenced by Indian thought; one possible route would be through Henry Thomas Colebrooke. Mary Everest Boole argues that Babbage was introduced to Indian thought in the 1820s by her uncle George Everest:
Some time about 1825, [Everest] came to England for two or three years, and made a fast and lifelong friendship with Herschel and with Babbage, who was then quite young. I would ask any fair-minded mathematician to read Babbage's Ninth Bridgewater Treatise and compare it with the works of his contemporaries in England; and then ask himself whence came the peculiar conception of the nature of miracle which underlies Babbage's ideas of Singular Points on Curves (Chap, viii) – from European Theology or Hindu Metaphysic? Oh! how the English clergy of that day hated Babbage's book!
Babbage was raised in the Protestant form of the Christian faith, his family having inculcated in him an orthodox form of worship. He explained:
My excellent mother taught me the usual forms of my daily and nightly prayer; and neither in my father nor my mother was there any mixture of bigotry and intolerance on the one hand, nor on the other of that unbecoming and familiar mode of addressing the Almighty which afterwards so much disgusted me in my youthful years.
Rejecting the Athanasian Creed as a "direct contradiction in terms", in his youth he looked to Samuel Clarke's works on religion, of which Being and Attributes of God (1704) exerted a particularly strong influence on him. Later in life, Babbage concluded that "the true value of the Christian religion rested, not on speculative [theology] … but … upon those doctrines of kindness and benevolence which that religion claims and enforces, not merely in favour of man himself but of every creature susceptible of pain or of happiness."
In his autobiography Passages from the Life of a Philosopher (1864), Babbage wrote a whole chapter on the topic of religion, where he identified three sources of divine knowledge.
He stated, on the basis of the design argument, that studying the works of nature had been the more appealing evidence, and the one which led him to actively profess the existence of God. Advocating for natural theology, he wrote:
In the works of the Creator ever open to our examination, we possess a firm basis on which to raise the superstructure of an enlightened creed. The more man inquires into the laws which regulate the material universe, the more he is convinced that all its varied forms arise from the action of a few simple principles ... The works of the Creator, ever present to our senses, give a living and perpetual testimony of his power and goodness far surpassing any evidence transmitted through human testimony. The testimony of man becomes fainter at every stage of transmission, whilst each new inquiry into the works of the Almighty gives to us more exalted views of his wisdom, his goodness, and his power.
Like Samuel Vince, Babbage also wrote a defence of the belief in divine miracles. Against objections previously posed by David Hume, Babbage advocated for the belief of divine agency, stating "we must not measure the credibility or incredibility of an event by the narrow sphere of our own experience, nor forget that there is a Divine energy which overrides what we familiarly call the laws of nature." He alluded to the limits of human experience, expressing: "all that we see in a miracle is an effect which is new to our observation, and whose cause is concealed. The cause may be beyond the sphere of our observation, and would be thus beyond the familiar sphere of nature; but this does not make the event a violation of any law of nature. The limits of man's observation lie within very narrow boundaries, and it would be arrogance to suppose that the reach of man's power is to form the limits of the natural world."
The British Association was consciously modelled on the Deutsche Naturforscher-Versammlung, founded in 1822. It rejected romantic science as well as metaphysics, and started to entrench the divisions of science from literature, and professionals from amateurs. Belonging as he did to the "Wattite" faction in the BAAS, represented in particular by James Watt the younger, Babbage identified closely with industrialists. He wanted to go faster in the same directions, and had little time for the more gentlemanly component of its membership. Indeed, he subscribed to a version of conjectural history that placed industrial society as the culmination of human development (and shared this view with Herschel). A clash with Roderick Murchison led in 1838 to his withdrawal from further involvement. At the end of the same year he sent in his resignation as Lucasian professor, walking away also from the Cambridge struggle with Whewell. His interests became more focussed, on computation and metrology, and on international contacts.
A project announced by Babbage was to tabulate all physical constants (referred to as "constants of nature", a phrase in itself a neologism), and then to compile an encyclopaedic work of numerical information. He was a pioneer in the field of "absolute measurement". His ideas followed on from those of Johann Christian Poggendorff, and were mentioned to Brewster in 1832. There were to be 19 categories of constants, and Ian Hacking sees these as reflecting in part Babbage's "eccentric enthusiasms". Babbage's paper On Tables of the Constants of Nature and Art was reprinted by the Smithsonian Institution in 1856, with an added note that the physical tables of Arnold Henry Guyot "will form a part of the important work proposed in this article".
Exact measurement was also key to the development of machine tools. Here again Babbage is considered a pioneer, with Henry Maudslay, William Sellers, and Joseph Whitworth.
Through the Royal Society Babbage acquired the friendship of the engineer Marc Brunel. It was through Brunel that Babbage knew of Joseph Clement, and so came to encounter the artisans whom he observed in his work on manufactures. Babbage provided an introduction for Isambard Kingdom Brunel in 1830, for a contact with the proposed Bristol & Birmingham Railway. He carried out studies, around 1838, to show the superiority of the broad gauge for railways, used by Brunel's Great Western Railway.
In 1838, Babbage invented the pilot (also called a cow-catcher), the metal frame attached to the front of locomotives that clears the tracks of obstacles; he also constructed a dynamometer car. His eldest son, Benjamin Herschel Babbage, worked as an engineer for Brunel on the railways before emigrating to Australia in the 1850s.
Babbage also invented an ophthalmoscope, which he gave to Thomas Wharton Jones for testing. Jones, however, ignored it. The device only came into use after being independently invented by Hermann von Helmholtz.
Babbage achieved notable results in cryptography, though this was still not known a century after his death. Letter frequency was category 18 of Babbage's tabulation project. Joseph Henry later defended interest in it, in the absence of the facts, as relevant to the management of movable type.
As early as 1845, Babbage had solved a cipher that had been posed as a challenge by his nephew Henry Hollier, and in the process, he made a discovery about ciphers that were based on Vigenère tables. Specifically, he realised that enciphering plain text with a keyword rendered the cipher text subject to modular arithmetic. During the Crimean War of the 1850s, Babbage broke Vigenère's autokey cipher as well as the much weaker cipher that is called Vigenère cipher today. His discovery was kept a military secret, and was not published. Credit for the result was instead given to Friedrich Kasiski, a Prussian infantry officer, who made the same discovery some years later. However, in 1854, Babbage published the solution of a Vigenère cipher, which had been published previously in the Journal of the Society of Arts. In 1855, Babbage also published a short letter, "Cypher Writing", in the same journal. Nevertheless, his priority was not established until 1985.
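A minimal sketch in Python of the modular-arithmetic observation (the keyword and message are invented for illustration): each key letter adds a fixed shift modulo 26, so two identical plaintext runs whose distance is a multiple of the key length encipher to identical ciphertext blocks, exactly the kind of repetition that Babbage's analysis, and later Kasiski's, could exploit.

    def vigenere(text: str, key: str, decrypt: bool = False) -> str:
        """Classic Vigenere on A-Z text: c_i = (p_i +/- k_(i mod len(key))) mod 26."""
        sign = -1 if decrypt else 1
        out = []
        for i, ch in enumerate(text):
            k = ord(key[i % len(key)]) - ord('A')
            out.append(chr((ord(ch) - ord('A') + sign * k) % 26 + ord('A')))
        return ''.join(out)

    # "ATTACKATDAWN" recurs 15 places later; 15 is a multiple of the key
    # length 5, so the two occurrences produce identical ciphertext blocks.
    plain = "ATTACKATDAWNXXXATTACKATDAWN"
    cipher = vigenere(plain, "LEMON")
    assert vigenere(cipher, "LEMON", decrypt=True) == plain
    print(cipher)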
Babbage involved himself in well-publicised but unpopular campaigns against public nuisances. He once counted all the broken panes of glass of a factory, publishing in 1857 a "Table of the Relative Frequency of the Causes of Breakage of Plate Glass Windows": Of 464 broken panes, 14 were caused by "drunken men, women or boys".
Babbage's distaste for commoners (the Mob) included writing "Observations of Street Nuisances" in 1864, as well as tallying up 165 "nuisances" over a period of 80 days. He especially hated street music, and in particular the music of organ grinders, against whom he railed in various venues. The following quotation is typical:
It is difficult to estimate the misery inflicted upon thousands of persons, and the absolute pecuniary penalty imposed upon multitudes of intellectual workers by the loss of their time, destroyed by organ-grinders and other similar nuisances.
Babbage was not alone in his campaign. A convert to the cause was the MP Michael Thomas Bass.
In the 1860s, Babbage also took up the anti-hoop-rolling campaign. He blamed hoop-rolling boys for driving their iron hoops under horses' legs, with the result that the rider is thrown and very often the horse breaks a leg. Babbage achieved a certain notoriety in this matter, being denounced in debate in Commons in 1864 for "commencing a crusade against the popular game of tip-cat and the trundling of hoops."
Babbage's machines were among the first mechanical computers. That they were not actually completed was largely because of funding problems and clashes of personality, most notably with George Biddell Airy, the Astronomer Royal.
Babbage directed the building of some steam-powered machines that achieved some modest success, suggesting that calculations could be mechanised. For more than ten years he received government funding for his project, which amounted to £17,000, but eventually the Treasury lost confidence in him.
While Babbage's machines were mechanical and unwieldy, their basic architecture was similar to that of a modern computer. The data and program memory were separated, operation was instruction-based, the control unit could make conditional jumps, and the machine had a separate I/O unit.
In Babbage's time, printed mathematical tables were calculated by human computers; in other words, by hand. They were central to navigation, science and engineering, as well as mathematics. Mistakes were known to occur in transcription as well as calculation.
At Cambridge, Babbage saw the fallibility of this process, and the opportunity of adding mechanisation into its management. His own account of his path towards mechanical computation references a particular occasion:
In 1812 he was sitting in his rooms in the Analytical Society looking at a table of logarithms, which he knew to be full of mistakes, when the idea occurred to him of computing all tabular functions by machinery. The French government had produced several tables by a new method. Three or four of their mathematicians decided how to compute the tables, half a dozen more broke down the operations into simple stages, and the work itself, which was restricted to addition and subtraction, was done by eighty computers who knew only these two arithmetical processes. Here, for the first time, mass production was applied to arithmetic, and Babbage was seized by the idea that the labours of the unskilled computers [people] could be taken over completely by machinery which would be quicker and more reliable.
There was another period, seven years later, when his interest was aroused by the issues around computation of mathematical tables. The French official initiative by Gaspard de Prony, and its problems of implementation, were familiar to him. After the Napoleonic Wars came to a close, scientific contacts were renewed on the level of personal contact: in 1819 Charles Blagden was in Paris looking into the printing of the stalled de Prony project, and lobbying for the support of the Royal Society. In works of the 1820s and 1830s, Babbage referred in detail to de Prony's project.
Babbage began in 1822 with what he called the difference engine, made to compute values of polynomial functions. It was created to calculate a series of values automatically. By using the method of finite differences, it was possible to avoid the need for multiplication and division.
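A minimal sketch of that method in Python (the arithmetic only, not Babbage's mechanism), tabulating p(x) = x² + x + 41, a polynomial often cited in connection with difference-engine demonstrations. For a degree-2 polynomial the second difference is constant, so once the machine is seeded with p(0), the first difference p(1) − p(0), and that constant, every further value follows by addition alone:

    # Seed values for p(x) = x*x + x + 41 at integer steps.
    value = 41   # p(0)
    d1 = 2       # first difference, p(1) - p(0)
    d2 = 2       # constant second difference of a quadratic

    for x in range(8):
        print(x, value)   # 41, 43, 47, 53, 61, 71, 83, 97
        value += d1       # advance the tabular value by addition
        d1 += d2          # advance the first difference by addition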
For a prototype difference engine, Babbage brought in Joseph Clement to implement the design, in 1823. Clement worked to high standards, but his machine tools were particularly elaborate. Under the standard terms of business of the time, he could charge for their construction, and would also own them. He and Babbage fell out over costs around 1831.
Some parts of the prototype survive in the Museum of the History of Science, Oxford. This prototype evolved into the "first difference engine". It remained unfinished, and the finished portion is located at the Science Museum in London. This first difference engine would have been composed of around 25,000 parts, weighed fifteen short tons (13,600 kg), and would have been 8 ft (2.4 m) tall. Although Babbage received ample funding for the project, it was never completed. He later (1847–1849) produced detailed drawings for an improved version, "Difference Engine No. 2", but did not receive funding from the British government. His design was finally constructed in 1989–1991, using his plans and 19th-century manufacturing tolerances. It performed its first calculation at the Science Museum, London, returning results to 31 digits.
Nine years later, in 2000, the Science Museum completed the printer Babbage had designed for the difference engine.
The Science Museum has constructed two Difference Engines according to Babbage's plans for the Difference Engine No 2. One is owned by the museum. The other, owned by the technology multimillionaire Nathan Myhrvold, went on exhibition at the Computer History Museum in Mountain View, California on 10 May 2008. The two models that have been constructed are not replicas.
After the attempt at making the first difference engine fell through, Babbage worked to design a more complex machine called the Analytical Engine. He hired C. G. Jarvis, who had previously worked for Clement as a draughtsman. The Analytical Engine marks the transition from mechanised arithmetic to fully-fledged general purpose computation. It is largely on it that Babbage's standing as computer pioneer rests.
The major innovation was that the Analytical Engine was to be programmed using punched cards: the Engine was intended to use loops of Jacquard's punched cards to control a mechanical calculator, which could use as input the results of preceding computations. The machine was also intended to employ several features subsequently used in modern computers, including sequential control, branching and looping. It would have been the first mechanical device to be, in principle, Turing-complete. The Engine was not a single physical machine, but rather a succession of designs that Babbage tinkered with until his death in 1871.
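A toy illustration in Python of the control features just named (sequential card reading, a conditional branch, and a loop). The three-field card format here is entirely hypothetical and far simpler than Babbage's actual scheme of separate operation and variable cards:

    # Each "card" is (operation, argument). The program counts down from 3,
    # demonstrating sequential control, branching and looping.
    cards = [
        ("LOAD", 3),         # card 0: put 3 in the single register
        ("PRINT", None),     # card 1
        ("SUB", 1),          # card 2: decrement the register
        ("JUMP_IF_POS", 1),  # card 3: loop back to card 1 while register > 0
        ("HALT", None),      # card 4
    ]

    register, pc = 0, 0
    while cards[pc][0] != "HALT":
        op, arg = cards[pc]
        if op == "LOAD":
            register = arg
        elif op == "SUB":
            register -= arg
        elif op == "PRINT":
            print(register)            # prints 3, 2, 1
        pc = arg if op == "JUMP_IF_POS" and register > 0 else pc + 1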
Ada Lovelace, who corresponded with Babbage during his development of the Analytical Engine, is credited with developing an algorithm that would enable the Engine to calculate a sequence of Bernoulli numbers. Despite documentary evidence in Lovelace's own handwriting, some scholars dispute to what extent the ideas were Lovelace's own. For this achievement, she is often described as the first computer programmer, though no programming language had yet been invented.
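A short sketch of one conventional way to generate the sequence Lovelace's note targets, using the textbook identity Σ_{j=0}^{m} C(m+1, j)·B_j = 0 for m ≥ 1 with B_0 = 1; this is a standard recurrence, not a transcription of Lovelace's actual diagram for the Engine:

    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        """Return B_0 .. B_n (with the B_1 = -1/2 convention) as exact fractions."""
        B = [Fraction(1)]
        for m in range(1, n + 1):
            # Solve sum_{j=0}^{m} C(m+1, j) * B_j = 0 for B_m.
            B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
        return B

    print(bernoulli(8))   # 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30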
Lovelace also translated and wrote literature supporting the project. Describing the engine's programming by punch cards, she wrote: "We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves."
Babbage visited Turin in 1840 at the invitation of Giovanni Plana, who had developed in 1831 an analog computing machine that served as a perpetual calendar. Here in 1840 in Turin, Babbage gave the only public explanation and lectures about the Analytical Engine. In 1842 Charles Wheatstone approached Lovelace to translate a paper of Luigi Menabrea, who had taken notes of Babbage's Turin talks; and Babbage asked her to add something of her own. Fortunato Prandi who acted as interpreter in Turin was an Italian exile and follower of Giuseppe Mazzini.
Per Georg Scheutz wrote about the difference engine in 1830, and experimented in automated computation. After 1834 and Lardner's Edinburgh Review article he set up a project of his own, doubting whether Babbage's initial plan could be carried out. This he pushed through with his son, Edvard Scheutz. Another Swedish engine was that of Martin Wiberg (1860).
In 2011, researchers in Britain proposed a multimillion-pound project, "Plan 28", to construct Babbage's Analytical Engine. Since Babbage's plans were continually being refined and were never completed, they intended to engage the public in the project and crowd-source the analysis of what should be built. It would have the equivalent of 675 bytes of memory, and run at a clock speed of about 7 Hz. They hoped to complete it by the 150th anniversary of Babbage's death, in 2021.
Advances in MEMS and nanotechnology have led to recent high-tech experiments in mechanical computation. The benefits suggested include operation in high radiation or high temperature environments. These modern versions of mechanical computation were highlighted in The Economist in its special "end of the millennium" black cover issue in an article entitled "Babbage's Last Laugh".
Due to his association with the town, Babbage was chosen in 2007 to appear on the 5 Totnes pound note. An image of Babbage features in the British cultural icons section of the newly designed British passport of 2015.
On 25 July 1814, Babbage married Georgiana Whitmore, sister of British parliamentarian William Wolryche-Whitmore, at St. Michael's Church in Teignmouth, Devon. The couple lived at Dudmaston Hall, Shropshire (where Babbage engineered the central heating system), before moving to 5 Devonshire Street, London in 1815.
Charles and Georgiana had eight children, but only four – Benjamin Herschel, Georgiana Whitmore, Dugald Bromhead and Henry Prevost – survived childhood. Charles' wife Georgiana died in Worcester on 1 September 1827, the same year as his father, their second son (also named Charles) and their newborn son Alexander.
His youngest surviving son, Henry Prevost Babbage (1824–1918), went on to create six small demonstration pieces for Difference Engine No. 1 based on his father's designs, one of which was sent to Harvard University where it was later discovered by Howard H. Aiken, pioneer of the Harvard Mark I. Henry Prevost's 1910 Analytical Engine Mill, previously on display at Dudmaston Hall, is now on display at the Science Museum.
Babbage lived and worked for over 40 years at 1 Dorset Street, Marylebone, where he died, at the age of 79, on 18 October 1871; he was buried in London's Kensal Green Cemetery. According to Horsley, Babbage died "of renal inadequacy, secondary to cystitis." He had declined both a knighthood and baronetcy. He also argued against hereditary peerages, favouring life peerages instead.
In 1983, the autopsy report for Charles Babbage was discovered and later published by his great-great-grandson. A copy of the original is also available. Half of Babbage's brain is preserved at the Hunterian Museum in the Royal College of Surgeons in London. The other half of Babbage's brain is on display in the Science Museum, London.
There is a black plaque commemorating the 40 years Babbage spent at 1 Dorset Street, London. Numerous locations, institutions and other things have been named after Babbage.
Babbage frequently appears in steampunk works; he has been called an iconic figure of the genre. He also appears as a character in a number of other works of fiction.
{
"paragraph_id": 0,
"text": "Charles Babbage KH FRS (/ˈbæbɪdʒ/; 26 December 1791 – 18 October 1871) was an English polymath. A mathematician, philosopher, inventor and mechanical engineer, Babbage originated the concept of a digital programmable computer.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Babbage is considered by some to be \"father of the computer\". Babbage is credited with inventing the first mechanical computer, the Difference Engine, that eventually led to more complex electronic designs, though all the essential ideas of modern computers are to be found in Babbage's Analytical Engine, programmed using a principle openly borrowed from the Jacquard loom. Babbage had a broad range of interests in addition to his work on computers covered in his 1832 book Economy of Manufactures and Machinery. His varied work in other fields has led him to be described as \"pre-eminent\" among the many polymaths of his century.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Babbage, who died before the complete successful engineering of many of his designs, including his Difference Engine and Analytical Engine, remained a prominent figure in the ideating of computing. Parts of Babbage's incomplete mechanisms are on display in the Science Museum in London. In 1991, a functioning difference engine was constructed from Babbage's original plans. Built to tolerances achievable in the 19th century, the success of the finished engine indicated that Babbage's machine would have worked.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Babbage's birthplace is disputed, but according to the Oxford Dictionary of National Biography he was most likely born at 44 Crosby Row, Walworth Road, London, England. A blue plaque on the junction of Larcom Street and Walworth Road commemorates the event.",
"title": "Early life"
},
{
"paragraph_id": 4,
"text": "His date of birth was given in his obituary in The Times as 26 December 1792; but then a nephew wrote to say that Babbage was born one year earlier, in 1791. The parish register of St. Mary's, Newington, London, shows that Babbage was baptised on 6 January 1792, supporting a birth year of 1791.",
"title": "Early life"
},
{
"paragraph_id": 5,
"text": "Babbage was one of four children of Benjamin Babbage and Betsy Plumleigh Teape. His father was a banking partner of William Praed in founding Praed's & Co. of Fleet Street, London, in 1801. In 1808, the Babbage family moved into the old Rowdens house in East Teignmouth. Around the age of eight, Babbage was sent to a country school in Alphington near Exeter to recover from a life-threatening fever. For a short time, he attended King Edward VI Grammar School in Totnes, South Devon, but his health forced him back to private tutors for a time.",
"title": "Early life"
},
{
"paragraph_id": 6,
"text": "Babbage then joined the 30-student Holmwood Academy, in Baker Street, Enfield, Middlesex, under the Reverend Stephen Freeman. The academy had a library that prompted Babbage's love of mathematics. He studied with two more private tutors after leaving the academy. The first was a clergyman near Cambridge; through him Babbage encountered Charles Simeon and his evangelical followers, but the tuition was not what he needed. He was brought home, to study at the Totnes school: this was at age 16 or 17. The second was an Oxford tutor, under whom Babbage reached a level in Classics sufficient to be accepted by the University of Cambridge.",
"title": "Early life"
},
{
"paragraph_id": 7,
"text": "Babbage arrived at Trinity College, Cambridge, in October 1810. He was already self-taught in some parts of contemporary mathematics; he had read Robert Woodhouse, Joseph Louis Lagrange, and Marie Agnesi. As a result, he was disappointed in the standard mathematical instruction available at the university.",
"title": "At the University of Cambridge"
},
{
"paragraph_id": 8,
"text": "Babbage, John Herschel, George Peacock, and several other friends formed the Analytical Society in 1812; they were also close to Edward Ryan. As a student, Babbage was also a member of other societies such as The Ghost Club, concerned with investigating supernatural phenomena, and the Extractors Club, dedicated to liberating its members from the madhouse, should any be committed to one.",
"title": "At the University of Cambridge"
},
{
"paragraph_id": 9,
"text": "In 1812, Babbage transferred to Peterhouse, Cambridge. He was the top mathematician there, but did not graduate with honours. He instead received a degree without examination in 1814. He had defended a thesis that was considered blasphemous in the preliminary public disputation, but it is not known whether this fact is related to his not sitting the examination.",
"title": "At the University of Cambridge"
},
{
"paragraph_id": 10,
"text": "Considering his reputation, Babbage quickly made progress. He lectured to the Royal Institution on astronomy in 1815, and was elected a Fellow of the Royal Society in 1816. After graduation, on the other hand, he applied for positions unsuccessfully, and had little in the way of a career. In 1816 he was a candidate for a teaching job at Haileybury College; he had recommendations from James Ivory and John Playfair, but lost out to Henry Walter. In 1819, Babbage and Herschel visited Paris and the Society of Arcueil, meeting leading French mathematicians and physicists. That year Babbage applied to be professor at the University of Edinburgh, with the recommendation of Pierre Simon Laplace; the post went to William Wallace.",
"title": "After Cambridge"
},
{
"paragraph_id": 11,
"text": "With Herschel, Babbage worked on the electrodynamics of Arago's rotations, publishing in 1825. Their explanations were only transitional, being picked up and broadened by Michael Faraday. The phenomena are now part of the theory of eddy currents, and Babbage and Herschel missed some of the clues to unification of electromagnetic theory, staying close to Ampère's force law.",
"title": "After Cambridge"
},
{
"paragraph_id": 12,
"text": "Babbage purchased the actuarial tables of George Barrett, who died in 1821 leaving unpublished work, and surveyed the field in 1826 in Comparative View of the Various Institutions for the Assurance of Lives. This interest followed a project to set up an insurance company, prompted by Francis Baily and mooted in 1824, but not carried out. Babbage did calculate actuarial tables for that scheme, using Equitable Society mortality data from 1762 onwards.",
"title": "After Cambridge"
},
{
"paragraph_id": 13,
"text": "During this whole period, Babbage depended awkwardly on his father's support, given his father's attitude to his early marriage, of 1814: he and Edward Ryan wedded the Whitmore sisters. He made a home in Marylebone in London and established a large family. On his father's death in 1827, Babbage inherited a large estate (value around £100,000, equivalent to £9.21 million or $12.6 million today), making him independently wealthy. After his wife's death in the same year he spent time travelling. In Italy he met Leopold II, Grand Duke of Tuscany, foreshadowing a later visit to Piedmont. In April 1828 he was in Rome, and relying on Herschel to manage the difference engine project, when he heard that he had become a professor at Cambridge, a position he had three times failed to obtain (in 1820, 1823 and 1826).",
"title": "After Cambridge"
},
{
"paragraph_id": 14,
"text": "Babbage was instrumental in founding the Royal Astronomical Society in 1820, initially known as the Astronomical Society of London. Its original aims were to reduce astronomical calculations to a more standard form, and to circulate data. These directions were closely connected with Babbage's ideas on computation, and in 1824 he won its Gold Medal, cited \"for his invention of an engine for calculating mathematical and astronomical tables\".",
"title": "After Cambridge"
},
{
"paragraph_id": 15,
"text": "Babbage's motivation to overcome errors in tables by mechanisation had been a commonplace since Dionysius Lardner wrote about it in 1834 in the Edinburgh Review (under Babbage's guidance). The context of these developments is still debated. Babbage's own account of the origin of the difference engine begins with the Astronomical Society's wish to improve The Nautical Almanac. Babbage and Herschel were asked to oversee a trial project, to recalculate some part of those tables. With the results to hand, discrepancies were found. This was in 1821 or 1822, and was the occasion on which Babbage formulated his idea for mechanical computation. The issue of the Nautical Almanac is now described as a legacy of a polarisation in British science caused by attitudes to Sir Joseph Banks, who had died in 1820.",
"title": "After Cambridge"
},
{
"paragraph_id": 16,
"text": "Babbage studied the requirements to establish a modern postal system, with his friend Thomas Frederick Colby, concluding there should be a uniform rate that was put into effect with the introduction of the Uniform Fourpenny Post supplanted by the Uniform Penny Post in 1839 and 1840. Colby was another of the founding group of the Society. He was also in charge of the Survey of Ireland. Herschel and Babbage were present at a celebrated operation of that survey, the remeasuring of the Lough Foyle baseline.",
"title": "After Cambridge"
},
{
"paragraph_id": 17,
"text": "The Analytical Society had initially been no more than an undergraduate provocation. During this period it had some more substantial achievements. In 1816 Babbage, Herschel and Peacock published a translation from French of the lectures of Sylvestre Lacroix, which was then the state-of-the-art calculus textbook.",
"title": "After Cambridge"
},
{
"paragraph_id": 18,
"text": "Reference to Lagrange in calculus terms marks out the application of what are now called formal power series. British mathematicians had used them from about 1730 to 1760. As re-introduced, they were not simply applied as notations in differential calculus. They opened up the fields of functional equations (including the difference equations fundamental to the difference engine) and operator (D-module) methods for differential equations. The analogy of difference and differential equations was notationally changing Δ to D, as a \"finite\" difference becomes \"infinitesimal\". These symbolic directions became popular, as operational calculus, and pushed to the point of diminishing returns. The Cauchy concept of limit was kept at bay. Woodhouse had already founded this second \"British Lagrangian School\" with its treatment of Taylor series as formal.",
"title": "After Cambridge"
},
{
"paragraph_id": 19,
"text": "In this context function composition is complicated to express, because the chain rule is not simply applied to second and higher derivatives. This matter was known to Woodhouse by 1803, who took from Louis François Antoine Arbogast what is now called Faà di Bruno's formula. In essence it was known to Abraham De Moivre (1697). Herschel found the method impressive, Babbage knew of it, and it was later noted by Ada Lovelace as compatible with the analytical engine. In the period to 1820 Babbage worked intensively on functional equations in general, and resisted both conventional finite differences and Arbogast's approach (in which Δ and D were related by the simple additive case of the exponential map). But via Herschel he was influenced by Arbogast's ideas in the matter of iteration, i.e. composing a function with itself, possibly many times. Writing in a major paper on functional equations in the Philosophical Transactions (1815/6), Babbage said his starting point was work of Gaspard Monge.",
"title": "After Cambridge"
},
{
"paragraph_id": 20,
"text": "From 1828 to 1839, Babbage was Lucasian Professor of Mathematics at Cambridge. Not a conventional resident don, and inattentive to his teaching responsibilities, he wrote three topical books during this period of his life. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1832. Babbage was out of sympathy with colleagues: George Biddell Airy, his predecessor as Lucasian Professor of Mathematics at Trinity College, Cambridge, thought an issue should be made of his lack of interest in lecturing. Babbage planned to lecture in 1831 on political economy. Babbage's reforming direction looked to see university education more inclusive, universities doing more for research, a broader syllabus and more interest in applications; but William Whewell found the programme unacceptable. A controversy Babbage had with Richard Jones lasted for six years. He never did give a lecture.",
"title": "Academic"
},
{
"paragraph_id": 21,
"text": "It was during this period that Babbage tried to enter politics. Simon Schaffer writes that his views of the 1830s included disestablishment of the Church of England, a broader political franchise, and inclusion of manufacturers as stakeholders. He twice stood for Parliament as a candidate for the borough of Finsbury. In 1832 he came in third among five candidates, missing out by some 500 votes in the two-member constituency when two other reformist candidates, Thomas Wakley and Christopher Temple, split the vote. In his memoirs Babbage related how this election brought him the friendship of Samuel Rogers: his brother Henry Rogers wished to support Babbage again, but died within days. In 1834 Babbage finished last among four. In 1832, Babbage, Herschel and Ivory were appointed Knights of the Royal Guelphic Order, however they were not subsequently made knights bachelor to entitle them to the prefix Sir, which often came with appointments to that foreign order (though Herschel was later created a baronet).",
"title": "Academic"
},
{
"paragraph_id": 22,
"text": "Babbage now emerged as a polemicist. One of his biographers notes that all his books contain a \"campaigning element\". His Reflections on the Decline of Science and some of its Causes (1830) stands out, however, for its sharp attacks. It aimed to improve British science, and more particularly to oust Davies Gilbert as President of the Royal Society, which Babbage wished to reform. It was written out of pique, when Babbage hoped to become the junior secretary of the Royal Society, as Herschel was the senior, but failed because of his antagonism to Humphry Davy. Michael Faraday had a reply written, by Gerrit Moll, as On the Alleged Decline of Science in England (1831). On the front of the Royal Society Babbage had no impact, with the bland election of the Duke of Sussex to succeed Gilbert the same year. As a broad manifesto, on the other hand, his Decline led promptly to the formation in 1831 of the British Association for the Advancement of Science (BAAS).",
"title": "Academic"
},
{
"paragraph_id": 23,
"text": "The Mechanics' Magazine in 1831 identified as Declinarians the followers of Babbage. In an unsympathetic tone it pointed out David Brewster writing in the Quarterly Review as another leader; with the barb that both Babbage and Brewster had received public money.",
"title": "Academic"
},
{
"paragraph_id": 24,
"text": "In the debate of the period on statistics (qua data collection) and what is now statistical inference, the BAAS in its Statistical Section (which owed something also to Whewell) opted for data collection. This Section was the sixth, established in 1833 with Babbage as chairman and John Elliot Drinkwater as secretary. The foundation of the Statistical Society followed. Babbage was its public face, backed by Richard Jones and Robert Malthus.",
"title": "Academic"
},
{
"paragraph_id": 25,
"text": "Babbage published On the Economy of Machinery and Manufactures (1832), on the organisation of industrial production. It was an influential early work of operational research. John Rennie the Younger in addressing the Institution of Civil Engineers on manufacturing in 1846 mentioned mostly surveys in encyclopaedias, and Babbage's book was first an article in the Encyclopædia Metropolitana, the form in which Rennie noted it, in the company of related works by John Farey Jr., Peter Barlow and Andrew Ure. From An essay on the general principles which regulate the application of machinery to manufactures and the mechanical arts (1827), which became the Encyclopædia Metropolitana article of 1829, Babbage developed the schematic classification of machines that, combined with discussion of factories, made up the first part of the book. The second part considered the \"domestic and political economy\" of manufactures.",
"title": "Academic"
},
{
"paragraph_id": 26,
"text": "The book sold well, and quickly went to a fourth edition (1836). Babbage represented his work as largely a result of actual observations in factories, British and abroad. It was not, in its first edition, intended to address deeper questions of political economy; the second (late 1832) did, with three further chapters including one on piece rate. The book also contained ideas on rational design in factories, and profit sharing.",
"title": "Academic"
},
{
"paragraph_id": 27,
"text": "In Economy of Machinery was described what is now called the \"Babbage principle\". It pointed out commercial advantages available with more careful division of labour. As Babbage himself noted, it had already appeared in the work of Melchiorre Gioia in 1815. The term was introduced in 1974 by Harry Braverman. Related formulations are the \"principle of multiples\" of Philip Sargant Florence, and the \"balance of processes\".",
"title": "Academic"
},
{
"paragraph_id": 28,
"text": "What Babbage remarked is that skilled workers typically spend parts of their time performing tasks that are below their skill level. If the labour process can be divided among several workers, labour costs may be cut by assigning only high-skill tasks to high-cost workers, restricting other tasks to lower-paid workers. He also pointed out that training or apprenticeship can be taken as fixed costs; but that returns to scale are available by his approach of standardisation of tasks, therefore again favouring the factory system. His view of human capital was restricted to minimising the time period for recovery of training costs.",
"title": "Academic"
},
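The arithmetic behind the Babbage principle can be made concrete with a toy calculation. The sketch below is our own illustration with purely hypothetical wage and hour figures, not numbers from Babbage's book: paying the skilled wage only for the step that actually needs skill lowers the unit labour cost.

```python
# A toy illustration of the Babbage principle with purely hypothetical
# figures (the wages and task times below are our own, not Babbage's).

HOURS_SKILLED_STEP = 1.0    # hours of genuinely skilled work per unit
HOURS_ROUTINE_STEPS = 4.0   # hours of routine work per unit
WAGE_SKILLED = 12.0         # wage per hour
WAGE_UNSKILLED = 3.0

# One craftsman performs every step, all paid at the skilled rate:
undivided = (HOURS_SKILLED_STEP + HOURS_ROUTINE_STEPS) * WAGE_SKILLED

# Divided labour: the skilled rate is paid only where skill is required:
divided = (HOURS_SKILLED_STEP * WAGE_SKILLED
           + HOURS_ROUTINE_STEPS * WAGE_UNSKILLED)

print(undivided, divided)   # 60.0 24.0 -- unit labour cost falls by 60%
```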
{
"paragraph_id": 29,
"text": "Another aspect of the work was its detailed breakdown of the cost structure of book publishing. Babbage took the unpopular line, from the publishers' perspective, of exposing the trade's profitability. He went as far as to name the organisers of the trade's restrictive practices. Twenty years later he attended a meeting hosted by John Chapman to campaign against the Booksellers Association, still a cartel.",
"title": "Academic"
},
{
"paragraph_id": 30,
"text": "It has been written that \"what Arthur Young was to agriculture, Charles Babbage was to the factory visit and machinery\". Babbage's theories are said to have influenced the layout of the 1851 Great Exhibition, and his views had a strong effect on his contemporary George Julius Poulett Scrope. Karl Marx argued that the source of the productivity of the factory system was exactly the combination of the division of labour with machinery, building on Adam Smith, Babbage and Ure. Where Marx picked up on Babbage and disagreed with Smith was on the motivation for division of labour by the manufacturer: as Babbage did, he wrote that it was for the sake of profitability, rather than productivity, and identified an impact on the concept of a trade.",
"title": "Academic"
},
{
"paragraph_id": 31,
"text": "John Ruskin went further, to oppose completely what manufacturing in Babbage's sense stood for. Babbage also affected the economic thinking of John Stuart Mill. George Holyoake saw Babbage's detailed discussion of profit sharing as substantive, in the tradition of Robert Owen and Charles Fourier, if requiring the attentions of a benevolent captain of industry, and ignored at the time.",
"title": "Academic"
},
{
"paragraph_id": 32,
"text": "Works by Babbage and Ure were published in French translation in 1830; On the Economy of Machinery was translated in 1833 into French by Édouard Biot, and into German the same year by Gottfried Friedenberg. The French engineer and writer on industrial organisation Léon Lalanne was influenced by Babbage, but also by the economist Claude Lucien Bergery, in reducing the issues to \"technology\". William Jevons connected Babbage's \"economy of labour\" with his own labour experiments of 1870. The Babbage principle is an inherent assumption in Frederick Winslow Taylor's scientific management.",
"title": "Academic"
},
{
"paragraph_id": 33,
"text": "Mary Everest Boole claimed that there was profound influence – via her uncle George Everest – of Indian thought in general and Indian logic, in particular, on Babbage and on her husband George Boole, as well as on Augustus De Morgan:",
"title": "Academic"
},
{
"paragraph_id": 34,
"text": "Think what must have been the effect of the intense Hinduizing of three such men as Babbage, De Morgan, and George Boole on the mathematical atmosphere of 1830–65. What share had it in generating the Vector Analysis and the mathematics by which investigations in physical science are now conducted?",
"title": "Academic"
},
{
"paragraph_id": 35,
"text": "In 1837, responding to the series of eight Bridgewater Treatises, Babbage published his Ninth Bridgewater Treatise, under the title On the Power, Wisdom and Goodness of God, as manifested in the Creation. In this work Babbage weighed in on the side of uniformitarianism in a current debate. He preferred the conception of creation in which a God-given natural law dominated, removing the need for continuous \"contrivance\".",
"title": "Academic"
},
{
"paragraph_id": 36,
"text": "The book is a work of natural theology, and incorporates extracts from related correspondence of Herschel with Charles Lyell. Babbage put forward the thesis that God had the omnipotence and foresight to create as a divine legislator. In this book, Babbage dealt with relating interpretations between science and religion; on the one hand, he insisted that \"there exists no fatal collision between the words of Scripture and the facts of nature;\" on the other hand, he wrote that the Book of Genesis was not meant to be read literally in relation to scientific terms. Against those who said these were in conflict, he wrote \"that the contradiction they have imagined can have no real existence, and that whilst the testimony of Moses remains unimpeached, we may also be permitted to confide in the testimony of our senses.\"",
"title": "Academic"
},
{
"paragraph_id": 37,
"text": "The Ninth Bridgewater Treatise was quoted extensively in Vestiges of the Natural History of Creation. The parallel with Babbage's computing machines is made explicit, as allowing plausibility to the theory that transmutation of species could be pre-programmed.",
"title": "Academic"
},
{
"paragraph_id": 38,
"text": "Jonar Ganeri, author of Indian Logic, believes Babbage may have been influenced by Indian thought; one possible route would be through Henry Thomas Colebrooke. Mary Everest Boole argues that Babbage was introduced to Indian thought in the 1820s by her uncle George Everest:",
"title": "Academic"
},
{
"paragraph_id": 39,
"text": "Some time about 1825, [Everest] came to England for two or three years, and made a fast and lifelong friendship with Herschel and with Babbage, who was then quite young. I would ask any fair-minded mathematician to read Babbage's Ninth Bridgewater Treatise and compare it with the works of his contemporaries in England; and then ask himself whence came the peculiar conception of the nature of miracle which underlies Babbage's ideas of Singular Points on Curves (Chap, viii) – from European Theology or Hindu Metaphysic? Oh! how the English clergy of that day hated Babbage's book!",
"title": "Academic"
},
{
"paragraph_id": 40,
"text": "Babbage was raised in the Protestant form of the Christian faith, his family having inculcated in him an orthodox form of worship. He explained:",
"title": "Academic"
},
{
"paragraph_id": 41,
"text": "My excellent mother taught me the usual forms of my daily and nightly prayer; and neither in my father nor my mother was there any mixture of bigotry and intolerance on the one hand, nor on the other of that unbecoming and familiar mode of addressing the Almighty which afterwards so much disgusted me in my youthful years.",
"title": "Academic"
},
{
"paragraph_id": 42,
"text": "Rejecting the Athanasian Creed as a \"direct contradiction in terms\", in his youth he looked to Samuel Clarke's works on religion, of which Being and Attributes of God (1704) exerted a particularly strong influence on him. Later in life, Babbage concluded that \"the true value of the Christian religion rested, not on speculative [theology] … but … upon those doctrines of kindness and benevolence which that religion claims and enforces, not merely in favour of man himself but of every creature susceptible of pain or of happiness.\"",
"title": "Academic"
},
{
"paragraph_id": 43,
"text": "In his autobiography Passages from the Life of a Philosopher (1864), Babbage wrote a whole chapter on the topic of religion, where he identified three sources of divine knowledge:",
"title": "Academic"
},
{
"paragraph_id": 44,
"text": "He stated, on the basis of the design argument, that studying the works of nature had been the more appealing evidence, and the one which led him to actively profess the existence of God. Advocating for natural theology, he wrote:",
"title": "Academic"
},
{
"paragraph_id": 45,
"text": "In the works of the Creator ever open to our examination, we possess a firm basis on which to raise the superstructure of an enlightened creed. The more man inquires into the laws which regulate the material universe, the more he is convinced that all its varied forms arise from the action of a few simple principles ... The works of the Creator, ever present to our senses, give a living and perpetual testimony of his power and goodness far surpassing any evidence transmitted through human testimony. The testimony of man becomes fainter at every stage of transmission, whilst each new inquiry into the works of the Almighty gives to us more exalted views of his wisdom, his goodness, and his power.",
"title": "Academic"
},
{
"paragraph_id": 46,
"text": "Like Samuel Vince, Babbage also wrote a defence of the belief in divine miracles. Against objections previously posed by David Hume, Babbage advocated for the belief of divine agency, stating \"we must not measure the credibility or incredibility of an event by the narrow sphere of our own experience, nor forget that there is a Divine energy which overrides what we familiarly call the laws of nature.\" He alluded to the limits of human experience, expressing: \"all that we see in a miracle is an effect which is new to our observation, and whose cause is concealed. The cause may be beyond the sphere of our observation, and would be thus beyond the familiar sphere of nature; but this does not make the event a violation of any law of nature. The limits of man's observation lie within very narrow boundaries, and it would be arrogance to suppose that the reach of man's power is to form the limits of the natural world.\"",
"title": "Academic"
},
{
"paragraph_id": 47,
"text": "The British Association was consciously modelled on the Deutsche Naturforscher-Versammlung, founded in 1822. It rejected romantic science as well as metaphysics, and started to entrench the divisions of science from literature, and professionals from amateurs. Belonging as he did to the \"Wattite\" faction in the BAAS, represented in particular by James Watt the younger, Babbage identified closely with industrialists. He wanted to go faster in the same directions, and had little time for the more gentlemanly component of its membership. Indeed, he subscribed to a version of conjectural history that placed industrial society as the culmination of human development (and shared this view with Herschel). A clash with Roderick Murchison led in 1838 to his withdrawal from further involvement. At the end of the same year he sent in his resignation as Lucasian professor, walking away also from the Cambridge struggle with Whewell. His interests became more focussed, on computation and metrology, and on international contacts.",
"title": "Later life"
},
{
"paragraph_id": 48,
"text": "A project announced by Babbage was to tabulate all physical constants (referred to as \"constants of nature\", a phrase in itself a neologism), and then to compile an encyclopaedic work of numerical information. He was a pioneer in the field of \"absolute measurement\". His ideas followed on from those of Johann Christian Poggendorff, and were mentioned to Brewster in 1832. There were to be 19 categories of constants, and Ian Hacking sees these as reflecting in part Babbage's \"eccentric enthusiasms\". Babbage's paper On Tables of the Constants of Nature and Art was reprinted by the Smithsonian Institution in 1856, with an added note that the physical tables of Arnold Henry Guyot \"will form a part of the important work proposed in this article\".",
"title": "Later life"
},
{
"paragraph_id": 49,
"text": "Exact measurement was also key to the development of machine tools. Here again Babbage is considered a pioneer, with Henry Maudslay, William Sellers, and Joseph Whitworth.",
"title": "Later life"
},
{
"paragraph_id": 50,
"text": "Through the Royal Society Babbage acquired the friendship of the engineer Marc Brunel. It was through Brunel that Babbage knew of Joseph Clement, and so came to encounter the artisans whom he observed in his work on manufactures. Babbage provided an introduction for Isambard Kingdom Brunel in 1830, for a contact with the proposed Bristol & Birmingham Railway. He carried out studies, around 1838, to show the superiority of the broad gauge for railways, used by Brunel's Great Western Railway.",
"title": "Later life"
},
{
"paragraph_id": 51,
"text": "In 1838, Babbage invented the pilot (also called a cow-catcher), the metal frame attached to the front of locomotives that clears the tracks of obstacles; he also constructed a dynamometer car. His eldest son, Benjamin Herschel Babbage, worked as an engineer for Brunel on the railways before emigrating to Australia in the 1850s.",
"title": "Later life"
},
{
"paragraph_id": 52,
"text": "Babbage also invented an ophthalmoscope, which he gave to Thomas Wharton Jones for testing. Jones, however, ignored it. The device only came into use after being independently invented by Hermann von Helmholtz.",
"title": "Later life"
},
{
"paragraph_id": 53,
"text": "Babbage achieved notable results in cryptography, though this was still not known a century after his death. Letter frequency was category 18 of Babbage's tabulation project. Joseph Henry later defended interest in it, in the absence of the facts, as relevant to the management of movable type.",
"title": "Later life"
},
{
"paragraph_id": 54,
"text": "As early as 1845, Babbage had solved a cipher that had been posed as a challenge by his nephew Henry Hollier, and in the process, he made a discovery about ciphers that were based on Vigenère tables. Specifically, he realised that enciphering plain text with a keyword rendered the cipher text subject to modular arithmetic. During the Crimean War of the 1850s, Babbage broke Vigenère's autokey cipher as well as the much weaker cipher that is called Vigenère cipher today. His discovery was kept a military secret, and was not published. Credit for the result was instead given to Friedrich Kasiski, a Prussian infantry officer, who made the same discovery some years later. However, in 1854, Babbage published the solution of a Vigenère cipher, which had been published previously in the Journal of the Society of Arts. In 1855, Babbage also published a short letter, \"Cypher Writing\", in the same journal. Nevertheless, his priority was not established until 1985.",
"title": "Later life"
},
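Babbage's insight that a keyword cipher reduces to modular arithmetic is easy to state in code. The sketch below is a minimal modern illustration (the function name and example are ours, not Babbage's notation): each ciphertext letter is the plaintext letter shifted by the corresponding key letter, modulo 26.

```python
# Minimal sketch (our own illustration, not Babbage's notation) of the
# Vigenère cipher as modular arithmetic: c_i = (p_i + k_(i mod len(k))) mod 26.

def vigenere(text, key, decrypt=False):
    sign = -1 if decrypt else 1
    shifts = [ord(c) - ord('A') for c in key.upper()]
    out = []
    for i, ch in enumerate(text.upper()):
        shift = sign * shifts[i % len(shifts)]
        out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
    return ''.join(out)

ciphertext = vigenere("ATTACKATDAWN", "LEMON")
print(ciphertext)                                   # LXFOPVEFRNHR
print(vigenere(ciphertext, "LEMON", decrypt=True))  # ATTACKATDAWN
```

Because the same key letter recurs at a fixed interval, letter-frequency analysis on each such stream of ciphertext breaks the cipher, which is the observation credited to Babbage and, independently, Kasiski.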
{
"paragraph_id": 55,
"text": "Babbage involved himself in well-publicised but unpopular campaigns against public nuisances. He once counted all the broken panes of glass of a factory, publishing in 1857 a \"Table of the Relative Frequency of the Causes of Breakage of Plate Glass Windows\": Of 464 broken panes, 14 were caused by \"drunken men, women or boys\".",
"title": "Later life"
},
{
"paragraph_id": 56,
"text": "Babbage's distaste for commoners (the Mob) included writing \"Observations of Street Nuisances\" in 1864, as well as tallying up 165 \"nuisances\" over a period of 80 days. He especially hated street music, and in particular the music of organ grinders, against whom he railed in various venues. The following quotation is typical:",
"title": "Later life"
},
{
"paragraph_id": 57,
"text": "It is difficult to estimate the misery inflicted upon thousands of persons, and the absolute pecuniary penalty imposed upon multitudes of intellectual workers by the loss of their time, destroyed by organ-grinders and other similar nuisances.",
"title": "Later life"
},
{
"paragraph_id": 58,
"text": "Babbage was not alone in his campaign. A convert to the cause was the MP Michael Thomas Bass.",
"title": "Later life"
},
{
"paragraph_id": 59,
"text": "In the 1860s, Babbage also took up the anti-hoop-rolling campaign. He blamed hoop-rolling boys for driving their iron hoops under horses' legs, with the result that the rider is thrown and very often the horse breaks a leg. Babbage achieved a certain notoriety in this matter, being denounced in debate in Commons in 1864 for \"commencing a crusade against the popular game of tip-cat and the trundling of hoops.\"",
"title": "Later life"
},
{
"paragraph_id": 60,
"text": "Babbage's machines were among the first mechanical computers. That they were not actually completed was largely because of funding problems and clashes of personality, most notably with George Biddell Airy, the Astronomer Royal.",
"title": "Computing pioneer"
},
{
"paragraph_id": 61,
"text": "Babbage directed the building of some steam-powered machines that achieved some modest success, suggesting that calculations could be mechanised. For more than ten years he received government funding for his project, which amounted to £17,000, but eventually the Treasury lost confidence in him.",
"title": "Computing pioneer"
},
{
"paragraph_id": 62,
"text": "While Babbage's machines were mechanical and unwieldy, their basic architecture was similar to that of a modern computer. The data and program memory were separated, operation was instruction-based, the control unit could make conditional jumps, and the machine had a separate I/O unit.",
"title": "Computing pioneer"
},
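For illustration, the architectural features named above can be sketched as a toy interpreter. This is entirely our own simplification, far removed from Babbage's mechanical design: it shows a program store separate from the data store, instruction-driven operation, and a conditional jump.

```python
# A toy interpreter (entirely our own, far simpler than Babbage's design)
# with separate program and data stores, instruction-driven operation,
# and a conditional jump in the control unit.

def run(program, data):
    pc = 0  # program counter into the instruction store
    while pc < len(program):
        op, *args = program[pc]
        if op == "add":                # data[a] += data[b]
            data[args[0]] += data[args[1]]
        elif op == "jump_if_pos":      # conditional jump on a data cell
            if data[args[0]] > 0:
                pc = args[1]
                continue
        elif op == "print":            # stand-in for a separate output unit
            print(data[args[0]])
        pc += 1

# Count down 3, 2, 1 using the conditional jump:
run([("print", 0), ("add", 0, 1), ("jump_if_pos", 0, 0)], data=[3, -1])
```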
{
"paragraph_id": 63,
"text": "In Babbage's time, printed mathematical tables were calculated by human computers; in other words, by hand. They were central to navigation, science and engineering, as well as mathematics. Mistakes were known to occur in transcription as well as calculation.",
"title": "Computing pioneer"
},
{
"paragraph_id": 64,
"text": "At Cambridge, Babbage saw the fallibility of this process, and the opportunity of adding mechanisation into its management. His own account of his path towards mechanical computation references a particular occasion:",
"title": "Computing pioneer"
},
{
"paragraph_id": 65,
"text": "In 1812 he was sitting in his rooms in the Analytical Society looking at a table of logarithms, which he knew to be full of mistakes, when the idea occurred to him of computing all tabular functions by machinery. The French government had produced several tables by a new method. Three or four of their mathematicians decided how to compute the tables, half a dozen more broke down the operations into simple stages, and the work itself, which was restricted to addition and subtraction, was done by eighty computers who knew only these two arithmetical processes. Here, for the first time, mass production was applied to arithmetic, and Babbage was seized by the idea that the labours of the unskilled computers [people] could be taken over completely by machinery which would be quicker and more reliable.",
"title": "Computing pioneer"
},
{
"paragraph_id": 66,
"text": "There was another period, seven years later, when his interest was aroused by the issues around computation of mathematical tables. The French official initiative by Gaspard de Prony, and its problems of implementation, were familiar to him. After the Napoleonic Wars came to a close, scientific contacts were renewed on the level of personal contact: in 1819 Charles Blagden was in Paris looking into the printing of the stalled de Prony project, and lobbying for the support of the Royal Society. In works of the 1820s and 1830s, Babbage referred in detail to de Prony's project.",
"title": "Computing pioneer"
},
{
"paragraph_id": 67,
"text": "Babbage began in 1822 with what he called the difference engine, made to compute values of polynomial functions. It was created to calculate a series of values automatically. By using the method of finite differences, it was possible to avoid the need for multiplication and division.",
"title": "Computing pioneer"
},
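The method of finite differences can be illustrated in a few lines of code. The sketch below is our own (the function names are illustrative, not Babbage's terminology): once the initial differences of a polynomial are seeded, every further tabulated value falls out of repeated addition alone, which is exactly what let the engine dispense with multiplication and division.

```python
# A sketch of the method of finite differences (our own code, not
# Babbage's terminology): tabulating p(x) = 2x^2 + 3x + 5 by addition only.

def seed_differences(poly, start, step, degree):
    """Seed the initial finite differences by direct evaluation.

    For a polynomial of degree n, the n-th differences are constant,
    so degree + 1 initial values suffice.
    """
    values = [poly(start + i * step) for i in range(degree + 1)]
    diffs = []
    while values:
        diffs.append(values[0])
        values = [b - a for a, b in zip(values, values[1:])]
    return diffs  # [p(x0), first difference, second difference, ...]

def tabulate(diffs, count):
    """Produce `count` successive polynomial values using addition alone."""
    diffs = list(diffs)
    table = []
    for _ in range(count):
        table.append(diffs[0])
        # Add each higher-order difference into the order below it.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return table

poly = lambda x: 2 * x * x + 3 * x + 5
print(tabulate(seed_differences(poly, 0, 1, 2), 6))
# [5, 10, 19, 32, 49, 70], matching p(0)..p(5) by direct evaluation
```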
{
"paragraph_id": 68,
"text": "For a prototype difference engine, Babbage brought in Joseph Clement to implement the design, in 1823. Clement worked to high standards, but his machine tools were particularly elaborate. Under the standard terms of business of the time, he could charge for their construction, and would also own them. He and Babbage fell out over costs around 1831.",
"title": "Computing pioneer"
},
{
"paragraph_id": 69,
"text": "Some parts of the prototype survive in the Museum of the History of Science, Oxford. This prototype evolved into the \"first difference engine\". It remained unfinished and the finished portion is located at the Science Museum in London. This first difference engine would have been composed of around 25,000 parts, weighed fifteen short tons (13,600 kg), and would have been 8 ft (2.4 m) tall. Although Babbage received ample funding for the project, it was never completed. He later (1847–1849) produced detailed drawings for an improved version,\"Difference Engine No. 2\", but did not receive funding from the British government. His design was finally constructed in 1989–1991, using his plans and 19th-century manufacturing tolerances. It performed its first calculation at the Science Museum, London, returning results to 31 digits.",
"title": "Computing pioneer"
},
{
"paragraph_id": 70,
"text": "Nine years later, in 2000, the Science Museum completed the printer Babbage had designed for the difference engine.",
"title": "Computing pioneer"
},
{
"paragraph_id": 71,
"text": "The Science Museum has constructed two Difference Engines according to Babbage's plans for the Difference Engine No 2. One is owned by the museum. The other, owned by the technology multimillionaire Nathan Myhrvold, went on exhibition at the Computer History Museum in Mountain View, California on 10 May 2008. The two models that have been constructed are not replicas.",
"title": "Computing pioneer"
},
{
"paragraph_id": 72,
"text": "After the attempt at making the first difference engine fell through, Babbage worked to design a more complex machine called the Analytical Engine. He hired C. G. Jarvis, who had previously worked for Clement as a draughtsman. The Analytical Engine marks the transition from mechanised arithmetic to fully-fledged general purpose computation. It is largely on it that Babbage's standing as computer pioneer rests.",
"title": "Computing pioneer"
},
{
"paragraph_id": 73,
"text": "The major innovation was that the Analytical Engine was to be programmed using punched cards: the Engine was intended to use loops of Jacquard's punched cards to control a mechanical calculator, which could use as input the results of preceding computations. The machine was also intended to employ several features subsequently used in modern computers, including sequential control, branching and looping. It would have been the first mechanical device to be, in principle, Turing-complete. The Engine was not a single physical machine, but rather a succession of designs that Babbage tinkered with until his death in 1871.",
"title": "Computing pioneer"
},
{
"paragraph_id": 74,
"text": "Ada Lovelace, who corresponded with Babbage during his development of the Analytical Engine, is credited with developing an algorithm that would enable the Engine to calculate a sequence of Bernoulli numbers. Despite documentary evidence in Lovelace's own handwriting, some scholars dispute to what extent the ideas were Lovelace's own. For this achievement, she is often described as the first computer programmer; though no programming language had yet been invented.",
"title": "Computing pioneer"
},
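The computation Lovelace's program targeted can be sketched with the standard Bernoulli-number recurrence. The code below is a modern illustration of ours and does not reproduce the layout or method of her Note G: with B_0 = 1, each B_m follows from the identity sum over j = 0..m of C(m + 1, j) * B_j = 0.

```python
# A sketch (our own, not the layout of Lovelace's Note G) of the standard
# Bernoulli-number recurrence: B_0 = 1 and, for m >= 1,
#   sum over j = 0..m of C(m + 1, j) * B_j == 0.
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return [B_0, ..., B_n] as exact fractions (B_1 = -1/2 convention)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-acc / (m + 1))  # solve the identity for B_m
    return B

print(bernoulli(6))
# [Fraction(1, 1), Fraction(-1, 2), Fraction(1, 6), Fraction(0, 1),
#  Fraction(-1, 30), Fraction(0, 1), Fraction(1, 42)]
```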
{
"paragraph_id": 75,
"text": "Lovelace also translated and wrote literature supporting the project. Describing the engine's programming by punch cards, she wrote: \"We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves.\"",
"title": "Computing pioneer"
},
{
"paragraph_id": 76,
"text": "Babbage visited Turin in 1840 at the invitation of Giovanni Plana, who had developed in 1831 an analog computing machine that served as a perpetual calendar. Here in 1840 in Turin, Babbage gave the only public explanation and lectures about the Analytical Engine. In 1842 Charles Wheatstone approached Lovelace to translate a paper of Luigi Menabrea, who had taken notes of Babbage's Turin talks; and Babbage asked her to add something of her own. Fortunato Prandi who acted as interpreter in Turin was an Italian exile and follower of Giuseppe Mazzini.",
"title": "Computing pioneer"
},
{
"paragraph_id": 77,
"text": "Per Georg Scheutz wrote about the difference engine in 1830, and experimented in automated computation. After 1834 and Lardner's Edinburgh Review article he set up a project of his own, doubting whether Babbage's initial plan could be carried out. This he pushed through with his son, Edvard Scheutz. Another Swedish engine was that of Martin Wiberg (1860).",
"title": "Computing pioneer"
},
{
"paragraph_id": 78,
"text": "In 2011, researchers in Britain proposed a multimillion-pound project, \"Plan 28\", to construct Babbage's Analytical Engine. Since Babbage's plans were continually being refined and were never completed, they intended to engage the public in the project and crowd-source the analysis of what should be built. It would have the equivalent of 675 bytes of memory, and run at a clock speed of about 7 Hz. They hoped to complete it by the 150th anniversary of Babbage's death, in 2021.",
"title": "Computing pioneer"
},
{
"paragraph_id": 79,
"text": "Advances in MEMS and nanotechnology have led to recent high-tech experiments in mechanical computation. The benefits suggested include operation in high radiation or high temperature environments. These modern versions of mechanical computation were highlighted in The Economist in its special \"end of the millennium\" black cover issue in an article entitled \"Babbage's Last Laugh\".",
"title": "Computing pioneer"
},
{
"paragraph_id": 80,
"text": "Due to his association with the town Babbage was chosen in 2007 to appear on the 5 Totnes pound note. An image of Babbage features in the British cultural icons section of the newly designed British passport in 2015.",
"title": "Computing pioneer"
},
{
"paragraph_id": 81,
"text": "On 25 July 1814, Babbage married Georgiana Whitmore, sister of British parliamentarian William Wolryche-Whitmore, at St. Michael's Church in Teignmouth, Devon. The couple lived at Dudmaston Hall, Shropshire (where Babbage engineered the central heating system), before moving to 5 Devonshire Street, London in 1815.",
"title": "Family"
},
{
"paragraph_id": 82,
"text": "Charles and Georgiana had eight children, but only four – Benjamin Herschel, Georgiana Whitmore, Dugald Bromhead and Henry Prevost – survived childhood. Charles' wife Georgiana died in Worcester on 1 September 1827, the same year as his father, their second son (also named Charles) and their newborn son Alexander.",
"title": "Family"
},
{
"paragraph_id": 83,
"text": "His youngest surviving son, Henry Prevost Babbage (1824–1918), went on to create six small demonstration pieces for Difference Engine No. 1 based on his father's designs, one of which was sent to Harvard University where it was later discovered by Howard H. Aiken, pioneer of the Harvard Mark I. Henry Prevost's 1910 Analytical Engine Mill, previously on display at Dudmaston Hall, is now on display at the Science Museum.",
"title": "Family"
},
{
"paragraph_id": 84,
"text": "Babbage lived and worked for over 40 years at 1 Dorset Street, Marylebone, where he died, at the age of 79, on 18 October 1871; he was buried in London's Kensal Green Cemetery. According to Horsley, Babbage died \"of renal inadequacy, secondary to cystitis.\" He had declined both a knighthood and baronetcy. He also argued against hereditary peerages, favouring life peerages instead.",
"title": "Death"
},
{
"paragraph_id": 85,
"text": "In 1983, the autopsy report for Charles Babbage was discovered and later published by his great-great-grandson. A copy of the original is also available. Half of Babbage's brain is preserved at the Hunterian Museum in the Royal College of Surgeons in London. The other half of Babbage's brain is on display in the Science Museum, London.",
"title": "Death"
},
{
"paragraph_id": 86,
"text": "There is a black plaque commemorating the 40 years Babbage spent at 1 Dorset Street, London. Locations, institutions and other things named after Babbage include:",
"title": "Memorials"
},
{
"paragraph_id": 87,
"text": "Babbage frequently appears in steampunk works; he has been called an iconic figure of the genre. Other works in which Babbage appears include:",
"title": "In fiction and film"
}
] | Charles Babbage was an English polymath. A mathematician, philosopher, inventor and mechanical engineer, Babbage originated the concept of a digital programmable computer. Babbage is considered by some to be the "father of the computer". Babbage is credited with inventing the first mechanical computer, the Difference Engine, which eventually led to more complex electronic designs, though all the essential ideas of modern computers are to be found in Babbage's Analytical Engine, programmed using a principle openly borrowed from the Jacquard loom. Babbage had a broad range of interests in addition to his work on computers, covered in his 1832 book On the Economy of Machinery and Manufactures. His varied work in other fields has led him to be described as "pre-eminent" among the many polymaths of his century. Babbage, who died before the complete successful engineering of many of his designs, including his Difference Engine and Analytical Engine, remained a prominent figure in the conception of computing. Parts of Babbage's incomplete mechanisms are on display in the Science Museum in London. In 1991, a functioning difference engine was constructed from Babbage's original plans. Built to tolerances achievable in the 19th century, the success of the finished engine indicated that Babbage's machine would have worked. | 2001-10-29T12:31:46Z | 2023-12-18T03:48:52Z | [
"Template:ISBN",
"Template:Cite web",
"Template:Subscription required",
"Template:Portal bar",
"Template:Gutenberg author",
"Template:Internet Archive author",
"Template:NPG name",
"Template:Pp-move",
"Template:Pp-pc",
"Template:Cbignore",
"Template:Wikiquote",
"Template:IPAc-en",
"Template:Openplaque",
"Template:ThoemmesBritish19C",
"Template:Cite thesis",
"Template:Infobox scientist",
"Template:Post-nominals",
"Template:Citation needed",
"Template:Cite book",
"Template:Cite encyclopedia",
"Template:Cite journal",
"Template:Cite ODNB",
"Template:Harvnb",
"Template:Use dmy dates",
"Template:Format price",
"Template:Convert",
"Template:Reflist",
"Template:Acad",
"Template:StandardEbooks",
"Template:Librivox author",
"Template:Use British English",
"Template:Main",
"Template:Cite news",
"Template:Timelines of computing",
"Template:UK National Archives ID",
"Template:Lucasian Professors of Mathematics",
"Template:Redirect",
"Template:Citation",
"Template:Cite magazine",
"Template:Wikisource author",
"Template:Authority control",
"Template:Short description",
"Template:Blockquote",
"Template:Failed verification",
"Template:Commons category"
] | https://en.wikipedia.org/wiki/Charles_Babbage |
5,700 | Cross-dressing | Cross-dressing is the act of wearing clothes traditionally or stereotypically associated with a different gender. From as early as pre-modern history, cross-dressing has been practiced for purposes of disguise, comfort, entertainment, and self-expression.
Socialization establishes social norms among the people of a particular society. With regard to the social aspects of clothing, such standards may reflect guidelines relating to the style, color, or type of clothing that individuals are expected to wear. Such expectations may be delineated according to gender roles. Cross-dressing involves dressing contrary to the prevailing standards (or in some cases, laws) for a person of their gender in their own society.
The term "cross-dressing" refers to an action or a behavior, without attributing or implying any specific causes or motives for that behavior. Cross-dressing is not synonymous with being transgender.
The phenomenon of cross-dressing is seen throughout recorded history, being referred to as far back as the Hebrew Bible. The terms used to describe it have changed over time; in some circles, the Anglo-Saxon-rooted term "cross-dresser" is viewed more favorably than the Latin-origin term "transvestite", which has come to be seen as outdated and derogatory. The latter term first appeared in Magnus Hirschfeld's Die Transvestiten (The Transvestites) in 1910, which associated cross-dressing with non-heterosexual behavior or sexual intent. Its connotations changed further in the 20th century as it became associated with sexual excitement, otherwise known as transvestic disorder, and it was historically used in diagnosing psychiatric disorders (e.g. transvestic fetishism); "cross-dressing", by contrast, is the term preferred by the transgender community. The Oxford English Dictionary gives 1911 as the earliest citation of the term "cross-dressing", by Edward Carpenter: "Cross-dressing must be taken as a general indication of, and a cognate phenomenon to, homosexuality". In 1928, Havelock Ellis used the two terms "cross-dressing" and "transvestism" interchangeably. The earliest citations for "cross-dress" and "cross-dresser" are 1966 and 1976, respectively.
The term en femme [ɑ̃ fam] is a lexical borrowing of a French phrase. It is used in the transgender and crossdressing community to describe the act of wearing feminine clothing or expressing a stereotypically feminine personality. The term is borrowed from the modern French phrase en femme meaning "as a woman." Most crossdressers also use a female name whilst en femme; that is their "femme name". In the cross-dressing community the persona a man adopts when he dresses as a woman is known as his "femme self".
Between 1987 and 1991, JoAnn Roberts and CDS published a magazine called "En Femme" that was "for the transvestite, transsexual, crossdresser, and female impersonator."
The term en homme [ɑ̃nɔm] is an anglicized adaptation of a French phrase. It is used in the transgender and crossdressing community to describe the act of wearing masculine clothing or expressing a stereotypically masculine personality. The term is derived from the modern colloquial French phrase en tant qu'homme, meaning "as a man"; the anglicized adaptation en homme literally translates as "in man". Most crossdressers also use a homme (male) name whilst en homme.
Cross-dressing has been practiced throughout much of recorded history, in many societies, and for many reasons. Examples exist in Greek, Norse, and Hindu mythology. Cross-dressing can be found in theater and religion, such as kabuki, Noh, and Korean shamanism, as well as in folklore, literature, and music. For instance, in kabuki culture during Japan's Edo period, cross-dressing was used not only for theatrical purposes but also because of contemporary societal trends: cross-dressing and the switching of genders was a familiar concept to the Japanese at the time, which allowed performers to interchange characters' genders easily and to incorporate geisha fashion into men's wear. This was especially common in the retelling of ancient stories, as with Benten, a thief in the play Benten Kozō who cross-dresses as a woman. Cross-dressing was also exhibited in Japanese Noh for similar reasons. Societal standards at the time blurred the boundaries between genders; ancient Japanese portraits of aristocrats, for example, show no clear differentiation between male and female beauty. Thus, in Noh performance, the cross-dressing of actors was common, especially given the ease of disguising biological sex with the use of masks and heavy robes. In a non-entertainment context, cross-dressing is also exhibited in Korean shamanism for religious purposes. Specifically, this is displayed in chaesu-gut, a shamanistic rite (gut) in which a shaman offers a sacrifice to the spirits to intercede in the fortunes of those on whose behalf the gut is held. Here, cross-dressing serves many purposes. Firstly, the shaman (typically a woman) cross-dresses because both male and female spirits can occupy her; this allows her to represent the opposite sex, becoming a cross-sex icon for roughly 75% of the ritual and a sexually liminal being. It is clear that in entertainment, literature, art, and religion, different civilizations have utilized cross-dressing for many different purposes.
In the British and European context, theatrical troupes ("playing companies") were all-male, with the female parts undertaken by boy players.
The Rebecca Riots took place between 1839 and 1843 in West and Mid Wales. They were a series of protests undertaken by local farmers and agricultural workers in response to unfair taxation. The rioters, often men dressed as women, took their actions against toll-gates, as they were tangible representations of high taxes and tolls. The riots ceased prior to 1844 due to several factors, including increased troop levels, a desire by the protestors to avoid violence, and the appearance of criminal groups using the guise of the biblical character Rebecca for their own purposes. In 1844 an Act of Parliament to consolidate and amend the laws relating to turnpike trusts in Wales was passed.
A variety of historical figures are known to have cross-dressed to varying degrees. Many women found they had to disguise themselves as men in order to participate in the wider world. For example, it is postulated that Margaret King cross-dressed in the early 19th century to attend medical school, as universities at that time accepted only male students. A century later, Vita Sackville-West dressed as a young soldier in order to "walk out" with her girlfriend Violet Keppel, to avoid the street harassment that two women would have faced. The prohibition on women wearing male garb, once strictly applied, still has echoes today in some Western societies which require girls and women to wear skirts, for example as part of school uniform or office dress codes. In some countries, even in casual settings, women are still prohibited from wearing traditionally male clothing. Sometimes all trousers, no matter how loose and long, are automatically considered "indecent", which may render their wearer subject to severe punishment, as in the case of Lubna al-Hussein in Sudan in 2009.
In many countries, cross-dressing was illegal under laws that identified it as indecent or immoral. Many such laws were challenged in the late 1900s, giving people the right to freedom of gender expression with regard to their clothing.
For instance, from 1840 onward, the United States saw state and city laws forbidding people from appearing in public while dressed in clothes not commonly associated with their assigned sex. The goal of this wave of policies was to create a tool that would enforce a normative gender narrative, targeting multiple gender identities across the gender spectrum. With the progression of time, styles, and societal trends, it became ever more difficult to draw the line between what was and was not cross-dressing. Only recently have these laws changed: as late as 2011, it was still possible for a man to get arrested for "impersonating a woman", a vestige of the 19th-century laws. Even so, legal issues surrounding cross-dressing persisted throughout the mid-20th century, a period when police would often cite laws that did not exist or that had been repealed in order to target the LGBTQ+ community.
This extends beyond the United States: 13 UN member states still explicitly criminalize transgender individuals, and even more countries use a wide variety of laws to target them. The third edition of the Trans Legal Mapping Report, produced by the International Lesbian, Gay, Bisexual, Trans and Intersex Association, found that an especially common method of targeting these individuals is through cross-dressing regulations. For instance, only in 2014 did an appeal court in Malaysia finally overturn a state law prohibiting Muslim men from cross-dressing as women.
In the Australian state of Tasmania, cross-dressing in public was made a criminal offence in 1935, and this law was only repealed in 2000.
There are many different kinds of cross-dressing and many different reasons why an individual might engage in cross-dressing behavior. Some people cross-dress as a matter of comfort or style, a personal preference for clothing associated with the opposite gender. Some people cross-dress to shock others or challenge social norms; others will limit their cross-dressing to underwear, so that it is not apparent. Some people attempt to pass as a member of the opposite gender in order to gain access to places or resources they would not otherwise be able to reach.
Single-sex theatrical troupes often have some performers who cross-dress to play roles written for members of the opposite sex (travesti and trouser roles). Cross-dressing, particularly the depiction of males wearing dresses, is often used for comic effect onstage and on-screen.
Boy player refers to children who performed in Medieval and English Renaissance playing companies. Some boy players worked for the adult companies and performed the female roles as women did not perform on the English stage in this period. Others worked for children's companies in which all roles, not just the female ones, were played by boys.
In an effort to clamp down on kabuki's popularity, women's kabuki, known as onna-kabuki, was banned in 1629 in Japan for being too erotic. Following this ban, young boys began performing in wakashū-kabuki, which was also soon banned. Thus adult men play female roles in kabuki.
Dan is the general name for female roles in Chinese opera, often referring to leading roles. They may be played by male or female actors. In the early years of Peking opera, all dan roles were played by men, but this practice is no longer common in any Chinese opera genre.
Women have often been excluded from Noh, and men often play female characters in it.
Drag is a special form of performance art based on the act of cross-dressing. A drag queen is usually a male-assigned person who performs as an exaggeratedly feminine character, in heightened costuming sometimes consisting of a showy dress, high-heeled shoes, obvious make-up, and wig. A drag queen may imitate famous female film or pop-music stars. A faux queen is a female-assigned person employing the same techniques. A drag king is a counterpart of the drag queen – a female-assigned person who adopts a masculine persona in performance or imitates a male film or pop-music star. Some female-assigned people undergoing gender reassignment therapy also self-identify as 'drag kings'.
The modern activity of battle reenactments has raised the question of women passing as male soldiers. In 1989, Lauren Burgess dressed as a male soldier in a U.S. National Park Service reenactment of the Battle of Antietam, and was ejected after she was discovered to be a woman. Burgess sued the Park Service for sexual discrimination. The case spurred spirited debate among Civil War buffs. In 1993, a federal judge ruled in Burgess's favor.
"Wigging" refers to the practice of male stunt doubles taking the place of an actress, parallel to "paint downs", where white stunt doubles are made up to resemble black actors. Female stunt doubles have begun to protest this norm of "historical sexism", saying that it restricts their already limited job possibilities.
Cross-dressing is a traditional popular trope in British comedy. The pantomime dame in British pantomime dates from the 19th century, which is part of the theatrical tradition of female characters portrayed by male actors in drag. Widow Twankey (Aladdin's mother) is a popular pantomime dame: in 2004 Ian McKellen played the role.
The Monty Python comedy troupe donned frocks and makeup, playing female roles while speaking in falsetto. Character comics such as Benny Hill and Dick Emery drew upon several female identities. In the BBC's long-running sketch show The Dick Emery Show (broadcast from 1963 to 1981), Emery played Mandy, a busty peroxide blonde whose catchphrase, "Ooh, you are awful ... but I like you!", was given in response to a seemingly innocent remark made by her interviewer, but perceived by her as ribald double entendre. The popular tradition of cross dressing in British comedy extended to the 1984 music video for Queen's "I Want to Break Free" where the band parody several female characters from the soap opera Coronation Street.
A transvestic fetishist is a person who cross-dresses as part of a sexual fetish. According to the fourth edition of Diagnostic and Statistical Manual of Mental Disorders, this fetishism was limited to heterosexual men; however, DSM-5 does not have this restriction, and opens it to women and men, regardless of their sexual orientation.
Sometimes either member of a heterosexual couple will cross-dress in order to arouse the other. For example, the male might wear skirts or lingerie and/or the female will wear boxers or other male clothing. (See also forced feminization)
Some people who cross-dress may endeavor to project a complete impression of belonging to another gender, including mannerisms, speech patterns, and emulation of sexual characteristics. This is referred to as passing or "trying to pass", depending how successful the person is. An observer who sees through the cross-dresser's attempt to pass is said to have "read" or "clocked" them. There are videos, books, and magazines on how a man may look more like a woman.
Others may choose to take a mixed approach, adopting some feminine traits and some masculine traits in their appearance. For instance, a man might wear both a dress and a beard. This is sometimes known as "genderfuck". In a broader context, cross-dressing may also refer to other actions undertaken to pass as a particular sex, such as packing (accentuating the male crotch bulge) or, the opposite, tucking (concealing the male crotch bulge).
Gender disguise has been used by women and girls to pass as male, and by men and boys to pass as female. Gender disguise has also been used as a plot device in storytelling, particularly in narrative ballads, and is a recurring motif in literature, theater, and film. Historically, some women have cross-dressed to take up male-dominated or male-exclusive professions, such as military service. Conversely, some men have cross-dressed to escape from mandatory military service or as a disguise to assist in political or social protest, as men in Wales did in the Rebecca Riots and when conducting Ceffyl Pren as a form of mob justice.
Conversation surrounding exclusion and inequality in sports has been going on for decades. Alongside the fight for equality in sports, a number of notable women have dressed as men or hidden their gender in order to insert themselves into the heavily gatekept world of sports.
Roberta "Bobbi" Gibb is the first woman to have competed in the Boston Marathon. In 1966 Bobbi Gibb wrote a letter to the Boston Athletic Association asking to participate in the race happening that year. When Gibb received her letter back in the mail she was faced with the news that her entry to the race was denied due to her gender. Rather than just accept her fate, Gibb did not take no for an answer and decided to run the marathon anyways—however, she would do it hidden as a man. On the day of the race Gibb showed up in an oversized sweatshirt, her brother's shorts, and men's running shoes. Gibb hid in the bushes until the race started and then joined in with the crowd. Eventually her fellow runners figured out Gibb's real gender but stated that they would make sure that she finished the race. Gibb ended up finishing her first Boston Marathon in 3 hours, 27 minutes and 40 seconds. She crossed the finish line with blistered, bleeding feet from the men's running shoes she was wearing. Gibb's act of defiance influenced other women marathon runners of the time like Katherine Switzer, who also registered under an alias to be able to run the race in 1967. It would not be until 1972 until there was an official women's race within the Boston Marathon.
Sam Kerr is a forward for the Australian women's national soccer team and Chelsea FC in the FA Women's Super League. Kerr has been regarded as one of the best forwards in the sport and has been among the most highly paid players in women's soccer. While Kerr now shares the world stage with other great women soccer players, as a young child she shared the field with young boys. Kerr grew up in a suburb of Perth where there was little to no access to girls' soccer teams in the immediate area. Not having a girls' team to play on did not stop Kerr; she simply played on a youth boys' team where all of her teammates assumed she was also a boy. Kerr states in her book My Journey to the World Cup that she continued to hide her gender because she did not want to be treated any differently. In the book Kerr also revealed that when one of her teammates found out that she was, in fact, a girl, he cried. While Kerr's act of hiding her gender was initially an accident, it is still an example of how women (and in this case a young girl) can create opportunities for themselves by looking or acting like men.
One of the most common instances of gender disguise is in war and other militaristic situations. From Joan of Arc in the 15th century, to the legend retold in Disney's animated Mulan, to young girls in World War II, many people have disguised themselves as men in order to be able to fight in wars.
Born c. 1412, St Joan of Arc, also known as the Maid of Orleans, is one of the oldest examples of gender disguise. At 13, after receiving a revelation that she was to lead the French to victory over the English in the Hundred Years' War, Joan donned the clothing of a male soldier in the French army. Joan was able to convince the future King Charles VII to allow her to take the lead of some of the French armies in order to help him secure his crown. Ultimately, Joan of Arc was successful in claiming victory over the English, but she was captured in 1430 and found guilty of heresy, leading to her execution in 1431. The impact of her actions was felt long after her death: during the suffragette movement, Joan of Arc was used as an inspiration, particularly in Britain, where many used her actions as fuel for their fight for political reform.
Born in 1760 in Plympton, Massachusetts, Deborah Sampson was the first female soldier in the US Army and the only woman in the Revolution to receive a full military pension. At age 18, Deborah took the name "Robert Shirtleff" and enlisted in the Continental Army. In her capacity as a soldier she was very successful, being named captain and leading an infantry unit in the capture of 15 enemy soldiers, among other things. One and a half years into her service, her true sex was revealed when she had to receive medical care. Following an honorable discharge, Deborah petitioned Congress for her full pay, which had been withheld on the grounds of her being an "invalid soldier", and eventually received it. She died in 1827 at age 66. Even after her death, Deborah Sampson continues to be regarded as a "hero of the American Revolution". In 2019, a diary kept by Corporal Abner Weston came to light, recording Deborah Sampson's previously unknown first attempt to enlist in the Continental Army.
These women are just a few among many who have disguised themselves as men in order to be able to fight in many different wars. Others who have used gender disguise for this purpose include Kit Kavanaugh/Christian Davies, Hannah Snell, Sarah Emma Edmonds, Frances Clayton, Dorothy Lawrence, Zoya Smirnow, and Brita Olofsdotter.
In some instances, women in journalism have deemed it necessary to take on the identity of a man in order to gather information that is only accessible from the male point of view. In other cases, people cross-dress to navigate certain cultures or specific circumstances that involve strict gender norms and expectations.
Norah Vincent, author of the book Self-Made Man: One Woman's Journey Into Manhood and Back Again, used gender disguise to go undercover as a man, penetrating men's social circles and experiencing life as a man. In 2003, Vincent put her life on pause to adopt a new masculine identity as Ned Vincent. She worked with a makeup artist and vocal coach in order to convincingly play the role of a biological man, and wore an undersized sports bra, a stuffed jock strap, and size 11½ shoes to deceive those around her. In her book, Vincent makes discoveries about socialization, romance, sex, and stress as a man that led her to conclude that "[Men] have different problems than women have, but they don't have it better." However, Vincent developed controversial opinions about sex and gender, claiming that transgender people are not legitimate until they undergo hormone therapy and surgical intervention. After writing Self-Made Man, Vincent suffered from depression; she died by medically assisted suicide in 2022.
Bacha posh, an Afghan tradition, involves families dressing young Afghan girls as boys so that they present to the public as male. Families engage in bacha posh so that their daughters may avoid the oppression that women face under Afghanistan's deeply patriarchal society. Other reasons for having a bacha posh daughter include economic pressure, as girls and women are generally prohibited from working in contemporary Afghanistan, and social pressure, as families with boys tend to be more well regarded in Afghan society. While there is no law that prohibits bacha posh, girls are expected to revert to traditional gender norms upon reaching puberty. According to Thomas Barfield, an anthropology professor at Boston University, bacha posh is "one of the most under-investigated" topics in the realm of gender studies, making it difficult to determine exactly how common the practice is in Afghan society. However, some prominent female figures in Afghan society have admitted to being bacha posh in their youth. A famous example is Afghan parliament member Azita Rafaat, who claims that bacha posh was a positive experience that built her self-confidence in Afghanistan's heavily patriarchal society and gave her a more well-rounded understanding of women's issues in Afghanistan.
The actual determination of cross-dressing is largely socially constructed. For example, in Western society, trousers have long been adopted for usage by women, and it is no longer regarded as cross-dressing. In cultures where men have traditionally worn skirt-like garments such as the kilt or sarong, these are not seen as women's clothing, and wearing them is not seen as cross-dressing for men. As societies are becoming more global in nature, both men's and women's clothing are adopting styles of dress associated with other cultures.
Cosplay may also involve cross-dressing: some women may wish to dress as a male character, and vice versa (see crossplay). Breast binding is not uncommon among women and is often needed to cosplay a male character convincingly.
In most parts of the world, it remains socially disapproved for men to wear clothes traditionally associated with women. Attempts are occasionally made, e.g. by fashion designers, to promote the acceptance of skirts as everyday wear for men. Cross-dressers have complained that society permits women to wear pants or jeans and other masculine clothing, while condemning any man who wants to wear clothing sold for women.
To create a more feminine figure, male cross-dressers often use different types and styles of breast forms: silicone or foam prostheses, traditionally used by women who have undergone mastectomies, that recreate the visual appearance of a breast. Some male cross-dressers may also use hip or butt pads to create a profile that appears more stereotypically feminine.
While most male cross-dressers utilize clothing associated with modern women, some are involved in subcultures that involve dressing as little girls or in vintage clothing. Some such men have written that they enjoy dressing as femininely as possible, so they wear frilly dresses with lace and ribbons, bridal gowns complete with veils, as well as multiple petticoats, corsets, girdles and/or garter belts with nylon stockings.
The term underdressing is used by male cross-dressers to describe wearing female undergarments such as panties under their male clothes. The famous low-budget film-maker Edward D. Wood Jr. (who also went out in public dressed in drag as "Shirley", his female alter ego) said he often wore women's underwear under his military uniform as a Marine during World War II. Female masking is a form of cross-dressing in which men wear masks that present them as female.
Cross-dressers may begin wearing clothing associated with the opposite sex in childhood, using the clothes of a sibling, parent, or friend. Some parents have said they allowed their children to cross-dress and, in many cases, the child stopped when they became older. The same pattern often continues into adulthood, where there may be confrontations with a spouse, partner, family member or friend. Married cross-dressers can experience considerable anxiety and guilt if their spouse objects to their behavior.
Sometimes, because of guilt or other reasons, cross-dressers dispose of all their clothing, a practice called "purging", only to start collecting the other gender's clothing again.
Celebrations of cross-dressing occur in cultures worldwide. The Abissa festival in Côte d'Ivoire, Ofudamaki in Japan, and the Kottankulangara Festival in India are all examples of this.
Advocacy for social change has done much to relax the constrictions of gender roles on men and women, but those who cross-dress are still subject to prejudice from some people. It is noticeable that as being transgender becomes more socially accepted as a normal human condition, the prejudices against cross-dressing are changing quite quickly, just as similar prejudices against homosexuals have changed rapidly in recent decades.
The reason it is so hard to compile statistics for female cross-dressers is that, for women, the line where ordinary dressing stops and cross-dressing begins has become blurred, whereas the same line for men is as well defined as ever. This is one of the many issues being addressed by third-wave feminism as well as the modern-day masculist movement.
The general culture has very mixed views about cross-dressing. A woman who wears her husband's shirt to bed is considered attractive, while a man who wears his wife's nightgown to bed may be considered transgressive. Marlene Dietrich in a tuxedo was considered very erotic; Jack Lemmon in a dress was considered ridiculous. All this may result from an overall gender role rigidity for males; that is, because of the prevalent gender dynamic throughout the world, men frequently encounter discrimination when deviating from masculine gender norms, particularly violations of heteronormativity. A man's adoption of feminine clothing is often considered a step down in the gendered social order, whereas a woman's adoption of traditionally men's clothing (at least in the English-speaking world) has less of an impact because women have traditionally been subordinate to men, unable to effect serious change through style of dress. Thus when a male cross-dresser puts on his clothes, he transforms into the quasi-female and thereby becomes an embodiment of the conflicted gender dynamic. Following the work of Judith Butler, gender proceeds through ritualized performances, but in male cross-dressing it becomes a performative "breaking" of the masculine and a "subversive repetition" of the feminine.
Psychoanalysts today do not regard cross-dressing by itself as a psychological problem, unless it interferes with a person's life. "For instance," said Joseph Merlino, senior editor of Freud at 150: 21st Century Essays on a Man of Genius, "[suppose that]...I'm a cross-dresser and I don't want to keep it confined to my circle of friends, or my party circle, and I want to take that to my wife and I don't understand why she doesn't accept it, or I take it to my office and I don't understand why they don't accept it, then it's become a problem because it's interfering with my relationships and environment."
Cross-dressing today is much more common and normalized thanks to trends such as camp fashion and androgynous fashion. These trends have long histories but have recently been popularized by major designers, fashion media, and celebrities. Camp is a style of fashion with a long history extending from the Victorian era to the modern era. From the Victorian era up until the mid-20th century, it was defined as an exaggerated and flamboyant style of dressing, typically associated with ideas of effeminacy, demasculinization, and homosexuality. As the trend entered the 20th century, it also developed an association with a lack of decorum, creating the connotation that those who engaged in Camp were unrefined, improper, distasteful, and, essentially, undignified. Though this was its former understanding, Camp has since developed a new role in the fashion industry. It is considered a fashion style that has "failed seriousness" and has instead become a fun form of self-expression. Thanks to its integration with high fashion and extravagance, Camp is now seen as a high art form of absurdity: loud, vibrant, bold, fun, and emptily frivolous.
Camp is often used in drag culture as a method of exaggerating or inverting traditional conceptions of what it means to be feminine. The QTPOC community has had a large impact on Camp, as exhibited by ballroom culture, camp/glamour queens, Black '70s funk, Caribbean Carnival costumes, Blaxploitation movies, "pimp/player fashion", and more. This influence has also been embodied by camp icons such as Josephine Baker and RuPaul.
Androgynous fashion is described as neither masculine nor feminine; rather, it is the embodiment of a gender-inclusive and sexually neutral mode of expression. The general understanding of androgynous fashion is the mixing of masculine and feminine pieces with the goal of producing a look that has no visual differentiation between genders. This look is achieved by masking the body so that one cannot identify the biological sex of an individual from the silhouette of the clothing; therefore, many androgynous looks involve looser, baggier clothing that can conceal curves in the female body, or more "feminine" fabrics and prints for men.
Both of these style forms have been normalized and popularized by celebrities such as Harry Styles, Timothée Chalamet, Billie Eilish, Princess Diana, and more.
Beyond fashion, cross-dressing in many non-Western countries has not shed the negative connotations it once carried in the West. For instance, many East and Southeast Asian countries have a history of discrimination and stigma against LGBTQ and cross-dressing individuals. This became especially evident during and after the COVID-19 pandemic, when these governments failed to provide sufficient support to such individuals owing to a lack of legal services, a lack of job opportunities, and more. For instance, to receive government aid, individuals needed to be able to change their legal name, gender, and other information on official ID documents quickly. This shortcoming compounded the challenges of income loss, food insecurity, unsafe housing, poor access to healthcare, and more for many trans and cross-dressing individuals. It was especially pertinent because many of these individuals relied on entertainment and sex work for income; with the pandemic removing these job opportunities, the stigmatisation and discrimination against them only increased, especially in Southeast Asian countries. On the other hand, some Asian countries have grown more accepting of cross-dressing with modernization. For instance, among Japan's niche communities there exist the otokonoko, male-assigned individuals who engage in female cross-dressing as a form of gender expression. The trend originated in manga and grew with the rise of maid cafés, cosplay, and more in the 2010s. With its normalization through cosplay, cross-dressing has become a large part of otaku and anime culture.
Women dressed as men, and less often men dressed as women, is a common trope in fiction and folklore. For example, in Norse myth, Thor disguised himself as Freya. These disguises were also popular in Gothic fiction, such as in works by Charles Dickens, Alexandre Dumas, père, and Eugène Sue, and in a number of Shakespeare's plays, such as Twelfth Night. In The Wind in the Willows, Toad dresses as a washerwoman, and in The Lord of the Rings, Éowyn pretends to be a man.
In science fiction, fantasy and women's literature, this literary motif is occasionally taken further, with literal transformation of a character from male to female or vice versa. Virginia Woolf's Orlando: A Biography focuses on a man who becomes a woman, as does a warrior in Peter S. Beagle's The Innkeeper's Song; while in Geoff Ryman's The Warrior Who Carried Life, Cara magically transforms herself into a man.
Other popular examples of gender disguise include Madame Doubtfire (published as Alias Madame Doubtfire in the United States) and its movie adaptation Mrs. Doubtfire, featuring a man disguised as a woman. Similarly, the movie Tootsie features Dustin Hoffman disguised as a woman, while the movie The Associate features Whoopi Goldberg disguised as a man.
The 10th edition of the International Statistical Classification of Diseases and Related Health Problems lists dual-role transvestism (non-sexual cross-dressing) and fetishistic transvestism (cross-dressing for sexual pleasure) as disorders. Both listings were removed for the 11th edition. Transvestic fetishism is a paraphilia and a psychiatric diagnosis in the fifth edition (DSM-5) of the Diagnostic and Statistical Manual of Mental Disorders.
5,702 | Channel Tunnel | The Channel Tunnel (French: Tunnel sous la Manche), also known as the Chunnel, is a 50.46-kilometre (31.35 mi) underwater railway tunnel that connects Folkestone (Kent, England) with Coquelles (Pas-de-Calais, France) beneath the English Channel at the Strait of Dover. It is the only fixed link between the island of Great Britain and the European mainland. At its lowest point, it is 75 metres (246 ft) below the sea bed and 115 metres (377 ft) below sea level. At 37.9 kilometres (23.5 mi), it has the longest underwater section of any tunnel in the world and is the third-longest railway tunnel in the world. The speed limit for trains through the tunnel is 160 kilometres per hour (99 mph). The tunnel is owned and operated by the company Getlink, formerly "Groupe Eurotunnel".
The tunnel carries high-speed Eurostar passenger trains, LeShuttle services for road vehicles and freight trains. It connects end-to-end with high-speed railway lines: the LGV Nord in France and High Speed 1 in England. In 2017, rail services carried 10.3 million passengers and 1.22 million tonnes of freight, and the Shuttle carried 10.4 million passengers, 2.6 million cars, 51,000 coaches, and 1.6 million lorries (equivalent to 21.3 million tonnes of freight), compared with 11.7 million passengers, 2.6 million lorries and 2.2 million cars by sea through the Port of Dover.
Plans to build a cross-Channel fixed link appeared as early as 1802, but British political and media pressure, motivated by fears of compromising national security, disrupted attempts to build one. An early unsuccessful attempt was made in the late 19th century, on the English side, "in the hope of forcing the hand of the English Government". The eventual successful project, organised by Eurotunnel, began construction in 1988 and opened in 1994. Estimated to cost £5.5 billion in 1985, it was at the time the most expensive construction project ever proposed. The cost finally amounted to £9 billion (equivalent to £21.8 billion in 2021), well over budget.
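As a minimal illustrative sketch (not part of the original account), the arithmetic behind "well over budget" can be checked in Python using only the figures quoted above: the 1985 estimate, the final cost, and its stated 2021 equivalent.

```python
# Budget figures quoted in the text, in billions of pounds sterling.
estimate_1985 = 5.5     # 1985 cost estimate
final_cost = 9.0        # final (outturn) cost
final_cost_2021 = 21.8  # stated 2021 equivalent of the final cost

# Overrun relative to the original estimate: (9.0 - 5.5) / 5.5, about 64%.
overrun = (final_cost - estimate_1985) / estimate_1985

# Price multiplier implied by the stated 2021 equivalence: 21.8 / 9.0, about 2.42.
implied_multiplier = final_cost_2021 / final_cost

print(f"Cost overrun vs. 1985 estimate: {overrun:.0%}")
print(f"Implied price multiplier to 2021: {implied_multiplier:.2f}x")
```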
Since its opening, the tunnel has experienced occasional mechanical problems. Both fires and cold weather have temporarily disrupted its operation. Since at least 1997, aggregations of migrants around Calais seeking entry to the United Kingdom, such as through the tunnel, have prompted deterrence and countermeasures.
In 1802, Albert Mathieu-Favier, a French mining engineer, put forward a proposal to tunnel under the English Channel, with illumination from oil lamps, horse-drawn coaches, and an artificial island positioned mid-Channel for changing horses. His design envisaged a bored two-level tunnel with the top tunnel used for transport and the bottom one for groundwater flows.
In 1839, Aimé Thomé de Gamond, a Frenchman, performed the first geological and hydrographical surveys on the Channel between Calais and Dover. He explored several schemes and, in 1856, presented a proposal to Napoleon III for a mined railway tunnel from Cap Gris-Nez to East Wear Point with a port/airshaft on the Varne sandbank at a cost of 170 million francs, or less than £7 million.
In 1865, a deputation led by George Ward Hunt proposed the idea of a tunnel to the Chancellor of the Exchequer of the day, William Ewart Gladstone.
In 1866, Henry Marc Brunel made a survey of the floor of the Strait of Dover. His results proved that the floor was composed of chalk, like the adjoining cliffs, and thus that a tunnel was feasible. For this survey, he invented the gravity corer, which is still used in geology.
Around 1866, William Low and Sir John Hawkshaw promoted tunnel ideas, but apart from preliminary geological studies, none were implemented.
An official Anglo-French protocol was established in 1876 for a cross-Channel railway tunnel.
In 1881, British railway entrepreneur Sir Edward Watkin and Alexandre Lavalley, a French Suez Canal contractor, were partners in the Anglo-French Submarine Railway Company, which conducted exploratory work on both sides of the Channel. From June 1882 to March 1883, the British tunnel boring machine bored a total of 1,840 m (6,037 ft) through chalk, while Lavalley used a similar machine to drill 1,669 m (5,476 ft) from Sangatte on the French side. Despite this success, the cross-Channel tunnel project was abandoned in 1883 after the British military raised fears that an underwater tunnel might be used as an invasion route. Nevertheless, in 1883, this TBM was used to bore a railway ventilation tunnel, 7 feet (2.1 m) in diameter and 6,750 feet (2,060 m) long, between Birkenhead and Liverpool, England, through sandstone under the River Mersey. These early works were encountered more than a century later during the TransManche Link (TML) project.
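For illustration only, the average advance rate of the British machine can be estimated from the quoted figures. The text gives only the months, not exact dates, so the endpoints below (1 June 1882 and 31 March 1883) are assumptions made purely for this sketch.

```python
from datetime import date

# Quoted figure: 1,840 m bored between June 1882 and March 1883.
advance_m = 1840

# Assumed endpoints (the text gives only months, not exact days).
start = date(1882, 6, 1)
end = date(1883, 3, 31)

days = (end - start).days  # 303 days under these assumptions
print(f"Average advance: {advance_m / days:.1f} m/day over {days} days")
# Roughly 6 m/day, under the assumed start and end dates.
```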
A 1907 film, Tunnelling the English Channel by pioneer filmmaker Georges Méliès, depicts King Edward VII and President Armand Fallières dreaming of building a tunnel under the English Channel.
In 1919, during the Paris Peace Conference, British prime minister David Lloyd George repeatedly brought up the idea of a Channel tunnel as a way of reassuring France about British willingness to defend against another German attack. The French did not take the idea seriously, and nothing came of the proposal.
In the 1920s, Winston Churchill advocated for the Channel Tunnel, using that exact name in his essay "Should Strategists Veto The Tunnel?" It was published on 27 July 1924 in the Weekly Dispatch, and argued vehemently against the idea that the tunnel could be used by a Continental enemy in an invasion of Britain. Churchill expressed his enthusiasm for the project again in an article for the Daily Mail on 12 February 1936, "Why Not A Channel Tunnel?"
There was another proposal in 1929, but nothing came of the discussion and the idea was shelved. Proponents estimated the construction cost at US$150 million. The engineers had addressed the concerns of both nations' military leaders by designing two sumps—one near the coast of each country—that could be flooded at will to block the tunnel, but this did not appease military leaders or dispel concerns about hordes of tourists who would disrupt English life. Military fears continued during the Second World War: after the fall of France, as Britain prepared for an expected German invasion, a Royal Navy officer in the Directorate of Miscellaneous Weapons Development calculated that Hitler could use slave labour to build two Channel tunnels in 18 months. The estimate caused rumours that Germany had already begun digging.
A British film from Gaumont Studios, The Tunnel (also called TransAtlantic Tunnel), was released in 1935 as a science-fiction film about the creation of a transatlantic tunnel. It briefly mentioned that its protagonist, a Mr. McAllan, had successfully completed a British Channel tunnel in 1940, five years after the film's release.
By 1955, defence arguments had become less relevant due to the dominance of air power, and both the British and French governments supported technical and geological surveys. In 1958 the 1881 workings were cleared in preparation for a £100,000 geological survey by the Channel Tunnel Study Group. 30% of the funding came from Channel Tunnel Co Ltd, the largest shareholder of which was the British Transport Commission, as successor to the South Eastern Railway. A detailed geological survey was carried out in 1964 and 1965.
Although the two countries agreed to build a tunnel in 1964, the phase 1 initial studies and signing of a second agreement to cover phase 2 took until 1973. The plan described a government-funded project to create two tunnels to accommodate car shuttle wagons on either side of a service tunnel. Construction started on both sides of the Channel in 1974.
On 20 January 1975, to the dismay of their French partners, the then-governing Labour Party in Britain cancelled the project due to uncertainty about EEC membership, cost estimates that had doubled, and the general economic crisis at the time. By this time the British tunnel boring machine was ready and the Ministry of Transport had conducted a 300 m (980 ft) experimental drive. (This short tunnel, called Adit A1, was eventually reused as the starting and access point for tunnelling operations from the British side, and remains an access point to the service tunnel.) The cancellation costs were estimated at £17 million. On the French side, a tunnel-boring machine had been installed underground in a stub tunnel. It lay there for 14 years until 1988, when it was sold, dismantled, refurbished and shipped to Turkey, where it was used to drive the Moda tunnel for the Istanbul Sewerage Scheme.
In 1979, the "Mouse-hole Project" was suggested when the Conservatives came to power in Britain. The concept was a single-track rail tunnel with a service tunnel but without shuttle terminals. The British government took no interest in funding the project, but British Prime Minister Margaret Thatcher did not object to a privately funded project, although she said she assumed it would be for cars rather than trains. In 1981, Thatcher and French president François Mitterrand agreed to establish a working group to evaluate a privately funded project. In June 1982 the Franco-British study group favoured a twin tunnel to accommodate conventional trains and a vehicle shuttle service. In April 1985 promoters were invited to submit scheme proposals. Four submissions were shortlisted:
The cross-Channel ferry industry protested under the name "Flexilink". In 1975 there was no campaign protesting a fixed link, with one of the largest ferry operators (Sealink) being state-owned. Flexilink continued rousing opposition throughout 1986 and 1987. Public opinion strongly favoured a drive-through tunnel, but concerns about ventilation, accident management and driver mesmerisation led to the only shortlisted rail submission, CTG/F-M, being awarded the project in January 1986. Reasons given for the selection included that it caused least disruption to shipping in the Channel and least environmental disruption, was the best protected against terrorism, and was the most likely to attract sufficient private finance.
The British Channel Tunnel Group consisted of two banks and five construction companies, while their French counterparts, France–Manche, consisted of three banks and five construction companies. The banks' role was to advise on financing and secure loan commitments. On 2 July 1985, the groups formed Channel Tunnel Group/France–Manche (CTG/F–M). Their submission to the British and French governments was drawn from the 1975 project, including 11 volumes and a substantial environmental impact statement.
The Anglo-French Treaty on the Channel Tunnel was signed by both governments in Canterbury Cathedral. The Treaty of Canterbury (1986) prepared the Concession for the construction and operation of the Fixed Link by privately owned companies and outlined arbitration methods to be used in the event of disputes. It set up the Intergovernmental Commission (IGC), responsible for monitoring all matters associated with the Tunnel's construction and operation on behalf of the British and French governments, and a Safety Authority to advise the IGC. It drew a land frontier between the two countries in the middle of the Channel tunnel—the first of its kind.
Design and construction were done by the ten construction companies in the CTG/F-M group. The French terminal and boring from Sangatte were done by the five French construction companies in the joint venture group GIE Transmanche Construction. The English Terminal and boring from Shakespeare Cliff were done by the five British construction companies in the Translink Joint Venture. The two partnerships were linked by a bi-national project organisation, TransManche Link (TML). The Maître d'Oeuvre was a supervisory engineering body employed by Eurotunnel under the terms of the concession that monitored the project and reported to the governments and banks.
In France, with its long tradition of infrastructure investment, the project had widespread approval. The French National Assembly approved it unanimously in April 1987, and after a public inquiry, the Senate approved it unanimously in June. In Britain, select committees examined the proposal, making history by holding hearings away from Westminster, in Kent. In February 1987, the third reading of the Channel Tunnel Bill took place in the House of Commons, and passed by 94 votes to 22. The Channel Tunnel Act gained Royal Assent and passed into law in July. Parliamentary support for the project came partly from provincial members of Parliament on the basis of promises of regional Eurostar through train services that never materialised; the promises were repeated in 1996 when the contract for construction of the Channel Tunnel Rail Link was awarded.
The tunnel is a build-own-operate-transfer (BOOT) project with a concession. TML would design and build the tunnel, but financing was through a separate legal entity, Eurotunnel. Eurotunnel absorbed CTG/F-M and signed a construction contract with TML, but the British and French governments controlled final engineering and safety decisions, now in the hands of the Channel Tunnel Safety Authority. The British and French governments gave Eurotunnel a 55-year operating concession (from 1987; extended by 10 years to 65 years in 1993) to repay loans and pay dividends. A Railway Usage Agreement was signed between Eurotunnel, British Rail and SNCF guaranteeing future revenue in exchange for the railways obtaining half of the tunnel's capacity.
Private funding for such a complex infrastructure project was of unprecedented scale. CTG/F-M raised an initial equity of £45 million, increased by a £206 million private institutional placement; £770 million was raised in a public share offer that included press and television advertisements; and a syndicated bank loan and letter of credit arranged a further £5 billion. Privately financed, the total investment cost at 1985 prices was £2.6 billion. At the 1994 completion, actual costs were, in 1985 prices, £4.65 billion: an 80% cost overrun. The cost overrun was partly due to enhanced safety, security, and environmental demands. Financing costs were 140% higher than forecast.
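The overrun and funding figures above can be cross-checked with simple arithmetic. The sketch below uses only the values quoted in this section (1985 prices) and is purely illustrative:

```python
# Cross-check of the financing figures quoted above (all in 1985 prices).
budget_bn = 2.6    # planned total investment cost, £ billion
actual_bn = 4.65   # actual cost at the 1994 completion, £ billion

overrun = (actual_bn - budget_bn) / budget_bn
print(f"Cost overrun: {overrun:.0%}")  # 79%, quoted above as "an 80% cost overrun"

# Equity and debt facilities raised, £ million (as listed above).
funding = {
    "initial equity": 45,
    "private institutional placement": 206,
    "public share offer": 770,
    "syndicated loan and letter of credit": 5_000,
}
print(f"Total raised: about £{sum(funding.values()):,} million")  # about £6,021 million
```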
Working from both the English and French sides of the Channel, eleven tunnel boring machines or TBMs cut through chalk marl to construct two rail tunnels and a service tunnel. The vehicle shuttle terminals are at Cheriton (part of Folkestone) and Coquelles, and are connected to the English M20 and French A16 motorways respectively.
Tunnelling commenced in 1988, and the tunnel began operating in 1994. In 1985 prices, the total construction cost was £4.65 billion (equivalent to £13 billion in 2015), an 80% cost overrun. At the peak of construction 15,000 people were employed with daily expenditure over £3 million. Ten workers, eight of them British, were killed during construction between 1987 and 1993, most in the first few months of boring.
A 50 mm (2.0 in) diameter pilot hole allowed the service tunnel to break through without ceremony on 30 October 1990. On 1 December 1990, Englishman Graham Fagg and Frenchman Philippe Cozette broke through the service tunnel with the media watching. Eurotunnel completed the tunnel on time. (A BBC television commentator called Graham Fagg "the first man to cross the Channel by land for 8000 years".) The two tunnelling efforts met with an offset of only 36.2 cm (14.3 in). A Paddington Bear soft toy was chosen by British tunnellers as the first item to pass through to their French counterparts when the two sides met.
The tunnel was officially opened, one year later than originally planned, by Queen Elizabeth II and the French president, François Mitterrand, in a ceremony held in Calais on 6 May 1994. The Queen travelled through the tunnel to Calais on a Eurostar train, which stopped nose to nose with the train that carried President Mitterrand from Paris. Following the ceremony President Mitterrand and the Queen travelled on Le Shuttle to a similar ceremony in Folkestone. A full public service did not start for several months. The first freight train, however, ran on 1 June 1994 and carried Rover and Mini cars being exported to Italy.
The Channel Tunnel Rail Link (CTRL), now called High Speed 1, runs 69 miles (111 km) from St Pancras railway station in London to the tunnel portal at Folkestone in Kent. It cost £5.8 billion. On 16 September 2003 the prime minister, Tony Blair, opened the first section of High Speed 1, from Folkestone to north Kent. On 6 November 2007, the Queen officially opened High Speed 1 and St Pancras International station, replacing the original slower link to Waterloo International railway station. High Speed 1 trains travel at up to 300 km/h (186 mph), the journey from London to Paris taking 2 hours 15 minutes, to Brussels 1 hour 51 minutes.
In 1994, the American Society of Civil Engineers named the tunnel one of the Seven Wonders of the Modern World. In 1995, the American magazine Popular Mechanics published the results.
The opening was phased: the Channel Tunnel Safety Authority and the IGC gave permission for the various services to begin on several dates over 1994–1995, with actual start-ups following a few days after each approval.
Site investigation undertaken in the 20 years before construction confirmed earlier speculations that a tunnel could be bored through a chalk marl stratum. The chalk marl is conducive to tunnelling, with impermeability, ease of excavation and strength. The chalk marl runs along the entire length of the English side of the tunnel, but on the French side a length of 5 kilometres (3.1 mi) has variable and difficult geology. The tunnel consists of three bores: two 7.6-metre (24 ft 11 in) diameter rail tunnels, 30 metres (98 ft) apart, 50 kilometres (31 mi) in length with a 4.8-metre (15 ft 9 in) diameter service tunnel in between. The three bores are connected by cross-passages and piston relief ducts. The service tunnel was used as a pilot tunnel, boring ahead of the main tunnels to determine the conditions. English access was provided at Shakespeare Cliff and French access from a shaft at Sangatte. The French side used five tunnel boring machines (TBMs), and the English side six. The service tunnel uses Service Tunnel Transport System (STTS) and Light Service Tunnel Vehicles (LADOGS). Fire safety was a critical design issue.
Between the portals at Beussingue and Castle Hill the tunnel is 50.5 kilometres (31 mi) long, with 3.3 kilometres (2 mi) under land on the French side, 9.3 kilometres (6 mi) on the UK side, and 37.9 kilometres (24 mi) under sea. It is the third-longest rail tunnel in the world, behind the Gotthard Base Tunnel in Switzerland and the Seikan Tunnel in Japan, but has the longest under-sea section. The average depth is 45 metres (148 ft) below the seabed. On the UK side, of the expected 5 million cubic metres (6.5 million cu yd) of spoil, approximately 1 million cubic metres (1.3 million cu yd) was used for fill at the terminal site, and the remainder was deposited at Lower Shakespeare Cliff behind a seawall, reclaiming 74 acres (30 ha) of land. This land was then made into the Samphire Hoe Country Park. Environmental impact assessment did not identify any major risks for the project, and further studies into safety, noise, and air pollution were overall positive. However, environmental objections were raised over a high-speed link to London.
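As a quick consistency check, the three route segments quoted above should sum to the portal-to-portal length. A minimal sketch using only the numbers in the paragraph above:

```python
# Route segments quoted above, in kilometres.
french_land, uk_land, undersea = 3.3, 9.3, 37.9

total_km = french_land + uk_land + undersea
print(total_km)                                       # 50.5, matching the quoted length
print(f"undersea share: {undersea / total_km:.0%}")   # ~75% of the route is under the sea
```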
Successful tunnelling required a sound understanding of the topography and geology and the selection of the best rock strata through which to dig. The geology of this site generally consists of northeasterly dipping Cretaceous strata, part of the northern limb of the Wealden-Boulonnais dome.
On the English side, the stratum dip is less than 5°; on the French side, this increases to 20°. Jointing and faulting are present on both sides. On the English side, only minor faults of displacement less than 2 metres (6 ft 7 in) exist; on the French side, displacements of up to 15 metres (49 ft 3 in) are present owing to the Quenocs anticlinal fold. The faults are of limited width, filled with calcite, pyrite and remoulded clay. The increased dip and faulting restricted the selection of routes on the French side. To avoid confusion, microfossil assemblages were used to classify the chalk marl. On the French side, particularly near the coast, the chalk was harder, more brittle and more fractured than on the English side. This led to the adoption of different tunnelling techniques on the two sides.
The Quaternary undersea valley Fosse Dangeard and the Castle Hill landslip at the English portal caused concerns. Identified by the 1964–65 geophysical survey, the Fosse Dangeard is an infilled valley system extending 80 metres (262 ft) below the seabed, 500 metres (1,640 ft) south of the tunnel route in mid-channel. A 1986 survey showed that a tributary crossed the path of the tunnel, so the tunnel route was made as far north and as deep as possible. The English terminal had to be located in the Castle Hill landslip, which consists of displaced and tipped blocks of lower chalk, glauconitic marl and gault debris; the area was therefore stabilised by buttressing and by inserting drainage adits. The service tunnel acted as a pilot preceding the main ones, so that the geology, areas of crushed rock, and zones of high water inflow could be predicted. Exploratory probing took place in the service tunnel, in the form of extensive forward probing, vertical downward probes and sideways probing.
Marine soundings and samplings by Thomé de Gamond were carried out during 1833–67, establishing the seabed depth at a maximum of 55 metres (180 ft) and the continuity of the geological strata (layers). Surveying continued over many years: 166 marine and 70 land boreholes were drilled, and over 4,000 line-kilometres of marine geophysical survey were completed. Surveys were undertaken in 1958–1959, 1964–1965, 1972–1974 and 1986–1988.
The surveying in 1958–59 catered for immersed tube and bridge designs, as well as a bored tunnel, and thus a wide area was investigated. At this time, marine geophysics surveying for engineering projects was in its infancy, with poor positioning and resolution from seismic profiling. The 1964–65 surveys concentrated on a northerly route that left the English coast at Dover harbour; using 70 boreholes, an area of deeply weathered rock with high permeability was located just south of Dover harbour.
Given the previous survey results and access constraints, a more southerly route was investigated in the 1972–73 survey, and the route was confirmed to be feasible. Information for the tunnelling project also came from work before the 1975 cancellation. On the French side at Sangatte, a deep shaft with adits was made. On the English side at Shakespeare Cliff, the government allowed 250 metres (820 ft) of 4.5-metre (15 ft) diameter tunnel to be driven. The actual tunnel alignment, method of excavation and support were essentially the same as the 1975 attempt. In the 1986–87 survey, previous findings were reinforced, and the characteristics of the gault clay and the tunnelling medium (chalk marl that made up 85% of the route) were investigated. Geophysical techniques from the oil industry were employed.
Tunnelling was a major engineering challenge; the only precedent was the undersea Seikan Tunnel in Japan, which opened in 1988. A serious health and safety risk when building tunnels underwater is major water inflow, due to the high hydrostatic pressure from the sea above combined with weak ground conditions. The project also faced the challenge of time: because it was privately funded, early financial return was paramount.
The objective was to construct two 7.6-metre-diameter (25 ft) rail tunnels, 30 metres (98 ft) apart, 50 kilometres (31 mi) in length; a 4.8-metre-diameter (16 ft) service tunnel between the two main ones; pairs of 3.3-metre (10 ft 10 in)-diameter cross-passages linking the rail tunnels to the service one at 375-metre (1,230 ft) spacing; piston relief ducts 2 metres (6 ft 7 in) in diameter connecting the rail tunnels 250 metres (820 ft) apart; two undersea crossover caverns to connect the rail tunnels, with the service tunnel always preceding the main ones by at least 1 kilometre (0.6 mi) to ascertain the ground conditions. There was plenty of experience with excavating through chalk in the mining industry, while the undersea crossover caverns were a complex engineering problem. The French one was based on the Mount Baker Ridge freeway tunnel in Seattle; the UK cavern was dug from the service tunnel ahead of the main ones, to avoid delay.
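The spacing figures above imply rough counts of the connecting structures. The estimate below is an illustration only; the actual numbers of cross-passages and ducts are not stated in this text:

```python
# Approximate counts implied by the quoted spacings (illustrative estimate).
tunnel_length_m = 50_000          # ~50 km of twin rail tunnels
cross_passage_spacing_m = 375     # pairs of passages to the service tunnel
piston_duct_spacing_m = 250       # ducts linking the two rail tunnels

print(tunnel_length_m // cross_passage_spacing_m)  # ~133 cross-passage locations
print(tunnel_length_m // piston_duct_spacing_m)    # ~200 piston relief ducts
```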
Precast segmental linings were used in the main TBM drives, with two different solutions. On the French side, neoprene- and grout-sealed bolted linings made of cast iron or high-strength reinforced concrete were used; on the English side, the main requirement was speed, so bolting of cast-iron lining segments was carried out only in areas of poor geology. In the UK rail tunnels, eight lining segments plus a key segment were used; on the French side, five segments plus a key. On the French side, a 55-metre (180 ft) diameter, 75-metre (246 ft) deep grout-curtained shaft at Sangatte was used for access. On the English side, a marshalling area was sited 140 metres (459 ft) below the top of Shakespeare Cliff; the New Austrian Tunnelling Method (NATM) was first applied in the chalk marl here. On the English side, the land tunnels were driven from Shakespeare Cliff, the same place as the marine tunnels, not from Folkestone. The platform at the base of the cliff was not large enough for all of the drives and, despite environmental objections, tunnel spoil was placed behind a reinforced concrete seawall, on condition that the chalk be placed in an enclosed lagoon to avoid wide dispersal of chalk fines. Owing to limited space, the precast lining factory was on the Isle of Grain in the Thames estuary, using Scottish granite aggregate delivered by ship from the Foster Yeoman coastal super quarry at Glensanda in Loch Linnhe on the west coast of Scotland.
On the French side, owing to the greater permeability to water, earth pressure balance TBMs with open and closed modes were used. The TBMs operated in closed mode for the initial 5 kilometres (3 mi), then in open mode, boring through the chalk marl stratum. This minimised the impact on the ground, allowed high water pressures to be withstood, and alleviated the need to grout ahead of the tunnel. The French effort required five TBMs: two main marine machines, one mainland machine (the short land drives of 3 km (2 mi) allowed one TBM to complete the first drive, then reverse direction and complete the other), and two service tunnel machines. On the English side, the simpler geology allowed faster open-faced TBMs. Six machines were used; all commenced digging from Shakespeare Cliff, three marine-bound and three for the land tunnels. Towards the completion of the undersea drives, the UK TBMs were driven steeply downwards and buried clear of the tunnel; these buried TBMs were then used to provide an electrical earth. The French TBMs then completed the tunnel and were dismantled. A 900 mm (35 in) gauge railway was used on the English side during construction.
In contrast to the English machines, which were given technical names, the French tunnelling machines were all named after women: Brigitte, Europa, Catherine, Virginie, Pascaline, Séverine.
At the end of the tunnelling, one machine was on display at the side of the M20 motorway in Folkestone until Eurotunnel sold it on eBay for £39,999 to a scrap metal merchant. Another machine (T4 "Virginie") still survives on the French side, adjacent to Junction 41 on the A16, in the middle of the D243E3/D243E4 roundabout. On it are the words "hommage aux bâtisseurs du tunnel", meaning "tribute to the builders of the tunnel".
The eleven tunnel boring machines were designed and manufactured through a joint venture between the Robbins Company of Kent, Washington, United States; Markham & Co. of Chesterfield, England; and Kawasaki Heavy Industries of Japan. The TBMs for the service tunnels and main tunnels on the UK side were designed and manufactured by James Howden & Company Ltd, Scotland.
The loading gauge height is 5.75 m (18 ft 10 in).
There are three communication systems in the tunnel.
Power is delivered to the locomotives via an overhead line at 25 kV 50 Hz, with a normal overhead clearance of 6.03 metres (19 ft 9+1⁄2 in). All tunnel services run on electricity, shared equally from English and French sources. There are two substations fed at 400 kV at each terminal, but in an emergency, the tunnel's lighting (about 20,000 light fittings) and the plant can be powered solely from either England or France.
The traditional railway south of London uses a 750 V DC third rail to deliver electricity, but since the opening of High Speed 1 there is no longer any need for tunnel trains to use it. High Speed 1, the tunnel and the LGV Nord all have power provided via overhead catenary at 25 kV 50 Hz. The railways on "classic" lines in Belgium are also electrified by overhead wires, but at 3000 V DC.
A cab signalling system gives information directly to train drivers on a display. There is a train protection system that stops the train if the speed exceeds that indicated on the in-cab display. TVM430, as used on LGV Nord and High Speed 1, is used in the tunnel. The TVM signalling is interconnected with the signalling on the high-speed lines on either side, allowing trains to enter and exit the tunnel system without stopping. The maximum speed is 160 km/h (99 mph).
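In outline, cab signalling with train protection is a continuous comparison of the train's actual speed against the permitted speed shown on the in-cab display, with automatic braking if the limit is exceeded. The sketch below illustrates that supervision loop in general terms only; it is not the real TVM430 logic, and the function name, warning margin and thresholds are invented for illustration:

```python
# Illustrative in-cab speed supervision loop (not the actual TVM430 implementation).
def supervise(actual_kmh: float, permitted_kmh: float, margin_kmh: float = 5.0) -> str:
    """Return the protection system's action for one supervision cycle."""
    if actual_kmh <= permitted_kmh:
        return "normal"                # within the speed shown on the display
    if actual_kmh <= permitted_kmh + margin_kmh:
        return "overspeed warning"     # alert the driver before enforcing
    return "emergency brake"           # stop the train automatically

# The maximum speed in the tunnel is 160 km/h, per this article.
for v in (155, 163, 175):
    print(v, "->", supervise(v, permitted_kmh=160))
```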
Signalling in the tunnel is coordinated from two control centres: the main control centre at the Folkestone terminal and a backup at the Calais terminal, which is staffed at all times and can take over all operations in the event of a breakdown or emergency.
Conventional ballasted tunnel track was ruled out owing to the difficulty of maintenance and lack of stability and precision. The Sonneville International Corporation's track system was chosen as both reliable and cost-effective. The type of track used is known as Low Vibration Track (LVT), which is held in place by gravity and friction. Reinforced concrete blocks of 100 kg support the rails every 60 cm and are held by 12 mm thick closed-cell polymer foam pads placed at the bottom of rubber boots. The boots isolate the blocks from the concrete track bed, damping vibration from passing trains. The low profile of the track also provides extra overhead clearance for larger trains. UIC60 (60 kg/m) rails of 900A grade rest on 6 mm (0.2 in) rail pads, which fit the RN/Sonneville bolted dual leaf-springs. The rails, LVT blocks and their boots with pads were assembled outside the tunnel, in a fully automated process developed by the LVT inventor, Roger Sonneville. About 334,000 Sonneville blocks were made on the Sangatte site.
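The quoted figure of about 334,000 blocks is consistent with the 60 cm support spacing. A back-of-the-envelope check, assuming one block under each of the two rails at every support point and ignoring crossovers and terminal trackwork (so it slightly overshoots):

```python
# Rough check of the ~334,000-block figure from the spacing quoted above.
tunnel_length_m = 50_500   # each rail tunnel, portal to portal
support_spacing_m = 0.60   # rail supports every 60 cm
blocks_per_support = 2     # one block under each rail
rail_tunnels = 2

total_blocks = (tunnel_length_m / support_spacing_m) * blocks_per_support * rail_tunnels
print(f"{total_blocks:,.0f}")  # about 336,667, close to the ~334,000 quoted
```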
Maintenance requirements have been lower than projected. The rails were initially ground annually, or after approximately 100 MGT of traffic. Maintenance is facilitated by two tunnel junctions or crossover facilities, allowing two-way operation in each of the six tunnel segments and safe access for maintenance of one isolated tunnel segment at a time. The two crossovers are the largest artificial undersea caverns ever built, at 150 m (490 ft) long, 10 m (33 ft) high and 18 m (59 ft) wide. The English crossover is 8 km (5.0 mi) from Shakespeare Cliff, and the French crossover is 12 km (7.5 mi) from Sangatte.
The ventilation system maintains the air pressure in the service tunnel higher than in the rail tunnels, so that in the event of a fire, smoke does not enter the service tunnel from the rail tunnels. Two cooling water pipes in each rail tunnel circulate chilled water to remove heat generated by the rail traffic. Pumping stations remove water in the tunnels from rain, seepage, and so on.
During the design stage of the tunnel, engineers found that its aerodynamic properties and the heat generated by high-speed trains as they passed through it would raise the temperature inside the tunnel to 50 °C (122 °F). As well as making the trains "unbearably warm" for passengers, this also presented a risk of equipment failure and track distortion. To cool the tunnel to below 35 °C (95 °F), engineers installed 480 kilometres (300 mi) of 0.61 m (24 in) diameter cooling pipes carrying 84 million litres (18 million imperial gallons) of water. The network—Europe's largest cooling system—was supplied by eight York Titan chillers running on R22, a hydrochlorofluorocarbon (HCFC) refrigerant gas.
Due to R22's ozone depletion potential (ODP) and high global warming potential (GWP), its use is being phased out in developed countries. Since 1 January 2015, it has been illegal in Europe to use HCFCs to service air-conditioning equipment; broken equipment that used HCFCs must be replaced with equipment that does not use it. In 2016, Trane was selected to provide replacement chillers for the tunnel's cooling network. The York chillers were decommissioned and four "next generation" Trane Series E CenTraVac large-capacity (2600 kW to 14,000 kW) chillers were installed—two in Sangatte, France, and two at Shakespeare Cliff, UK. The energy-efficient chillers, using Honeywell's non-flammable, ultra-low GWP R1233zd(E) refrigerant, maintain temperatures at 25 °C (77 °F), and in their first year of operation generated savings of 4.8 GWh—approximately 33%, equating to €500,000 ($585,000)—for tunnel operator Getlink.
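The savings figures above also imply a baseline for the replaced system. The derivation below is an estimate only; the baseline consumption and electricity price are not stated in the text:

```python
# Figures implied by the chiller-replacement numbers above (estimates only).
savings_gwh = 4.8         # first-year energy saving
savings_fraction = 0.33   # quoted as "approximately 33%"
savings_eur = 500_000     # quoted annual cost saving

baseline_gwh = savings_gwh / savings_fraction
price_eur_per_kwh = savings_eur / (savings_gwh * 1e6)  # 1 GWh = 1e6 kWh
print(f"Implied previous annual consumption: ~{baseline_gwh:.1f} GWh")  # ~14.5 GWh
print(f"Implied electricity price: ~{price_eur_per_kwh:.3f} EUR/kWh")   # ~0.104 EUR/kWh
```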
Getlink operates the LeShuttle, a vehicle shuttle service, through the tunnel.
Car shuttle sets have two separate halves: single and double deck. Each half has two loading/unloading wagons and 12 carrier wagons. Eurotunnel's original order was for nine car shuttle sets.
Heavy goods vehicle (HGV) shuttle sets also have two halves, with each half containing one loading wagon, one unloading wagon and 14 carrier wagons. There is a club car behind the leading locomotive, where drivers must stay during the journey. Eurotunnel originally ordered six HGV shuttle sets.
Initially 38 LeShuttle locomotives were commissioned, with one at each end of a shuttle train.
Forty-six Class 92 locomotives, running on both overhead AC and third-rail DC power, were commissioned for hauling freight trains and overnight passenger trains (the Nightstar project, which was abandoned). However, Réseau ferré de France (RFF) does not permit these to run on French railways, so there are plans to certify Alstom Prima II locomotives for use in the tunnel.
Thirty-one Eurostar trains, based on the French TGV, built to UK loading gauge with many modifications for safety within the tunnel, were commissioned, with ownership split between British Rail, French national railways (SNCF) and Belgian national railways (NMBS/SNCB). British Rail ordered seven more for services north of London. Around 2010, Eurostar ordered ten trains from Siemens based on its Velaro product. The Class 374 entered service in 2016 and has been operating through the Channel Tunnel ever since alongside the current Class 373.
Deutsche Bahn (DB) of Germany tried from around 2005 to get permission to run train services to London. At the end of 2009, extensive fire-proofing requirements were dropped and DB received permission to run German Intercity-Express (ICE) test trains through the tunnel. In June 2013 DB was granted access to the tunnel, but the plans were ultimately dropped.
In October 2021, Renfe, the Spanish state railway company, expressed interest in operating a cross-Channel route between Paris and London using some of their existing trains with the intention of competing with Eurostar. No details have been revealed as to which trains would be used.
Between October and November 2023, three more companies expressed interest in potentially running services between London and various European cities.
Diesel locomotives for rescue and shunting work are Eurotunnel Class 0001 and Eurotunnel Class 0031.
The following chart presents the estimated number of passengers and tonnes of freight, respectively, annually transported through the Channel Tunnel since 1994 (M = million).
The tunnel offers three transport services: Eurostar passenger trains, LeShuttle services carrying road vehicles, and through freight trains.
Both the freight and passenger traffic forecasts that led to the construction of the tunnel were overestimated; in particular, Eurotunnel's commissioned forecasts were over-predictions. Although the captured share of Channel crossings was forecast correctly, high competition (especially from budget airlines which expanded rapidly in the 1990s and 2000s) and reduced tariffs led to low revenue. Overall cross-Channel traffic was overestimated.
With the EU's liberalisation of international rail services, the tunnel and High Speed 1 have been open to competition since 2010. A number of operators have been interested in running trains through the tunnel and along High Speed 1 to London. In June 2013, after several years, DB obtained a licence to operate Frankfurt–London trains, which were not expected to run before 2016 because of delays in delivery of the custom-made trains. Plans for the Frankfurt service seem to have been shelved in 2018.
Cross-tunnel passenger traffic volumes peaked at 18.4 million in 1998, dropped to 14.9 million in 2003 and have increased substantially since then.
At the time of the decision about building the tunnel, 15.9 million passengers were predicted for Eurostar trains in the opening year. In 1995, the first full year, actual numbers were a little over 2.9 million, growing to 7.1 million in 2000, then dropping to 6.3 million in 2003. Eurostar was initially limited by the lack of a high-speed connection on the British side. After the completion of High Speed 1 in two stages in 2003 and 2007, traffic increased. In 2008, Eurostar carried 9,113,371 passengers, a 10% increase over the previous year, despite traffic limitations due to the 2008 Channel Tunnel fire. Eurostar passenger numbers continued to increase.
Freight volumes have been erratic, with a major decrease during 1997 due to a closure caused by a fire in a freight shuttle. Freight crossings by sea also increased over the period, indicating that sea crossings can substitute for the tunnel. The tunnel has achieved a market share close to or above Eurotunnel's 1980s predictions, but Eurotunnel's 1990 and 1994 predictions were overestimates.
For through freight trains, the first year prediction was 7.2 million tonnes; the actual 1995 figure was 1.3M tonnes. Through freight volumes peaked in 1998 at 3.1M tonnes. This fell back to 1.21M tonnes in 2007, increasing slightly to 1.24M tonnes in 2008. Together with that carried on freight shuttles, freight growth has occurred since opening, with 6.4M tonnes carried in 1995, 18.4M tonnes recorded in 2003 and 19.6M tonnes in 2007. Numbers fell back in the wake of the 2008 fire.
Eurotunnel's freight subsidiary is Europorte 2. In September 2006 EWS, the UK's largest rail freight operator, announced that owing to the cessation of UK-French government subsidies of £52 million per annum to cover the tunnel "Minimum User Charge" (a subsidy of around £13,000 per train, at a traffic level of 4,000 trains per annum), freight trains would stop running after 30 November.
Shares in Eurotunnel were issued at £3.50 per share on 9 December 1987. By mid-1989 the price had risen to £11.00. Delays and cost overruns led to the price dropping; during demonstration runs in October 1994, it reached an all-time low. Eurotunnel suspended payment on its debt in September 1995 to avoid bankruptcy. In December 1997 the British and French governments extended Eurotunnel's operating concession by 34 years, to 2086. The financial restructuring of Eurotunnel occurred in mid-1998, reducing debt and financial charges. Despite the restructuring, The Economist reported in 1998 that to break even, Eurotunnel would have to increase fares, traffic and market share. A cost-benefit analysis of the tunnel indicated that there were few impacts on the wider economy and few developments associated with the project, and that the British economy would have been better off if the tunnel had not been constructed.
Under the terms of the Concession, Eurotunnel was obliged to investigate a cross-Channel road tunnel. In December 1999 road and rail tunnel proposals were presented to the British and French governments, but it was stressed that there was not enough demand for a second tunnel. A three-way treaty between the United Kingdom, France and Belgium governs border controls, with the establishment of control zones wherein the officers of the other nation may exercise limited customs and law enforcement powers. For most purposes, these are at either end of the tunnel, with the French border controls on the UK side of the tunnel and vice versa. For some city-to-city trains, the train is a control zone. A binational emergency plan coordinates UK and French emergency activities.
In 1999 Eurostar posted its first net profit, having made a loss of £925m in 1995. In 2005 Eurotunnel was described as being in a serious situation. In 2013, operating profits rose 4 percent from 2012, to £54 million.
There is a need for full passport controls, as the tunnel acts as a border between the Schengen Area and the Common Travel Area. There are juxtaposed controls, meaning that passports are checked before boarding by officials of the departing country, and on arrival by officials of the destination country. These control points are only at the main Eurostar stations: French officials operate at London St Pancras, Ebbsfleet International and Ashford International, while British officials operate at Calais-Fréthun, Lille-Europe, Marne-la-Vallée–Chessy, Brussels-South and Paris-Gare du Nord. There are security checks before boarding as well. For the shuttle road-vehicle trains, there are juxtaposed passport controls before boarding the trains.
For Eurostar trains originating south of Paris, there are no passport or security checks before departure, and those trains must stop in Lille for at least 30 minutes so that all passengers can be checked. No checks are performed on board. There were plans for services from Amsterdam, Frankfurt and Cologne to London, but a major reason for cancelling them was the need for a stop in Lille. Direct service from London to Amsterdam started on 4 April 2018; following the building of check-in terminals at Amsterdam and Rotterdam and an intergovernmental agreement, a direct service from the two Dutch cities to London started on 30 April 2020.
The terminal sites are at Cheriton (near Folkestone in the United Kingdom) and Coquelles (near Calais in France). The UK site uses the M20 motorway for access. The terminals are organised with the frontier controls juxtaposed with the entry to the system, allowing travellers to drive onto the motorway in the destination country immediately after leaving the shuttle.
To achieve design output at the French terminal, the shuttles accept cars on double-deck wagons; for flexibility, ramps were placed inside the shuttles to provide access to the top decks. At Folkestone there are 20 kilometres (12 mi) of main-line track, 45 turnouts and eight platforms. At Calais there are 30 kilometres (19 mi) of track and 44 turnouts. At the terminals, the shuttle trains traverse a figure eight to reduce uneven wear on the wheels. There is a freight marshalling yard west of Cheriton at Dollands Moor Freight Yard.
A 1996 report from the European Commission predicted that Kent and Nord-Pas-de-Calais would face increased traffic volumes due to the general growth of cross-Channel traffic and traffic attracted by the tunnel. In Kent, a high-speed rail line to London would transfer traffic from road to rail. Kent's regional development would benefit from the tunnel, but its proximity to London restricts the benefits. Gains are in the traditional industries and are largely dependent on the development of Ashford International railway station, without which Kent would be totally dependent on London's expansion. In Nord-Pas-de-Calais, the tunnel has a strong symbolic effect that has resulted in significant gains in manufacturing.
The removal of a bottleneck by means like the tunnel does not necessarily induce economic gains in all adjacent regions; the image of a region being connected to the European high-speed transport network, together with an active political response, matters more for regional economic development. Some small and medium-sized enterprises located in the immediate vicinity of the terminal have used the opportunity to re-brand their business with positive effects, such as The New Inn at Etchinghill, which was able to commercially exploit its unique selling point as 'the closest pub to the Channel Tunnel'. Tunnel-induced regional development is small compared with general economic growth. The South East of England is likely to benefit developmentally and socially from faster and cheaper transport to continental Europe, but the benefits are unlikely to be equally distributed throughout the region. The overall environmental impact is almost certainly negative.
Since the opening of the tunnel, small positive impacts on the wider economy have been felt, but it is difficult to identify major economic successes directly attributable to the tunnel. Eurotunnel does operate profitably, offering an alternative transport mode unaffected by poor weather. High construction costs delayed profitability, however, and the companies involved in the tunnel's construction and early operation relied on government aid to deal with the accumulated debt.
Illegal immigrants and would-be asylum seekers have used the tunnel to attempt to enter Britain. By 1997, the problem had attracted international press attention, and by 1999, the French Red Cross opened the first migrant centre at Sangatte, using a warehouse once used for tunnel construction; by 2002, it housed up to 1,500 people at a time, most of them trying to get to the UK. In 2001, most came from Afghanistan, Iraq, and Iran, but African countries were also represented.
Eurotunnel, the company that operates the crossing, said that more than 37,000 migrants were intercepted between January and July 2015. Approximately 3,000 migrants, mainly from Ethiopia, Eritrea, Sudan and Afghanistan, were living in the temporary camps erected in Calais at the time of an official count in July 2015. An estimated 3,000 to 5,000 migrants were waiting in Calais for a chance to get to England.
Britain and France operate a system of juxtaposed controls on immigration and customs, where investigations happen before travel. France is part of the Schengen immigration zone, removing border checks in normal times between most EU member states; Britain and the Republic of Ireland form their own separate Common Travel Area immigration zone.
Most illegal immigrants and would-be asylum seekers who got into Britain found some way to ride a freight train. Trucks are loaded onto freight trains. In a few separate instances, migrants stowed away in a liquid chocolate tanker and survived. Although the facilities were fenced, airtight security was deemed impossible; migrants would even jump from bridges onto moving trains. In several incidents people were injured during the crossing; others tampered with railway equipment, causing delays and requiring repairs. Eurotunnel said it was losing £5m per month because of the problem.
In 2001 and 2002, several riots broke out at Sangatte, and groups of migrants (up to 550 in a December 2001 incident) stormed the fences and attempted to enter en masse.
Other migrants seeking permanent UK settlement use the Eurostar passenger train. They may pose as visitors (either obtaining a required visit visa, or denying and falsifying their true intentions to obtain an at-port stamp permitting a stay of up to six months in a year); pose as someone else whose documents they hold; or use forged or counterfeit passports. Such breaches result in refusal of permission to enter the UK, effected by Border Force once the person's identity is fully established, assuming they persist in their application to enter the UK.
Local authorities in both France and the UK called for the closure of the Sangatte migrant camp, and Eurotunnel twice sought an injunction against the centre. As of 2006 the United Kingdom blamed France for allowing Sangatte to open, while France blamed both the UK for its then-lax asylum rules and the EU for not having a uniform immigration policy. The problem's cause célèbre nature even led to journalists being detained as they followed migrants onto railway property.
In 2002, the European Commission told France that it was in breach of European Union rules on the free transfer of goods because of the delays and closures as a result of its poor security. The French government built a double fence, at a cost of £5 million, reducing the numbers of migrants detected each week reaching Britain on goods trains from 250 to almost none. Other measures included CCTV cameras and increased police patrols. At the end of 2002, the Sangatte centre was closed after the UK agreed to absorb some migrants.
On 23 and 30 June 2015, striking workers associated with MyFerryLink damaged sections of track by burning car tyres, cancelling all trains and creating a backlog of vehicles. Hundreds seeking to reach Britain attempted to stow away inside and underneath transport trucks destined for the UK. Extra security measures included a £2 million upgrade of detection technology, £1 million extra for dog searches, and £12 million (over three years) towards a joint fund with France for security surrounding the Port of Calais.
In 2002, a dozen migrants died in crossing attempts. In the two months from June to July 2015, ten migrants died near the French tunnel terminal, during a period when 1,500 attempts to evade security precautions were being made each day.
On 6 July 2015, a migrant died while attempting to climb onto a freight train bound for Britain on the French side of the Channel. The previous month an Eritrean man had been killed under similar circumstances.
During the night of 28 July 2015, one person, aged 25–30, was found dead after a night in which 1,500–2,000 migrants had attempted to enter the Eurotunnel terminal. The body of a Sudanese migrant was subsequently found inside the tunnel. On 4 August 2015, another Sudanese migrant walked nearly the entire length of one of the tunnels. He was arrested close to the British side, after having walked about 30 miles (48 km) through the tunnel.
There have been three fires in the tunnel significant enough to close it, all on heavy goods vehicle (HGV) shuttles, as well as other minor incidents.
On 9 December 1994, during an "invitation only" testing phase, a fire broke out in a Ford Escort car while its owner was loading it onto the upper deck of a tourist shuttle. The fire started at about 10:00, with the shuttle train stationary in the Folkestone terminal, and was put out about 40 minutes later with no passenger injuries.
On 18 November 1996, a fire broke out on an HGV shuttle wagon in the tunnel, but nobody was seriously hurt. The exact cause is unknown, although it was neither a Eurotunnel equipment nor rolling stock problem; it may have been arson of a heavy goods vehicle. It is estimated that the heart of the fire reached 1,000 °C (1,800 °F); the tunnel was severely damaged over 46 metres (151 ft), with some 500 metres (1,640 ft) affected to some extent. Full operation recommenced six months after the fire.
On 21 August 2006, the tunnel was closed for several hours when a truck on an HGV shuttle train caught fire.
On 11 September 2008, a fire occurred in the Channel Tunnel at 13:57 GMT. The incident started on an HGV shuttle train travelling towards France. The event occurred 11 kilometres (6.8 mi) from the French entrance to the tunnel. No one was killed but several people were taken to hospitals suffering from smoke inhalation, and minor cuts and bruises. The tunnel was closed to all traffic, with the undamaged South Tunnel reopening for limited services two days later. Full service resumed on 9 February 2009 after repairs costing €60 million.
On 29 November 2012, the tunnel was closed for several hours after a truck on an HGV shuttle caught fire.
On 17 January 2015, both tunnels were closed following a lorry fire that filled the midsection of Running Tunnel North with smoke. Eurostar cancelled all services. The shuttle train had been heading from Folkestone to Coquelles and stopped adjacent to cross-passage CP 4418 just before 12:30 UTC. Thirty-eight passengers and four members of Eurotunnel staff were evacuated into the service tunnel and transported to France in special STTS road vehicles. They were taken to the Eurotunnel Fire/Emergency Management Centre close to the French portal.
On the night of 19/20 February 1996, about 1,000 passengers became trapped in the Channel Tunnel when Eurostar trains from London broke down owing to failures of electronic circuits caused by snow and ice being deposited and then melting on the circuit boards.
On 3 August 2007, an electrical failure lasting six hours caused passengers to be trapped in the tunnel on a shuttle.
On the evening of 18 December 2009, during the December 2009 European snowfall, five London-bound Eurostar trains failed inside the tunnel, trapping 2,000 passengers for approximately 16 hours, during the coldest temperatures in eight years. A Eurotunnel spokesperson explained that snow had evaded the train's winterisation shields, and the transition from cold air outside to the tunnel's warm atmosphere had melted the snow, resulting in electrical failures. One train was turned back before reaching the tunnel; two trains were hauled out of the tunnel by Eurotunnel Class 0001 diesel locomotives. The blocking of the tunnel led to the implementation of Operation Stack, the transformation of the M20 motorway into a linear car park.
The occasion was the first time a Eurostar train had been evacuated inside the tunnel; the failure of four trains at once was described as "unprecedented". The Channel Tunnel reopened the following morning. Nirj Deva, Member of the European Parliament for South East England, called for Eurostar chief executive Richard Brown to resign over the incidents. An independent report by Christopher Garnett (former CEO of Great North Eastern Railway) and Claude Gressier (a French transport expert) on the 18/19 December 2009 incidents was issued in February 2010, making 21 recommendations.
On 7 January 2010, a Brussels–London Eurostar broke down in the tunnel. The train had 236 passengers on board and was towed to Ashford; other trains that had not yet reached the tunnel were turned back.
The Channel Tunnel Safety Authority is responsible for some aspects of safety regulation in the tunnel; it reports to the Intergovernmental Commission (IGC).
The service tunnel is used for access to technical equipment in cross-passages and equipment rooms, to provide fresh-air ventilation and for emergency evacuation. The Service Tunnel Transport System (STTS) allows fast access to all areas of the tunnel. The service vehicles are rubber-tyred with a buried wire guidance system. The 24 STTS vehicles are used mainly for maintenance but also for firefighting and emergencies. "Pods" with different purposes, up to a payload of 2.5–5 tonnes (2.8–5.5 tons), are inserted into the side of the vehicles. The vehicles cannot turn around within the tunnel and are driven from either end. The maximum speed is 80 km/h (50 mph) when the steering is locked. A fleet of 15 Light Service Tunnel Vehicles (LADOGS) was introduced to supplement the STTS fleet. The LADOGS have a short wheelbase with a 3.4 m (11 ft) turning circle, allowing two-point turns within the service tunnel. Unlike on the STTS vehicles, the steering cannot be locked, and the maximum speed is 50 km/h (31 mph). Pods up to 1 tonne (1.1 tons) can be loaded onto the rear of the vehicles. Drivers in the tunnel sit on the right, and the vehicles drive on the left. Owing to the risk of French personnel driving on their native right side of the road, sensors in the vehicles alert the driver if the vehicle strays to the right.
The three tunnels contain 6,000 tonnes (6,600 tons) of air that needs to be conditioned for comfort and safety. Air is supplied from ventilation buildings at Shakespeare Cliff and Sangatte, with each building capable of providing 100% standby capacity. Supplementary ventilation also exists on either side of the tunnel. In the event of a fire, ventilation is used to keep smoke out of the service tunnel and move smoke in one direction in the main tunnel to give passengers clean air. The tunnel was the first main-line railway tunnel to have special cooling equipment. Heat is generated from traction equipment and drag. The design limit was set at 30 °C (86 °F), using a mechanical cooling system with refrigeration plants on both sides that run chilled water circulating in pipes within the tunnel.
Trains travelling at high speed create piston-effect pressure changes that can affect passenger comfort, ventilation systems, tunnel doors, fans and the structure of the trains, and that create drag on the trains. Piston relief ducts of 2-metre (6 ft 7 in) diameter were chosen to solve the problem, with four ducts per kilometre giving close to optimum results. However, this design led to extreme lateral forces on the trains, so a reduction in train speed was required and restrictors were installed in the ducts.
The safety issue of a possible fire on a passenger-vehicle shuttle garnered much attention, with Eurotunnel noting that fire was the risk attracting the most attention in a 1994 safety case for three reasons: the opposition of ferry companies to passengers being allowed to remain with their cars; Home Office statistics indicating that car fires had doubled in ten years; and the long length of the tunnel. Eurotunnel commissioned the UK Fire Research Station—now part of the Building Research Establishment—to give reports of vehicle fires, and liaised with Kent Fire Brigade to gather vehicle fire statistics over one year. Fire tests took place at the French Mines Research Establishment with a mock wagon used to investigate how cars burned. The wagon door systems are designed to withstand fire inside the wagon for 30 minutes, longer than the transit time of 27 minutes. Wagon air conditioning units help to purge dangerous fumes from inside the wagon before travel. Each wagon has a fire detection and extinguishing system, with sensing of ions or ultraviolet radiation, smoke and gases that can trigger halon gas to quench a fire. Since the HGV wagons are not covered, fire sensors are located on the loading wagon and in the tunnel. A 10-inch (250 mm) water main in the service tunnel provides water to the main tunnels at 125-metre (410 ft) intervals. The ventilation system can control smoke movement. Special arrival sidings accept a train that is on fire, as the train is not allowed to stop whilst on fire in the tunnel unless continuing its journey would lead to a worse outcome. Eurotunnel has banned a wide range of hazardous goods from travelling in the tunnel. Two STTS (Service Tunnel Transportation System) vehicles with firefighting pods are on duty at all times, with a maximum delay of 10 minutes before they reach a burning train.
In 1999, the Kosovo Train for Life passed through the tunnel en route to Pristina, in Kosovo.
In 2009, former F1 racing champion John Surtees drove a Ginetta G50 EV electric sports car prototype from England to France, using the service tunnel, as part of a charity event. He was required to keep to the 50-kilometre-per-hour (30 mph) speed limit. To celebrate the 2014 Tour de France's transfer from its opening stages in Britain to France in July of that year, Chris Froome of Team Sky rode a bicycle through the service tunnel, becoming the first solo rider to do so. The crossing took under an hour, reaching speeds of 65 kilometres per hour (40 mph)—faster than most cross-channel ferries.
Since 2012, French operators Bouygues Telecom, Orange and SFR have covered Running Tunnel South, the tunnel bore normally used for travel from France to Britain.
In January 2014, UK operators EE and Vodafone signed ten-year contracts with Eurotunnel for Running Tunnel North. The agreements would enable both operators' subscribers to use 2G and 3G services. Both EE and Vodafone planned to offer LTE services on the route; EE said it expected to cover the route with LTE connectivity by the summer of 2014. EE and Vodafone would offer Channel Tunnel network coverage for travellers from the UK to France. Eurotunnel said it had also held talks with Three UK but had yet to reach an agreement with the operator.
In May 2014, Eurotunnel announced that they had installed equipment from Alcatel-Lucent to cover Running Tunnel North and simultaneously to provide mobile service (GSM 900/1800 MHz and UMTS 2100 MHz) by EE, O2 and Vodafone. The service of EE and Vodafone commenced on the same date as the announcement. O2 service was expected to be available soon afterwards.
In November 2014, EE announced that it had switched on LTE in September 2014. O2 turned on 2G, 3G and 4G services in November 2014, whilst Vodafone's 4G was due to go live later.
The tunnel also houses the 1,000 MW ElecLink interconnector, which transfers power between the British and French electricity networks. During the night of 31 August/1 September 2021, the 51 km long 320 kV DC cable was switched into service for the first time.
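The quoted ratings allow a back-of-the-envelope electrical calculation. The sketch below assumes the link operates as a symmetric ±320 kV bipole; the text states only "320 kV DC", so the configuration is our assumption.

```python
# Rough electrical arithmetic for the ElecLink figures quoted above
# (1,000 MW, 320 kV DC). Bipolar operation is an assumption, not from the text.

P_W = 1_000e6     # rated power transfer, watts
V_POLE_V = 320e3  # assumed pole voltage, volts

# For a symmetric bipole, power flows at +320 kV and -320 kV (640 kV pole to pole):
current_per_pole_a = P_W / (2 * V_POLE_V)
print(f"Approximate current per pole: {current_per_pole_a:,.0f} A")
```
| [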
{
"paragraph_id": 0,
"text": "The Channel Tunnel (French: Tunnel sous la Manche), also known as the Chunnel, is a 50.46-kilometre (31.35 mi) underwater railway tunnel that connects Folkestone (Kent, England) with Coquelles (Pas-de-Calais, France) beneath the English Channel at the Strait of Dover. It is the only fixed link between the island of Great Britain and the European mainland. At its lowest point, it is 75 metres (246 ft) below the sea bed and 115 metres (377 ft) below sea level. At 37.9 kilometres (23.5 mi), it has the longest underwater section of any tunnel in the world and is the third-longest railway tunnel in the world. The speed limit for trains through the tunnel is 160 kilometres per hour (99 mph). The tunnel is owned and operated by the company Getlink, formerly \"Groupe Eurotunnel\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "The tunnel carries high-speed Eurostar passenger trains, LeShuttle services for road vehicles and freight trains. It connects end-to-end with high-speed railway lines: the LGV Nord in France and High Speed 1 in England. In 2017, rail services carried 10.3 million passengers and 1.22 million tonnes of freight, and the Shuttle carried 10.4 million passengers, 2.6 million cars, 51,000 coaches, and 1.6 million lorries (equivalent to 21.3 million tonnes of freight), compared with 11.7 million passengers, 2.6 million lorries and 2.2 million cars by sea through the Port of Dover.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Plans to build a cross-Channel fixed link appeared as early as 1802, but British political and media pressure motivated by fears of compromising national security had disrupted attempts to build one. An early unsuccessful attempt was made in the late 19th century, on the English side, \"in the hope of forcing the hand of the English Government\". The eventual successful project, organised by Eurotunnel, began construction in 1988 and opened in 1994. Estimated to cost £5.5 billion in 1985, it was at the time the most expensive construction project ever proposed. The cost finally amounted to £9 billion (equivalent to £21.8 billion in 2021), well over budget.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Since its opening, the tunnel has experienced occasional mechanical problems. Both fires and cold weather have temporarily disrupted its operation. Since at least 1997, aggregations of migrants around Calais seeking entry to the United Kingdom, such as through the tunnel, have prompted deterrence and countermeasures.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In 1802, Albert Mathieu-Favier, a French mining engineer, put forward a proposal to tunnel under the English Channel, with illumination from oil lamps, horse-drawn coaches, and an artificial island positioned mid-Channel for changing horses. His design envisaged a bored two-level tunnel with the top tunnel used for transport and the bottom one for groundwater flows.",
"title": "Origins"
},
{
"paragraph_id": 5,
"text": "In 1839, Aimé Thomé de Gamond, a Frenchman, performed the first geological and hydrographical surveys on the Channel between Calais and Dover. He explored several schemes and, in 1856, presented a proposal to Napoleon III for a mined railway tunnel from Cap Gris-Nez to East Wear Point with a port/airshaft on the Varne sandbank at a cost of 170 million francs, or less than £7 million.",
"title": "Origins"
},
{
"paragraph_id": 6,
"text": "In 1865, a deputation led by George Ward Hunt proposed the idea of a tunnel to the Chancellor of the Exchequer of the day, William Ewart Gladstone.",
"title": "Origins"
},
{
"paragraph_id": 7,
"text": "In 1866, Henry Marc Brunel made a survey of the floor of the Strait of Dover. By his results, he proved that the floor was composed of chalk, like the adjoining cliffs, and thus a tunnel was feasible. For this survey, he invented the gravity corer, which is still used in geology.",
"title": "Origins"
},
{
"paragraph_id": 8,
"text": "Around 1866, William Low and Sir John Hawkshaw promoted tunnel ideas, but apart from preliminary geological studies, none were implemented.",
"title": "Origins"
},
{
"paragraph_id": 9,
"text": "An official Anglo-French protocol was established in 1876 for a cross-Channel railway tunnel.",
"title": "Origins"
},
{
"paragraph_id": 10,
"text": "In 1881, British railway entrepreneur Sir Edward Watkin and Alexandre Lavalley, a French Suez Canal contractor, were in the Anglo-French Submarine Railway Company that conducted exploratory work on both sides of the Channel. From June 1882 to March 1883, the British tunnel boring machine tunneled, through chalk, a total of 1,840 m (6,037 ft), while Lavalley used a similar machine to drill 1,669 m (5,476 ft) from Sangatte on the French side. However, the cross-Channel tunnel project was abandoned in 1883, despite this success, after fears raised by the British military that an underwater tunnel might be used as an invasion route. Nevertheless, in 1883, this TBM was used to bore a railway ventilation tunnel—7 feet (2.1 m) in diameter and 6,750 feet (2,060 m) long—between Birkenhead and Liverpool, England, through sandstone under the Mersey River. These early works were encountered more than a century later during the TML project.",
"title": "Origins"
},
{
"paragraph_id": 11,
"text": "A 1907 film, Tunnelling the English Channel by pioneer filmmaker Georges Méliès, depicts King Edward VII and President Armand Fallières dreaming of building a tunnel under the English Channel.",
"title": "Origins"
},
{
"paragraph_id": 12,
"text": "In 1919, during the Paris Peace Conference, British prime minister David Lloyd George repeatedly brought up the idea of a Channel tunnel as a way of reassuring France about British willingness to defend against another German attack. The French did not take the idea seriously, and nothing came of the proposal.",
"title": "Origins"
},
{
"paragraph_id": 13,
"text": "In the 1920s, Winston Churchill advocated for the Channel Tunnel, using that exact name in his essay \"Should Strategists Veto The Tunnel?\" It was published on 27 July 1924 in the Weekly Dispatch, and argued vehemently against the idea that the tunnel could be used by a Continental enemy in an invasion of Britain. Churchill expressed his enthusiasm for the project again in an article for the Daily Mail on 12 February 1936, \"Why Not A Channel Tunnel?\"",
"title": "Origins"
},
{
"paragraph_id": 14,
"text": "There was another proposal in 1929, but nothing came of this discussion and the idea was shelved. Proponents estimated the construction cost at US$150 million. The engineers had addressed the concerns of both nations' military leaders by designing two sumps—one near the coast of each country—that could be flooded at will to block the tunnel but this did not appease military leaders, or dispel concerns about hordes of tourists who would disrupt English life. Military fears continued during the Second World War. After the fall of France, as Britain prepared for an expected German invasion, a Royal Navy officer in the Directorate of Miscellaneous Weapons Development calculated that Hitler could use slave labour to build two Channel tunnels in 18 months. The estimate caused rumours that Germany had already begun digging.",
"title": "Origins"
},
{
"paragraph_id": 15,
"text": "A British film from Gaumont Studios, The Tunnel (also called TransAtlantic Tunnel), was released in 1935 as a science-fiction project concerning the creation of a transatlantic tunnel. It referred briefly to its protagonist, a Mr. McAllan, as having completed a British Channel tunnel successfully in 1940, five years into the future of the film's release.",
"title": "Origins"
},
{
"paragraph_id": 16,
"text": "By 1955, defence arguments had become less relevant due to the dominance of air power, and both the British and French governments supported technical and geological surveys. In 1958 the 1881 workings were cleared in preparation for a £100,000 geological survey by the Channel Tunnel Study Group. 30% of the funding came from Channel Tunnel Co Ltd, the largest shareholder of which was the British Transport Commission, as successor to the South Eastern Railway. A detailed geological survey was carried out in 1964 and 1965.",
"title": "Origins"
},
{
"paragraph_id": 17,
"text": "Although the two countries agreed to build a tunnel in 1964, the phase 1 initial studies and signing of a second agreement to cover phase 2 took until 1973. The plan described a government-funded project to create two tunnels to accommodate car shuttle wagons on either side of a service tunnel. Construction started on both sides of the Channel in 1974.",
"title": "Origins"
},
{
"paragraph_id": 18,
"text": "On 20 January 1975, to the dismay of their French partners, the then-governing Labour Party in Britain cancelled the project due to uncertainty about EEC membership, doubling cost estimates and the general economic crisis at the time. By this time the British tunnel boring machine was ready and the Ministry of Transport had conducted a 300 m (980 ft) experimental drive. (This short tunnel, called Adit A1, was eventually reused as the starting and access point for tunnelling operations from the British side, and remains an access point to the service tunnel.) The cancellation costs were estimated at £17 million. On the French side, a tunnel-boring machine had been installed underground in a stub tunnel. It lay there for 14 years until 1988, when it was sold, dismantled, refurbished and shipped to Turkey, where it was used to drive the Moda tunnel for the Istanbul Sewerage Scheme.",
"title": "Origins"
},
{
"paragraph_id": 19,
"text": "In 1979, the \"Mouse-hole Project\" was suggested when the Conservatives came to power in Britain. The concept was a single-track rail tunnel with a service tunnel but without shuttle terminals. The British government took no interest in funding the project, but British Prime Minister Margaret Thatcher did not object to a privately funded project, although she said she assumed it would be for cars rather than trains. In 1981, Thatcher and French president François Mitterrand agreed to establish a working group to evaluate a privately funded project. In June 1982 the Franco-British study group favoured a twin tunnel to accommodate conventional trains and a vehicle shuttle service. In April 1985 promoters were invited to submit scheme proposals. Four submissions were shortlisted:",
"title": "Origins"
},
{
"paragraph_id": 20,
"text": "The cross-Channel ferry industry protested under the name \"Flexilink\". In 1975 there was no campaign protesting a fixed link, with one of the largest ferry operators (Sealink) being state-owned. Flexilink continued rousing opposition throughout 1986 and 1987. Public opinion strongly favoured a drive-through tunnel, but concerns about ventilation, accident management and driver mesmerisation led to the only shortlisted rail submission, CTG/F-M, being awarded the project in January 1986. Reasons given for the selection included that it caused least disruption to shipping in the Channel and least environmental disruption, was the best protected against terrorism, and was the most likely to attract sufficient private finance.",
"title": "Origins"
},
{
"paragraph_id": 21,
"text": "The British Channel Tunnel Group consisted of two banks and five construction companies, while their French counterparts, France–Manche, consisted of three banks and five construction companies. The banks' role was to advise on financing and secure loan commitments. On 2 July 1985, the groups formed Channel Tunnel Group/France–Manche (CTG/F–M). Their submission to the British and French governments was drawn from the 1975 project, including 11 volumes and a substantial environmental impact statement.",
"title": "Origins"
},
{
"paragraph_id": 22,
"text": "The Anglo-French Treaty on the Channel Tunnel was signed by both governments in Canterbury Cathedral. The Treaty of Canterbury (1986) prepared the Concession for the construction and operation of the Fixed Link by privately owned companies and outlined arbitration methods to be used in the event of disputes. It set up the Intergovernmental Commission (IGC), responsible for monitoring all matters associated with the Tunnel's construction and operation on behalf of the British and French governments, and a Safety Authority to advise the IGC. It drew a land frontier between the two countries in the middle of the Channel tunnel—the first of its kind.",
"title": "Origins"
},
{
"paragraph_id": 23,
"text": "Design and construction were done by the ten construction companies in the CTG/F-M group. The French terminal and boring from Sangatte were done by the five French construction companies in the joint venture group GIE Transmanche Construction. The English Terminal and boring from Shakespeare Cliff were done by the five British construction companies in the Translink Joint Venture. The two partnerships were linked by a bi-national project organisation, TransManche Link (TML). The Maître d'Oeuvre was a supervisory engineering body employed by Eurotunnel under the terms of the concession that monitored the project and reported to the governments and banks.",
"title": "Origins"
},
{
"paragraph_id": 24,
"text": "In France, with its long tradition of infrastructure investment, the project had widespread approval. The French National Assembly approved it unanimously in April 1987, and after a public inquiry, the Senate approved it unanimously in June. In Britain, select committees examined the proposal, making history by holding hearings away from Westminster, in Kent. In February 1987, the third reading of the Channel Tunnel Bill took place in the House of Commons, and passed by 94 votes to 22. The Channel Tunnel Act gained Royal assent and passed into law in July. Parliamentary support for the project came partly from provincial members of Parliament on the basis of promises of regional Eurostar through train services that never materialised; the promises were repeated in 1996 when the contract for construction of the Channel Tunnel Rail Link was awarded.",
"title": "Origins"
},
{
"paragraph_id": 25,
"text": "The tunnel is a build-own-operate-transfer (BOOT) project with a concession. TML would design and build the tunnel, but financing was through a separate legal entity, Eurotunnel. Eurotunnel absorbed CTG/F-M and signed a construction contract with TML, but the British and French governments controlled final engineering and safety decisions, now in the hands of the Channel Tunnel Safety Authority. The British and French governments gave Eurotunnel a 55-year operating concession (from 1987; extended by 10 years to 65 years in 1993) to repay loans and pay dividends. A Railway Usage Agreement was signed between Eurotunnel, British Rail and SNCF guaranteeing future revenue in exchange for the railways obtaining half of the tunnel's capacity.",
"title": "Origins"
},
{
"paragraph_id": 26,
"text": "Private funding for such a complex infrastructure project was of unprecedented scale. Initial equity of £45 million was raised by CTG/F-M, increased by £206 million private institutional placement, £770 million was raised in a public share offer that included press and television advertisements, a syndicated bank loan and letter of credit arranged £5 billion. Privately financed, the total investment costs at 1985 prices were £2.6 billion. At the 1994 completion actual costs were, in 1985 prices, £4.65 billion: an 80% cost overrun. The cost overrun was partly due to enhanced safety, security, and environmental demands. Financing costs were 140% higher than forecast.",
"title": "Origins"
},
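The financing figures above can be cross-checked with a few lines of arithmetic; this is an illustrative sketch, with all names our own.

```python
# Consistency check of the quoted financing and cost figures (1985 prices).

equity_gbp = 45e6 + 206e6 + 770e6  # initial equity + placement + public offer
debt_gbp = 5e9                     # syndicated bank loan and letter of credit

budget_gbp = 2.6e9                 # planned total investment cost
actual_gbp = 4.65e9                # actual cost at the 1994 completion

overrun_pct = (actual_gbp / budget_gbp - 1) * 100
print(f"Equity raised: £{equity_gbp / 1e9:.2f}bn; debt arranged: £{debt_gbp / 1e9:.1f}bn")
print(f"Cost overrun: {overrun_pct:.0f}%")  # ~79%, matching the quoted '80% cost overrun'
```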
{
"paragraph_id": 27,
"text": "Working from both the English and French sides of the Channel, eleven tunnel boring machines or TBMs cut through chalk marl to construct two rail tunnels and a service tunnel. The vehicle shuttle terminals are at Cheriton (part of Folkestone) and Coquelles, and are connected to the English M20 and French A16 motorways respectively.",
"title": "Origins"
},
{
"paragraph_id": 28,
"text": "Tunnelling commenced in 1988, and the tunnel began operating in 1994. In 1985 prices, the total construction cost was £4.65 billion (equivalent to £13 billion in 2015), an 80% cost overrun. At the peak of construction 15,000 people were employed with daily expenditure over £3 million. Ten workers, eight of them British, were killed during construction between 1987 and 1993, most in the first few months of boring.",
"title": "Origins"
},
{
"paragraph_id": 29,
"text": "A 50 mm (2.0 in) diameter pilot hole allowed the service tunnel to break through without ceremony on 30 October 1990. On 1 December 1990, Englishman Graham Fagg and Frenchman Phillippe Cozette broke through the service tunnel with the media watching. Eurotunnel completed the tunnel on time. (A BBC TV television commentator called Graham Fagg \"the first man to cross the Channel by land for 8000 years\".) The two tunnelling efforts met each other with an offset of only 36.2 cm (14.3 in). A Paddington Bear soft toy was chosen by British tunnellers as the first item to pass through to their French counterparts when the two sides met.",
"title": "Origins"
},
{
"paragraph_id": 30,
"text": "The tunnel was officially opened, one year later than originally planned, by Queen Elizabeth II and the French president, François Mitterrand, in a ceremony held in Calais on 6 May 1994. The Queen travelled through the tunnel to Calais on a Eurostar train, which stopped nose to nose with the train that carried President Mitterrand from Paris. Following the ceremony President Mitterrand and the Queen travelled on Le Shuttle to a similar ceremony in Folkestone. A full public service did not start for several months. The first freight train, however, ran on 1 June 1994 and carried Rover and Mini cars being exported to Italy.",
"title": "Origins"
},
{
"paragraph_id": 31,
"text": "The Channel Tunnel Rail Link (CTRL), now called High Speed 1, runs 69 miles (111 km) from St Pancras railway station in London to the tunnel portal at Folkestone in Kent. It cost £5.8 billion. On 16 September 2003 the prime minister, Tony Blair, opened the first section of High Speed 1, from Folkestone to north Kent. On 6 November 2007, the Queen officially opened High Speed 1 and St Pancras International station, replacing the original slower link to Waterloo International railway station. High Speed 1 trains travel at up to 300 km/h (186 mph), the journey from London to Paris taking 2 hours 15 minutes, to Brussels 1 hour 51 minutes.",
"title": "Origins"
},
{
"paragraph_id": 32,
"text": "In 1994, the American Society of Civil Engineers elected the tunnel as one of the seven modern Wonders of the World. In 1995, the American magazine Popular Mechanics published the results.",
"title": "Origins"
},
{
"paragraph_id": 33,
"text": "The opening was phased for various services offered as the Channel Tunnel Safety Authority, the IGC, gave permission for various services to begin at several dates over the period 1994/1995 but start-up dates were a few days later.",
"title": "Opening dates"
},
{
"paragraph_id": 34,
"text": "Site investigation undertaken in the 20 years before construction confirmed earlier speculations that a tunnel could be bored through a chalk marl stratum. The chalk marl is conducive to tunnelling, with impermeability, ease of excavation and strength. The chalk marl runs along the entire length of the English side of the tunnel, but on the French side a length of 5 kilometres (3.1 mi) has variable and difficult geology. The tunnel consists of three bores: two 7.6-metre (24 ft 11 in) diameter rail tunnels, 30 metres (98 ft) apart, 50 kilometres (31 mi) in length with a 4.8-metre (15 ft 9 in) diameter service tunnel in between. The three bores are connected by cross-passages and piston relief ducts. The service tunnel was used as a pilot tunnel, boring ahead of the main tunnels to determine the conditions. English access was provided at Shakespeare Cliff and French access from a shaft at Sangatte. The French side used five tunnel boring machines (TBMs), and the English side six. The service tunnel uses Service Tunnel Transport System (STTS) and Light Service Tunnel Vehicles (LADOGS). Fire safety was a critical design issue.",
"title": "Engineering"
},
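A minimal sketch of the excavated volume implied by these bore dimensions, treating each bore as a plain cylinder and ignoring the cross-passages, piston relief ducts and crossover caverns:

```python
import math

# Approximate in-situ volume of the three bores from the quoted dimensions.

LENGTH_M = 50_000           # quoted length of each bore
RAIL_BORE_DIA_M = 7.6       # each of the two rail tunnels
SERVICE_BORE_DIA_M = 4.8    # the central service tunnel

def bore_volume_m3(diameter_m: float, length_m: float) -> float:
    """Cylindrical volume of a single bore, in cubic metres."""
    return math.pi * (diameter_m / 2) ** 2 * length_m

total_m3 = 2 * bore_volume_m3(RAIL_BORE_DIA_M, LENGTH_M) + bore_volume_m3(SERVICE_BORE_DIA_M, LENGTH_M)
print(f"Total volume of the three bores: {total_m3 / 1e6:.1f} million m3")  # ~5.4 million m3
```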
{
"paragraph_id": 35,
"text": "Between the portals at Beussingue and Castle Hill the tunnel is 50.5 kilometres (31 mi) long, with 3.3 kilometres (2 mi) under land on the French side and 9.3 kilometres (6 mi) on the UK side, and 37.9 kilometres (24 mi) under sea. It is the third-longest rail tunnel in the world, behind the Gotthard Base Tunnel in Switzerland and the Seikan Tunnel in Japan, but with the longest under-sea section. The average depth is 45 metres (148 ft) below the seabed. On the UK side, of the expected 5 million cubic metres (6.5×10^ cu yd) of spoil approximately 1 million cubic metres (1.3×10^ cu yd) was used for fill at the terminal site, and the remainder was deposited at Lower Shakespeare Cliff behind a seawall, reclaiming 74 acres (30 ha) of land. This land was then made into the Samphire Hoe Country Park. Environmental impact assessment did not identify any major risks for the project, and further studies into safety, noise, and air pollution were overall positive. However, environmental objections were raised over a high-speed link to London.",
"title": "Engineering"
},
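Taking the spoil figures in the preceding paragraph at face value, a one-line division gives the average fill depth behind the seawall; rough arithmetic, not a surveyed figure.

```python
# Implied average fill depth at Samphire Hoe from the quoted spoil figures.

spoil_m3 = 5e6 - 1e6             # expected spoil minus the portion reused at the terminal
reclaimed_area_m2 = 30 * 10_000  # 30 ha (74 acres) in square metres

print(f"Implied average fill depth: {spoil_m3 / reclaimed_area_m2:.0f} m")  # ~13 m
```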
{
"paragraph_id": 36,
"text": "Successful tunnelling required a sound understanding of topography and geology and the selection of the best rock strata through which to dig. The geology of this site generally consists of northeasterly dipping Cretaceous strata, part of the northern limb of the Wealden-Boulonnais dome. Characteristics include:",
"title": "Engineering"
},
{
"paragraph_id": 37,
"text": "On the English side, the stratum dip is less than 5°; on the French side, this increases to 20°. Jointing and faulting are present on both sides. On the English side, only minor faults of displacement less than 2 metres (6 ft 7 in) exist; on the French side, displacements of up to 15 metres (49 ft 3 in) are present owing to the Quenocs anticlinal fold. The faults are of limited width, filled with calcite, pyrite and remolded clay. The increased dip and faulting restricted the selection of routes on the French side. To avoid confusion, microfossil assemblages were used to classify the chalk marl. On the French side, particularly near the coast, the chalk was harder, more brittle and more fractured than on the English side. This led to the adoption of different tunnelling techniques on the two sides.",
"title": "Engineering"
},
{
"paragraph_id": 38,
"text": "The Quaternary undersea valley Fosse Dangeard, and Castle Hill landslip at the English portal, caused concerns. Identified by the 1964–65 geophysical survey, the Fosse Dangeard is an infilled valley system extending 80 metres (262 ft) below the seabed, 500 metres (1,640 ft) south of the tunnel route in mid-channel. A 1986 survey showed that a tributary crossed the path of the tunnel, and so the tunnel route was made as far north and deep as possible. The English terminal had to be located in the Castle Hill landslip, which consists of displaced and tipping blocks of lower chalk, glauconitic marl and gault debris. Thus the area was stabilised by buttressing and inserting drainage adits. The service tunnel acted as a pilot preceding the main ones, so that the geology, areas of crushed rock, and zones of high water inflow could be predicted. Exploratory probing took place in the service tunnel, in the form of extensive forward probing, vertical downward probes and sideways probing.",
"title": "Engineering"
},
{
"paragraph_id": 39,
"text": "Marine soundings and samplings by Thomé de Gamond were carried out during 1833–67, establishing the seabed depth at a maximum of 55 metres (180 ft) and the continuity of geological strata (layers). Surveying continued over many years, with 166 marine and 70 land-deep boreholes being drilled and over 4,000 line kilometres of the marine geophysical survey completed. Surveys were undertaken in 1958–1959, 1964–1965, 1972–1974 and 1986–1988.",
"title": "Engineering"
},
{
"paragraph_id": 40,
"text": "The surveying in 1958–59 catered for immersed tube and bridge designs, as well as a bored tunnel, and thus a wide area was investigated. At this time, marine geophysics surveying for engineering projects was in its infancy, with poor positioning and resolution from seismic profiling. The 1964–65 surveys concentrated on a northerly route that left the English coast at Dover harbour; using 70 boreholes, an area of deeply weathered rock with high permeability was located just south of Dover harbour.",
"title": "Engineering"
},
{
"paragraph_id": 41,
"text": "Given the previous survey results and access constraints, a more southerly route was investigated in the 1972–73 survey, and the route was confirmed to be feasible. Information for the tunnelling project also came from work before the 1975 cancellation. On the French side at Sangatte, a deep shaft with adits was made. On the English side at Shakespeare Cliff, the government allowed 250 metres (820 ft) of 4.5-metre (15 ft) diameter tunnel to be driven. The actual tunnel alignment, method of excavation and support were essentially the same as the 1975 attempt. In the 1986–87 survey, previous findings were reinforced, and the characteristics of the gault clay and the tunnelling medium (chalk marl that made up 85% of the route) were investigated. Geophysical techniques from the oil industry were employed.",
"title": "Engineering"
},
{
"paragraph_id": 42,
"text": "Tunnelling was a major engineering challenge, with the only precedent being the undersea Seikan Tunnel in Japan, which opened in 1988. A serious health and safety risk with building tunnels underwater is major water inflow due to the high hydrostatic pressure from the sea above, under weak ground conditions. The tunnel also had the challenge of time: being privately funded, the early financial return was paramount.",
"title": "Engineering"
},
{
"paragraph_id": 43,
"text": "The objective was to construct two 7.6-metre-diameter (25 ft) rail tunnels, 30 metres (98 ft) apart, 50 kilometres (31 mi) in length; a 4.8-metre-diameter (16 ft) service tunnel between the two main ones; pairs of 3.3-metre (10 ft 10 in)-diameter cross-passages linking the rail tunnels to the service one at 375-metre (1,230 ft) spacing; piston relief ducts 2 metres (6 ft 7 in) in diameter connecting the rail tunnels 250 metres (820 ft) apart; two undersea crossover caverns to connect the rail tunnels, with the service tunnel always preceding the main ones by at least 1 kilometre (0.6 mi) to ascertain the ground conditions. There was plenty of experience with excavating through chalk in the mining industry, while the undersea crossover caverns were a complex engineering problem. The French one was based on the Mount Baker Ridge freeway tunnel in Seattle; the UK cavern was dug from the service tunnel ahead of the main ones, to avoid delay.",
"title": "Engineering"
},
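The quoted spacings imply the approximate number of connecting structures over the bored length; a small sketch (end effects and the crossover caverns are ignored):

```python
# Counts implied by the cross-passage and piston-duct spacings quoted above.

LENGTH_M = 50_000
CROSS_PASSAGE_SPACING_M = 375  # rail tunnels to service tunnel
PISTON_DUCT_SPACING_M = 250    # rail tunnel to rail tunnel

print(f"Cross-passages: ~{LENGTH_M // CROSS_PASSAGE_SPACING_M}")    # ~133
print(f"Piston relief ducts: ~{LENGTH_M // PISTON_DUCT_SPACING_M}") # ~200
```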
{
"paragraph_id": 44,
"text": "Precast segmental linings in the main TBM drives were used, but two different solutions were used. On the French side, neoprene and grout sealed bolted linings made of cast iron or high-strength reinforced concrete were used; on the English side, the main requirement was for speed so bolting of cast-iron lining segments was only carried out in areas of poor geology. In the UK rail tunnels, eight lining segments plus a key segment were used; in the French side, five segments plus a key. On the French side, a 55-metre (180 ft) diameter 75-metre (246 ft) deep grout-curtained shaft at Sangatte was used for access. On the English side, a marshalling area was 140 metres (459 ft) below the top of Shakespeare Cliff, the New Austrian Tunnelling method (NATM) was first applied in the chalk marl here. On the English side, the land tunnels were driven from Shakespeare Cliff—the same place as the marine tunnels—not from Folkestone. The platform at the base of the cliff was not large enough for all of the drives and, despite environmental objections, tunnel spoil was placed behind a reinforced concrete seawall, on condition of placing the chalk in an enclosed lagoon, to avoid wide dispersal of chalk fines. Owing to limited space, the precast lining factory was on the Isle of Grain in the Thames estuary, which used Scottish granite aggregate delivered by ship from the Foster Yeoman coastal super quarry at Glensanda in Loch Linnhe on the west coast of Scotland.",
"title": "Engineering"
},
{
"paragraph_id": 45,
"text": "On the French side, owing to the greater permeability to water, earth pressure balance TBMs with open and closed modes was used. The TBMs were of a closed nature during the initial 5 kilometres (3 mi), but then operated as open, boring through the chalk marl stratum. This minimised the impact to the ground, allowed high water pressures to be withstood and it also alleviated the need to grout ahead of the tunnel. The French effort required five TBMs: two main marine machines, one mainland machine (the short land drives of 3 km (2 mi) allowed one TBM to complete the first drive then reverse direction and complete the other), and two service tunnel machines. On the English side, the simpler geology allowed faster open-faced TBMs. Six machines were used; all commenced digging from Shakespeare Cliff, three marine-bound and three for the land tunnels. Towards the completion of the undersea drives, the UK TBMs were driven steeply downwards and buried clear of the tunnel. These buried TBMs were then used to provide an electrical earth. The French TBMs then completed the tunnel and were dismantled. A 900 mm (35 in) gauge railway was used on the English side during construction.",
"title": "Engineering"
},
{
"paragraph_id": 46,
"text": "In contrast to the English machines, which were given technical names, the French tunnelling machines were all named after women: Brigitte, Europa, Catherine, Virginie, Pascaline, Séverine.",
"title": "Engineering"
},
{
"paragraph_id": 47,
"text": "At the end of the tunnelling, one machine was on display at the side of the M20 motorway in Folkestone until Eurotunnel sold it on eBay for £39,999 to a scrap metal merchant. Another machine (T4 \"Virginie\") still survives on the French side, adjacent to Junction 41 on the A16, in the middle of the D243E3/D243E4 roundabout. On it are the words \"hommage aux bâtisseurs du tunnel\", meaning \"tribute to the builders of the tunnel\".",
"title": "Engineering"
},
{
"paragraph_id": 48,
"text": "The eleven tunnel boring machines were designed and manufactured through a joint venture between the Robbins Company of Kent, Washington, United States; Markham & Co. of Chesterfield, England; and Kawasaki Heavy Industries of Japan. The TBMs for the service tunnels and main tunnels on the UK side were designed and manufactured by James Howden & Company Ltd, Scotland.",
"title": "Engineering"
},
{
"paragraph_id": 49,
"text": "The loading gauge height is 5.75 m (18 ft 10 in).",
"title": "Engineering"
},
{
"paragraph_id": 50,
"text": "There are three communication systems:",
"title": "Engineering"
},
{
"paragraph_id": 51,
"text": "Power is delivered to the locomotives via an overhead line at 25 kV 50 Hz. with a normal overhead clearance of 6.03 metres (19 ft 9+1⁄2 in). All tunnel services run on electricity, shared equally from English and French sources. There are two substations fed at 400 kV at each terminal, but in an emergency, the tunnel's lighting (about 20,000 light fittings) and the plant can be powered solely from either England or France.",
"title": "Engineering"
},
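For illustration, Ohm's law gives the line current a single locomotive draws from the 25 kV supply described above. The 5.6 MW locomotive rating used here is an assumed example figure, not taken from the text.

```python
# Line current for one locomotive on the 25 kV 50 Hz overhead supply.
# The 5.6 MW power figure is an assumption for illustration.

SUPPLY_VOLTAGE_V = 25_000
loco_power_w = 5.6e6

current_a = loco_power_w / SUPPLY_VOLTAGE_V  # ignores power factor for simplicity
print(f"Line current for a {loco_power_w / 1e6:.1f} MW locomotive: {current_a:.0f} A")
```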
{
"paragraph_id": 52,
"text": "The traditional railway south of London uses a 750 V DC third rail to deliver electricity, but since the opening of High Speed 1 there is no longer any need for tunnel trains to use it. High Speed 1, the tunnel and the LGV Nord all have power provided via overhead catenary at 25 kV 50 Hz. The railways on \"classic\" lines in Belgium are also electrified by overhead wires, but at 3000 V DC.",
"title": "Engineering"
},
{
"paragraph_id": 53,
"text": "A cab signalling system gives information directly to train drivers on a display. There is a train protection system that stops the train if the speed exceeds that indicated on the in-cab display. TVM430, as used on LGV Nord and High Speed 1, is used in the tunnel. The TVM signalling is interconnected with the signalling on the high-speed lines on either side, allowing trains to enter and exit the tunnel system without stopping. The maximum speed is 160 km/h (99 mph).",
"title": "Engineering"
},
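The principle of the in-cab speed supervision described above can be sketched as a toy model; the real TVM430 logic is far more elaborate, so this shows only the basic idea that exceeding the displayed permitted speed triggers an intervention.

```python
# Toy model of cab-signalling speed supervision (not the actual TVM430 algorithm).

def supervise(actual_kmh: float, permitted_kmh: float, margin_kmh: float = 5.0) -> str:
    """Return the intervention state for one supervision cycle."""
    if actual_kmh > permitted_kmh + margin_kmh:
        return "EMERGENCY_BRAKE"
    if actual_kmh > permitted_kmh:
        return "WARNING"
    return "OK"

print(supervise(158, 160))  # OK: within the permitted speed
print(supervise(163, 160))  # WARNING: over the limit, inside the tolerance margin
print(supervise(170, 160))  # EMERGENCY_BRAKE: train protection intervenes
```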
{
"paragraph_id": 54,
"text": "Signalling in the tunnel is coordinated from two control centres: The main control centre at the Folkestone terminal, and a backup at the Calais terminal, which is staffed at all times and can take over all operations in the event of a breakdown or emergency.",
"title": "Engineering"
},
{
"paragraph_id": 55,
"text": "Conventional ballasted tunnel track was ruled out owing to the difficulty of maintenance and lack of stability and precision. The Sonneville International Corporation's track system was chosen because it was reliable and also cost-effective. The type of track used is known as Low Vibration Track (LVT), which is held in place by gravity and friction. Reinforced concrete blocks of 100 kg support the rails every 60 cm and are held by 12 mm thick closed-cell polymer foam pads placed at the bottom of rubber boots. The latter separates the blocks' mass movements from the concrete. The track provides extra overhead clearance for larger trains. UIC60 (60 kg/m) rails of 900A grade rest on 6 mm (0.2 in) rail pads, which fit the RN/Sonneville bolted dual leaf-springs. The rails, LVT-blocks and their boots with pads were assembled outside the tunnel, in a fully automated process developed by the LVT inventor, Roger Sonneville. About 334,000 Sonneville blocks were made on the Sangatte site.",
"title": "Engineering"
},
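The quoted block count can be reproduced from the spacing, assuming one support block under each rail (our reading of the twin-block LVT design):

```python
# Consistency check of the Low Vibration Track figures quoted above.

TUNNEL_LENGTH_M = 50_000  # approximate length of each rail tunnel
BLOCK_SPACING_M = 0.60    # one block under each rail every 60 cm
RAILS_PER_TUNNEL = 2
RAIL_TUNNELS = 2

blocks = (TUNNEL_LENGTH_M / BLOCK_SPACING_M) * RAILS_PER_TUNNEL * RAIL_TUNNELS
print(f"Implied number of blocks: {blocks:,.0f}")  # ~333,000, close to the quoted 334,000
```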
{
"paragraph_id": 56,
"text": "Maintenance activities are less than projected. The rails had initially been ground on a yearly basis or after approximately 100MGT of traffic. Maintenance is facilitated by the existence of two tunnel junctions or crossover facilities, allowing for two-way operation in each of the six tunnel segments, and providing safe access for maintenance of one isolated tunnel segment at a time. The two crossovers are the largest artificial undersea caverns ever built, at 150 m (490 ft) long, 10 m (33 ft) high and 18 m (59 ft) wide. The English crossover is 8 km (5.0 mi) from Shakespeare Cliff, and the French crossover is 12 km (7.5 mi) from Sangatte.",
"title": "Engineering"
},
{
"paragraph_id": 57,
"text": "The ventilation system maintains the air pressure in the service tunnel higher than in the rail tunnels, so that in the event of a fire, smoke does not enter the service tunnel from the rail tunnels. Two cooling water pipes in each rail tunnel circulate chilled water to remove heat generated by the rail traffic. Pumping stations remove water in the tunnels from rain, seepage, and so on.",
"title": "Engineering"
},
{
"paragraph_id": 58,
"text": "During the design stage of the tunnel, engineers found that its aerodynamic properties and the heat generated by high-speed trains as they passed through it would raise the temperature inside the tunnel to 50 °C (122 °F). As well as making the trains \"unbearably warm\" for passengers, this also presented a risk of equipment failure and track distortion. To cool the tunnel to below 35 °C (95 °F), engineers installed 480 kilometres (300 mi) of 0.61 m (24 in) diameter cooling pipes carrying 84 million litres (18 million imperial gallons) of water. The network—Europe's largest cooling system—was supplied by eight York Titan chillers running on R22, a hydrochlorofluorocarbon (HCFC) refrigerant gas.",
"title": "Engineering"
},
{
"paragraph_id": 59,
"text": "Due to R22's ozone depletion potential (ODP) and high global warming potential (GWP), its use is being phased out in developed countries. Since 1 January 2015, it has been illegal in Europe to use HCFCs to service air-conditioning equipment; broken equipment that used HCFCs must be replaced with equipment that does not use it. In 2016, Trane was selected to provide replacement chillers for the tunnel's cooling network. The York chillers were decommissioned and four \"next generation\" Trane Series E CenTraVac large-capacity (2600 kW to 14,000 kW) chillers were installed—two in Sangatte, France, and two at Shakespeare Cliff, UK. The energy-efficient chillers, using Honeywell's non-flammable, ultra-low GWP R1233zd(E) refrigerant, maintain temperatures at 25 °C (77 °F), and in their first year of operation generated savings of 4.8 GWh—approximately 33%, equating to €500,000 ($585,000)—for tunnel operator Getlink.",
"title": "Engineering"
},
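What those savings imply about the system's previous consumption and the effective electricity price follows from simple division; illustrative arithmetic only.

```python
# Implications of the quoted first-year chiller savings (4.8 GWh, ~33%, EUR 500,000).

savings_kwh = 4.8e6        # 4.8 GWh expressed in kWh
savings_fraction = 0.33
savings_eur = 500_000

baseline_kwh = savings_kwh / savings_fraction
price_eur_per_kwh = savings_eur / savings_kwh
print(f"Implied previous annual consumption: ~{baseline_kwh / 1e6:.1f} GWh")
print(f"Implied electricity price: ~EUR {price_eur_per_kwh:.3f}/kWh")
```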
{
"paragraph_id": 60,
"text": "Getlink operates the LeShuttle, a vehicle shuttle service, through the tunnel.",
"title": "Operators"
},
{
"paragraph_id": 61,
"text": "Car shuttle sets have two separate halves: single and double deck. Each half has two loading/unloading wagons and 12 carrier wagons. Eurotunnel's original order was for nine car shuttle sets.",
"title": "Operators"
},
{
"paragraph_id": 62,
"text": "Heavy goods vehicle (HGV) shuttle sets also have two halves, with each half containing one loading wagon, one unloading wagon and 14 carrier wagons. There is a club car behind the leading locomotive, where drivers must stay during the journey. Eurotunnel originally ordered six HGV shuttle sets.",
"title": "Operators"
},
{
"paragraph_id": 63,
"text": "Initially 38 LeShuttle locomotives were commissioned, with one at each end of a shuttle train.",
"title": "Operators"
},
{
"paragraph_id": 64,
"text": "Forty-six Class 92 locomotives for hauling freight trains and overnight passenger trains (the Nightstar project, which was abandoned) were commissioned, running on both overhead AC and third-rail DC power. However, RFF does not let these run on French railways, so there are plans to certify Alstom Prima II locomotives for use in the tunnel.",
"title": "Operators"
},
{
"paragraph_id": 65,
"text": "Thirty-one Eurostar trains, based on the French TGV, built to UK loading gauge with many modifications for safety within the tunnel, were commissioned, with ownership split between British Rail, French national railways (SNCF) and Belgian national railways (NMBS/SNCB). British Rail ordered seven more for services north of London. Around 2010, Eurostar ordered ten trains from Siemens based on its Velaro product. The Class 374 entered service in 2016 and has been operating through the Channel Tunnel ever since alongside the current Class 373.",
"title": "Operators"
},
{
"paragraph_id": 66,
"text": "Germany (DB) has since around 2005 tried to get permission to run train services to London. At the end of 2009, extensive fire-proofing requirements were dropped and DB received permission to run German Intercity-Express (ICE) test trains through the tunnel. In June 2013 DB was granted access to the tunnel, but these plans were ultimately dropped.",
"title": "Operators"
},
{
"paragraph_id": 67,
"text": "In October 2021, Renfe, the Spanish state railway company, expressed interest in operating a cross-Channel route between Paris and London using some of their existing trains with the intention of competing with Eurostar. No details have been revealed as to which trains would be used.",
"title": "Operators"
},
{
"paragraph_id": 68,
"text": "Between October and November 2023, three more companies expressed interest in potentially running services between London and various European cities:",
"title": "Operators"
},
{
"paragraph_id": 69,
"text": "Diesel locomotives for rescue and shunting work are Eurotunnel Class 0001 and Eurotunnel Class 0031.",
"title": "Operators"
},
{
"paragraph_id": 70,
"text": "The following chart presents the estimated number of passengers and tonnes of freight, respectively, annually transported through the Channel Tunnel since 1994 (M = million).",
"title": "Operation"
},
{
"paragraph_id": 71,
"text": "",
"title": "Operation"
},
{
"paragraph_id": 72,
"text": "Transport services offered by the tunnel are as follows:",
"title": "Operation"
},
{
"paragraph_id": 73,
"text": "Both the freight and passenger traffic forecasts that led to the construction of the tunnel were overestimated; in particular, Eurotunnel's commissioned forecasts were over-predictions. Although the captured share of Channel crossings was forecast correctly, high competition (especially from budget airlines which expanded rapidly in the 1990s and 2000s) and reduced tariffs led to low revenue. Overall cross-Channel traffic was overestimated.",
"title": "Operation"
},
{
"paragraph_id": 74,
"text": "With the EU's liberalisation of international rail services, the tunnel and High Speed 1 have been open to competition since 2010. There have been a number of operators interested in running trains through the tunnel and along High Speed 1 to London. In June 2013, after several years, DB obtained a license to operate Frankfurt – London trains, not expected to run before 2016 because of delivery delays of the custom-made trains. Plans for the service to Frankfurt seem to have been shelved in 2018.",
"title": "Operation"
},
{
"paragraph_id": 75,
"text": "Cross-tunnel passenger traffic volumes peaked at 18.4 million in 1998, dropped to 14.9 million in 2003 and has increased substantially since then.",
"title": "Operation"
},
{
"paragraph_id": 76,
"text": "At the time of the decision about building the tunnel, 15.9 million passengers were predicted for Eurostar trains in the opening year. In 1995, the first full year, actual numbers were a little over 2.9 million, growing to 7.1 million in 2000, then dropping to 6.3 million in 2003. Eurostar was initially limited by the lack of a high-speed connection on the British side. After the completion of High Speed 1 in two stages in 2003 and 2007, traffic increased. In 2008, Eurostar carried 9,113,371 passengers, a 10% increase over the previous year, despite traffic limitations due to the 2008 Channel Tunnel fire. Eurostar passenger numbers continued to increase.",
"title": "Operation"
},
{
"paragraph_id": 77,
"text": "Freight volumes have been erratic, with a major decrease during 1997 due to a closure caused by a fire in a freight shuttle. Freight crossings increased over the period, indicating the substitutability of the tunnel by sea crossings. The tunnel has achieved a market share close to or above Eurotunnel's 1980s predictions but Eurotunnel's 1990 and 1994 predictions were overestimates.",
"title": "Operation"
},
{
"paragraph_id": 78,
"text": "For through freight trains, the first year prediction was 7.2 million tonnes; the actual 1995 figure was 1.3M tonnes. Through freight volumes peaked in 1998 at 3.1M tonnes. This fell back to 1.21M tonnes in 2007, increasing slightly to 1.24M tonnes in 2008. Together with that carried on freight shuttles, freight growth has occurred since opening, with 6.4M tonnes carried in 1995, 18.4M tonnes recorded in 2003 and 19.6M tonnes in 2007. Numbers fell back in the wake of the 2008 fire.",
"title": "Operation"
},
{
"paragraph_id": 79,
"text": "Eurotunnel's freight subsidiary is Europorte 2. In September 2006 EWS, the UK's largest rail freight operator, announced that owing to the cessation of UK-French government subsidies of £52 million per annum to cover the tunnel \"Minimum User Charge\" (a subsidy of around £13,000 per train, at a traffic level of 4,000 trains per annum), freight trains would stop running after 30 November.",
"title": "Operation"
},
{
"paragraph_id": 80,
"text": "Shares in Eurotunnel were issued at £3.50 per share on 9 December 1987. By mid-1989 the price had risen to £11.00. Delays and cost overruns led to the price dropping; during demonstration runs in October 1994, it reached an all-time low. Eurotunnel suspended payment on its debt in September 1995 to avoid bankruptcy. In December 1997 the British and French governments extended Eurotunnel's operating concession by 34 years, to 2086. The financial restructuring of Eurotunnel occurred in mid-1998, reducing debt and financial charges. Despite the restructuring, The Economist reported in 1998 that to break even Eurotunnel would have to increase fares, traffic and market share for sustainability. A cost-benefit analysis of the tunnel indicated that there were few impacts on the wider economy and few developments associated with the project and that the British economy would have been better off if it had not been constructed.",
"title": "Operation"
},
{
"paragraph_id": 81,
"text": "Under the terms of the Concession, Eurotunnel was obliged to investigate a cross-Channel road tunnel. In December 1999 road and rail tunnel proposals were presented to the British and French governments, but it was stressed that there was not enough demand for a second tunnel. A three-way treaty between the United Kingdom, France and Belgium governs border controls, with the establishment of control zones wherein the officers of the other nation may exercise limited customs and law enforcement powers. For most purposes, these are at either end of the tunnel, with the French border controls on the UK side of the tunnel and vice versa. For some city-to-city trains, the train is a control zone. A binational emergency plan coordinates UK and French emergency activities.",
"title": "Operation"
},
{
"paragraph_id": 82,
"text": "In 1999 Eurostar posted its first net profit, having made a loss of £925m in 1995. In 2005 Eurotunnel was described as being in a serious situation. In 2013, operating profits rose 4 percent from 2012, to £54 million.",
"title": "Operation"
},
{
"paragraph_id": 83,
"text": "There is a need for full passport controls, as the tunnel acts as a border between the Schengen Area and the Common Travel Area. There are juxtaposed controls, meaning that passports are checked before boarding by officials of the departing country, and on arrival by officials of the destination country. These control points are only at the main Eurostar stations: French officials operate at London St Pancras, Ebbsfleet International and Ashford International, while British officials operate at Calais-Fréthun, Lille-Europe, Marne-la-Vallée–Chessy, Brussels-South and Paris-Gare du Nord. There are security checks before boarding as well. For the shuttle road-vehicle trains, there are juxtaposed passport controls before boarding the trains.",
"title": "Operation"
},
{
"paragraph_id": 84,
"text": "For Eurostar trains originating south of Paris, there is no passport and security check before departure, and those trains must stop in Lille at least 30 minutes to allow all passengers to be checked. No checks are performed on board. There have been plans for services from Amsterdam, Frankfurt and Cologne to London, but a major reason to cancel them was the need for a stop in Lille. Direct service from London to Amsterdam started on 4 April 2018; following the building of check-in terminals at Amsterdam and Rotterdam and the intergovernmental agreement, a direct service from the two Dutch cities to London started on 30 April 2020.",
"title": "Operation"
},
{
"paragraph_id": 85,
"text": "The terminals' sites are at Cheriton (near Folkestone in the United Kingdom) and Coquelles (near Calais in France). The UK site uses the M20 motorway for access. The terminals are organised with the frontier controls juxtaposed with the entry to the system to allow travellers to go onto the motorway at the destination country immediately after leaving the shuttle.",
"title": "Terminals"
},
{
"paragraph_id": 86,
"text": "To achieve design output at the French terminal, the shuttles accept cars on double-deck wagons; for flexibility, ramps were placed inside the shuttles to provide access to the top decks. At Folkestone there are 20 kilometres (12 mi) of the main-line track, 45 turnouts and eight platforms. At Calais there are 30 kilometres (19 mi) of track and 44 turnouts. At the terminals, the shuttle trains traverse a figure eight to reduce uneven wear on the wheels. There is a freight marshalling yard west of Cheriton at Dollands Moor Freight Yard.",
"title": "Terminals"
},
{
"paragraph_id": 87,
"text": "A 1996 report from the European Commission predicted that Kent and Nord-Pas de Calais had to face increased traffic volumes due to the general growth of cross-Channel traffic and traffic attracted by the tunnel. In Kent, a high-speed rail line to London would transfer traffic from road to rail. Kent's regional development would benefit from the tunnel, but being so close to London restricts the benefits. Gains are in the traditional industries and are largely dependent on the development of Ashford International railway station, without which Kent would be totally dependent on London's expansion. Nord-Pas-de-Calais enjoys a strong internal symbolic effect of the Tunnel which results in significant gains in manufacturing.",
"title": "Regional impact"
},
{
"paragraph_id": 88,
"text": "The removal of a bottleneck by means like the tunnel does not necessarily induce economic gains in all adjacent regions. The image of a region being connected to European high-speed transport and active political response is more important for regional economic development. Some small-medium enterprises located in the immediate vicinity of the terminal have used the opportunity to re-brand the profile of their business with positive effects, such as The New Inn at Etchinghill which was able to commercially exploit its unique selling point as being 'the closest pub to the Channel Tunnel'. Tunnel-induced regional development is small compared to general economic growth. The South East of England is likely to benefit developmentally and socially from faster and cheaper transport to continental Europe, but the benefits are unlikely to be equally distributed throughout the region. The overall environmental impact is almost certainly negative.",
"title": "Regional impact"
},
{
"paragraph_id": 89,
"text": "Since the opening of the tunnel, small positive impacts on the wider economy have been felt, but it is difficult to identify major economic successes directly attributed to the tunnel. The Eurotunnel does operate profitably, offering an alternative transportation mode unaffected by poor weather. High costs of construction did delay profitability, however, and companies involved in the tunnel's construction and operation early in operation relied on government aid to deal with the accumulated debt.",
"title": "Regional impact"
},
{
"paragraph_id": 90,
"text": "Illegal immigrants and would-be asylum seekers have used the tunnel to attempt to enter Britain. By 1997, the problem had attracted international press attention, and by 1999, the French Red Cross opened the first migrant centre at Sangatte, using a warehouse once used for tunnel construction; by 2002, it housed up to 1,500 people at a time, most of them trying to get to the UK. In 2001, most came from Afghanistan, Iraq, and Iran, but African countries were also represented.",
"title": "Illegal immigration"
},
{
"paragraph_id": 91,
"text": "Eurotunnel, the company that operates the crossing, said that more than 37,000 migrants were intercepted between January and July 2015. Approximately 3,000 migrants, mainly from Ethiopia, Eritrea, Sudan and Afghanistan, were living in the temporary camps erected in Calais at the time of an official count in July 2015. An estimated 3,000 to 5,000 migrants were waiting in Calais for a chance to get to England.",
"title": "Illegal immigration"
},
{
"paragraph_id": 92,
"text": "Britain and France operate a system of juxtaposed controls on immigration and customs, where investigations happen before travel. France is part of the Schengen immigration zone, removing border checks in normal times between most EU member states; Britain and the Republic of Ireland form their own separate Common Travel Area immigration zone.",
"title": "Illegal immigration"
},
{
"paragraph_id": 93,
"text": "Most illegal immigrants and would-be asylum seekers who got into Britain found some way to ride a freight train. Trucks are loaded onto freight trains. In a few instances, migrants stowed away in a liquid chocolate tanker and managed to survive, spread across several attempts. Although the facilities were fenced, airtight security was deemed impossible; migrants would even jump from bridges onto moving trains. In several incidents people were injured during the crossing; others tampered with railway equipment, causing delays and requiring repairs. Eurotunnel said it was losing £5m per month because of the problem.",
"title": "Illegal immigration"
},
{
"paragraph_id": 94,
"text": "In 2001 and 2002, several riots broke out at Sangatte, and groups of migrants (up to 550 in a December 2001 incident) stormed the fences and attempted to enter en masse.",
"title": "Illegal immigration"
},
{
"paragraph_id": 95,
"text": "Other migrants seeking permanent UK settlement use the Eurostar passenger train. They may purport to be visitors (whether to be issued with a required visit visa, or deny and falsify their true intentions to obtain a maximum of 6-months-in-a-year at-port stamp); purport to be someone else whose documents they hold, or used forged or counterfeit passports. Such breaches result in refusal of permission to enter the UK, affected by Border Force after such a person's identity is fully established, assuming they persist in their application to enter the UK.",
"title": "Illegal immigration"
},
{
"paragraph_id": 96,
"text": "Local authorities in both France and the UK called for the closure of the Sangatte migrant camp, and Eurotunnel twice sought an injunction against the centre. As of 2006 the United Kingdom blamed France for allowing Sangatte to open, and France blamed both the UK for its then lax asylum rules/law, and the EU for not having a uniform immigration policy. The problem's cause célèbre nature even lead to journalists being detained as they followed migrants onto railway property.",
"title": "Illegal immigration"
},
{
"paragraph_id": 97,
"text": "In 2002, the European Commission told France that it was in breach of European Union rules on the free transfer of goods because of the delays and closures as a result of its poor security. The French government built a double fence, at a cost of £5 million, reducing the numbers of migrants detected each week reaching Britain on goods trains from 250 to almost none. Other measures included CCTV cameras and increased police patrols. At the end of 2002, the Sangatte centre was closed after the UK agreed to absorb some migrants.",
"title": "Illegal immigration"
},
{
"paragraph_id": 98,
"text": "On 23 and 30 June 2015, striking workers associated with MyFerryLink damaged sections of track by burning car tires, cancelling all trains and creating a backlog of vehicles. Hundreds seeking to reach Britain attempted to stow away inside and underneath transport trucks destined for the UK. Extra security measures included a £2 million upgrade of detection technology, £1 million extra for dog searches, and £12 million (over three years) towards a joint fund with France for security surrounding the Port of Calais.",
"title": "Illegal immigration"
},
{
"paragraph_id": 99,
"text": "In 2002, a dozen migrants died in crossing attempts. In the two months from June to July 2015, ten migrants died near the French tunnel terminal, during a period when 1,500 attempts to evade security precautions were being made each day.",
"title": "Illegal immigration"
},
{
"paragraph_id": 100,
"text": "On 6 July 2015, a migrant died while attempting to climb onto a freight train while trying to reach Britain from the French side of the Channel. The previous month an Eritrean man was killed under similar circumstances.",
"title": "Illegal immigration"
},
{
"paragraph_id": 101,
"text": "During the night of 28 July 2015, one person, aged 25–30, was found dead after a night in which 1,500–2,000 migrants had attempted to enter the Eurotunnel terminal. The body of a Sudanese migrant was subsequently found inside the tunnel. On 4 August 2015, another Sudanese migrant walked nearly the entire length of one of the tunnels. He was arrested close to the British side, after having walked about 30 miles (48 km) through the tunnel.",
"title": "Illegal immigration"
},
{
"paragraph_id": 102,
"text": "There have been three fires in the tunnel, all on the heavy goods vehicle (HGV) shuttles, that were significant enough to close the tunnel, as well as other minor incidents.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 103,
"text": "On 9 December 1994, during an \"invitation only\" testing phase, a fire broke out in a Ford Escort car while its owner was loading it onto the upper deck of a tourist shuttle. The fire started at about 10:00, with the shuttle train stationary in the Folkestone terminal, and was put out about 40 minutes later with no passenger injuries.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 104,
"text": "On 18 November 1996, a fire broke out on an HGV shuttle wagon in the tunnel, but nobody was seriously hurt. The exact cause is unknown, although it was neither a Eurotunnel equipment nor rolling stock problem; it may have been due to arson of a heavy goods vehicle. It is estimated that the heart of the fire reached 1,000 °C (1,800 °F), with the tunnel severely damaged over 46 metres (151 ft), with some 500 metres (1,640 ft) affected to some extent. Full operation recommenced six months after the fire.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 105,
"text": "On 21 August 2006, the tunnel was closed for several hours when a truck on an HGV shuttle train caught fire.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 106,
"text": "On 11 September 2008, a fire occurred in the Channel Tunnel at 13:57 GMT. The incident started on an HGV shuttle train travelling towards France. The event occurred 11 kilometres (6.8 mi) from the French entrance to the tunnel. No one was killed but several people were taken to hospitals suffering from smoke inhalation, and minor cuts and bruises. The tunnel was closed to all traffic, with the undamaged South Tunnel reopening for limited services two days later. Full service resumed on 9 February 2009 after repairs costing €60 million.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 107,
"text": "On 29 November 2012, the tunnel was closed for several hours after a truck on an HGV shuttle caught fire.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 108,
"text": "On 17 January 2015, both tunnels were closed following a lorry fire that filled the midsection of Running Tunnel North with smoke. Eurostar cancelled all services. The shuttle train had been heading from Folkestone to Coquelles and stopped adjacent to cross-passage CP 4418 just before 12:30 UTC. 38 passengers and four members of Eurotunnel staff were evacuated into the service tunnel and transported to France in special STTS road vehicles. They were taken to the Eurotunnel Fire/Emergency Management Centre close to the French portal.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 109,
"text": "On the night of 19/20 February 1996, about 1,000 passengers became trapped in the Channel Tunnel when Eurostar trains from London broke down owing to failures of electronic circuits caused by snow and ice being deposited and then melting on the circuit boards.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 110,
"text": "On 3 August 2007, an electrical failure lasting six hours caused passengers to be trapped in the tunnel on a shuttle.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 111,
"text": "On the evening of 18 December 2009, during the December 2009 European snowfall, five London-bound Eurostar trains failed inside the tunnel, trapping 2,000 passengers for approximately 16 hours, during the coldest temperatures in eight years. A Eurotunnel spokesperson explained that snow had evaded the train's winterisation shields, and the transition from cold air outside to the tunnel's warm atmosphere had melted the snow, resulting in electrical failures. One train was turned back before reaching the tunnel; two trains were hauled out of the tunnel by Eurotunnel Class 0001 diesel locomotives. The blocking of the tunnel led to the implementation of Operation Stack, the transformation of the M20 motorway into a linear car park.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 112,
"text": "The occasion was the first time that a Eurostar train was evacuated inside the tunnel; the failing of four at once was described as \"unprecedented\". The Channel Tunnel reopened the following morning. Nirj Deva, Member of the European Parliament for South East England, had called for Eurostar chief executive Richard Brown to resign over the incidents. An independent report by Christopher Garnett (former CEO of Great North Eastern Railway) and Claude Gressier (a French transport expert) on the 18/19 December 2009 incidents was issued in February 2010, making 21 recommendations.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 113,
"text": "On 7 January 2010, a Brussels–London Eurostar broke down in the tunnel. The train had 236 passengers on board and was towed to Ashford; other trains that had not yet reached the tunnel were turned back.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 114,
"text": "The Channel Tunnel Safety Authority is responsible for some aspects of safety regulation in the tunnel; it reports to the Intergovernmental Commission (IGC).",
"title": "Mechanical incidents"
},
{
"paragraph_id": 115,
"text": "The service tunnel is used for access to technical equipment in cross-passages and equipment rooms, to provide fresh-air ventilation and for emergency evacuation. The Service Tunnel Transport System (STTS) allows fast access to all areas of the tunnel. The service vehicles are rubber-tired with a buried wire guidance system. The 24 STTS vehicles are used mainly for maintenance but also for firefighting and emergencies. \"Pods\" with different purposes, up to a payload of 2.5–5 tonnes (2.8–5.5 tons), are inserted into the side of the vehicles. The vehicles cannot turn around within the tunnel and are driven from either end. The maximum speed is 80 km/h (50 mph) when the steering is locked. A fleet of 15 Light Service Tunnel Vehicles (LADOGS) was introduced to supplement the STTSs. The LADOGS has a short wheelbase with a 3.4 m (11 ft) turning circle, allowing two-point turns within the service tunnel. Steering cannot be locked like the STTS vehicles, and maximum speed is 50 km/h (31 mph). Pods up to 1 tonne (1.1 tons) can be loaded onto the rear of the vehicles. Drivers in the tunnel sit on the right, and the vehicles drive on the left. Owing to the risk of French personnel driving on their native right side of the road, sensors in the vehicles alert the driver if the vehicle strays to the right side.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 116,
"text": "The three tunnels contain 6,000 tonnes (6,600 tons) of air that needs to be conditioned for comfort and safety. Air is supplied from ventilation buildings at Shakespeare Cliff and Sangatte, with each building capable of providing 100% standby capacity. Supplementary ventilation also exists on either side of the tunnel. In the event of a fire, ventilation is used to keep smoke out of the service tunnel and move smoke in one direction in the main tunnel to give passengers clean air. The tunnel was the first main-line railway tunnel to have special cooling equipment. Heat is generated from traction equipment and drag. The design limit was set at 30 °C (86 °F), using a mechanical cooling system with refrigeration plants on both sides that run chilled water circulating in pipes within the tunnel.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 117,
"text": "Trains travelling at high speed create piston effect pressure changes that can affect passenger comfort, ventilation systems, tunnel doors, fans and the structure of the trains, and which drag on the trains. Piston relief ducts of 2-metre (6 ft 7 in) diameter were chosen to solve the problem, with 4 ducts per kilometre to give close to optimum results. However, this design led to extreme lateral forces on the trains, so a reduction in train speed was required and restrictors were installed in the ducts.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 118,
"text": "The safety issue of a possible fire on a passenger-vehicle shuttle garnered much attention, with Eurotunnel noting that fire was the risk attracting the most attention in a 1994 safety case for three reasons: the opposition of ferry companies to passengers being allowed to remain with their cars; Home Office statistics indicating that car fires had doubled in ten years; and the long length of the tunnel. Eurotunnel commissioned the UK Fire Research Station—now part of the Building Research Establishment—to give reports of vehicle fires, and liaised with Kent Fire Brigade to gather vehicle fire statistics over one year. Fire tests took place at the French Mines Research Establishment with a mock wagon used to investigate how cars burned. The wagon door systems are designed to withstand fire inside the wagon for 30 minutes, longer than the transit time of 27 minutes. Wagon air conditioning units help to purge dangerous fumes from inside the wagon before travel. Each wagon has a fire detection and extinguishing system, with sensing of ions or ultraviolet radiation, smoke and gases that can trigger halon gas to quench a fire. Since the HGV wagons are not covered, fire sensors are located on the loading wagon and in the tunnel. A 10-inch (250 mm) water main in the service tunnel provides water to the main tunnels at 125-metre (410 ft) intervals. The ventilation system can control smoke movement. Special arrival sidings accept a train that is on fire, as the train is not allowed to stop whilst on fire in the tunnel unless continuing its journey would lead to a worse outcome. Eurotunnel has banned a wide range of hazardous goods from travelling in the tunnel. Two STTS (Service Tunnel Transportation System) vehicles with firefighting pods are on duty at all times, with a maximum delay of 10 minutes before they reach a burning train.",
"title": "Mechanical incidents"
},
{
"paragraph_id": 119,
"text": "In 1999, the Kosovo Train for Life passed through the tunnel en route to Pristina, in Kosovo.",
"title": "Unusual traffic"
},
{
"paragraph_id": 120,
"text": "In 2009, former F1 racing champion John Surtees drove a Ginetta G50 EV electric sports car prototype from England to France, using the service tunnel, as part of a charity event. He was required to keep to the 50-kilometre-per-hour (30 mph) speed limit. To celebrate the 2014 Tour de France's transfer from its opening stages in Britain to France in July of that year, Chris Froome of Team Sky rode a bicycle through the service tunnel, becoming the first solo rider to do so. The crossing took under an hour, reaching speeds of 65 kilometres per hour (40 mph)—faster than most cross-channel ferries.",
"title": "Unusual traffic"
},
{
"paragraph_id": 121,
"text": "Since 2012, French operators Bouygues Telecom, Orange and SFR have covered Running Tunnel South, the tunnel bore normally used for travel from France to Britain.",
"title": "Mobile network coverage"
},
{
"paragraph_id": 122,
"text": "In January 2014, UK operators EE and Vodafone signed ten-year contracts with Eurotunnel for Running Tunnel North. The agreements will enable both operators' subscribers to use 2G and 3G services. Both EE and Vodafone planned to offer LTE services on the route; EE said it expected to cover the route with LTE connectivity by the summer of 2014. EE and Vodafone will offer Channel Tunnel network coverage for travellers from the UK to France. Eurotunnel said it also held talks with Three UK but has yet to reach an agreement with the operator.",
"title": "Mobile network coverage"
},
{
"paragraph_id": 123,
"text": "In May 2014, Eurotunnel announced that they had installed equipment from Alcatel-Lucent to cover Running Tunnel North and simultaneously to provide mobile service (GSM 900/1800 MHz and UMTS 2100 MHz) by EE, O2 and Vodafone. The service of EE and Vodafone commenced on the same date as the announcement. O2 service was expected to be available soon afterwards.",
"title": "Mobile network coverage"
},
{
"paragraph_id": 124,
"text": "In November 2014, EE announced that it had previously switched on LTE earlier in September 2014. O2 turned on 2G, 3G and 4G services in November 2014, whilst Vodafone's 4G was due to go live later.",
"title": "Mobile network coverage"
},
{
"paragraph_id": 125,
"text": "The tunnel also houses the 1,000 MW ElecLink interconnector to transfer power between the British and French electricity networks. During the night of 31 August/1 September 2021, the 51 km-long 320 kV DC cable was switched into service for the first time.",
"title": "Other (non-transport) services"
}
] | The Channel Tunnel, also known as the Chunnel, is a 50.46-kilometre (31.35 mi) underwater railway tunnel that connects Folkestone with Coquelles beneath the English Channel at the Strait of Dover. It is the only fixed link between the island of Great Britain and the European mainland. At its lowest point, it is 75 metres (246 ft) below the sea bed and 115 metres (377 ft) below sea level. At 37.9 kilometres (23.5 mi), it has the longest underwater section of any tunnel in the world and is the third-longest railway tunnel in the world. The speed limit for trains through the tunnel is 160 kilometres per hour (99 mph). The tunnel is owned and operated by the company Getlink, formerly "Groupe Eurotunnel". The tunnel carries high-speed Eurostar passenger trains, LeShuttle services for road vehicles and freight trains. It connects end-to-end with high-speed railway lines: the LGV Nord in France and High Speed 1 in England. In 2017, rail services carried 10.3 million passengers and 1.22 million tonnes of freight, and the Shuttle carried 10.4 million passengers, 2.6 million cars, 51,000 coaches, and 1.6 million lorries, compared with 11.7 million passengers, 2.6 million lorries and 2.2 million cars by sea through the Port of Dover. Plans to build a cross-Channel fixed link appeared as early as 1802, but British political and media pressure motivated by fears of compromising national security had disrupted attempts to build one. An early unsuccessful attempt was made in the late 19th century, on the English side, "in the hope of forcing the hand of the English Government". The eventual successful project, organised by Eurotunnel, began construction in 1988 and opened in 1994. Estimated to cost £5.5 billion in 1985, it was at the time the most expensive construction project ever proposed. The cost finally amounted to £9 billion, well over budget. Since its opening, the tunnel has experienced occasional mechanical problems. Both fires and cold weather have temporarily disrupted its operation. Since at least 1997, aggregations of migrants around Calais seeking entry to the United Kingdom, such as through the tunnel, have prompted deterrence and countermeasures. | 2001-05-21T21:13:30Z | 2023-12-31T11:09:47Z | [
"Template:Channel Tunnel RDT",
"Template:Inflation/year",
"Template:Citation needed",
"Template:Abbr",
"Template:YouTube",
"Template:Quote box",
"Template:Routemap",
"Template:Cite news",
"Template:Channel tunnel",
"Template:Lang-fr",
"Template:Convert",
"Template:See also",
"Template:Nowrap",
"Template:Stn",
"Template:Cite report",
"Template:Wikicite",
"Template:Use British English",
"Template:Ref label",
"Template:Reflist",
"Template:Refend",
"Template:Commons",
"Template:Short description",
"Template:Infobox tunnel",
"Template:Sfn",
"Template:Cite journal",
"Template:Cite press release",
"Template:GeoGroup",
"Template:Webarchive",
"Template:Eurostar navbox",
"Template:Legend",
"Template:Cite web",
"Template:Cite magazine",
"Template:Dead link",
"Template:Refbegin",
"Template:Inflation",
"Template:Cvt",
"Template:Main",
"Template:Graph:Chart",
"Template:Note label",
"Template:Inconsistent",
"Template:Citation",
"Template:Wikisource",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Main article",
"Template:Cite book",
"Template:Cbignore",
"Template:PM20"
] | https://en.wikipedia.org/wiki/Channel_Tunnel |
5,703 | Cyberpunk | Cyberpunk is a subgenre of science fiction in a dystopian futuristic setting that tends to focus on a "combination of lowlife and high tech", featuring futuristic technological and scientific achievements, such as artificial intelligence and cyberware, juxtaposed with societal collapse, dystopia or decay. Much of cyberpunk is rooted in the New Wave science fiction movement of the 1960s and 1970s, when writers like Philip K. Dick, Michael Moorcock, Roger Zelazny, John Brunner, J. G. Ballard, Philip José Farmer and Harlan Ellison examined the impact of drug culture, technology, and the sexual revolution while avoiding the utopian tendencies of earlier science fiction.
Comics exploring cyberpunk themes began appearing as early as Judge Dredd, first published in 1977. Released in 1984, William Gibson's influential debut novel Neuromancer helped solidify cyberpunk as a genre, drawing influence from punk subculture and early hacker culture. Frank Miller's Ronin is an example of a cyberpunk graphic novel. Other influential cyberpunk writers included Bruce Sterling and Rudy Rucker. The Japanese cyberpunk subgenre began in 1982 with the debut of Katsuhiro Otomo's manga series Akira, with its 1988 anime film adaptation (also directed by Otomo) later popularizing the subgenre.
Early films in the genre include Ridley Scott's 1982 film Blade Runner, one of several of Philip K. Dick's works that have been adapted into films (in this case, Do Androids Dream of Electric Sheep?). The "first cyberpunk television series" was the TV series Max Headroom from 1987, playing in a futuristic dystopia ruled by an oligarchy of television networks, and where computer hacking played a central role in many story lines. The films Johnny Mnemonic (1995) and New Rose Hotel (1998), both based upon short stories by William Gibson, flopped commercially and critically, while The Matrix trilogy (1999–2003) and Judge Dredd (1995) were some of the most successful cyberpunk films.
Newer cyberpunk media includes Blade Runner 2049 (2017), a sequel to the original 1982 film; Dredd (2012), which was not a sequel to the original movie; Upgrade (2018); Alita: Battle Angel (2019), based on the 1990s Japanese manga Battle Angel Alita; the 2018 Netflix TV series Altered Carbon, based on Richard K. Morgan's 2002 novel of the same name; the 2020 remake of 1997 role-playing video game Final Fantasy VII; and the video game Cyberpunk 2077 (2020), based on R. Talsorian Games's 1988 tabletop role-playing game Cyberpunk.
Lawrence Person has attempted to define the content and ethos of the cyberpunk literary movement, stating:
Classic cyberpunk characters were marginalized, alienated loners who lived on the edge of society in generally dystopic futures where daily life was impacted by rapid technological change, an ubiquitous datasphere of computerized information, and invasive modification of the human body.
Cyberpunk plots often center on conflict among artificial intelligences, hackers, and megacorporations, and tend to be set in a near-future Earth, rather than in the far-future settings or galactic vistas found in novels such as Isaac Asimov's Foundation or Frank Herbert's Dune. The settings are usually post-industrial dystopias but tend to feature extraordinary cultural ferment and the use of technology in ways never anticipated by its original inventors ("the street finds its own uses for things"). Much of the genre's atmosphere echoes film noir, and written works in the genre often use techniques from detective fiction. Some sources argue that cyberpunk has shifted from a literary movement to a mode of science fiction, owing to the limited number of writers and its transition into a more generalized cultural formation.
The origins of cyberpunk are rooted in the New Wave science fiction movement of the 1960s and 1970s, where New Worlds, under the editorship of Michael Moorcock, began inviting and encouraging stories that examined new writing styles, techniques, and archetypes. Reacting to conventional storytelling, New Wave authors attempted to present a world where society coped with a constant upheaval of new technology and culture, generally with dystopian outcomes. Writers like Roger Zelazny, J. G. Ballard, Philip José Farmer, Samuel R. Delany, and Harlan Ellison often examined the impact of drug culture, technology, and the sexual revolution with an avant-garde style influenced by the Beat Generation (especially William S. Burroughs's science fiction writing), Dadaism, and their own ideas. Ballard attacked the idea that stories should follow the "archetypes" popular since the time of Ancient Greece, and the assumption that these would somehow be the same ones that would call to modern readers, as Joseph Campbell argued in The Hero with a Thousand Faces. Instead, Ballard wanted to write a new myth for the modern reader, a style with "more psycho-literary ideas, more meta-biological and meta-chemical concepts, private time systems, synthetic psychologies and space-times, more of the sombre half-worlds one glimpses in the paintings of schizophrenics."
This had a profound influence on a new generation of writers, some of whom would come to call their movement "cyberpunk". One, Bruce Sterling, later said:
In the circle of American science fiction writers of my generation—cyberpunks and humanists and so forth—[Ballard] was a towering figure. We used to have bitter struggles over who was more Ballardian than whom. We knew we were not fit to polish the man's boots, and we were scarcely able to understand how we could get to a position to do work which he might respect or stand, but at least we were able to see the peak of achievement that he had reached.
Ballard, Zelazny, and the rest of the New Wave were seen by the subsequent generation as delivering more "realism" to science fiction, and they attempted to build on this.
Samuel R. Delany's 1968 novel Nova is also considered one of the major forerunners of the cyberpunk movement. It prefigures, for instance, cyberpunk's staple trope of humans interfacing with computers via implants. Writer William Gibson claimed to be greatly influenced by Delany, and his novel Neuromancer includes allusions to Nova.
Similarly influential, and generally cited as proto-cyberpunk, is the Philip K. Dick novel Do Androids Dream of Electric Sheep?, first published in 1968. Presenting precisely the same general feeling of a dystopian, post-economic-apocalyptic future that Gibson and Sterling would later deliver, it examines ethical and moral problems with cybernetic, artificial intelligence in a way more "realist" than the Isaac Asimov Robot series that laid its philosophical foundation. Dick's protégé and friend K. W. Jeter wrote a novel called Dr. Adder in 1972 that, Dick lamented, might have been more influential in the field had it been able to find a publisher at that time. It was not published until 1984, after which Jeter made it the first book in a trilogy, followed by The Glass Hammer (1985) and Death Arms (1987). Jeter wrote other standalone cyberpunk novels before going on to write three authorized sequels to Do Androids Dream of Electric Sheep?, named Blade Runner 2: The Edge of Human (1995), Blade Runner 3: Replicant Night (1996), and Blade Runner 4: Eye and Talon.
Do Androids Dream of Electric Sheep? was made into the seminal movie Blade Runner, released in 1982. This was one year after William Gibson's story "Johnny Mnemonic" helped move proto-cyberpunk concepts into the mainstream. That story, which also became a film years later in 1995, involves another dystopian future, where human couriers deliver computer data, stored cybernetically in their own minds.
The term "cyberpunk" first appeared as the title of a short story by Bruce Bethke, written in 1980 and published in Amazing Stories in 1983. The name was picked up by Gardner Dozois, editor of Isaac Asimov's Science Fiction Magazine, and popularized in his editorials.
Bethke says he made two lists of words, one for technology, one for troublemakers, and experimented with combining them variously into compound words, consciously attempting to coin a term that encompassed both punk attitudes and high technology. He described the idea thus:
The kids who trashed my computer; their kids were going to be Holy Terrors, combining the ethical vacuity of teenagers with a technical fluency we adults could only guess at. Further, the parents and other adult authority figures of the early 21st Century were going to be terribly ill-equipped to deal with the first generation of teenagers who grew up truly "speaking computer".
Afterward, Dozois began using this term in his own writing, most notably in a Washington Post article where he said "About the closest thing here to a self-willed esthetic 'school' would be the purveyors of bizarre hard-edged, high-tech stuff, who have on occasion been referred to as 'cyberpunks'—Sterling, Gibson, Shiner, Cadigan, Bear."
About that time in 1984, William Gibson's novel Neuromancer was published, delivering a glimpse of a future encompassed by what became an archetype of cyberpunk "virtual reality", with the human mind being fed light-based worldscapes through a computer interface. Some, perhaps ironically including Bethke himself, argued at the time that the writers whose style Gibson's books epitomized should be called "Neuromantics", a pun on the name of the novel plus "New Romantics", a term used for a New Wave pop music movement that had just occurred in Britain, but this term did not catch on. Bethke later paraphrased Michael Swanwick's argument for the term: "the movement writers should properly be termed neuromantics, since so much of what they were doing was clearly imitating Neuromancer".
Sterling was another writer who played a central role, often consciously, in the cyberpunk genre, variously seen as either keeping it on track, or distorting its natural path into a stagnant formula. In 1986, he edited a volume of cyberpunk stories called Mirrorshades: The Cyberpunk Anthology, an attempt to establish what cyberpunk was, from Sterling's perspective.
In the subsequent decade, the motifs of Gibson's Neuromancer became formulaic, climaxing in the satirical extremes of Neal Stephenson's Snow Crash in 1992.
Bookending the cyberpunk era, Bethke himself published a novel in 1995 called Headcrash, like Snow Crash a satirical attack on the genre's excesses. Fittingly, it won an honor named after cyberpunk's spiritual founder, the Philip K. Dick Award. It satirized the genre in this way:
...full of young guys with no social lives, no sex lives and no hope of ever moving out of their mothers' basements ... They're total wankers and losers who indulge in Messianic fantasies about someday getting even with the world through almost-magical computer skills, but whose actual use of the Net amounts to dialing up the scatophilia forum and downloading a few disgusting pictures. You know, cyberpunks.
The impact of cyberpunk, though, has been long-lasting. Elements of both the setting and storytelling have become normal in science fiction in general, and a slew of sub-genres now have -punk tacked onto their names, most obviously steampunk, but also a host of other cyberpunk derivatives.
Primary figures in the cyberpunk movement include William Gibson, Neal Stephenson, Bruce Sterling, Bruce Bethke, Pat Cadigan, Rudy Rucker, and John Shirley. Philip K. Dick (author of Do Androids Dream of Electric Sheep?, from which the film Blade Runner was adapted) is also seen by some as prefiguring the movement.
Blade Runner can be seen as a quintessential example of the cyberpunk style and theme. Video games, board games, and tabletop role-playing games, such as Cyberpunk 2020 and Shadowrun, often feature storylines that are heavily influenced by cyberpunk writing and movies. Beginning in the early 1990s, some trends in fashion and music were also labeled as cyberpunk. Cyberpunk is also featured prominently in anime and manga (Japanese cyberpunk), with Akira, Ghost in the Shell and Cowboy Bebop being among the most notable.
Cyberpunk writers tend to use elements from crime fiction—particularly hardboiled detective fiction and film noir—and postmodernist prose to describe an often nihilistic underground side of an electronic society. The genre's vision of a troubled future is often called the antithesis of the generally utopian visions of the future popular in the 1940s and 1950s. Gibson defined cyberpunk's antipathy towards utopian science fiction in his 1981 short story "The Gernsback Continuum," which pokes fun at and, to a certain extent, condemns utopian science fiction.
In some cyberpunk writing, much of the action takes place online, in cyberspace, blurring the line between actual and virtual reality. A typical trope in such work is a direct connection between the human brain and computer systems. Cyberpunk settings are dystopias with corruption, computers, and computer networks. Giant, multinational corporations have for the most part replaced governments as centers of political, economic, and even military power.
The economic and technological state of Japan is a regular theme in the cyberpunk literature of the 1980s. Of Japan's influence on the genre, William Gibson said, "Modern Japan simply was cyberpunk." Cyberpunk is often set in urbanized, artificial landscapes, and "city lights, receding" was used by Gibson as one of the genre's first metaphors for cyberspace and virtual reality. The cityscape of Hong Kong has had a major influence on the urban backgrounds, ambiance and settings of many cyberpunk works such as Blade Runner and Shadowrun. Ridley Scott envisioned the landscape of cyberpunk Los Angeles in Blade Runner to be "Hong Kong on a very bad day". The streetscapes of the Ghost in the Shell film were based on Hong Kong. Its director Mamoru Oshii felt that Hong Kong's strange and chaotic streets, where "old and new exist in confusing relationships", fit the theme of the film well. Hong Kong's Kowloon Walled City, with its disorganized hyper-urbanization and breakdown of traditional urban planning, has been a particular inspiration for cyberpunk landscapes. Portrayals of East Asia and Asians in Western cyberpunk have been criticized as Orientalist and promoting racist tropes playing on American and European fears of East Asian dominance; this has been referred to as "techno-Orientalism".
Cyberpunk can be intended to disquiet readers and call them to action. It often expresses a sense of rebellion, suggesting that one could describe it as a type of cultural revolution in science fiction. In the words of author and critic David Brin:
...a closer look [at cyberpunk authors] reveals that they nearly always portray future societies in which governments have become wimpy and pathetic ...Popular science fiction tales by Gibson, Williams, Cadigan and others do depict Orwellian accumulations of power in the next century, but nearly always clutched in the secretive hands of a wealthy or corporate elite.
Cyberpunk stories have also been seen as fictional forecasts of the evolution of the Internet. The earliest descriptions of a global communications network came long before the World Wide Web entered popular awareness, though not before traditional science-fiction writers such as Arthur C. Clarke and some social commentators such as James Burke began predicting that such networks would eventually form.
Some observers note that cyberpunk tends to marginalize sectors of society such as women and people of colour. It is claimed, for instance, that cyberpunk depicts fantasies that ultimately empower masculinity, using a fragmentary and decentered aesthetic that culminates in a masculine genre populated by male outlaws. Critics also note the absence of any reference to Africa or black characters in the quintessential cyberpunk film Blade Runner, while other films reinforce stereotypes.
Minnesota writer Bruce Bethke coined the term for his short story "Cyberpunk", written in 1980 and published in a 1983 issue of Amazing Science Fiction Stories. The term was quickly appropriated as a label to be applied to the works of William Gibson, Bruce Sterling, Pat Cadigan and others. Of these, Sterling became the movement's chief ideologue, thanks to his fanzine Cheap Truth. John Shirley wrote articles on Sterling and Rucker's significance. John Brunner's 1975 novel The Shockwave Rider is considered by many to be the first cyberpunk novel, featuring many of the tropes commonly associated with the genre some five years before the term was popularized by Dozois.
William Gibson with his novel Neuromancer (1984) is arguably the most famous writer connected with the term cyberpunk. He emphasized style, a fascination with surfaces, and atmosphere over traditional science-fiction tropes. Regarded as ground-breaking and sometimes as "the archetypal cyberpunk work", Neuromancer was awarded the Hugo, Nebula, and Philip K. Dick Awards. Count Zero (1986) and Mona Lisa Overdrive (1988) followed Gibson's popular debut novel. According to the Jargon File, "Gibson's near-total ignorance of computers and the present-day hacker culture enabled him to speculate about the role of computers and hackers in the future in ways hackers have since found both irritatingly naïve and tremendously stimulating."
Early on, cyberpunk was hailed as a radical departure from science-fiction standards and a new manifestation of vitality. Shortly thereafter, however, some critics arose to challenge its status as a revolutionary movement. These critics said that the science fiction New Wave of the 1960s was much more innovative as far as narrative techniques and styles were concerned. Furthermore, while Neuromancer's narrator may have had an unusual "voice" for science fiction, much older examples can be found: Gibson's narrative voice, for example, resembles that of an updated Raymond Chandler, as in his novel The Big Sleep (1939). Others noted that almost all traits claimed to be uniquely cyberpunk could in fact be found in older writers' works—often citing J. G. Ballard, Philip K. Dick, Harlan Ellison, Stanisław Lem, Samuel R. Delany, and even William S. Burroughs. For example, Philip K. Dick's works contain recurring themes of social decay, artificial intelligence, paranoia, and blurred lines between objective and subjective realities. The influential cyberpunk movie Blade Runner (1982) is based on his book, Do Androids Dream of Electric Sheep?. Humans linked to machines are found in Pohl and Kornbluth's Wolfbane (1959) and Roger Zelazny's Creatures of Light and Darkness (1968).
In 1994, scholar Brian Stonehill suggested that Thomas Pynchon's 1973 novel Gravity's Rainbow "not only curses but precurses what we now glibly dub cyberspace." Other important predecessors include Alfred Bester's two most celebrated novels, The Demolished Man and The Stars My Destination, as well as Vernor Vinge's novella True Names.
Science-fiction writer David Brin describes cyberpunk as "the finest free promotion campaign ever waged on behalf of science fiction". It may not have attracted the "real punks", but it did ensnare many new readers, and it provided the sort of movement that postmodern literary critics found alluring. Cyberpunk made science fiction more attractive to academics, argues Brin; in addition, it made science fiction more profitable to Hollywood and to the visual arts generally. Although the "self-important rhetoric and whines of persecution" on the part of cyberpunk fans were irritating at worst and humorous at best, Brin declares that the "rebels did shake things up. We owe them a debt."
Fredric Jameson considers cyberpunk the "supreme literary expression if not of postmodernism, then of late capitalism itself".
Cyberpunk further inspired many later writers to incorporate cyberpunk ideas into their own works, such as George Alec Effinger's When Gravity Fails. Wired magazine, created by Louis Rossetto and Jane Metcalfe, mixes new technology, art, literature, and current topics in order to interest today's cyberpunk fans, which Paula Yoo claims "proves that hardcore hackers, multimedia junkies, cyberpunks and cellular freaks are poised to take over the world".
The film Blade Runner (1982) is set in 2019 in a dystopian future in which manufactured beings called replicants are slaves used on space colonies and are legal prey on Earth to various bounty hunters who "retire" (kill) them. Although Blade Runner was largely unsuccessful in its first theatrical release, it found a viewership in the home video market and became a cult film. Since the movie omits the religious and mythical elements of Dick's original novel (e.g. empathy boxes and Wilbur Mercer), it falls more strictly within the cyberpunk genre than the novel does. William Gibson would later reveal that upon first viewing the film, he was surprised at how the look of this film matched his vision for Neuromancer, a book he was then working on. The film's tone has since been the staple of many cyberpunk movies, such as The Matrix trilogy (1999–2003), which uses a wide variety of cyberpunk elements. A sequel to Blade Runner was released in 2017.
The TV series Max Headroom (1987) is an iconic cyberpunk work, taking place in a futuristic dystopia ruled by an oligarchy of television networks. Computer hacking played a central role in many of the story lines. Max Headroom has been called "the first cyberpunk television series".
The number of films in the genre has grown steadily since Blade Runner. Several of Philip K. Dick's works have been adapted to the silver screen. The films Johnny Mnemonic (1995) and New Rose Hotel (1998), both based on short stories by William Gibson, flopped commercially and critically. Other cyberpunk films include RoboCop (1987), Total Recall (1990), Hardware (1990), The Lawnmower Man (1992), 12 Monkeys (1995), Hackers (1995), and Strange Days (1995). Some cyberpunk films have been described as tech-noir, a hybrid genre combining neo-noir and science fiction or cyberpunk.
The Japanese cyberpunk subgenre began in 1982 with the debut of Katsuhiro Otomo's manga series Akira, with its 1988 anime film adaptation, which Otomo directed, later popularizing the subgenre. Akira inspired a wave of Japanese cyberpunk works, including manga and anime series such as Ghost in the Shell, Battle Angel Alita, and Cowboy Bebop. Other early Japanese cyberpunk works include the 1982 film Burst City, the 1985 original video animation Megazone 23, and the 1989 film Tetsuo: The Iron Man.
According to Paul Gravett, when Akira began to be published, cyberpunk literature had not yet been translated into Japanese; Otomo instead drew on distinct inspirations such as Mitsuteru Yokoyama's manga series Tetsujin 28-go (1956–1966) and the work of Moebius.
In contrast to Western cyberpunk which has roots in New Wave science fiction literature, Japanese cyberpunk has roots in underground music culture, specifically the Japanese punk subculture that arose from the Japanese punk music scene in the 1970s. The filmmaker Sogo Ishii introduced this subculture to Japanese cinema with the punk film Panic High School (1978) and the punk biker film Crazy Thunder Road (1980), both portraying the rebellion and anarchy associated with punk, and the latter featuring a punk biker gang aesthetic. Ishii's punk films paved the way for Otomo's seminal cyberpunk work Akira.
Cyberpunk themes are widely visible in anime and manga. In Japan, where cosplay is popular and not only teenagers display such fashion styles, cyberpunk has been accepted and its influence is widespread. William Gibson's Neuromancer, whose influence dominated the early cyberpunk movement, was also set in Chiba, one of Japan's largest industrial areas, although at the time of writing the novel, Gibson did not know the location of Chiba and had no idea how perfectly it fit his vision in some ways. Exposure to cyberpunk ideas and fiction in the 1980s allowed it to seep into Japanese culture.
Cyberpunk anime and manga draw upon a futuristic vision which has elements in common with Western science fiction and therefore have received wide international acceptance outside Japan. "The conceptualization involved in cyberpunk is more of forging ahead, looking at the new global culture. It is a culture that does not exist right now, so the Japanese concept of a cyberpunk future, seems just as valid as a Western one, especially as Western cyberpunk often incorporates many Japanese elements." William Gibson is now a frequent visitor to Japan, and he came to see that many of his visions of Japan have become a reality:
Modern Japan simply was cyberpunk. The Japanese themselves knew it and delighted in it. I remember my first glimpse of Shibuya, when one of the young Tokyo journalists who had taken me there, his face drenched with the light of a thousand media-suns—all that towering, animated crawl of commercial information—said, "You see? You see? It is Blade Runner town." And it was. It so evidently was.
Akira (1982 manga) and its 1988 anime film adaptation have influenced numerous works in animation, comics, film, music, television and video games. Akira has been cited as a major influence on Hollywood films such as The Matrix, Chronicle, Looper, Midnight Special, and Inception, as well as cyberpunk-influenced video games such as Hideo Kojima's Snatcher and Metal Gear Solid, Valve's Half-Life series and Dontnod Entertainment's Remember Me. Akira has also influenced the work of musicians such as Kanye West, who paid homage to Akira in the "Stronger" music video, and Lupe Fiasco, whose album Tetsuo & Youth is named after Tetsuo Shima. The popular bike from the film, Kaneda's Motorbike, appears in Steven Spielberg's film Ready Player One, and CD Projekt's video game Cyberpunk 2077.
Ghost in the Shell (1995) influenced a number of prominent filmmakers, most notably the Wachowskis in The Matrix (1999) and its sequels. The Matrix series took several concepts from the film, including the Matrix digital rain, which was inspired by the opening credits of Ghost in the Shell and by a sushi magazine that the wife of the animation's senior designer, Simon Whiteley, had in the kitchen at the time, as well as the way characters access the Matrix through holes in the back of their necks. Other parallels have been drawn to James Cameron's Avatar, Steven Spielberg's A.I. Artificial Intelligence, and Jonathan Mostow's Surrogates. Cameron cited Ghost in the Shell as a source of inspiration for Avatar.
The original video animation Megazone 23 (1985) has a number of similarities to The Matrix. Battle Angel Alita (1990) has had a notable influence on filmmaker James Cameron, who had been planning to adapt it into a film since 2000. It was an influence on his TV series Dark Angel, and he is the producer of the 2019 film adaptation Alita: Battle Angel.
In 1975, artist Moebius collaborated with writer Dan O'Bannon on a story called The Long Tomorrow, published in the French magazine Métal Hurlant. One of the first works featuring elements now seen as exemplifying cyberpunk, it combined influences from film noir and hardboiled crime fiction with a distant sci-fi environment. Author William Gibson stated that Moebius' artwork for the series, along with other visuals from Métal Hurlant, strongly influenced his 1984 novel Neuromancer. The series had a far-reaching impact in the cyberpunk genre, being cited as an influence on Ridley Scott's Alien (1979) and Blade Runner. Moebius later expanded upon The Long Tomorrow's aesthetic with The Incal, a graphic novel collaboration with Alejandro Jodorowsky published from 1980 to 1988. The story centers around the exploits of a detective named John Difool in various science fiction settings, and while not confined to the tropes of cyberpunk, it features many elements of the genre. Moebius was one of the designers of Tron (1982), a movie that shows a world inside a computer.
Concurrently with many other foundational cyberpunk works, DC Comics published Frank Miller's six-issue miniseries Rōnin from 1983 to 1984. The series, incorporating aspects of Samurai culture, martial arts films and manga, is set in a dystopian near-future New York. It explores the link between an ancient Japanese warrior and the apocalyptic, crumbling cityscape he finds himself in. The comic also bears several similarities to Akira, with highly powerful telepaths playing central roles, as well as sharing many key visuals.
Rōnin would go on to influence many later works, including Samurai Jack and the Teenage Mutant Ninja Turtles, as well as video games such as Cyberpunk 2077. Two years later, Miller himself would incorporate several toned-down elements of Rōnin into his acclaimed 1986 miniseries The Dark Knight Returns, in which a retired Bruce Wayne once again takes up the mantle of Batman in a Gotham that is becoming increasingly dystopian.
Paul Pope's Batman: Year 100, published in 2006, also exhibits several traits typical of cyberpunk fiction, such as a rebel protagonist opposing a future authoritarian state, and a distinct retrofuturist aesthetic that makes callbacks to both The Dark Knight Returns and Batman's original appearances in the 1940s.
There are many cyberpunk video games. Popular series include Final Fantasy VII and its spin-offs and remake, the Megami Tensei series, Kojima's Snatcher and Metal Gear series, the Deus Ex series, the Syndicate series, and System Shock and its sequel. Other games, like Blade Runner, Ghost in the Shell, and the Matrix series, are based upon genre movies, or role-playing games (for instance the various Shadowrun games).
Several RPGs called Cyberpunk exist: Cyberpunk, Cyberpunk 2020, Cyberpunk v3.0 and Cyberpunk Red written by Mike Pondsmith and published by R. Talsorian Games, and GURPS Cyberpunk, published by Steve Jackson Games as a module of the GURPS family of RPGs. Cyberpunk 2020 was designed with the settings of William Gibson's writings in mind, and to some extent with his approval, unlike the approach taken by FASA in producing the transgenre Shadowrun game and its various sequels, which mixes cyberpunk with fantasy elements such as magic and fantasy races such as orcs and elves. Both are set in the near future, in a world where cybernetics are prominent. In addition, Iron Crown Enterprises released an RPG named Cyberspace, which was out of print for several years before being re-released in online PDF form. CD Projekt Red released Cyberpunk 2077, a cyberpunk open-world first-person shooter/role-playing video game (RPG) based on the tabletop RPG Cyberpunk 2020, on 10 December 2020. In 1990, in a convergence of cyberpunk art and reality, the United States Secret Service raided Steve Jackson Games's headquarters and confiscated all their computers. Officials denied that the target had been the GURPS Cyberpunk sourcebook, but Jackson would later write that he and his colleagues "were never able to secure the return of the complete manuscript; [...] The Secret Service at first flatly refused to return anything – then agreed to let us copy files, but when we got to their office, restricted us to one set of out-of-date files – then agreed to make copies for us, but said "tomorrow" every day from March 4 to March 26. On March 26 we received a set of disks which purported to be our files, but the material was late, incomplete and well-nigh useless." Steve Jackson Games won a lawsuit against the Secret Service, aided by the new Electronic Frontier Foundation. This event has achieved a sort of notoriety, which has extended to the book itself as well. All published editions of GURPS Cyberpunk have a tagline on the front cover, which reads "The book that was seized by the U.S. Secret Service!" Inside, the book provides a summary of the raid and its aftermath.
Cyberpunk has also inspired several tabletop, miniature and board games such as Necromunda by Games Workshop. Netrunner is a collectible card game introduced in 1996, based on the Cyberpunk 2020 role-playing game. Tokyo NOVA, debuting in 1993, is a cyberpunk role-playing game that uses playing cards instead of dice.
Cyberpunk 2077 set a new record for the largest number of simultaneous players in a single-player game, with 1,003,262 players just after the 10 December launch, according to Steam Database. That topped the previous Steam record of 472,962 players, set by Fallout 4 in 2015.
"Much of the industrial/dance heavy 'Cyberpunk'—recorded in Billy Idol's Macintosh-run studio—revolves around Idol's theme of the common man rising up to fight against a faceless, soulless, corporate world."
—Julie Romandetta
The origins of cyberpunk music lie largely in the synthesizer-heavy scores of cyberpunk films such as Escape from New York (1981) and Blade Runner (1982). Some musicians and acts have been classified as cyberpunk due to their aesthetic style and musical content. Often dealing with dystopian visions of the future or biomechanical themes, some fit more squarely in the category than others. Bands whose music has been classified as cyberpunk include Psydoll, Front Line Assembly, Clock DVA, Angelspit and Sigue Sigue Sputnik.
Some musicians not normally associated with cyberpunk have at times been inspired to create concept albums exploring such themes. Albums such as British musician and songwriter Gary Numan's Replicas, The Pleasure Principle and Telekon were heavily inspired by the works of Philip K. Dick. Kraftwerk's The Man-Machine and Computer World albums both explored the theme of humanity becoming dependent on technology. Nine Inch Nails' concept album Year Zero also fits into this category. Fear Factory's concept albums are heavily based upon themes of future dystopia, cybernetics, the clash between man and machine, and virtual worlds. Billy Idol's Cyberpunk drew heavily from cyberpunk literature and the cyberdelic counterculture in its creation. 1. Outside, a concept album by David Bowie built around a cyberpunk narrative, was warmly met by critics upon its release in 1995. Many musicians have also taken inspiration from specific cyberpunk works or authors, including Sonic Youth, whose albums Sister and Daydream Nation take influence from the works of Philip K. Dick and William Gibson respectively. Madonna's 2001 Drowned World Tour opened with a cyberpunk section, where costumes, aesthetics and stage props were used to accentuate the dystopian nature of the theatrical concert. Lady Gaga used a cyberpunk persona and visual style for her sixth studio album Chromatica (2020).
Vaporwave and synthwave are also influenced by cyberpunk. The former has been inspired by one of the messages of cyberpunk and is interpreted as a dystopian critique of capitalism in the vein of cyberpunk, while the latter is more surface-level, inspired only by the aesthetic of cyberpunk as a nostalgic retrofuturistic revival of aspects of cyberpunk's origins.
Writers David Suzuki and Holly Dressel describe the cafes, brand-name stores and video arcades of the Sony Center in the Potsdamer Platz public square of Berlin, Germany, as "a vision of a cyberpunk, corporate urban future".
Several subcultures have been inspired by cyberpunk fiction. These include the cyberdelic counterculture of the late 1980s and early 1990s. Cyberdelic, whose adherents referred to themselves as "cyberpunks", attempted to blend the psychedelic art and drug movement with the technology of cyberculture. Early adherents included Timothy Leary, Mark Frauenfelder and R. U. Sirius. The movement largely faded following the dot-com bubble implosion of 2000.
Cybergoth is a fashion and dance subculture which draws its inspiration from cyberpunk fiction, as well as from rave and Gothic subcultures. In addition, a distinct cyberpunk fashion of its own has emerged in recent years which rejects the raver and goth influences of cybergoth and draws inspiration from urban street fashion, "post-apocalypse" styles, functional clothing, high-tech sportswear, tactical uniforms and multifunctional gear. This fashion goes by names like "tech wear", "goth ninja" or "tech ninja".
The Kowloon Walled City in Hong Kong (demolished in 1994) is often referenced as the model cyberpunk/dystopian slum: its poor living conditions, coupled with its political, physical, and economic isolation, have fascinated many in academia with the ingenuity of its emergence.
As a wider variety of writers began to work with cyberpunk concepts, new subgenres of science fiction emerged, some of which could be considered as playing off the cyberpunk label, others which could be considered as legitimate explorations into newer territory. These focused on technology and its social effects in different ways. One prominent subgenre is "steampunk," which is set in an alternate history Victorian era that combines anachronistic technology with cyberpunk's bleak film noir world view. The term was originally coined around 1987 as a joke to describe some of the novels of Tim Powers, James P. Blaylock, and K.W. Jeter, but by the time Gibson and Sterling entered the subgenre with their collaborative novel The Difference Engine the term was being used earnestly as well.
Another subgenre is "biopunk" (cyberpunk themes dominated by biotechnology) from the early 1990s, a derivative style building on biotechnology rather than informational technology. In these stories, people are changed in some way not by mechanical means, but by genetic manipulation.
Cyberpunk works have been described as well situated within postmodern literature.
In the United States, the term "Cyberpunk" is a registered trademark owned by CD Projekt SA, which obtained it from the previous owner, R. Talsorian Games Inc., which had originally registered it for its tabletop role-playing game. R. Talsorian Games currently uses the trademark under license from CD Projekt SA for the tabletop role-playing game.
Within the European Union, the "Cyberpunk" trademark is owned by two parties: CD Projekt SA for "games and online gaming services" (particularly for its video game adaptation of the tabletop game) and Sony Music for use outside games. | [
{
"paragraph_id": 0,
"text": "Cyberpunk is a subgenre of science fiction in a dystopian futuristic setting that tends to focus on a \"combination of lowlife and high tech\", featuring futuristic technological and scientific achievements, such as artificial intelligence and cyberware, juxtaposed with societal collapse, dystopia or decay. Much of cyberpunk is rooted in the New Wave science fiction movement of the 1960s and 1970s, when writers like Philip K. Dick, Michael Moorcock, Roger Zelazny, John Brunner, J. G. Ballard, Philip José Farmer and Harlan Ellison examined the impact of drug culture, technology, and the sexual revolution while avoiding the utopian tendencies of earlier science fiction.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Comics exploring cyberpunk themes began appearing as early as Judge Dredd, first published in 1977. Released in 1984, William Gibson's influential debut novel Neuromancer helped solidify cyberpunk as a genre, drawing influence from punk subculture and early hacker culture. Frank Miller's Ronin is an example of a cyberpunk graphic novel. Other influential cyberpunk writers included Bruce Sterling and Rudy Rucker. The Japanese cyberpunk subgenre began in 1982 with the debut of Katsuhiro Otomo's manga series Akira, with its 1988 anime film adaptation (also directed by Otomo) later popularizing the subgenre.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Early films in the genre include Ridley Scott's 1982 film Blade Runner, one of several of Philip K. Dick's works that have been adapted into films (in this case, Do Androids Dream of Electric Sheep?). The \"first cyberpunk television series\" was the TV series Max Headroom from 1987, playing in a futuristic dystopia ruled by an oligarchy of television networks, and where computer hacking played a central role in many story lines. The films Johnny Mnemonic (1995) and New Rose Hotel (1998), both based upon short stories by William Gibson, flopped commercially and critically, while The Matrix trilogy (1999–2003) and Judge Dredd (1995) were some of the most successful cyberpunk films.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Newer cyberpunk media includes Blade Runner 2049 (2017), a sequel to the original 1982 film; Dredd (2012), which was not a sequel to the original movie; Upgrade (2018); Alita: Battle Angel (2019), based on the 1990s Japanese manga Battle Angel Alita; the 2018 Netflix TV series Altered Carbon, based on Richard K. Morgan's 2002 novel of the same name; the 2020 remake of 1997 role-playing video game Final Fantasy VII; and the video game Cyberpunk 2077 (2020), based on R. Talsorian Games's 1988 tabletop role-playing game Cyberpunk.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Lawrence Person has attempted to define the content and ethos of the cyberpunk literary movement stating:",
"title": "Background"
},
{
"paragraph_id": 5,
"text": "Classic cyberpunk characters were marginalized, alienated loners who lived on the edge of society in generally dystopic futures where daily life was impacted by rapid technological change, an ubiquitous datasphere of computerized information, and invasive modification of the human body.",
"title": "Background"
},
{
"paragraph_id": 6,
"text": "Cyberpunk plots often center on conflict among artificial intelligences, hackers, and megacorporations, and tend to be set in a near-future Earth, rather than in the far-future settings or galactic vistas found in novels such as Isaac Asimov's Foundation or Frank Herbert's Dune. The settings are usually post-industrial dystopias but tend to feature extraordinary cultural ferment and the use of technology in ways never anticipated by its original inventors (\"the street finds its own uses for things\"). Much of the genre's atmosphere echoes film noir, and written works in the genre often use techniques from detective fiction. There are sources who view that cyberpunk has shifted from a literary movement to a mode of science fiction due to the limited number of writers and its transition to a more generalized cultural formation.",
"title": "Background"
},
{
"paragraph_id": 7,
"text": "The origins of cyberpunk are rooted in the New Wave science fiction movement of the 1960s and 1970s, where New Worlds, under the editorship of Michael Moorcock, began inviting and encouraging stories that examined new writing styles, techniques, and archetypes. Reacting to conventional storytelling, New Wave authors attempted to present a world where society coped with a constant upheaval of new technology and culture, generally with dystopian outcomes. Writers like Roger Zelazny, J. G. Ballard, Philip José Farmer, Samuel R. Delany, and Harlan Ellison often examined the impact of drug culture, technology, and the sexual revolution with an avant-garde style influenced by the Beat Generation (especially William S. Burroughs's science fiction writing), Dadaism, and their own ideas. Ballard attacked the idea that stories should follow the \"archetypes\" popular since the time of Ancient Greece, and the assumption that these would somehow be the same ones that would call to modern readers, as Joseph Campbell argued in The Hero with a Thousand Faces. Instead, Ballard wanted to write a new myth for the modern reader, a style with \"more psycho-literary ideas, more meta-biological and meta-chemical concepts, private time systems, synthetic psychologies and space-times, more of the sombre half-worlds one glimpses in the paintings of schizophrenics.\"",
"title": "History and origins"
},
{
"paragraph_id": 8,
"text": "This had a profound influence on a new generation of writers, some of whom would come to call their movement \"cyberpunk\". One, Bruce Sterling, later said:",
"title": "History and origins"
},
{
"paragraph_id": 9,
"text": "In the circle of American science fiction writers of my generation—cyberpunks and humanists and so forth—[Ballard] was a towering figure. We used to have bitter struggles over who was more Ballardian than whom. We knew we were not fit to polish the man's boots, and we were scarcely able to understand how we could get to a position to do work which he might respect or stand, but at least we were able to see the peak of achievement that he had reached.",
"title": "History and origins"
},
{
"paragraph_id": 10,
"text": "Ballard, Zelazny, and the rest of New Wave was seen by the subsequent generation as delivering more \"realism\" to science fiction, and they attempted to build on this.",
"title": "History and origins"
},
{
"paragraph_id": 11,
"text": "Samuel R. Delany's 1968 novel Nova is also considered one of the major forerunners of the cyberpunk movement. It prefigures, for instance, cyberpunk's staple trope of humans interfacing with computers via implants. Writer William Gibson claimed to be greatly influenced by Delany, and his novel Neuromancer includes allusions to Nova.",
"title": "History and origins"
},
{
"paragraph_id": 12,
"text": "Similarly influential, and generally cited as proto-cyberpunk, is the Philip K. Dick novel Do Androids Dream of Electric Sheep?, first published in 1968. Presenting precisely the general feeling of dystopian post-economic-apocalyptic future as Gibson and Sterling later deliver, it examines ethical and moral problems with cybernetic, artificial intelligence in a way more \"realist\" than the Isaac Asimov Robot series that laid its philosophical foundation. Dick's protege and friend K. W. Jeter wrote a novel called Dr. Adder in 1972 that, Dick lamented, might have been more influential in the field had it been able to find a publisher at that time. It was not published until 1984, after which Jeter made it the first book in a trilogy, followed by The Glass Hammer (1985) and Death Arms (1987). Jeter wrote other standalone cyberpunk novels before going on to write three authorized sequels to Do Androids Dream of Electric Sheep?, named Blade Runner 2: The Edge of Human (1995), Blade Runner 3: Replicant Night (1996), and Blade Runner 4: Eye and Talon.",
"title": "History and origins"
},
{
"paragraph_id": 13,
"text": "Do Androids Dream of Electric Sheep? was made into the seminal movie Blade Runner, released in 1982. This was one year after William Gibson's story \"Johnny Mnemonic\" helped move proto-cyberpunk concepts into the mainstream. That story, which also became a film years later in 1995, involves another dystopian future, where human couriers deliver computer data, stored cybernetically in their own minds.",
"title": "History and origins"
},
{
"paragraph_id": 14,
"text": "The term \"cyberpunk\" first appeared as the title of a short story by Bruce Bethke, written in 1980 and published in Amazing Stories in 1983. The name was picked up by Gardner Dozois, editor of Isaac Asimov's Science Fiction Magazine, and popularized in his editorials.",
"title": "History and origins"
},
{
"paragraph_id": 15,
"text": "Bethke says he made two lists of words, one for technology, one for troublemakers, and experimented with combining them variously into compound words, consciously attempting to coin a term that encompassed both punk attitudes and high technology. He described the idea thus:",
"title": "History and origins"
},
{
"paragraph_id": 16,
"text": "The kids who trashed my computer; their kids were going to be Holy Terrors, combining the ethical vacuity of teenagers with a technical fluency we adults could only guess at. Further, the parents and other adult authority figures of the early 21st Century were going to be terribly ill-equipped to deal with the first generation of teenagers who grew up truly \"speaking computer\".",
"title": "History and origins"
},
{
"paragraph_id": 17,
"text": "Afterward, Dozois began using this term in his own writing, most notably in a Washington Post article where he said \"About the closest thing here to a self-willed esthetic 'school' would be the purveyors of bizarre hard-edged, high-tech stuff, who have on occasion been referred to as 'cyberpunks'—Sterling, Gibson, Shiner, Cadigan, Bear.\"",
"title": "History and origins"
},
{
"paragraph_id": 18,
"text": "About that time in 1984, William Gibson's novel Neuromancer was published, delivering a glimpse of a future encompassed by what became an archetype of cyberpunk \"virtual reality\", with the human mind being fed light-based worldscapes through a computer interface. Some, perhaps ironically including Bethke himself, argued at the time that the writers whose style Gibson's books epitomized should be called \"Neuromantics\", a pun on the name of the novel plus \"New Romantics\", a term used for a New Wave pop music movement that had just occurred in Britain, but this term did not catch on. Bethke later paraphrased Michael Swanwick's argument for the term: \"the movement writers should properly be termed neuromantics, since so much of what they were doing was clearly imitating Neuromancer\".",
"title": "History and origins"
},
{
"paragraph_id": 19,
"text": "Sterling was another writer who played a central role, often consciously, in the cyberpunk genre, variously seen as either keeping it on track, or distorting its natural path into a stagnant formula. In 1986, he edited a volume of cyberpunk stories called Mirrorshades: The Cyberpunk Anthology, an attempt to establish what cyberpunk was, from Sterling's perspective.",
"title": "History and origins"
},
{
"paragraph_id": 20,
"text": "In the subsequent decade, the motifs of Gibson's Neuromancer became formulaic, climaxing in the satirical extremes of Neal Stephenson's Snow Crash in 1992.",
"title": "History and origins"
},
{
"paragraph_id": 21,
"text": "Bookending the cyberpunk era, Bethke himself published a novel in 1995 called Headcrash, like Snow Crash a satirical attack on the genre's excesses. Fittingly, it won an honor named after cyberpunk's spiritual founder, the Philip K. Dick Award. It satirized the genre in this way:",
"title": "History and origins"
},
{
"paragraph_id": 22,
"text": "...full of young guys with no social lives, no sex lives and no hope of ever moving out of their mothers' basements ... They're total wankers and losers who indulge in Messianic fantasies about someday getting even with the world through almost-magical computer skills, but whose actual use of the Net amounts to dialing up the scatophilia forum and downloading a few disgusting pictures. You know, cyberpunks.",
"title": "History and origins"
},
{
"paragraph_id": 23,
"text": "The impact of cyberpunk, though, has been long-lasting. Elements of both the setting and storytelling have become normal in science fiction in general, and a slew of sub-genres now have -punk tacked onto their names, most obviously steampunk, but also a host of other cyberpunk derivatives.",
"title": "History and origins"
},
{
"paragraph_id": 24,
"text": "Primary figures in the cyberpunk movement include William Gibson, Neal Stephenson, Bruce Sterling, Bruce Bethke, Pat Cadigan, Rudy Rucker, and John Shirley. Philip K. Dick (author of Do Androids Dream of Electric Sheep?, from which the film Blade Runner was adapted) is also seen by some as prefiguring the movement.",
"title": "Style and ethos"
},
{
"paragraph_id": 25,
"text": "Blade Runner can be seen as a quintessential example of the cyberpunk style and theme. Video games, board games, and tabletop role-playing games, such as Cyberpunk 2020 and Shadowrun, often feature storylines that are heavily influenced by cyberpunk writing and movies. Beginning in the early 1990s, some trends in fashion and music were also labeled as cyberpunk. Cyberpunk is also featured prominently in anime and manga (Japanese cyberpunk), with Akira, Ghost in the Shell and Cowboy Bebop being among the most notable.",
"title": "Style and ethos"
},
{
"paragraph_id": 26,
"text": "Cyberpunk writers tend to use elements from crime fiction—particularly hardboiled detective fiction and film noir—and postmodernist prose to describe an often nihilistic underground side of an electronic society. The genre's vision of a troubled future is often called the antithesis of the generally utopian visions of the future popular in the 1940s and 1950s. Gibson defined cyberpunk's antipathy towards utopian science fiction in his 1981 short story \"The Gernsback Continuum,\" which pokes fun at and, to a certain extent, condemns utopian science fiction.",
"title": "Style and ethos"
},
{
"paragraph_id": 27,
"text": "In some cyberpunk writing, much of the action takes place online, in cyberspace, blurring the line between actual and virtual reality. A typical trope in such work is a direct connection between the human brain and computer systems. Cyberpunk settings are dystopias with corruption, computers, and computer networks. Giant, multinational corporations have for the most part replaced governments as centers of political, economic, and even military power.",
"title": "Style and ethos"
},
{
"paragraph_id": 28,
"text": "The economic and technological state of Japan is a regular theme in the cyberpunk literature of the 1980s. Of Japan's influence on the genre, William Gibson said, \"Modern Japan simply was cyberpunk.\" Cyberpunk is often set in urbanized, artificial landscapes, and \"city lights, receding\" was used by Gibson as one of the genre's first metaphors for cyberspace and virtual reality. The cityscapes of Hong Kong has had major influences in the urban backgrounds, ambiance and settings in many cyberpunk works such as Blade Runner and Shadowrun. Ridley Scott envisioned the landscape of cyberpunk Los Angeles in Blade Runner to be \"Hong Kong on a very bad day\". The streetscapes of the Ghost in the Shell film were based on Hong Kong. Its director Mamoru Oshii felt that Hong Kong's strange and chaotic streets where \"old and new exist in confusing relationships\", fit the theme of the film well. Hong Kong's Kowloon Walled City is particularly notable for its disorganized hyper-urbanization and breakdown in traditional urban planning to be an inspiration to cyberpunk landscapes. Portrayals of East Asia and Asians in Western cyberpunk have been criticized as Orientalist and promoting racist tropes playing on American and European fears of East Asian dominance; this has been referred to as \"techno-Orientalism\".",
"title": "Style and ethos"
},
{
"paragraph_id": 29,
"text": "Cyberpunk can be intended to disquiet readers and call them to action. It often expresses a sense of rebellion, suggesting that one could describe it as a type of cultural revolution in science fiction. In the words of author and critic David Brin:",
"title": "Style and ethos"
},
{
"paragraph_id": 30,
"text": "...a closer look [at cyberpunk authors] reveals that they nearly always portray future societies in which governments have become wimpy and pathetic ...Popular science fiction tales by Gibson, Williams, Cadigan and others do depict Orwellian accumulations of power in the next century, but nearly always clutched in the secretive hands of a wealthy or corporate elite.",
"title": "Style and ethos"
},
{
"paragraph_id": 31,
"text": "Cyberpunk stories have also been seen as fictional forecasts of the evolution of the Internet. The earliest descriptions of a global communications network came long before the World Wide Web entered popular awareness, though not before traditional science-fiction writers such as Arthur C. Clarke and some social commentators such as James Burke began predicting that such networks would eventually form.",
"title": "Style and ethos"
},
{
"paragraph_id": 32,
"text": "Some observers cite that cyberpunk tends to marginalize sectors of society such as women and people of colour. It is claimed that, for instance, cyberpunk depicts fantasies that ultimately empower masculinity using fragmentary and decentered aesthetic that culminate in a masculine genre populated by male outlaws. Critics also note the absence of any reference to Africa or black characters in the quintessential cyberpunk film Blade Runner while other films reinforce stereotypes.",
"title": "Style and ethos"
},
{
"paragraph_id": 33,
"text": "Minnesota writer Bruce Bethke coined the term in 1983 for his short story \"Cyberpunk\", which was published in an issue of Amazing Science Fiction Stories. The term was quickly appropriated as a label to be applied to the works of William Gibson, Bruce Sterling, Pat Cadigan and others. Of these, Sterling became the movement's chief ideologue, thanks to his fanzine Cheap Truth. John Shirley wrote articles on Sterling and Rucker's significance. John Brunner's 1975 novel The Shockwave Rider is considered by many to be the first cyberpunk novel with many of the tropes commonly associated with the genre, some five years before the term was popularized by Dozois.",
"title": "Media"
},
{
"paragraph_id": 34,
"text": "William Gibson with his novel Neuromancer (1984) is arguably the most famous writer connected with the term cyberpunk. He emphasized style, a fascination with surfaces, and atmosphere over traditional science-fiction tropes. Regarded as ground-breaking and sometimes as \"the archetypal cyberpunk work\", Neuromancer was awarded the Hugo, Nebula, and Philip K. Dick Awards. Count Zero (1986) and Mona Lisa Overdrive (1988) followed after Gibson's popular debut novel. According to the Jargon File, \"Gibson's near-total ignorance of computers and the present-day hacker culture enabled him to speculate about the role of computers and hackers in the future in ways hackers have since found both irritatingly naïve and tremendously stimulating.\"",
"title": "Media"
},
{
"paragraph_id": 35,
"text": "Early on, cyberpunk was hailed as a radical departure from science-fiction standards and a new manifestation of vitality. Shortly thereafter, however, some critics arose to challenge its status as a revolutionary movement. These critics said that the science fiction New Wave of the 1960s was much more innovative as far as narrative techniques and styles were concerned. Furthermore, while Neuromancer's narrator may have had an unusual \"voice\" for science fiction, much older examples can be found: Gibson's narrative voice, for example, resembles that of an updated Raymond Chandler, as in his novel The Big Sleep (1939). Others noted that almost all traits claimed to be uniquely cyberpunk could in fact be found in older writers' works—often citing J. G. Ballard, Philip K. Dick, Harlan Ellison, Stanisław Lem, Samuel R. Delany, and even William S. Burroughs. For example, Philip K. Dick's works contain recurring themes of social decay, artificial intelligence, paranoia, and blurred lines between objective and subjective realities. The influential cyberpunk movie Blade Runner (1982) is based on his book, Do Androids Dream of Electric Sheep?. Humans linked to machines are found in Pohl and Kornbluth's Wolfbane (1959) and Roger Zelazny's Creatures of Light and Darkness (1968).",
"title": "Media"
},
{
"paragraph_id": 36,
"text": "In 1994, scholar Brian Stonehill suggested that Thomas Pynchon's 1973 novel Gravity's Rainbow \"not only curses but precurses what we now glibly dub cyberspace.\" Other important predecessors include Alfred Bester's two most celebrated novels, The Demolished Man and The Stars My Destination, as well as Vernor Vinge's novella True Names.",
"title": "Media"
},
{
"paragraph_id": 37,
"text": "Science-fiction writer David Brin describes cyberpunk as \"the finest free promotion campaign ever waged on behalf of science fiction\". It may not have attracted the \"real punks\", but it did ensnare many new readers, and it provided the sort of movement that postmodern literary critics found alluring. Cyberpunk made science fiction more attractive to academics, argues Brin; in addition, it made science fiction more profitable to Hollywood and to the visual arts generally. Although the \"self-important rhetoric and whines of persecution\" on the part of cyberpunk fans were irritating at worst and humorous at best, Brin declares that the \"rebels did shake things up. We owe them a debt.\"",
"title": "Media"
},
{
"paragraph_id": 38,
"text": "Fredric Jameson considers cyberpunk the \"supreme literary expression if not of postmodernism, then of late capitalism itself\".",
"title": "Media"
},
{
"paragraph_id": 39,
"text": "Cyberpunk further inspired many later writers to incorporate cyberpunk ideas into their own works, such as George Alec Effinger's When Gravity Fails. Wired magazine, created by Louis Rossetto and Jane Metcalfe, mixes new technology, art, literature, and current topics in order to interest today's cyberpunk fans, which Paula Yoo claims \"proves that hardcore hackers, multimedia junkies, cyberpunks and cellular freaks are poised to take over the world\".",
"title": "Media"
},
{
"paragraph_id": 40,
"text": "The film Blade Runner (1982) is set in 2019 in a dystopian future in which manufactured beings called replicants are slaves used on space colonies and are legal prey on Earth to various bounty hunters who \"retire\" (kill) them. Although Blade Runner was largely unsuccessful in its first theatrical release, it found a viewership in the home video market and became a cult film. Since the movie omits the religious and mythical elements of Dick's original novel (e.g. empathy boxes and Wilbur Mercer), it falls more strictly within the cyberpunk genre than the novel does. William Gibson would later reveal that upon first viewing the film, he was surprised at how the look of this film matched his vision for Neuromancer, a book he was then working on. The film's tone has since been the staple of many cyberpunk movies, such as The Matrix trilogy (1999–2003), which uses a wide variety of cyberpunk elements. A sequel to Blade Runner was released in 2017.",
"title": "Media"
},
{
"paragraph_id": 41,
"text": "The TV series Max Headroom (1987) is an iconic cyberpunk work, taking place in a futuristic dystopia ruled by an oligarchy of television networks. Computer hacking played a central role in many of the story lines. Max Headroom has been called \"the first cyberpunk television series\".",
"title": "Media"
},
{
"paragraph_id": 42,
"text": "The number of films in the genre has grown steadily since Blade Runner. Several of Philip K. Dick's works have been adapted to the silver screen. The films Johnny Mnemonic (1995) and New Rose Hotel (1998), both based on short stories by William Gibson, flopped commercially and critically. Other cyberpunk films include RoboCop (1987), Total Recall (1990), Hardware (1990), The Lawnmower Man (1992), 12 Monkeys (1995), Hackers (1995), and Strange Days (1995). Some cyberpunk films have been described as tech-noir, a hybrid genre combining neo-noir and science fiction or cyberpunk.",
"title": "Media"
},
{
"paragraph_id": 43,
"text": "The Japanese cyberpunk subgenre began in 1982 with the debut of Katsuhiro Otomo's manga series Akira, with its 1988 anime film adaptation, which Otomo directed, later popularizing the subgenre. Akira inspired a wave of Japanese cyberpunk works, including manga and anime series such as Ghost in the Shell, Battle Angel Alita, and Cowboy Bebop. Other early Japanese cyberpunk works include the 1982 film Burst City, the 1985 original video animation Megazone 23, and the 1989 film Tetsuo: The Iron Man.",
"title": "Media"
},
{
"paragraph_id": 44,
"text": "According to Paul Gravett, when Akira began to be published, cyberpunk literature had not yet been translated into Japanese, Otomo has distinct inspirations such as Mitsuteru Yokoyama's manga series Tetsujin 28-go (1956–1966) and Moebius.",
"title": "Media"
},
{
"paragraph_id": 45,
"text": "In contrast to Western cyberpunk which has roots in New Wave science fiction literature, Japanese cyberpunk has roots in underground music culture, specifically the Japanese punk subculture that arose from the Japanese punk music scene in the 1970s. The filmmaker Sogo Ishii introduced this subculture to Japanese cinema with the punk film Panic High School (1978) and the punk biker film Crazy Thunder Road (1980), both portraying the rebellion and anarchy associated with punk, and the latter featuring a punk biker gang aesthetic. Ishii's punk films paved the way for Otomo's seminal cyberpunk work Akira.",
"title": "Media"
},
{
"paragraph_id": 46,
"text": "Cyberpunk themes are widely visible in anime and manga. In Japan, where cosplay is popular and not only teenagers display such fashion styles, cyberpunk has been accepted and its influence is widespread. William Gibson's Neuromancer, whose influence dominated the early cyberpunk movement, was also set in Chiba, one of Japan's largest industrial areas, although at the time of writing the novel Gibson did not know the location of Chiba and had no idea how perfectly it fit his vision in some ways. The exposure to cyberpunk ideas and fiction in the 1980s has allowed it to seep into the Japanese culture.",
"title": "Media"
},
{
"paragraph_id": 47,
"text": "Cyberpunk anime and manga draw upon a futuristic vision which has elements in common with Western science fiction and therefore have received wide international acceptance outside Japan. \"The conceptualization involved in cyberpunk is more of forging ahead, looking at the new global culture. It is a culture that does not exist right now, so the Japanese concept of a cyberpunk future, seems just as valid as a Western one, especially as Western cyberpunk often incorporates many Japanese elements.\" William Gibson is now a frequent visitor to Japan, and he came to see that many of his visions of Japan have become a reality:",
"title": "Media"
},
{
"paragraph_id": 48,
"text": "Modern Japan simply was cyberpunk. The Japanese themselves knew it and delighted in it. I remember my first glimpse of Shibuya, when one of the young Tokyo journalists who had taken me there, his face drenched with the light of a thousand media-suns—all that towering, animated crawl of commercial information—said, \"You see? You see? It is Blade Runner town.\" And it was. It so evidently was.",
"title": "Media"
},
{
"paragraph_id": 49,
"text": "Akira (1982 manga) and its 1988 anime film adaptation have influenced numerous works in animation, comics, film, music, television and video games. Akira has been cited as a major influence on Hollywood films such as The Matrix, Chronicle, Looper, Midnight Special, and Inception, as well as cyberpunk-influenced video games such as Hideo Kojima's Snatcher and Metal Gear Solid, Valve's Half-Life series and Dontnod Entertainment's Remember Me. Akira has also influenced the work of musicians such as Kanye West, who paid homage to Akira in the \"Stronger\" music video, and Lupe Fiasco, whose album Tetsuo & Youth is named after Tetsuo Shima. The popular bike from the film, Kaneda's Motorbike, appears in Steven Spielberg's film Ready Player One, and CD Projekt's video game Cyberpunk 2077.",
"title": "Media"
},
{
"paragraph_id": 50,
"text": "Ghost in the Shell (1995) influenced a number of prominent filmmakers, most notably the Wachowskis in The Matrix (1999) and its sequels. The Matrix series took several concepts from the film, including the Matrix digital rain, which was inspired by the opening credits of Ghost in the Shell and a sushi magazine the wife of the senior designer of the animation, Simon Witheley, had in the kitchen at the time., and the way characters access the Matrix through holes in the back of their necks. Other parallels have been drawn to James Cameron's Avatar, Steven Spielberg's A.I. Artificial Intelligence, and Jonathan Mostow's Surrogates. James Cameron cited Ghost in the Shell as a source of inspiration, citing it as an influence on Avatar.",
"title": "Media"
},
{
"paragraph_id": 51,
"text": "The original video animation Megazone 23 (1985) has a number of similarities to The Matrix. Battle Angel Alita (1990) has had a notable influence on filmmaker James Cameron, who was planning to adapt it into a film since 2000. It was an influence on his TV series Dark Angel, and he is the producer of the 2019 film adaptation Alita: Battle Angel.",
"title": "Media"
},
{
"paragraph_id": 52,
"text": "In 1975, artist Moebius collaborated with writer Dan O'Bannon on a story called The Long Tomorrow, published in the French magazine Métal Hurlant. One of the first works featuring elements now seen as exemplifying cyberpunk, it combined influences from film noir and hardboiled crime fiction with a distant sci-fi environment. Author William Gibson stated that Moebius' artwork for the series, along with other visuals from Métal Hurlant, strongly influenced his 1984 novel Neuromancer. The series had a far-reaching impact in the cyberpunk genre, being cited as an influence on Ridley Scott's Alien (1979) and Blade Runner. Moebius later expanded upon The Long Tomorrow's aesthetic with The Incal, a graphic novel collaboration with Alejandro Jodorowsky published from 1980 to 1988. The story centers around the exploits of a detective named John Difool in various science fiction settings, and while not confined to the tropes of cyberpunk, it features many elements of the genre. Moebius was one of the designers of Tron (1982), a movie that shows a world inside a computer.",
"title": "Media"
},
{
"paragraph_id": 53,
"text": "Concurrently with many other foundational cyberpunk works, DC Comics published Frank Miller's six-issue miniseries Rōnin from 1983 to 1984. The series, incorporating aspects of Samurai culture, martial arts films and manga, is set in a dystopian near-future New York. It explores the link between an ancient Japanese warrior and the apocalyptic, crumbling cityscape he finds himself in. The comic also bears several similarities to Akira, with highly powerful telepaths playing central roles, as well as sharing many key visuals.",
"title": "Media"
},
{
"paragraph_id": 54,
"text": "Rōnin would go on to influence many later works, including Samurai Jack and the Teenage Mutant Ninja Turtles, as well as video games such as Cyberpunk 2077. Two years later, Miller himself would incorporate several toned-down elements of Rōnin into his acclaimed 1986 miniseries The Dark Knight Returns, in which a retired Bruce Wayne once again takes up the mantle of Batman in a Gotham that is increasingly becoming more dystopian.",
"title": "Media"
},
{
"paragraph_id": 55,
"text": "Paul Pope's Batman: Year 100, published in 2006, also exhibits several traits typical of cyberpunk fiction, such as a rebel protagonist opposing a future authoritarian state, and a distinct retrofuturist aesthetic that makes callbacks to both The Dark Knight Returns and Batman's original appearances in the 1940s.",
"title": "Media"
},
{
"paragraph_id": 56,
"text": "There are many cyberpunk video games. Popular series include Final Fantasy VII and its spin-offs and remake, the Megami Tensei series, Kojima's Snatcher and Metal Gear series, Deus Ex series, Syndicate series, and System Shock and its sequel. Other games, like Blade Runner, Ghost in the Shell, and the Matrix series, are based upon genre movies, or role-playing games (for instance the various Shadowrun games).",
"title": "Media"
},
{
"paragraph_id": 57,
"text": "Several RPGs called Cyberpunk exist: Cyberpunk, Cyberpunk 2020, Cyberpunk v3.0 and Cyberpunk Red written by Mike Pondsmith and published by R. Talsorian Games, and GURPS Cyberpunk, published by Steve Jackson Games as a module of the GURPS family of RPGs. Cyberpunk 2020 was designed with the settings of William Gibson's writings in mind, and to some extent with his approval, unlike the approach taken by FASA in producing the transgenre Shadowrun game and its various sequels, which mixes cyberpunk with fantasy elements such as magic and fantasy races such as orcs and elves. Both are set in the near future, in a world where cybernetics are prominent. In addition, Iron Crown Enterprises released an RPG named Cyberspace, which was out of print for several years until recently being re-released in online PDF form. CD Projekt Red released Cyberpunk 2077, a cyberpunk open world first-person shooter/role-playing video game (RPG) based on the tabletop RPG Cyberpunk 2020, on December 10, 2020. In 1990, in a convergence of cyberpunk art and reality, the United States Secret Service raided Steve Jackson Games's headquarters and confiscated all their computers. Officials denied that the target had been the GURPS Cyberpunk sourcebook, but Jackson would later write that he and his colleagues \"were never able to secure the return of the complete manuscript; [...] The Secret Service at first flatly refused to return anything – then agreed to let us copy files, but when we got to their office, restricted us to one set of out-of-date files – then agreed to make copies for us, but said \"tomorrow\" every day from March 4 to March 26. On March 26 we received a set of disks which purported to be our files, but the material was late, incomplete and well-nigh useless.\" Steve Jackson Games won a lawsuit against the Secret Service, aided by the new Electronic Frontier Foundation. This event has achieved a sort of notoriety, which has extended to the book itself as well. All published editions of GURPS Cyberpunk have a tagline on the front cover, which reads \"The book that was seized by the U.S. Secret Service!\" Inside, the book provides a summary of the raid and its aftermath.",
"title": "Media"
},
{
"paragraph_id": 58,
"text": "Cyberpunk has also inspired several tabletop, miniature and board games such as Necromunda by Games Workshop. Netrunner is a collectible card game introduced in 1996, based on the Cyberpunk 2020 role-playing game. Tokyo NOVA, debuting in 1993, is a cyberpunk role-playing game that uses playing cards instead of dice.",
"title": "Media"
},
{
"paragraph_id": 59,
"text": "Cyberpunk 2077 set a new record for the largest number of simultaneous players in a single player game, with a record 1,003,262 playing just after the December 10th launch, according to Steam Database. That tops the previous Steam record of 472,962 players set by Fallout 4 back in 2015.",
"title": "Media"
},
{
"paragraph_id": 60,
"text": "\"Much of the industrial/dance heavy 'Cyberpunk'—recorded in Billy Idol's Macintosh-run studio—revolves around Idol's theme of the common man rising up to fight against a faceless, soulless, corporate world.\"",
"title": "Media"
},
{
"paragraph_id": 61,
"text": "—Julie Romandetta",
"title": "Media"
},
{
"paragraph_id": 62,
"text": "Invariably the origin of cyberpunk music lies in the synthesizer-heavy scores of cyberpunk films such as Escape from New York (1981) and Blade Runner (1982). Some musicians and acts have been classified as cyberpunk due to their aesthetic style and musical content. Often dealing with dystopian visions of the future or biomechanical themes, some fit more squarely in the category than others. Bands whose music has been classified as cyberpunk include Psydoll, Front Line Assembly, Clock DVA, Angelspit and Sigue Sigue Sputnik.",
"title": "Media"
},
{
"paragraph_id": 63,
"text": "Some musicians not normally associated with cyberpunk have at times been inspired to create concept albums exploring such themes. Albums such as the British musician and songwriter Gary Numan's Replicas, The Pleasure Principle and Telekon were heavily inspired by the works of Philip K. Dick. Kraftwerk's The Man-Machine and Computer World albums both explored the theme of humanity becoming dependent on technology. Nine Inch Nails' concept album Year Zero also fits into this category. Fear Factory concept albums are heavily based upon future dystopia, cybernetics, clash between man and machines, virtual worlds. Billy Idol's Cyberpunk drew heavily from cyberpunk literature and the cyberdelic counter culture in its creation. 1. Outside, a cyberpunk narrative fueled concept album by David Bowie, was warmly met by critics upon its release in 1995. Many musicians have also taken inspiration from specific cyberpunk works or authors, including Sonic Youth, whose albums Sister and Daydream Nation take influence from the works of Philip K. Dick and William Gibson respectively. Madonna's 2001 Drowned World Tour opened with a cyberpunk section, where costumes, asethetics and stage props were used to accentuate the dystopian nature of the theatrical concert. Lady Gaga used a cyberpunk-persona and visual style for her sixth studio album Chromatica (2020).",
"title": "Media"
},
{
"paragraph_id": 64,
"text": "Vaporwave and synthwave are also influenced by cyberpunk. The former has been inspired by one of the messages of cyberpunk and is interpreted as a dystopian critique of capitalism in the vein of cyberpunk and the latter is more surface-level, inspired only by the aesthetic of cyberpunk as a nostalgic retrofuturistic revival of aspects of cyberpunk's origins.",
"title": "Media"
},
{
"paragraph_id": 65,
"text": "Writers David Suzuki and Holly Dressel describe the cafes, brand-name stores and video arcades of the Sony Center in the Potsdamer Platz public square of Berlin, Germany, as \"a vision of a cyberpunk, corporate urban future\".",
"title": "Social impact"
},
{
"paragraph_id": 66,
"text": "Several subcultures have been inspired by cyberpunk fiction. These include the cyberdelic counter culture of the late 1980s and early 1990s. Cyberdelic, whose adherents referred to themselves as \"cyberpunks\", attempted to blend the psychedelic art and drug movement with the technology of cyberculture. Early adherents included Timothy Leary, Mark Frauenfelder and R. U. Sirius. The movement largely faded following the dot-com bubble implosion of 2000.",
"title": "Social impact"
},
{
"paragraph_id": 67,
"text": "Cybergoth is a fashion and dance subculture which draws its inspiration from cyberpunk fiction, as well as rave and Gothic subcultures. In addition, a distinct cyberpunk fashion of its own has emerged in recent years which rejects the raver and goth influences of cybergoth, and draws inspiration from urban street fashion, \"post apocalypse\", functional clothing, high tech sports wear, tactical uniform and multifunction. This fashion goes by names like \"tech wear\", \"goth ninja\" or \"tech ninja\".",
"title": "Social impact"
},
{
"paragraph_id": 68,
"text": "The Kowloon Walled City in Hong Kong (demolished in 1994) is often referenced as the model cyberpunk/dystopian slum as, given its poor living conditions at the time coupled with the city's political, physical, and economic isolation has caused many in academia to be fascinated by the ingenuity of its spawning.",
"title": "Social impact"
},
{
"paragraph_id": 69,
"text": "As a wider variety of writers began to work with cyberpunk concepts, new subgenres of science fiction emerged, some of which could be considered as playing off the cyberpunk label, others which could be considered as legitimate explorations into newer territory. These focused on technology and its social effects in different ways. One prominent subgenre is \"steampunk,\" which is set in an alternate history Victorian era that combines anachronistic technology with cyberpunk's bleak film noir world view. The term was originally coined around 1987 as a joke to describe some of the novels of Tim Powers, James P. Blaylock, and K.W. Jeter, but by the time Gibson and Sterling entered the subgenre with their collaborative novel The Difference Engine the term was being used earnestly as well.",
"title": "Social impact"
},
{
"paragraph_id": 70,
"text": "Another subgenre is \"biopunk\" (cyberpunk themes dominated by biotechnology) from the early 1990s, a derivative style building on biotechnology rather than informational technology. In these stories, people are changed in some way not by mechanical means, but by genetic manipulation.",
"title": "Social impact"
},
{
"paragraph_id": 71,
"text": "Cyberpunk works have been described as well situated within postmodern literature.",
"title": "Social impact"
},
{
"paragraph_id": 72,
"text": "In the United States, the term \"Cyberpunk\" is a registered trademark owned by CD Projekt SA who obtained it from the previous owner R. Talsorian Games Inc. who originally registered it for its tabletop role-playing game. R. Talsorian Games currently used the trademark under license from CD Projekt SA for the tabletop role-playing game.",
"title": "Registered trademark status"
},
{
"paragraph_id": 73,
"text": "Within the European Union, the \"Cyberpunk\" trademark is owned by two parties: CD Projekt SA for \"games and online gaming services\" (particularly for the video game adaptation of the former) and by Sony Music for use outside games.",
"title": "Registered trademark status"
}
] | Cyberpunk is a subgenre of science fiction in a dystopian futuristic setting that tends to focus on a "combination of lowlife and high tech", featuring futuristic technological and scientific achievements, such as artificial intelligence and cyberware, juxtaposed with societal collapse, dystopia or decay. Much of cyberpunk is rooted in the New Wave science fiction movement of the 1960s and 1970s, when writers like Philip K. Dick, Michael Moorcock, Roger Zelazny, John Brunner, J. G. Ballard, Philip José Farmer and Harlan Ellison examined the impact of drug culture, technology, and the sexual revolution while avoiding the utopian tendencies of earlier science fiction. Comics exploring cyberpunk themes began appearing as early as Judge Dredd, first published in 1977. Released in 1984, William Gibson's influential debut novel Neuromancer helped solidify cyberpunk as a genre, drawing influence from punk subculture and early hacker culture. Frank Miller's Ronin is an example of a cyberpunk graphic novel. Other influential cyberpunk writers included Bruce Sterling and Rudy Rucker. The Japanese cyberpunk subgenre began in 1982 with the debut of Katsuhiro Otomo's manga series Akira, with its 1988 anime film adaptation later popularizing the subgenre. Early films in the genre include Ridley Scott's 1982 film Blade Runner, one of several of Philip K. Dick's works that have been adapted into films. The "first cyberpunk television series" was the TV series Max Headroom from 1987, playing in a futuristic dystopia ruled by an oligarchy of television networks, and where computer hacking played a central role in many story lines. The films Johnny Mnemonic (1995) and New Rose Hotel (1998), both based upon short stories by William Gibson, flopped commercially and critically, while The Matrix trilogy (1999–2003) and Judge Dredd (1995) were some of the most successful cyberpunk films. Newer cyberpunk media includes Blade Runner 2049 (2017), a sequel to the original 1982 film; Dredd (2012), which was not a sequel to the original movie; Upgrade (2018); Alita: Battle Angel (2019), based on the 1990s Japanese manga Battle Angel Alita; the 2018 Netflix TV series Altered Carbon, based on Richard K. Morgan's 2002 novel of the same name; the 2020 remake of 1997 role-playing video game Final Fantasy VII; and the video game Cyberpunk 2077 (2020), based on R. Talsorian Games's 1988 tabletop role-playing game Cyberpunk. | 2001-09-29T03:37:18Z | 2023-12-31T14:22:47Z | [
"Template:Spoken Wikipedia",
"Template:Reflist",
"Template:Cite web",
"Template:Cite journal",
"Template:Cyberpunk",
"Template:Science fiction",
"Template:Authority control",
"Template:Who",
"Template:Colend",
"Template:Film genres",
"Template:Webarchive",
"Template:Avant-garde",
"Template:Cols",
"Template:Goth subculture",
"Template:Incomprehensible inline",
"Template:Original research inline",
"Template:Portal",
"Template:Multiple image",
"Template:Main",
"Template:Clear",
"Template:Cite interview",
"Template:Short description",
"Template:Other uses",
"Template:When",
"Template:Cite book",
"Template:Citation needed",
"Template:See also",
"Template:By whom",
"Template:Quote box",
"Template:Cite news",
"Template:Cite magazine",
"Template:Blockquote",
"Template:'"
] | https://en.wikipedia.org/wiki/Cyberpunk |
5,704 | Comic strip | A comic strip is a sequence of cartoons, arranged in interrelated panels to display brief humor or form a narrative, often serialized, with text in balloons and captions. Traditionally, throughout the 20th and into the 21st century, these have been published in newspapers and magazines, with daily horizontal strips printed in black-and-white in newspapers, while Sunday papers offered longer sequences in special color comics sections. With the advent of the internet, online comic strips began to appear as webcomics.
Most strips are written and drawn by a comics artist, known as a cartoonist. As the word "comic" implies, strips are frequently humorous. Examples of these gag-a-day strips are Blondie, Bringing Up Father, Marmaduke, and Pearls Before Swine. In the late 1920s, comic strips expanded from their mirthful origins to feature adventure stories, as seen in Popeye, Captain Easy, Buck Rogers, Tarzan, and Terry and the Pirates. In the 1940s, soap-opera-continuity strips such as Judge Parker and Mary Worth gained popularity. Because "comic" strips are not always funny, cartoonist Will Eisner has suggested that sequential art would be a better genre-neutral name.
Comic strips have appeared inside American magazines such as Liberty and Boys' Life, but also on the front covers, such as the Flossy Frills series on The American Weekly Sunday newspaper supplement. In the UK and the rest of Europe, comic strips are also serialized in comic book magazines, with a strip's story sometimes continuing over three pages.
Storytelling using a sequence of pictures has existed through history. One medieval European example in textile form is the Bayeux Tapestry. Printed examples emerged in 19th-century Germany and in 18th-century England, where some of the first satirical or humorous sequential narrative drawings were produced. William Hogarth's 18th-century English cartoons include both narrative sequences, such as A Rake's Progress, and single panels.
The Biblia pauperum ("Paupers' Bible"), a tradition of picture Bibles beginning in the Late Middle Ages, sometimes depicted Biblical events with words spoken by the figures in the miniatures written on scrolls coming out of their mouths—which makes them to some extent ancestors of the modern cartoon strips.
In China, with its traditions of block printing and of the incorporation of text with image, experiments with what became lianhuanhua date back to 1884.
The first newspaper comic strips appeared in North America in the late 19th century. The Yellow Kid is usually credited as one of the first newspaper strips. However, the art form combining words and pictures developed gradually, and there are many examples which led up to the comic strip.
The Glasgow Looking Glass was the first mass-produced publication to tell stories using illustrations and is regarded as the world's first comic strip. It satirised the political and social life of Scotland in the 1820s. It was conceived and illustrated by William Heath.
Swiss author and caricature artist Rodolphe Töpffer (Geneva, 1799–1846) is considered the father of the modern comic strip. His illustrated stories, such as Histoire de M. Vieux Bois (1827), first published in the US in 1842 as The Adventures of Obadiah Oldbuck, and Histoire de Monsieur Jabot (1831), inspired subsequent generations of German and American comic artists. In 1865, German painter, author, and caricaturist Wilhelm Busch created the strip Max and Moritz, about two trouble-making boys, which had a direct influence on the American comic strip. Max and Moritz was a series of seven severely moralistic tales in the vein of German children's stories such as Struwwelpeter ("Shockheaded Peter"). In the story's final act, the boys, after perpetrating some mischief, are tossed into a sack of grain, run through a mill, and consumed by a flock of geese (without anybody mourning their demise). Max and Moritz provided an inspiration for German immigrant Rudolph Dirks, who created the Katzenjammer Kids in 1897, a strip starring two German-American boys visually modelled on Max and Moritz. Familiar comic-strip iconography such as stars for pain, sawing logs for snoring, speech balloons, and thought balloons originated in Dirks' strip.
Hugely popular, Katzenjammer Kids occasioned one of the first comic-strip copyright ownership suits in the history of the medium. When Dirks left William Randolph Hearst for the promise of a better salary under Joseph Pulitzer, it was an unusual move, since cartoonists regularly deserted Pulitzer for Hearst. In a highly unusual court decision, Hearst retained the rights to the name "Katzenjammer Kids", while creator Dirks retained the rights to the characters. Hearst promptly hired Harold Knerr to draw his own version of the strip. Dirks renamed his version Hans and Fritz (later, The Captain and the Kids). Thus, two versions distributed by rival syndicates graced the comics pages for decades. Dirks' version, eventually distributed by United Feature Syndicate, ran until 1979.
In the United States, the great popularity of comics sprang from the newspaper war (1887 onwards) between Pulitzer and Hearst. The Little Bears (1893–96) was the first American comic strip with recurring characters, while the first color comic supplement was published by the Chicago Inter-Ocean sometime in the latter half of 1892, followed by the New York Journal's first color Sunday comic pages in 1897. On January 31, 1912, Hearst introduced the nation's first full daily comic page in his New York Evening Journal. The history of this newspaper rivalry and the rapid appearance of comic strips in most major American newspapers is discussed by Ian Gordon. Numerous events in newspaper comic strips have reverberated throughout society at large, though few of these events have occurred in recent years, owing mainly to the declining use of continuous storylines in newspaper comic strips, which had been waning as an entertainment form since the 1970s. From 1903 to 1905, Gustave Verbeek drew his comic series "The Upside-Downs of Old Man Muffaroo and Little Lady Lovekins". These comics were made in such a way that one could read the six-panel comic, flip it upside down, and keep reading. He made 64 such comics in total.
The longest-running American comic strips are:
Most newspaper comic strips are syndicated; a syndicate hires people to write and draw a strip and then distributes it to many newspapers for a fee. Some newspaper strips begin or remain exclusive to one newspaper. For example, the Pogo comic strip by Walt Kelly originally appeared only in the New York Star in 1948 and was not picked up for syndication until the following year.
Newspaper comic strips come in two different types: daily strips and Sunday strips. In the United States, a daily strip appears in newspapers on weekdays, Monday through Saturday, as contrasted with a Sunday strip, which typically only appears on Sundays. Daily strips usually are printed in black and white, and Sunday strips are usually in color. However, a few newspapers have published daily strips in color, and some newspapers have published Sunday strips in black and white.
Ally Sloper, who made his first appearance in the British magazine Judy in 1867 at the hands of writer and fledgling artist Charles H. Ross, is one of the earliest comic strip characters and is regarded as the first recurring character in comics. The highly popular character was spun off into his own comic, Ally Sloper's Half Holiday, in 1884.
While in the early 20th century comic strips were a frequent target for detractors of "yellow journalism", by the 1920s the medium had become wildly popular. While radio, and later, television surpassed newspapers as a means of entertainment, most comic strip characters remained widely recognizable until the 1980s, and the "funny pages" were often arranged so that they appeared at the front of Sunday editions. In 1931, George Gallup's first poll found the comic section to be the most important part of the newspaper, with additional surveys pointing out that the comic strips were the second most popular feature after the picture page. During the 1930s, many comic sections had between 12 and 16 pages, although in some cases, these had up to 24 pages.
The popularity and accessibility of strips meant they were often clipped and saved; authors including John Updike and Ray Bradbury have written about their childhood collections of clipped strips. Often posted on bulletin boards, clipped strips had an ancillary form of distribution when they were faxed, photocopied or mailed. The Baltimore Sun's Linda White recalled, "I followed the adventures of Winnie Winkle, Moon Mullins and Dondi, and waited each fall to see how Lucy would manage to trick Charlie Brown into trying to kick that football. (After I left for college, my father would clip out that strip each year and send it to me just to make sure I didn't miss it.)"
The two conventional formats for newspaper comics are strips and single gag panels. The strips are usually displayed horizontally, wider than they are tall. Single panels are square, circular or taller than they are wide. Strips usually, but not always, are broken up into several smaller panels with continuity from panel to panel. A horizontal strip can also be used for a single panel with a single gag, as seen occasionally in Mike Peters' Mother Goose and Grimm.
Early daily strips were large, often running the entire width of the newspaper, and were sometimes three or more inches high. Initially, a newspaper page included only a single daily strip, usually either at the top or the bottom of the page. By the 1920s, many newspapers had a comics page on which many strips were collected together. During the 1930s, the original art for a daily strip could be drawn as large as 25 inches wide by six inches high. Over decades, the size of daily strips became smaller and smaller, until by 2000, four standard daily strips could fit in an area once occupied by a single daily strip. As strips have become smaller, the number of panels have been reduced.
Proof sheets were the means by which syndicates provided newspapers with black-and-white line art for the reproduction of strips (which they arranged to have colored in the case of Sunday strips). Michigan State University Comic Art Collection librarian Randy Scott describes these as "large sheets of paper on which newspaper comics have traditionally been distributed to subscribing newspapers. Typically each sheet will have either six daily strips of a given title or one Sunday strip. Thus, a week of Beetle Bailey would arrive at the Lansing State Journal in two sheets, printed much larger than the final version and ready to be cut apart and fitted into the local comics page." Comic strip historian Allan Holtz described how strips were provided as mats (the plastic or cardboard trays into which molten metal is poured to make plates) or even as plates ready to be put directly on the printing press. He also notes that with electronic means of distribution becoming more prevalent, printed sheets "are definitely on their way out."
NEA Syndicate experimented briefly with a two-tier daily strip, Star Hawks, but after a few years, Star Hawks dropped down to a single tier.
In Flanders, the two-tier strip is the standard publication style of most daily strips like Spike and Suzy and Nero. They appear Monday through Saturday; until 2003 there were no Sunday papers in Flanders. In the last decades, they have switched from black and white to color.
Single panels usually, but not always, are not broken up and lack continuity. The daily Peanuts is a strip, and the daily Dennis the Menace is a single panel. J. R. Williams' long-run Out Our Way continued as a daily panel even after it expanded into a Sunday strip, Out Our Way with the Willets. Jimmy Hatlo's They'll Do It Every Time was often displayed in a two-panel format with the first panel showing some deceptive, pretentious, unwitting or scheming human behavior and the second panel revealing the truth of the situation.
Sunday newspapers traditionally included a special color section. Early Sunday strips (known colloquially as "the funny papers", shortened to "the funnies"), such as Thimble Theatre and Little Orphan Annie, filled an entire newspaper page, a format known to collectors as full page. Sunday pages during the 1930s and into the 1940s often carried a secondary strip by the same artist as the main strip. No matter whether it appeared above or below a main strip, the extra strip was known as the topper, such as The Squirrel Cage which ran along with Room and Board, both drawn by Gene Ahern.
During the 1930s, the original art for a Sunday strip was usually drawn quite large. For example, in 1930, Russ Westover drew his Tillie the Toiler Sunday page at a size of 17" × 37". In 1937, the cartoonist Dudley Fisher launched the innovative Right Around Home, drawn as a huge single panel filling an entire Sunday page.
Full-page strips were eventually replaced by strips half that size. Strips such as The Phantom and Terry and the Pirates began appearing in a format of two strips to a page in full-size newspapers, such as the New Orleans Times Picayune, or with one strip on a tabloid page, as in the Chicago Sun-Times. When Sunday strips began to appear in more than one format, it became necessary for the cartoonist to allow for rearranged, cropped or dropped panels. During World War II, because of paper shortages, the size of Sunday strips began to shrink. After the war, strips continued to get smaller and smaller because of increased paper and printing costs. The last full-page comic strip was the Prince Valiant strip for 11 April 1971.
Comic strips have also been published in Sunday newspaper magazines. Russell Patterson and Carolyn Wells' New Adventures of Flossy Frills was a continuing strip series seen on Sunday magazine covers. Beginning January 26, 1941, it ran on the front covers of Hearst's American Weekly newspaper magazine supplement, continuing until March 30 of that year. Between 1939 and 1943, four different stories featuring Flossy appeared on American Weekly covers.
Sunday comics sections employed offset color printing, with multiple print runs imitating a wide range of colors. Printing plates were created with four or more colors, traditionally the CMYK color model: cyan, magenta, yellow and "K" for black. Each printing plate carried a screen of tiny dots, allowing an image to be printed in halftone, which the eye perceives as continuous gradations of tone. The semi-opaque property of ink allows halftone dots of different colors to combine into an optical effect of full-color imagery.
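As a rough illustration of the separation and screening just described, the following Python sketch separates one RGB color into CMYK ink coverages and renders each coverage as a row of printed and blank dots. It is a toy model, not an actual prepress workflow; the function names and the 20-dot text "screen" are invented for illustration.

# Toy model of CMYK separation and halftone screening (illustrative only).

def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0-1) separation."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)  # the black component shared by all three channels
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

def halftone_row(coverage, width=20):
    """Approximate an ink coverage (0-1) as printed ('#') and blank ('.') dots."""
    dots = round(coverage * width)
    return "#" * dots + "." * (width - dots)

ink = rgb_to_cmyk(255, 102, 0)  # an orange of the kind common on Sunday pages
for name, level in zip("CMYK", ink):
    print(name, halftone_row(level))

Seen at newsprint scale, rows of dots like these read as continuous tints, which is the effect the overlapping dot screens produce on the page.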
The decade of the 1960s saw the rise of underground newspapers, which often carried comic strips, such as Fritz the Cat and The Fabulous Furry Freak Brothers. Zippy the Pinhead initially appeared in underground publications in the 1970s before being syndicated. Bloom County and Doonesbury began as strips in college newspapers under different titles, and later moved to national syndication. Underground comic strips covered subjects that are usually taboo in newspaper strips, such as sex and drugs. Many underground artists, notably Vaughn Bode, Dan O'Neill, Gilbert Shelton, and Art Spiegelman went on to draw comic strips for magazines such as Playboy, National Lampoon, and Pete Millar's CARtoons. Jay Lynch graduated from undergrounds to alternative weekly newspapers to Mad and children's books.
Webcomics, also known as online comics and internet comics, are comics that are available to read on the Internet. Many are exclusively published online, but the majority of traditional newspaper comic strips have some Internet presence. King Features Syndicate and other syndicates often provide archives of recent strips on their websites. Some, such as Scott Adams, creator of Dilbert, include an email address in each strip.
Most comic strip characters do not age throughout the strip's life, but in some strips, like Lynn Johnston's award-winning For Better or For Worse, the characters age as the years pass. The first strip to feature aging characters was Gasoline Alley.
The history of comic strips also includes series that are not humorous, but tell an ongoing dramatic story. Examples include The Phantom, Prince Valiant, Dick Tracy, Mary Worth, Modesty Blaise, Little Orphan Annie, Flash Gordon, and Tarzan. Sometimes these are spin-offs from comic books, for example Superman, Batman, and The Amazing Spider-Man.
A number of strips have featured animals as main characters. Some are non-verbal (Marmaduke, The Angriest Dog in the World), some have verbal thoughts but are not understood by humans (Garfield, Snoopy in Peanuts), and some can converse with humans (Bloom County, Calvin and Hobbes, Mutts, Citizen Dog, Buckles, Get Fuzzy, Pearls Before Swine, and Pooch Cafe). Other strips are centered entirely on animals, as in Pogo and Donald Duck. Gary Larson's The Far Side was unusual, as there were no central characters; instead, it used a wide variety of characters including humans, monsters, aliens, chickens, cows, worms, amoebas, and more. John McPherson's Close to Home also uses this approach, though its characters are mostly restricted to humans and real-life situations. Wiley Miller not only mixes human, animal, and fantasy characters, but also runs several different comic strip continuities under one umbrella title, Non Sequitur. Bob Thaves's Frank & Ernest began in 1972 and paved the way for some of these strips, as its human characters were manifest in diverse forms: as animals, vegetables, and minerals.
The comics have long held a distorted mirror to contemporary society, and almost from the beginning have been used for political or social commentary. This ranged from the conservative slant of Harold Gray's Little Orphan Annie to the unabashed liberalism of Garry Trudeau's Doonesbury. Al Capp's Li'l Abner espoused liberal opinions for most of its run, but by the late 1960s, it became a mouthpiece for Capp's repudiation of the counterculture.
Pogo used animals to particularly devastating effect, caricaturing many prominent politicians of the day as animal denizens of Pogo's Okeefenokee Swamp. In a fearless move, Pogo's creator Walt Kelly took on Joseph McCarthy in the 1950s, caricaturing him as a bobcat named Simple J. Malarkey, a megalomaniac who was bent on taking over the characters' birdwatching club and rooting out all undesirables. Kelly also defended the medium against possible government regulation in the McCarthy era. At a time when comic books were coming under fire for supposed sexual, violent, and subversive content, Kelly feared the same would happen to comic strips. Going before the Congressional subcommittee, he proceeded to charm the members with his drawings and the force of his personality. The comic strip was safe for satire.
During the early 20th century, comic strips were widely associated with publisher William Randolph Hearst, whose papers had the largest circulation of strips in the United States. Hearst was notorious for his practice of yellow journalism, and he was frowned on by readers of The New York Times and other newspapers which featured few or no comic strips. Hearst's critics often assumed that all the strips in his papers were fronts for his own political and social views. Hearst did occasionally work with or pitch ideas to cartoonists, most notably his continued support of George Herriman's Krazy Kat. An inspiration for Bill Watterson and other cartoonists, Krazy Kat gained a considerable following among intellectuals during the 1920s and 1930s.
Some comic strips, such as Doonesbury and Mallard Fillmore, may be printed on the editorial or op-ed page rather than the comics page because of their regular political commentary. For example, the August 12, 1974 Doonesbury strip was awarded a 1975 Pulitzer Prize for its depiction of the Watergate scandal. Dilbert is sometimes found in the business section of a newspaper instead of the comics page because of the strip's commentary about office politics, and Tank McNamara often appears on the sports page because of its subject matter. Lynn Johnston's For Better or For Worse created an uproar when Lawrence, one of the strip's supporting characters, came out of the closet.
The world's longest comic strip, at 88.9 metres (292 ft), was displayed at Trafalgar Square as part of the London Comedy Festival. The London Cartoon Strip was created by 15 of Britain's best-known cartoonists and depicts the history of London.
The Reuben, named for cartoonist Rube Goldberg, is the most prestigious award for U.S. comic strip artists. Reuben awards are presented annually by the National Cartoonists Society (NCS).
In 1995, the United States Postal Service issued a series of commemorative stamps, Comic Strip Classics, marking the comic-strip centennial.
Today's strip artists, with the help of the NCS, enthusiastically promote the medium, which since the 1970s (and particularly the 1990s) has been considered to be in decline due to numerous factors such as changing tastes in humor and entertainment, the waning relevance of newspapers in general and the loss of most foreign markets outside English-speaking countries. One particularly humorous example of such promotional efforts is the Great Comic Strip Switcheroonie, held in 1997 on April Fool's Day, an event in which dozens of prominent artists took over each other's strips. Garfield's Jim Davis, for example, switched with Blondie's Stan Drake, while Scott Adams (Dilbert) traded strips with Bil Keane (The Family Circus).
While the 1997 Switcheroonie was a one-time publicity stunt, an artist taking over a feature from its originator is an old tradition in newspaper cartooning (as it is in the comic book industry). In fact, the practice has made possible the longevity of the genre's more popular strips. Examples include Little Orphan Annie (drawn and plotted by Harold Gray from 1924 to 1944 and thereafter by a succession of artists including Leonard Starr and Andrew Pepoy), and Terry and the Pirates, started by Milton Caniff in 1934 and picked up by George Wunder.
A business-driven variation has sometimes led to the same feature continuing under a different name. In one case, in the early 1940s, Don Flowers' Modest Maidens was so admired by William Randolph Hearst that he lured Flowers away from the Associated Press to King Features Syndicate by doubling the cartoonist's salary, and renamed the feature Glamor Girls to avoid legal action by the AP. The AP continued to publish Modest Maidens, drawn by Jay Allen in Flowers' style.
As newspapers have declined, the resulting changes have affected comic strips as well. Jeff Reece, lifestyle editor of The Florida Times-Union, wrote, "Comics are sort of the 'third rail' of the newspaper."
In the early decades of the 20th century, all Sunday comics received a full page, and daily strips were generally the width of the page. Several pressures led to Sunday strips being published in smaller and more varied formats: competition between papers from the mid-1920s to carry more cartoons than their rivals, the growth of large-scale newspaper advertising through most of the thirties, paper rationing during World War II, the decline in news readership as television newscasts became more common, and inflation-driven printing costs from the fifties and sixties onward. The reduction in the page count of Sunday comic sections since the late 1990s (by the 2010s, most sections had only four pages, and the back page was not always devoted to comics) has led to further downsizing.
Daily strips have suffered as well. Before the mid-1910s, there was no "standard" size, with strips running the entire width of a page or spanning more than one tier. By the 1920s, strips often covered six of the eight columns of a traditional broadsheet page. During the 1940s, strips were reduced to four columns wide (with a "transition" width of five columns). As newspapers became narrower beginning in the 1970s, strips got even smaller, often just three columns wide, roughly the width most daily panels occupied before the 1940s.
In an issue related to size limitations, Sunday comics are often bound to rigid formats that allow their panels to be rearranged in several different ways while remaining readable. Such formats usually include throwaway panels at the beginning, which some newspapers omit to save space. As a result, cartoonists have less incentive to put great effort into these panels. Garfield and Mutts were known, during the mid-to-late 1980s and the 1990s respectively, for the throwaway panels on their Sunday strips; however, both strips now run "generic" title panels.
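As a toy illustration of such formats, the Python sketch below reflows one Sunday strip for two hypothetical papers, one of which drops the throwaway panels; the panel labels and row widths are invented and do not reflect any syndicate's actual specification.

# Hypothetical Sunday strip: T* panels are the droppable "throwaways".
SUNDAY_PANELS = ["T1", "T2", "P1", "P2", "P3", "P4", "P5", "P6"]

def layout(panels, per_row, drop_throwaways=False):
    """Reflow the panel sequence into rows of a given width."""
    if drop_throwaways:
        panels = [p for p in panels if not p.startswith("T")]
    rows = [panels[i:i + per_row] for i in range(0, len(panels), per_row)]
    return "\n".join(" | ".join(row) for row in rows)

print(layout(SUNDAY_PANELS, per_row=4))                        # half-page paper
print(layout(SUNDAY_PANELS, per_row=3, drop_throwaways=True))  # third-page paper

Because the reading order must survive every such reflow, the cartoonist cannot rely on panel shapes or page-wide composition.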
Some cartoonists have complained about these constraints. Walt Kelly, creator of Pogo, openly voiced his discontent at being forced to draw his Sunday strips in such rigid formats from the beginning, and Kelly's heirs opted to end the strip in 1975 as a form of protest against the practice. Since then, Calvin and Hobbes creator Bill Watterson has written extensively on the issue, arguing that size reduction and dropped panels reduce both the potential and the freedom of a cartoonist. After a lengthy battle with his syndicate, Watterson won the privilege of making half-page-sized Sunday strips in which he could arrange the panels any way he liked. Many newspaper publishers and a few cartoonists objected to this, and some papers continued to print Calvin and Hobbes at small sizes. Opus won the same privilege years after Calvin and Hobbes ended, while Wiley Miller circumvented further downsizing by making his Non Sequitur Sunday strip available only in a vertical arrangement. Most strips created since 1990, however, are drawn in the unbroken "third-page" format. Few newspapers still run half-page strips; Prince Valiant and Hägar the Horrible, for example, appeared at that size on the front page of the Reading Eagle's Sunday comics section until the mid-2010s.
With the success of The Gumps during the 1920s, it became commonplace for strips, comedy- and adventure-laden alike, to feature lengthy stories spanning weeks or months. The "Monarch of Medioka" story in Floyd Gottfredson's Mickey Mouse comic strip ran from September 8, 1937 to May 2, 1938. Between the 1960s and the late 1980s, as television news relegated newspaper reading to an occasional rather than a daily habit, syndicators abandoned long stories and urged cartoonists to switch to simple daily gags or week-long "storylines" (six consecutive, mostly unrelated strips on the same subject), with longer storylines reserved mainly for adventure-based and dramatic strips. Strips begun during the mid-1980s or after (such as Get Fuzzy, Over the Hedge, and Monty) are known for their heavy use of storylines, lasting between one and three weeks in most cases.
The writing style of comic strips changed as well after World War II. With an increase in the number of college-educated readers, there was a shift away from slapstick comedy and towards more cerebral humor. Slapstick and visual gags became more confined to Sunday strips, because as Garfield creator Jim Davis put it, "Children are more likely to read Sunday strips than dailies."
Many older strips are no longer drawn by the original cartoonist, who has either died or retired. Such strips are known as "zombie strips": a cartoonist paid by the syndicate, or sometimes a relative of the original cartoonist, continues writing the strip, a tradition that became commonplace in the first half of the 20th century. Hägar the Horrible and Frank and Ernest are both drawn by the sons of their creators. Some strips still affiliated with the original creator are produced by small teams or entire companies, such as Jim Davis' Garfield, though there is some debate over whether these strips fall into this category.
This practice is commonly criticized by modern cartoonists, including Watterson and Pearls Before Swine's Stephan Pastis; the issue was addressed in six consecutive Pearls strips in 2005. Charles Schulz, of Peanuts fame, requested that his strip not be continued by another cartoonist after his death. He also rejected the idea of hiring an inker or letterer, comparing it to a golfer hiring a man to make his putts. Schulz's family has honored his wishes and refused numerous proposals by syndicators to continue Peanuts with a new author.
Since the consolidation of newspaper comics by the first quarter of the 20th century, most cartoonists have used a group of assistants, with usually only one of them credited. However, quite a few cartoonists (e.g., George Herriman and Charles Schulz) have done their strips almost completely by themselves, often criticizing the use of assistants for the same reasons they objected to editors hiring anyone else to continue their work after retirement.
Historically, syndicates owned the creators' work, enabling them to continue publishing the strip after the original creator retired, left the strip, or died. This practice led to the term "legacy strips", or more pejoratively "zombie strips". Most syndicates signed creators to 10- or even 20-year contracts. (There have been exceptions, however, such as Bud Fisher's Mutt and Jeff being an early—if not the earliest—case in which the creator retained ownership of his work.) Both these practices began to change with the 1970 debut of Universal Press Syndicate, as the company gave cartoonists a 50-percent ownership share of their work. Creators Syndicate, founded in 1987, granted artists full rights to the strips, something that Universal Press did in 1990, followed by King Features in 1995. By 1999 both Tribune Media Services and United Feature had begun granting ownership rights to creators (limited to new and/or hugely popular strips).
Starting in the late 1940s, the national syndicates which distributed newspaper comic strips subjected them to very strict censorship. Li'l Abner was censored in September 1947 and was pulled from the Pittsburgh Press by Scripps-Howard. The controversy, as reported in Time, centered on Capp's portrayal of the U.S. Senate. Said Edward Leech of Scripps, "We don't think it is good editing or sound citizenship to picture the Senate as an assemblage of freaks and crooks... boobs and undesirables."
Because comics are easier for children to access than most other media, they are subject to a significantly more rigid censorship code. Stephan Pastis has lamented that the "unwritten" censorship code is still "stuck somewhere in the 1950s". Generally, comics are not allowed to include words such as "damn", "sucks", "screwed", and "hell", although there have been exceptions, such as the September 22, 2010 Mother Goose and Grimm in which an elderly man says, "This nursing home food sucks," and a pair of Pearls Before Swine comics from January 11, 2011 in which a character named Ned uses the word "crappy". Naked backsides and the firing of guns cannot be shown, according to Dilbert cartoonist Scott Adams. Such comic strip taboos were detailed in Dave Breger's book But That's Unprintable (Bantam, 1955).
Many issues such as sex, narcotics, and terrorism cannot, or can only very rarely, be openly discussed in strips, although there are exceptions, usually for satire, as in Bloom County. This has led some cartoonists to resort to double entendre or to dialogue children do not understand, as in Greg Evans' Luann. Another example of wordplay used to get around censorship is the July 27, 2016 Pearls Before Swine strip, in which Pig repeatedly says "I SIS!" after correcting his sister's grammar; the strip then cuts to an NSA wiretap agent, and Pig is arrested by the FBI and told "Never correct your sister's grammar", the joke being that "I SIS" has been mistaken for "ISIS". Younger cartoonists have claimed that commonplace words, images, and issues should be allowed in the comics, arguing that the pressure toward "clean" humor has been a chief factor in the declining popularity of comic strips since the 1990s (Aaron McGruder, creator of The Boondocks, decided to end his strip partly because of censorship issues, while the Popeye daily comic strip ended in 1994 after newspapers objected to a storyline they considered to be a satire on abortion). Some of the taboo words and topics are mentioned daily on television and in other forms of visual media. Webcomics and comics distributed primarily to college newspapers are much freer in this respect.
5,705 | Continuum hypothesis | In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states that
there is no set whose cardinality is strictly between that of the integers and the real numbers,
or equivalently, that
any subset of the real numbers is finite, is countably infinite, or has the same cardinality as the real numbers.
In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: 2^{\aleph_0} = \aleph_1, or even shorter with beth numbers: \beth_1 = \aleph_1.
The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.
The name of the hypothesis comes from the term the continuum for the real numbers.
Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it. It became the first problem on David Hilbert's list of important open questions, presented at the International Congress of Mathematicians in Paris in 1900. Axiomatic set theory had at that point not yet been formulated. Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory. The second half of the independence of the continuum hypothesis – i.e., unprovability of the nonexistence of an intermediate-sized set – was proved in 1963 by Paul Cohen.
Two sets are said to have the same cardinality or cardinal number if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets S and T to have the same cardinality means that it is possible to "pair off" elements of S with elements of T in such a fashion that every element of S is paired off with exactly one element of T and vice versa. Hence, the set {banana, apple, pear} has the same cardinality as {yellow, red, green}.
With infinite sets such as the set of integers or rational numbers, the existence of a bijection between two sets becomes more difficult to demonstrate. The rational numbers seemingly form a counterexample to the continuum hypothesis: the integers form a proper subset of the rationals, which themselves form a proper subset of the reals, so intuitively, there are more rational numbers than integers and more real numbers than rational numbers. However, this intuitive analysis is flawed; it does not take proper account of the fact that all three sets are infinite. It turns out the rational numbers can actually be placed in one-to-one correspondence with the integers, and therefore the set of rational numbers is the same size (cardinality) as the set of integers: they are both countable sets.
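To make the countability claim concrete, here is one standard enumeration (a sketch for illustration; it is not taken from the source text). The integers can be listed by the natural numbers via

f(n) = \begin{cases} n/2, & n \text{ even}, \\ -(n+1)/2, & n \text{ odd}, \end{cases}

which produces the sequence 0, −1, 1, −2, 2, …; a Cantor-style zigzag over the pairs (p, q) with q > 0 and \gcd(p, q) = 1 then extends this to a bijection between the natural numbers and the rationals.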
Cantor gave two proofs that the cardinality of the set of integers is strictly smaller than that of the set of real numbers (see Cantor's first uncountability proof and Cantor's diagonal argument). His proofs, however, give no indication of the extent to which the cardinality of the integers is less than that of the real numbers. Cantor proposed the continuum hypothesis as a possible solution to this question.
The continuum hypothesis states that the set of real numbers has the minimal possible cardinality which is greater than the cardinality of the set of integers. That is, every set, S, of real numbers can either be mapped one-to-one into the integers or the real numbers can be mapped one-to-one into S. As the real numbers are equinumerous with the powerset of the integers, i.e. |\mathbb{R}| = 2^{\aleph_0}, the continuum hypothesis can be restated as follows:
Continuum hypothesis — \nexists S : \aleph_0 < |S| < 2^{\aleph_0}.
Assuming the axiom of choice, there is a unique smallest cardinal number \aleph_1 greater than \aleph_0, and the continuum hypothesis is in turn equivalent to the equality 2^{\aleph_0} = \aleph_1.
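In outline (a standard argument, not quoted from the source): Cantor's theorem gives 2^{\aleph_0} > \aleph_0, and under the axiom of choice every infinite cardinal is an aleph, so

\aleph_1 \le 2^{\aleph_0};

hence the nonexistence of a cardinality strictly between \aleph_0 and 2^{\aleph_0} says exactly that 2^{\aleph_0} = \aleph_1.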
The independence of the continuum hypothesis (CH) from Zermelo–Fraenkel set theory (ZF) follows from combined work of Kurt Gödel and Paul Cohen.
Gödel showed that CH cannot be disproved from ZF, even if the axiom of choice (AC) is adopted (making ZFC). Gödel's proof shows that CH and AC both hold in the constructible universe L, an inner model of ZF set theory, assuming only the axioms of ZF. The existence of an inner model of ZF in which additional axioms hold shows that the additional axioms are consistent with ZF, provided ZF itself is consistent. The latter condition cannot be proved in ZF itself, due to Gödel's incompleteness theorems, but is widely believed to be true and can be proved in stronger set theories.
Cohen showed that CH cannot be proven from the ZFC axioms, completing the overall independence proof. To prove his result, Cohen developed the method of forcing, which has become a standard tool in set theory. Essentially, this method begins with a model of ZF in which CH holds, and constructs another model which contains more sets than the original, in a way that CH does not hold in the new model. Cohen was awarded the Fields Medal in 1966 for his proof.
The independence proof just described shows that CH is independent of ZFC. Further research has shown that CH is independent of all known large cardinal axioms in the context of ZFC. Moreover, it has been shown that the cardinality of the continuum can be any cardinal consistent with König's theorem. A result of Solovay, proved shortly after Cohen's result on the independence of the continuum hypothesis, shows that in any model of ZFC, if \kappa is a cardinal of uncountable cofinality, then there is a forcing extension in which 2^{\aleph_0} = \kappa. However, per König's theorem, it is not consistent to assume 2^{\aleph_0} is \aleph_\omega or \aleph_{\omega_1+\omega} or any cardinal with cofinality \omega.
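The constraint imposed by König's theorem can be stated as a single inequality,

\operatorname{cf}(2^{\aleph_0}) > \aleph_0,

so, for example, 2^{\aleph_0} = \aleph_\omega is excluded because \operatorname{cf}(\aleph_\omega) = \aleph_0, while values such as \aleph_1, \aleph_2 or \aleph_{\omega+1} all have uncountable cofinality and are attainable by Solovay's result.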
The continuum hypothesis is closely related to many statements in analysis, point set topology and measure theory. As a result of its independence, many substantial conjectures in those fields have subsequently been shown to be independent as well.
The independence from ZFC means that proving or disproving the CH within ZFC is impossible. However, Gödel and Cohen's negative results are not universally accepted as disposing of all interest in the continuum hypothesis. The continuum hypothesis remains an active topic of research; see Woodin and Peter Koellner for an overview of the current research status.
The continuum hypothesis and the axiom of choice were among the first genuinely mathematical statements shown to be independent of ZF set theory. The existence of some statements independent of ZFC had, however, already been known more than two decades prior: for example, assuming good soundness properties and the consistency of ZFC, Gödel's incompleteness theorems, published in 1931, establish that there is a formal statement (one for each appropriate Gödel numbering scheme) expressing the consistency of ZFC that is also independent of it. The latter independence result indeed holds for many theories.
Gödel believed that CH is false, and that his proof that CH is consistent with ZFC only shows that the Zermelo–Fraenkel axioms do not adequately characterize the universe of sets. Gödel was a platonist and therefore had no problems with asserting the truth and falsehood of statements independent of their provability. Cohen, though a formalist, also tended towards rejecting CH.
Historically, mathematicians who favored a "rich" and "large" universe of sets were against CH, while those favoring a "neat" and "controllable" universe favored CH. Parallel arguments were made for and against the axiom of constructibility, which implies CH. More recently, Matthew Foreman has pointed out that ontological maximalism can actually be used to argue in favor of CH, because among models that have the same reals, models with "more" sets of reals have a better chance of satisfying CH.
Another viewpoint is that the conception of set is not specific enough to determine whether CH is true or false. This viewpoint was advanced as early as 1923 by Skolem, even before Gödel's first incompleteness theorem. Skolem argued on the basis of what is now known as Skolem's paradox, and it was later supported by the independence of CH from the axioms of ZFC since these axioms are enough to establish the elementary properties of sets and cardinalities. In order to argue against this viewpoint, it would be sufficient to demonstrate new axioms that are supported by intuition and resolve CH in one direction or another. Although the axiom of constructibility does resolve CH, it is not generally considered to be intuitively true any more than CH is generally considered to be false.
At least two other axioms have been proposed that have implications for the continuum hypothesis, although these axioms have not currently found wide acceptance in the mathematical community. In 1986, Chris Freiling presented an argument against CH by showing that the negation of CH is equivalent to Freiling's axiom of symmetry, a statement derived by arguing from particular intuitions about probabilities. Freiling believes this axiom is "intuitively true" but others have disagreed.
A difficult argument against CH developed by W. Hugh Woodin has attracted considerable attention since the year 2000. Foreman does not reject Woodin's argument outright but urges caution. Woodin proposed a new hypothesis that he labeled the "(*)-axiom", or "Star axiom". The Star axiom would imply that 2^{\aleph_0} is \aleph_2, thus falsifying CH. The Star axiom was bolstered by an independent May 2021 proof showing the Star axiom can be derived from a variation of Martin's maximum. However, Woodin stated in the 2010s that he now instead believes CH to be true, based on his belief in his new "ultimate L" conjecture.
Solomon Feferman argued that CH is not a definite mathematical problem. He proposed a theory of "definiteness" using a semi-intuitionistic subsystem of ZF that accepts classical logic for bounded quantifiers but uses intuitionistic logic for unbounded ones, and suggested that a proposition \phi is mathematically "definite" if the semi-intuitionistic theory can prove (\phi \lor \neg \phi). He conjectured that CH is not definite according to this notion, and proposed that CH should, therefore, be considered not to have a truth value. Peter Koellner wrote a critical commentary on Feferman's article.
Joel David Hamkins proposes a multiverse approach to set theory and argues that "the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and, as a result, it can no longer be settled in the manner formerly hoped for". In a related vein, Saharon Shelah wrote that he does "not agree with the pure Platonic view that the interesting problems in set theory can be decided, that we just have to discover the additional axiom. My mental picture is that we have many possible set theories, all conforming to ZFC".
The generalized continuum hypothesis (GCH) states that if an infinite set's cardinality lies between that of an infinite set S and that of the power set \mathcal{P}(S) of S, then it has the same cardinality as either S or \mathcal{P}(S). That is, for any infinite cardinal \lambda there is no cardinal \kappa such that \lambda < \kappa < 2^{\lambda}. GCH is equivalent to:
The beth numbers provide an alternate notation for this condition: \aleph_\alpha = \beth_\alpha for every ordinal \alpha. The continuum hypothesis is the special case for the ordinal \alpha = 1. GCH was first suggested by Philip Jourdain. For the early history of GCH, see Moore.
Like CH, GCH is also independent of ZFC, but Sierpiński proved that ZF + GCH implies the axiom of choice (AC) (and therefore the negation of the axiom of determinacy, AD), so choice and GCH are not independent in ZF; there are no models of ZF in which GCH holds and AC fails. To prove this, Sierpiński showed GCH implies that every cardinality n is smaller than some aleph number, and thus can be ordered. This is done by showing that n is smaller than 2^{\aleph_0 + n}, which is smaller than its own Hartogs number—this uses the equality 2^{\aleph_0 + n} = 2 \cdot 2^{\aleph_0 + n}; for the full proof, see Gillman.
Kurt Gödel showed that GCH is a consequence of ZF + V=L (the axiom that every set is constructible relative to the ordinals), and is therefore consistent with ZFC. As GCH implies CH, Cohen's model in which CH fails is a model in which GCH fails, and thus GCH is not provable from ZFC. W. B. Easton used the method of forcing developed by Cohen to prove Easton's theorem, which shows it is consistent with ZFC for arbitrarily large cardinals \aleph_\alpha to fail to satisfy 2^{\aleph_\alpha} = \aleph_{\alpha+1}. Much later, Foreman and Woodin proved that (assuming the consistency of very large cardinals) it is consistent that 2^\kappa > \kappa^+ holds for every infinite cardinal \kappa. Later Woodin extended this by showing the consistency of 2^\kappa = \kappa^{++} for every \kappa. Carmi Merimovich showed that, for each n ≥ 1, it is consistent with ZFC that for each κ, 2^κ is the nth successor of κ. On the other hand, László Patai proved that if γ is an ordinal and for each infinite cardinal κ, 2^κ is the γth successor of κ, then γ is finite.
For any infinite sets A and B, if there is an injection from A to B then there is an injection from subsets of A to subsets of B. Thus for any infinite cardinals A and B, A < B \to 2^A \le 2^B. If A and B are finite, the stronger inequality A < B \to 2^A < 2^B holds. GCH implies that this strict, stronger inequality holds for infinite cardinals as well as finite cardinals.
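A short derivation of the strict inequality (a sketch under GCH and the axiom of choice, not quoted from the source): if A < B are infinite cardinals, GCH gives 2^A = A^+, and A^+ \le B because A^+ is the least cardinal above A, so

2^A = A^+ \le B < 2^B,

where the final step is Cantor's theorem.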
Although the generalized continuum hypothesis refers directly only to cardinal exponentiation with 2 as the base, one can deduce from it the values of cardinal exponentiation \aleph_\alpha^{\aleph_\beta} in all cases. GCH implies that:
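The list of values itself does not appear in this extract; the standard statement implied by GCH (supplied here as a reconstruction consistent with the two cases discussed below) is

\aleph_\alpha^{\aleph_\beta} = \begin{cases} \aleph_{\beta+1}, & \text{if } \alpha \le \beta+1; \\ \aleph_\alpha, & \text{if } \beta+1 < \alpha \text{ and } \aleph_\beta < \operatorname{cf}(\aleph_\alpha); \\ \aleph_{\alpha+1}, & \text{if } \beta+1 < \alpha \text{ and } \aleph_\beta \ge \operatorname{cf}(\aleph_\alpha). \end{cases}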
The first equality (when α ≤ β+1) follows from:
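The displayed chain does not survive in this extract; a standard reconstruction uses GCH in the form 2^{\aleph_\beta} = \aleph_{\beta+1} together with \aleph_\beta \cdot \aleph_\beta = \aleph_\beta:

\aleph_{\beta+1} = 2^{\aleph_\beta} \le \aleph_\alpha^{\aleph_\beta} \le \aleph_{\beta+1}^{\aleph_\beta} = (2^{\aleph_\beta})^{\aleph_\beta} = 2^{\aleph_\beta \cdot \aleph_\beta} = 2^{\aleph_\beta} = \aleph_{\beta+1}.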
The third equality (when β+1 < α and \aleph_\beta \ge \operatorname{cf}(\aleph_\alpha)) follows from:
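Again the display is missing; a standard reconstruction, with König's theorem providing the lower bound \aleph_\alpha^{\operatorname{cf}(\aleph_\alpha)} > \aleph_\alpha, is

\aleph_{\alpha+1} \le \aleph_\alpha^{\operatorname{cf}(\aleph_\alpha)} \le \aleph_\alpha^{\aleph_\beta} \le (2^{\aleph_\alpha})^{\aleph_\beta} = 2^{\aleph_\alpha \cdot \aleph_\beta} = 2^{\aleph_\alpha} = \aleph_{\alpha+1}.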
Where, for every γ, GCH is used for equating 2^{\aleph_\gamma} and \aleph_{\gamma+1}; \aleph_\gamma^2 = \aleph_\gamma is used as it is equivalent to the axiom of choice.
Quotations related to Continuum hypothesis at Wikiquote | [
{
"paragraph_id": 0,
"text": "In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states that",
"title": ""
},
{
"paragraph_id": 1,
"text": "there is no set whose cardinality is strictly between that of the integers and the real numbers,",
"title": ""
},
{
"paragraph_id": 2,
"text": "or equivalently, that",
"title": ""
},
{
"paragraph_id": 3,
"text": "any subset of the real numbers is finite, is countably infinite, or has the same cardinality as the real numbers.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: 2 ℵ 0 = ℵ 1 {\\displaystyle 2^{\\aleph _{0}}=\\aleph _{1}} , or even shorter with beth numbers: ℶ 1 = ℵ 1 {\\displaystyle \\beth _{1}=\\aleph _{1}} .",
"title": ""
},
{
"paragraph_id": 5,
"text": "The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.",
"title": ""
},
{
"paragraph_id": 6,
"text": "The name of the hypothesis comes from the term the continuum for the real numbers.",
"title": ""
},
{
"paragraph_id": 7,
"text": "Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it. It became the first on David Hilbert's list of important open questions that was presented at the International Congress of Mathematicians in the year 1900 in Paris. Axiomatic set theory was at that point not yet formulated. Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory. The second half of the independence of the continuum hypothesis – i.e., unprovability of the nonexistence of an intermediate-sized set – was proved in 1963 by Paul Cohen.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Two sets are said to have the same cardinality or cardinal number if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets S and T to have the same cardinality means that it is possible to \"pair off\" elements of S with elements of T in such a fashion that every element of S is paired off with exactly one element of T and vice versa. Hence, the set {banana, apple, pear} has the same cardinality as {yellow, red, green}.",
"title": "Cardinality of infinite sets"
},
{
"paragraph_id": 9,
"text": "With infinite sets such as the set of integers or rational numbers, the existence of a bijection between two sets becomes more difficult to demonstrate. The rational numbers seemingly form a counterexample to the continuum hypothesis: the integers form a proper subset of the rationals, which themselves form a proper subset of the reals, so intuitively, there are more rational numbers than integers and more real numbers than rational numbers. However, this intuitive analysis is flawed; it does not take proper account of the fact that all three sets are infinite. It turns out the rational numbers can actually be placed in one-to-one correspondence with the integers, and therefore the set of rational numbers is the same size (cardinality) as the set of integers: they are both countable sets.",
"title": "Cardinality of infinite sets"
},
{
"paragraph_id": 10,
"text": "Cantor gave two proofs that the cardinality of the set of integers is strictly smaller than that of the set of real numbers (see Cantor's first uncountability proof and Cantor's diagonal argument). His proofs, however, give no indication of the extent to which the cardinality of the integers is less than that of the real numbers. Cantor proposed the continuum hypothesis as a possible solution to this question.",
"title": "Cardinality of infinite sets"
},
{
"paragraph_id": 11,
"text": "The continuum hypothesis states that the set of real numbers has minimal possible cardinality which is greater than the cardinality of the set of integers. That is, every set, S, of real numbers can either be mapped one-to-one into the integers or the real numbers can be mapped one-to-one into S. As the real numbers are equinumerous with the powerset of the integers, i.e. | R | = 2 ℵ 0 {\\displaystyle |\\mathbb {R} |=2^{\\aleph _{0}}} , the continuum hypothesis can be restated as follows:",
"title": "Cardinality of infinite sets"
},
{
"paragraph_id": 12,
"text": "Continuum hypothesis — ∄ S : ℵ 0 < | S | < 2 ℵ 0 {\\displaystyle \\nexists S:\\aleph _{0}<|S|<2^{\\aleph _{0}}} .",
"title": "Cardinality of infinite sets"
},
{
"paragraph_id": 13,
"text": "Assuming the axiom of choice, there is a unique smallest cardinal number ℵ 1 {\\displaystyle \\aleph _{1}} greater than ℵ 0 {\\displaystyle \\aleph _{0}} , and the continuum hypothesis is in turn equivalent to the equality 2 ℵ 0 = ℵ 1 {\\displaystyle 2^{\\aleph _{0}}=\\aleph _{1}} .",
"title": "Cardinality of infinite sets"
},
{
"paragraph_id": 14,
"text": "The independence of the continuum hypothesis (CH) from Zermelo–Fraenkel set theory (ZF) follows from combined work of Kurt Gödel and Paul Cohen.",
"title": "Independence from ZFC"
},
{
"paragraph_id": 15,
"text": "Gödel showed that CH cannot be disproved from ZF, even if the axiom of choice (AC) is adopted (making ZFC). Gödel's proof shows that CH and AC both hold in the constructible universe L, an inner model of ZF set theory, assuming only the axioms of ZF. The existence of an inner model of ZF in which additional axioms hold shows that the additional axioms are consistent with ZF, provided ZF itself is consistent. The latter condition cannot be proved in ZF itself, due to Gödel's incompleteness theorems, but is widely believed to be true and can be proved in stronger set theories.",
"title": "Independence from ZFC"
},
{
"paragraph_id": 16,
"text": "Cohen showed that CH cannot be proven from the ZFC axioms, completing the overall independence proof. To prove his result, Cohen developed the method of forcing, which has become a standard tool in set theory. Essentially, this method begins with a model of ZF in which CH holds, and constructs another model which contains more sets than the original, in a way that CH does not hold in the new model. Cohen was awarded the Fields Medal in 1966 for his proof.",
"title": "Independence from ZFC"
},
{
"paragraph_id": 17,
"text": "The independence proof just described shows that CH is independent of ZFC. Further research has shown that CH is independent of all known large cardinal axioms in the context of ZFC. Moreover, it has been shown that the cardinality of the continuum can be any cardinal consistent with König's theorem. A result of Solovay, proved shortly after Cohen's result on the independence of the continuum hypothesis, shows that in any model of ZFC, if κ {\\displaystyle \\kappa } is a cardinal of uncountable cofinality, then there is a forcing extension in which 2 ℵ 0 = κ {\\displaystyle 2^{\\aleph _{0}}=\\kappa } . However, per König's theorem, it is not consistent to assume 2 ℵ 0 {\\displaystyle 2^{\\aleph _{0}}} is ℵ ω {\\displaystyle \\aleph _{\\omega }} or ℵ ω 1 + ω {\\displaystyle \\aleph _{\\omega _{1}+\\omega }} or any cardinal with cofinality ω {\\displaystyle \\omega } .",
"title": "Independence from ZFC"
},
{
"paragraph_id": 18,
"text": "The continuum hypothesis is closely related to many statements in analysis, point set topology and measure theory. As a result of its independence, many substantial conjectures in those fields have subsequently been shown to be independent as well.",
"title": "Independence from ZFC"
},
{
"paragraph_id": 19,
"text": "The independence from ZFC means that proving or disproving the CH within ZFC is impossible. However, Gödel and Cohen's negative results are not universally accepted as disposing of all interest in the continuum hypothesis. The continuum hypothesis remains an active topic of research; see Woodin and Peter Koellner for an overview of the current research status.",
"title": "Independence from ZFC"
},
{
"paragraph_id": 20,
"text": "The continuum hypothesis and the axiom of choice were among the first genuinely mathematical statements shown to be independent of ZF set theory. Although the existence of some statements independent of ZFC had already been known more than two decades prior: for example, assuming good soundness properties and the consistency ZFC, Gödel's incompleteness theorems, which were published in 1931, establish that there is a formal statement (one for each appropriate Gödel numbering scheme) expressing the consistency of ZFC, that is also independent of it. The latter independence result indeed holds for many theories.",
"title": "Independence from ZFC"
},
{
"paragraph_id": 21,
"text": "Gödel believed that CH is false, and that his proof that CH is consistent with ZFC only shows that the Zermelo–Fraenkel axioms do not adequately characterize the universe of sets. Gödel was a platonist and therefore had no problems with asserting the truth and falsehood of statements independent of their provability. Cohen, though a formalist, also tended towards rejecting CH.",
"title": "Arguments for and against the continuum hypothesis"
},
{
"paragraph_id": 22,
"text": "Historically, mathematicians who favored a \"rich\" and \"large\" universe of sets were against CH, while those favoring a \"neat\" and \"controllable\" universe favored CH. Parallel arguments were made for and against the axiom of constructibility, which implies CH. More recently, Matthew Foreman has pointed out that ontological maximalism can actually be used to argue in favor of CH, because among models that have the same reals, models with \"more\" sets of reals have a better chance of satisfying CH.",
"title": "Arguments for and against the continuum hypothesis"
},
{
"paragraph_id": 23,
"text": "Another viewpoint is that the conception of set is not specific enough to determine whether CH is true or false. This viewpoint was advanced as early as 1923 by Skolem, even before Gödel's first incompleteness theorem. Skolem argued on the basis of what is now known as Skolem's paradox, and it was later supported by the independence of CH from the axioms of ZFC since these axioms are enough to establish the elementary properties of sets and cardinalities. In order to argue against this viewpoint, it would be sufficient to demonstrate new axioms that are supported by intuition and resolve CH in one direction or another. Although the axiom of constructibility does resolve CH, it is not generally considered to be intuitively true any more than CH is generally considered to be false.",
"title": "Arguments for and against the continuum hypothesis"
},
{
"paragraph_id": 24,
"text": "At least two other axioms have been proposed that have implications for the continuum hypothesis, although these axioms have not currently found wide acceptance in the mathematical community. In 1986, Chris Freiling presented an argument against CH by showing that the negation of CH is equivalent to Freiling's axiom of symmetry, a statement derived by arguing from particular intuitions about probabilities. Freiling believes this axiom is \"intuitively true\" but others have disagreed.",
"title": "Arguments for and against the continuum hypothesis"
},
{
"paragraph_id": 25,
"text": "A difficult argument against CH developed by W. Hugh Woodin has attracted considerable attention since the year 2000. Foreman does not reject Woodin's argument outright but urges caution. Woodin proposed a new hypothesis that he labeled the (*)-axiom\", or \"Star axiom\". The Star axiom would imply that 2 ℵ 0 {\\displaystyle 2^{\\aleph _{0}}} is ℵ 2 {\\displaystyle \\aleph _{2}} , thus falsifying CH. The Star axiom was bolstered by an independent May 2021 proof showing the Star axiom can be derived from a variation of Martin's maximum. However, Woodin stated in the 2010s that he now instead believes CH to be true, based on his belief in his new \"ultimate L\" conjecture.",
"title": "Arguments for and against the continuum hypothesis"
},
{
"paragraph_id": 26,
"text": "Solomon Feferman argued that CH is not a definite mathematical problem. He proposed a theory of \"definiteness\" using a semi-intuitionistic subsystem of ZF that accepts classical logic for bounded quantifiers but uses intuitionistic logic for unbounded ones, and suggested that a proposition ϕ {\\displaystyle \\phi } is mathematically \"definite\" if the semi-intuitionistic theory can prove ( ϕ ∨ ¬ ϕ ) {\\displaystyle (\\phi \\lor \\neg \\phi )} . He conjectured that CH is not definite according to this notion, and proposed that CH should, therefore, be considered not to have a truth value. Peter Koellner wrote a critical commentary on Feferman's article.",
"title": "Arguments for and against the continuum hypothesis"
},
{
"paragraph_id": 27,
"text": "Joel David Hamkins proposes a multiverse approach to set theory and argues that \"the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and, as a result, it can no longer be settled in the manner formerly hoped for\". In a related vein, Saharon Shelah wrote that he does \"not agree with the pure Platonic view that the interesting problems in set theory can be decided, that we just have to discover the additional axiom. My mental picture is that we have many possible set theories, all conforming to ZFC\".",
"title": "Arguments for and against the continuum hypothesis"
},
{
"paragraph_id": 28,
"text": "The generalized continuum hypothesis (GCH) states that if an infinite set's cardinality lies between that of an infinite set S and that of the power set P ( S ) {\\displaystyle {\\mathcal {P}}(S)} of S, then it has the same cardinality as either S or P ( S ) {\\displaystyle {\\mathcal {P}}(S)} . That is, for any infinite cardinal λ {\\displaystyle \\lambda } there is no cardinal κ {\\displaystyle \\kappa } such that λ < κ < 2 λ {\\displaystyle \\lambda <\\kappa <2^{\\lambda }} . GCH is equivalent to:",
"title": "Generalized continuum hypothesis"
},
{
"paragraph_id": 29,
"text": "The beth numbers provide an alternate notation for this condition: ℵ α = ℶ α {\\displaystyle \\aleph _{\\alpha }=\\beth _{\\alpha }} for every ordinal α {\\displaystyle \\alpha } . The continuum hypothesis is the special case for the ordinal α = 1 {\\displaystyle \\alpha =1} . GCH was first suggested by Philip Jourdain. For the early history of GCH, see Moore.",
"title": "Generalized continuum hypothesis"
},
{
"paragraph_id": 30,
"text": "Like CH, GCH is also independent of ZFC, but Sierpiński proved that ZF + GCH implies the axiom of choice (AC) (and therefore the negation of the axiom of determinacy, AD), so choice and GCH are not independent in ZF; there are no models of ZF in which GCH holds and AC fails. To prove this, Sierpiński showed GCH implies that every cardinality n is smaller than some aleph number, and thus can be ordered. This is done by showing that n is smaller than 2 ℵ 0 + n {\\displaystyle 2^{\\aleph _{0}+n}} which is smaller than its own Hartogs number—this uses the equality 2 ℵ 0 + n = 2 ⋅ 2 ℵ 0 + n {\\displaystyle 2^{\\aleph _{0}+n}\\,=\\,2\\cdot \\,2^{\\aleph _{0}+n}} ; for the full proof, see Gillman.",
"title": "Generalized continuum hypothesis"
},
{
"paragraph_id": 31,
"text": "Kurt Gödel showed that GCH is a consequence of ZF + V=L (the axiom that every set is constructible relative to the ordinals), and is therefore consistent with ZFC. As GCH implies CH, Cohen's model in which CH fails is a model in which GCH fails, and thus GCH is not provable from ZFC. W. B. Easton used the method of forcing developed by Cohen to prove Easton's theorem, which shows it is consistent with ZFC for arbitrarily large cardinals ℵ α {\\displaystyle \\aleph _{\\alpha }} to fail to satisfy 2 ℵ α = ℵ α + 1 {\\displaystyle 2^{\\aleph _{\\alpha }}=\\aleph _{\\alpha +1}} . Much later, Foreman and Woodin proved that (assuming the consistency of very large cardinals) it is consistent that 2 κ > κ + {\\displaystyle 2^{\\kappa }>\\kappa ^{+}} holds for every infinite cardinal κ {\\displaystyle \\kappa } . Later Woodin extended this by showing the consistency of 2 κ = κ + + {\\displaystyle 2^{\\kappa }=\\kappa ^{++}} for every κ {\\displaystyle \\kappa } . Carmi Merimovich showed that, for each n ≥ 1, it is consistent with ZFC that for each κ, 2 is the nth successor of κ. On the other hand, László Patai proved that if γ is an ordinal and for each infinite cardinal κ, 2 is the γth successor of κ, then γ is finite.",
"title": "Generalized continuum hypothesis"
},
{
"paragraph_id": 32,
"text": "For any infinite sets A and B, if there is an injection from A to B then there is an injection from subsets of A to subsets of B. Thus for any infinite cardinals A and B, A < B → 2 A ≤ 2 B {\\displaystyle A<B\\to 2^{A}\\leq 2^{B}} . If A and B are finite, the stronger inequality A < B → 2 A < 2 B {\\displaystyle A<B\\to 2^{A}<2^{B}} holds. GCH implies that this strict, stronger inequality holds for infinite cardinals as well as finite cardinals.",
"title": "Generalized continuum hypothesis"
},
{
"paragraph_id": 33,
"text": "Although the generalized continuum hypothesis refers directly only to cardinal exponentiation with 2 as the base, one can deduce from it the values of cardinal exponentiation ℵ α ℵ β {\\displaystyle \\aleph _{\\alpha }^{\\aleph _{\\beta }}} in all cases. GCH implies that:",
"title": "Generalized continuum hypothesis"
},
{
"paragraph_id": 34,
"text": "The first equality (when α ≤ β+1) follows from:",
"title": "Generalized continuum hypothesis"
},
{
"paragraph_id": 35,
"text": "The third equality (when β+1 < α and ℵ β ≥ cf ( ℵ α ) {\\displaystyle \\aleph _{\\beta }\\geq \\operatorname {cf} (\\aleph _{\\alpha })} ) follows from:",
"title": "Generalized continuum hypothesis"
},
{
"paragraph_id": 36,
"text": "Where, for every γ, GCH is used for equating 2 ℵ γ {\\displaystyle 2^{\\aleph _{\\gamma }}} and ℵ γ + 1 {\\displaystyle \\aleph _{\\gamma +1}} ; ℵ γ 2 = ℵ γ {\\displaystyle \\aleph _{\\gamma }^{2}=\\aleph _{\\gamma }} is used as it is equivalent to the axiom of choice.",
"title": "Generalized continuum hypothesis"
},
{
"paragraph_id": 37,
"text": "Quotations related to Continuum hypothesis at Wikiquote",
"title": "External links"
}
] | In mathematics, specifically set theory, the continuum hypothesis is a hypothesis about the possible sizes of infinite sets. It states that there is no set whose cardinality is strictly between that of the integers and the real numbers, or equivalently, that any subset of the real numbers is finite, is countably infinite, or has the same cardinality as the real numbers. In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: 2^{\aleph_0} = \aleph_1, or even shorter with beth numbers: \beth_1 = \aleph_1. The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940. The name of the hypothesis comes from the term the continuum for the real numbers. | 2001-08-21T20:24:06Z | 2023-12-20T13:50:12Z | [
"Template:MathWorld",
"Template:About",
"Template:Use shortened footnotes",
"Template:Math theorem",
"Template:Cite news",
"Template:ISBN",
"Template:Cite web",
"Template:Hilbert's problems",
"Template:Authority control",
"Template:Short description",
"Template:R",
"Template:Sfn",
"Template:PlanetMath attribution",
"Template:Cite book",
"Template:Set theory",
"Template:Reflist",
"Template:Cite journal",
"Template:Webarchive",
"Template:Blockquote",
"Template:Main",
"Template:Nowrap",
"Template:Wikiquote-inline",
"Template:Mathematical logic"
] | https://en.wikipedia.org/wiki/Continuum_hypothesis |
5,706 | Çevik Bir | Çevik Bir (born 1939) is a retired Turkish army general. He was a member of the Turkish General Staff in the 1990s. He took a major part in several important international missions in the Middle East and North Africa. He was born in Buca, Izmir Province, in 1939 and is married with one child.
He graduated from the Turkish Military Academy as an engineer officer in 1958, from the Army Staff College in 1970 and from the Armed Forces College in 1971. He graduated from NATO Defense College, Rome, Italy in 1973.
From 1983 to 1985, he served at SHAPE, NATO's headquarters in Belgium. He was promoted to brigadier general and commanded an armed brigade and division in Turkey. From 1987 to 1991, he served as major general, and then was promoted to lieutenant general.
After the ousting of the dictator Siad Barre, conflicts between General Mohammed Farah Aidid's party and other clans in Somalia had led to famine and lawlessness throughout the country. An estimated 300,000 people had died from starvation. A combined military force of the United States and the United Nations (under the name "UNOSOM") was deployed to Mogadishu to monitor the ceasefire and deliver food and supplies to the starving people of Somalia. Çevik Bir, then a lieutenant-general in the Turkish Army, became the force commander of UNOSOM II in April 1993. Despite the retreat of US and UN forces after several deaths due to local hostilities mainly led by Aidid, the introduction of a powerful military force opened the transportation routes, enabling the provision of supplies, and ended the famine quickly. He was succeeded as Force Commander by a Malaysian general in January 1994.
He became a four-star general and served three years as vice chairman of the Turkish Armed Forces, and was then appointed commander of the Turkish First Army in Istanbul. While he was vice chairman of the TAF, he signed the Turkish-Israeli Military Coordination agreement in 1996.
Çevik Bir became the Turkish army's deputy chief of general staff shortly after the Somali operation and played a vital role in establishing a Turkish-Israeli entente. He retired from the army on 30 August 1999. He is a former member of the Association for the Study of the Middle East and Africa (ASMEA).
On 12 April 2012, Bir and 30 other officers were taken into custody for their role in the 1997 military memorandum that forced the then Turkish government, led by the Refah Partisi (Welfare Party), to step down. On 11 September 2021, the General Staff Personnel Presidency reported to the Ankara 5th High Criminal Court, where the case was heard, that administrative action had been taken to demote the 13 retired generals convicted in the February 28 trial. Thus, Çevik Bir was demoted.
Çevik Bir, one of the generals who planned the process, said "In Turkey we have a marriage of Islam and democracy. (…) The child of this marriage is secularism. Now this child gets sick from time to time. The Turkish Armed Forces is the doctor which saves the child. Depending on how sick the kid is, we administer the necessary medicine to make sure the child recuperates". | [
{
"paragraph_id": 0,
"text": "Çevik Bir (born 1939) is a retired Turkish army general. He was a member of the Turkish General Staff in the 1990s. He took a major part in several important international missions in the Middle East and North Africa. He was born in Buca, Izmir Province, in 1939 and is married with one child.",
"title": ""
},
{
"paragraph_id": 1,
"text": "He graduated from the Turkish Military Academy as an engineer officer in 1958, from the Army Staff College in 1970 and from the Armed Forces College in 1971. He graduated from NATO Defense College, Rome, Italy in 1973.",
"title": ""
},
{
"paragraph_id": 2,
"text": "From 1983 to 1985, he served at SHAPE, NATO's headquarters in Belgium. He was promoted to brigadier general and commanded an armed brigade and division in Turkey. From 1987 to 1991, he served as major general, and then was promoted to lieutenant general.",
"title": ""
},
{
"paragraph_id": 3,
"text": "After the dictator Siad Barre’s ousting, conflicts between the General Mohammed Farah Aidid party and other clans in Somalia had led to famine and lawlessness throughout the country. An estimated 300,000 people had died from starvation. A combined military force of United States and United Nations (under the name \"UNOSOM\") were deployed to Mogadishu, to monitor the ceasefire and deliver food and supplies to the starving people of Somali. Çevik Bir, who was then a lieutenant-general of Turkey, became the force commander of UNOSOM II in April 1993. Despite the retreat of US and UN forces after several deaths due to local hostilities mainly led by Aidid, the introduction of a powerful military force opened the transportation routes, enabling the provision of supplies and ended the famine quickly. He was succeeded as Force Commander by a Malaysian general in January 1994.",
"title": ""
},
{
"paragraph_id": 4,
"text": "He became a four-star general and served three years as vice chairman of the Turkish Armed Forces, then appointed commander of the Turkish First Army, in Istanbul. While he was vice chairman of the TAF, he signed the Turkish-Israeli Military Coordination agreement in 1996.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Çevik Bir became the Turkish army's deputy chief of general staff shortly after the Somali operation and played a vital role in establishing a Turkish-Israeli entente. He retired from the army on 30 August 1999. He is a former member of the Association for the Study of the Middle East and Africa (ASMEA).",
"title": ""
},
{
"paragraph_id": 6,
"text": "On 12 April 2012, Bir and 30 other officers were taken in custody for their role in the 1997 military memorandum that forced the then Turkish government, led by the Refah Partisi (Welfare Party), to step down. On 11 September 2021, the General Staff Personnel Presidency reported to the Ankara 5th High Criminal Court, where the case was heard, that the administrative action was taken to demolish the 13 retired generals convicted in the February 28 trial. Thus, Çevik Bir was demoted.",
"title": ""
},
{
"paragraph_id": 7,
"text": "Çevik Bir, one of the generals who planned the process, said \"In Turkey we have a marriage of Islam and democracy. (…) The child of this marriage is secularism. Now this child gets sick from time to time. The Turkish Armed Forces is the doctor which saves the child. Depending on how sick the kid is, we administer the necessary medicine to make sure the child recuperates\".",
"title": ""
}
] | Çevik Bir is a retired Turkish army general. He was a member of the Turkish General Staff in the 1990s. He took a major part in several important international missions in the Middle East and North Africa. He was born in Buca, Izmir Province, in 1939 and is married with one child. He graduated from the Turkish Military Academy as an engineer officer in 1958, from the Army Staff College in 1970 and from the Armed Forces College in 1971. He graduated from NATO Defense College, Rome, Italy in 1973. From 1983 to 1985, he served at SHAPE, NATO's headquarters in Belgium. He was promoted to brigadier general and commanded an armed brigade and division in Turkey. From 1987 to 1991, he served as major general, and then was promoted to lieutenant general. After the ousting of the dictator Siad Barre, conflicts between General Mohammed Farah Aidid's party and other clans in Somalia had led to famine and lawlessness throughout the country. An estimated 300,000 people had died from starvation. A combined military force of the United States and the United Nations was deployed to Mogadishu to monitor the ceasefire and deliver food and supplies to the starving people of Somalia. Çevik Bir, then a lieutenant-general in the Turkish Army, became the force commander of UNOSOM II in April 1993. Despite the retreat of US and UN forces after several deaths due to local hostilities mainly led by Aidid, the introduction of a powerful military force opened the transportation routes, enabling the provision of supplies, and ended the famine quickly. He was succeeded as Force Commander by a Malaysian general in January 1994. He became a four-star general and served three years as vice chairman of the Turkish Armed Forces, and was then appointed commander of the Turkish First Army in Istanbul. While he was vice chairman of the TAF, he signed the Turkish-Israeli Military Coordination agreement in 1996. Çevik Bir became the Turkish army's deputy chief of general staff shortly after the Somali operation and played a vital role in establishing a Turkish-Israeli entente. He retired from the army on 30 August 1999. He is a former member of the Association for the Study of the Middle East and Africa (ASMEA). On 12 April 2012, Bir and 30 other officers were taken into custody for their role in the 1997 military memorandum that forced the then Turkish government, led by the Refah Partisi, to step down. On 11 September 2021, the General Staff Personnel Presidency reported to the Ankara 5th High Criminal Court, where the case was heard, that administrative action had been taken to demote the 13 retired generals convicted in the February 28 trial. Thus, Çevik Bir was demoted. Çevik Bir, one of the generals who planned the process, said "In Turkey we have a marriage of Islam and democracy. (…) The child of this marriage is secularism. Now this child gets sick from time to time. The Turkish Armed Forces is the doctor which saves the child. Depending on how sick the kid is, we administer the necessary medicine to make sure the child recuperates". | 2002-02-25T15:51:15Z | 2023-11-23T21:30:10Z | [
"Template:Cite web",
"Template:Cite news",
"Template:S-start",
"Template:S-bef",
"Template:Authority control",
"Template:Short description",
"Template:Reflist",
"Template:S-mil",
"Template:Cbignore",
"Template:Dead link",
"Template:S-aft",
"Template:Use dmy dates",
"Template:S-ttl",
"Template:S-end"
] | https://en.wikipedia.org/wiki/%C3%87evik_Bir |
5,708 | Collectivism (disambiguation) | Collectivism is a type of social organization.
Collectivism may also refer to: | [
{
"paragraph_id": 0,
"text": "Template:Gaibandha Cricket Story Collectivism is the type of social organization.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Collectivism may also refer to:",
"title": ""
}
] | Collectivism is a type of social organization. Collectivism may also refer to: Bureaucratic collectivism, a theory of class society which is used to describe the Soviet Union under Joseph Stalin
Collectivist anarchism, a socialist doctrine in which the workers own and manage the production
Collectivism (art), art which is created by a group of people rather than an individual
Communitarianism, a political position that emphasizes the importance of the community over the individual or attempts to integrate the two
Corporatism, a political ideology in which groups, rather than individuals, are the building blocks of society | 2001-05-25T04:40:31Z | 2023-10-11T05:41:22Z | [
"Template:Gaibandha Cricket Story",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Collectivism_(disambiguation) |
5,711 | Nepeta | Nepeta is a genus of flowering plants in the family Lamiaceae. The genus name, from Latin nepeta (“catnip”), is reportedly in reference to Nepete, an ancient Etruscan city. There are about 250 species.
The genus is native to Europe, Asia, and Africa, and has also naturalized in North America.
Some members of this group are known as catnip or catmint because of their effect on house cats – the nepetalactone contained in some Nepeta species binds to the olfactory receptors of cats, typically resulting in temporary euphoria.
Most of the species are herbaceous perennial plants, but some are annuals. They have sturdy stems with opposite heart-shaped, green to gray-green leaves. Nepeta plants are usually aromatic in foliage and flowers.
The tubular flowers can be lavender, blue, white, pink, or lilac, and spotted with tiny lavender-purple dots. The flowers are located in verticillasters grouped on spikes; or the verticillasters are arranged in opposite cymes, racemes, or panicles – toward the tip of the stems.
The calyx is tubular or campanulate, slightly curved or straight, and its limb is often 2-lipped with five teeth. The lower lip is larger, with 3 lobes, the middle lobe being the largest. The flowers have 4 hairless stamens that are nearly parallel and ascend under the upper lip of the corolla. Two stamens are longer, and the stamens of pistillate flowers are rudimentary. The style protrudes outside the mouth of the flowers.
The fruits are nutlets, which are oblong-ovoid, ellipsoid, ovoid, or obovoid in shape. The surfaces of the nutlets can be slightly ribbed, smooth or warty.
Species include:
Some Nepeta species are cultivated as ornamental plants. They can be drought-tolerant and water-conserving, are often deer-repellent, and have long bloom periods from late spring to autumn. When planted in a garden, some species also repel insect pests, including aphids and squash bugs.
Nepeta species are used as food plants by the larvae of some Lepidoptera (butterfly and moth) species including Coleophora albitarsella, and as nectar sources for pollinators, such as honey bees and hummingbirds. | [
{
"paragraph_id": 0,
"text": "Nepeta is a genus of flowering plants in the family Lamiaceae. The genus name, from Latin nepeta (“catnip”), is reportedly in reference to Nepete, an ancient Etruscan city. There are about 250 species.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The genus is native to Europe, Asia, and Africa, and has also naturalized in North America.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Some members of this group are known as catnip or catmint because of their effect on house cats – the nepetalactone contained in some Nepeta species binds to the olfactory receptors of cats, typically resulting in temporary euphoria.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Most of the species are herbaceous perennial plants, but some are annuals. They have sturdy stems with opposite heart-shaped, green to gray-green leaves. Nepeta plants are usually aromatic in foliage and flowers.",
"title": "Description"
},
{
"paragraph_id": 4,
"text": "The tubular flowers can be lavender, blue, white, pink, or lilac, and spotted with tiny lavender-purple dots. The flowers are located in verticillasters grouped on spikes; or the verticillasters are arranged in opposite cymes, racemes, or panicles – toward the tip of the stems.",
"title": "Description"
},
{
"paragraph_id": 5,
"text": "The calyx is tubular or campanulate, they are slightly curved or straight, and the limbs are often 2-lipped with five teeth. The lower lip is larger, with 3-lobes, and the middle lobe is the largest. The flowers have 4 hairless stamens that are nearly parallel, and they ascend under the upper lip of the corolla. Two stamen are longer and stamens of pistillate flowers are rudimentary. The style protrudes outside of the mouth of the flowers.",
"title": "Description"
},
{
"paragraph_id": 6,
"text": "The fruits are nutlets, which are oblong-ovoid, ellipsoid, ovoid, or obovoid in shape. The surfaces of the nutlets can be slightly ribbed, smooth or warty.",
"title": "Description"
},
{
"paragraph_id": 7,
"text": "Species include:",
"title": "Selected species"
},
{
"paragraph_id": 8,
"text": "Some Nepeta species are cultivated as ornamental plants. They can be drought tolerant – water conserving, often deer repellent, with long bloom periods from late spring to autumn. Some species also have repellent properties to insect pests, including aphids and squash bugs, when planted in a garden.",
"title": "Uses"
},
{
"paragraph_id": 9,
"text": "Nepeta species are used as food plants by the larvae of some Lepidoptera (butterfly and moth) species including Coleophora albitarsella, and as nectar sources for pollinators, such as honey bees and hummingbirds.",
"title": "Uses"
}
] | Nepeta is a genus of flowering plants in the family Lamiaceae. The genus name, from Latin nepeta (“catnip”), is reportedly in reference to Nepete, an ancient Etruscan city. There are about 250 species. The genus is native to Europe, Asia, and Africa, and has also naturalized in North America. Some members of this group are known as catnip or catmint because of their effect on house cats – the nepetalactone contained in some Nepeta species binds to the olfactory receptors of cats, typically resulting in temporary euphoria. | 2023-07-08T15:46:29Z | [
"Template:Cite journal",
"Template:Citation",
"Template:Taxonbar",
"Template:Authority control",
"Template:Wikt-lang",
"Template:Automatic taxobox",
"Template:Reflist",
"Template:Cite web",
"Template:Other uses",
"Template:Cite book",
"Template:Commons category",
"Template:Div col end",
"Template:Div col",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Nepeta |
|
5,714 | Cornish Nationalist Party | The Cornish Nationalist Party (CNP; Cornish: An Parti Kenethlegek Kernow) is a political party founded by Dr James Whetter which campaigned for independence for Cornwall.
It was formed by people who left Cornwall's main nationalist party, Mebyon Kernow, on 28 May 1975, but it no longer campaigns for independence.
A separate party with a similar name (Cornish National Party) existed from 1969.
The split with Mebyon Kernow was based on the same debate that was occurring in most of the other political parties campaigning for autonomy from the United Kingdom at the time (such as the Scottish National Party and Plaid Cymru): whether to be a centre-left party, appealing to the electorate on a social democratic line, or whether to appeal emotionally on a centre-right cultural line. Originally, another subject of the split was whether to embrace devolution as a first step to full independence (or as the sole step if this was what the electorate wished) or for it to be "all or nothing".
The CNP essentially represented the more right-wing outlook of those who disputed that economic arguments were more likely to win votes than cultural ones. The CNP worked to preserve the Celtic identity of Cornwall and improve its economy, and encouraged links with Cornish people overseas and with other regions with distinct identities. It also gave support to the Cornish language and commemorated Thomas Flamank, a leader of the Cornish Rebellion in 1497, at an annual ceremony at Bodmin on 27 June each year.
The CNP was for some time seen as more of a pressure group, as it did not put up candidates for any elections, and its visibility and influence within Cornwall were negligible. In April 2009, a news story reported that the CNP had re-formed following a conference in Bodmin; however, it did not contest any elections that year.
Dr Whetter was the founder and editor of the CNP quarterly journal, The Cornish Banner (An Baner Kernewek), produced under the auspices of the Roseland Institute. Since his death in 2018 the CNP has been led by Androw Hawke.
A newspaper article and a revamp of the party website in October 2014 state that the party is now to contest elections once more.
John Le Bretton, vice-chairman of the party, said: "The CNP supports the retention of Cornwall Council as a Cornwall-wide authority running Cornish affairs and we call for the British government in Westminster to devolve powers to the council so that decisions affecting Cornwall can be made in Cornwall".
The CNP polled 227 (0.4%) votes in Truro during the 1979 UK General Election, 364 (0.67%) in North Cornwall in the 1983 UK General Election, and 1,892 (1.0%) at the European Parliament elections in the Cornwall and Plymouth constituency in 1984. The candidate on all three occasions was the founder and first leader of the CNP, Dr James Whetter.
The CNP had one parish councillor, CNP leader Androw Hawke who was elected to Polperro Community Council for the second time on 4 May 2017.
The reformed party was registered with the Electoral Commission in 2014, but ceased to be registered in 2017.
The Policy Statement and Programme of the CNP were published in 1975 and included the following points:
The party's policies include the following:
There have been perceived image problems, as the CNP has been seen as similarly styled to the nativist British National Party (BNP) and National Front (NF), and during the 1970s the party magazine The Cornish Banner (An Baner Kernewek) published letters sympathetic to the NF and critical of "Zionist" politicians. The CNP also formed a controversial uniformed wing known as the Greenshirts, led by the CNP Youth Movement leader and Public Relations Officer, Wallace Simmons, who also founded the pro-NF Cornish Front. (The CNP and Cornish Front were, however, sympathetic to Irish republicanism, while the NF supported Ulster loyalism, with the exception of leading NF figures like Patrick Harrington, who refused to condemn the IRA during an interview for the Channel 4 TV documentary Disciples of Chaos.) | [
] | The Cornish Nationalist Party is a political party founded by Dr James Whetter that campaigned for independence for Cornwall. | 2001-05-28T06:24:49Z | 2023-10-27T09:55:10Z | [
"Template:Distinguish",
"Template:Infobox political party",
"Template:Cite web",
"Template:Webarchive",
"Template:Portal bar",
"Template:Cornish self-government movement",
"Template:Celtic nations",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Lang-kw",
"Template:Unref-section",
"Template:Reflist",
"Template:Cite news"
] | https://en.wikipedia.org/wiki/Cornish_Nationalist_Party |
5,715 | Cryptanalysis | Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to analyze") refers to the process of analyzing information systems in order to understand hidden aspects of the systems. Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown.
In addition to mathematical analysis of cryptographic algorithms, cryptanalysis includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation.
Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems often involve solving carefully constructed problems in pure mathematics, the best-known being integer factorization.
In encryption, confidential information (called the "plaintext") is sent securely to a recipient by the sender first converting it into an unreadable form ("ciphertext") using an encryption algorithm. The ciphertext is sent through an insecure channel to the recipient. The recipient decrypts the ciphertext by applying an inverse decryption algorithm, recovering the plaintext. To decrypt the ciphertext, the recipient requires secret knowledge shared with the sender, usually a string of letters, numbers, or bits, called a cryptographic key. The concept is that even if an unauthorized person gets access to the ciphertext during transmission, without the secret key they cannot convert it back to plaintext.
Encryption has been used throughout history to send important military, diplomatic and commercial messages, and today is very widely used in computer networking to protect email and internet communication.
The goal of cryptanalysis is for a third party, a cryptanalyst, to gain as much information as possible about the original message (the "plaintext"), attempting to "break" the encryption to read the ciphertext and to learn the secret key so future messages can be decrypted and read. A mathematical technique to do this is called a cryptographic attack. Cryptographic attacks can be characterized in a number of ways:
Attacks can be classified based on what type of information the attacker has available. As a basic starting point it is normally assumed that, for the purposes of analysis, the general algorithm is known; this is Shannon's Maxim "the enemy knows the system" – in its turn, equivalent to Kerckhoffs' principle. This is a reasonable assumption in practice – throughout history, there are countless examples of secret algorithms falling into wider knowledge, variously through espionage, betrayal and reverse engineering. (And on occasion, ciphers have been broken through pure deduction; for example, the German Lorenz cipher and the Japanese Purple code, and a variety of classical schemes):
Attacks can also be characterised by the resources they require. Those resources include:
It is sometimes difficult to predict these quantities precisely, especially when the attack is not practical to actually implement for testing. But academic cryptanalysts tend to provide at least the estimated order of magnitude of their attacks' difficulty, saying, for example, "SHA-1 collisions now 2^52."
Bruce Schneier notes that even computationally impractical attacks can be considered breaks: "Breaking a cipher simply means finding a weakness in the cipher that can be exploited with a complexity less than brute force. Never mind that brute-force might require 2^128 encryptions; an attack requiring 2^110 encryptions would be considered a break...simply put, a break can just be a certificational weakness: evidence that the cipher does not perform as advertised."
The results of cryptanalysis can also vary in usefulness. Cryptographer Lars Knudsen (1998) classified various types of attack on block ciphers according to the amount and quality of secret information that was discovered:
Academic attacks are often against weakened versions of a cryptosystem, such as a block cipher or hash function with some rounds removed. Many, but not all, attacks become exponentially more difficult to execute as rounds are added to a cryptosystem, so it's possible for the full cryptosystem to be strong even though reduced-round variants are weak. Nonetheless, partial breaks that come close to breaking the original cryptosystem may mean that a full break will follow; the successful attacks on DES, MD5, and SHA-1 were all preceded by attacks on weakened versions.
In academic cryptography, a weakness or a break in a scheme is usually defined quite conservatively: it might require impractical amounts of time, memory, or known plaintexts. It also might require the attacker be able to do things many real-world attackers can't: for example, the attacker may need to choose particular plaintexts to be encrypted or even to ask for plaintexts to be encrypted using several keys related to the secret key. Furthermore, it might only reveal a small amount of information, enough to prove the cryptosystem imperfect but too little to be useful to real-world attackers. Finally, an attack might only apply to a weakened version of cryptographic tools, like a reduced-round block cipher, as a step towards breaking the full system.
Cryptanalysis has coevolved with cryptography, and the contest can be traced through the history of cryptography—new ciphers being designed to replace old broken designs, and new cryptanalytic techniques invented to crack the improved schemes. In practice, they are viewed as two sides of the same coin: secure cryptography requires design against possible cryptanalysis.
Although the actual word "cryptanalysis" is relatively recent (it was coined by William Friedman in 1920), methods for breaking codes and ciphers are much older. David Kahn notes in The Codebreakers that Arab scholars were the first people to systematically document cryptanalytic methods.
The first known recorded explanation of cryptanalysis was given by Al-Kindi (c. 801–873, also known as "Alkindus" in Europe), a 9th-century Arab polymath, in Risalah fi Istikhraj al-Mu'amma (A Manuscript on Deciphering Cryptographic Messages). This treatise contains the first description of the method of frequency analysis. Al-Kindi is thus regarded as the first codebreaker in history. His breakthrough work was influenced by Al-Khalil (717–786), who wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels.
Frequency analysis is the basic tool for breaking most classical ciphers. In natural languages, certain letters of the alphabet appear more often than others; in English, "E" is likely to be the most common letter in any sample of plaintext. Similarly, the digraph "TH" is the most likely pair of letters in English, and so on. Frequency analysis relies on a cipher failing to hide these statistics. For example, in a simple substitution cipher (where each letter is simply replaced with another), the most frequent letter in the ciphertext would be a likely candidate for "E". Frequency analysis of such a cipher is therefore relatively easy, provided that the ciphertext is long enough to give a reasonably representative count of the letters of the alphabet that it contains.
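To make the counting step concrete, here is a minimal Python sketch of the frequency analysis just described; the ciphertext is a made-up Caesar-shift example, and the function name is illustrative rather than taken from any library.

```python
from collections import Counter

def letter_frequencies(ciphertext: str):
    """Count how often each letter occurs, most frequent first."""
    counts = Counter(ch for ch in ciphertext.upper() if ch.isalpha())
    return counts.most_common()

# Toy ciphertext: an English sentence under a Caesar shift of 3
# (a hypothetical example, not drawn from any historical message).
sample = "GHIHQG WKH HDVW ZDOO RI WKH FDVWOH"
for letter, count in letter_frequencies(sample)[:5]:
    print(letter, count)
# "H" tops the list with 6 occurrences, making it the natural first
# guess for plaintext "E" - and indeed H is E shifted by 3.
```

On longer ciphertexts the ranking converges toward the language's true letter statistics, which is why the method needs a reasonably long sample.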
Al-Kindi's invention of the frequency analysis technique for breaking monoalphabetic substitution ciphers was the most significant cryptanalytic advance until World War II. Al-Kindi's Risalah fi Istikhraj al-Mu'amma described the first cryptanalytic techniques, including some for polyalphabetic ciphers, cipher classification, Arabic phonetics and syntax, and most importantly, gave the first descriptions of frequency analysis. He also covered methods of encipherments, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic. An important contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency analysis.
In Europe, Italian scholar Giambattista della Porta (1535–1615) was the author of a seminal work on cryptanalysis, De Furtivis Literarum Notis.
Successful cryptanalysis has undoubtedly influenced history; the ability to read the presumed-secret thoughts and plans of others can be a decisive advantage. For example, in England in 1587, Mary, Queen of Scots was tried and executed for treason as a result of her involvement in three plots to assassinate Elizabeth I of England. The plans came to light after her coded correspondence with fellow conspirators was deciphered by Thomas Phelippes.
In Europe during the 15th and 16th centuries, the idea of a polyalphabetic substitution cipher was developed, among others by the French diplomat Blaise de Vigenère (1523–96). For some three centuries, the Vigenère cipher, which uses a repeating key to select different encryption alphabets in rotation, was considered to be completely secure (le chiffre indéchiffrable—"the indecipherable cipher"). Nevertheless, Charles Babbage (1791–1871) and later, independently, Friedrich Kasiski (1805–81) succeeded in breaking this cipher. During World War I, inventors in several countries developed rotor cipher machines such as Arthur Scherbius' Enigma, in an attempt to minimise the repetition that had been exploited to break the Vigenère system.
In World War I, the breaking of the Zimmermann Telegram was instrumental in bringing the United States into the war. In World War II, the Allies benefitted enormously from their joint success in the cryptanalysis of the German ciphers – including the Enigma machine and the Lorenz cipher – and Japanese ciphers, particularly 'Purple' and JN-25. 'Ultra' intelligence has been credited with everything from shortening the European war by up to two years to determining its eventual result. The war in the Pacific was similarly helped by 'Magic' intelligence.
Cryptanalysis of enemy messages played a significant part in the Allied victory in World War II. F. W. Winterbotham quoted the western Supreme Allied Commander, Dwight D. Eisenhower, at the war's end as describing Ultra intelligence as having been "decisive" to Allied victory. Sir Harry Hinsley, official historian of British Intelligence in World War II, made a similar assessment about Ultra, saying that it shortened the war "by not less than two years and probably by four years"; moreover, he said that in the absence of Ultra, it is uncertain how the war would have ended.
In practice, frequency analysis relies as much on linguistic knowledge as it does on statistics, but as ciphers became more complex, mathematics became more important in cryptanalysis. This change was particularly evident before and during World War II, where efforts to crack Axis ciphers required new levels of mathematical sophistication. Moreover, automation was first applied to cryptanalysis in that era with the Polish Bomba device, the British Bombe, the use of punched card equipment, and in the Colossus computers – the first electronic digital computers to be controlled by a program.
With reciprocal machine ciphers such as the Lorenz cipher and the Enigma machine used by Nazi Germany during World War II, each message had its own key. Usually, the transmitting operator informed the receiving operator of this message key by transmitting some plaintext and/or ciphertext before the enciphered message. This is termed the indicator, as it indicates to the receiving operator how to set his machine to decipher the message.
Poorly designed and implemented indicator systems allowed first Polish cryptographers and then the British cryptographers at Bletchley Park to break the Enigma cipher system. Similar poor indicator systems allowed the British to identify depths that led to the diagnosis of the Lorenz SZ40/42 cipher system, and the comprehensive breaking of its messages without the cryptanalysts seeing the cipher machine.
Sending two or more messages with the same key is an insecure process. To a cryptanalyst the messages are then said to be "in depth." This may be detected by the messages having the same indicator by which the sending operator informs the receiving operator about the key generator initial settings for the message.
Generally, the cryptanalyst may benefit from lining up identical enciphering operations among a set of messages. For example, the Vernam cipher enciphers by combining plaintext bit-for-bit with a long key using the "exclusive or" operator, which is also known as "modulo-2 addition" (symbolized by ⊕): Plaintext ⊕ Key = Ciphertext
Deciphering combines the same key bits with the ciphertext to reconstruct the plaintext: Ciphertext ⊕ Key = Plaintext
(In modulo-2 arithmetic, addition is the same as subtraction.) When two such ciphertexts are aligned in depth, combining them eliminates the common key, leaving just a combination of the two plaintexts: Ciphertext1 ⊕ Ciphertext2 = Plaintext1 ⊕ Plaintext2
The individual plaintexts can then be worked out linguistically by trying probable words (or phrases), also known as "cribs," at various locations; a correct guess, when combined with the merged plaintext stream, produces intelligible text from the other plaintext component.
The recovered fragment of the second plaintext can often be extended in one or both directions, and the extra characters can be combined with the merged plaintext stream to extend the first plaintext. Working back and forth between the two plaintexts, using the intelligibility criterion to check guesses, the analyst may recover much or all of the original plaintexts. (With only two plaintexts in depth, the analyst may not know which one corresponds to which ciphertext, but in practice this is not a large problem.) When a recovered plaintext is then combined with its ciphertext, the key is revealed: Plaintext ⊕ Ciphertext = Key
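The whole depth attack can be seen end to end in a short Python sketch. Everything here is invented for illustration - the two plaintexts, the reused key, and the crib - and real traffic would of course be longer and noisier.

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings (modulo-2 addition)."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"ATTACK AT DAWN TOMORROW"
p2 = b"RETREAT TO THE HARBOUR "
key = b"QWERTYUIOPASDFGHJKLZXCV"  # one-time key, wrongly reused for both

c1, c2 = xor(p1, key), xor(p2, key)

# XORing the two ciphertexts cancels the shared key entirely,
# leaving only the combination of the two plaintexts.
merged = xor(c1, c2)
assert merged == xor(p1, p2)

# Crib dragging: slide a probable word along the merged stream; where
# the guess is right, the XOR exposes readable text from the other message.
crib = b"ATTACK"
for offset in range(len(merged) - len(crib) + 1):
    fragment = xor(merged[offset:offset + len(crib)], crib)
    if all(32 <= ch < 127 for ch in fragment):  # keep printable candidates
        print(offset, fragment)
# The analyst scans the printable candidates; the hit at offset 0 reveals
# b"RETREA", the opening of the second plaintext, which can then be
# extended in both directions exactly as described above.
```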
Knowledge of a key then allows the analyst to read other messages encrypted with the same key, and knowledge of a set of related keys may allow cryptanalysts to diagnose the system used for constructing them.
Governments have long recognized the potential benefits of cryptanalysis for intelligence, both military and diplomatic, and established dedicated organizations devoted to breaking the codes and ciphers of other nations, for example, GCHQ and the NSA, organizations which are still very active today.
Even though computation was used to great effect in the cryptanalysis of the Lorenz cipher and other systems during World War II, it also made possible new methods of cryptography orders of magnitude more complex than ever before. Taken as a whole, modern cryptography has become much more impervious to cryptanalysis than the pen-and-paper systems of the past, and now seems to have the upper hand against pure cryptanalysis. The historian David Kahn notes:
Many are the cryptosystems offered by the hundreds of commercial vendors today that cannot be broken by any known methods of cryptanalysis. Indeed, in such systems even a chosen plaintext attack, in which a selected plaintext is matched against its ciphertext, cannot yield the key that unlock[s] other messages. In a sense, then, cryptanalysis is dead. But that is not the end of the story. Cryptanalysis may be dead, but there is – to mix my metaphors – more than one way to skin a cat.
Kahn goes on to mention increased opportunities for interception, bugging, side channel attacks, and quantum computers as replacements for the traditional means of cryptanalysis. In 2010, former NSA technical director Brian Snow said that both academic and government cryptographers are "moving very slowly forward in a mature field."
However, any postmortems for cryptanalysis may be premature. While the effectiveness of cryptanalytic methods employed by intelligence agencies remains unknown, many serious attacks against both academic and practical cryptographic primitives have been published in the modern era of computer cryptography:
Thus, while the best modern ciphers may be far more resistant to cryptanalysis than the Enigma, cryptanalysis and the broader field of information security remain quite active.
Asymmetric cryptography (or public-key cryptography) is cryptography that relies on using two (mathematically related) keys: one private and one public. Such ciphers invariably rely on "hard" mathematical problems as the basis of their security, so an obvious point of attack is to develop methods for solving the problem. The security of two-key cryptography depends on mathematical questions in a way that single-key cryptography generally does not, and conversely links cryptanalysis to wider mathematical research in a new way.
Asymmetric schemes are designed around the (conjectured) difficulty of solving various mathematical problems. If an improved algorithm can be found to solve the problem, then the system is weakened. For example, the security of the Diffie–Hellman key exchange scheme depends on the difficulty of calculating the discrete logarithm. In 1983, Don Coppersmith found a faster way to find discrete logarithms (in certain groups), thereby requiring cryptographers to use larger groups (or different types of groups). RSA's security depends (in part) upon the difficulty of integer factorization – a breakthrough in factoring would impact the security of RSA.
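As a toy illustration of how such a scheme leans on a hard problem, the following Python sketch runs a Diffie–Hellman exchange with deliberately tiny parameters; all the numbers are hypothetical and far too small to be secure.

```python
# Public parameters (toy-sized; real deployments use 2048-bit groups or more).
p = 23        # prime modulus
g = 5         # generator of the multiplicative group mod 23
a, b = 6, 15  # Alice's and Bob's private exponents (hypothetical values)

A = pow(g, a, p)  # Alice publishes g^a mod p
B = pow(g, b, p)  # Bob publishes g^b mod p

# Each side raises the other's public value to its own private exponent,
# so both arrive at the same shared secret g^(a*b) mod p.
assert pow(B, a, p) == pow(A, b, p)

# An eavesdropper sees only (p, g, A, B) and must solve a discrete
# logarithm to recover a private exponent. Brute force works at this size:
a_recovered = next(x for x in range(1, p) if pow(g, x, p) == A)
assert a_recovered == a
# For cryptographically sized groups that search is infeasible; any faster
# discrete-log algorithm directly weakens the scheme, which is why advances
# like Coppersmith's forced larger groups.
```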
In 1980, one could factor a difficult 50-digit number at an expense of 10^12 elementary computer operations. By 1984 the state of the art in factoring algorithms had advanced to a point where a 75-digit number could be factored in 10^12 operations. Advances in computing technology also meant that the operations could be performed much faster. Moore's law predicts that computer speeds will continue to increase. Factoring techniques may continue to improve as well, but will most likely depend on mathematical insight and creativity, neither of which has ever been successfully predictable. 150-digit numbers of the kind once used in RSA have been factored. The effort was greater than the figures above, but was not unreasonable on fast modern computers. By the start of the 21st century, 150-digit numbers were no longer considered a large enough key size for RSA. Numbers with several hundred digits were still considered too hard to factor in 2005, though methods will probably continue to improve over time, requiring key size to keep pace or other methods such as elliptic curve cryptography to be used.
Another distinguishing feature of asymmetric schemes is that, unlike attacks on symmetric cryptosystems, any cryptanalysis has the opportunity to make use of knowledge gained from the public key.
Quantum computers, which are still in the early phases of research, have potential use in cryptanalysis. For example, Shor's algorithm could factor large numbers in polynomial time, in effect breaking some commonly used forms of public-key encryption.
By using Grover's algorithm on a quantum computer, brute-force key search can be made quadratically faster. However, this could be countered by doubling the key length.
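The Grover trade-off reduces to simple arithmetic, sketched below in Python; the figures assume an idealized Grover search and ignore constant factors and error correction.

```python
import math

# An ideal Grover search over N = 2^k keys needs on the order of
# sqrt(N) quantum evaluations of the cipher (constants ignored).
def grover_steps(key_bits: int) -> float:
    return math.sqrt(2.0 ** key_bits)

print("128-bit key, classical brute force: about 2^128 trials")
print(f"128-bit key, Grover search: about 2^{math.log2(grover_steps(128)):.0f} steps")
print(f"256-bit key, Grover search: about 2^{math.log2(grover_steps(256)):.0f} steps")
# Doubling the key length (128 -> 256 bits) restores a 2^128 work factor
# even against the quadratic quantum speedup.
```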
| [
] | Cryptanalysis refers to the process of analyzing information systems in order to understand hidden aspects of the systems. Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown. In addition to mathematical analysis of cryptographic algorithms, cryptanalysis includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation. Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems often involve solving carefully constructed problems in pure mathematics, the best-known being integer factorization. | 2001-11-10T14:19:00Z | 2023-12-09T17:26:19Z | [
"Template:Cite web",
"Template:Webarchive",
"Template:Cite news",
"Template:ISBN",
"Template:Main",
"Template:Sfn",
"Template:See also",
"Template:Citation needed",
"Template:Blockquote",
"Template:Expand section",
"Template:Reflist",
"Template:Cite journal",
"Template:Short description",
"Template:Cn",
"Template:Refbegin",
"Template:Commons category",
"Template:Refend",
"Template:Wiktionary",
"Template:Cryptography navbox",
"Template:Authority control",
"Template:Cite book",
"Template:Harvnb",
"Template:More citations needed",
"Template:Citation"
] | https://en.wikipedia.org/wiki/Cryptanalysis |
5,716 | Chicano | Chicano (masculine form) or Chicana (feminine form) is an ethnic identity for Mexican Americans who have a non-Anglo self-image, embracing their Mexican Native ancestry. Chicano was originally a classist and racist slur used toward low-income Mexicans that was reclaimed in the 1940s among youth who belonged to the Pachuco and Pachuca subculture. In the 1960s, Chicano was widely reclaimed in the building of a movement toward political empowerment, ethnic solidarity, and pride in being of indigenous descent (with many using the Nahuatl language or names). Chicano developed its own meaning separate from Mexican American identity. Youth in barrios rejected cultural assimilation into whiteness and embraced their own identity and worldview as a form of empowerment and resistance. The community forged an independent political and cultural movement, sometimes working alongside the Black power movement.
The Chicano Movement faltered by the mid-1970s as a result of external and internal pressures. It was under state surveillance, infiltration, and repression by U.S. government agencies, informants, and agent provocateurs, such as through COINTELPRO. The Chicano Movement also had a fixation on masculine pride and machismo that fractured the community through sexism toward Chicanas and homophobia toward queer Chicana/os. In the 1980s, assimilation and economic mobility motivated many to embrace Hispanic identity in an era of conservatism. The term Hispanic emerged from a collaboration between the U.S. government and Mexican-American political elites in the Hispanic Caucus of Congress. Likewise, the same assimilatory force associated with Hispanic has been tied to the usage of Latino. These elites used the term to identify themselves and the community with mainstream American culture, depart from Chicanismo, and distance themselves from what they perceived as the "militant" Black Caucus.
At the grassroots level, Chicana/os continued to build the feminist, gay and lesbian, and anti-apartheid movements, which kept the identity politically relevant. After a decade of Hispanic dominance, Chicana/o student activism in the early 1990s recession and the anti-Gulf War movement revived the identity with a demand to expand Chicana/o studies programs. Chicanas were active at the forefront, despite facing critiques from "movement loyalists", as they did in the Chicano Movement. Chicana feminists addressed employment discrimination, environmental racism, healthcare, sexual violence, and exploitation in their communities and in solidarity with the Third World. Chicanas worked to "liberate her entire people"; not to oppress men, but to be equal partners in the movement. Xicanisma, coined by Ana Castillo in 1994, called for Chicana/os to "reinsert the forsaken feminine into our consciousness", to embrace one's Indigenous roots, and support Indigenous sovereignty.
In the 2000s, earlier traditions of anti-imperialism in the Chicano Movement were expanded. Building solidarity with undocumented immigrants became more important, despite issues of legal status and economic competitiveness sometimes maintaining distance between groups. U.S. foreign interventions abroad were connected with domestic issues concerning the rights of undocumented immigrants in the United States. Chicano/a consciousness increasingly became transnational and transcultural, thinking beyond and bridging with communities over political borders. The identity was renewed based on Indigenous and decolonial consciousness, cultural expression, resisting gentrification, defense of immigrants, and the rights of women and queer people. Xicanx identity also emerged in the 2010s, based on the Chicana feminist intervention of Xicanisma.
The etymology of the term Chicano is the subject of some debate by historians. Some believe Chicano is a Spanish language derivative of an older Nahuatl word Mexitli ("Meh-shee-tlee"). Mexitli formed part of the expression Huitzilopochtli Mexitli—a reference to the historic migration of the Mexica people from their homeland of Aztlán to the Valley of Mexico. Mexitli is the root of the word Mexica, which refers to the Mexica people, and its singular form Mexihcatl (/meːˈʃiʔkat͡ɬ/). The x in Mexihcatl represents an /ʃ/ or sh sound in both Nahuatl and early modern Spanish, while the glottal stop in the middle of the Nahuatl word disappeared.
The word Chicano may derive from the loss of the initial syllable of Mexicano (Mexican). According to Villanueva, "given that the velar (x) is a palatal phoneme (S) with the spelling (sh)," in accordance with the Indigenous phonological system of the Mexicas ("Meshicas"), it would become "Meshicano" or "Mechicano." In this explanation, Chicano comes from the "xicano" in "Mexicano." Some Chicanos replace the Ch with the letter X, spelling it Xicano, to reclaim the Nahuatl sh sound. The first two syllables of Xicano are therefore in Nahuatl while the last syllable is Castilian.
In Mexico's Indigenous regions, Indigenous people refer to members of the non-indigenous majority as mexicanos, referring to the modern nation of Mexico. Among themselves, the speaker identifies by their pueblo (village or tribal) identity, such as Mayan, Zapotec, Mixtec, Huastec, or any of the other hundreds of indigenous groups. A newly emigrated Nahuatl speaker in an urban center might have referred to his cultural relatives in this country, different from himself, as mexicanos, shortened to Chicanos or Xicanos.
The town of Chicana was shown on the Gutiérrez 1562 New World map near the mouth of the Colorado River, and is probably pre-Columbian in origin. The town was again included on Desegno del Discoperto Della Nova Franza, a 1566 French map by Paolo Forlani. Roberto Cintli Rodríguez places the location of Chicana at the mouth of the Colorado River, near present-day Yuma, Arizona. An 18th century map of the Nayarit Missions used the name Xicana for a town near the same location of Chicana, which is considered to be the oldest recorded usage of that term.
A gunboat, the Chicana, was sold in 1857 to Jose Maria Carvajal to ship arms on the Rio Grande. The King and Kenedy firm submitted a voucher to the Joint Claims Commission of the United States in 1870 to cover the costs of this gunboat's conversion from a passenger steamer. No explanation for the boat's name is known.
The Chicano poet and writer Tino Villanueva traced the first documented use of the term as an ethnonym to 1911, as referenced in a then-unpublished essay by University of Texas anthropologist José Limón. Linguists Edward R. Simmen and Richard F. Bauerle report the use of the term in an essay by Mexican-American writer, Mario Suárez, published in the Arizona Quarterly in 1947. There is ample literary evidence to substantiate that Chicano is a long-standing endonym, as a large body of Chicano literature pre-dates the 1950s.
In the 1940s, "Chicano" was reclaimed by Pachuco youth as an expression of defiance to Anglo-American society. At the time, Chicano was used among English and Spanish speakers as a classist and racist slur to refer to working-class Mexican Americans in Spanish-speaking neighborhoods. In Mexico, the term was used with Pocho "to deride Mexicans living in the United States, and especially their U.S.-born children, for losing their culture, customs, and language." Mexican anthropologist Manuel Gamio reported in 1930 that Chicamo (with an m) was used as a derogatory term by Hispanic Texans for recently arrived Mexican immigrants displaced during the Mexican Revolution in the early 20th century.
By the 1950s, Chicano referred to those who resisted total assimilation, while Pocho referred (often pejoratively) to those who strongly advocated for assimilation. In his essay "Chicanismo" in The Oxford Encyclopedia of Mesoamerican Cultures (2002), José Cuéllar dates the transition from derisive to positive to the late 1950s, with increasing use by young Mexican-American high school students. These younger, politically aware Mexican Americans adopted the term "as an act of political defiance and ethnic pride", similar to the reclaiming of Black by African Americans. The Chicano Movement during the 1960s and early 1970s played a significant role in reclaiming "Chicano," challenging those who used it as a term of derision on both sides of the Mexico-U.S. border.
Demographic differences in the adoption of Chicano occurred at first. It was more likely to be used by males than females, and less likely to be used among those of higher socioeconomic status. Usage was also generational, with third-generation men more likely to use the word. This group was also younger, more political, and more removed from traditional Mexican cultural heritage. Chicana was a similar classist term to refer to "[a] marginalized, brown woman who is treated as a foreigner and is expected to do menial labor and ask nothing of the society in which she lives." Among Mexican Americans, Chicano and Chicana began to be viewed as a positive identity of self-determination and political solidarity. In Mexico, Chicano may still be associated with a Mexican American person of low importance, class, and poor morals (similar to the terms Cholo, Chulo and Majo), indicating a difference in cultural views.
Chicano was widely reclaimed in the 1960s and 1970s during the Chicano Movement to assert a distinct ethnic, political, and cultural identity that resisted assimilation into whiteness, systematic racism and stereotypes, colonialism, and the American nation-state. Chicano identity formed around seven themes: unity, economy, education, institutions, self-defense, culture, and political liberation, in an effort to bridge regional and class divisions. The notion of Aztlán, a mythical homeland claimed to be located in the southwestern United States, mobilized Mexican Americans to take social and political action. Chicano became a unifying term for mestizos. Xicano was also used in the 1970s.
In the 1970s, Chicanos developed a reverence for machismo while also maintaining the values of their original platform. For instance, Oscar Zeta Acosta defined machismo as the source of Chicano identity, claiming that this "instinctual and mystical source of manhood, honor and pride... alone justifies all behavior." Armando Rendón wrote in Chicano Manifesto (1971) that machismo was "in fact an underlying drive of the gathering identification of Mexican Americans... the essence of machismo, of being macho, is as much a symbolic principle for the Chicano revolt as it is a guideline for family life."
From the beginning of the Chicano Movement, some Chicanas criticized the idea that machismo must guide the people and questioned if machismo was "indeed a genuinely Mexican cultural value or a kind of distorted view of masculinity generated by the psychological need to compensate for the indignities suffered by Chicanos in a white supremacist society." Angie Chabram-Dernersesian found that most of the literature on the Chicano Movement focused on men and boys, while almost none focused on Chicanas. The omission of Chicanas and the machismo of the Chicano Movement led to a shift by the 1990s.
Xicanisma was coined by Ana Castillo in Massacre of the Dreamers (1994) as a recognition of a shift in consciousness since the Chicano Movement and to reinvigorate Chicana feminism. The aim of Xicanisma is not to replace patriarchy with matriarchy, but to create "a nonmaterialistic and nonexploitive society in which feminine principles of nurturing and community prevail"; where the feminine is reinserted into our consciousness rather than subordinated by colonization. The X reflects the sh sound of Mesoamerican languages, which early Spanish orthography marked with the letter X (as in Tlaxcala, pronounced Tlash-KAH-lah). More than a letter, the X in Xicanisma is also a symbol to represent being at a literal crossroads or otherwise embodying hybridity.
Xicanisma acknowledges Indigenous survival after hundreds of years of colonization and the need to reclaim one's Indigenous roots while also being "committed to the struggle for liberation of all oppressed people", wrote Francesca A. López. Activists like Guillermo Gómez-Peña issued "a call for a return to the Amerindian roots of most Latinos as well as a call for a strategic alliance to give agency to Native American groups." This can include one's Indigenous roots from Mexico "as well as those with roots centered in Central and South America," wrote Francisco Rios. Castillo argued that this shift in language was important because "language is the vehicle by which we perceive ourselves in relation to the world".
Among a minority of Mexican Americans, the term Xicanx may be used to refer to gender non-conformity. Luis J. Rodriguez states that "even though most US Mexicans may not use this term," it can be important for gender non-conforming Mexican Americans. Xicanx may destabilize aspects of the coloniality of gender in Mexican American communities. Artist Roy Martinez states that it is not "bound to the feminine or masculine aspects" and that it may be "inclusive to anyone who identifies with it". Some prefer the -e suffix Xicane in order to be more in line with Spanish-speaking language constructs.
In the 1930s, "community leaders promoted the term Mexican American to convey an assimilationist ideology stressing white identity," as noted by legal scholar Ian Haney López. Lisa Y. Ramos argues that "this phenomenon demonstrates why no Black-Brown civil rights effort emerged prior to the 1960s." Chicano youth rejected the previous generation's racial aspirations to assimilate into Anglo-American society and developed a "Pachuco culture that fashioned itself neither as Mexican nor American."
In the Chicano Movement, possibilities for Black–brown unity arose: "Chicanos defined themselves as proud members of a brown race, thereby rejecting, not only the previous generation's assimilationist orientation, but their racial pretensions as well." Chicano leaders collaborated with Black Power movement leaders and activists. Mexican Americans insisted that Mexicans were white, while Chicanos embraced being non-white and the development of brown pride.
Mexican American continued to be used by a more assimilationist faction who wanted to define Mexican Americans "as a white ethnic group that had little in common with African Americans." Carlos Muñoz argues that the desire to separate themselves from Blackness and political struggle was rooted in an attempt to minimize "the existence of racism toward their own people, [believing] they could "deflect" anti-Mexican sentiment in society" through affiliating with whiteness.
Following the decline of the Chicano Movement, Hispanic was first defined by the U.S. Federal Office of Management and Budget's (OMB) Directive No. 15 in 1977 as "a person of Mexican, Dominican, Puerto Rican, Cuban, Central or South American or other Spanish culture or origin, regardless of race." The term was promoted by Mexican American political elites to encourage cultural assimilation into whiteness and move away from Chicanismo. The rise of Hispanic identity paralleled the emerging era of political and cultural conservatism in the United States during the 1980s.
Key members of the Mexican American political elite, all of whom were middle-aged men, helped popularize the term Hispanic among Mexican Americans. The term was picked up by electronic and print media. Laura E. Gómez conducted a series of interviews with these elites and found that one of the main reasons Hispanic was promoted was to move away from Chicano: "The Chicano label reflected the more radical political agenda of Mexican-Americans in the 1960s and 1970s, and the politicians who call themselves Hispanic today are the harbingers of a more conservative, more accommodationist politics."
Gómez found that some of these elites promoted Hispanic to appeal to white American sensibilities, particularly in regard to separating themselves from Black political consciousness. She records:
Another respondent agreed with this position, contrasting his white colleagues' perceptions of the Congressional Hispanic Caucus with their perception of the Congressional Black Caucus. 'We certainly haven't been militant like the Black Caucus. We're seen as a power bloc—an ethnic power bloc striving to deal with mainstream issues.'
In 1980, Hispanic was first made available as a self-identification on U.S. census forms. While Chicano also appeared on the 1980 U.S. census, it was only permitted to be selected as a subcategory underneath Spanish/Hispanic descent, which erased the possibility of Afro-Chicanos and of Chicanos being of Indigenous descent. Chicano did not appear on any subsequent census forms and Hispanic has remained. Since then, Hispanic has widely been used by politicians and the media. For this reason, many Chicanos reject the term Hispanic.
Instead of or in addition to identifying as Chicano or any of its variations, some may prefer other terms of self-identification.
Chicano and Chicana identity reflects elements of ethnic, political, cultural and Indigenous hybridity. These qualities of what constitutes Chicano identity may be expressed by Chicanos differently. Armando Rendón wrote in the Chicano Manifesto (1971), "I am Chicano. What it means to me may be different than what it means to you." Benjamin Alire Sáenz wrote "There is no such thing as the Chicano voice: there are only Chicano and Chicana voices." The identity can be somewhat ambiguous (e.g. in the 1991 Culture Clash play A Bowl of Beings, in response to Che Guevara's demand for a definition of "Chicano", an "armchair activist" cries out, "I still don't know!").
Many Chicanos understand themselves as being "neither from here, nor from there", as from neither the United States nor Mexico. Juan Bruce-Novoa wrote in 1990: "A Chicano lives in the space between the hyphen in Mexican-American." Being Chicano/a may represent the struggle of being institutionally acculturated to assimilate into the Anglo-dominated society of the United States, yet maintaining the cultural sense developed as a Latin-American cultured U.S.-born Mexican child. Rafael Pérez-Torres wrote, "one can no longer assert the wholeness of a Chicano subject ... It is illusory to deny the nomadic quality of the Chicano community, a community in flux that yet survives and, through survival, affirms itself."
Chicano is a way for Mexican Americans to assert ethnic solidarity and Brown Pride. Boxer Rodolfo Gonzales was one of the first to reclaim the term in this way. This Brown Pride movement established itself alongside the Black is Beautiful movement. Chicano identity emerged as a symbol of pride in having a non-white and non-European image of oneself. It challenged the U.S. census designation "Whites with Spanish Surnames" that was used in the 1950s. Chicanos asserted ethnic pride at a time when Mexican assimilation into whiteness was being promoted by the U.S. government. Ian Haney López argues that this was to "serve Anglo self-interest", who claimed Mexicans were white to try to deny racism against them.
Alfred Arteaga argues that Chicano as an ethnic identity is born out of the European colonization of the Americas. He states that Chicano arose as a hybrid ethnicity or race amidst colonial violence. This hybridity extends beyond a previously generalized "Aztec" ancestry, since the Indigenous peoples of Mexico are a diverse group of nations and peoples. A 2011 study found that 85 to 90% of maternal mtDNA lineages in Mexican Americans are Indigenous. Chicano ethnic identity may involve more than just Indigenous and Spanish ancestry. It may also include African ancestry (as a result of slavery under the Spanish or of enslaved people who escaped from Anglo-Americans). Arteaga concluded that "the physical manifestation of the Chicano, is itself a product of hybridity."
Robert Quintana Hopkins argues that Afro-Chicanos are sometimes erased from the ethnic identity "because so many people uncritically apply the 'one drop rule' in the U.S. [which] ignores the complexity of racial hybridity." Black and Chicano communities have engaged in close political movements and struggles for liberation, yet there have also been tensions between Black and Chicano communities. This has been attributed to racial capitalism and anti-Blackness in Chicano communities. Afro-Chicano rapper Choosey stated "there's a stigma that Black and Mexican cultures don't get along, but I wanted to show the beauty in being a product of both."
Chicano political identity developed from a reverence of Pachuco resistance in the 1940s. Luis Valdez wrote that "Pachuco determination and pride grew through the 1950s and gave impetus to the Chicano Movement of the 1960s ... By then the political consciousness stirred by the 1943 Zoot Suit Riots had developed into a movement that would soon issue the Chicano Manifesto—a detailed platform of political activism." By the 1960s, the Pachuco figure "emerged as an icon of resistance in Chicano cultural production." The Pachuca was not regarded with the same status. Catherine Ramírez credits this to the Pachuca being interpreted as a symbol of "dissident femininity, female masculinity, and, in some instances, lesbian sexuality".
The political identity was founded on the principle that the U.S. nation-state had impoverished and exploited the Chicano people and communities. Alberto Varon argued that this brand of Chicano nationalism focused on the machismo subject in its calls for political resistance. Chicano machismo was both a unifying and fracturing force. Cherríe Moraga argued that it fostered homophobia and sexism, which became obstacles to the Movement. As the Chicano political consciousness developed, Chicanas, including Chicana lesbians of color brought attention to "reproductive rights, especially sterilization abuse [sterilization of Latinas], battered women's shelters, rape crisis centers, [and] welfare advocacy." Chicana texts like Essays on La Mujer (1977), Mexican Women in the United States (1980), and This Bridge Called My Back (1981) have been relatively ignored even in Chicano Studies. Sonia Saldívar-Hull argued that even when Chicanas have challenged sexism, their identities have been invalidated.
Chicano political activist groups like the Brown Berets (1967–1972; 1992–present) gained support in their protests of educational inequalities and demands for an end to police brutality. They collaborated with the Black Panthers and Young Lords, which were founded in 1966 and 1968 respectively. Membership in the Brown Berets was estimated to have reached five thousand in over 80 chapters (mostly centered in California and Texas). The Brown Berets helped organize the Chicano Blowouts of 1968 and the national Chicano Moratorium, which protested the high rate of Chicano casualties in the Vietnam War. Police harassment, infiltration by federal agents provocateurs via COINTELPRO, and internal disputes led to the decline and disbandment of the Berets in 1972. David Sánchez, then a professor at East Los Angeles College, revived the Brown Berets in 1992, prompted by the high number of Chicano homicides in Los Angeles County and hoping to replace gang life with the Brown Berets.
Reies Tijerina, who was a vocal claimant to the rights of Latin Americans and Mexican Americans and a major figure of the early Chicano Movement, wrote: "The Anglo press degradized the word 'Chicano.' They use it to divide us. We use it to unify ourselves with our people and with Latin America."
Chicano represents a cultural identity that is neither fully "American" nor "Mexican." Chicano culture embodies the "in-between" nature of cultural hybridity. Central aspects of Chicano culture include lowriding, hip hop, rock, graffiti art, theater, muralism, visual art, literature, poetry, and more. Mexican American celebrities, artists, and actors help bring Chicano culture to light and contribute to its growing influence on American pop culture. In modern-day America, Chicanos can be found in all types of professions and trades. Notable subcultures include the Cholo, Pachuca, Pachuco, and Pinto subcultures. Chicano culture has had international influence in the form of lowrider car clubs in Brazil and England, music and youth culture in Japan, Māori youth enhancing lowrider bicycles and taking on cholo style, and intellectuals in France "embracing the deterritorializing qualities of Chicano subjectivity."
As early as the 1930s, the precursors to Chicano cultural identity were developing in Los Angeles, California and the Southwestern United States. Former zoot suiter Salvador "El Chava" reflects on how racism and poverty forged a hostile social environment for Chicanos which led to the development of gangs: "we had to protect ourselves". Barrios and colonias (rural barrios) emerged throughout southern California and elsewhere in neglected districts of cities and outlying areas with little infrastructure. Alienation from public institutions made some Chicano youth susceptible to gangs, whose rigid hierarchical structure and assigned social roles drew them in amid a world of government-sanctioned disorder.
Pachuco culture, which probably originated in the El Paso–Juárez area, spread to the borderland areas of California and Texas as Pachuquismo, which would eventually evolve into Chicanismo. Chicano zoot suiters on the West Coast were influenced by Black zoot suiters in the jazz and swing music scene on the East Coast. Chicano zoot suiters developed a unique cultural identity, as noted by Charles "Chaz" Bojórquez: "with their hair done in big pompadours, and "draped" in tailor-made suits, they were swinging to their own styles. They spoke Caló, their own language, a cool jive of half-English, half-Spanish rhythms. [...] Out of the zootsuiter experience came lowrider cars and culture, clothes, music, tag names, and, again, its own graffiti language." San Antonio-based Chicano artist Adan Hernandez regarded pachucos as "the coolest thing to behold in fashion, manner, and speech." As described by artist Carlos Jackson, "Pachuco culture remains a prominent theme in Chicano art because the contemporary urban cholo culture" is seen as its heir.
Many aspects of Chicano culture like lowriding cars and bicycles have been stigmatized and policed by Anglo Americans who perceive Chicanos as "juvenile delinquents or gang members" for their embrace of nonwhite style and cultures, much as they did Pachucos. These negative societal perceptions of Chicanos were amplified by media outlets such as the Los Angeles Times. Luis Alvarez remarks how negative portrayals in the media served as a tool to advocate for increased policing of Black and Brown male bodies in particular: "Popular discourse characterizing nonwhite youth as animal-like, hypersexual, and criminal marked their bodies as "other" and, when coming from city officials and the press, served to help construct for the public a social meaning of African Americans and Mexican American youth [as, in their minds, justifiably criminalized]."
Chicano rave culture in southern California provided a space for Chicanos to partially escape criminalization in the 1990s. Artist and archivist Guadalupe Rosales states that "a lot of teenagers were being criminalized or profiled as criminals or gangsters, so the party scene gave access for people to escape that". Numerous party crews, such as Aztek Nation, organized events, and parties would frequently take place in neighborhood backyards, particularly in East and South Los Angeles, the surrounding valleys, and Orange County. By 1995, it was estimated that over 500 party crews were in existence. They laid the foundations for "an influential but oft-overlooked Latin dance subculture that offered community for Chicano ravers, queer folk, and other marginalized youth." Ravers used map-point techniques to derail police raids. Rosales states that a shift occurred around the late 1990s as increasing violence affected the Chicano party scene.
Chicano identity functions as a way to reclaim one's Indigenous American, and often Indigenous Mexican, ancestry—to form an identity distinct from European identity, despite some Chicanos being of partial European descent—as a way to resist and subvert colonial domination. Rather than part of European American culture, Alicia Gasper de Alba referred to Chicanismo as an "alter-Native culture, an Other American culture Indigenous to the land base now known as the West and Southwest of the United States." While influenced by settler-imposed systems and structures, Alba refers to Chicano culture as "not immigrant but native, not foreign but colonized, not alien but different from the overarching hegemony of white America."
The Plan Espiritual de Aztlán (1969) drew from Frantz Fanon's The Wretched of the Earth (1961). In Wretched, Fanon stated: "the past existence of an Aztec civilization does not change anything very much in the diet of the Mexican peasant today", elaborating that "this passionate search for a national culture which existed before the colonial era finds its legitimate reason in the anxiety shared by native intellectuals to shrink away from that of Western culture in which they all risk being swamped ... the native intellectuals, since they could not stand wonderstruck before the history of today's barbarity, decided to go back further and to delve deeper down; and, let us make no mistake, it was with the greatest delight that they discovered that there was nothing to be ashamed of in the past, but rather dignity, glory, and solemnity."
The Chicano Movement adopted this perspective through the notion of Aztlán—a mythic Aztec homeland which Chicanos used as a way to connect themselves to a precolonial past, before the time of the "'gringo' invasion of our lands." Chicano scholars have described how this functioned as a way for Chicanos to reclaim a diverse or imprecise Indigenous past; while recognizing how Aztlán promoted divisive forms of Chicano nationalism that "did little to shake the walls and bring down the structures of power as its rhetoric so firmly proclaimed". As stated by Chicano historian Juan Gómez-Quiñones, the Plan Espiritual de Aztlán was "stripped of what radical element it possessed by stressing its alleged romantic idealism, reducing the concept of Aztlán to a psychological ploy ... all of which became possible because of the Plan's incomplete analysis which, in turn, allowed it ... to degenerate into reformism."
While acknowledging its romanticized and exclusionary foundations, Chicano scholars like Rafael Pérez-Torres state that Aztlán opened a subjectivity which stressed a connection to Indigenous peoples and cultures at a critical historical moment in which Mexican-Americans and Mexicans were "under pressure to assimilate particular standards—of beauty, of identity, of aspiration. In a Mexican context, the pressure was to urbanize and Europeanize ... "Mexican-Americans" were expected to accept anti-indigenous discourses as their own." As Pérez-Torres concludes, Aztlán allowed "for another way of aligning one's interests and concerns with community and with history ... though hazy as to the precise means in which agency would emerge, Aztlán valorized a Chicanismo that rewove into the present previously devalued lines of descent." Romanticized notions of Aztlán have declined among some Chicanos, who argue for a need to reconstruct the place of Indigeneity in relation to Chicano identity.
Danza Azteca grew popular in the U.S. with the rise of the Chicano Movement, which inspired some "Latinos to embrace their ethnic heritage and question the Eurocentric norms forced upon them." The use of pre-contact Aztec cultural elements has been critiqued by some Chicanos who stress a need to represent the diversity of Indigenous ancestry among Chicanos. Patrisia Gonzales portrays Chicanos as descendants of the Indigenous peoples of Mexico who have been displaced by colonial violence, positioning them as "detribalized Indigenous peoples and communities." Roberto Cintli Rodríguez describes Chicanos as "de-Indigenized," which he remarks occurred "in part due to religious indoctrination and a violent uprooting from the land", detaching millions of people from maíz-based cultures throughout the greater Mesoamerican region. Rodríguez asks how and why "peoples who are clearly red or brown and undeniably Indigenous to this continent have allowed ourselves, historically, to be framed by bureaucrats and the courts, by politicians, scholars, and the media as alien, illegal, and less than human."
Gloria E. Anzaldúa has addressed Chicano's detribalization: "In the case of Chicanos, being 'Mexican' is not a tribe. So in a sense Chicanos and Mexicans are 'detribalized'. We don't have tribal affiliations but neither do we have to carry ID cards establishing tribal affiliation." Anzaldúa recognized that "Chicanos, people of color, and 'whites'" have often chosen "to ignore the struggles of Native people even when it's right in our caras (faces)," expressing disdain for this "willful ignorance". She concluded that "though both "detribalized urban mixed bloods" and Chicanos are recovering and reclaiming, this society is killing off urban mixed bloods through cultural genocide, by not allowing them equal opportunities for better jobs, schooling, and health care." Inés Hernández-Ávila argued that Chicanos should recognize and reconnect with their roots "respectfully and humbly" while also validating "those peoples who still maintain their identity as original peoples of this continent" in order to create radical change capable of "transforming our world, our universe, and our lives".
During World War II, Chicano youth were targeted by white servicemen, who despised their "cool, measured indifference to the war, as well as an increasingly defiant posture toward whites in general". Historian Robin Kelley states that this "annoyed white servicemen to no end". During the Zoot Suit Riots (1943), white rage erupted in Los Angeles, which "became the site of racist attacks on Black and Chicano youth, during which white soldiers engaged in what amounted to a ritualized stripping of the zoot." Zoot suits were a symbol of collective resistance among Chicano and Black youth against city segregation and fighting in the war. Many Chicano and Black zoot-suiters engaged in draft evasion because they felt it was hypocritical for them to be expected to "fight for democracy" abroad yet face racism and oppression daily in the U.S.
This galvanized Chicano youth to focus on anti-war activism, "especially influenced by the Third World movements of liberation in Asia, Africa, and Latin America." Historian Mario T. García reflects that "these anti-colonial and anti-Western movements for national liberation and self-awareness touched a historical nerve among Chicanos as they began to learn that they shared some similarities with these Third World struggles." Chicano poet Alurista argued that "Chicanos cannot be truly free until they recognize that the struggle in the United States is intricately bound with the anti-imperialist struggle in other countries." The Cuban Revolution (1953–1959) led by Fidel Castro and Che Guevara was particularly influential; García notes that Chicanos viewed the revolution as "a nationalist revolt against 'Yankee imperialism' and neo-colonialism."
In the 1960s, the Chicano Movement brought "attention and commitment to local struggles with an analysis and understanding of international struggles". Chicano youth organized with Black, Latin American, and Filipino activists to form the Third World Liberation Front (TWLF), which fought for the creation of a Third World college. During the Third World Liberation Front strikes of 1968, Chicano artists created posters to express solidarity. Chicano poster artist Rupert García referred to the place of artists in the movement: "I was critical of the police, of capitalist exploitation. I did posters of Che, of Zapata, of other Third World leaders. As artists, we climbed down from the ivory tower." Learning from Cuban poster makers of the post-revolutionary period, Chicano artists "incorporated international struggles for freedom and self-determination, such as those of Angola, Chile, and South Africa", while also promoting the struggles of Indigenous people and other civil rights movements through Black-brown unity. Chicanas organized with women of color activists to create the Third World Women's Alliance (1968–1980), representing "visions of liberation in third world solidarity that inspired political projects among racially and economically marginalized communities" against U.S. capitalism and imperialism.
The Chicano Moratorium (1969–1971) against the Vietnam War was one of the largest demonstrations of Mexican-Americans in history, drawing over 30,000 supporters in East L.A. Draft evasion was a form of resistance for Chicano anti-war activists such as Rosalio Muñoz, Ernesto Vigil, and Salomon Baldenegro. They faced a felony charge carrying a minimum of five years in prison, a $10,000 fine, or both. In response, Muñoz wrote "I declare my independence of the Selective Service System. I accuse the government of the United States of America of genocide against the Mexican people. Specifically, I accuse the draft, the entire social, political, and economic system of the United States of America, of creating a funnel which shoots Mexican youth into Vietnam to be killed and to kill innocent men, women, and children...." Rodolfo "Corky" Gonzales expressed a similar stance: "My feelings and emotions are aroused by the complete disregard of our present society for the rights, dignity, and lives of not only people of other nations but of our own unfortunate young men who die for an abstract cause in a war that cannot be honestly justified by any of our present leaders."
Anthologies such as This Bridge Called My Back: Writings by Radical Women of Color (1981) were produced in the late 1970s and early 80s by writers who identified as lesbians of color, including Cherríe Moraga, Pat Parker, Toni Cade Bambara, Chrystos (self-identified claim of Menominee ancestry), Audre Lorde, Gloria E. Anzaldúa, Cheryl Clarke, Jewelle Gomez, Kitty Tsui, and Hattie Gossett, who developed a poetics of liberation. Kitchen Table: Women of Color Press and Third Woman Press, the latter founded in 1979 by Chicana feminist Norma Alarcón, provided sites for the production of women of color and Chicana literatures and critical essays. While First World feminists focused "on the liberal agenda of political rights", Third World feminists "linked their agenda for women's rights with economic and cultural rights" and unified together "under the banner of Third World solidarity". Maylei Blackwell identifies that this internationalist critique of capitalism and imperialism forged by women of color has yet to be fully historicized and is "usually dropped out of the false historical narrative".
In the 1980s and 90s, Central American activists influenced Chicano leaders. The Mexican American Legislative Caucus (MALC) supported the Esquipulas Peace Agreement in 1987, standing in opposition to Contra aid. Al Luna criticized Reagan and American involvement while defending Nicaragua's Sandinista-led government: "President Reagan cannot credibly make public speeches for peace in Central America while at the same time advocating for a three-fold increase in funding to the Contras." The Southwest Voter Research Initiative (SVRI), launched by Chicano leader Willie Velásquez, intended to educate Chicano youth about Central and Latin American political issues. In 1988, "there was no significant urban center in the Southwest where Chicano leaders and activists had not become involved in lobbying or organizing to change U.S. policy in Nicaragua." In the early 1990s, Cherríe Moraga urged Chicano activists to recognize that "the Anglo invasion of Latin America [had] extended well beyond the Mexican/American border" while Gloria E. Anzaldúa positioned Central America as the primary target of a U.S. interventionism that had murdered and displaced thousands. However, Chicano solidarity narratives of Central Americans in the 1990s tended to center themselves, stereotype Central Americans, and filter their struggles "through Chicana/o struggles, histories, and imaginaries."
Chicano activists organized against the Gulf War (1990–91). Raul Ruiz of the Chicano Mexican Committee against the Gulf War stated that the aim of U.S. intervention was "to support U.S. oil interests in the region." Ruiz expressed, "we were the only Chicano group against the war. We did a lot of protesting in L.A. even though it was difficult because of the strong support for the war and the anti-Arab reaction that followed ... we experienced racist attacks [but] we held our ground." The end of the Gulf War, along with the Rodney King riots, was crucial in inspiring a new wave of Chicano political activism. In 1994, one of the largest demonstrations of Mexican Americans in the history of the United States occurred when 70,000 people, largely Chicanos and Latinos, marched in Los Angeles and other cities to protest Proposition 187, which aimed to cut educational and welfare benefits for undocumented immigrants.
In 2004, Mujeres against Militarism and the Raza Unida Coalition sponsored a Day of the Dead vigil against militarism within the Latino community, addressing the War in Afghanistan (2001–) and the Iraq War (2003–2011). They held photos of the dead and chanted "no blood for oil." The procession ended with a five-hour vigil at Tia Chucha's Centro Cultural. They condemned "the Junior Reserve Officers Training Corps (JROTC) and other military recruitment programs that concentrate heavily in Latino and African American communities, noting that JROTC is rarely found in upper-income Anglo communities." Rubén Funkahuatl Guevara organized a benefit concert for Latin@s Against the War in Iraq and Mexamérica por la Paz at Self-Help Graphics against the Iraq War. Although the events were well-attended, Guevara stated that "the Feds know how to manipulate fear to reach their ends: world military dominance and maintaining a foothold in an oil-rich region were their real goals."
Chicano and Mexican labor organizers have played an active role in notable labor strikes since the early 20th century, including the Oxnard strike of 1903, the Pacific Electric Railway strike of 1903, the 1919 streetcar strike of Los Angeles, the Cantaloupe strike of 1928, the California agricultural strikes (1931–1941), and the Ventura County agricultural strike of 1941. They endured mass deportations as a form of strikebreaking in the Bisbee Deportation of 1917 and the Mexican Repatriation (1929–1936), and experienced tensions with one another during the Bracero program (1942–1964). Organizing laborers were harassed, sabotaged, and repressed, sometimes through warlike tactics by capitalist owners who engaged in coercive labor relations and collaborated with and received support from local police and community organizations. Nevertheless, Chicano and Mexican workers, particularly in agriculture, have engaged in widespread unionization activities since the 1930s.
Prior to unionization, agricultural workers, many of whom were undocumented aliens, worked in dismal conditions. Historian F. Arturo Rosales recorded a Federal Project Writer of the period, who stated: "It is sad, yet true, commentary that to the average landowner and grower in California the Mexican was to be placed in much the same category with ranch cattle, with this exception–the cattle were for the most part provided with comparatively better food and water and immeasurably better living accommodations." Growers used cheap Mexican labor to reap bigger profits and, until the 1930s, perceived Mexicans as docile and compliant with their subjugated status because they "did not organize troublesome labor unions, and it was held that he was not educated to the level of unionism". As one grower described, "We want the Mexican because we can treat them as we cannot treat any other living man ... We can control them by keeping them at night behind bolted gates, within a stockade eight feet high, surrounded by barbed wire ... We can make them work under armed guards in the fields."
Unionization efforts were initiated by the Confederación de Uniones Obreras (Federation of Labor Unions) in Los Angeles, with twenty-one chapters quickly extending throughout southern California, and La Unión de Trabajadores del Valle Imperial (Imperial Valley Workers' Union). The latter organized the Cantaloupe strike of 1928, in which workers demanded better working conditions and higher wages, but "the growers refused to budge and, as became a pattern, local authorities sided with the farmers and through harassment broke the strike". Communist-led organizations such as the Cannery and Agricultural Workers' Industrial Union (CAWIU) supported Mexican workers, renting spaces for cotton pickers during the cotton strikes of 1933 after they were thrown out of company housing by growers. Capitalist owners used "red-baiting" techniques to discredit the strikes by associating them with communists. Chicana and Mexican working women showed the greatest tendency to organize, particularly in the Los Angeles garment industry with the International Ladies' Garment Workers' Union, led by anarchist Rose Pesotta.
During World War II, the government-funded Bracero program (1942–1964) hindered unionization efforts. In response to the California agricultural strikes and the 1941 Ventura County strike of Chicano, Mexican, and Filipino lemon pickers and packers, growers organized the Ventura County Citrus Growers Committee (VCCGC) and launched a lobbying campaign to pressure the U.S. government to pass laws to prohibit labor organizing. VCCGC joined with other grower associations, forming a powerful lobbying bloc in Congress, and worked to legislate for (1) a Mexican guest workers program, which would become the Bracero program, (2) laws prohibiting strike activity, and (3) military deferments for pickers. Their lobbying efforts were successful: unionization among farmworkers was made illegal, farmworkers were excluded from minimum wage laws, and the usage of child labor by growers was ignored. In formerly active areas, such as Santa Paula, union activity stopped for over thirty years as a result.
When World War II ended, the Bracero program continued. Legal anthropologist Martha Menchaca states that this was "regardless of the fact that massive quantities of crops were no longer needed for the war effort ... after the war, the braceros were used for the benefit of the large-scale growers and not for the nation's interest." The program was extended for an indefinite period in 1951. In the mid-1940s, labor organizer Ernesto Galarza founded the National Farm Workers Union (NFWU) in opposition to the Bracero Program, organizing a large-scale 1947 strike against the Di Giorgio Fruit Company in Arvin, California. Hundreds of Mexican, Filipino, and white workers walked out and demanded higher wages. The strike was broken by the usual tactics, with law enforcement on the side of the owners, evicting strikers and bringing in undocumented workers as strikebreakers. The NFWU folded, but served as a precursor to the United Farm Workers Union led by César Chávez. By the 1950s, opposition to the Bracero program had grown considerably, as unions, churches, and Mexican-American political activists raised awareness about the effects it had on American labor standards. On December 31, 1964, the U.S. government conceded and terminated the program.
Following the closure of the Bracero program, domestic farmworkers began to organize again because "growers could no longer maintain the peonage system" with the end of imported laborers from Mexico. Labor organizing formed part of the Chicano Movement via the struggle of farmworkers against depressed wages and working conditions. César Chávez began organizing Chicano farmworkers in the early 1960s, first through the National Farm Workers Association (NFWA) and then merging the association with the Agricultural Workers Organizing Committee (AWOC), an organization of mainly Filipino workers, to form the United Farm Workers. The labor organizing of Chávez was central to the expansion of unionization throughout the United States and inspired the Farm Labor Organizing Committee (FLOC), under the leadership of Baldemar Velásquez, which continues today. Farmworkers collaborated with local Chicano organizations, such as in Santa Paula, California, where farmworkers attended Brown Berets meetings in the 1970s and Chicano youth organized to improve working conditions and initiate an urban renewal project on the eastside of the city.
Although Mexican and Chicano workers, organizers, and activists organized for decades to improve working conditions and increase wages, some scholars characterize these gains as minimal. As described by Ronald Mize and Alicia Swords, "piecemeal gains in the interests of workers have had very little impact on the capitalist agricultural labor process, so picking grapes, strawberries, and oranges in 1948 is not so different from picking those same crops in 2008." U.S. agriculture today remains totally reliant on Mexican labor, with Mexican-born individuals now constituting about 90% of the labor force.
Chicanos often endure struggles in the U.S. education system, such as being erased from curricula and devalued as students. Some Chicanos identify schools as colonial institutions that exercise control over colonized students by teaching Chicanos to idolize whiteness and develop a negative image of themselves and their worldviews. School segregation between Mexican and white students was not legally ended until the late 1940s. In Orange County, California, 80% of Mexican students could only attend schools that taught Mexican children manual education: gardening, bootmaking, blacksmithing, and carpentry for boys, and sewing and homemaking for girls. White schools taught academic preparation. When Sylvia Mendez was told to attend a Mexican school, her parents brought suit in Mendez v. Westminster (1947) and won.
Although legal segregation had been successfully challenged, de facto segregation continued in practice in many areas. Schools with primarily Mexican American enrollment were still treated as "Mexican schools" much as before the legal overturning of segregation. Mexican American students were still treated poorly in schools. Continued bias in the education system motivated Chicanos to protest and use direct action, such as walkouts, in the 1960s. On March 5, 1968, the Chicano Blowouts occurred at East Los Angeles high schools as a response to the racist treatment of Chicano students, an unresponsive school board, and a high dropout rate. The walkouts became known as "the first major mass protest against racism undertaken by Mexican-Americans in the history of the United States."
Sal Castro, a Chicano social science teacher at one of the schools, was arrested and fired for inspiring the walkouts. The walkouts were led in part by Harry Gamboa Jr., who was named "one of the hundred most dangerous and violent subversives in the United States" for organizing them. The day prior, FBI director J. Edgar Hoover had sent out a memo to law enforcement to place top priority on "political intelligence work to prevent the development of nationalist movements in minority communities". Chicana activist Alicia Escalante protested Castro's dismissal: "We in the Movement will at least be able to hold our heads up and say that we haven't submitted to the gringo or to the pressures of the system. We are brown and we are proud. I am at least raising my children to be proud of their heritage, to demand their rights, and as they become parents they too will pass this on until justice is done."
In 1969, Plan de Santa Bárbara was drafted as a 155-page document that outlined the foundation of Chicano Studies programs in higher education. It called for students, faculty, employees and the community to come together as "central and decisive designers and administrators of these programs". Chicano students and activists asserted that universities should exist to serve the community. However, by the mid-1970s, much of the radicalism of earlier Chicano Studies had been deflated by an education system that aimed to alter Chicano Studies programs from within. Mario García argued that one "encountered a deradicalization of the radicals". Some opportunistic faculty avoided their political responsibilities to the community. University administrators co-opted oppositional forces within Chicano Studies programs and encouraged tendencies that led "to the loss of autonomy of Chicano Studies programs." At the same time, "a domesticated Chicano Studies provided the university with the facade of being tolerant, liberal, and progressive."
Some Chicanos argued that the solution was to create "publishing outlets that would challenge Anglo control of academic print culture with its rules on peer review and thereby publish alternative research," arguing that a Chicano space in the colonial academy could "avoid colonization in higher education". In an attempt to establish educational autonomy, they worked with institutions like the Ford Foundation, but found that "these organizations presented a paradox". Rodolfo Acuña argued that such institutions "quickly became content to only acquire funding for research and thereby determine the success or failure of faculty". Chicano Studies became "much closer [to] the mainstream than its practitioners wanted to acknowledge." Others argued that Chicano Studies at UCLA shifted from its earlier interests in serving the Chicano community to gaining status within the colonial institution through a focus on academic publishing, which alienated it from the community.
In 2012, the Mexican American Studies Department Programs (MAS) in the Tucson Unified School District were banned after a campaign led by Anglo-American politician Tom Horne, who accused the programs of promoting "the overthrow of the U.S. government" and "resentment toward a race or class of people," of being "designed primarily for pupils of a particular ethnic group," and of advocating "ethnic solidarity instead of the treatment of pupils as individuals." Classes on Latino literature, American history/Mexican-American perspectives, Chicano art, and an American government/social justice education project course were banned. Readings of In Lak'ech from Luis Valdez's poem Pensamiento Serpentino were also banned.
Seven books, including Paulo Freire's Pedagogy of the Oppressed and works covering Chicano history and critical race theory, were banned, taken from students, and stored away. The ban was overturned in 2017 by Judge A. Wallace Tashima, who ruled that it was unconstitutional and motivated by racism, depriving Chicano students of knowledge in violation of their Fourteenth Amendment rights. The Xicanx Institute for Teaching & Organizing (XITO) emerged to carry on the legacy of the MAS programs. Chicanos continue to support the institution of Chicano Studies programs. In 2021, students at Southwestern College, the closest college to the Mexico-United States border, urged the creation of a Chicanx Studies Program to serve the predominantly Latino student body.
The Chicano concept of sin fronteras rejects the idea of borders. Some argued that the 1848 Treaty of Guadalupe Hidalgo transformed the Rio Grande region from a rich cultural center to a rigid border poorly enforced by the United States government. At the end of the Mexican-American War, 80,000 Spanish-Mexican-Indian people abruptly became inhabitants of the United States. Some Chicanos identified with the idea of Aztlán as a result, which celebrated a time preceding land division and rejected the "immigrant/foreigner" categorization by Anglo society. Chicano activists have called for unionism between Mexicans and Chicanos on both sides of the border.
In the early 20th century, the border crossing had become a site of dehumanization for Mexicans. Protests arose in 1910 along the Santa Fe Bridge over abuses committed against Mexican workers crossing the border. The 1917 Bath riots erupted after Mexicans crossing the border were required to strip naked and be disinfected with chemical agents like gasoline, kerosene, sulfuric acid, and Zyklon B, the latter of which was the fumigation of choice and would later notoriously be used in the gas chambers of Nazi Germany. Chemical dousing continued into the 1950s. During the early 20th century, Chicanos used corridos "to counter Anglocentric hegemony." Ramón Saldívar stated that "corridos served the symbolic function of interpreting empirical events and of creating counterfactual worlds of lived experience (functioning as a substitute for fiction writing)."
Newspaper Sin Fronteras (1976–1979) openly rejected the Mexico-United States border. The newspaper considered it "to be only an artificial creation that in time would be destroyed by the struggles of Mexicans on both sides of the border" and recognized that "Yankee political, economic, and cultural colonialism victimized all Mexicans, whether in the U.S. or in Mexico." Similarly, the General Brotherhood of Workers (CASA), important to the development of young Chicano intellectuals and activists, identified that, as "victims of oppression, Mexicanos could achieve liberation and self-determination only by engaging in a borderless struggle to defeat American international capitalism."
Chicana theorist Gloria E. Anzaldúa notably emphasized the border as a "1,950 mile-long wound that does not heal". In referring to the border as a wound, writer Catherine Leen suggests that Anzaldúa recognizes "the trauma and indeed physical violence very often associated with crossing the border from Mexico to the US, but also underlines the fact that the cyclical nature of this immigration means that this process will continue and find little resolution." Anzaldúa writes that la frontera signals "the coming together of two self-consistent but habitually incompatible frames of reference [which] cause un choque, a cultural collision" because "the U.S.-Mexican border es una herida abierta where the Third World grates against the first and bleeds." Chicano and Mexican artists and filmmakers continue to address "the contentious issues of exploitation, exclusion, and conflict at the border and attempt to overturn border stereotypes" through their work. Luis Alberto Urrea writes "the border runs down the middle of me. I have a barbed wire fence neatly bisecting my heart."
The 19th-century and early-20th-century image of the Mexican in the U.S. was "that of the greasy Mexican bandit or bandito," who was perceived as criminal because of Mestizo ancestry and "Indian blood." This rhetoric fueled anti-Mexican sentiment among whites, which led to many lynchings of Mexicans in the period as an act of racist violence. One of the largest massacres of Mexicans was known as La Matanza in Texas, where hundreds of Mexicans were lynched by white mobs. Many whites viewed Mexicans as inherently criminal, which they connected to their Indigenous ancestry. White historian Walter P. Webb wrote in 1935, "there is a cruel streak in the Mexican nature ... this cruelty may be a heritage from the Spanish and of the Inquisition; it may, and doubtless should be, attributed partly to Indian blood."
The "greasy bandito" stereotype of the old West evolved into images of "crazed Zoot-Suiters and pachuco killers in the 1940s, to contemporary cholos, gangsters, and gang members." Pachucos were portrayed as violent criminals in American mainstream media, which fueled the Zoot Suit Riots; initiated by off-duty policemen conducting a vigilante-hunt, the riots targeted Chicano youth who wore the zoot suit as a symbol of empowerment. On-duty police supported the violence against Chicano zoot suiters; they "escorted the servicemen to safety and arrested their Chicano victims." Arrest rates of Chicano youth rose during these decades, fueled by the "criminal" image portrayed in the media, by politicians, and by the police. Not aspiring to assimilate into Anglo-American society, Chicano youth were criminalized for their defiance of cultural assimilation: "When many of the same youth began wearing what the larger society considered outlandish clothing, sporting distinctive hairstyles, speaking in their own language (Caló), and dripping with attitude, law enforcement redoubled their efforts to rid [them from] the streets."
In the 1970s and subsequent decades, there was a wave of police killings of Chicanos. One of the most prominent cases was that of Luis "Tato" Rivera, a 20-year-old Chicano shot in the back by officer Craig Short in 1975. Two thousand Chicano demonstrators gathered at the city hall of National City, California, in protest. Short was indicted for manslaughter by district attorney Ed Miller and was acquitted of all charges. Short was later appointed acting chief of police of National City in 2003. Another high-profile case was the murder of Ricardo Falcón, a student at the University of Colorado and leader of the United Latin American Students (UMAS), by Perry Brunson, a member of the far-right American Independent Party, at a gas station. Brunson was tried for manslaughter and was "acquitted by an all-White jury". Falcón became a martyr for the Chicano Movement as police violence increased in the subsequent decades. Similar cases led sociologist Alfredo Mirandé to refer to the U.S. criminal justice system as gringo justice, because "it reflected one standard for Anglos and another for Chicanos."
The criminalization of Chicano youth in the barrio remains omnipresent. Chicano youth who adopt a cholo or chola identity endure hyper-criminalization in what has been described by Victor Rios as the youth control complex. While older residents initially "embraced the idea of a chola or cholo as a larger subculture not necessarily associated with crime and violence (but rather with a youthful temporary identity), law enforcement agents, ignorant or disdainful of barrio life, labeled youth who wore clean white tennis shoes or long socks, or who shaved their heads, as deviant." Community members were convinced by the police of cholo criminality, which led to criminalization and surveillance "reminiscent of the criminalization of Chicana and Chicano youth during the Zoot-Suit era in the 1940s."
Sociologist José S. Plascencia-Castillo refers to the barrio as a panopticon that leads to intense self-regulation, as cholo youth are scrutinized both by law enforcement, who insist they "stay in their side of town", and by the community, who in some cases "call the police to have the youngsters removed from the premises". The intense governance of Chicano youth, especially those of cholo identity, has deep implications for youth experience, affecting their physical and mental health as well as their outlook on the future. Some youth feel they "can either comply with the demands of authority figures, and become obedient and compliant, and suffer the accompanying loss of identity and self-esteem, or, adopt a resistant stance and contest social invisibility to command respect in the public sphere."
Chicanas often confront objectification in Anglo society, being perceived as "exotic", "lascivious", and "hot" at a very young age while also facing denigration as "barefoot", "pregnant", "dark", and "low-class". These perceptions in society create numerous negative sociological and psychological effects, such as excessive dieting and eating disorders. Social media may enhance these stereotypes of Chicana women and girls. Numerous studies have found that Chicanas experience elevated levels of stress as a result of sexual expectations by society, as well as their parents and families.
Although many Chicana youth desire open conversation of these gender roles and sexuality, as well as mental health, these issues are often not discussed openly in Chicano families, which perpetuates unsafe and destructive practices. While young Chicanas are objectified, middle-aged Chicanas discuss feelings of being invisible, saying they feel trapped in balancing family obligations to their parents and children while attempting to create a space for their own sexual desires. The expectation that Chicanas should be "protected" by Chicanos may also constrict the agency and mobility of Chicanas.
Chicanas are often relegated to a secondary and subordinate status in families. Cherríe Moraga argues that this issue of patriarchal ideology in Chicano and Latino communities runs deep, as the great majority of Chicano and Latino men believe in and uphold male supremacy. Moraga argues that this ideology is not only upheld by men in Chicano families, but also by mothers in their relationship to their children: "the daughter must constantly earn the mother's love, prove her fidelity to her. The son—he gets her love for free."
Chicanos develop their manhood within a context of marginalization in white society. Some argue that "Mexican men and their Chicano brothers suffer from an inferiority complex due to the conquest and genocide inflicted upon their Indigenous ancestors," which leaves Chicano men feeling trapped between identifying with the so-called "superior" European and the so-called "inferior" Indigenous sense of self. This conflict may manifest itself in the form of hypermasculinity or machismo, in which a "quest for power and control over others in order to feel better" about oneself is undertaken. This may result in men developing abusive behaviors, the development of an impenetrable "cold" persona, alcohol abuse, and other destructive and self-isolating behaviors.
The lack of discussion between Chicano male youth and their fathers or mothers about what it means to be a Chicano man creates a search for identity that often leads to self-destructive behaviors. Chicano male youth tend to learn about sex from their peers as well as from older male family members who perpetuate the idea that as men they have "a right to engage in sexual activity without commitment". The looming threat of being labeled a joto (gay) for not engaging in sexual activity also conditions many Chicanos to "use" women for their own sexual desires. Gabriel S. Estrada argues that the criminalization of Chicanos proliferates further homophobia among Chicano boys and men, who may adopt hypermasculine personas to escape such association.
Heteronormative gender roles are typically enforced in Chicano families, where any deviation from gender and sexual conformity is commonly perceived as a weakening of, or attack on, la familia. However, Chicano men who retain a masculine or machismo performance are afforded some mobility to discreetly engage in homosexual behavior, as long as it remains on the fringes; effeminacy in Chicano men and Chicana lesbianism are granted no such latitude and are understood as attacks on the family.
Queer Chicana/os may seek refuge in their families, if possible, because it is difficult for them to find spaces where they feel safe in the dominant and hostile white gay culture. Chicano machismo, religious traditionalism, and homophobia create challenges to their being accepted by their families. Gabriel S. Estrada argues that upholding "Judeo-Christian mandates against homosexuality that are not native to [Indigenous Mexico]" exiles queer Chicana/o youth.
Chicanos may seek out both Western biomedical healthcare and Indigenous health practices when dealing with trauma or illness. The effects of colonization have been shown to produce psychological distress among Indigenous communities, and intergenerational trauma, along with racism and institutionalized systems of oppression, has been shown to adversely impact the mental health of Chicanos and Latinos. Mexican Americans are three times more likely than European Americans to live in poverty. Chicano adolescents experience high rates of depression and anxiety, as well as high rates of homicide and suicide; Chicana adolescents have higher rates of depression and suicidal ideation than their European-American and African-American peers. Chicanos ages ten to seventeen are at greater risk for mood and anxiety disorders than their European-American and African-American peers. Scholars note that the reasons for this remain unclear given the scarcity of studies on Chicano youth, but intergenerational trauma, acculturative stress, and family factors are believed to contribute.
Among Mexican immigrants who have lived in the United States for less than thirteen years, lower rates of mental health disorders were found in comparison to Mexican-Americans and Chicanos born in the United States. Scholar Yvette G. Flores concludes that these studies demonstrate that "factors associated with living in the United States are related to an increased risk of mental disorders." Risk factors for negative mental health include historical and contemporary trauma stemming from colonization, marginalization, discrimination, and devaluation. The disconnection of Chicanos from their Indigeneity has been cited as a cause of trauma and negative mental health:
Loss of language, cultural rituals, and spiritual practices creates shame and despair. The loss of culture and language often goes unmourned, because it is silenced and denied by those who occupy, conquer, or dominate. Such losses and their psychological and spiritual impact are passed down across generations, resulting in depression, disconnection, and spiritual distress in subsequent generations, which are manifestations of historical or intergenerational trauma.
Psychological distress may emerge from Chicanos being "othered" in society since childhood and is linked to psychiatric disorders and symptoms which are culturally bound—susto (fright), nervios (nerves), mal de ojo (evil eye), and ataque de nervios (an attack of nerves resembling a panic attack). Manuel X. Zamarripa discusses how mental health and spirituality are often seen as disconnected subjects in Western perspectives. Zamarripa states "in our community, spirituality is key for many of us in our overall wellbeing and in restoring and giving balance to our lives". For Chicanos, Zamarripa recognizes that identity, community, and spirituality are three core aspects which are essential to maintaining good mental health.
Chicano spirituality has been described as a process of engaging in a journey to unite one's consciousness for the purposes of cultural unity and social justice. It brings together many elements and is therefore hybrid in nature. Regina M. Marchi states that Chicano spirituality "emphasizes elements of struggle, process, and politics, with the goal of creating a unity of consciousness to aid social development and political action". Lara Medina and Martha R. Gonzales explain that "reclaiming and reconstructing our spirituality based on non-Western epistemologies is central to our process of decolonization, particularly in these most troubling times of incessant Eurocentric, heteronormative patriarchy, misogyny, racial injustice, global capitalist greed, and disastrous global climate change." As a result, some scholars state that Chicano spirituality must involve a study of Indigenous Ways of Knowing (IWOK). The Circulo de Hombres group in San Diego, California, spiritually heals Chicano, Latino, and Indigenous men: "by exposing them to Indigenous-based frameworks, men of this cultural group heal and rehumanize themselves through Maya-Nahua Indigenous-based concepts and teachings", helping them process the intergenerational trauma and dehumanization that have resulted from colonization. A study on the group reported that reconnecting with Indigenous worldviews was overwhelmingly successful in helping Chicano, Latino, and Indigenous men heal. As stated by Jesus Mendoza, "our bodies remember our indigenous roots and demand that we open our mind, hearts, and souls to our reality".
Chicano spirituality is a way for Chicanos to listen, reclaim, and survive while disrupting coloniality. While Catholicism was historically the primary way for Chicanos to express their spirituality, this is changing rapidly. According to a Pew Research Center report in 2015, "the primary role of Catholicism as a conduit to spirituality has declined and some Chicanos have changed their affiliation to other Christian religions and many more have stopped attending church altogether." Increasingly, Chicanos consider themselves spiritual rather than religious or part of an organized religion. A 2020 study on spirituality and Chicano men found that many Chicanos indicated the benefits of connecting with Indigenous spiritual beliefs and worldviews rather than with Christian or Catholic organized religion. Lara Medina defines spirituality as (1) knowledge of oneself—one's gifts and one's challenges, (2) co-creation or a relationship with communities (others), and (3) a relationship with sacred sources of life and death, 'the Great Mystery' or Creator. Jesus Mendoza writes that, for Chicanos, "spirituality is our connection to the earth, our pre-Hispanic history, our ancestors, the mixture of pre-Hispanic religion with Christianity ... a return to a non-Western worldview that understands all life as sacred." In her writing on Gloria Anzaldúa's idea of spiritual activism, AnaLouise Keating states that spirituality is distinct from organized religion and New Age thinking. Leela Fernandes defines spirituality as follows:
When I speak of spirituality, at the most basic level I am referring to an understanding of the self as encompassing body and mind, as well as spirit. I am also referring to a transcendent sense of interconnection that moves beyond the knowable, visible material world. This sense of interconnection has been described variously as divinity, the sacred, spirit, or simply the universe. My understanding is also grounded in a form of lived spirituality, which is directly accessible to all and which does not need to be mediated by religious experts, institutions or theological texts; this is what is often referred to as the mystical side of spirituality... Spirituality can be as much about practices of compassion, love, ethics, and truth defined in nonreligious terms as it can be related to the mystical reinterpretations of existing religious traditions.
David Carrasco states that Mesoamerican spiritual and religious beliefs have always evolved in response to the conditions of the world around them: "These ritual and mythic traditions were not mere repetitions of ancient ways. New rituals and mythic stories were produced to respond to ecological, social, and economic changes and crises." This was represented through the art of the Olmecs, Maya, and Mexica. European colonizers sought and worked to destroy Mesoamerican worldviews regarding spirituality and replace them with a Christian model. The colonizers used syncretism in art and culture, exemplified through practices such as the idea presented in the Testerian Codices that "Jesus ate tortillas with his disciples at the last supper" or the creation of the Virgen de Guadalupe (mirroring the Christian Mary), in order to force Christianity into Mesoamerican cosmology.
Chicanos can create new spiritual traditions by recognizing this history or "by observing the past and creating a new reality". Gloria Anzaldúa states that this can be achieved through nepantla spirituality, a space where, as stated by Jesus Mendoza, "all religious knowledge can coexist and create a new spirituality ... where no one is above the other ... a place where all is useful and none is rejected." Anzaldúa and other scholars acknowledge that this is a difficult process that involves navigating many internal contradictions in order to find a path towards spiritual liberation. Cherríe Moraga calls for a deeper self-exploration of who Chicanos are in order to reach "a place of deeper inquiry into ourselves as a people ... possibly, we must turn our eyes away from racist America and take stock at the damages done to us. Possibly, the greatest risks yet to be taken are entre nosotros, where we write, paint, dance, and draw the wound for one another to build a stronger pueblo. The women artists seemed disposed to do this, their work often mediating the delicate area between cultural affirmation and criticism." Laura E. Pérez states in her study of Chicana art that "the artwork itself [is] altar-like, a site where the disembodied—divine, emotional, or social—[is] acknowledged, invoked, meditated upon, and released as a shared offering."
The diversity of Chicano cultural production is vast. Guillermo Gómez-Peña wrote that the complexity and diversity of the Chicano community includes influences from Central American, Caribbean, African, and Asian American people who have moved into Chicano communities, as well as from queer people of color. Many Chicano artists continue to question "conventional, static notions of Chicanismo," while others conform to more conventional cultural traditions.
Chicano film was established in the 1960s and has been marginalized since its inception. The generally marginal status of Chicanos in the film industry has meant that many Chicano films are not released with wide theatrical distribution. Chicano film emerged from the creation of political plays and documentaries. These included El Teatro Campesino's Yo Soy Joaquín (1969), Luis Valdez's El Corrido (1976), and Efraín Gutiérrez's Please, Don't Bury Me Alive! (1976), the latter of which is referred to as the first full-length Chicano film.
Docudramas then emerged, such as Esperanza Vasquez's Agueda Martínez (1977), Jesús Salvador Treviño's Raíces de Sangre (1977), and Robert M. Young's ¡Alambrista! (1977), followed by Luis Valdez's Zoot Suit (1981), Young's The Ballad of Gregorio Cortez (1982), Gregory Nava's My Family/Mi familia (1995) and Selena (1997), and Josefina López's Real Women Have Curves (2002). Chicana/o films continue to be regarded as a small niche in the film industry that has yet to receive mainstream commercial success. However, Chicana/o films have been influential in shaping how Chicana/os see themselves.
Chicano literature tends to focus on challenging the dominant narrative, while embracing notions of hybridity, including the use of Spanglish, as well as the blending of genre forms, such as fiction and autobiography. José Antonio Villarreal's Pocho (1959) is widely recognized as the first major Chicano novel. Poet Alurista wrote that Chicano literature served an important role in pushing back against narratives of white Anglo-Saxon Protestant culture that sought to "keep Mexicans in their place."
Rodolfo "Corky" Gonzales's "Yo Soy Joaquin" is one of the first examples of explicitly Chicano poetry. Other early influential poems included "El Louie" by José Montoya and Abelardo "Lalo" Delgado's poem "Stupid America." In 1967, Octavio Romano founded Tonatiuh-Quinto Sol Publications, which was the first dedicated Chicano publication houses. The novel Chicano (1970) by Richard Vasquez, was the first novel about Mexican Americans to be released by a major publisher. It was widely read in high schools and universities during the 1970s and is now recognized as a breakthrough novel.
Chicana feminist writers have tended to focus on themes of identity, questioning how identity is constructed, who constructs it, and for what purpose in a racist, classist, and patriarchal structure. Characters in books such as Victuum (1976) by Isabella Ríos, The House on Mango Street (1983) by Sandra Cisneros, Loving in the War Years: lo que nunca pasó por sus labios (1983) by Cherríe Moraga, The Last of the Menu Girls (1986) by Denise Chávez, Margins (1992) by Terri de la Peña, and Gulf Dreams (1996) by Emma Pérez have also been read for how they intersect with themes of gender and sexuality. Catrióna Rueda Esquibel performs a queer reading of Chicana literature in With Her Machete in Her Hand (2006) to demonstrate how some of the intimate relationships between girls and women contributed to a discourse on homoeroticism and queer sexuality in Chicana/o literature.
Chicano characters who were gay tended to be removed from the barrio and were typically portrayed with negative attributes, such as the character of "Joe Pete" in Pocho and the unnamed protagonist of John Rechy's City of Night (1963). Other characters in the Chicano canon may also be read as queer, including the unnamed protagonist of Tomás Rivera's ...y no se lo tragó la tierra (1971), and "Antonio Márez" in Rudolfo Anaya's Bless Me, Ultima (1972). Juan Bruce-Novoa wrote that homosexuality was "far from being ignored during the 1960s and 1970s," despite homophobia restricting representations: "our community is less sexually repressive than we might expect".
Lalo Guerrero has been lauded as the "father of Chicano music." Beginning in the 1930s, he wrote songs in the big band and swing genres and expanded into traditional genres of Mexican music. During the farmworkers' rights campaign, he wrote music in support of César Chávez and the United Farm Workers. Other notable musicians include Selena, who sang a mixture of Mexican, Tejano, and American popular music, and died in 1995 at the age of 23; Zack de la Rocha, social activist and lead vocalist of Rage Against the Machine; and Los Lonely Boys, a Texas-style country rock band.
Chicano techno and electronic music artists DJ Rolando, Santiago Salazar, DJ Tranzo, and Esteban Adame have released music through independent labels like Underground Resistance, Planet E, Krown Entertainment, and Rush Hour. In the 1990s, house music artists such as DJ Juanito (Johnny Loopz), Rudy "Rude Dog" Gonzalez, and Juan V. released numerous tracks through Los Angeles-based house labels Groove Daddy Records and Bust A Groove.
DJ Rolando's techno track "Knights of the Jaguar," released on the UR label in 1999, became the most well-known Chicano techno track after charting at #43 in the UK in 2000. Mixmag commented: "after it was released, it spread like wildfire all over the world. It's one of those rare tracks that feels like it can play for an eternity without anyone batting an eyelash." It has consistently been placed on best-song lists. The official video for the track features various portraits of Chicana/os in Detroit among several Chicano murals, lowrider cars, lowrider bicycles, and scenes of Chicano lifestyle.
Salazar and Adame are also affiliated with Underground Resistance and have collaborated with Nomadico. Salazar founded music labels Major People, Ican (as in Mex-Ican, with Esteban Adame) and Historia y Violencia (with Juan Mendez a.k.a. Silent Servant) and released his debut album Chicanismo in 2015 to positive reviews. Nomadico's label Yaxteq, founded in 2015, has released tracks by veteran Los Angeles techno producer Xavier De Enciso and Honduran producer Ritmos.
A growing Tex-Mex polka band trend influenced by the conjunto and norteño music of Mexican immigrants has in turn influenced much new Chicano folk music, especially on large-market Spanish-language radio stations and on television music video programs in the U.S. Some of these artists, like the band Quetzal, are known for the political content of their songs.
Hip hop culture, which is cited as having formed in the 1970s street culture of African American, West Indian (especially Jamaican), and Puerto Rican youth in the New York City borough of the Bronx, is characterized by DJing, rap music, graffiti, and breakdancing. It was adopted by many Chicano youth by the 1980s as its influence moved westward across the United States, and Chicano artists began to develop their own style of hip hop. Rappers such as Ice-T and Eazy-E shared their music and commercial insights with Chicano rappers in the late 1980s. Chicano rapper Kid Frost, often cited as "the godfather of Chicano rap", was highly influenced by Ice-T and was even cited as his protégé.
Chicano rap is a unique style of hip hop music which started with Kid Frost, who saw some mainstream exposure in the early 1990s. While Mellow Man Ace was the first mainstream rapper to use Spanglish, Frost's song "La Raza" paved the way for its use in American hip hop. Chicano rap tends to discuss themes of importance to young urban Chicanos. Some of the most prominent Chicano artists include A.L.T., Lil Rob, Psycho Realm, Baby Bash, Serio, A Lighter Shade of Brown, and Funky Aztecs. Chicano rap artists with less mainstream exposure, yet with popular underground followings include Cali Life Style, Ese 40'z, Sleepy Loka, Ms. Sancha, Mac Rockelle, Sir Dyno, and Choosey.
Chicano R&B artists include Paula DeAnda, Amanda Perez, Frankie J, and Victor Ivan Santos (early member of the Kumbia Kings and associated with Baby Bash).
Although Latin jazz is most popularly associated with artists from the Caribbean (particularly Cuba) and Brazil, young Mexican Americans have played a role in its development over the years, going back to the 1930s and early 1940s, the era of the zoot suit, when young Mexican-American musicians in Los Angeles and San Jose began to experiment with jazz-influenced styles.
In the 1950s, 1960s and 1970s, a wave of Chicano pop music surfaced through innovative musicians Carlos Santana, Johnny Rodriguez, Ritchie Valens and Linda Ronstadt. Joan Baez, who is also of Mexican-American descent, included Hispanic themes in some of her protest folk songs. Chicano rock is rock music performed by Chicano groups or music with themes derived from Chicano culture.
There are two undercurrents in Chicano rock. One is a devotion to the original rhythm and blues roots of rock and roll, including Ritchie Valens, Sunny and the Sunglows, and ? and the Mysterians. Groups inspired by this include Sir Douglas Quintet, Thee Midniters, Los Lobos, War, Tierra, and El Chicano, and, of course, the Chicano blues man himself, the late Randy Garibay. The second is an openness to Latin American sounds and influences; Trini Lopez, Santana, Malo, Azteca, Toro, Ozomatli, and other Chicano Latin rock groups follow this approach. Chicano rock has also crossed paths with other Latin rock genres (rock en español) performed by Cuban and Puerto Rican artists such as Joe Bataan and Ralphi Pagan, and with the South American nueva canción movement. The rock band The Mars Volta combines elements of progressive rock with traditional Mexican folk music and Latin rhythms along with Cedric Bixler-Zavala's Spanglish lyrics.
Chicano punk is a branch of Chicano rock. Many bands emerged from the California punk scene, including The Zeros, Bags, Los Illegals, The Brat, The Plugz, Manic Hispanic, and the Cruzados, as well as others from outside California, including Mydolls from Houston, Texas and Los Crudos from Chicago, Illinois. Some music historians argue that Chicanos in Los Angeles in the late 1970s might have independently co-founded punk rock along with the already-acknowledged founders, whose style reached major U.S. cities from European sources. The rock band ? and the Mysterians, composed primarily of Mexican-American musicians, was the first band to be described as punk rock; the term was reportedly coined in 1971 by rock critic Dave Marsh in a review of their show for Creem magazine.
El Teatro Campesino (The Farmworkers' Theater) was founded by Luis Valdez and Agustin Lira in 1965 as the cultural wing of the United Farm Workers (UFW) during the Delano grape strike. All of the actors were farmworkers involved in organizing for farmworkers' rights. Its first performances sought to recruit members for the UFW and dissuade strikebreakers. Many early performances were not scripted but were conceived through the direction of Valdez and others through actos, in which a scenario would be proposed for a scene and the dialogue would then be improvised.
Chicano performance art continued with the work of Los Angeles' comedy troupe Culture Clash, Guillermo Gómez-Peña, and Nao Bustamante, known internationally for her conceptual art pieces and as a participant in Work of Art: The Next Great Artist. Chicano performance art became popular in the 1970s, blending humor and pathos for tragicomic effect. Groups such as Asco and the Royal Chicano Air Force illustrated this aspect of performance art through their work. Asco (Spanish for nausea or disgust), composed of Willie Herrón, Gronk, Harry Gamboa Jr., and Patssi Valdez, created performance pieces such as the Walking Mural, walking down Whittier Boulevard dressed as "a multifaceted mural, a Christmas tree, and the Virgin of Guadalupe." Asco continued creating conceptual performance pieces until 1987.
In the 1990s, the San Diego-based artist cooperative of David Avalos, Louis Hock, and Elizabeth Sisco used their $5,000 National Endowment for the Arts fellowship subversively, deciding to circulate the money back to the community: "handing 10-dollar bills to undocumented workers to spend as they please." Their piece Arte Reembolsa (Art Rebate) created controversy among the art establishment, with the documentation of the piece featuring "footage of U.S. House and Senate members questioning whether the project was, in fact, art."
One of the most well-known performance art troupes is La Pocha Nostra, which has been covered in numerous articles for various performance art pieces. Active since 1993, the troupe has remained relevant into the 2010s and 2020s through its political commentary, including anti-corporate stances. It regularly uses parody and humor to make complex commentary on social issues, creating thought-provoking performances that challenge audiences to think differently.
The Chicano visual art tradition, like the identity itself, is grounded in community empowerment and resistance to assimilation and oppression. Prior to the introduction of spray cans, paint brushes were used by Chicano "shoeshine boys [who] marked their names on the walls with their daubers to stake out their spots on the sidewalk" in the early 20th century. Pachuco graffiti culture in Los Angeles was already "in full bloom" by the 1930s and 1940s, when pachucos developed their placa, "a distinctive calligraphic writing style" which went on to influence contemporary graffiti tagging. Paño, a form of pinto arte (pinto being a caló term for a male prisoner) using pen and pencil, developed in the 1930s, first using bed sheets and pillowcases as canvases. Paño has been described as rasquachismo, a Chicano worldview and artmaking method which makes the most from the least.
Graffiti artists, such as Charles "Chaz" Bojórquez, developed an original style of graffiti art known as West Coast Cholo style influenced by Mexican murals and pachuco placas (tags which indicate territorial boundaries) in the mid-20th century. In the 1960s, Chicano graffiti artists from San Antonio to L.A. (especially in East LA, Whittier, and Boyle Heights) used the art form to challenge authority, tagging police cars, buildings, and subways as "a demonstration of their bravado and anger", understanding their work as "individual acts of pride or protest, gang declarations of territory or challenge, and weapons in a class war." Chicano graffiti artists wrote C/S as an abbreviation for con safos or the variant con safo (loosely meaning "don't touch this" and expressing a "the same to you" attitude)—a common expression among Chicanos on the eastside of Los Angeles and throughout the Southwest.
The Chicano Movement and political identity had heavily influenced Chicano artists by the 1970s. Alongside the Black arts movement, this led to the development of institutions such as Self-Help Graphics, Los Angeles Contemporary Exhibitions, and Plaza de la Raza. Artists such as Harry Gamboa Jr., Gronk, and Judith Baca created art which "stood in opposition to the commercial galleries, museums, and civic institutional mainstream". This was exemplified by Asco's tagging of LACMA after "a curator refused to even entertain the idea of a Chicano art show within its walls" in 1972. Chicano art collectives such as the Royal Chicano Air Force, founded in 1970 by Ricardo Favela, José Montoya, and Esteban Villa, supported the United Farm Workers movement through art activism, using art to create and inspire social change. Favela believed that it was important to keep the culture alive through their artwork, stating: "I was dealing with art forms very foreign to me, always trying to do western art, but there was always something lacking... it was very simple: it was just my Chicano heart wanting to do Chicano art." Other Chicano visual art collectives included Con Safo in San Antonio, whose members included Felipe Reyes, José Esquivel, Roberto Ríos, Jesse Almazán, Jesse "Chista" Cantú, Jose Garza, Mel Casas, Rudy Treviño, César Martínez, Kathy Vargas, Amado Peña, Jr., Rolando Briseño, and Roberto Gonzalez, and the Mujeres Muralistas in the Mission District, San Francisco, which included Patricia Rodriguez, Graciela Carrillo, Consuelo Mendez, and Irene Perez.
Chicano muralism, which began in the 1960s, became a state-sanctioned artform in the 1970s as an attempt by outsiders to "prevent gang violence and dissuade graffiti practices". This led to the creation of murals at Estrada Courts and other sites throughout Chicano communities. In some instances, these murals were covered with the very placas they had been commissioned by the state to prevent. Marcos Sanchez-Tranquilino states that "rather than vandalism, the tagging of one's own murals points toward a complex sense of wall ownership and a social tension created by the uncomfortable yet approving attentions of official cultural authority." This created a division between established Chicano artists who celebrated inclusion and acceptance by the dominant culture and younger Chicano artists who "saw greater power in renegade muralism and barrio calligraphy than in state-sanctioned pieces." Chicano poster art became prominent in the 1970s as a way to challenge political authority, with pieces such as Rupert García's Save Our Sister (1972), depicting Angela Davis, and Yolanda M. López's Who's the Illegal Alien, Pilgrim? (1978) addressing settler colonialism.
The oppositional current of Chicano art was bolstered in the 1980s by a rising hip hop culture. The Olympic freeway murals, including Frank Romero's Going to the Olympics, created for the 1984 Olympic Games in Los Angeles, became another site of contestation as Chicano and other graffiti artists tagged the state-sanctioned public artwork. Government officials, muralists, and some residents were unable to understand the motivations for this, describing it as "mindless", "animalistic" vandalism perpetrated by "kids" who simply lacked respect. L.A. had developed a distinct graffiti culture by the 1990s and, with the rise of drugs and violence, Chicano youth culture gravitated towards graffiti to express themselves and to mark their territory amidst state-sanctioned disorder. Following the Rodney King riots and the murder of Latasha Harlins, which exemplified an explosion of racial tensions bubbling under the surface of American society, racialized youth in L.A., "feeling forgotten, angry, or marginalized, [embraced] graffiti's expressive power [as] a tool to push back."
Chicano art, although accepted into some institutional art spaces in shows like Chicano Art: Resistance and Affirmation, was still largely excluded from many mainstream art institutions in the 1990s. By the 2000s, attitudes towards graffiti in white hipster culture were changing as it became known as "street art"; in academic circles, it was termed "post-graffiti". Street art was now being mainstreamed by the white art world in traditionally Chicano neighborhoods like Echo Park, the same neighborhoods where the LAPD had once deployed CRASH (Community Resources Against Street Hoodlums) units that "often brutalized suspected taggers and gang members".
Despite this shift, Chicano artists continued to challenge what was acceptable to both insiders and outsiders of their communities. Controversy surrounding Chicana artist Alma López's "Our Lady" at the Museum of International Folk Art erupted in 2001 when "local demonstrators demanded the image be removed from the state-run museum". Previously, López's digital mural "Heaven" (2000), which depicted two Latina women embracing, had been vandalized. López received homophobic slurs, threats of physical violence, and over 800 pieces of hate mail for "Our Lady." Santa Fe Archbishop Michael J. Sheehan referred to the woman in López's piece as "a tart or a street woman". López stated that the response came from the conservative Catholic Church, "which finds women's bodies inherently sinful, and thereby promot[es] hatred of women's bodies." The art was protested again in 2011.
Manuel Paul's mural "Por Vida" (2015) at Galeria de la Raza in Mission District, San Francisco, which depicted queer and trans Chicanos, was targeted multiple times after its unveiling. Paul, a queer DJ and artist of the Maricón Collective, received online threats for the work. Ani Rivera, director of Galeria de la Raza, attributed the anger towards the mural to gentrification, which has led "some people [to] associate LGBT people with non-Latino communities." The mural was meant to challenge "long-held assumptions regarding the traditional exclusivity of heterosexuality in lowrider culture". Some credited the negative response to the mural's direct challenging of machismo and heteronormativity in the community.
Xandra Ibarra's video art Spictacle II: La Tortillera (2004) was censored by San Antonio's Department of Arts and Culture in 2020 from "XicanX: New Visions", a show which aimed to challenge "previous and existing surveys of Chicano and Latino identity-based exhibitions" through highlighting "the womxn, queer, immigrant, indigenous and activist artists who are at the forefront of the movement". Ibarra stated "the video is designed to challenge normative ideals of Mexican womanhood and is in alignment with the historical lineage of LGBTQAI+ artists' strategies to intervene in homophobic and sexist violence."
Chicano culture has become popular in some areas internationally, most prominently in Japan, Brazil, and Thailand. Chicano ideas such as hybridity and borderlands theory have found influence as well, such as in decoloniality. In São Paulo, Chicano cultural influence has given rise to the "Cho-Low" (a combination of cholo and lowrider) subculture, which has fostered a sense of cultural pride among youth.
Chicano cultural influence is strong in Japan, where Chicano culture took hold in the 1980s and continued to grow with contributions from Shin Miyata, Junichi Shimodaira, Miki Style, Night Tha Funksta, and MoNa (Sad Girl). Miyata owns a record label, Gold Barrio Records, that re-releases Chicano music. Chicano fashion and other cultural aspects have also been adopted in Japan. There has been debate over whether this is cultural appropriation, with most arguing that it is appreciation rather than appropriation. In an interview asking why Chicano culture is popular in Japan, two long-time proponents of Chicano culture in Japan agreed that "it's not about Mexico or about America: it's an alluring quality unique to the hybrid nature of Chicano and imprinted in all its resulting art forms, from lowriders in the '80s to TikTok videos today, that people relate to and appreciate, not only in Japan but around the world."
Most recently, Chicano culture has found influence in Thailand, where working-class men and women have formed what is called "Thaino" culture. They state that they have disassociated the violence that Hollywood portrayals attribute to Chicanos from the Chicano people themselves. They have adopted rules banning cocaine and amphetamines, allowing only marijuana, which is legal in Thailand. The leader of one group, who was born in a slum in Thailand, stated that he was inspired by how Chicanos created a culture out of defiance "to fight against people who were racist toward them". He also stated, "if you look closely at [Chicano] culture, you'll notice how gentle it is. You can see this in their Latin music, dances, clothes, and how they iron their clothes. It's both neat and gentle." | [
{
"paragraph_id": 0,
"text": "Chicano (masculine form) or Chicana (feminine form) is an ethnic identity for Mexican Americans who have a non-Anglo self-image, embracing their Mexican Native ancestry. Chicano was originally a classist and racist slur used toward low-income Mexicans that was reclaimed in the 1940s among youth who belonged to the Pachuco and Pachuca subculture. In the 1960s, Chicano was widely reclaimed in the building of a movement toward political empowerment, ethnic solidarity, and pride in being of indigenous descent (with many using the Nahuatl language or names). Chicano developed its own meaning separate from Mexican American identity. Youth in barrios rejected cultural assimilation into whiteness and embraced their own identity and worldview as a form of empowerment and resistance. The community forged an independent political and cultural movement, sometimes working alongside the Black power movement.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Chicano Movement faltered by the mid-1970s as a result of external and internal pressures. It was under state surveillance, infiltration, and repression by U.S. government agencies, informants, and agent provocateurs, such as through COINTELPRO. The Chicano Movement also had a fixation on masculine pride and machismo that fractured the community through sexism toward Chicanas and homophobia toward queer Chicana/os. In the 1980s, assimilation and economic mobility motivated many to embrace Hispanic identity in an era of conservatism. The term Hispanic emerged from a collaboration between the U.S. government and Mexican-American political elites in the Hispanic Caucus of Congress. Likewise, the same assimilatory force associated with Hispanic has been tied to the usage of Latino. They used the term to identify themselves and the community with mainstream American culture, depart from Chicanismo, and distance themselves from what they perceived as the \"militant\" Black Caucus.",
"title": ""
},
{
"paragraph_id": 2,
"text": "At the grassroots level, Chicana/os continued to build the feminist, gay and lesbian, and anti-apartheid movements, which kept the identity politically relevant. After a decade of Hispanic dominance, Chicana/o student activism in the early 1990s recession and the anti-Gulf War movement revived the identity with a demand to expand Chicana/o studies programs. Chicanas were active at the forefront, despite facing critiques from \"movement loyalists\", as they did in the Chicano Movement. Chicana feminists addressed employment discrimination, environmental racism, healthcare, sexual violence, and exploitation in their communities and in solidarity with the Third World. Chicanas worked to \"liberate her entire people\"; not to oppress men, but to be equal partners in the movement. Xicanisma, coined by Ana Castillo in 1994, called for Chicana/os to \"reinsert the forsaken feminine into our consciousness\", to embrace one's Indigenous roots, and support Indigenous sovereignty.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In the 2000s, earlier traditions of anti-imperialism in the Chicano Movement were expanded. Building solidarity with undocumented immigrants became more important, despite issues of legal status and economic competitiveness sometimes maintaining distance between groups. U.S. foreign interventions abroad were connected with domestic issues concerning the rights of undocumented immigrants in the United States. Chicano/a consciousness increasingly became transnational and transcultural, thinking beyond and bridging with communities over political borders. The identity was renewed based on Indigenous and decolonial consciousness, cultural expression, resisting gentrification, defense of immigrants, and the rights of women and queer people. Xicanx identity also emerged in the 2010s, based on the Chicana feminist intervention of Xicanisma.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The etymology of the term Chicano is the subject of some debate by historians. Some believe Chicano is a Spanish language derivative of an older Nahuatl word Mexitli (\"Meh-shee-tlee\"). Mexitli formed part of the expression Huitzilopochtlil Mexitli—a reference to the historic migration of the Mexica people from their homeland of Aztlán to the Oaxaca Valley. Mexitli is the root of the word Mexica, which refers to the Mexica people, and its singular form Mexihcatl (/meːˈʃiʔkat͡ɬ/). The x in Mexihcatl represents an /ʃ/ or sh sound in both Nahuatl and early modern Spanish, while the glottal stop in the middle of the Nahuatl word disappeared.",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "The word Chicano may derive from the loss of the initial syllable of Mexicano (Mexican). According to Villanueva, \"given that the velar (x) is a palatal phoneme (S) with the spelling (sh),\" in accordance with the Indigenous phonological system of the Mexicas (\"Meshicas\"), it would become \"Meshicano\" or \"Mechicano.\" In this explanation, Chicano comes from the \"xicano\" in \"Mexicano.\" Some Chicanos replace the Ch with the letter X, or Xicano, to reclaim the Nahuatl sh sound. The first two syllables of Xicano are therefore in Nahuatl while the last syllable is Castilian.",
"title": "Etymology"
},
{
"paragraph_id": 6,
"text": "In Mexico's Indigenous regions, Indigenous people refer to members of the non-indigenous majority as mexicanos, referring to the modern nation of Mexico. Among themselves, the speaker identifies by their pueblo (village or tribal) identity, such as Mayan, Zapotec, Mixtec, Huastec, or any of the other hundreds of indigenous groups. A newly emigrated Nahuatl speaker in an urban center might have referred to his cultural relatives in this country, different from himself, as mexicanos, shortened to Chicanos or Xicanos.",
"title": "Etymology"
},
{
"paragraph_id": 7,
"text": "The town of Chicana was shown on the Gutiérrez 1562 New World map near the mouth of the Colorado River, and is probably pre-Columbian in origin. The town was again included on Desegno del Discoperto Della Nova Franza, a 1566 French map by Paolo Forlani. Roberto Cintli Rodríguez places the location of Chicana at the mouth of the Colorado River, near present-day Yuma, Arizona. An 18th century map of the Nayarit Missions used the name Xicana for a town near the same location of Chicana, which is considered to be the oldest recorded usage of that term.",
"title": "Usage of terms"
},
{
"paragraph_id": 8,
"text": "A gunboat, the Chicana, was sold in 1857 to Jose Maria Carvajal to ship arms on the Rio Grande. The King and Kenedy firm submitted a voucher to the Joint Claims Commission of the United States in 1870 to cover the costs of this gunboat's conversion from a passenger steamer. No explanation for the boat's name is known.",
"title": "Usage of terms"
},
{
"paragraph_id": 9,
"text": "The Chicano poet and writer Tino Villanueva traced the first documented use of the term as an ethnonym to 1911, as referenced in a then-unpublished essay by University of Texas anthropologist José Limón. Linguists Edward R. Simmen and Richard F. Bauerle report the use of the term in an essay by Mexican-American writer, Mario Suárez, published in the Arizona Quarterly in 1947. There is ample literary evidence to substantiate that Chicano is a long-standing endonym, as a large body of Chicano literature pre-dates the 1950s.",
"title": "Usage of terms"
},
{
"paragraph_id": 10,
"text": "In the 1940s, \"Chicano\" was reclaimed by Pachuco youth as an expression of defiance to Anglo-American society. At the time, Chicano was used among English and Spanish speakers as a classist and racist slur to refer to working class Mexican Americans in Spanish-speaking neighborhoods. In Mexico, the term was used with Pocho \"to deride Mexicans living in the United States, and especially their U.S.-born children, for losing their culture, customs, and language.\" Mexican anthropologist Manuel Gamio reported in 1930 that Chicamo (with an m) was used as a derogatory term by Hispanic Texans for recently arrived Mexican immigrants displaced during the Mexican Revolution in the beginning of the early 20th century.",
"title": "Usage of terms"
},
{
"paragraph_id": 11,
"text": "By the 1950s, Chicano referred to those who resisted total assimilation, while Pocho referred (often pejoratively) to those who strongly advocated for assimilation. In his essay \"Chicanismo\" in The Oxford Encyclopedia of Mesoamerican Cultures (2002), José Cuéllar, dates the transition from derisive to positive to the late 1950s, with increasing use by young Mexican-American high school students. These younger, politically aware Mexican Americans adopted the term \"as an act of political defiance and ethnic pride\", similar to the reclaiming of Black by African Americans. The Chicano Movement during the 1960s and early 1970s played a significant role in reclaiming \"Chicano,\" challenging those who used it as a term of derision on both sides of the Mexico-U.S. border.",
"title": "Usage of terms"
},
{
"paragraph_id": 12,
"text": "Demographic differences in the adoption of Chicano occurred at first. It was more likely to be used by males than females, and less likely to be used among those of higher socioeconomic status. Usage was also generational, with third-generation men more likely to use the word. This group was also younger, more political, and different from traditional Mexican cultural heritage. Chicana was a similar classist term to refer to \"[a] marginalized, brown woman who is treated as a foreigner and is expected to do menial labor and ask nothing of the society in which she lives.\" Among Mexican Americans, Chicano and Chicana began to be viewed as a positive identity of self-determination and political solidarity. In Mexico, Chicano may still be associated with a Mexican American person of low importance, class, and poor morals (similar to the terms Cholo, Chulo and Majo), indicating a difference in cultural views.",
"title": "Usage of terms"
},
{
"paragraph_id": 13,
"text": "Chicano was widely reclaimed in the 1960s and 1970s during the Chicano Movement to assert a distinct ethnic, political, and cultural identity that resisted assimilation into whiteness, systematic racism and stereotypes, colonialism, and the American nation-state. Chicano identity formed around seven themes: unity, economy, education, institutions, self-defense, culture, and political liberation, in an effort to bridge regional and class divisions. The notion of Aztlán, a mythical homeland claimed to be located in the southwestern United States, mobilized Mexican Americans to take social and political action. Chicano became a unifying term for mestizos. Xicano was also used in the 1970s.",
"title": "Usage of terms"
},
{
"paragraph_id": 14,
"text": "In the 1970s, Chicanos developed a reverence for machismo while also maintaining the values of their original platform. For instance, Oscar Zeta Acosta defined machismo as the source of Chicano identity, claiming that this \"instinctual and mystical source of manhood, honor and pride... alone justifies all behavior.\" Armando Rendón wrote in Chicano Manifesto (1971) that machismo was \"in fact an underlying drive of the gathering identification of Mexican Americans... the essence of machismo, of being macho, is as much a symbolic principle for the Chicano revolt as it is a guideline for family life.\"",
"title": "Usage of terms"
},
{
"paragraph_id": 15,
"text": "From the beginning of the Chicano Movement, some Chicanas criticized the idea that machismo must guide the people and questioned if machismo was \"indeed a genuinely Mexican cultural value or a kind of distorted view of masculinity generated by the psychological need to compensate for the indignities suffered by Chicanos in a white supremacist society.\" Angie Chabram-Dernersesian found that most of the literature on the Chicano Movement focused on men and boys, while almost none focused on Chicanas. The omission of Chicanas and the machismo of the Chicano Movement led to a shift by the 1990s.",
"title": "Usage of terms"
},
{
"paragraph_id": 16,
"text": "Xicanisma was coined by Ana Castillo in Massacre of the Dreamers (1994) as a recognition of a shift in consciousness since the Chicano Movement and to reinvigorate Chicana feminism. The aim of Xicanisma is not to replace patriarchy with matriarchy, but to create \"a nonmaterialistic and nonexploitive society in which feminine principles of nurturing and community prevail\"; where the feminine is reinserted into our consciousness rather than subordinated by colonization. The X reflects the Sh sound in Mesoamerican languages (such as Tlaxcala, which is pronounced Tlash-KAH-lah), and so marked this sound with a letter X. More than a letter, the X in Xicanisma is also a symbol to represent being at a literal crossroads or otherwise embodying hybridity.",
"title": "Usage of terms"
},
{
"paragraph_id": 17,
"text": "Xicanisma acknowledges Indigenous survival after hundreds of years of colonization and the need to reclaim one's Indigenous roots while also being \"committed to the struggle for liberation of all oppressed people\", wrote Francesca A. López. Activists like Guillermo Gómez-Peña, issued \"a call for a return to the Amerindian roots of most Latinos as well as a call for a strategic alliance to give agency to Native American groups.\" This can include one's Indigenous roots from Mexico \"as well as those with roots centered in Central and South America,\" wrote Francisco Rios. Castillo argued that this shift in language was important because \"language is the vehicle by which we perceive ourselves in relation to the world\".",
"title": "Usage of terms"
},
{
"paragraph_id": 18,
"text": "Among a minority of Mexican Americans, the term Xicanx may be used to refer to gender non-conformity. Luis J. Rodriguez states that \"even though most US Mexicans may not use this term,\" that it can be important for gender non-conforming Mexican Americans. Xicanx may destabilize aspects of the coloniality of gender in Mexican American communities. Artist Roy Martinez states that it is not \"bound to the feminine or masculine aspects\" and that it may be \"inclusive to anyone who identifies with it\". Some prefer the -e suffix Xicane in order to be more in-line with Spanish-speaking language constructs.",
"title": "Usage of terms"
},
{
"paragraph_id": 19,
"text": "In the 1930s, \"community leaders promoted the term Mexican American to convey an assimilationist ideology stressing white identity,\" as noted by legal scholar Ian Haney López. Lisa Y. Ramos argues that \"this phenomenon demonstrates why no Black-Brown civil rights effort emerged prior to the 1960s.\" Chicano youth rejected the previous generation's racial aspirations to assimilate into Anglo-American society and developed a \"Pachuco culture that fashioned itself neither as Mexican nor American.\"",
"title": "Distinction from other terms"
},
{
"paragraph_id": 20,
"text": "In the Chicano Movement, possibilities for Black–brown unity arose: \"Chicanos defined themselves as proud members of a brown race, thereby rejecting, not only the previous generation's assimilationist orientation, but their racial pretensions as well.\" Chicano leaders collaborated with Black Power movement leaders and activists. Mexican Americans insisted that Mexicans were white, while Chicanos embraced being non-white and the development of brown pride.",
"title": "Distinction from other terms"
},
{
"paragraph_id": 21,
"text": "Mexican American continued to be used by a more assimilationist faction who wanted to define Mexican Americans \"as a white ethnic group that had little in common with African Americans.\" Carlos Muñoz argues that the desire to separate themselves from Blackness and political struggle was rooted in an attempt to minimize \"the existence of racism toward their own people, [believing] they could \"deflect\" anti-Mexican sentiment in society\" through affiliating with whiteness.",
"title": "Distinction from other terms"
},
{
"paragraph_id": 22,
"text": "Following the decline of the Chicano Movement, Hispanic was first defined by the U.S. Federal Office of Management and Budget's (OMB) Directive No. 15 in 1977 as \"a person of Mexican, Dominican, Puerto Rican, Cuban, Central or South America or other Spanish culture or origin, regardless of race.\" The term was promoted by Mexican American political elites to encourage cultural assimilation into whiteness and move away from Chicanismo. The rise of Hispanic identity paralleled the emerging era of political and cultural conservatism in the United States during the 1980s.",
"title": "Distinction from other terms"
},
{
"paragraph_id": 23,
"text": "Key members of the Mexican American political elite, all of whom were middle-aged men, helped popularize the term Hispanic among Mexican Americans. The term was picked up by electronic and print media. Laura E. Gómez conducted a series of interviews with these elites and found that one of the main reasons Hispanic was promoted was to move away from Chicano: \"The Chicano label reflected the more radical political agenda of Mexican-Americans in the 1960s and 1970s, and the politicians who call themselves Hispanic today are the harbingers of a more conservative, more accomadationist politics.\"",
"title": "Distinction from other terms"
},
{
"paragraph_id": 24,
"text": "Gómez found that some of these elites promoted Hispanic to appeal to white American sensibilities, particularly in regard to separating themselves from Black political consciousness. Gómez records:",
"title": "Distinction from other terms"
},
{
"paragraph_id": 25,
"text": "Another respondent agreed with this position, contrasting his white colleagues' perceptions of the Congressional Hispanic Caucus with their perception of the Congressional Black Caucus. 'We certainly haven't been militant like the Black Caucus. We're seen as a power bloc—an ethnic power bloc striving to deal with mainstream issues.'",
"title": "Distinction from other terms"
},
{
"paragraph_id": 26,
"text": "In 1980, Hispanic was first made available as a self-identification on U.S. census forms. While Chicano also appeared on the 1980 U.S. census, it was only permitted to be selected as a subcategory underneath Spanish/Hispanic descent, which erased the possibility of Afro-Chicanos and of Chicanos being of Indigenous descent. Chicano did not appear on any subsequent census forms and Hispanic has remained. Since then, Hispanic has widely been used by politicians and the media. For this reason, many Chicanos reject the term Hispanic.",
"title": "Distinction from other terms"
},
{
"paragraph_id": 27,
"text": "Instead of or in addition to identifying as Chicano or any of its variations, some may prefer:",
"title": "Distinction from other terms"
},
{
"paragraph_id": 28,
"text": "Chicano and Chicana identity reflects elements of ethnic, political, cultural and Indigenous hybridity. These qualities of what constitutes Chicano identity may be expressed by Chicanos differently. Armando Rendón wrote in the Chicano Manifesto (1971), \"I am Chicano. What it means to me may be different than what it means to you.\" Benjamin Alire Sáenz wrote \"There is no such thing as the Chicano voice: there are only Chicano and Chicana voices.\" The identity can be somewhat ambiguous (e.g. in the 1991 Culture Clash play A Bowl of Beings, in response to Che Guevara's demand for a definition of \"Chicano\", an \"armchair activist\" cries out, \"I still don't know!\").",
"title": "Identity"
},
{
"paragraph_id": 29,
"text": "Many Chicanos understand themselves as being \"neither from here, nor from there\", as neither from the United States or Mexico. Juan Bruce-Novoa wrote in 1990: \"A Chicano lives in the space between the hyphen in Mexican-American.\" Being Chicano/a may represent the struggle of being institutionally acculturated to assimilate into the Anglo-dominated society of the United States, yet maintaining the cultural sense developed as a Latin-American cultured U.S.-born Mexican child. Rafael Pérez-Torres wrote, \"one can no longer assert the wholeness of a Chicano subject ... It is illusory to deny the nomadic quality of the Chicano community, a community in flux that yet survives and, through survival, affirms itself.\"",
"title": "Identity"
},
{
"paragraph_id": 30,
"text": "Chicano is a way for Mexican Americans to assert ethnic solidarity and Brown Pride. Boxer Rodolfo Gonzales was one of the first to reclaim the term in this way. This Brown Pride movement established itself alongside the Black is Beautiful movement. Chicano identity emerged as a symbol of pride in having a non-white and non-European image of oneself. It challenged the U.S. census designation \"Whites with Spanish Surnames\" that was used in the 1950s. Chicanos asserted ethnic pride at a time when Mexican assimilation into whiteness was being promoted by the U.S. government. Ian Haney López argues that this was to \"serve Anglo self-interest\", who claimed Mexicans were white to try to deny racism against them.",
"title": "Identity"
},
{
"paragraph_id": 31,
"text": "Alfred Arteaga argues that Chicano as an ethnic identity is born out of the European colonization of the Americas. He states that Chicano arose as hybrid ethnicity or race amidst colonial violence. This hybridity extends beyond a previously generalized \"Aztec\" ancestry, since the Indigenous peoples of Mexico are a diverse group of nations and peoples. A 2011 study found that 85 to 90% of maternal mtDNA lineages in Mexican Americans are Indigenous. Chicano ethnic identity may involve more than just Indigenous and Spanish ancestry. It may also include African ancestry (as a result of Spanish slavery or runaway slaves from Anglo-Americans). Arteaga concluded that \"the physical manifestation of the Chicano, is itself a product of hybridity.\"",
"title": "Identity"
},
{
"paragraph_id": 32,
"text": "Robert Quintana Hopkins argues that Afro-Chicanos are sometimes erased from the ethnic identity \"because so many people uncritically apply the 'one drop rule' in the U.S. [which] ignores the complexity of racial hybridity.\" Black and Chicano communities have engaged in close political movements and struggles for liberation, yet there have also been tensions between Black and Chicano communities. This has been attributed to racial capitalism and anti-Blackness in Chicano communities. Afro-Chicano rapper Choosey stated \"there's a stigma that Black and Mexican cultures don't get along, but I wanted to show the beauty in being a product of both.\"",
"title": "Identity"
},
{
"paragraph_id": 33,
"text": "Chicano political identity developed from a reverence of Pachuco resistance in the 1940s. Luis Valdez wrote that \"Pachuco determination and pride grew through the 1950s and gave impetus to the Chicano Movement of the 1960s ... By then the political consciousness stirred by the 1943 Zoot Suit Riots had developed into a movement that would soon issue the Chicano Manifesto—a detailed platform of political activism.\" By the 1960s, the Pachuco figure \"emerged as an icon of resistance in Chicano cultural production.\" The Pachuca was not regarded with the same status. Catherine Ramírez credits this to the Pachuca being interpreted as a symbol of \"dissident femininity, female masculinity, and, in some instances, lesbian sexuality\".",
"title": "Identity"
},
{
"paragraph_id": 34,
"text": "The political identity was founded on the principle that the U.S. nation-state had impoverished and exploited the Chicano people and communities. Alberto Varon argued that this brand of Chicano nationalism focused on the machismo subject in its calls for political resistance. Chicano machismo was both a unifying and fracturing force. Cherríe Moraga argued that it fostered homophobia and sexism, which became obstacles to the Movement. As the Chicano political consciousness developed, Chicanas, including Chicana lesbians of color brought attention to \"reproductive rights, especially sterilization abuse [sterilization of Latinas], battered women's shelters, rape crisis centers, [and] welfare advocacy.\" Chicana texts like Essays on La Mujer (1977), Mexican Women in the United States (1980), and This Bridge Called My Back (1981) have been relatively ignored even in Chicano Studies. Sonia Saldívar-Hull argued that even when Chicanas have challenged sexism, their identities have been invalidated.",
"title": "Identity"
},
{
"paragraph_id": 35,
"text": "Chicano political activist groups like the Brown Berets (1967–1972; 1992–Present) gained support in their protests of educational inequalities and demanding an end to police brutality. They collaborated with the Black Panthers and Young Lords, which were founded in 1966 and 1968 respectively. Membership in the Brown Berets was estimated to have reached five thousand in over 80 chapters (mostly centered in California and Texas). The Brown Berets helped organize the Chicano Blowouts of 1968 and the national Chicano Moratorium, which protested the high rate of Chicano casualties in the Vietnam War. Police harassment, infiltration by federal agents provacateur via COINTELPRO, and internal disputes led to the decline and disbandment of the Berets in 1972. Sánchez, then a professor at East Los Angeles College, revived the Brown Berets in 1992 prompted by the high number of Chicano homicides in Los Angeles County, hoping to replace the gang life with the Brown Berets.",
"title": "Identity"
},
{
"paragraph_id": 36,
"text": "Reies Tijerina, who was a vocal claimant to the rights of Latin Americans and Mexican Americans and a major figure of the early Chicano Movement, wrote: \"The Anglo press degradized the word 'Chicano.' They use it to divide us. We use it to unify ourselves with our people and with Latin America.\"",
"title": "Identity"
},
{
"paragraph_id": 37,
"text": "Chicano represents a cultural identity that is neither fully \"American\" or \"Mexican.\" Chicano culture embodies the \"in-between\" nature of cultural hybridity. Central aspects of Chicano culture include lowriding, hip hop, rock, graffiti art, theater, muralism, visual art, literature, poetry, and more. Mexican American celebrities, artists, and actors/actresses help bring Chicano culture to light and contribute to the growing influence it has on American pop culture. In modern day America you can now find Chicanos in all types of professions and trades. Notable subcultures include the Cholo, Pachuca, Pachuco, and Pinto subcultures. Chicano culture has had international influence in the form of lowrider car clubs in Brazil and England, music and youth culture in Japan, Māori youth enhancing lowrider bicycles and taking on cholo style, and intellectuals in France \"embracing the deterritorializing qualities of Chicano subjectivity.\"",
"title": "Identity"
},
{
"paragraph_id": 38,
"text": "As early as the 1930s, the precursors to Chicano cultural identity were developing in Los Angeles, California and the Southwestern United States. Former zoot suiter Salvador \"El Chava\" reflects on how racism and poverty forged a hostile social environment for Chicanos which led to the development of gangs: \"we had to protect ourselves\". Barrios and colonias (rural barrios) emerged throughout southern California and elsewhere in neglected districts of cities and outlying areas with little infrastructure. Alienation from public institutions made some Chicano youth susceptible to gang channels, who became drawn to their rigid hierarchical structure and assigned social roles in a world of government-sanctioned disorder.",
"title": "Identity"
},
{
"paragraph_id": 39,
"text": "Pachuco culture, which probably originated in the El Paso-Juarez area, spread to the borderland areas of California and Texas as Pachuquismo, which would eventually evolve into Chicanismo. Chicano zoot suiters on the west coast were influenced by Black zoot suiters in the jazz and swing music scene on the East Coast. Chicano zoot suiters developed a unique cultural identity, as noted by Charles \"Chaz\" Bojórquez, \"with their hair done in big pompadours, and \"draped\" in tailor-made suits, they were swinging to their own styles. They spoke Cálo, their own language, a cool jive of half-English, half-Spanish rhythms. [...] Out of the zootsuiter experience came lowrider cars and culture, clothes, music, tag names, and, again, its own graffiti language.\" San Antonio-based Chicano artist Adan Hernandez regarded pachucos as \"the coolest thing to behold in fashion, manner, and speech.” As described by artist Carlos Jackson, \"Pachuco culture remains a prominent theme in Chicano art because the contemporary urban cholo culture\" is seen as its heir.",
"title": "Identity"
},
{
"paragraph_id": 40,
"text": "Many aspects of Chicano culture like lowriding cars and bicycles have been stigmatized and policed by Anglo Americans who perceive Chicanos as \"juvenile delinquents or gang members\" for their embrace of nonwhite style and cultures, much as they did Pachucos. These negative societal perceptions of Chicanos were amplified by media outlets such as the Los Angeles Times. Luis Alvarez remarks how negative portrayals in the media served as a tool to advocate for increased policing of Black and Brown male bodies in particular: \"Popular discourse characterizing nonwhite youth as animal-like, hypersexual, and criminal marked their bodies as \"other\" and, when coming from city officials and the press, served to help construct for the public a social meaning of African Americans and Mexican American youth [as, in their minds, justifiably criminalized].\"",
"title": "Identity"
},
{
"paragraph_id": 41,
"text": "Chicano rave culture in southern California provided a space for Chicanos to partially escape criminalization in the 1990s. Artist and archivist Guadalupe Rosales states that \"a lot of teenagers were being criminalized or profiled as criminals or gangsters, so the party scene gave access for people to escape that\". Numerous party crews, such as Aztek Nation, organized events and parties would frequently take place in neighborhood backyards, particularly in East and South Los Angeles, the surrounding valleys, and Orange County. By 1995, it was estimated that over 500 party crews were in existence. They laid the foundations for \"an influential but oft-overlooked Latin dance subculture that offered community for Chicano ravers, queer folk, and other marginalized youth.\" Ravers used map points techniques to derail police raids. Rosales states that a shift occurred around the late 1990s and increasing violence affected the Chicano party scene.",
"title": "Identity"
},
{
"paragraph_id": 42,
"text": "Chicano identity functions as a way to reclaim one's Indigenous American, and often Indigenous Mexican, ancestry—to form an identity distinct from European identity, despite some Chicanos being of partial European descent—as a way to resist and subvert colonial domination. Rather than part of European American culture, Alicia Gasper de Alba referred to Chicanismo as an \"alter-Native culture, an Other American culture Indigenous to the land base now known as the West and Southwest of the United States.\" While influenced by settler-imposed systems and structures, Alba refers to Chicano culture as \"not immigrant but native, not foreign but colonized, not alien but different from the overarching hegemony of white America.\"",
"title": "Identity"
},
{
"paragraph_id": 43,
"text": "The Plan Espiritual de Aztlán (1969) drew from Frantz Fanon's The Wretched of the Earth (1961). In Wretched, Fanon stated: \"the past existence of an Aztec civilization does not change anything very much in the diet of the Mexican peasant today\", elaborating that \"this passionate search for a national culture which existed before the colonial era finds its legitimate reason in the anxiety shared by native intellectuals to shrink away from that of Western culture in which they all risk being swamped ... the native intellectuals, since they could not stand wonderstruck before the history of today's barbarity, decided to go back further and to delve deeper down; and, let us make no mistake, it was with the greatest delight that they discovered that there was nothing to be ashamed of in the past, but rather dignity, glory, and solemnity.\"",
"title": "Identity"
},
{
"paragraph_id": 44,
"text": "The Chicano Movement adopted this perspective through the notion of Aztlán—a mythic Aztec homeland which Chicanos used as a way to connect themselves to a precolonial past, before the time of the \"'gringo' invasion of our lands.\" Chicano scholars have described how this functioned as a way for Chicanos to reclaim a diverse or imprecise Indigenous past; while recognizing how Aztlán promoted divisive forms of Chicano nationalism that \"did little to shake the walls and bring down the structures of power as its rhetoric so firmly proclaimed\". As stated by Chicano historian Juan Gómez-Quiñones, the Plan Espiritual de Aztlán was \"stripped of what radical element it possessed by stressing its alleged romantic idealism, reducing the concept of Aztlán to a psychological ploy ... all of which became possible because of the Plan's incomplete analysis which, in turn, allowed it ... to degenerate into reformism.\"",
"title": "Identity"
},
{
"paragraph_id": 45,
"text": "While acknowledging its romanticized and exclusionary foundations, Chicano scholars like Rafael Pérez-Torres state that Aztlán opened a subjectivity which stressed a connection to Indigenous peoples and cultures at a critical historical moment in which Mexican-Americans and Mexicans were \"under pressure to assimilate particular standards—of beauty, of identity, of aspiration. In a Mexican context, the pressure was to urbanize and Europeanize ... \"Mexican-Americans\" were expected to accept anti-indigenous discourses as their own.\" As Pérez-Torres concludes, Aztlán allowed \"for another way of aligning one's interests and concerns with community and with history ... though hazy as to the precise means in which agency would emerge, Aztlán valorized a Chicanismo that rewove into the present previously devalued lines of descent.\" Romanticized notions of Aztlán have declined among some Chicanos, who argue for a need to reconstruct the place of Indigeneity in relation to Chicano identity.",
"title": "Identity"
},
{
"paragraph_id": 46,
"text": "Danza Azteca grew popular in the U.S. with the rise of the Chicano Movement, which inspired some \"Latinos to embrace their ethnic heritage and question the Eurocentric norms forced upon them.\" The use of pre-contact Aztec cultural elements has been critiqued by some Chicanos who stress a need to represent the diversity of Indigenous ancestry among Chicanos. Patrisia Gonzales portrays Chicanos as descendants of the Indigenous peoples of Mexico who have been displaced by colonial violence, positioning them as \"detribalized Indigenous peoples and communities.\" Roberto Cintli Rodríguez describes Chicanos as \"de-Indigenized,\" which he remarks occurred \"in part due to religious indoctrination and a violent uprooting from the land\", detaching millions of people from maíz-based cultures throughout the greater Mesoamerican region. Rodríguez asks how and why \"peoples who are clearly red or brown and undeniably Indigenous to this continent have allowed ourselves, historically, to be framed by bureaucrats and the courts, by politicians, scholars, and the media as alien, illegal, and less than human.\"",
"title": "Identity"
},
{
"paragraph_id": 47,
"text": "Gloria E. Anzaldúa has addressed Chicano's detribalization: \"In the case of Chicanos, being 'Mexican' is not a tribe. So in a sense Chicanos and Mexicans are 'detribalized'. We don't have tribal affiliations but neither do we have to carry ID cards establishing tribal affiliation.\" Anzaldúa recognized that \"Chicanos, people of color, and 'whites'\" have often chosen \"to ignore the struggles of Native people even when it's right in our caras (faces),\" expressing disdain for this \"willful ignorance\". She concluded that \"though both \"detribalized urban mixed bloods\" and Chicanos are recovering and reclaiming, this society is killing off urban mixed bloods through cultural genocide, by not allowing them equal opportunities for better jobs, schooling, and health care.\" Inés Hernández-Ávila argued that Chicanos should recognize and reconnect with their roots \"respectfully and humbly\" while also validating \"those peoples who still maintain their identity as original peoples of this continent\" in order to create radical change capable of \"transforming our world, our universe, and our lives\".",
"title": "Identity"
},
{
"paragraph_id": 48,
"text": "During World War II, Chicano youth were targeted by white servicemen, who despised their \"cool, measured indifference to the war, as well as an increasingly defiant posture toward whites in general\". Historian Robin Kelley states that this \"annoyed white servicemen to no end\". During the Zoot Suit Riots (1943), white rage erupted in Los Angeles, which \"became the site of racist attacks on Black and Chicano youth, during which white soldiers engaged in what amounted to a ritualized stripping of the zoot.\" Zoot suits were a symbol of collective resistance among Chicano and Black youth against city segregation and fighting in the war. Many Chicano and Black zoot-suiters engaged in draft evasion because they felt it was hypocritical for them to be expected to \"fight for democracy\" abroad yet face racism and oppression daily in the U.S.",
"title": "Political aspects"
},
{
"paragraph_id": 49,
"text": "This galvanized Chicano youth to focus on anti-war activism, \"especially influenced by the Third World movements of liberation in Asia, Africa, and Latin America.\" Historian Mario T. García reflects that \"these anti-colonial and anti-Western movements for national liberation and self-awareness touched a historical nerve among Chicanos as they began to learn that they shared some similarities with these Third World struggles.\" Chicano poet Alurista argued that \"Chicanos cannot be truly free until they recognize that the struggle in the United States is intricately bound with the anti-imperialist struggle in other countries.\" The Cuban Revolution (1953–1959) led by Fidel Castro and Che Guevara was particularly influential to Chicanos, as noted by García, who notes that Chicanos viewed the revolution as \"a nationalist revolt against 'Yankee imperialism' and neo-colonialism.\"",
"title": "Political aspects"
},
{
"paragraph_id": 50,
"text": "In the 1960s, the Chicano Movement brought \"attention and commitment to local struggles with an analysis and understanding of international struggles\". Chicano youth organized with Black, Latin American, and Filipino activists to form the Third World Liberation Front (TWLF), which fought for the creation of a Third World college. During the Third World Liberation Front strikes of 1968, Chicano artists created posters to express solidarity. Chicano poster artist Rupert García referred to the place of artists in the movement: \"I was critical of the police, of capitalist exploitation. I did posters of Che, of Zapata, of other Third World leaders. As artists, we climbed down from the ivory tower.\" Learning from Cuban poster makers of the post-revolutionary period, Chicano artists \"incorporated international struggles for freedom and self-determination, such as those of Angola, Chile, and South Africa\", while also promoting the struggles of Indigenous people and other civil rights movements through Black-brown unity. Chicanas organized with women of color activists to create the Third World Women's Alliance (1968-1980), representing \"visions of liberation in third world solidarity that inspired political projects among racially and economically marginalized communities\" against U.S. capitalism and imperialism.",
"title": "Political aspects"
},
{
"paragraph_id": 51,
"text": "The Chicano Moratorium (1969–1971) against the Vietnam War was one of the largest demonstrations of Mexican-Americans in history, drawing over 30,000 supporters in East L.A. Draft evasion was a form of resistance for Chicano anti-war activists such as Rosalio Muñoz, Ernesto Vigil, and Salomon Baldengro. They faced a felony charge—a minimum of five years prison time, $10,000, or both. In response, Munoz wrote \"I declare my independence of the Selective Service System. I accuse the government of the United States of America of genocide against the Mexican people. Specifically, I accuse the draft, the entire social, political, and economic system of the United States of America, of creating a funnel which shoots Mexican youth into Vietnam to be killed and to kill innocent men, women, and children....\" Rodolfo Corky Gonzales expressed a similar stance: \"My feelings and emotions are aroused by the complete disregard of our present society for the rights, dignity, and lives of not only people of other nations but of our own unfortunate young men who die for an abstract cause in a war that cannot be honestly justified by any of our present leaders.\"",
"title": "Political aspects"
},
{
"paragraph_id": 52,
"text": "Anthologies such as This Bridge Called My Back: Writings by Radical Women of Color (1981) were produced in the late 1970s and early 80s by writers who identified as lesbians of color, including Cherríe Moraga, Pat Parker, Toni Cade Bambara, Chrystos (self-identified claim of Menominee ancestry), Audre Lorde, Gloria E. Anzaldúa, Cheryl Clarke, Jewelle Gomez, Kitty Tsui, and Hattie Gossett, who developed a poetics of liberation. Kitchen Table: Women of Color Press and Third Woman Press, founded in 1979 by Chicana feminist Norma Alarcón, provided sites for the production of women of color and Chicana literatures and critical essays. While first world feminists focused \"on the liberal agenda of political rights\", Third World feminists \"linked their agenda for women's rights with economic and cultural rights\" and unified together \"under the banner of Third World solidarity\". Maylei Blackwell identifies that this internationalist critique of capitalism and imperialism forged by women of color has yet to be fully historicized and is \"usually dropped out of the false historical narrative\".",
"title": "Political aspects"
},
{
"paragraph_id": 53,
"text": "In the 1980s and 90s, Central American activists influenced Chicano leaders. The Mexican American Legislative Caucus (MALC) supported the Esquipulas Peace Agreement in 1987, standing in opposition to Contra aid. Al Luna criticized Reagan and American involvement while defending Nicaragua's Sandinista-led government: \"President Reagan cannot credibly make public speeches for peace in Central America while at the same time advocating for a three-fold increase in funding to the Contras.\" The Southwest Voter Research Initiative (SVRI), launched by Chicano leader Willie Velásquez, intended to educate Chicano youth about Central and Latin American political issues. In 1988, \"there was no significant urban center in the Southwest where Chicano leaders and activists had not become involved in lobbying or organizing to change U.S. policy in Nicaragua.\" In the early 1990s, Cherríe Moraga urged Chicano activists to recognize that \"the Anglo invasion of Latin America [had] extended well beyond the Mexican/American border\" while Gloria E. Anzaldúa positioned Central America as the primary target of a U.S. interventionism that had murdered and displaced thousands. However, Chicano solidarity narratives of Central Americans in the 1990s tended to center themselves, stereotype Central Americans, and filter their struggles \"through Chicana/o struggles, histories, and imaginaries.\"",
"title": "Political aspects"
},
{
"paragraph_id": 54,
"text": "Chicano activists organized against the Gulf War (1990–91). Raul Ruiz of the Chicano Mexican Committee against the Gulf War stated that U.S. intervention was \"to support U.S. oil interests in the region.\" Ruiz expressed, \"we were the only Chicano group against the war. We did a lot of protesting in L.A. even though it was difficult because of the strong support for the war and the anti-Arab reaction that followed ... we experienced racist attacks [but] we held our ground.\" The end of the Gulf War, along with the Rodney King Riots, were crucial in inspiring a new wave of Chicano political activism. In 1994, one of the largest demonstrations of Mexican Americans in the history of the United States occurred when 70,000 people, largely Chicanos and Latinos, marched in Los Angeles and other cities to protest Proposition 187, which aimed to cut educational and welfare benefits for undocumented immigrants.",
"title": "Political aspects"
},
{
"paragraph_id": 55,
"text": "In 2004, Mujeres against Militarism and the Raza Unida Coalition sponsored a Day of the Dead vigil against militarism within the Latino community, addressing the War in Afghanistan (2001–) and the Iraq War (2003–2011) They held photos of the dead and chanted \"no blood for oil.\" The procession ended with a five-hour vigil at Tia Chucha's Centro Cultural. They condemned \"the Junior Reserve Officers Training Corps (JROTC) and other military recruitment programs that concentrate heavily in Latino and African American communities, noting that JROTC is rarely found in upper-income Anglo communities.\" Rubén Funkahuatl Guevara organized a benefit concert for Latin@s Against the War in Iraq and Mexamérica por la Paz at Self-Help Graphics against the Iraq War. Although the events were well-attended, Guevara stated that \"the Feds know how to manipulate fear to reach their ends: world military dominance and maintaining a foothold in an oil-rich region were their real goals.\"",
"title": "Political aspects"
},
{
"paragraph_id": 56,
"text": "Chicano and Mexican labor organizers played an active role in notable labor strikes since the early 20th century including the Oxnard strike of 1903, Pacific Electric Railway strike of 1903, 1919 Streetcar Strike of Los Angeles, Cantaloupe strike of 1928, California agricultural strikes (1931–1941), and the Ventura County agricultural strike of 1941, endured mass deportations as a form of strikebreaking in the Bisbee Deportation of 1917 and Mexican Repatriation (1929–1936), and experienced tensions with one another during the Bracero program (1942–1964). Although organizing laborers were harassed, sabotaged, and repressed, sometimes through warlike tactics from capitalist owners who engaged in coervice labor relations and collaborated with and received support from local police and community organizations, Chicano and Mexican workers, particularly in agriculture, have been engaged in widespread unionization activities since the 1930s.",
"title": "Political aspects"
},
{
"paragraph_id": 57,
"text": "Prior to unionization, agricultural workers, many of whom were undocumented aliens, worked in dismal conditions. Historian F. Arturo Rosales recorded a Federal Project Writer of the period, who stated: \"It is sad, yet true, commentary that to the average landowner and grower in California the Mexican was to be placed in much the same category with ranch cattle, with this exception–the cattle were for the most part provided with comparatively better food and water and immeasurably better living accommodations.\" Growers used cheap Mexican labor to reap bigger profits and, until the 1930s, perceived Mexicans as docile and compliant with their subjugated status because they \"did not organize troublesome labor unions, and it was held that he was not educated to the level of unionism\". As one grower described, \"We want the Mexican because we can treat them as we cannot treat any other living man ... We can control them by keeping them at night behind bolted gates, within a stockade eight feet high, surrounded by barbed wire ... We can make them work under armed guards in the fields.\"",
"title": "Political aspects"
},
{
"paragraph_id": 58,
"text": "Unionization efforts were initiated by the Confederación de Uniones Obreras (Federation of Labor Unions) in Los Angeles, with twenty-one chapters quickly extending throughout southern California, and La Unión de Trabajadores del Valle Imperial (Imperial Valley Workers' Union). The latter organized the Cantaloupe strike of 1928, in which workers demanded better working conditions and higher wages, but \"the growers refused to budge and, as became a pattern, local authorities sided with the farmers and through harassment broke the strike\". Communist-led organizations such as the Cannery and Agricultural Workers' Industrial Union (CAWIU) supported Mexican workers, renting spaces for cotton pickers during the cotton strikes of 1933 after they were thrown out of company housing by growers. Capitalist owners used \"red-baiting\" techniques to discredit the strikes through associating them with communists. Chicana and Mexican working women showed the greatest tendency to organize, particularly in the Los Angeles garment industry with the International Ladies' Garment Workers' Union, led by anarchist Rose Pesotta.",
"title": "Political aspects"
},
{
"paragraph_id": 59,
"text": "During World War II, the government-funded Bracero program (1942–1964) hindered unionization efforts. In response to the California agricultural strikes and the 1941 Ventura County strike of Chicano and Mexican, as well as Filipino, lemon pickers/packers, growers organized the Ventura County Citrus Growers Committee (VCCGC) and launched a lobbying campaign to pressure the U.S. government to pass laws to prohibit labor organizing. VCCGC joined with other grower associations, forming a powerful lobbying bloc in Congress, and worked to legislate for (1) a Mexican guest workers program, which would become the Bracero program, (2) laws prohibiting strike activity, and (3) military deferments for pickers. Their lobbying efforts were successful: unionization among farmworkers was made illegal, farmworkers were excluded from minimum wage laws, and the usage of child labor by growers was ignored. In formerly active areas, such as Santa Paula, union activity stopped for over thirty years as a result.",
"title": "Political aspects"
},
{
"paragraph_id": 60,
"text": "When World War II ended, the Bracero program continued. Legal anthropologist Martha Menchaca states that this was \"regardless of the fact that massive quantities of crops were no longer needed for the war effort ... after the war, the braceros were used for the benefit of the large-scale growers and not for the nation's interest.\" The program was extended for an indefinite period in 1951. In the mid-1940s, labor organizer Ernesto Galarza founded the National Farm Workers Union (NFWU) in opposition to the Bracero Program, organizing a large-scale 1947 strike against the Di Giorgio Fruit Company in Arvin, California. Hundreds of Mexican, Filipino, and white workers walked out and demanded higher wages. The strike was broken by the usual tactics, with law enforcement on the side of the owners, evicting strikers and bringing in undocumented workers as strikebreakers. The NFWU folded, but served as a precursor to the United Farm Workers Union led by César Chávez. By the 1950s, opposition to the Bracero program had grown considerably, as unions, churches, and Mexican-American political activists raised awareness about the effects it had on American labor standards. On December 31, 1964, the U.S. government conceded and terminated the program.",
"title": "Political aspects"
},
{
"paragraph_id": 61,
"text": "Following the closure of the Bracero program, domestic farmworkers began to organize again because \"growers could not longer maintain the peonage system\" with the end of imported laborers from Mexico. Labor organizing formed part of the Chicano Movement via the struggle of farmworkers against depressed wages and working conditions. César Chávez began organizing Chicano farmworkers in the early 1960s, first through the National Farm Workers Association (NFWA) and then merging the association with the Agricultural Workers Organizing Committee (AWOC), an organization of mainly Filipino workers, to form the United Farm Workers. The labor organizing of Chávez was central to the expansion of unionization throughout the United States and inspired the Farm Labor Organizing Committee (FLOC), under the leadership of Baldemar Velásquez, which continues today. Farmworkers collaborated with local Chicano organizations, such as in Santa Paula, California, where farmworkers attended Brown Berets meetings in the 1970s and Chicano youth organized to improve working conditions and initiate an urban renewal project on the eastside of the city.",
"title": "Political aspects"
},
{
"paragraph_id": 62,
"text": "Although Mexican and Chicano workers, organizers, and activists organized for decades to improve working conditions and increase wages, some scholars characterize these gains as minimal. As described by Ronald Mize and Alicia Swords, \"piecemeal gains in the interests of workers have had very little impact on the capitalist agricultural labor process, so picking grapes, strawberries, and oranges in 1948 is not so different from picking those same crops in 2008.\" U.S. agriculture today remains totally reliant on Mexican labor, with Mexican-born individuals now constituting about 90% of the labor force.",
"title": "Political aspects"
},
{
"paragraph_id": 63,
"text": "Chicanos often endure struggles in the U.S. education system, such as being erased in curriculums and devalued as students. Some Chicanos identify schools as colonial institutions that exercise control over colonized students by teaching Chicanos to idolize whiteness and develop a negative self-image of themselves and their worldviews. School segregation between Mexican and white students was not legally ended until the late 1940s. In Orange County, California, 80% of Mexican students could only attend schools that taught Mexican children manual education, or gardening, bootmaking, blacksmithing, and carpentry for Mexican boys and sewing and homemaking for Mexican girls. White schools taught academic preparation. When Sylvia Mendez was told to attend a Mexican school, her parents brought suit against the court in Mendez vs. Westminster (1947) and won.",
"title": "Political aspects"
},
{
"paragraph_id": 64,
"text": "Although legal segregation had been successfully challenged, de facto or segregation-in-practice continued in many areas. Schools with primarily Mexican American enrollment were still treated as \"Mexican schools\" much as before the legal overturning of segregation. Mexican American students were still treated poorly in schools. Continued bias in the education system motivated Chicanos to protest and use direct action, such as walkouts, in the 1960s. On March 5, 1968, the Chicano Blowouts at East Los Angeles High School occurred as a response to the racist treatment of Chicano students, an unresponsive school board, and a high dropout rate. It became known as \"the first major mass protest against racism undertaken by Mexican-Americans in the history of the United States.\"",
"title": "Political aspects"
},
{
"paragraph_id": 65,
"text": "Sal Castro, a Chicano social science teacher at the school was arrested and fired for inspiring the walkouts. It was led by Harry Gamboa Jr. who was named \"one of the hundred most dangerous and violent subversives in the United States\" for organizing the student walkouts. The day prior, FBI director J. Edgar Hoover sent out a memo to law enforcement to place top priority on \"political intelligence work to prevent the development of nationalist movements in minority communities\". Chicana activist Alicia Escalante protested Castro's dismissal: \"We in the Movement will at least be able to hold our heads up and say that we haven't submitted to the gringo or to the pressures of the system. We are brown and we are proud. I am at least raising my children to be proud of their heritage, to demand their rights, and as they become parents they too will pass this on until justice is done.\"",
"title": "Political aspects"
},
{
"paragraph_id": 66,
"text": "In 1969, Plan de Santa Bárbara was drafted as a 155-page document that outlined the foundation of Chicano Studies programs in higher education. It called for students, faculty, employees and the community to come together as \"central and decisive designers and administrators of these programs\". Chicano students and activists asserted that universities should exist to serve the community. However, by the mid-1970s, much of the radicalism of earlier Chicano studies became deflated by the education system, aimed to alter Chicano Studies programs from within. Mario García argued that one \"encountered a deradicalization of the radicals\". Some opportunistic faculty avoided their political responsibilities to the community. University administrators co-opted oppositional forces within Chicano Studies programs and encouraged tendencies that led \"to the loss of autonomy of Chicano Studies programs.\" At the same time, \"a domesticated Chicano Studies provided the university with the facade of being tolerant, liberal, and progressive.\"",
"title": "Political aspects"
},
{
"paragraph_id": 67,
"text": "Some Chicanos argued that the solution was to create \"publishing outlets that would challenge Anglo control of academic print culture with its rules on peer review and thereby publish alternative research,\" arguing that a Chicano space in the colonial academy could \"avoid colonization in higher education\". In an attempt to establish educational autonomy, they worked with institutions like the Ford Foundation, but found that \"these organizations presented a paradox\". Rodolfo Acuña argued that such institutions \"quickly became content to only acquire funding for research and thereby determine the success or failure of faculty\". Chicano Studies became \"much closer [to] the mainstream than its practitioners wanted to acknowledge.\" Others argued that Chicano Studies at UCLA shifted from its earlier interests in serving the Chicano community to gaining status within the colonial institution through a focus on academic publishing, which alienated it from the community.",
"title": "Political aspects"
},
{
"paragraph_id": 68,
"text": "In 2012, the Mexican American Studies Department Programs (MAS) in Tucson Unified School District were banned after a campaign led by Anglo-American politician Tom Horne accused it of working to \"promote the overthrow of the U.S. government, promote resentment toward a race or class of people, are designed primarily for pupils of a particular ethnic group or advocate ethnic solidarity instead of the treatment of pupils as individuals.\" Classes on Latino literature, American history/Mexican-American perspectives, Chicano art, and an American government/social justice education project course were banned. Readings of In Lak'ech from Luis Valdez's poem Pensamiento Serpentino were also banned.",
"title": "Political aspects"
},
{
"paragraph_id": 69,
"text": "Seven books, including Paulo Friere's Pedagogy of the Oppressed and works covering Chicano history and critical race theory, were banned, taken from students, and stored away. The ban was overturned in 2017 by Judge A. Wallace Tashima, who ruled that it was unconstitutional and motivated by racism by depriving Chicano students of knowledge, thereby violating their Fourteenth Amendment right. The Xicanx Institute for Teaching & Organizing (XITO) emerged to carry on the legacy of the MAS programs. Chicanos continue to support the institution of Chicano studies programs. In 2021, students at Southwestern College, the closest college to the Mexico-United States Border urged for the creation of a Chicanx Studies Program to service the predominately Latino student body.",
"title": "Political aspects"
},
{
"paragraph_id": 70,
"text": "The Chicano concept of sin fronteras rejects the idea of borders. Some argued that the 1848 Treaty of Guadalupe Hidalgo transformed the Rio Grande region from a rich cultural center to a rigid border poorly enforced by the United States government. At the end of the Mexican-American War, 80,000 Spanish-Mexican-Indian people were forced into sudden U.S. habitation. Some Chicanos identified with the idea of Aztlán as a result, which celebrated a time preceding land division and rejected the \"immigrant/foreigner\" categorization by Anglo society. Chicano activists have called for unionism between both Mexicans and Chicanos on both sides of the border.",
"title": "Political aspects"
},
{
"paragraph_id": 71,
"text": "In the early 20th century, the border crossing had become a site of dehumanization for Mexicans. Protests in 1910 arose along the Santa Fe Bridge from abuses committed against Mexican workers while crossing the border. The 1917 Bath riots erupted after Mexicans crossing the border were required to strip naked and be disinfected with chemical agents like gasoline, kerosene, sulfuric acid, and Zyklon B, the latter of which was the fumigation of choice and would later notoriously be used in the gas chambers of Nazi Germany. Chemical dousing continued into the 1950s. During the early 20th century, Chicanos used corridos \"to counter Anglocentric hegemony.\" Ramón Saldivar stated that \"corridos served the symbolic function of empirical events and for creating counterfactual worlds of lived experience (functioning as a substitute for fiction writing).\"",
"title": "Political aspects"
},
{
"paragraph_id": 72,
"text": "Newspaper Sin Fronteras (1976–1979) openly rejected the Mexico-United States border. The newspaper considered it \"to be only an artificial creation that in time would be destroyed by the struggles of Mexicans on both sides of the border\" and recognized that \"Yankee political, economic, and cultural colonialism victimized all Mexicans, whether in the U.S. or in Mexico.\" Similarly, the General Brotherhood of Workers (CASA), important to the development of young Chicano intellectuals and activists, identified that, as \"victims of oppression, Mexicanos could achieve liberation and self-determination only by engaging in a borderless struggle to defeat American international capitalism.\"",
"title": "Political aspects"
},
{
"paragraph_id": 73,
"text": "Chicana theorist Gloria E. Anzaldúa notably emphasized the border as a \"1,950 mile-long wound that does not heal\". In referring to the border as a wound, writer Catherine Leen suggests that Anzaldúa recognizes \"the trauma and indeed physical violence very often associated with crossing the border from Mexico to the US, but also underlies the fact that the cyclical nature of this immigration means that this process will continue and find little resolution.\" Anzaldúa writes that la frontera signals \"the coming together of two self-consistent but habitually incompatible frames of reference [which] cause un choque, a cultural collision\" because \"the U.S.-Mexican border es una herida abierta where the Third World grates against the first and bleeds.\" Chicano and Mexican artists and filmmakers continue to address \"the contentious issues of exploitation, exclusion, and conflict at the border and attempt to overturn border stereotypes\" through their work. Luis Alberto Urrea writes \"the border runs down the middle of me. I have a barbed wire fence neatly bisecting my heart.\"",
"title": "Political aspects"
},
{
"paragraph_id": 74,
"text": "The 19th-century and early-20th-century image of the Mexican in the U.S. was \"that of the greasy Mexican bandit or bandito,\" who was perceived as criminal because of Mestizo ancestry and \"Indian blood.\" This rhetoric fueled anti-Mexican sentiment among whites, which led to many lynchings of Mexicans in the period as an act of racist violence. One of the largest massacres of Mexicans was known as La Matanza in Texas, where hundreds of Mexicans were lynched by white mobs. Many whites viewed Mexicans as inherently criminal, which they connected to their Indigenous ancestry. White historian Walter P. Webb wrote in 1935, \"there is a cruel streak in the Mexican nature ... this cruelty may be a heritage from the Spanish and of the Inquisition; it may, and doubtless should be, attributed partly to Indian blood.\"",
"title": "Sociological aspects"
},
{
"paragraph_id": 75,
"text": "The \"greasy bandito\" stereotype of the old West evolved into images of \"crazed Zoot-Suiters and pachuco killers in the 1940s, to contemporary cholos, gangsters, and gang members.\" Pachucos were portrayed as violent criminals in American mainstream media, which fueled the Zoot Suit Riots; initiated by off-duty policemen conducting a vigilante-hunt, the riots targeted Chicano youth who wore the zoot suit as a symbol of empowerment. On-duty police supported the violence against Chicano zoot suiters; they \"escorted the servicemen to safety and arrested their Chicano victims.\" Arrest rates of Chicano youth rose during these decades, fueled by the \"criminal\" image portrayed in the media, by politicians, and by the police. Not aspiring to assimilate in Anglo-American society, Chicano youth were criminalized for their defiance to cultural assimilation: \"When many of the same youth began wearing what the larger society considered outlandish clothing, sporting distinctive hairstyles, speaking in their own language (Caló), and dripping with attitude, law enforcement redoubled their efforts to rid [them from] the streets.\"",
"title": "Sociological aspects"
},
{
"paragraph_id": 76,
"text": "In the 1970s and subsequent decades, there was a wave of police killings of Chicanos. One of the most prominent cases was Luis \"Tato\" Rivera, who was a 20-year-old Chicano shot in the back by officer Craig Short in 1975. 2,000 Chicano demonstrators showed up to the city hall of National City, California in protest. Short was indicted for manslaughter by district attorney Ed Miller and was acquitted of all charges. Short was later appointed acting chief of police of National City in 2003. Another high-profile case was the murder of Ricardo Falcón, a student at the University of Colorado and leader of the United Latin American Students (UMAS), by Perry Brunson, a member of the far-right American Independent Party, at a gas station. Bruson was tried for manslaughter and was \"acquitted by an all-White jury\". Falcón became a martyr for the Chicano Movement as police violence increased in the subsequent decades. Similar cases led sociologist Alfredo Mirandé to refer to the U.S. criminal justice system as gringo justice, because \"it reflected one standard for Anglos and another for Chicanos.\"",
"title": "Sociological aspects"
},
{
"paragraph_id": 77,
"text": "The criminalization of Chicano youth in the barrio remains omnipresent. Chicano youth who adopt a cholo or chola identity endure hyper-criminalization in what has been described by Victor Rios as the youth control complex. While older residents initially \"embraced the idea of a chola or cholo as a larger subculture not necessarily associated with crime and violence (but rather with a youthful temporary identity), law enforcement agents, ignorant or disdainful of barrio life, labeled youth who wore clean white tennis shoes, shaved their heads, or long socks, as deviant.\" Community members were convinced by the police of cholo criminality, which led to criminalization and surveillance \"reminiscent of the criminalization of Chicana and Chicano youth during the Zoot-Suit era in the 1940s.\"",
"title": "Sociological aspects"
},
{
"paragraph_id": 78,
"text": "Sociologist José S. Plascencia-Castillo refers to the barrio as a panopticon that leads to intense self-regulation, as Cholo youth are both scrutinized by law enforcement to \"stay in their side of town\" and by the community who in some cases \"call the police to have the youngsters removed from the premises\". The intense governance of Chicano youth, especially of cholo identity, has deep implications on youth experience, affecting their physical and mental health as well as their outlook on the future. Some youth feel they \"can either comply with the demands of authority figures, and become obedient and compliant, and suffer the accompanying loss of identity and self-esteem, or, adopt a resistant stance and contest social invisibility to command respect in the public sphere.\"",
"title": "Sociological aspects"
},
{
"paragraph_id": 79,
"text": "Chicanas often confront objectification in Anglo society, being perceived as \"exotic\", \"lascivious\", and \"hot\" at a very young age while also facing denigration as \"barefoot\", \"pregnant\", \"dark\", and \"low-class\". These perceptions in society create numerous negative sociological and psychological effects, such as excessive dieting and eating disorders. Social media may enhance these stereotypes of Chicana women and girls. Numerous studies have found that Chicanas experience elevated levels of stress as a result of sexual expectations by society, as well as their parents and families.",
"title": "Sociological aspects"
},
{
"paragraph_id": 80,
"text": "Although many Chicana youth desire open conversation of these gender roles and sexuality, as well as mental health, these issues are often not discussed openly in Chicano families, which perpetuates unsafe and destructive practices. While young Chicanas are objectified, middle-aged Chicanas discuss feelings of being invisible, saying they feel trapped in balancing family obligations to their parents and children while attempting to create a space for their own sexual desires. The expectation that Chicanas should be \"protected\" by Chicanos may also constrict the agency and mobility of Chicanas.",
"title": "Sociological aspects"
},
{
"paragraph_id": 81,
"text": "Chicanas are often relegated to a secondary and subordinate status in families. Cherrie Moraga argues that this issue of patriarchal ideology in Chicano and Latino communities runs deep, as the great majority of Chicano and Latino men believe in and uphold male supremacy. Moraga argues that this ideology is not only upheld by men in Chicano families, but also by mothers in their relationship to their children: \"the daughter must constantly earn the mother's love, prove her fidelity to her. The son—he gets her love for free.\"",
"title": "Sociological aspects"
},
{
"paragraph_id": 82,
"text": "Chicanos develop their manhood within a context of marginalization in white society. Some argue that \"Mexican men and their Chicano brothers suffer from an inferiority complex due to the conquest and genocide inflicted upon their Indigenous ancestors,\" which leaves Chicano men feeling trapped between identifying with the so-called \"superior\" European and the so-called \"inferior\" Indigenous sense of self. This conflict may manifest itself in the form of hypermasculinity or machismo, in which a \"quest for power and control over others in order to feel better\" about oneself is undertaken. This may result in men developing abusive behaviors, the development of an impenetrable \"cold\" persona, alcohol abuse, and other destructive and self-isolating behaviors.",
"title": "Sociological aspects"
},
{
"paragraph_id": 83,
"text": "The lack of discussion of what it means to be a Chicano man between Chicano male youth and their fathers or their mothers creates a search for identity that often leads to self-destructive behaviors. Chicano male youth tend to learn about sex from their peers as well as older male family members who perpetuate the idea that as men they have \"a right to engage in sexual activity without commitment\". The looming threat of being labeled a joto (gay) for not engaging in sexual activity also conditions many Chicanos to \"use\" women for their own sexual desires. Gabriel S. Estrada argues that the criminalization of Chicanos proliferates further homophobia among Chicano boys and men who may adopt hypermasculine personas to escape such association.",
"title": "Sociological aspects"
},
{
"paragraph_id": 84,
"text": "Heteronormative gender roles are typically enforced in Chicano families. Any deviation from gender and sexual conformity is commonly perceived as a weakening or attack of la familia. However, Chicano men who retain a masculine or machismo performance are afforded some mobility to discreetly engage in homosexual behaviors, as long as it remains on the fringes. Effeminacy in Chicanos, Chicana lesbianism, and any deviation is understood as an attack on the family.",
"title": "Sociological aspects"
},
{
"paragraph_id": 85,
"text": "Queer Chicana/os may seek refuge in their families, if possible, because it is difficult for them to find spaces where they feel safe in the dominant and hostile white gay culture. Chicano machismo, religious traditionalism, and homophobia creates challenges for them to feel accepted by their families. Gabriel S. Estrada argues that upholding \"Judeo-Christian mandates against homosexuality that are not native to [Indigenous Mexico],\" exiles queer Chicana/o youth.",
"title": "Sociological aspects"
},
{
"paragraph_id": 86,
"text": "Chicanos may seek out both Western biomedical healthcare and Indigenous health practices when dealing with trauma or illness. The effects of colonization are proven to produce psychological distress among Indigenous communities. Intergenerational trauma, along with racism and institutionalized systems of oppression, have been shown to adversely impact the mental health of Chicanos and Latinos. Mexican Americans are three times more likely than European Americans to live in poverty. Chicano adolescent youth experience high rates of depression and anxiety. Chicana adolescents have higher rates of depression and suicidal ideation than their European-American and African-American peers. Chicano adolescents experience high rates of homicide, and suicide. Chicanos ages ten to seventeen are at a greater risk for mood and anxiety disorders than their European-American and African-American peers. Scholars have determined that the reasons for this are unclear due to the scarcity of studies on Chicano youth, but that intergenerational trauma, acculturative stress, and family factors are believed to contribute.",
"title": "Sociological aspects"
},
{
"paragraph_id": 87,
"text": "Among Mexican immigrants who have lived in the United States for less than thirteen years, lower rates of mental health disorders were found in comparison to Mexican-Americans and Chicanos born in the United States. Scholar Yvette G. Flores concludes that these studies demonstrate that \"factors associated with living in the United States are related to an increased risk of mental disorders.\" Risk factors for negative mental health include historical and contemporary trauma stemming from colonization, marginalization, discrimination, and devaluation. The disconnection of Chicanos from their Indigeneity has been cited as a cause of trauma and negative mental health:",
"title": "Sociological aspects"
},
{
"paragraph_id": 88,
"text": "Loss of language, cultural rituals, and spiritual practices creates shame and despair. The loss of culture and language often goes unmourned, because it is silenced and denied by those who occupy, conquer, or dominate. Such losses and their psychological and spiritual impact are passed down across generations, resulting in depression, disconnection, and spiritual distress in subsequent generations, which are manifestations of historical or intergenerational trauma.",
"title": "Sociological aspects"
},
{
"paragraph_id": 89,
"text": "Psychological distress may emerge from Chicanos being \"othered\" in society since childhood and is linked to psychiatric disorders and symptoms which are culturally bound—susto (fright), nervios (nerves), mal de ojo (evil eye), and ataque de nervios (an attack of nerves resembling a panic attack). Manuel X. Zamarripa discusses how mental health and spirituality are often seen as disconnected subjects in Western perspectives. Zamarripa states \"in our community, spirituality is key for many of us in our overall wellbeing and in restoring and giving balance to our lives\". For Chicanos, Zamarripa recognizes that identity, community, and spirituality are three core aspects which are essential to maintaining good mental health.",
"title": "Sociological aspects"
},
{
"paragraph_id": 90,
"text": "Chicano spirituality has been described as a process of engaging in a journey to unite one's consciousness for the purposes of cultural unity and social justice. It brings together many elements and is therefore hybrid in nature. Scholar Regina M Marchi states that Chicano spirituality \"emphasizes elements of struggle, process, and politics, with the goal of creating a unity of consciousness to aid social development and political action\". Lara Medina and Martha R. Gonzales explain that \"reclaiming and reconstructing our spirituality based on non-Western epistemologies is central to our process of decolonization, particularly in these most troubling times of incessant Eurocentric, heteronormative patriarchy, misogyny, racial injustice, global capitalist greed, and disastrous global climate change.\" As a result, some scholars state that Chicano spirituality must involve a study of Indigenous Ways of Knowing (IWOK). The Circulo de Hombres group in San Diego, California spiritually heals Chicano, Latino, and Indigenous men \"by exposing them to Indigenous-based frameworks, men of this cultural group heal and rehumanize themselves through Maya-Nahua Indigenous-based concepts and teachings\", helping them process intergenerational trauma and dehumanization that has resulted from colonization. A study on the group reported that reconnecting with Indigenous worldviews was overwhelmingly successful in helping Chicano, Latino, and Indigenous men heal. As stated by Jesus Mendoza, \"our bodies remember our indigenous roots and demand that we open our mind, hearts, and souls to our reality\".",
"title": "Sociological aspects"
},
{
"paragraph_id": 91,
"text": "Chicano spirituality is a way for Chicanos to listen, reclaim, and survive while disrupting coloniality. While historically Catholicism was the primary way for Chicanos to express their spirituality, this is changing rapidly. According to a Pew Research Center report in 2015, \"the primary role of Catholicism as a conduit to spirituality has declined and some Chicanos have changed their affiliation to other Christian religions and many more have stopped attending church altogether.\" Increasingly, Chicanos are considering themselves spiritual rather than religious or part of an organized religion. A study on spirituality and Chicano men in 2020 found that many Chicanos indicated the benefits of spirituality through connecting with Indigenous spiritual beliefs and worldviews instead of Christian or Catholic organized religion in their lives. Dr. Lara Medina defines spirituality as (1) Knowledge of oneself—one's gifts and one's challenges, (2) Co-creation or a relationship with communities (others), and (3) A relationship with sacred sources of life and death 'the Great Mystery' or Creator. Jesus Mendoza writes that, for Chicanos, \"spirituality is our connection to the earth, our pre-Hispanic history, our ancestors, the mixture of pre-Hispanic religion with Christianity ... a return to a non-Western worldview that understands all life as sacred.\" In her writing on Gloria Anzaldua's idea of spiritual activism, AnaLouise Keating states that spirituality is distinct from organized religion and New Age thinking. Leela Fernandes defines spirituality as follows:",
"title": "Sociological aspects"
},
{
"paragraph_id": 92,
"text": "When I speak of spirituality, at the most basic level I am referring to an understanding of the self as encompassing body and mind, as well as spirit. I am also referring to a transcendent sense of interconnection that moves beyond the knowable, visible material world. This sense of interconnection has been described variously as divinity, the sacred, spirit, or simply the universe. My understanding is also grounded in a form of lived spirituality, which is directly accessible to all and which does not need to be mediated by religious experts, institutions or theological texts; this is what is often referred to as the mystical side of spirituality... Spirituality can be as much about practices of compassion, love, ethics, and truth defined in nonreligious terms as it can be related to the mystical reinterpretations of existing religious traditions.",
"title": "Sociological aspects"
},
{
"paragraph_id": 93,
"text": "David Carrasco states that Mesoamerican spiritual or religious beliefs have historically always been evolving in response to the conditions of the world around them: \"These ritual and mythic traditions were not mere repetitions of ancient ways. New rituals and mythic stories were produced to respond to ecological, social, and economic changes and crises.\" This was represented through the art of the Olmecs, Maya, and Mexica. European colonizers sought and worked to destroy Mesoamerican worldviews regarding spirituality and replace these with a Christian model. The colonizers used syncreticism in art and culture, exemplified through practices such as the idea presented in the Testerian Codices that \"Jesus ate tortillas with his disciples at the last supper\" or the creation of the Virgen de Guadalupe (mirroring the Christian Mary) in order to force Christianity into Mesoamerican cosmology.",
"title": "Sociological aspects"
},
{
"paragraph_id": 94,
"text": "Chicanos can create new spiritual traditions by recognizing this history or \"by observing the past and creating a new reality\". Gloria Anzaldua states that this can be achieved through nepantla spirituality or a space where, as stated by Jesus Mendoza, \"all religious knowledge can coexist and create a new spirituality ... where no one is above the other ... a place where all is useful and none is rejected.\" Anzaldua and other scholars acknowledge that this is a difficult process that involves navigating many internal contradictions in order to find a path towards spiritual liberation. Cherrie Moraga calls for a deeper self-exploration of who Chicanos are in order to reach \"a place of deeper inquiry into ourselves as a people ... possibly, we must turn our eyes away from racist America and take stock at the damages done to us. Possibly, the greatest risks yet to be taken are entre nosotros, where we write, paint, dance, and draw the wound for one another to build a stronger pueblo. The women artist seemed disposed to do this, their work often mediating the delicate area between cultural affirmation and criticism.\" Laura E. Pérez states in her study of Chicana art that \"the artwork itself [is] altar-like, a site where the disembodied—divine, emotional, or social—[is] acknowledged, invoked, meditated upon, and released as a shared offering.\"",
"title": "Sociological aspects"
},
{
"paragraph_id": 95,
"text": "The diversity of Chicano cultural production is vast. Guillermo Gómez-Peña wrote that the complexity and diversity of the Chicano community includes influences from Central American, Caribbean, Africans, and Asian Americans who have moved into Chicano communities as well as queer people of color. Many Chicano artists continue to question \"conventional, static notions of Chicanismo,\" while others conform to more conventional cultural traditions.",
"title": "Cultural aspects"
},
{
"paragraph_id": 96,
"text": "Chicano film has been marginalized since its inception and was established in the 1960s. The generally marginal status of Chicanos in the film industry has meant that many Chicano films are not released with wide theatrical distribution. Chicano film emerged from the creation of political plays and documentaries. This included El Teatro Campesino's Yo Soy Joaquín (1969), Luis Valdez's El Corrido (1976), and Efraín Gutiérrez's Please, Don't Bury Me Alive! (1976), the latter of which is referred to as the first full-length Chicano film.",
"title": "Cultural aspects"
},
{
"paragraph_id": 97,
"text": "Docudramas then emerged like Esperanza Vasquez's Agueda Martínez (1977), Jesús Salvador Treviño's Raíces de Sangre (1977), and Robert M. Young's ¡Alambrista! (1977). Luis Valdez's Zoot Suit (1981), Young's The Ballad of Gregorio Cortez (1982), Gregory Nava's, My Family/Mi familia (1995) and Selena (1997), and Josefina López's Real Women Have Curves (2002). Chicana/o films continue to be regarded as a small niche in the film industry that has yet to receive mainstream commercial success. However, Chicana/o films have been influential in shaping how Chicana/os see themselves.",
"title": "Cultural aspects"
},
{
"paragraph_id": 98,
"text": "Chicano literature tends to focus on challenging the dominant narrative, while embracing notions of hybridity, including the use of Spanglish, as well as the blending of genre forms, such as fiction and autobiography. José Antonio Villarreal's Pocho (1959) is widely recognized as the first major Chicano novel. Poet Alurista wrote that Chicano literature served an important role to push back against narratives by white Anglo-Saxon Protestant culture that sought to \"keep Mexicans in their place.\"",
"title": "Cultural aspects"
},
{
"paragraph_id": 99,
"text": "Rodolfo \"Corky\" Gonzales's \"Yo Soy Joaquin\" is one of the first examples of explicitly Chicano poetry. Other early influential poems included \"El Louie\" by José Montoya and Abelardo \"Lalo\" Delgado's poem \"Stupid America.\" In 1967, Octavio Romano founded Tonatiuh-Quinto Sol Publications, which was the first dedicated Chicano publication houses. The novel Chicano (1970) by Richard Vasquez, was the first novel about Mexican Americans to be released by a major publisher. It was widely read in high schools and universities during the 1970s and is now recognized as a breakthrough novel.",
"title": "Cultural aspects"
},
{
"paragraph_id": 100,
"text": "Chicana feminist writers have tended to focus on themes of identity, questioning how identity is constructed, who constructs it, and for what purpose in a racist, classist, and patriarchal structure. Characters in books such as Victuum (1976) by Isabella Ríos, The House on Mango Street (1983) by Sandra Cisneros, Loving in the War Years: lo que nunca pasó por sus labios (1983) by Cherríe Moraga, The Last of the Menu Girls (1986) by Denise Chávez, Margins (1992) by Terri de la Peña, and Gulf Dreams (1996) by Emma Pérez have also been read regarding how they intersect with themes of gender and sexuality. Catrióna Rueda Esquibel performs a queer reading of Chicana literature in With Her Machete in Her Hand (2006) to demonstrate how some of the intimate relationships between girls and women contributed to a discourse on homoeroticism and queer sexuality in Chicana/o literature.",
"title": "Cultural aspects"
},
{
"paragraph_id": 101,
"text": "Chicano characters who were gay tended to be removed from the barrio and were typically portrayed with negative attributes, such as the character of \"Joe Pete\" in Pocho and the unnamed protagonist of John Rechy's City of Night (1963). Other characters in the Chicano canon may also be read as queer, including the unnamed protagonist of Tomás Rivera's ...y no se lo tragó la tierra (1971), and \"Antonio Márez\" in Rudolfo Anaya's Bless Me, Ultima (1972). Juan Bruce-Novoa wrote that homosexuality was \"far from being ignored during the 1960s and 1970s,\" despite homophobia restricting representations: \"our community is less sexually repressive than we might expect\".",
"title": "Cultural aspects"
},
{
"paragraph_id": 102,
"text": "Lalo Guerrero has been lauded as the \"father of Chicano music.\" Beginning in the 1930s, he wrote songs in the big band and swing genres and expanded into traditional genres of Mexican music. During the farmworkers' rights campaign, he wrote music in support of César Chávez and the United Farm Workers. Other notable musicians include Selena, who sang a mixture of Mexican, Tejano, and American popular music, and died in 1995 at the age of 23; Zack de la Rocha, social activist and lead vocalist of Rage Against the Machine; and Los Lonely Boys, a Texas-style country rock band.",
"title": "Cultural aspects"
},
{
"paragraph_id": 103,
"text": "Chicano techno and electronic music artists DJ Rolando, Santiago Salazar, DJ Tranzo, and Esteban Adame have released music through independent labels like Underground Resistance, Planet E, Krown Entertainment, and Rush Hour. In the 1990s, house music artists such as DJ Juanito (Johnny Loopz), Rudy \"Rude Dog\" Gonzalez, and Juan V. released numerous tracks through Los Angeles-based house labels Groove Daddy Records and Bust A Groove.",
"title": "Cultural aspects"
},
{
"paragraph_id": 104,
"text": "DJ Rolando's techno track \"Knights of the Jaguar,\" released on the UR label in 1999, became the most well-known Chicano techno track after charting at #43 in the UK in 2000. Mixmag commented: \"after it was released, it spread like wildfire all over the world. It's one of those rare tracks that feels like it can play for an eternity without anyone batting an eyelash.\" It's consistently placed on Best Songs lists. The official video for the track features various portraits of Chicana/os in Detroit among several Chicano murals, lowrider cars and lowrider bicycles, and lifestyle.",
"title": "Cultural aspects"
},
{
"paragraph_id": 105,
"text": "Salazar and Adame are also affiliated with Underground Resistance and have collaborated with Nomadico. Salazar founded music labels Major People, Ican (as in Mex-Ican, with Esteban Adame) and Historia y Violencia (with Juan Mendez a.k.a. Silent Servant) and released his debut album Chicanismo in 2015 to positive reviews. Nomadico's label Yaxteq, founded in 2015, has released tracks by veteran Los Angeles techno producer Xavier De Enciso and Honduran producer Ritmos.",
"title": "Cultural aspects"
},
{
"paragraph_id": 106,
"text": "A growing Tex-Mex polka band trend influenced by the conjunto and norteño music of Mexican immigrants, has in turn influenced much new Chicano folk music, especially on large-market Spanish language radio stations and on television music video programs in the U.S. Some of these artists, like the band Quetzal, are known for the political content of political songs.",
"title": "Cultural aspects"
},
{
"paragraph_id": 107,
"text": "Hip hop culture, which is cited as having formed in the 1980s street culture of African American, West Indian (especially Jamaican), and Puerto Rican New York City Bronx youth and characterized by DJing, rap music, graffiti, and breakdancing, was adopted by many Chicano youth by the 1980s as its influence moved westward across the United States. Chicano artists were beginning to develop their own style of hip hop. Rappers such as Ice-T and Eazy-E shared their music and commercial insights with Chicano rappers in the late 1980s. Chicano rapper Kid Frost, who is often cited as \"the godfather of Chicano rap\" was highly influenced by Ice-T and was even cited as his protégé.",
"title": "Cultural aspects"
},
{
"paragraph_id": 108,
"text": "Chicano rap is a unique style of hip hop music which started with Kid Frost, who saw some mainstream exposure in the early 1990s. While Mellow Man Ace was the first mainstream rapper to use Spanglish, Frost's song \"La Raza\" paved the way for its use in American hip hop. Chicano rap tends to discuss themes of importance to young urban Chicanos. Some of the most prominent Chicano artists include A.L.T., Lil Rob, Psycho Realm, Baby Bash, Serio, A Lighter Shade of Brown, and Funky Aztecs. Chicano rap artists with less mainstream exposure, yet with popular underground followings include Cali Life Style, Ese 40'z, Sleepy Loka, Ms. Sancha, Mac Rockelle, Sir Dyno, and Choosey.",
"title": "Cultural aspects"
},
{
"paragraph_id": 109,
"text": "Chicano R&B artists include Paula DeAnda, Amanda Perez, Frankie J, and Victor Ivan Santos (early member of the Kumbia Kings and associated with Baby Bash).",
"title": "Cultural aspects"
},
{
"paragraph_id": 110,
"text": "Although Latin jazz is most popularly associated with artists from the Caribbean (particularly Cuba) and Brazil, young Mexican Americans have played a role in its development over the years, going back to the 1930s and early 1940s, the era of the zoot suit, when young Mexican-American musicians in Los Angeles and San Jose, such as Jenni Rivera, began to experiment with banda, a jazz-like fusion genre that has grown recently in popularity among Mexican Americans",
"title": "Cultural aspects"
},
{
"paragraph_id": 111,
"text": "In the 1950s, 1960s and 1970s, a wave of Chicano pop music surfaced through innovative musicians Carlos Santana, Johnny Rodriguez, Ritchie Valens and Linda Ronstadt. Joan Baez, who is also of Mexican-American descent, included Hispanic themes in some of her protest folk songs. Chicano rock is rock music performed by Chicano groups or music with themes derived from Chicano culture.",
"title": "Cultural aspects"
},
{
"paragraph_id": 112,
"text": "There are two undercurrents in Chicano rock. One is a devotion to the original rhythm and blues roots of Rock and roll including Ritchie Valens, Sunny and the Sunglows, and ? and the Mysterians. Groups inspired by this include Sir Douglas Quintet, Thee Midniters, Los Lobos, War, Tierra, and El Chicano, and, of course, the Chicano Blues Man himself, the late Randy Garribay. The second theme is the openness to Latin American sounds and influences. Trini Lopez, Santana, Malo, Azteca, Toro, Ozomatli and other Chicano Latin rock groups follow this approach. Chicano rock crossed paths of other Latin rock genres (Rock en español) by Cubans, Puerto Ricans, such as Joe Bataan and Ralphi Pagan and South America (Nueva canción). Rock band The Mars Volta combines elements of progressive rock with traditional Mexican folk music and Latin rhythms along with Cedric Bixler-Zavala's Spanglish lyrics.",
"title": "Cultural aspects"
},
{
"paragraph_id": 113,
"text": "Chicano punk is a branch of Chicano rock. There were many bands that emerged from the California punk scene, including The Zeros, Bags, Los Illegals, The Brat, The Plugz, Manic Hispanic, and the Cruzados; as well as others from outside of California including Mydolls from Houston, Texas and Los Crudos from Chicago, Illinois. Some music historians argue that Chicanos of Los Angeles in the late 1970s might have independently co-founded punk rock along with the already-acknowledged founders from European sources when introduced to the US in major cities. The rock band ? and the Mysterians, which was composed primarily of Mexican-American musicians, was the first band to be described as punk rock. The term was reportedly coined in 1971 by rock critic Dave Marsh in a review of their show for Creem magazine.",
"title": "Cultural aspects"
},
{
"paragraph_id": 114,
"text": "El Teatro Campesino (The Farmworkers' Theater) was founded by Luis Valdez and Agustin Lira in 1965 as the cultural wing of the United Farm Workers (UFW) as a result of the Great Delano Grape Strike in 1965. All of the actors were farmworkers and involved in organizing for farmworkers' rights. Its first performances sought to recruit members for the UFW and dissuade strikebreakers. Many early performances were not scripted and were rather conceived through the direction of Valdez and others through actos, in which a scenario would be proposed for a scene and then dialogue would simply be improvised.",
"title": "Cultural aspects"
},
{
"paragraph_id": 115,
"text": "Chicano performance art continued with the work of Los Angeles' comedy troupe Culture Clash, Guillermo Gómez-Peña, and Nao Bustamante, known internationally for her conceptual art pieces and as a participant in Work of Art: The Next Great Artist. Chicano performance art became popular in the 1970s, blending humor and pathos for tragicomic effect. Groups such as Asco and the Royal Chicano Air Force illustrated this aspect of performance art through their work. Asco (Spanish for naseau or disgust), composed of Willie Herón, Gronk, Harry Gamboa Jr., and Patssi Valdez, created performance pieces such as the Walking Mural, walking down Whittier Boulevard dressed as \"a multifaceted mural, a Christmas tree, and the Virgin of Guadalupe. Asco continued its conceptual performance piece until 1987.",
"title": "Cultural aspects"
},
{
"paragraph_id": 116,
"text": "In the 1990s, San Diego-based artist cooperative of David Avalos, Louis Hock, and Elizabeth Sisco used their National Endowment for the Arts $5,000 fellowship subversively, deciding to circulate money back to the community: \"handing 10-dollar bills to undocumented workers to spend as they please.\" Their piece Arte Reembolsa (Art Rebate) created controversy among the art establishment, with the documentation of the piece featuring \"footage of U.S. House and Senate members questioning whether the project was, in fact, art.\"",
"title": "Cultural aspects"
},
{
"paragraph_id": 117,
"text": "One of the most well-known performance art troupes is La Pocha Nostra, which has been covered in numerous articles for various performance art pieces. The troupe has been active since 1993 yet has remained relevant into the 2010s and 2020s due to its political commentary, including anti-corporate stances. The troupe regularly uses parody and humor in their performances to make complex commentary on various social issues. Creating thought-provoking performances that challenge the audience to think differently is often their intention with each performance piece.",
"title": "Cultural aspects"
},
{
"paragraph_id": 118,
"text": "The Chicano visual art tradition, like the identity, is grounded in community empowerment and resisting assimilation and oppression. Prior to the introduction of spray cans, paint brushes were used by Chicano \"shoeshine boys [who] marked their names on the walls with their daubers to stake out their spots on the sidewalk\" in the early 20th century. Pachuco graffiti culture in Los Angeles was already \"in full bloom\" by the 1930s and 1940s, pachucos developed their placa, \"a distinctive calligraphic writing style\" which went on to influence contemporary graffiti tagging. Paño, a form of pinto arte (a caló term for male prisoner) using pen and pencil, developed in the 1930s, first using bed sheets and pillowcases as canvases. Paño has been described as rasquachismo, a Chicano worldview and artmaking method which makes the most from the least.",
"title": "Cultural aspects"
},
{
"paragraph_id": 119,
"text": "Graffiti artists, such as Charles \"Chaz\" Bojórquez, developed an original style of graffiti art known as West Coast Cholo style influenced by Mexican murals and pachuco placas (tags which indicate territorial boundaries) in the mid-20th century. In the 1960s, Chicano graffiti artists from San Antonio to L.A. (especially in East LA, Whittier, and Boyle Heights) used the art form to challenge authority, tagging police cars, buildings, and subways as \"a demonstration of their bravado and anger\", understanding their work as \"individual acts of pride or protest, gang declarations of territory or challenge, and weapons in a class war.\" Chicano graffiti artists wrote C/S as an abbreviation for con safos or the variant con safo (loosely meaning \"don't touch this\" and expressing a \"the same to you\" attitude)—a common expression among Chicanos on the eastside of Los Angeles and throughout the Southwest.",
"title": "Cultural aspects"
},
{
"paragraph_id": 120,
"text": "The Chicano Movement and political identity had heavily influenced Chicano artists by the 1970s. Alongside the Black arts movement, this led to the development of institutions such as Self-Help Graphics, Los Angeles Contemporary Exhibitions, and Plaza de la Raza. Artists such as Harry Gamboa Jr., Gronk, and Judith Baca created art which \"stood in opposition to the commercial galleries, museums, and civic institutional mainstream\". This was exemplified with Asco's tagging of LACMA after \"a curator refused to even entertain the idea of a Chicano art show within its walls\" in 1972. Chicano art collectives such as the Royal Chicano Air Force, founded in 1970 by Ricardo Favela, José Montoya and Esteban Villa, supported the United Farm Workers movement through art activism, using art to create and inspire social change. Favela believed that it was important to keep the culture alive through their artwork. Favela stated \"I was dealing with art forms very foreign to me, always trying to do western art, but there was always something lacking... it was very simple: it was just my Chicano heart wanting to do Chicano art.\" Other Chicano visual art collectives included Con Safo in San Antonio, which included Felipe Reyes, José Esquivel, Roberto Ríos, Jesse Almazán, Jesse \"Chista\" Cantú, Jose Garza, Mel Casas, Rudy Treviño, César Martínez, Kathy Vargas, Amado Peña, Jr., Robando Briseño, and Roberto Gonzalez. The Mujeres Muralistas in the Mission District, San Francisco included Patricia Rodriguez, Graciela Carrillo, Consuelo Mendez, and Irene Perez.",
"title": "Cultural aspects"
},
{
"paragraph_id": 121,
"text": "Chicano muralism, which began in the 1960s, became a state-sanctioned artform in the 1970s as an attempt by outsiders to \"prevent gang violence and dissuade graffiti practices\". This led to the creation of murals at Estrada Courts and other sites throughout Chicano communities. In some instances, these murals were covered with the placas they were instituted by the state to prevent. Marcos Sanchez-Tranquilino states that \"rather than vandalism, the tagging of one's own murals points toward a complex sense of wall ownership and a social tension created by the uncomfortable yet approving attentions of official cultural authority.\" This created a division between established Chicano artists who celebrated inclusion and acceptance by the dominant culture and younger Chicano artists who \"saw greater power in renegade muralism and barrio calligraphy than in state-sanctioned pieces.\" Chicano poster art became prominent in the 1970s as a way to challenge political authority, with pieces such as Rupert García's Save Our Sister (1972), depicting Angela Davis, and Yolanda M. López's Who's the Illegal Alien, Pilgrim? (1978) addressing settler colonialism.",
"title": "Cultural aspects"
},
{
"paragraph_id": 122,
"text": "The oppositional current of Chicano art was bolstered in the 1980s by a rising hip hop culture. The Olympic freeway murals, including Frank Romero's Going to the Olympics, created for the 1984 Olympic Games in Los Angeles became another site of contestation, as Chicano and other graffiti artists tagged the state-sanctioned public artwork. Government officials, muralists, and some residents were unable to understand the motivations for this, described it \"as \"mindless\", \"animalistic\" vandalism perpetrated by \"kids\" who simply lack respect.\" L.A. had developed a distinct graffiti culture by the 1990s and, with the rise of drugs and violence, Chicano youth culture gravitated towards graffiti to express themselves and to mark their territory amidst state-sanctioned disorder. Following the Rodney King riots and the murder of Latasha Harlins, which exemplified an explosion of racial tensions bubbling under in American society, racialized youth in L.A., \"feeling forgotten, angry, or marginalized, [embraced] graffiti's expressive power [as] a tool to push back.\"",
"title": "Cultural aspects"
},
{
"paragraph_id": 123,
"text": "Chicano art, although accepted into some institutional art spaces in shows like Chicano Art: Resistance and Affirmation, was still largely excluded from many mainstream art institutions in the 1990s. By the 2000s, attitudes towards graffiti by white hipster culture were changing, as it became known as \"street art\". In academic circles, \"street art\" was termed \"post-graffiti\". By the 2000s, where the LAPD once deployed CRASH (Community Resources Against Street Hoodlums) units in traditionally Chicano neighborhoods like Echo Park and \"often brutalized suspected taggers and gang members\", street art was now being mainstreamed by the white art world in those same neighborhoods.",
"title": "Cultural aspects"
},
{
"paragraph_id": 124,
"text": "Despite this shift, Chicano artists continued to challenge what was acceptable to both insiders and outsiders of their communities. Controversy surrounding Chicana artist Alma López's \"Our Lady\" at the Museum of International Folk Art in 2001 erupted when \"local demonstrators demanded the image be removed from the state-run museum\". Previously, López's digital mural \"Heaven\" (2000), which depicted two Latina women embracing, had been vandalized. López received homophobic slurs, threats of physical violence, and over 800 hate mail inquiries for \"Our Lady.\" Santa Fe Archbishop Michael J Sheehan referred to the woman in López's piece as \"a tart or a street woman\". López stated that the response came from the conservative Catholic Church, \"which finds women's bodies inherently sinful, and thereby promot[es] hatred of women's bodies.\" The art was again protested in 2011.",
"title": "Cultural aspects"
},
{
"paragraph_id": 125,
"text": "Manuel Paul's mural \"Por Vida\" (2015) at Galeria de la Raza in Mission District, San Francisco, which depicted queer and trans Chicanos, was targeted multiple times after its unveiling. Paul, a queer DJ and artist of the Maricón Collective, received online threats for the work. Ani Rivera, director of Galeria de la Raza, attributed the anger towards the mural to gentrification, which has led \"some people [to] associate LGBT people with non-Latino communities.\" The mural was meant to challenge \"long-held assumptions regarding the traditional exclusivity of heterosexuality in lowrider culture\". Some credited the negative response to the mural's direct challenging of machismo and heteronormativity in the community.",
"title": "Cultural aspects"
},
{
"paragraph_id": 126,
"text": "Xandra Ibarra's video art Spictacle II: La Tortillera (2004) was censored by San Antonio's Department of Arts and Culture in 2020 from \"XicanX: New Visions\", a show which aimed to challenge \"previous and existing surveys of Chicano and Latino identity-based exhibitions\" through highlighting \"the womxn, queer, immigrant, indigenous and activist artists who are at the forefront of the movement\". Ibarra stated \"the video is designed to challenge normative ideals of Mexican womanhood and is in alignment with the historical lineage of LGBTQAI+ artists' strategies to intervene in homophobic and sexist violence.\"",
"title": "Cultural aspects"
},
{
"paragraph_id": 127,
"text": "Chicano culture has become popular in some areas internationally, most prominently in Japan, Brazil, and Thailand. Chicano ideas such as Chicano hybridity and borderlands theory have found influence as well, such as in decoloniality. In São Paulo, Chicano cultural influence has formed the \"Cho-Low\" (combination of Cholo and Lowrider) subculture that has formed a sense of cultural pride among youth.",
"title": "Cultural aspects"
},
{
"paragraph_id": 128,
"text": "Chicano cultural influence is strong in Japan, where Chicano culture took hold in the 1980s and continued to grow with contributions from Shin Miyata, Junichi Shimodaira, Miki Style, Night Tha Funksta, and MoNa (Sad Girl). Miyata owns a record label, Gold Barrio Records, that re-releases Chicano music. Chicano fashion and other cultural aspects have also been adopted in Japan. There has been debate over whether this is cultural appropriation, with most arguing that it is appreciation rather than appropriation. In an interview asking why Chicano culture is popular in Japan, two long-time proponents of Chicano culture in Japan agreed that \"it's not about Mexico or about America: it's an alluring quality unique to the hybrid nature of Chicano and imprinted in all its resulting art forms, from lowriders in the '80s to TikTok videos today, that people relate to and appreciate, not only in Japan but around the world.\"",
"title": "Cultural aspects"
},
{
"paragraph_id": 129,
"text": "Most recently, Chicano culture has found influence in Thailand among working-class men and women that is called \"Thaino\" culture. They state that they have disassociated the violence that Hollywood portrays of Chicanos from the Chicano people themselves. They have adopted rules of no cocaine or amphetamines, and only marijuana, which is legal in Thailand. The leader of one group stated that he was inspired by how Chicanos created a culture out of defiance \"to fight against people who were racist toward them\" and that this inspired him, since he was born in a slum in Thailand. He also stated \"if you look closely at [Chicano] culture, you'll notice how gentle it is. You can see this in their Latin music, dances, clothes, and how they iron their clothes. It's both neat and gentle.\"",
"title": "Cultural aspects"
}
] | Chicano or Chicana is an ethnic identity for Mexican Americans who have a non-Anglo self-image, embracing their Mexican Native ancestry. Chicano was originally a classist and racist slur used toward low-income Mexicans that was reclaimed in the 1940s among youth who belonged to the Pachuco and Pachuca subculture. In the 1960s, Chicano was widely reclaimed in the building of a movement toward political empowerment, ethnic solidarity, and pride in being of indigenous descent. Chicano developed its own meaning separate from Mexican American identity. Youth in barrios rejected cultural assimilation into whiteness and embraced their own identity and worldview as a form of empowerment and resistance. The community forged an independent political and cultural movement, sometimes working alongside the Black power movement. The Chicano Movement faltered by the mid-1970s as a result of external and internal pressures. It was under state surveillance, infiltration, and repression by U.S. government agencies, informants, and agent provocateurs, such as through COINTELPRO. The Chicano Movement also had a fixation on masculine pride and machismo that fractured the community through sexism toward Chicanas and homophobia toward queer Chicana/os. In the 1980s, assimilation and economic mobility motivated many to embrace Hispanic identity in an era of conservatism. The term Hispanic emerged from a collaboration between the U.S. government and Mexican-American political elites in the Hispanic Caucus of Congress. Likewise, the same assimilatory force associated with Hispanic has been tied to the usage of Latino. They used the term to identify themselves and the community with mainstream American culture, depart from Chicanismo, and distance themselves from what they perceived as the "militant" Black Caucus. At the grassroots level, Chicana/os continued to build the feminist, gay and lesbian, and anti-apartheid movements, which kept the identity politically relevant. After a decade of Hispanic dominance, Chicana/o student activism in the early 1990s recession and the anti-Gulf War movement revived the identity with a demand to expand Chicana/o studies programs. Chicanas were active at the forefront, despite facing critiques from "movement loyalists", as they did in the Chicano Movement. Chicana feminists addressed employment discrimination, environmental racism, healthcare, sexual violence, and exploitation in their communities and in solidarity with the Third World. Chicanas worked to "liberate her entire people"; not to oppress men, but to be equal partners in the movement. Xicanisma, coined by Ana Castillo in 1994, called for Chicana/os to "reinsert the forsaken feminine into our consciousness", to embrace one's Indigenous roots, and support Indigenous sovereignty. In the 2000s, earlier traditions of anti-imperialism in the Chicano Movement were expanded. Building solidarity with undocumented immigrants became more important, despite issues of legal status and economic competitiveness sometimes maintaining distance between groups. U.S. foreign interventions abroad were connected with domestic issues concerning the rights of undocumented immigrants in the United States. Chicano/a consciousness increasingly became transnational and transcultural, thinking beyond and bridging with communities over political borders. The identity was renewed based on Indigenous and decolonial consciousness, cultural expression, resisting gentrification, defense of immigrants, and the rights of women and queer people. 
Xicanx identity also emerged in the 2010s, based on the Chicana feminist intervention of Xicanisma. | 2001-06-09T13:28:17Z | 2023-12-29T06:51:23Z | [
"Template:\" '",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite thesis",
"Template:Redirect",
"Template:Citation",
"Template:Sisterlinks",
"Template:Webarchive",
"Template:Chicano/Mexican-American",
"Template:Hispanics/Latinos",
"Template:'\"",
"Template:Var",
"Template:Cbignore",
"Template:Main articles",
"Template:Lang",
"Template:Cite book",
"Template:Cite web",
"Template:Dead link",
"Template:Page needed",
"Template:Authority control",
"Template:Short description",
"Template:IPA",
"Template:Citation needed",
"Template:Portal",
"Template:Cite news",
"Template:ISBN",
"Template:Chicano and Mexican American topics sidebar",
"Template:See also",
"Template:Cite video",
"Template:Main",
"Template:Wiktionary pipe"
] | https://en.wikipedia.org/wiki/Chicano |
5,717 | Canary Islands | The Canary Islands (/kəˈnɛəri/; Spanish: Canarias, pronounced [kaˈnaɾjas]), also known informally as the Canaries, are a Spanish autonomous community and archipelago in Macaronesia in the Atlantic Ocean. At their closest point to the African mainland, they are 100 kilometres (62 miles) west of Morocco and the Western Sahara. They are the southernmost of the autonomous communities of Spain. The islands have a population of 2.2 million people and are the most populous special territory of the European Union.
The seven main islands are (from largest to smallest in area) Tenerife, Fuerteventura, Gran Canaria, Lanzarote, La Palma, La Gomera, and El Hierro. The archipelago includes many smaller islands and islets, including La Graciosa, Alegranza, Isla de Lobos, Montaña Clara, Roque del Oeste, and Roque del Este. It also includes a number of rocks, including Garachico and Anaga. In ancient times, the island chain was often referred to as "the Fortunate Isles". The Canary Islands are the southernmost region of Spain, and the largest and most populous archipelago of Macaronesia. Because of their location, the Canary Islands have historically been considered a link between the four continents of Africa, North America, South America, and Europe.
In 2019, the Canary Islands had a population of 2,153,389, with a density of 287.39 inhabitants per km², making it the eighth most populous autonomous community of Spain. The population is mostly concentrated in the two capital islands: around 43% on the island of Tenerife and 40% on the island of Gran Canaria.
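As a rough consistency check on these figures (a sketch only: the land area of about 7,493 km² is an assumed value, not stated in the text), the quoted density is simply the population divided by the archipelago's area:

% Density check; the area A ≈ 7,493 km^2 is an assumption, not taken from the text.
\[
\rho = \frac{P}{A} \approx \frac{2{,}153{,}389\ \text{inhabitants}}{7{,}493\ \text{km}^2} \approx 287.4\ \text{inhabitants per km}^2,
\]

which matches the quoted figure of 287.39 inhabitants per km² to within rounding.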
The Canary Islands, especially Tenerife, Gran Canaria, Fuerteventura, and Lanzarote, are a major tourist destination, with over 12 million visitors per year. This is due to their beaches, subtropical climate, and important natural attractions, especially Maspalomas in Gran Canaria and Mount Teide (a World Heritage Site) in Tenerife. Mount Teide is the highest peak in Spain and the third tallest volcano in the world, measured from its base on the ocean floor. The islands have warm summers and winters warm enough for the climate to be technically tropical at sea level. The amount of precipitation and the level of maritime moderation vary depending on location and elevation. The archipelago includes green areas as well as desert. The islands' high mountains are ideal for astronomical observation, because they lie above the temperature inversion layer. As a result, the archipelago boasts two professional observatories: the Teide Observatory on Tenerife, and Roque de los Muchachos Observatory on La Palma.
In 1927, the Province of Canary Islands was split into two provinces. In 1982, the autonomous community of the Canary Islands was established. The cities of Santa Cruz de Tenerife and Las Palmas de Gran Canaria are, jointly, the capitals of the islands. Those cities are also, respectively, the capitals of the provinces of Santa Cruz de Tenerife and Las Palmas. Las Palmas de Gran Canaria has been the largest city in the Canaries since 1768, except for a brief period in the 1910s. Between the 1833 territorial division of Spain and 1927, Santa Cruz de Tenerife was the sole capital of the Canary Islands. In 1927, it was ordered by decree that the capital of the Canary Islands would be shared between two cities, and this arrangement persists to the present day. The third largest city in the Canary Islands is San Cristóbal de La Laguna (another World Heritage Site) on Tenerife.
During the Age of Sail, the islands were the main stopover for Spanish galleons during the Spanish colonisation of the Americas, which sailed that far south in order to catch the prevailing northeasterly trade winds.
The name Islas Canarias is likely derived from the Latin name Canariae Insulae, meaning "Islands of the Dogs", perhaps because monk seals or sea dogs were abundant, a name that was evidently generalized from the ancient name of one of these islands, Canaria – presumably Gran Canaria. According to the historian Pliny the Elder, the island Canaria contained "vast multitudes of dogs of very large size". The connection to dogs is retained in their depiction on the islands' coat-of-arms.
Other theories speculate that the name comes from the Nukkari Berber tribe living in the Moroccan Atlas, named in Roman sources as Canarii, though Pliny again mentions the relation of this term with dogs.
The name of the islands is not derived from the canary bird; rather, the birds are named after the islands.
Tenerife is the largest and most populous island of the archipelago. Gran Canaria, with 865,070 inhabitants, is both the Canary Islands' second most populous island, and the third most populous one in Spain after Tenerife (966,354 inhabitants) and Majorca (896,038 inhabitants). The island of Fuerteventura is the second largest in the archipelago and located 100 km (62 mi) from the African coast.
The islands form the Macaronesia ecoregion with the Azores, Cape Verde, Madeira, and the Savage Isles. The Canary Islands are the largest and most populated archipelago of Macaronesia. The archipelago consists of seven large and several smaller islands, all of which are volcanic in origin.
Depending on the position of the islands with respect to the north-east trade winds, the climate can be mild and wet or very dry. Several native species form laurisilva forests.
As a consequence, the individual islands in the Canary archipelago tend to have distinct microclimates. Islands such as El Hierro, La Palma and La Gomera, lying to the west of the archipelago, have a climate influenced by the moist Canary Current. They are well vegetated even at low levels and have extensive tracts of sub-tropical laurisilva forest. As one travels east toward the African coast, the influence of the current diminishes, and the islands become increasingly arid. Fuerteventura and Lanzarote, the islands closest to the African mainland, are effectively desert or semi-desert. Gran Canaria is known as a "continent in miniature" for its diverse landscapes like Maspalomas and Roque Nublo. In terms of climate, Tenerife is particularly interesting. The north of the island lies under the influence of the moist Atlantic winds and is well vegetated, while the south of the island around the tourist resorts of Playa de las Américas and Los Cristianos is arid. The island rises to almost 4,000 m (13,000 ft) above sea level, and at altitude, in the cool relatively wet climate, forests of the endemic pine Pinus canariensis thrive. Many of the plant species in the Canary Islands, like the Canary Island pine and the dragon tree, Dracaena draco, are endemic, as noted by Sabin Berthelot and Philip Barker Webb in their work, L'Histoire Naturelle des Îles Canaries (1835–50).
The climate is warm subtropical and generally semi-arid, moderated by the sea and in summer by the trade winds. There are a number of microclimates, and the classifications range mainly from semi-arid to desert. According to Köppen, the majority of the Canary Islands have a hot desert climate (BWh) or a hot semi-arid climate (BSh), caused partly by the cool Canary Current. There is also a humid subtropical climate, strongly influenced by the ocean, at middle elevations on the islands of La Gomera, Tenerife and La Palma, where laurisilva cloud forests grow.
The seven major islands, one minor island, and several small islets were originally volcanic islands, formed by the Canary hotspot. The Canary Islands are the only place in Spain where volcanic eruptions have been recorded during the Modern Era, with some volcanoes still active (El Hierro, 2011). Volcanic islands such as those in the Canary chain often have steep ocean cliffs caused by catastrophic debris avalanches and landslides. The island chain's most recent eruption occurred at Cumbre Vieja, a volcanic ridge on La Palma, in 2021.
The Teide volcano on Tenerife is the highest mountain in Spain and the third tallest volcano on Earth located on a volcanic ocean island. All the islands except La Gomera have been active in the last million years; four of them (Lanzarote, Tenerife, La Palma and El Hierro) have historical records of eruptions since European discovery. The islands rise from Jurassic oceanic crust associated with the opening of the Atlantic. Underwater magmatism commenced during the Cretaceous and has continued to the present day. The current islands reached the ocean's surface during the Miocene. The islands were once considered a distinct physiographic section of the Atlas Mountains province, which in turn is part of the larger African Alpine System division, but are nowadays recognized as being related to a magmatic hot spot.
In the summer of 2011, a series of low-magnitude earthquakes occurred beneath El Hierro, following a northeast–southwest linear trend. In October, a submarine eruption occurred about 2 km (1¼ mi) south of La Restinga. This eruption produced gases and pumice, but no explosive activity was reported.
The following table shows the highest mountains in each of the islands:
The official natural symbols associated with the Canary Islands are the bird Serinus canaria (the canary) and the palm Phoenix canariensis.
Four of Spain's thirteen national parks are located in the Canary Islands, more than in any other autonomous community. Two of these have been declared UNESCO World Heritage Sites and the other two are part of Biosphere Reserves. The parks are Teide National Park on Tenerife, Caldera de Taburiente National Park on La Palma, Garajonay National Park on La Gomera, and Timanfaya National Park on Lanzarote.
Teide National Park is the oldest and largest national park in the Canary Islands and one of the oldest in Spain. Located in the geographic centre of the island of Tenerife, it is the most visited national park in Spain. In 2010, it became the most visited national park in Europe and second worldwide. The park's highlight is the Teide volcano; standing at an altitude of 3,715 metres (12,188 ft), it is the highest elevation of the country and the third largest volcano on Earth from its base. In 2007, the Teide National Park was declared one of the 12 Treasures of Spain.
The regional executive body, the Government of the Canary Islands, is presided over by Fernando Clavijo Batlle (Canarian Coalition), the current President of the Canary Islands. The president is invested by the members of the regional legislature, the Parliament of the Canary Islands, which consists of 70 elected legislators. The last regional election took place in May 2023.
The islands have 14 seats in the Spanish Senate. Of these, 11 seats are directly elected (3 for Gran Canaria, 3 for Tenerife, and 1 each for Lanzarote (including La Graciosa), Fuerteventura, La Palma, La Gomera and El Hierro) while the other 3 are appointed by the regional legislature.
The Autonomous Community of the Canary Islands consists of two provinces (provincias), Las Palmas and Santa Cruz de Tenerife, whose capitals (Las Palmas de Gran Canaria and Santa Cruz de Tenerife) are capitals of the autonomous community. Each of the seven major islands is ruled by an island council named Cabildo Insular. Each island is subdivided into smaller municipalities (municipios); Las Palmas is divided into 34 municipalities, and Santa Cruz de Tenerife is divided into 54 municipalities.
The international boundary of the Canaries is one subject of dispute in Morocco–Spain relations. Moreover, in 2022 the UN declared the Canary Islands' territorial waters to be part of the Moroccan coast, and Morocco has authorised gas and oil exploration in what the Canary Islands hold to be Canarian territorial waters and Western Sahara waters. Morocco's official position is that international laws regarding territorial limits do not authorise Spain to claim seabed boundaries based on the territory of the Canaries, since the Canary Islands enjoy a large degree of autonomy. In fact, the islands do not enjoy any special degree of autonomy, as each of the Spanish regions is considered an autonomous community with equal status to the peninsular ones. Under the Law of the Sea, the only islands not granted territorial waters or an exclusive economic zone (EEZ) are those that are not fit for human habitation or do not have an economic life of their own, which is not the case for the Canary Islands.
There are some pro-independence political parties, like the National Congress of the Canaries (CNC) and the Popular Front of the Canary Islands, but their popular support is almost insignificant, with no presence in either the autonomous parliament or the cabildos insulares. According to a 2012 study by the Centro de Investigaciones Sociológicas, when asked about national identity, the majority of respondents from the Canary Islands (53.8%) consider themselves Spanish and Canarian in equal measures, followed by 24% who consider themselves more Canarian than Spanish. Only 6.1% of the respondents consider themselves only Canarian while 7% consider themselves only Spanish.
The defence of the territory is the responsibility of the Spanish Armed Forces. As such, various components of the Army, Navy, Air Force and the Civil Guard are based in the territory.
Before the arrival of humans, the Canaries were inhabited by prehistoric animals; for example, the giant lizard (Gallotia goliath), the Tenerife and Gran Canaria giant rats, and giant prehistoric tortoises, Geochelone burchardi and Geochelone vulcanica.
Although the original settlement of what are now called the Canary Islands is not entirely clear, linguistic, genetic, and archaeological analyses indicate that indigenous peoples were living on the Canary Islands at least 2000 years ago but possibly one thousand years or more before, and that they shared a common origin with the Berbers on the nearby North African coast. Reaching the islands may have taken place using several small boats, landing on the easternmost islands Lanzarote and Fuerteventura. These groups came to be known collectively as the Guanches, although Guanches had been the name for only the indigenous inhabitants of Tenerife.
As José Farrujia describes, 'The indigenous Canarians lived mainly in natural caves, usually near the coast, 300–500 m above sea level. These caves were sometimes isolated but more commonly formed settlements, with burial caves nearby'. Archaeological work has uncovered a rich culture visible through artefacts of ceramics, human figures, fishing, hunting and farming tools, plant fibre clothing and vessels, as well as cave paintings. At Lomo de los Gatos on Gran Canaria, a site occupied from 1,600 years ago up until the 1960s, round stone houses, complex burial sites, and associated artefacts have been found. Thousands of Libyco-Berber alphabet inscriptions are scattered across the islands and have been extensively documented by linguists.
The social structure of indigenous Canarians encompassed 'a system of matrilineal descent in most of the islands, in which inheritance was passed on via the female line. Social status and wealth were hereditary and determined the individual's position in the social pyramid, which consisted of the king, the relatives of the king, the lower nobility, villeins, plebeians, and finally executioners, butchers, embalmers, and prisoners'. Their religion was animist, centring on the sun and moon, as well as natural features such as mountains.
The islands may have been visited by the Phoenicians, the Greeks, and the Carthaginians. King Juba II, Caesar Augustus's Numidian protégé, is credited with discovering the islands for the Western world. According to Pliny the Elder, Juba found the islands uninhabited, but found "a small temple of stone" and "some traces of buildings". Juba dispatched a naval contingent to re-open the dye production facility at Mogador in what is now western Morocco in the early first century AD. That same naval force was subsequently sent on an exploration of the Canary Islands, using Mogador as their mission base.
The names given by Romans to the individual islands were Ninguaria or Nivaria (Tenerife), Canaria (Gran Canaria), Pluvialia or Invale (Lanzarote), Ombrion (La Palma), Planasia (Fuerteventura), Iunonia or Junonia (El Hierro) and Capraria (La Gomera).
From the 14th century onward, numerous visits were made by sailors from Majorca, Portugal and Genoa. Lancelotto Malocello settled on Lanzarote in 1312. The Majorcans established a mission with a bishop in the islands that lasted from 1350 to 1400.
In 1402, the Castilian colonisation of the islands began with the expedition of the French explorers Jean de Béthencourt and Gadifer de la Salle, nobles and vassals of Henry III of Castile, to Lanzarote. From there, they went on to conquer Fuerteventura (1405) and El Hierro. These invasions were "brutal cultural and military clashes between the indigenous population and the Castilians" lasting over a century due to formidable resistance by indigenous Canarians. Professor Mohamed Adhikari has defined the conquest of the islands as a genocide of the Guanches.
Béthencourt received the title King of the Canary Islands, but still recognised King Henry III as his overlord. It was not a simple military enterprise, given the aboriginal resistance on some islands. Nor was it a simple political one, since the particular interests of the nobility (determined to strengthen their economic and political power through the acquisition of the islands) conflicted with those of the states, particularly Castile, which were in the midst of territorial expansion and in a process of strengthening the Crown against the nobility.
Historians distinguish two periods in the conquest of the Canary Islands:
Aristocratic conquest (Conquista señorial). This refers to the early conquests carried out by the nobility, for their own benefit and without the direct participation of the Crown of Castile, which merely granted rights of conquest in exchange for pacts of vassalage between the noble conqueror and the Crown. One can identify within this period an early phase known as the Betancurian or Norman Conquest, carried out by Jean de Bethencourt (who was originally from Normandy) and Gadifer de la Salle between 1402 and 1405, which involved the islands of Lanzarote, El Hierro and Fuerteventura. The subsequent phase is known as the Castilian Conquest, carried out by Castilian nobles who acquired, through purchases, assignments and marriages, the previously conquered islands and also incorporated the island of La Gomera around 1450.
Royal conquest (Conquista realenga). This defines the conquest between 1478 and 1496, carried out directly by the Crown of Castile, during the reign of the Catholic Monarchs, who armed and partly financed the conquest of those islands which were still unconquered: Gran Canaria, La Palma and Tenerife. This phase of the conquest came to an end in the year 1496, with the dominion of the island of Tenerife, bringing the entire Canarian Archipelago under the control of the Crown of Castile.
Béthencourt also established a base on the island of La Gomera, but it would be many years before the island was fully conquered. The natives of La Gomera, and of Gran Canaria, Tenerife, and La Palma, resisted the Castilian invaders for almost a century. In 1448 Maciot de Béthencourt sold the lordship of Lanzarote to Portugal's Prince Henry the Navigator, an action that was accepted by neither the natives nor the Castilians. Despite Pope Nicholas V ruling that the Canary Islands were under Portuguese control, the crisis swelled to a revolt which lasted until 1459 with the final expulsion of the Portuguese. In 1479, Portugal and Castile signed the Treaty of Alcáçovas, which settled disputes between Castile and Portugal over the control of the Atlantic. This treaty recognized Castilian control of the Canary Islands but also confirmed Portuguese possession of the Azores, Madeira, and the Cape Verde islands, and gave the Portuguese rights to any further islands or lands in the Atlantic that might be discovered.
The Castilians continued to dominate the islands, but due to the topography and the resistance of the native Guanches, they did not achieve complete control until 1496, when Tenerife and La Palma were finally subdued by Alonso Fernández de Lugo. As a result of this, 'the native pre-Hispanic population declined quickly due to war, epidemics, and slavery'. The Canaries were incorporated into the Kingdom of Castile.
After the conquest, the Castilians imposed a new economic model, based on single-crop cultivation: first sugarcane; then wine, an important item of trade with England. Gran Canaria was conquered by the Crown of Castile on 6 March 1480 and Tenerife in 1496, and each had its own governor. There has been speculation that the abundance of Roccella tinctoria on the Canary Islands offered a profit motive for Jean de Béthencourt during his conquest of the islands. Lichen has been used for centuries to make dyes, including the royal purple colours derived from Roccella tinctoria, also known as orseille.
The objective of the Spanish Crown to convert the islands into a powerhouse of cultivation required a much larger labour force. This was attained through a brutal practice of enslavement, not only of indigenous Canarians but large numbers of Africans who were forcibly taken from North and Sub-Saharan Africa. Whilst the first slave plantations in the Atlantic region were across Madeira, Cape Verde, and the Canary Islands, it was only the Canary Islands which had an indigenous population and were therefore invaded rather than newly occupied.
This agricultural industry was largely based on sugarcane: the Castilians converted large swaths of the landscape for sugarcane production and for the processing and manufacturing of sugar, facilitated by enslaved labourers. The cities of Santa Cruz de Tenerife and Las Palmas de Gran Canaria became stopping points for Spanish traders, conquistadors, and missionaries on their way to the New World. This trade route brought great wealth to the Castilian social sectors of the islands, which soon attracted merchants and adventurers from all over Europe. As the wealth grew, enslaved African workers were also forced into demeaning domestic roles, such as servants in the houses of rich Castilians on the islands. Research on the skeletons of some of these enslaved workers from the burial site of Finca Clavijo on Gran Canaria has shown that 'all of the adults buried in Finca Clavijo undertook extensive physical activity that involved significant stress on the spine and appendicular skeleton', resulting from relentless hard labour akin to that found among enslaved peoples on other sugarcane plantations around the world. These findings of the physical strain to which the enslaved at Finca Clavijo were subjected in order to provide wealth for the Spanish elite have inspired a poem by British writer Ralph Hoyte, entitled Close to the Bone.
As a result of the huge wealth generated, magnificent palaces and churches were built on La Palma during this busy, prosperous period. The Church of El Salvador survives as one of the island's finest examples of the architecture of the 16th century. Civilian architecture survives in forms such as Casas de los Sánchez-Ochando or Casa Quintana.
The Canaries' wealth invited attacks by pirates and privateers. Ottoman Turkish admiral and privateer Kemal Reis ventured into the Canaries in 1501, while Murat Reis the Elder captured Lanzarote in 1585.
The most severe attack took place in 1599, during the Dutch Revolt. A Dutch fleet of 74 ships and 12,000 men, commanded by Pieter van der Does, attacked the capital Las Palmas de Gran Canaria (the city had 3,500 of Gran Canaria's 8,545 inhabitants). The Dutch attacked the Castillo de la Luz, which guarded the harbor. The Canarians evacuated civilians from the city, and the Castillo surrendered (but not the city). The Dutch moved inland, but Canarian cavalry drove them back to Tamaraceite, near the city.
The Dutch then laid siege to the city, demanding the surrender of all its wealth. They received 12 sheep and 3 calves. Furious, the Dutch sent 4,000 soldiers to attack the Council of the Canaries, who were sheltering in the village of Santa Brígida. Three hundred Canarian soldiers ambushed the Dutch in the village of Monte Lentiscal, killing 150 and forcing the rest to retreat. The Dutch concentrated on Las Palmas de Gran Canaria, attempting to burn it down. The Dutch pillaged Maspalomas, on the southern coast of Gran Canaria, San Sebastián on La Gomera, and Santa Cruz on La Palma, but eventually gave up the siege of Las Palmas and withdrew.
In 1618 Barbary pirates from North Africa attacked Lanzarote and La Gomera, taking 1,000 captives to be sold as slaves. Another noteworthy attack occurred on 25 July 1797, when Santa Cruz de Tenerife was attacked by a British fleet under Horatio Nelson. The British were repulsed, losing almost 400 men. It was during this battle that Nelson lost his right arm.
The sugar-based economy of the islands faced stiff competition from Spain's Caribbean colonies. Low sugar prices in the 19th century caused severe recessions on the islands. A new cash crop, cochineal (cochinilla), came into cultivation during this time, reinvigorating the islands' economy. During this time the Canarian-American trade was developed, in which Canarian products such as cochineal, sugarcane and rum were sold in American ports such as Veracruz, Campeche, La Guaira and Havana, among others.
By the end of the 18th century, Canary Islanders had already emigrated to Spanish American territories such as Havana, Veracruz, Santo Domingo, San Antonio (Texas) and St. Bernard Parish (Louisiana). These economic difficulties spurred mass emigration during the 19th and first half of the 20th century, primarily to the Americas. Between 1840 and 1890 as many as 40,000 Canary Islanders emigrated to Venezuela. Thousands of Canarians also moved to Puerto Rico, where the Spanish monarchy felt that Canarians would adapt to island life better than other immigrants from the mainland of Spain. Deeply entrenched traditions, such as the Mascaras Festival in the town of Hatillo, Puerto Rico, are an example of Canarian culture still preserved in Puerto Rico. Similarly, many thousands of Canarians emigrated to the shores of Cuba. During the Spanish–American War of 1898, the Spanish fortified the islands against a possible American attack, but no such attack ever came.
Sirera and Renn (2004) distinguish two different types of expeditions, or voyages, during the period 1770–1830, which they term "the Romantic period":
First are "expeditions financed by the States, closely related with the official scientific Institutions. characterised by having strict scientific objectives (and inspired by) the spirit of Illustration and progress". In this type of expedition, Sirera and Renn include the following travellers:
The second type of expedition identified by Sirera and Renn is one that arose from more or less private initiatives. Among these, the key exponents were the following:
Sirera and Renn identify the period 1770–1830 as one in which "a panorama dominated until that moment by France and England was entered, with strength and brio, by the Germany of the Romantic period, whose presence in the islands would increase".
At the beginning of the 20th century, the British introduced a new cash crop, the banana, the export of which was controlled by companies such as Fyffes.
On 30 November 1833 the Province of Canary Islands was created, with Santa Cruz de Tenerife declared its capital. The rivalry between the cities of Las Palmas de Gran Canaria and Santa Cruz de Tenerife for the capital of the islands led to the division of the archipelago into two provinces on 23 September 1927.
During the time of the Second Spanish Republic, Marxist and anarchist workers' movements began to develop, led by figures such as José Miguel Pérez and Guillermo Ascanio. However, outside of a few municipalities, these organisations were a minority and fell easily to Nationalist forces during the Spanish Civil War.
In 1936, Francisco Franco was appointed General Commandant of the Canaries. He joined the military revolt of 17 July which began the Spanish Civil War. Franco quickly took control of the archipelago, except for a few points of resistance on La Palma and in the town of Vallehermoso, on La Gomera. Though there was never a war in the islands, the post-war suppression of political dissent on the Canaries was particularly severe.
During the Second World War, Winston Churchill prepared plans for the British seizure of the Canary Islands as a naval base, in the event of Gibraltar being invaded from the Spanish mainland. The planned operation was known as Operation Pilgrim.
Opposition to Franco's regime did not begin to organise until the late 1950s, when parties such as the Communist Party of Spain became active and various nationalist, leftist parties were formed.
During the Ifni War, the Franco regime set up concentration camps on the islands to extrajudicially imprison those in Western Sahara suspected of disloyalty to Spain, many of whom were colonial troops recruited on the spot but were later deemed to be potential fifth columnists and deported to the Canary Islands. These camps were characterised by the use of forced labour for infrastructure projects and highly unsanitary conditions resulting in the widespread occurrence of tuberculosis.
After the death of Franco, there was a pro-independence armed movement based in Algeria, the Movement for the Self-Determination and Independence of the Canarian Archipelago (MPAIAC). In 1968, the Organisation of African Unity recognized the MPAIAC as a legitimate African independence movement, and declared the Canary Islands to be an African territory still under foreign rule.
After the establishment of a democratic constitutional monarchy in Spain, autonomy was granted to the Canaries via a law passed in 1982, with a newly established autonomous devolved government and parliament. In 1983, the first autonomous elections were held. The Spanish Socialist Workers' Party (PSOE) won. In the 2007 elections, the PSOE gained a plurality of seats, but the nationalist Canarian Coalition and the conservative Partido Popular (PP) formed a ruling coalition government.
At present, the Canary Islands is the only autonomous community in Spain that has two capitals: Santa Cruz de Tenerife and Las Palmas de Gran Canaria, since the Statute of Autonomy of the Canary Islands was created in 1982.
The political capital of the archipelago did not exist as such until the nineteenth century. The first cities founded by the Europeans at the time of the conquest of the Canary Islands in the 15th century were Telde (in Gran Canaria), San Marcial del Rubicón (in Lanzarote) and Betancuria (in Fuerteventura). These cities boasted the first European institutions present in the archipelago, including Catholic bishoprics. However, because the period of splendor of these cities came before the total conquest of the archipelago and its incorporation into the Crown of Castile, they never exercised real political control over the entire Canary archipelago.
A Canarian city with full jurisdiction over the entire archipelago emerged only after the conquest of the Canary Islands, and originally only de facto, that is, without legal standing, tied to the headquarters of the Canary Islands General Captaincy.
Las Palmas de Gran Canaria was the first city to exercise this function, because the residence of the Captain General of the Canary Islands was in this city during part of the sixteenth and seventeenth centuries. In May 1661, the Captain General of the Canary Islands, Jerónimo de Benavente y Quiñones, moved the headquarters of the captaincy to the city of San Cristóbal de La Laguna on the island of Tenerife, because since the conquest that island had been the most populated and productive, and the one with the highest economic expectations. La Laguna would be considered the de facto capital of the archipelago until the official status of capital of the Canary Islands was confirmed for the city of Santa Cruz de Tenerife in the 19th century, due in part to the constant controversies and rivalries between the bourgeoisies of San Cristóbal de La Laguna and Las Palmas de Gran Canaria for the economic, political and institutional hegemony of the archipelago.
Already in 1723, the Captain General of the Canary Islands, Lorenzo Fernández de Villavicencio, had moved the headquarters of the General Captaincy of the Canary Islands from San Cristóbal de La Laguna to Santa Cruz de Tenerife. This decision still did not please society on the island of Gran Canaria. It was after the creation of the Province of Canary Islands in November 1833 that Santa Cruz became the first fully official capital of the Canary Islands (de jure, and not merely de facto as previously). Santa Cruz de Tenerife remained the capital of the Canary archipelago until 1927, when, during the government of General Primo de Rivera, the Province of Canary Islands was split into two provinces: Las Palmas, with its capital in Las Palmas de Gran Canaria, and Santa Cruz de Tenerife, with its capital in the homonymous city.
Finally, with the Statute of Autonomy of the Canary Islands in 1982 and the creation of the Autonomous Community of the Canary Islands, the capital of the archipelago was fixed as shared between Las Palmas de Gran Canaria and Santa Cruz de Tenerife, as it remains today.
The Canary Islands have a population of 2,153,389 inhabitants (2019), making it the eighth most populous of Spain's autonomous communities. The total area of the archipelago is 7,493 km² (2,893 sq mi), resulting in a population density of 287.4 inhabitants per square kilometre.
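The quoted density is simply the ratio of the two figures above; a minimal Python sketch checking the arithmetic (all values taken from the text):

```python
# Sanity check of the population-density figure quoted above.
# Population and land area are the values given in the text.
population = 2_153_389  # inhabitants (2019)
area_km2 = 7_493        # total land area in km^2

density = population / area_km2
print(f"{density:.1f} inhabitants per km^2")  # -> 287.4
```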
The populations of the islands, according to 2019 data, are:
The Canary Islands have become home to many European residents, mainly coming from Italy, Germany and the UK. Because of the vast immigration to Venezuela and Cuba during the second half of the 20th century and the later return to the Canary Islands of these people along with their families, there are many residents whose country of origin was Venezuela (66,593) or Cuba (41,807). Since the 1990s, many illegal migrants have reached the Canary Islands, Melilla and Ceuta, using them as entry points to the EU.
Catholicism has been the majority religion in the archipelago for more than five centuries, ever since the conquest of the Canary Islands. There are also several other religious communities.
The overwhelming majority of native Canarians are Roman Catholic (76.7%) with various smaller foreign-born populations of other Christian beliefs such as Protestants.
The appearance of the Virgin of Candelaria (patron saint of the Canary Islands) was credited with moving the Canary Islands toward Christianity. Two Catholic saints were born in the Canary Islands: Peter of Saint Joseph de Betancur and José de Anchieta. Both born on the island of Tenerife, they were missionaries in Guatemala and Brazil respectively.
The Canary Islands are divided into two Catholic dioceses, each governed by a bishop:
Separate from the overwhelming Christian majority is a minority of Muslims. Among the followers of Islam, the Islamic Federation of the Canary Islands exists to represent the Islamic community in the Canary Islands, as well as to provide practical support to its members. The archipelago is also home to the Evangelical Council of the Canary Islands.
Other religious faiths represented include Jehovah's Witnesses, The Church of Jesus Christ of Latter-day Saints as well as Hinduism. Minority religions are also present such as the Church of the Guanche People which is classified as a neo-pagan native religion. Also present are Buddhism, Judaism, Baháʼí, African religion, and Chinese religions.
According to Statista, in 2022 there were 80,171 Muslims in the Canary Islands.
The distribution of beliefs in 2012 according to the CIS Barometer Autonomy was as follows:
Ordered from west to east, the Canary Islands are El Hierro, La Palma, La Gomera, Tenerife, Gran Canaria, Fuerteventura, and Lanzarote. In addition, north of Lanzarote are the islets of La Graciosa, Montaña Clara, Alegranza, Roque del Este and Roque del Oeste, belonging to the Chinijo Archipelago, and northeast of Fuerteventura is the islet of Lobos. There are also a series of small adjacent rocks in the Canary Islands: the Roques de Anaga, Garachico and Fasnia in Tenerife, and those of Salmor and Bonanza in El Hierro.
El Hierro, the westernmost island, covers 268.71 km² (103.75 sq mi), making it the second smallest of the major islands, and the least populous with 10,798 inhabitants. The whole island was declared a biosphere reserve in 2000. Its capital is Valverde. Also known as Ferro, it was once believed to be the westernmost land in the world. Ancient European geographers such as Ptolemy recognised the island as the prime meridian of longitude. That remained so until the 19th century, when it was displaced by the meridian passing through Greenwich.
Fuerteventura, with a surface area of 1,660 km² (640 sq mi), is the second largest island of the archipelago. It has been declared a biosphere reserve by UNESCO. It has a population of 113,275. The oldest of the islands, it is also the most eroded. Its highest point is Pico de la Zarza ("Peak of the Bramble"), at a height of 807 metres (2,648 feet). Its capital is Puerto del Rosario.
Gran Canaria has 846,717 inhabitants. The capital, Las Palmas de Gran Canaria (377,203 inhabitants), is the most populous city and shares the status of capital of the Canaries with Santa Cruz de Tenerife. Gran Canaria's surface area is 1,560 km² (600 sq mi). Roque Nublo (1,813 metres; 5,948 feet) and Pico de las Nieves ("Peak of Snow"; 1,949 metres, 6,394 feet) are located in the center of the island. In the south of the island are the Maspalomas Dunes.
La Gomera has an area of 369.76 km² (142.77 sq mi) and is the second least populous island, with 21,136 inhabitants. Geologically it is one of the oldest islands of the archipelago. The insular capital is San Sebastián de La Gomera. Garajonay National Park is located on the island.
Lanzarote is the easternmost island and one of the oldest of the archipelago, and it shows evidence of recent volcanic activity. It has a surface area of 845.94 km² (326.62 sq mi) and a population of 149,183 inhabitants, including the adjacent islets of the Chinijo Archipelago. The capital is Arrecife, with 56,834 inhabitants.
The Chinijo Archipelago includes the islands of La Graciosa, Alegranza, Montaña Clara, Roque del Este and Roque del Oeste. It has a surface area of 40.8 km² (15.8 sq mi), and only La Graciosa is populated, with 658 inhabitants. At 29 km² (11 sq mi), La Graciosa is the largest island of the Chinijo Archipelago but also the smallest inhabited island of the Canaries.
La Palma, with 81,863 inhabitants covering an area of 708.32 km² (273.48 sq mi), is in its entirety a biosphere reserve. It had long shown few signs of volcanic activity; the volcano Teneguía last erupted in 1971. On 19 September 2021, the Cumbre Vieja volcanic ridge on the island erupted. La Palma is the second-highest island of the Canaries, with the Roque de los Muchachos, at 2,423 metres (7,949 feet), as its highest point. Santa Cruz de La Palma (known to those on the island simply as "Santa Cruz") is its capital.
Tenerife is, with its area of 2,034 km² (785 sq mi), the most extensive island of the Canary Islands. In addition, with 904,713 inhabitants it is the most populated island of both the archipelago and Spain. Two of the islands' principal cities are located on it: the capital, Santa Cruz de Tenerife, and San Cristóbal de La Laguna (a World Heritage Site). San Cristóbal de La Laguna, the second city of the island, is home to the oldest university in the Canary Islands, the University of La Laguna. Teide, at 3,715 metres (12,188 feet), is the highest peak of Spain and also a World Heritage Site. Tenerife is the site of the worst air disaster in the history of aviation, in which 583 people were killed in the collision of two Boeing 747s on 27 March 1977.
Graciosa Island or commonly La Graciosa is a volcanic island in the Canary Islands of Spain, located 2 km (1.2 mi) north of the island of Lanzarote across the Strait of El Río. It was formed by the Canary hotspot. The island is part of the Chinijo Archipelago and the Chinijo Archipelago Natural Park (Parque Natural del Archipiélago Chinijo). It is administered by the municipality of Teguise. In 2018 La Graciosa officially became the eighth Canary Island. Before then, La Graciosa had the status of an islet, administratively dependent on the island of Lanzarote. It is the smallest and least populated of the main islands, with a population of about 700 people.
The economy is based primarily on tourism, which makes up 32% of the GDP. The Canaries receive about 12 million tourists per year. Construction makes up nearly 20% of the GDP, and tropical agricultural products, primarily bananas and tobacco, are grown for export to Europe and the Americas. Ecologists are concerned that resources, especially in the more arid islands, are being overexploited, but there are still many agricultural resources such as tomatoes, potatoes, onions, cochineal, sugarcane, grapes, vines, dates, oranges, lemons, figs, wheat, barley, maize, apricots, peaches and almonds.
Water resources are also being overexploited, due to the high water usage by tourists. Some islands (such as Gran Canaria and Tenerife) also overexploit their groundwater, to such a degree that, according to European and Spanish legal regulations, the current situation is not acceptable. To address these problems, good governance and a change in the water-use paradigm have been proposed. These solutions depend largely on controlling water use and on demand management. As this is administratively difficult and politically unpalatable, most action is currently directed at increasing the public supply of water through imports from outside; a decision which is economically, politically and environmentally questionable.
To bring in revenue for environmental protection, innovation, training and water sanitation, a tourist tax was considered in 2018, along with a doubling of the ecotax and restrictions on holiday rentals in the zones under the greatest pressure of demand.
The economy was worth €25 billion (2001 GDP figures). The islands experienced continuous growth during the 20-year period up to 2001, at a rate of approximately 5% annually. This growth was fueled mainly by huge amounts of foreign direct investment, mostly to develop tourism real estate (hotels and apartments), and by European funds (nearly €11 billion in the period from 2000 to 2007), since the Canary Islands are labelled Region Objective 1 (eligible for European structural funds). Additionally, the EU allows the Canary Islands Government to offer special tax concessions for investors who incorporate under the Zona Especial Canaria (ZEC) regime and create more than five jobs.
Spain gave permission in August 2014 for Repsol and its partners to explore oil and natural gas prospects off the Canary Islands, involving an investment of €7.5 billion over four years, to commence at the end of 2016. Repsol at the time said the area could ultimately produce 100,000 barrels of oil a day, which would meet 10 percent of Spain's energy needs. However, the analysis of samples obtained did not show the necessary volume nor quality to consider future extraction, and the project was scrapped.
Despite the islands' currently very high dependence on fossil fuels, research on renewable energy potential has concluded that high potential for renewable energy technologies exists on the archipelago, to such an extent that a scenario pathway to a 100% renewable energy supply by 2050 has been put forward.
The Canary Islands' great natural attractions, climate and beaches make the islands a major tourist destination, visited each year by about 12 million people (11,986,059 in 2007, of whom 29% were Britons, 22% Spanish from outside the Canaries, and 21% Germans). Among the islands, Tenerife receives the largest number of tourists annually, followed by Gran Canaria and Lanzarote. The archipelago's principal tourist attraction is Teide National Park (in Tenerife), where Mount Teide, the highest mountain in Spain and the third largest volcano in the world, receives over 2.8 million visitors annually.
The combination of high mountains, proximity to Europe, and clean air has made the Roque de los Muchachos peak (on La Palma) a leading location for telescopes such as the Gran Telescopio Canarias (GranTeCan).
The islands, as an autonomous region of Spain, are in the European Union and the Schengen Area. They are in the European Union Customs Union but outside the VAT area. Instead of VAT there is a local Sales Tax (IGIC) which has a general rate of 7%, an increased tax rate of 13.5%, a reduced tax rate of 3% and a zero tax rate for certain basic need products and services. Consequently, some products are subject to additional VAT if being exported from the islands into mainland Spain or the rest of the EU.
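As a rough illustration of how these bands work in practice, the sketch below encodes the four rates quoted above; which goods fall into which band is simplified here, and the example prices and band assignments are hypothetical:

```python
# Illustrative sketch of the IGIC rate bands described above.
# The four rates come from the text; the example prices and
# band assignments are hypothetical.
IGIC_RATES = {
    "general": 0.07,     # general rate
    "increased": 0.135,  # increased rate
    "reduced": 0.03,     # reduced rate
    "zero": 0.0,         # certain basic-need products and services
}

def gross_price(net_price: float, band: str) -> float:
    """Apply the IGIC band to a net price and return the gross price."""
    return round(net_price * (1 + IGIC_RATES[band]), 2)

print(gross_price(100.0, "general"))    # 107.0
print(gross_price(100.0, "increased"))  # 113.5
print(gross_price(100.0, "zero"))       # 100.0
```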
Canarian time is Western European Time (WET), the same as GMT; in summer it is one hour ahead of GMT. Canarian time is thus one hour behind that of mainland Spain and the same as that of the UK, Ireland and mainland Portugal all year round.
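For anyone handling timestamps across the two zones, the offset can be reproduced with the IANA time-zone database; a minimal sketch (the zone identifiers "Atlantic/Canary" and "Europe/Madrid" are standard IANA names, not taken from the text; requires Python 3.9+):

```python
# Minimal sketch of the one-hour offset between the Canaries and
# mainland Spain, using standard IANA time-zone identifiers.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

now = datetime.now(timezone.utc)
canary = now.astimezone(ZoneInfo("Atlantic/Canary"))  # WET / WEST
madrid = now.astimezone(ZoneInfo("Europe/Madrid"))    # CET / CEST

print(canary.strftime("%H:%M %Z"), "|", madrid.strftime("%H:%M %Z"))
# Madrid reads one hour ahead of the Canaries all year round.
```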
The Canary Islands were visited by 16,150,054 tourists in 2018 and 15,589,290 in 2019.
The Gross Domestic Product (GDP) in the Canary Islands in 2015 was €40,923 million, €19,222 per capita. The figures by island are as follows:
The Canary Islands have eight airports altogether, two of the main ports of Spain, and an extensive network of autopistas (highways) and other roads. Traffic congestion is sometimes a problem on Tenerife and Gran Canaria.
Large ferry boats and fast ferries link most of the islands. Both types can transport large numbers of passengers, cargo, and vehicles. Fast ferries are made of aluminium and powered by modern and efficient diesel engines, while conventional ferries have a steel hull and are powered by heavy oil. Fast ferries travel in excess of 30 kn (56 km/h; 35 mph); conventional ferries travel in excess of 20 kn (37 km/h; 23 mph). A typical conventional ferry ride between La Palma and Tenerife may take eight hours or more, while a fast ferry takes about two and a half hours; between Tenerife and Gran Canaria a fast-ferry crossing can take about one hour.
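The quoted crossing times follow roughly from the speeds, since time at sea is simply distance divided by speed. A back-of-the-envelope sketch (the route distance is an assumed round figure for illustration, not a value from the text; real schedules add port manoeuvring and routing time):

```python
# Back-of-the-envelope crossing-time estimate from the minimum
# speeds quoted above. ROUTE_NM is an assumed, illustrative
# distance for the La Palma-Tenerife crossing, not from the text.
def crossing_hours(distance_nm: float, speed_kn: float) -> float:
    """Hours at sea for a distance in nautical miles at a speed in knots."""
    return distance_nm / speed_kn

ROUTE_NM = 80  # assumed distance, nautical miles
print(f"fast ferry (30 kn):         {crossing_hours(ROUTE_NM, 30):.1f} h")
print(f"conventional ferry (20 kn): {crossing_hours(ROUTE_NM, 20):.1f} h")
```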
The largest airport is Gran Canaria Airport. Tenerife has two airports, Tenerife North Airport and Tenerife South Airport; together they give the island the highest passenger traffic of all the Canary Islands. The two main islands receive the greatest number of passengers: Tenerife with 6,204,499 passengers and Gran Canaria with 5,011,176.
The port of Las Palmas is first in freight traffic in the islands, while the port of Santa Cruz de Tenerife is the leading fishing port, with approximately 7,500 tonnes of fish landed, according to the Spanish government publication Statistical Yearbook of State Ports. Santa Cruz is also the second port in Spain by ship traffic, surpassed only by the Port of Algeciras Bay. The port's facilities include a border inspection post (BIP) approved by the European Union, which is responsible for inspecting all types of imports from third countries and exports to countries outside the European Economic Area. The port of Los Cristianos (Tenerife) records the greatest number of passengers in the Canary Islands, followed by the port of Santa Cruz de Tenerife. The Port of Las Palmas is the third port in the islands in passengers and first in number of vehicles transported.
The SS America was beached in the Canary Islands on 18 January 1994. The ocean liner broke apart over the course of several years and eventually sank beneath the surface.
The Tenerife Tram opened in 2007 and is currently the only one in the Canary Islands, travelling between the cities of Santa Cruz de Tenerife and San Cristóbal de La Laguna.
Three more railway lines are being planned for the Canary Islands:
The Servicio Canario de Salud is an autonomous administrative body attached to the regional ministry responsible for health in the Government of the Canary Islands. The majority of the archipelago's hospitals belong to this organization:
The Canary Islands were previously inhabited by a variety of endemic animals, such as extinct giant lizards (Gallotia goliath), giant tortoises (Centrochelys burchardi and C. vulcanica), and Tenerife and Gran Canaria giant rats (Canariomys bravoi and C. tamarani), among others. Extinct birds known only from Pleistocene and Holocene age bones include the Canary Islands quail (Coturnix gomerae), Dune shearwater (Puffinus holeae), Lava shearwater (P. olsoni), Trias greenfinch (Chloris triasi), Slender-billed greenfinch (C. aurelioi) and the Long-legged bunting (Emberiza alcoveri).
The bird life includes European and African species, such as the black-bellied sandgrouse, the canary, the graja (a subspecies of red-billed chough endemic to La Palma), the Gran Canaria blue chaffinch, Tenerife blue chaffinch, Canary Islands chiffchaff, Fuerteventura chat, Tenerife goldcrest, La Palma chaffinch, Canarian Egyptian vulture, Bolle's pigeon, laurel pigeon, plain swift, and houbara bustard.
Terrestrial fauna includes the El Hierro giant lizard, La Gomera giant lizard, and the La Palma giant lizard. Mammals include the Canarian shrew, Canary big-eared bat, the Algerian hedgehog, and the more recently introduced mouflon.
The marine life found in the Canary Islands is also varied, a combination of North Atlantic, Mediterranean and endemic species. In recent years, the increasing popularity of both scuba diving and underwater photography has provided biologists with much new information on the marine life of the islands.
Fish species found in the islands include many species of shark, ray, moray eel, bream, jack, grunt, scorpionfish, triggerfish, grouper, goby, and blenny. In addition, there are many invertebrate species, including sponge, jellyfish, anemone, crab, mollusc, sea urchin, starfish, sea cucumber and coral.
A total of five species of marine turtle are sighted periodically in the islands, the most common being the endangered loggerhead sea turtle. The other four are the green sea turtle, hawksbill sea turtle, leatherback sea turtle and Kemp's ridley sea turtle. Currently, there are no signs that any of these species breed in the islands, so those seen in the water are usually migrating. However, it is believed that some of these species may have bred in the islands in the past; records of several sightings of leatherback sea turtles on beaches in Fuerteventura add credibility to that theory.
Marine mammals include a large variety of cetaceans, including rare and little-known species (see Marine life of the Canary Islands). Hooded seals occasionally occur in the Canary Islands as vagrants. The Canary Islands were also formerly home to a population of the rarest pinniped in the world, the Mediterranean monk seal.
Some of the holidays celebrated in the Canary Islands are international or national, others are regional, and others are insular. The official day of the autonomous community is Canary Islands Day, 30 May, commemorating the first session of the Parliament of the Canary Islands, based in the city of Santa Cruz de Tenerife, held on 30 May 1983.
The common festive calendar throughout the Canary Islands is as follows:
In addition, each island has its own festival, a holiday only on that specific island, honouring the island's patron saint. In chronological order, they are:
The most famous festival of the Canary Islands is Carnival, the archipelago's best-known and most international celebration. Carnival is celebrated in all the islands and all their municipalities; perhaps the two busiest are those of the two Canarian capitals, the Carnival of Santa Cruz de Tenerife (a Tourist Festival of International Interest) and the Carnival of Las Palmas de Gran Canaria. It is celebrated in the streets between February and March. The other islands of the archipelago have carnivals with their own traditions, among which stand out the Festival of the Carneros of El Hierro, the Festival of the Diabletes of Teguise in Lanzarote, Los Indianos de La Palma, the Carnival of San Sebastián de La Gomera and the Carnival of Puerto del Rosario in Fuerteventura.
In the 1960s, Gran Canaria was selected as the location for one of the 14 ground stations in the Manned Space Flight Network (MSFN) to support the NASA space program. Maspalomas Station, located in the south of the island, took part in a number of space missions including the Apollo 11 Moon landings and Skylab. Today it continues to support satellite communications as part of the ESA network.
Because of the remote location, a number of astronomical observatories are located in the archipelago, including the Teide Observatory on Tenerife, the Roque de los Muchachos Observatory on La Palma, and the Temisas Astronomical Observatory on Gran Canaria.
Tenerife is the home of the Instituto de Astrofísica de Canarias (Astrophysical Institute of the Canaries). There is also an Instituto de Bio-Orgánica Antonio González (Antonio González Bio-Organic Institute) at the University of La Laguna. Also at that university are the Instituto de Lingüística Andrés Bello (Andrés Bello Institute of Linguistics), the Centro de Estudios Medievales y Renacentistas (Center for Medieval and Renaissance Studies), the Instituto Universitario de la Empresa (University Institute of Business), the Instituto de Derecho Regional (Regional Institute of Law), the Instituto Universitario de Ciencias Políticas y Sociales (University Institute of Political and Social Sciences) and the Instituto de Enfermedades Tropicales (Institute of Tropical Diseases). The latter is one of the seven institutions of the Red de Investigación de Centros de Enfermedades Tropicales (RICET, "Network of Research of Centers of Tropical Diseases"), located in various parts of Spain. The Instituto Volcanológico de Canarias (Volcanological Institute of the Canary Islands) is based in Tenerife.
A unique form of wrestling known as Canarian wrestling (lucha canaria) has opponents stand in a special area called a "terrero" and try to throw each other to the ground using strength and quick movements.
Another sport is the "game of the sticks" (palo canario) where opponents fence with long sticks. This may have come about from the shepherds of the islands who would challenge each other using their long walking sticks.
Furthermore, there is the shepherd's jump (salto del pastor). This involves using a long stick to vault over an open area. This sport possibly evolved from the shepherd's need to occasionally get over an open area in the hills as they were tending their sheep.
The two main football teams in the archipelago are CD Tenerife (founded in 1912) and UD Las Palmas (founded in 1949). As of the 2023–24 season, UD Las Palmas plays in La Liga, the top tier of Spanish football, while CD Tenerife plays in the Segunda División. When in the same division, the clubs contest the Canary Islands derby. Smaller clubs also play in the mainland Spanish football league system, most notably UD Lanzarote and CD Laguna, although no other Canarian clubs have played in the top flight.
The mountainous terrain of the Canary Islands caters to the growing popularity of ultra running and ultramarathons; the islands host annual competitive long-distance events including the CajaMar Tenerife Bluetrail on Tenerife, Transvulcania on La Palma, Transgrancanaria on Gran Canaria, and the Half Marathon des Sables on Fuerteventura. A yearly Ironman Triathlon has been held on Lanzarote since 1992.
{
"paragraph_id": 0,
"text": "The Canary Islands (/kəˈnɛəri/; Spanish: Canarias, pronounced [kaˈnaɾjas]), also known informally as the Canaries, are a Spanish autonomous community and archipelago in Macaronesia in the Atlantic Ocean. At their closest point to the African mainland, they are 100 kilometres (62 miles) west of Morocco and the Western Sahara. They are the southernmost of the autonomous communities of Spain. The islands have a population of 2.2 million people and are the most populous special territory of the European Union.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The seven main islands are (from largest to smallest in area) Tenerife, Fuerteventura, Gran Canaria, Lanzarote, La Palma, La Gomera, and El Hierro. The archipelago includes many smaller islands and islets, including La Graciosa, Alegranza, Isla de Lobos, Montaña Clara, Roque del Oeste, and Roque del Este. It also includes a number of rocks, including Garachico and Anaga. In ancient times, the island chain was often referred to as \"the Fortunate Isles\". The Canary Islands are the southernmost region of Spain, and the largest and most populous archipelago of Macaronesia. Because of their location, the Canary Islands have historically been considered a link between the four continents of Africa, North America, South America, and Europe.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In 2019, the Canary Islands had a population of 2,153,389, with a density of 287.39 inhabitants per km, making it the eighth most populous autonomous community of Spain. The population is mostly concentrated in the two capital islands: around 43% on the island of Tenerife and 40% on the island of Gran Canaria.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Canary Islands, especially Tenerife, Gran Canaria, Fuerteventura, and Lanzarote, are a major tourist destination, with over 12 million visitors per year. This is due to their beaches, subtropical climate, and important natural attractions, especially Maspalomas in Gran Canaria and Mount Teide (a World Heritage Site) in Tenerife. Mount Teide is the highest peak in Spain and the 4th tallest volcano in the world, measured from its base on the ocean floor. The islands have warm summers and winters warm enough for the climate to be technically tropical at sea level. The amount of precipitation and the level of maritime moderation vary depending on location and elevation. The archipelago includes green areas as well as desert. The islands' high mountains are ideal for astronomical observation, because they lie above the temperature inversion layer. As a result, the archipelago boasts two professional observatories: the Teide Observatory on Tenerife, and Roque de los Muchachos Observatory on La Palma.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In 1927, the Province of Canary Islands was split into two provinces. In 1982, the autonomous community of the Canary Islands was established. The cities of Santa Cruz de Tenerife and Las Palmas de Gran Canaria are, jointly, the capitals of the islands. Those cities are also, respectively, the capitals of the provinces of Santa Cruz de Tenerife and Las Palmas. Las Palmas de Gran Canaria has been the largest city in the Canaries since 1768, except for a brief period in the 1910s. Between the 1833 territorial division of Spain and 1927, Santa Cruz de Tenerife was the sole capital of the Canary Islands. In 1927, it was ordered by decree that the capital of the Canary Islands would be shared between two cities, and this arrangement persists to the present day. The third largest city in the Canary Islands is San Cristóbal de La Laguna (another World Heritage Site) on Tenerife.",
"title": ""
},
{
"paragraph_id": 5,
"text": "During the Age of Sail, the islands were the main stopover for Spanish galleons during the Spanish colonisation of the Americas, which sailed that far south in order to catch the prevailing northeasterly trade winds.",
"title": ""
},
{
"paragraph_id": 6,
"text": "The name Islas Canarias is likely derived from the Latin name Canariae Insulae, meaning \"Islands of the Dogs\", perhaps because monk seals or sea dogs were abundant, a name that was evidently generalized from the ancient name of one of these islands, Canaria – presumably Gran Canaria. According to the historian Pliny the Elder, the island Canaria contained \"vast multitudes of dogs of very large size\". The connection to dogs is retained in their depiction on the islands' coat-of-arms.",
"title": "Etymology"
},
{
"paragraph_id": 7,
"text": "Other theories speculate that the name comes from the Nukkari Berber tribe living in the Moroccan Atlas, named in Roman sources as Canarii, though Pliny again mentions the relation of this term with dogs.",
"title": "Etymology"
},
{
"paragraph_id": 8,
"text": "The name of the islands is not derived from the canary bird; rather, the birds are named after the islands.",
"title": "Etymology"
},
{
"paragraph_id": 9,
"text": "Tenerife is the largest and most populous island of the archipelago. Gran Canaria, with 865,070 inhabitants, is both the Canary Islands' second most populous island, and the third most populous one in Spain after Tenerife (966,354 inhabitants) and Majorca (896,038 inhabitants). The island of Fuerteventura is the second largest in the archipelago and located 100 km (62 mi) from the African coast.",
"title": "Physical geography"
},
{
"paragraph_id": 10,
"text": "The islands form the Macaronesia ecoregion with the Azores, Cape Verde, Madeira, and the Savage Isles. The Canary Islands is the largest and most populated archipelago of the Macaronesia region. The archipelago consists of seven large and several smaller islands, all of which are volcanic in origin.",
"title": "Physical geography"
},
{
"paragraph_id": 11,
"text": "According to the position of the islands with respect to the north-east trade winds, the climate can be mild and wet or very dry. Several native species form laurisilva forests.",
"title": "Physical geography"
},
{
"paragraph_id": 12,
"text": "As a consequence, the individual islands in the Canary archipelago tend to have distinct microclimates. Those islands such as El Hierro, La Palma and La Gomera lying to the west of the archipelago have a climate which is influenced by the moist Canary Current. They are well vegetated even at low levels and have extensive tracts of sub-tropical laurisilva forest. As one travels east toward the African coast, the influence of the current diminishes, and the islands become increasingly arid. Fuerteventura and Lanzarote, the islands which are closest to the African mainland, are effectively desert or semi desert. Gran Canaria is known as a \"continent in miniature\" for its diverse landscapes like Maspalomas and Roque Nublo. In terms of its climate Tenerife is particularly interesting. The north of the island lies under the influence of the moist Atlantic winds and is well vegetated, while the south of the island around the tourist resorts of Playa de las Américas and Los Cristianos is arid. The island rises to almost 4,000 m (13,000 ft) above sea level, and at altitude, in the cool relatively wet climate, forests of the endemic pine Pinus canariensis thrive. Many of the plant species in the Canary Islands, like the Canary Island pine and the dragon tree, Dracaena draco are endemic, as noted by Sabin Berthelot and Philip Barker Webb in their work, L'Histoire Naturelle des Îles Canaries (1835–50).",
"title": "Physical geography"
},
{
"paragraph_id": 13,
"text": "The climate is warm subtropical and generally semidesertic, moderated by the sea and in summer by the trade winds. There are a number of microclimates and the classifications range mainly from semi-arid to desert. According to Köppen, the majority of the Canary Islands have a hot desert climate (BWh) and a hot semi-arid climate (BSh), caused partly due to the cool Canary Current. There also exists a subtropical humid climate which is very influenced by the ocean in the middle of the islands of La Gomera, Tenerife and La Palma, where laurisilva cloud forests grow.",
"title": "Physical geography"
},
{
"paragraph_id": 14,
"text": "The seven major islands, one minor island, and several small islets were originally volcanic islands, formed by the Canary hotspot. The Canary Islands is the only place in Spain where volcanic eruptions have been recorded during the Modern Era, with some volcanoes still active (El Hierro, 2011). Volcanic islands such as those in the Canary chain often have steep ocean cliffs caused by catastrophic debris avalanches and landslides. The island chain's most recent eruption occurred at Cumbre Vieja, a volcanic ridge on La Palma, in 2021.",
"title": "Physical geography"
},
{
"paragraph_id": 15,
"text": "The Teide volcano on Tenerife is the highest mountain in Spain, and the third tallest volcano on Earth on a volcanic ocean island. All the islands except La Gomera have been active in the last million years; four of them (Lanzarote, Tenerife, La Palma and El Hierro) have historical records of eruptions since European discovery. The islands rise from Jurassic oceanic crust associated with the opening of the Atlantic. Underwater magmatism commenced during the Cretaceous, and continued to the present day. The current islands reached the ocean's surface during the Miocene. The islands were once considered as a distinct physiographic section of the Atlas Mountains province, which in turn is part of the larger African Alpine System division, but are nowadays recognized as being related to a magmatic hot spot.",
"title": "Physical geography"
},
{
"paragraph_id": 16,
"text": "In the summer of 2011 a series of low-magnitude earthquakes occurred beneath El Hierro. These had a linear trend of northeast–southwest. In October a submarine eruption occurred about 2 km (1+1⁄4 mi) south of Restinga. This eruption produced gases and pumice, but no explosive activity was reported.",
"title": "Physical geography"
},
{
"paragraph_id": 17,
"text": "The following table shows the highest mountains in each of the islands:",
"title": "Physical geography"
},
{
"paragraph_id": 18,
"text": "The official natural symbols associated with Canary Islands are the bird Serinus canaria (canary) and the Phoenix canariensis palm.",
"title": "Physical geography"
},
{
"paragraph_id": 19,
"text": "Four of Spain's thirteen national parks are located in the Canary Islands, more than any other autonomous community. Two of these have been declared UNESCO World Heritage Sites and the other two are part of Biosphere Reserves. The parks are:",
"title": "Physical geography"
},
{
"paragraph_id": 20,
"text": "Teide National Park is the oldest and largest national park in the Canary Islands and one of the oldest in Spain. Located in the geographic centre of the island of Tenerife, it is the most visited national park in Spain. In 2010, it became the most visited national park in Europe and second worldwide. The park's highlight is the Teide volcano; standing at an altitude of 3,715 metres (12,188 ft), it is the highest elevation of the country and the third largest volcano on Earth from its base. In 2007, the Teide National Park was declared one of the 12 Treasures of Spain.",
"title": "Physical geography"
},
{
"paragraph_id": 21,
"text": "The regional executive body, the Parliament of the Canary Islands, is presided over by Fernando Clavijo Batlle (Canarian Coalition), the current President of the Canary Islands. The latter is invested by the members of the regional legislature, the Parliament of the Canary Islands, that consists of 70 elected legislators. The last regional election took place in May 2023.",
"title": "Politics"
},
{
"paragraph_id": 22,
"text": "The islands have 14 seats in the Spanish Senate. Of these, 11 seats are directly elected (3 for Gran Canaria, 3 for Tenerife, and 1 each for Lanzarote (including La Graciosa), Fuerteventura, La Palma, La Gomera and El Hierro) while the other 3 are appointed by the regional legislature.",
"title": "Politics"
},
{
"paragraph_id": 23,
"text": "The Autonomous Community of the Canary Islands consists of two provinces (provincias), Las Palmas and Santa Cruz de Tenerife, whose capitals (Las Palmas de Gran Canaria and Santa Cruz de Tenerife) are capitals of the autonomous community. Each of the seven major islands is ruled by an island council named Cabildo Insular. Each island is subdivided into smaller municipalities (municipios); Las Palmas is divided into 34 municipalities, and Santa Cruz de Tenerife is divided into 54 municipalities.",
"title": "Politics"
},
{
"paragraph_id": 24,
"text": "The international boundary of the Canaries is one subject of dispute in the Morocco-Spain relations. Moreover, in 2022 the UN has declared the Canary Island's territorial waters as Moroccan coast and Morocco has authorised gas and oil exploration in what the Canary Islands states to be Canarian territorial waters and Western Sahara waters. Morocco's official position is that international laws regarding territorial limits do not authorise Spain to claim seabed boundaries based on the territory of the Canaries, since the Canary Islands enjoy a large degree of autonomy. In fact, the islands do not enjoy any special degree of autonomy as each one of the Spanish regions is considered an autonomous community with equal status to the European ones. Under the Law of the Sea, the only islands not granted territorial waters or an exclusive economic zone (EEZ) are those that are not fit for human habitation or do not have an economic life of their own, which is not the case of the Canary Islands.",
"title": "Politics"
},
{
"paragraph_id": 25,
"text": "There are some pro-independence political parties, like the National Congress of the Canaries (CNC) and the Popular Front of the Canary Islands, but their popular support is almost insignificant, with no presence in either the autonomous parliament or the cabildos insulares. According to a 2012 study by the Centro de Investigaciones Sociológicas, when asked about national identity, the majority of respondents from the Canary Islands (53.8%) consider themselves Spanish and Canarian in equal measures, followed by 24% who consider themselves more Canarian than Spanish. Only 6.1% of the respondents consider themselves only Canarian while 7% consider themselves only Spanish.",
"title": "Politics"
},
{
"paragraph_id": 26,
"text": "The defence of the territory is the responsibility of the Spanish Armed Forces. As such, various components of the Army, Navy, Air Force and the Civil Guard are based in the territory.",
"title": "Politics"
},
{
"paragraph_id": 27,
"text": "Before the arrival of humans, the Canaries were inhabited by prehistoric animals; for example, the giant lizard (Gallotia goliath), the Tenerife and Gran Canaria giant rats, and giant prehistoric tortoises, Geochelone burchardi and Geochelone vulcanica.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Although the original settlement of what are now called the Canary Islands is not entirely clear, linguistic, genetic, and archaeological analyses indicate that indigenous peoples were living on the Canary Islands at least 2000 years ago but possibly one thousand years or more before, and that they shared a common origin with the Berbers on the nearby North African coast. Reaching the islands may have taken place using several small boats, landing on the easternmost islands Lanzarote and Fuerteventura. These groups came to be known collectively as the Guanches, although Guanches had been the name for only the indigenous inhabitants of Tenerife.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "As José Farrujia describes, 'The indigenous Canarians lived mainly in natural caves, usually near the coast, 300–500m above sea level. These caves were sometimes isolated but more commonly formed settlements, with burial caves nearby'. Archaeological work has uncovered a rich culture visible through artefacts of ceramics, human figures, fishing, hunting and farming tools, plant fibre clothing and vessels, as well as cave paintings. At Lomo de los Gatos on Gran Canaria, a site occupied from 1,600 years ago up until the 1960s, round stone houses, complex burial sites, and associated artefacts have been found. Across the islands are thousands of Libyco-Berber alphabet inscriptions scattered and they have been extensively documented by many linguists.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "The social structure of indigenous Canarians encompassed 'a system of matrilineal descent in most of the islands, in which inheritance was passed on via the female line. Social status and wealth were hereditary and determined the individual's position in the social pyramid, which consisted of the king, the relatives of the king, the lower nobility, villeins, plebeians, and finally executioners, butchers, embalmers, and prisoners'. Their religion was animist, centring on the sun and moon, as well as natural features such as mountains.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "The islands may have been visited by the Phoenicians, the Greeks, and the Carthaginians. King Juba II, Caesar Augustus's Numidian protégé, is credited with discovering the islands for the Western world. According to Pliny the Elder, Juba found the islands uninhabited, but found \"a small temple of stone\" and \"some traces of buildings\". Juba dispatched a naval contingent to re-open the dye production facility at Mogador in what is now western Morocco in the early first century AD. That same naval force was subsequently sent on an exploration of the Canary Islands, using Mogador as their mission base.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "The names given by Romans to the individual islands were Ninguaria or Nivaria (Tenerife), Canaria (Gran Canaria), Pluvialia or Invale (Lanzarote), Ombrion (La Palma), Planasia (Fuerteventura), Iunonia or Junonia (El Hierro) and Capraria (La Gomera).",
"title": "History"
},
{
"paragraph_id": 33,
"text": "From the 14th century onward, numerous visits were made by sailors from Majorca, Portugal and Genoa. Lancelotto Malocello settled on Lanzarote in 1312. The Majorcans established a mission with a bishop in the islands that lasted from 1350 to 1400.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In 1402, the Castilian colonisation of the islands began with the expedition of the French explorers Jean de Béthencourt and Gadifer de la Salle, nobles and vassals of Henry III of Castile, to Lanzarote. From there, they went on to conquer Fuerteventura (1405) and El Hierro. These invasions were \"brutal cultural and military clashes between the indigenous population and the Castilians\" lasting over a century due to formidable resistance by indigenous Canarians. Professor Mohamed Adhikari has defined the conquest of the islands as a genocide of the Guanches.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "Béthencourt received the title King of the Canary Islands, but still recognised King Henry III as his overlord. It was not a simple military enterprise, given the aboriginal resistance on some islands. Neither was it politically, since the particular interests of the nobility (determined to strengthen their economic and political power through the acquisition of the islands) conflicted with those of the states, particularly Castile, which were in the midst of territorial expansion and in a process of strengthening of the Crown against the nobility.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "Historians distinguish two periods in the conquest of the Canary Islands:",
"title": "History"
},
{
"paragraph_id": 37,
"text": "Aristocratic conquest (Conquista señorial). This refers to the early conquests carried out by the nobility, for their own benefit and without the direct participation of the Crown of Castile, which merely granted rights of conquest in exchange for pacts of vassalage between the noble conqueror and the Crown. One can identify within this period an early phase known as the Betancurian or Norman Conquest, carried out by Jean de Bethencourt (who was originally from Normandy) and Gadifer de la Salle between 1402 and 1405, which involved the islands of Lanzarote, El Hierro and Fuerteventura. The subsequent phase is known as the Castilian Conquest, carried out by Castilian nobles who acquired, through purchases, assignments and marriages, the previously conquered islands and also incorporated the island of La Gomera around 1450.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "Royal conquest (Conquista realenga). This defines the conquest between 1478 and 1496, carried out directly by the Crown of Castile, during the reign of the Catholic Monarchs, who armed and partly financed the conquest of those islands which were still unconquered: Gran Canaria, La Palma and Tenerife. This phase of the conquest came to an end in the year 1496, with the dominion of the island of Tenerife, bringing the entire Canarian Archipelago under the control of the Crown of Castile.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "Béthencourt also established a base on the island of La Gomera, but it would be many years before the island was fully conquered. The natives of La Gomera, and of Gran Canaria, Tenerife, and La Palma, resisted the Castilian invaders for almost a century. In 1448 Maciot de Béthencourt sold the lordship of Lanzarote to Portugal's Prince Henry the Navigator, an action that was accepted by neither the natives nor the Castilians. Despite Pope Nicholas V ruling that the Canary Islands were under Portuguese control, the crisis swelled to a revolt which lasted until 1459 with the final expulsion of the Portuguese. In 1479, Portugal and Castile signed the Treaty of Alcáçovas, which settled disputes between Castile and Portugal over the control of the Atlantic. This treaty recognized Castilian control of the Canary Islands but also confirmed Portuguese possession of the Azores, Madeira, and the Cape Verde islands, and gave the Portuguese rights to any further islands or lands in the Atlantic that might be discovered.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "The Castilians continued to dominate the islands, but due to the topography and the resistance of the native Guanches, they did not achieve complete control until 1496, when Tenerife and La Palma were finally subdued by Alonso Fernández de Lugo. As a result of this 'the native pre-Hispanic population declined quickly due to war, epidemics, and slavery'. The Canaries were incorporated into the Kingdom of Castile.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "After the conquest, the Castilians imposed a new economic model, based on single-crop cultivation: first sugarcane; then wine, an important item of trade with England. Gran Canaria was conquered by the Crown of Castile on 6 March 1480, and Tenerife was conquered in 1496, and each had its own governor. There has been speculation that the abundance of Roccella tinctoria on the Canary Islands offered a profit motive for Jean de Béthencourt during his conquest of the islands. Lichen has been used for centuries to make dyes. This includes royal purple colors derived from roccella tinctoria, also known as orseille.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "The objective of the Spanish Crown to convert the islands into a powerhouse of cultivation required a much larger labour force. This was attained through a brutal practice of enslavement, not only of indigenous Canarians but large numbers of Africans who were forcibly taken from North and Sub-Saharan Africa. Whilst the first slave plantations in the Atlantic region were across Madeira, Cape Verde, and the Canary Islands, it was only the Canary Islands which had an indigenous population and were therefore invaded rather than newly occupied.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "This agriculture industry was largely based on sugarcane and the Castilians converted large swaths of the landscape for sugarcane production, and the processing and manufacturing of sugar, facilitated by enslaved labourers. The cities of Santa Cruz de Tenerife and Las Palmas de Gran Canaria became a stopping point for the Spanish traders, as well as conquistadors, and missionaries on their way to the New World. This trade route brought great wealth to the Castilian social sectors of the islands and soon were attracting merchants and adventurers from all over Europe. As the wealth grew, enslaved African workers were also forced into demeaning domestic roles for the rich Castilians on the islands such as servants in their houses. Research on the skeletons of some of these enslaved workers from the burial site of Finca Clavijo on Gran Canaria have showed that 'all of the adults buried in Finca Clavijo undertook extensive physical activity that involved significant stress on the spine and appendicular skeleton' that result from relentless hard labour, akin to the physical abnormalities found with enslaved peoples from other sugarcane plantations around the world. These findings of the physical strain that the enslaved at Finca Clavijo were subjected to in order to provide wealth for the Spanish elite has inspired a poem by British writer Ralph Hoyte, entitled Close to the Bone.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "As a result of the huge wealth generated, magnificent palaces and churches were built on La Palma during this busy, prosperous period. The Church of El Salvador survives as one of the island's finest examples of the architecture of the 16th century. Civilian architecture survives in forms such as Casas de los Sánchez-Ochando or Casa Quintana.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "The Canaries' wealth invited attacks by pirates and privateers. Ottoman Turkish admiral and privateer Kemal Reis ventured into the Canaries in 1501, while Murat Reis the Elder captured Lanzarote in 1585.",
"title": "History"
},
{
"paragraph_id": 46,
"text": "The most severe attack took place in 1599, during the Dutch Revolt. A Dutch fleet of 74 ships and 12,000 men, commanded by Pieter van der Does, attacked the capital Las Palmas de Gran Canaria (the city had 3,500 of Gran Canaria's 8,545 inhabitants). The Dutch attacked the Castillo de la Luz, which guarded the harbor. The Canarians evacuated civilians from the city, and the Castillo surrendered (but not the city). The Dutch moved inland, but Canarian cavalry drove them back to Tamaraceite, near the city.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "The Dutch then laid siege to the city, demanding the surrender of all its wealth. They received 12 sheep and 3 calves. Furious, the Dutch sent 4,000 soldiers to attack the Council of the Canaries, who were sheltering in the village of Santa Brígida. Three hundred Canarian soldiers ambushed the Dutch in the village of Monte Lentiscal, killing 150 and forcing the rest to retreat. The Dutch concentrated on Las Palmas de Gran Canaria, attempting to burn it down. The Dutch pillaged Maspalomas, on the southern coast of Gran Canaria, San Sebastián on La Gomera, and Santa Cruz on La Palma, but eventually gave up the siege of Las Palmas and withdrew.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "In 1618 the Barbary pirates from North Africa attacked Lanzarote and La Gomera taking 1000 captives to be sold as slaves. Another noteworthy attack occurred in 1797, when Santa Cruz de Tenerife was attacked by a British fleet under Horatio Nelson on 25 July. The British were repulsed, losing almost 400 men. It was during this battle that Nelson lost his right arm.",
"title": "History"
},
{
"paragraph_id": 49,
"text": "The sugar-based economy of the islands faced stiff competition from Spain's Caribbean colonies. Low sugar prices in the 19th century caused severe recessions on the islands. A new cash crop, cochineal (cochinilla), came into cultivation during this time, reinvigorating the islands' economy. During this time the Canarian-American trade was developed, in which Canarian products such as cochineal, sugarcane and rum were sold in American ports such as Veracruz, Campeche, La Guaira and Havana, among others.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "By the end of the 18th century, Canary Islanders had already emigrated to Spanish American territories, such as Havana, Veracruz, and Santo Domingo, San Antonio, Texas and St. Bernard Parish, Louisiana. These economic difficulties spurred mass emigration during the 19th and first half of the 20th century, primarily to the Americas. Between 1840 and 1890 as many as 40,000 Canary Islanders emigrated to Venezuela. Also, thousands of Canarians moved to Puerto Rico where the Spanish monarchy felt that Canarians would adapt to island life better than other immigrants from the mainland of Spain. Deeply entrenched traditions, such as the Mascaras Festival in the town of Hatillo, Puerto Rico, are an example of Canarian culture still preserved in Puerto Rico. Similarly, many thousands of Canarians emigrated to the shores of Cuba. During the Spanish–American War of 1898, the Spanish fortified the islands against a possible American attack, but no such event took place.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "Sirera and Renn (2004) distinguish two different types of expeditions, or voyages, during the period 1770–1830, which they term \"the Romantic period\":",
"title": "History"
},
{
"paragraph_id": 52,
"text": "First are \"expeditions financed by the States, closely related with the official scientific Institutions. characterised by having strict scientific objectives (and inspired by) the spirit of Illustration and progress\". In this type of expedition, Sirera and Renn include the following travellers:",
"title": "History"
},
{
"paragraph_id": 53,
"text": "The second type of expedition identified by Sirera and Renn is one that took place starting from more or less private initiatives. Among these, the key exponents were the following:",
"title": "History"
},
{
"paragraph_id": 54,
"text": "Sirera and Renn identify the period 1770–1830 as one in which \"In a panorama dominated until that moment by France and England enters with strength and brio Germany of the Romantic period whose presence in the islands will increase\".",
"title": "History"
},
{
"paragraph_id": 55,
"text": "At the beginning of the 20th century, the British introduced a new cash-crop, the banana, the export of which was controlled by companies such as Fyffes.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "30 November 1833 the Province of Canary Islands had been created with the capital being declared as Santa Cruz de Tenerife. The rivalry between the cities of Las Palmas de Gran Canaria and Santa Cruz de Tenerife for the capital of the islands led to the division of the archipelago into two provinces on 23 September 1927.",
"title": "History"
},
{
"paragraph_id": 57,
"text": "During the time of the Second Spanish Republic, Marxist and anarchist workers' movements began to develop, led by figures such as Jose Miguel Perez and Guillermo Ascanio. However, outside of a few municipalities, these organisations were a minority and fell easily to Nationalist forces during the Spanish Civil War.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "In 1936, Francisco Franco was appointed General Commandant of the Canaries. He joined the military revolt of 17 July which began the Spanish Civil War. Franco quickly took control of the archipelago, except for a few points of resistance on La Palma and in the town of Vallehermoso, on La Gomera. Though there was never a war in the islands, the post-war suppression of political dissent on the Canaries was most severe.",
"title": "History"
},
{
"paragraph_id": 59,
"text": "During the Second World War, Winston Churchill prepared plans for the British seizure of the Canary Islands as a naval base, in the event of Gibraltar being invaded from the Spanish mainland. The planned operation was known as Operation Pilgrim.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "Opposition to Franco's regime did not begin to organise until the late 1950s, which experienced an upheaval of parties such as the Communist Party of Spain and the formation of various nationalist, leftist parties.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "During the Ifni War, the Franco regime set up concentration camps on the islands to extrajudicially imprison those in Western Sahara suspected of disloyalty to Spain, many of whom were colonial troops recruited on the spot but were later deemed to be potential fifth columnists and deported to the Canary Islands. These camps were characterised by the use of forced labour for infrastructure projects and highly unsanitary conditions resulting in the widespread occurrence of tuberculosis.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "After the death of Franco, there was a pro-independence armed movement based in Algeria, the Movement for the Independence and Self-determination of the Canaries Archipelago (MAIAC). In 1968, the Organisation of African Unity recognized the MAIAC as a legitimate African independence movement, and declared the Canary Islands as an African territory still under foreign rule.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "After the establishment of a democratic constitutional monarchy in Spain, autonomy was granted to the Canaries via a law passed in 1982, with a newly established autonomous devolved government and parliament. In 1983, the first autonomous elections were held. The Spanish Socialist Workers' Party (PSOE) won. In the 2007 elections, the PSOE gained a plurality of seats, but the nationalist Canarian Coalition and the conservative Partido Popular (PP) formed a ruling coalition government.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "At present, the Canary Islands is the only autonomous community in Spain that has two capitals: Santa Cruz de Tenerife and Las Palmas de Gran Canaria, since the Statute of Autonomy of the Canary Islands [es] was created in 1982.",
"title": "History"
},
{
"paragraph_id": 65,
"text": "The political capital of the archipelago did not exist as such until the nineteenth century. The first cities founded by the Europeans at the time of the conquest of the Canary Islands in the 15th century were: Telde (in Gran Canaria), San Marcial del Rubicón (in Lanzarote) and Betancuria (in Fuerteventura). These cities boasted the first European institutions present in the archipelago, including Catholic bishoprics. Although, because the period of splendor of these cities developed before the total conquest of the archipelago and its incorporation into the Crown of Castile never had a political and real control of the entire Canary archipelago.",
"title": "History"
},
{
"paragraph_id": 66,
"text": "The function of a Canarian city with full jurisdiction for the entire archipelago only exists after the conquest of the Canary Islands, although originally de facto, that is, without legal and real meaning and linked to the headquarters of the Canary Islands General Captaincy.",
"title": "History"
},
{
"paragraph_id": 67,
"text": "Las Palmas de Gran Canaria was the first city that exercised this function. This is because the residence of the Captain General of the Canary Islands was in this city during part of the sixteenth and seventeenth centuries. In May 1661, the Captain General of the Canary Islands, Jerónimo de Benavente y Quiñones, moved the headquarters of the captaincy to the city of San Cristóbal de La Laguna on the island of Tenerife. This was due to the fact that this island since the conquest was the most populated, productive and with the highest economic expectations. La Laguna would be considered the de facto capital of the archipelago until the official status of the capital of Canary Islands in the city of Santa Cruz de Tenerife was confirmed in the 19th century, due in part to the constant controversies and rivalries between the bourgeoisies of San Cristóbal de La Laguna and Las Palmas de Gran Canaria for the economic, political and institutional hegemony of the archipelago.",
"title": "History"
},
{
"paragraph_id": 68,
"text": "Already in 1723, the Captain General of the Canary Islands Lorenzo Fernandez de Villavicencio had moved the headquarters of the General Captaincy of the Canary Islands from San Cristóbal de La Laguna to Santa Cruz de Tenerife. This decision continued without pleasing the society of the island of Gran Canaria. It would be after the creation of the Province of Canary Islands in November 1833 in which Santa Cruz would become the first fully official capital of the Canary Islands (De jure and not of de facto as happened previously). Santa Cruz de Tenerife would be the capital of the Canary archipelago until during the Government of General Primo de Rivera in 1927 the Province of Canary Islands was split in two provinces: Las Palmas with capital in Las Palmas de Gran Canaria, and Santa Cruz de Tenerife with capital in the homonymous city.",
"title": "History"
},
{
"paragraph_id": 69,
"text": "Finally, with the Statute of Autonomy of the Canary Islands in 1982 and the creation of the Autonomous Community of the Canary Islands, the capital of the archipelago between Las Palmas de Gran Canaria and Santa Cruz de Tenerife is fixed, which is how it remains today.",
"title": "History"
},
{
"paragraph_id": 70,
"text": "The Canary Islands have a population of 2,153,389 inhabitants (2019), making it the eighth most populous of Spain's autonomous communities. The total area of the archipelago is 7,493 km (2,893 sq mi), resulting in a population density of 287.4 inhabitants per square kilometre.",
"title": "Demographics"
},
{
"paragraph_id": 71,
"text": "The population of the islands according to the 2019 data are:",
"title": "Demographics"
},
{
"paragraph_id": 72,
"text": "The Canary Islands have become home to many European residents, mainly coming from Italy, Germany and the UK. Because of the vast immigration to Venezuela and Cuba during the second half of the 20th century and the later return to the Canary Islands of these people along with their families, there are many residents whose country of origin was Venezuela (66,593) or Cuba (41,807). Since the 1990s, many illegal migrants have reached the Canary Islands, Melilla and Ceuta, using them as entry points to the EU.",
"title": "Demographics"
},
{
"paragraph_id": 73,
"text": "The Catholic Church has been the majority religion in the archipelago for more than five centuries, ever since the Conquest of the Canary Islands. There are also several other religious communities.",
"title": "Demographics"
},
{
"paragraph_id": 74,
"text": "The overwhelming majority of native Canarians are Roman Catholic (76.7%) with various smaller foreign-born populations of other Christian beliefs such as Protestants.",
"title": "Demographics"
},
{
"paragraph_id": 75,
"text": "The appearance of the Virgin of Candelaria (Patron of Canary Islands) was credited with moving the Canary Islands toward Christianity. Two Catholic saints were born in the Canary Islands: Peter of Saint Joseph de Betancur and José de Anchieta. Both born on the island of Tenerife, they were respectively missionaries in Guatemala and Brazil.",
"title": "Demographics"
},
{
"paragraph_id": 76,
"text": "The Canary Islands are divided into two Catholic dioceses, each governed by a bishop:",
"title": "Demographics"
},
{
"paragraph_id": 77,
"text": "Separate from the overwhelming Christian majority are a minority of Muslims. Among the followers of Islam, the Islamic Federation of the Canary Islands exists to represent the Islamic community in the Canary Islands as well as to provide practical support to members of the Islamic community. For its part, there is also the Evangelical Council of the Canary Islands in the archipelago.",
"title": "Demographics"
},
{
"paragraph_id": 78,
"text": "Other religious faiths represented include Jehovah's Witnesses, The Church of Jesus Christ of Latter-day Saints as well as Hinduism. Minority religions are also present such as the Church of the Guanche People which is classified as a neo-pagan native religion. Also present are Buddhism, Judaism, Baháʼí, African religion, and Chinese religions.",
"title": "Demographics"
},
{
"paragraph_id": 79,
"text": "According to Statista in 2022, there are 80.171 Muslims in Canary Islands.",
"title": "Demographics"
},
{
"paragraph_id": 80,
"text": "The distribution of beliefs in 2012 according to the CIS Barometer Autonomy was as follows:",
"title": "Demographics"
},
{
"paragraph_id": 81,
"text": "Ordered from west to east, the Canary Islands are El Hierro, La Palma, La Gomera, Tenerife, Gran Canaria, Fuerteventura, and Lanzarote. In addition, north of Lanzarote are the islets of La Graciosa, Montaña Clara, Alegranza, Roque del Este and Roque del Oeste, belonging to the Chinijo Archipelago, and northeast of Fuerteventura is the islet of Lobos. There are also a series of small adjacent rocks in the Canary Islands: the Roques de Anaga, Garachico and Fasnia in Tenerife, and those of Salmor and Bonanza in El Hierro.",
"title": "Islands"
},
{
"paragraph_id": 82,
"text": "El Hierro, the westernmost island, covers 268.71 km (103.75 sq mi), making it the second smallest of the major islands, and the least populous with 10,798 inhabitants. The whole island was declared Reserve of the Biosphere in 2000. Its capital is Valverde. Also known as Ferro, it was once believed to be the westernmost land in the world. Ancient European geographers such as Ptolemy recognised the island as the prime meridian of longitude. That remained so until the 19th century when it was displaced by the one passing through Greenwich.",
"title": "Islands"
},
{
"paragraph_id": 83,
"text": "Fuerteventura, with a surface of 1,660 km (640 sq mi), is the second largest island of the archipelago. It has been declared a biosphere reserve by UNESCO. It has a population of 113,275. The oldest of the islands, it is more eroded. Its highest point is the Peak of the Bramble, at a height of 807 metres (2,648 feet). Its capital is Puerto del Rosario.",
"title": "Islands"
},
{
"paragraph_id": 84,
"text": "Gran Canaria has 846,717 inhabitants. The capital, Las Palmas de Gran Canaria (377,203 inhabitants), is the most populous city and shares the status of capital of the Canaries with Santa Cruz de Tenerife. Gran Canaria's surface area is 1,560 km (600 sq mi). Roque Nublo 1,813 metres (5,948 feet) and Pico de las Nieves (\"Peak of Snow\") 1,949 metres (6,394 feet) are located in the center of the island. On the south of the island are the Maspalomas Dunes (Gran Canaria).",
"title": "Islands"
},
{
"paragraph_id": 85,
"text": "La Gomera has an area of 369.76 km (142.77 sq mi) and is the second least populous island with 21,136 inhabitants. Geologically it is one of the oldest of the archipelago. The insular capital is San Sebastian de La Gomera. Garajonay National Park is located on the island.",
"title": "Islands"
},
{
"paragraph_id": 86,
"text": "Lanzarote is the easternmost island and one of the oldest of the archipelago, and it has shown evidence of recent volcanic activity. It has a surface of 845.94 km (326.62 sq mi), and a population of 149,183 inhabitants, including the adjacent islets of the Chinijo Archipelago. The capital is Arrecife, with 56,834 inhabitants.",
"title": "Islands"
},
{
"paragraph_id": 87,
"text": "The Chinijo Archipelago includes the islands La Graciosa, Alegranza, Montaña Clara, Roque del Este and Roque del Oeste. It has a surface of 40.8 km (15.8 sq mi), and only La Graciosa is populated, with 658 inhabitants. With 29 km (11 sq mi), La Graciosa, is the largest island of the Chinijo Archipelago but also the smallest inhabited island of the Canaries.",
"title": "Islands"
},
{
"paragraph_id": 88,
"text": "La Palma, with 81,863 inhabitants covering an area of 708.32 km (273.48 sq mi), is in its entirety a biosphere reserve. For long it showed no signs of volcanic activity, even though the volcano Teneguía entered into eruption last in 1971. On September 19, 2021, the volcanic Cumbre Vieja on the island erupted. It is the second-highest island of the Canaries, with the Roque de los Muchachos at 2,423 metres (7,949 feet) as its highest point. Santa Cruz de La Palma (known to those on the island as simply \"Santa Cruz\") is its capital.",
"title": "Islands"
},
{
"paragraph_id": 89,
"text": "Tenerife is, with its area of 2,034 km (785 sq mi), the most extensive island of the Canary Islands. In addition, with 904,713 inhabitants it is the most populated island of the archipelago and Spain. Two of the islands' principal cities are located on it: the capital, Santa Cruz de Tenerife and San Cristóbal de La Laguna (a World Heritage Site). San Cristóbal de La Laguna, the second city of the island is home to the oldest university in the Canary Islands, the University of La Laguna. Teide, with its 3,715 metres (12,188 feet) is the highest peak of Spain and also a World Heritage Site. Tenerife is the site of the worst air disaster in the history of aviation, in which 583 people were killed in the collision of two Boeing 747s on 27 March 1977.",
"title": "Islands"
},
{
"paragraph_id": 90,
"text": "Graciosa Island or commonly La Graciosa is a volcanic island in the Canary Islands of Spain, located 2 km (1.2 mi) north of the island of Lanzarote across the Strait of El Río. It was formed by the Canary hotspot. The island is part of the Chinijo Archipelago and the Chinijo Archipelago Natural Park (Parque Natural del Archipiélago Chinijo). It is administered by the municipality of Teguise. In 2018 La Graciosa officially became the eighth Canary Island. Before then, La Graciosa had the status of an islet, administratively dependent on the island of Lanzarote. It is the smallest and least populated of the main islands, with a population of about 700 people.",
"title": "Islands"
},
{
"paragraph_id": 91,
"text": "The economy is based primarily on tourism, which makes up 32% of the GDP. The Canaries receive about 12 million tourists per year. Construction makes up nearly 20% of the GDP and tropical agriculture, primarily bananas and tobacco, are grown for export to Europe and the Americas. Ecologists are concerned that the resources, especially in the more arid islands, are being overexploited but there are still many agricultural resources like tomatoes, potatoes, onions, cochineal, sugarcane, grapes, vines, dates, oranges, lemons, figs, wheat, barley, maize, apricots, peaches and almonds.",
"title": "Economy and environment"
},
{
"paragraph_id": 92,
"text": "Water resources are also being overexploited, due to the high water usage by tourists. Also, some islands (such as Gran Canaria and Tenerife) overexploit the ground water. This is done in such degree that, according to European and Spanish legal regulations, the current situation is not acceptable. To address the problems, good governance and a change in the water use paradigm have been proposed. These solutions depend largely on controlling water use and on demand management. As this is administratively difficult and politically unpalatable, most action is currently directed at increasing the public offer of water through import from outside; a decision which is economically, politically and environmentally questionable.",
"title": "Economy and environment"
},
{
"paragraph_id": 93,
"text": "To bring in revenue for environmental protection, innovation, training and water sanitation a tourist tax was considered in 2018, along with a doubling of the ecotax and restrictions on holiday rents in the zones with the greatest pressure of demand.",
"title": "Economy and environment"
},
{
"paragraph_id": 94,
"text": "The economy is € 25 billion (2001 GDP figures). The islands experienced continuous growth during a 20-year period, up until 2001, at a rate of approximately 5% annually. This growth was fueled mainly by huge amounts of foreign direct investment, mostly to develop tourism real estate (hotels and apartments), and European Funds (near €11 billion in the period from 2000 to 2007), since the Canary Islands are labelled Region Objective 1 (eligible for euro structural funds). Additionally, the EU allows the Canary Islands Government to offer special tax concessions for investors who incorporate under the Zona Especial Canaria (ZEC) regime and create more than five jobs.",
"title": "Economy and environment"
},
{
"paragraph_id": 95,
"text": "Spain gave permission in August 2014 for Repsol and its partners to explore oil and natural gas prospects off the Canary Islands, involving an investment of €7.5 billion over four years, to commence at the end of 2016. Repsol at the time said the area could ultimately produce 100,000 barrels of oil a day, which would meet 10 percent of Spain's energy needs. However, the analysis of samples obtained did not show the necessary volume nor quality to consider future extraction, and the project was scrapped.",
"title": "Economy and environment"
},
{
"paragraph_id": 96,
"text": "Despite currently having very high dependence on fossil fuels, research on the renewable energy potential concluded that a high potential for renewable energy technologies exists on the archipelago. This, in such extent even that a scenario pathway to 100% renewable energy supply by 2050 has been put forward.",
"title": "Economy and environment"
},
{
"paragraph_id": 97,
"text": "The Canary Islands have great natural attractions, climate and beaches make the islands a major tourist destination, being visited each year by about 12 million people (11,986,059 in 2007, noting 29% of Britons, 22% of Spanish (from outside the Canaries), and 21% of Germans). Among the islands, Tenerife has the largest number of tourists received annually, followed by Gran Canaria and Lanzarote. The archipelago's principal tourist attraction is the Teide National Park (in Tenerife) where the highest mountain in Spain and third largest volcano in the world (Mount Teide), receives over 2.8 million visitors annually.",
"title": "Economy and environment"
},
{
"paragraph_id": 98,
"text": "The combination of high mountains, proximity to Europe, and clean air has made the Roque de los Muchachos peak (on La Palma island) a leading location for telescopes like the Grantecan.",
"title": "Economy and environment"
},
{
"paragraph_id": 99,
"text": "The islands, as an autonomous region of Spain, are in the European Union and the Schengen Area. They are in the European Union Customs Union but outside the VAT area. Instead of VAT there is a local Sales Tax (IGIC) which has a general rate of 7%, an increased tax rate of 13.5%, a reduced tax rate of 3% and a zero tax rate for certain basic need products and services. Consequently, some products are subject to additional VAT if being exported from the islands into mainland Spain or the rest of the EU.",
"title": "Economy and environment"
},
{
"paragraph_id": 100,
"text": "Canarian time is Western European Time (WET) (or GMT; in summer one hour ahead of GMT). So Canarian time is one hour behind that of mainland Spain and the same as that of the UK, Ireland and mainland Portugal all year round.",
"title": "Economy and environment"
},
{
"paragraph_id": 101,
"text": "The number of tourists who visited the Canary Islands had been in 2018 16,150,054 and in the year 2019 15,589,290.",
"title": "Economy and environment"
},
{
"paragraph_id": 102,
"text": "The Gross Domestic Product (GDP) in the Canary Islands in 2015 was €40,923 million, €19,222 per capita. The figures by island are as follows:",
"title": "Economy and environment"
},
{
"paragraph_id": 103,
"text": "The Canary Islands have eight airports altogether, two of the main ports of Spain, and an extensive network of autopistas (highways) and other roads. For a road map see multimap. Traffic congestion is sometimes a problem in Tenerife and on Grand Canaria.",
"title": "Transport"
},
{
"paragraph_id": 104,
"text": "Large ferry boats and fast ferries link most of the islands. Both types can transport large numbers of passengers, cargo, and vehicles. Fast ferries are made of aluminium and powered by modern and efficient diesel engines, while conventional ferries have a steel hull and are powered by heavy oil. Fast ferries travel in excess of 30 kn (56 km/h; 35 mph); conventional ferries travel in excess of 20 kn (37 km/h; 23 mph), but are slower than fast ferries. A typical ferry ride between La Palma and Tenerife may take up to eight hours or more while a fast ferry takes about two and a half hours and between Tenerife and Gran Canaria can be about one hour.",
"title": "Transport"
},
{
"paragraph_id": 105,
"text": "The largest airport is the Gran Canaria Airport. Tenerife has two airports, Tenerife North Airport and Tenerife South Airport. The island of Tenerife gathers the highest passenger movement of all the Canary Islands through its two airports. The two main islands (Tenerife and Gran Canaria) receive the greatest number of passengers. Tenerife 6,204,499 passengers and Gran Canaria 5,011,176 passengers.",
"title": "Transport"
},
{
"paragraph_id": 106,
"text": "The port of Las Palmas is first in freight traffic in the islands, while the port of Santa Cruz de Tenerife is the first fishing port with approximately 7,500 tons of fish caught, according to the Spanish government publication Statistical Yearbook of State Ports. Similarly, it is the second port in Spain as regards ship traffic, only surpassed by the Port of Algeciras Bay. The port's facilities include a border inspection post (BIP) approved by the European Union, which is responsible for inspecting all types of imports from third countries or exports to countries outside the European Economic Area. The port of Los Cristianos (Tenerife) has the greatest number of passengers recorded in the Canary Islands, followed by the port of Santa Cruz de Tenerife. The Port of Las Palmas is the third port in the islands in passengers and first in number of vehicles transported.",
"title": "Transport"
},
{
"paragraph_id": 107,
"text": "The SS America was beached at the Canary islands on 18 January 1994. However, the ocean liner broke apart after the course of several years and eventually sank beneath the surface.",
"title": "Transport"
},
{
"paragraph_id": 108,
"text": "The Tenerife Tram opened in 2007 and is currently the only one in the Canary Islands, travelling between the cities of Santa Cruz de Tenerife and San Cristóbal de La Laguna.",
"title": "Transport"
},
{
"paragraph_id": 109,
"text": "Three more railway lines are being planned for the Canary Islands:",
"title": "Transport"
},
{
"paragraph_id": 110,
"text": "The Servicio Canario de Salud is an autonomous body of administrative nature attached to the Ministry responsible for Health of the Government of the Canary Islands. The majority of the archipelago's hospitals belong to this organization:",
"title": "Health"
},
{
"paragraph_id": 111,
"text": "The Canary Islands were previously inhabited by a variety of endemic animals, such as extinct giant lizards (Gallotia goliath), giant tortoises (Centrochelys burchardi and C. vulcanica), and Tenerife and Gran Canaria giant rats (Canariomys bravoi and C. tamarani), among others. Extinct birds known only from Pleistocene and Holocene age bones include the Canary Islands quail (Coturnix gomerae), Dune shearwater (Puffinus holeae), Lava shearwater (P. olsoni), Trias greenfinch (Chloris triasi), Slender-billed greenfinch (C. aurelioi) and the Long-legged bunting (Emberiza alcoveri).",
"title": "Wildlife"
},
{
"paragraph_id": 112,
"text": "The bird life includes European and African species, such as the black-bellied sandgrouse, Canary, Graja, a subspecies of red-billed chough endemic to La Palma, Gran Canaria blue chaffinch, Tenerife blue chaffinch, Canary Islands chiffchaff, Fuerteventura chat, Tenerife goldcrest, La Palma chaffinch, Canarian Egyptian vulture, Bolle's pigeon, Laurel pigeon, Plain swift, and Houbara bustard.",
"title": "Wildlife"
},
{
"paragraph_id": 113,
"text": "Terrestrial fauna includes the El Hierro giant lizard, La Gomera giant lizard, and the La Palma giant lizard. Mammals include the Canarian shrew, Canary big-eared bat, the Algerian hedgehog, and the more recently introduced mouflon.",
"title": "Wildlife"
},
{
"paragraph_id": 114,
"text": "The marine life found in the Canary Islands is also varied, being a combination of North Atlantic, Mediterranean and endemic species. In recent years, the increasing popularity of both scuba diving and underwater photography have provided biologists with much new information on the marine life of the islands.",
"title": "Wildlife"
},
{
"paragraph_id": 115,
"text": "Fish species found in the islands include many species of shark, ray, moray eel, bream, jack, grunt, scorpionfish, triggerfish, grouper, goby, and blenny. In addition, there are many invertebrate species, including sponge, jellyfish, anemone, crab, mollusc, sea urchin, starfish, sea cucumber and coral.",
"title": "Wildlife"
},
{
"paragraph_id": 116,
"text": "There are a total of five different species of marine turtle that are sighted periodically in the islands, the most common of these being the endangered loggerhead sea turtle. The other four are the green sea turtle, hawksbill sea turtle, leatherback sea turtle and Kemp's ridley sea turtle. Currently, there are no signs that any of these species breed in the islands, and so those seen in the water are usually migrating. However, it is believed that some of these species may have bred in the islands in the past, and there are records of several sightings of leatherback sea turtle on beaches in Fuerteventura, adding credibility to the theory.",
"title": "Wildlife"
},
{
"paragraph_id": 117,
"text": "Marine mammals include the large varieties of cetaceans including rare and not well-known species (see more details in the Marine life of the Canary Islands). Hooded seals have also been known to be vagrant in the Canary Islands every now and then. The Canary Islands were also formerly home to a population of the rarest pinniped in the world, the Mediterranean monk seal.",
"title": "Wildlife"
},
{
"paragraph_id": 118,
"text": "Some holidays of those celebrated in the Canary Islands are international and national, others are regional holidays and others are of insular character. The official day of the autonomous community is Canary Islands Day on 30 May. The anniversary of the first session of the Parliament of the Canary Islands, based in the city of Santa Cruz de Tenerife, held on 30 May 1983, is commemorated with this day.",
"title": "Holidays"
},
{
"paragraph_id": 119,
"text": "The common festive calendar throughout the Canary Islands is as follows:",
"title": "Holidays"
},
{
"paragraph_id": 120,
"text": "In addition, each of the islands has an island festival, in which it is a holiday only on that specific island. These are the festivities of island patrons saints of each island. Organized chronologically are:",
"title": "Holidays"
},
{
"paragraph_id": 121,
"text": "The most famous festivals of the Canary Islands is the carnival. It is the most famous and international festival of the archipelago. The carnival is celebrated in all the islands and all its municipalities, perhaps the two busiest are those of the two Canarian capitals; the Carnival of Santa Cruz de Tenerife (Tourist Festival of International Interest) and the Carnival of Las Palmas de Gran Canaria. It is celebrated on the streets between the months of February and March. But the rest of the islands of the archipelago have their carnivals with their own traditions among which stand out: The Festival of the Carneros of El Hierro, the Festival of the Diabletes of Teguise in Lanzarote, Los Indianos de La Palma, the Carnival of San Sebastián de La Gomera and the Carnival of Puerto del Rosario in Fuerteventura.",
"title": "Holidays"
},
{
"paragraph_id": 122,
"text": "In the 1960s, Gran Canaria was selected as the location for one of the 14 ground stations in the Manned Space Flight Network (MSFN) to support the NASA space program. Maspalomas Station, located in the south of the island, took part in a number of space missions including the Apollo 11 Moon landings and Skylab. Today it continues to support satellite communications as part of the ESA network.",
"title": "Science and technology"
},
{
"paragraph_id": 123,
"text": "Because of the remote location, a number of astronomical observatories are located in the archipelago, including the Teide Observatory on Tenerife, the Roque de los Muchachos Observatory on La Palma, and the Temisas Astronomical Observatory on Gran Canaria.",
"title": "Science and technology"
},
{
"paragraph_id": 124,
"text": "Tenerife is the home of the Instituto de Astrofísica de Canarias (Astrophysical Institute of the Canaries). There is also an Instituto de Bio-Orgánica Antonio González (Antonio González Bio-Organic Institute) at the University of La Laguna. Also at that university are the Instituto de Lingüística Andrés Bello (Andrés Bello Institute of Linguistics), the Centro de Estudios Medievales y Renacentistas (Center for Medieval and Renaissance Studies), the Instituto Universitario de la Empresa (University Institute of Business), the Instituto de Derecho Regional (Regional Institute of Law), the Instituto Universitario de Ciencias Políticas y Sociales (University Institute of Political and Social Sciences) and the Instituto de Enfermedades Tropicales (Institute of Tropical Diseases). The latter is one of the seven institutions of the Red de Investigación de Centros de Enfermedades Tropicales (RICET, \"Network of Research of Centers of Tropical Diseases\"), located in various parts of Spain. The Instituto Volcanológico de Canarias (Volcanological Institute of the Canary Islands) is based in Tenerife.",
"title": "Science and technology"
},
{
"paragraph_id": 125,
"text": "A unique form of wrestling known as Canarian wrestling (lucha canaria) has opponents stand in a special area called a \"terrero\" and try to throw each other to the ground using strength and quick movements.",
"title": "Sports"
},
{
"paragraph_id": 126,
"text": "Another sport is the \"game of the sticks\" (palo canario) where opponents fence with long sticks. This may have come about from the shepherds of the islands who would challenge each other using their long walking sticks.",
"title": "Sports"
},
{
"paragraph_id": 127,
"text": "Furthermore, there is the shepherd's jump (salto del pastor). This involves using a long stick to vault over an open area. This sport possibly evolved from the shepherd's need to occasionally get over an open area in the hills as they were tending their sheep.",
"title": "Sports"
},
{
"paragraph_id": 128,
"text": "The two main football teams in the archipelago are: the CD Tenerife (founded in 1912) and UD Las Palmas (founded in 1949). As of the 2023/2024 season, UD Las Palmas plays in La Liga, the top tier of Spanish football. CD Tenerife however plays in The Segunda Divisón. When in the same division, the clubs contest the Canary Islands derby. There are smaller clubs also playing in the mainland Spanish football league system, most notably UD Lanzarote and CD Laguna, although no other Canarian clubs have played in the top flight.",
"title": "Sports"
},
{
"paragraph_id": 129,
"text": "The mountainous terrain of the Canary Islands also caters to the growing popularity of ultra running and ultramarathons as host of annual competitive long-distance events including CajaMar Tenerife Bluetrail on Tenerife, Transvulcania on La Palma, Transgrancanaria on Gran Canaria, and the Half Marathon des Sables on Fuerteventura. A yearly Ironman Triathlon has been taking place on Lanzarote since 1992.",
"title": "Sports"
}
] | The Canary Islands, also known informally as the Canaries, are a Spanish autonomous community and archipelago in Macaronesia in the Atlantic Ocean. At their closest point to the African mainland, they are 100 kilometres west of Morocco and the Western Sahara. They are the southernmost of the autonomous communities of Spain. The islands have a population of 2.2 million people and are the most populous special territory of the European Union. The seven main islands are Tenerife, Fuerteventura, Gran Canaria, Lanzarote, La Palma, La Gomera, and El Hierro. The archipelago includes many smaller islands and islets, including La Graciosa, Alegranza, Isla de Lobos, Montaña Clara, Roque del Oeste, and Roque del Este. It also includes a number of rocks, including Garachico and Anaga. In ancient times, the island chain was often referred to as "the Fortunate Isles". The Canary Islands are the southernmost region of Spain, and the largest and most populous archipelago of Macaronesia. Because of their location, the Canary Islands have historically been considered a link between the four continents of Africa, North America, South America, and Europe. In 2019, the Canary Islands had a population of 2,153,389, with a density of 287.39 inhabitants per km2, making it the eighth most populous autonomous community of Spain. The population is mostly concentrated in the two capital islands: around 43% on the island of Tenerife and 40% on the island of Gran Canaria. The Canary Islands, especially Tenerife, Gran Canaria, Fuerteventura, and Lanzarote, are a major tourist destination, with over 12 million visitors per year. This is due to their beaches, subtropical climate, and important natural attractions, especially Maspalomas in Gran Canaria and Mount Teide in Tenerife. Mount Teide is the highest peak in Spain and the 4th tallest volcano in the world, measured from its base on the ocean floor. The islands have warm summers and winters warm enough for the climate to be technically tropical at sea level. The amount of precipitation and the level of maritime moderation vary depending on location and elevation. The archipelago includes green areas as well as desert. The islands' high mountains are ideal for astronomical observation, because they lie above the temperature inversion layer. As a result, the archipelago boasts two professional observatories: the Teide Observatory on Tenerife, and Roque de los Muchachos Observatory on La Palma. In 1927, the Province of Canary Islands was split into two provinces. In 1982, the autonomous community of the Canary Islands was established. The cities of Santa Cruz de Tenerife and Las Palmas de Gran Canaria are, jointly, the capitals of the islands. Those cities are also, respectively, the capitals of the provinces of Santa Cruz de Tenerife and Las Palmas. Las Palmas de Gran Canaria has been the largest city in the Canaries since 1768, except for a brief period in the 1910s. Between the 1833 territorial division of Spain and 1927, Santa Cruz de Tenerife was the sole capital of the Canary Islands. In 1927, it was ordered by decree that the capital of the Canary Islands would be shared between two cities, and this arrangement persists to the present day. The third largest city in the Canary Islands is San Cristóbal de La Laguna on Tenerife. During the Age of Sail, the islands were the main stopover for Spanish galleons during the Spanish colonisation of the Americas, which sailed that far south in order to catch the prevailing northeasterly trade winds. 
| 2001-11-02T19:33:22Z | 2023-12-27T10:36:51Z | [
"Template:Small",
"Template:Currency",
"Template:Portal",
"Template:Missing information",
"Template:Wide image",
"Template:Weather box",
"Template:Historical populations",
"Template:Cite book",
"Template:Cite web",
"Template:Dead link",
"Template:Pp-move",
"Template:Infobox settlement",
"Template:Cvt",
"Template:Cite journal",
"Template:Sister project links",
"Template:Africa topic",
"Template:Redirect",
"Template:Main",
"Template:Convert",
"Template:Lang",
"Template:Citation",
"Template:Cite iucn",
"Template:Culture of Canary Islands",
"Template:Short description",
"Template:IPAc-en",
"Template:EU Outermost regions",
"Template:Reflist",
"Template:Cite news",
"Template:Citation needed",
"Template:Ill",
"Template:See also",
"Template:Administrative divisions of Spain",
"Template:Outlying territories of European countries",
"Template:Countries and territories of North Africa",
"Template:Use dmy dates",
"Template:Lang-es",
"Template:ISBN",
"Template:Islands and provinces of the Canary Islands",
"Template:Authority control",
"Template:IPA-es",
"Template:Webarchive"
] | https://en.wikipedia.org/wiki/Canary_Islands |
5,718 | Chuck D | Carlton Douglas Ridenhour (born August 1, 1960), known professionally as Chuck D, is an American rapper, best known as the leader and frontman of the hip hop group Public Enemy, which he co-founded in 1985 with Flavor Flav. Chuck D is also a member of the rock supergroup Prophets of Rage. He has released several solo albums, most notably Autobiography of Mistachuck (1996).
His work with Public Enemy helped create politically and socially conscious hip hop music in the mid-1980s. The Source ranked him at No. 12 on its list of the Top 50 Hip-Hop Lyricists of All Time. Chuck D has been nominated for six Grammys throughout his career, and has received the Grammy Lifetime Achievement Award as a member of Public Enemy. He was also inducted into the Rock and Roll Hall of Fame in 2013 as a member of Public Enemy.
Ridenhour was born on August 1, 1960, on Long Island, New York. When he was a child, his mother played Motown and showtunes in the home and his father belonged to the Columbia Record Club. He began writing lyrics after the New York City blackout of 1977. He attended W. Tresper Clarke High School, where he was offered no formal education in music. He then went to Adelphi University on Long Island to study graphic design, where he met William Drayton (Flavor Flav). He received a Bachelor of Fine Arts from Adelphi in 1984 and later received an honorary doctorate from Adelphi in 2013.
While at Adelphi, Ridenhour co-hosted hip hop radio show the Super Spectrum Mix Hour as Chuck D on Saturday nights at Long Island rock radio station WLIR, designed flyers for local hip-hop events, and drew a cartoon called Tales of the Skind for Adelphi student newspaper The Delphian.
Ridenhour (using the nickname Chuck D) formed Public Enemy in 1985 with Flavor Flav. Upon hearing Ridenhour's demo track "Public Enemy Number One", fledgling producer/upcoming music-mogul Rick Rubin insisted on signing him to his Def Jam Records. Their major label releases were Yo! Bum Rush the Show (1987), It Takes a Nation of Millions to Hold Us Back (1988), Fear of a Black Planet (1990), Apocalypse 91... The Enemy Strikes Black (1991), the compilation album Greatest Misses (1992), and Muse Sick-n-Hour Mess Age (1994). They also released a full-length album soundtrack for the film He Got Game in 1998.
Ridenhour also contributed (as Chuck D) to several episodes of the documentary series The Blues. He has appeared as a featured artist on many other songs and albums, having collaborated with artists such as Janet Jackson, Kool Moe Dee, The Dope Poet Society, Run–D.M.C., Ice Cube, Boom Boom Satellites, Rage Against the Machine, Anthrax, John Mellencamp and many others. In 1990, he appeared on "Kool Thing", a song by the alternative rock band Sonic Youth, and along with Flavor Flav, he sang on George Clinton's song "Tweakin'", which appears on his 1989 album The Cinderella Theory. In 1993, he was the executive producer for Got 'Em Running Scared, an album by Ichiban Records group Chief Groovy Loo and the Chosen Tribe.
In 1996, Ridenhour released Autobiography of Mistachuck on Mercury Records. Chuck D made a rare appearance at the 1998 MTV Video Music Awards, presenting the Video Vanguard Award to the Beastie Boys, commending their musicianship. In November 1998, he settled out of court with Christopher "The Notorious B.I.G." Wallace's estate over the latter's sampling of his voice in the song "Ten Crack Commandments". The specific sampling is Ridenhour counting off the numbers one to nine on the track "Shut 'Em Down". He later described the decision to sue as "stupid".
In September 1999, he launched a multi-format "supersite" on the web site Rapstation.com. The site includes a TV and radio station with original programming, prominent hip hop DJs, celebrity interviews, free MP3 downloads (the first was contributed by rapper Coolio), downloadable ringtones by ToneThis, social commentary, current events, and regular features on turning rap careers into a viable living. Since 2000, he has been one of the most vocal supporters of peer-to-peer file sharing in the music industry.
He lent his voice to Grand Theft Auto: San Andreas as DJ Forth Right MC for the radio station Playback FM. In 2000, he collaborated with Public Enemy's Gary G-Whiz and MC Lyte on the theme music to the television show Dark Angel. He appeared with Henry Rollins in a cover of Black Flag's "Rise Above" for the album Rise Above: 24 Black Flag Songs to Benefit the West Memphis Three. In 2003, he was featured in the PBS documentary Godfathers and Sons in which he recorded a version of Muddy Waters' song "Mannish Boy" with Common, Electrik Mud Cats, and Kyle Jason. He was also featured on Z-Trip's album Shifting Gears on a track called "Shock and Awe"; a 12-inch of the track was released featuring artwork by Shepard Fairey. In 2008 he contributed a chapter to Sound Unbound: Sampling Digital Music and Culture (The MIT Press, 2008) edited by Paul D. Miller a.k.a. DJ Spooky, and also turned up on The Go! Team's album Proof of Youth on the track "Flashlight Fight." He also fulfilled his childhood dream of being a sports announcer by performing the play-by-play commentary in the video game NBA Ballers: Chosen One on Xbox 360 and PlayStation 3.
In 2009, Ridenhour wrote the foreword to the book The Love Ethic: The Reason Why You Can't Find and Keep Beautiful Black Love by Kamau and Akilah Butler. He also appeared on Brother Ali's album Us.
In March 2011, Chuck D re-recorded vocals with The Dillinger Escape Plan for a cover of "Fight the Power".
Chuck D duetted with rock singer Meat Loaf on his 2011 album Hell in a Handbasket on the song "Mad Mad World/The Good God Is a Woman and She Don't Like Ugly".
In 2016, Chuck D joined the band Prophets of Rage along with B-Real and former members of Rage Against the Machine.
In July 2019, Ridenhour sued Terrordome Music Publishing and Reach Music Publishing for $1 million for withholding royalties.
In 2023, Chuck D released a four-part documentary on PBS entitled "Fight the Power: How Hip Hop Changed the World."
Chuck D is known for his powerful rapping. How to Rap says he "has a powerful, resonant voice that is often acclaimed as one of the most distinct and impressive in hip-hop". Chuck says this was based on listening to Melle Mel and sportscasters such as Marv Albert.
Chuck often comes up with a title for a song first. He writes on paper, though sometimes edits using a computer. He prefers not to punch in or overdub vocals.
Chuck listed his favourite rap albums in Hip Hop Connection in March 2000:
Chuck D identifies as Black, as opposed to African or African-American. In a 1993 issue of DIRT Magazine covering a taping of In the Mix hosted by Alimi Ballard at the Apollo, Dan Field writes,
At one point, Chuck bristles a bit at the term "African-American." He thinks of himself as Black and sees nothing wrong with the term. Besides, he says, having been born in the United States and lived his whole life here, he doesn't consider himself African. Being in Public Enemy has given him the chance to travel around the world, an experience that really opened his eyes and his mind. He says visiting Africa and experiencing life on a continent where the majority of people are Black gave him a new perspective and helped him get in touch with his own history. He also credits a trip to the ancient Egyptian pyramids at Giza with helping him appreciate the relative smallness of man.
Ridenhour is politically active; he co-hosted Unfiltered on Air America Radio, testified before the United States Congress in support of peer-to-peer MP3 sharing, and was involved in a 2004 rap political convention. He has continued to be an activist, publisher, lecturer, and producer.
Addressing the negative views associated with rap music, he co-wrote the essay book Fight the Power: Rap, Race, and Reality with Yusuf Jah. He argues that "music and art and culture is escapism, and escapism sometimes is healthy for people to get away from reality", but sometimes the distinction is blurred and that's when "things could lead a young mind in a direction." He also founded the record company Slam Jamz and acted as narrator in Kareem Adouard's short film Bling: Consequences and Repercussions, which examines the role of conflict diamonds in bling fashion. Despite Chuck D and Public Enemy's success, Chuck D claims that popularity or public approval was never a driving motivation behind their work. He is admittedly skeptical of celebrity status, revealing in a 1999 interview with BOMB Magazine that "The key for the record companies is to just keep making more and more stars, and make the ones who actually challenge our way of life irrelevant. The creation of celebrity has clouded the minds of most people in America, Europe and Asia. It gets people off the path they need to be on as individuals."
In an interview with Le Monde, published January 29, 2008, Chuck D stated that rap is devolving so much into a commercial enterprise, that the relationship between the rapper and the record label is that of slave to a master. He believes that nothing has changed for African-Americans since the debut of Public Enemy and, although he thinks that an Obama-Clinton alliance is great, he does not feel that the establishment will allow anything of substance to be accomplished. He stated that French President Nicolas Sarkozy is like any other European elite: he has profited through the murder, rape, and pillaging of those less fortunate and he refuses to allow equal opportunity for those men and women from Africa. In this article, he defended a comment made by Professor Griff in the past that he says was taken out of context by the media. The real statement was a critique of the Israeli government and its treatment of the Palestinian people. Chuck D stated that it is Public Enemy's belief that all human beings are equal.
In an interview with the magazine N'Digo published in June 2008, he spoke of today's mainstream urban music seemingly relishing the addictive euphoria of materialism and sexism, perhaps being the primary cause of many people harboring resentment towards the genre and its future. However, he has expressed hope for its resurrection, saying "It's only going to be dead if it doesn't talk about the messages of life as much as the messages of death and non-movement", citing artists such as NYOil, M.I.A. and The Roots as socially conscious artists who push the envelope creatively. "A lot of cats are out there doing it, on the Web and all over. They're just not placing their career in the hands of some major corporation."
In 2010, Chuck D released the track "Tear Down That Wall." He said "I talked about the wall not only just dividing the U.S. and Mexico but the states of California, New Mexico and Texas. But Arizona, it's like, come on. Now they're going to enforce a law that talks about basically racial profiling."
He is on the board of the TransAfrica Forum, a Pan African organization that is focused on African, Caribbean and Latin American issues.
He has been an activist with projects of The Revcoms, such as Refuse Fascism and Stop Mass Incarceration Network. Carl Dix interviewed Chuck D on The Revcoms' YouTube program The RNL – Revolution, Nothing Less! – Show.
In 2022, he endorsed Conrad Tillard, formerly the Nation of Islam Minister known as Conrad Muhammad and subsequently a Baptist Minister, in his campaign for New York State Senate in District 25 (covering part of eastern and north-central Brooklyn).
Chuck D has claimed on Twitter to be a maternal great-grandson of architect George Washington Foster.
As of June 2023, he has three children aged 34, 30, and 10. The two oldest are by his first ex-wife, Deborah McClendon, and the youngest is by his ex-wife Gaye Theresa Johnson.
Chuck D lives in California and lost his home in the Thomas Fire that occurred from December 2017 to January 2018.
Studio albums
Studio albums
Studio albums
Studio EPs
Studio albums
Compilation albums | [
{
"paragraph_id": 0,
"text": "Carlton Douglas Ridenhour (born August 1, 1960), known professionally as Chuck D, is an American rapper, best known as the leader and frontman of the hip hop group Public Enemy, which he co-founded in 1985 with Flavor Flav. Chuck D is also a member of the rock supergroup Prophets of Rage. He has released several solo albums, most notably Autobiography of Mistachuck (1996).",
"title": ""
},
{
"paragraph_id": 1,
"text": "His work with Public Enemy helped create politically and socially conscious hip hop music in the mid-1980s. The Source ranked him at No. 12 on its list of the Top 50 Hip-Hop Lyricists of All Time. Chuck D has been nominated for six Grammys throughout his career, and has received the Grammy Lifetime Achievement Award as a member of Public Enemy. He was also inducted into the Rock and Roll Hall of Fame in 2013 as a member of Public Enemy.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Ridenhour was born on August 1, 1960, on Long Island, New York. When he was a child, his mother played Motown and showtunes in the home and his father belonged to the Columbia Record Club. He began writing lyrics after the New York City blackout of 1977. He attended W. Tresper Clarke High School, where he was offered no formal education in music. He then went to Adelphi University on Long Island to study graphic design, where he met William Drayton (Flavor Flav). He received a Bachelor of Fine Arts from Adelphi in 1984 and later received an honorary doctorate from Adelphi in 2013.",
"title": "Early life"
},
{
"paragraph_id": 3,
"text": "While at Adelphi, Ridenhour co-hosted hip hop radio show the Super Spectrum Mix Hour as Chuck D on Saturday nights at Long Island rock radio station WLIR, designed flyers for local hip-hop events, and drew a cartoon called Tales of the Skind for Adelphi student newspaper The Delphian.",
"title": "Early life"
},
{
"paragraph_id": 4,
"text": "Ridenhour (using the nickname Chuck D) formed Public Enemy in 1985 with Flavor Flav. Upon hearing Ridenhour's demo track \"Public Enemy Number One\", fledgling producer/upcoming music-mogul Rick Rubin insisted on signing him to his Def Jam Records. Their major label releases were Yo! Bum Rush the Show (1987), It Takes a Nation of Millions to Hold Us Back (1988), Fear of a Black Planet (1990), Apocalypse 91... The Enemy Strikes Black (1991), the compilation album Greatest Misses (1992), and Muse Sick-n-Hour Mess Age (1994). They also released a full-length album soundtrack for the film He Got Game in 1998.",
"title": "Career"
},
{
"paragraph_id": 5,
"text": "Ridenhour also contributed (as Chuck D) to several episodes of the documentary series The Blues. He has appeared as a featured artist on many other songs and albums, having collaborated with artists such as Janet Jackson, Kool Moe Dee, The Dope Poet Society, Run–D.M.C., Ice Cube, Boom Boom Satellites, Rage Against the Machine, Anthrax, John Mellencamp and many others. In 1990, he appeared on \"Kool Thing\", a song by the alternative rock band Sonic Youth, and along with Flavor Flav, he sang on George Clinton's song \"Tweakin'\", which appears on his 1989 album The Cinderella Theory. In 1993, he was the executive producer for Got 'Em Running Scared, an album by Ichiban Records group Chief Groovy Loo and the Chosen Tribe.",
"title": "Career"
},
{
"paragraph_id": 6,
"text": "In 1996, Ridenhour released Autobiography of Mistachuck on Mercury Records. Chuck D made a rare appearance at the 1998 MTV Video Music Awards, presenting the Video Vanguard Award to the Beastie Boys, commending their musicianship. In November 1998, he settled out of court with Christopher \"The Notorious B.I.G.\" Wallace's estate over the latter's sampling of his voice in the song \"Ten Crack Commandments\". The specific sampling is Ridenhour counting off the numbers one to nine on the track \"Shut 'Em Down\". He later described the decision to sue as \"stupid\".",
"title": "Career"
},
{
"paragraph_id": 7,
"text": "In September 1999, he launched a multi-format \"supersite\" on the web site Rapstation.com. The site includes a TV and radio station with original programming, prominent hip hop DJs, celebrity interviews, free MP3 downloads (the first was contributed by rapper Coolio), downloadable ringtones by ToneThis, social commentary, current events, and regular features on turning rap careers into a viable living. Since 2000, he has been one of the most vocal supporters of peer-to-peer file sharing in the music industry.",
"title": "Career"
},
{
"paragraph_id": 8,
"text": "He loaned his voice to Grand Theft Auto: San Andreas as DJ Forth Right MC for the radio station Playback FM. In 2000, he collaborated with Public Enemy's Gary G-Whiz and MC Lyte on the theme music to the television show Dark Angel. He appeared with Henry Rollins in a cover of Black Flag's \"Rise Above\" for the album Rise Above: 24 Black Flag Songs to Benefit the West Memphis Three. In 2003, he was featured in the PBS documentary Godfathers and Sons in which he recorded a version of Muddy Waters' song \"Mannish Boy\" with Common, Electrik Mud Cats, and Kyle Jason. He was also featured on Z-Trip's album Shifting Gears on a track called \"Shock and Awe\"; a 12-inch of the track was released featuring artwork by Shepard Fairey. In 2008 he contributed a chapter to Sound Unbound: Sampling Digital Music and Culture (The MIT Press, 2008) edited by Paul D. Miller a.k.a. DJ Spooky, and also turned up on The Go! Team's album Proof of Youth on the track \"Flashlight Fight.\" He also fulfilled his childhood dreams of being a sports announcer by performing the play-by-play commentary in the video game NBA Ballers: Chosen One on Xbox 360 and PlayStation 3.",
"title": "Career"
},
{
"paragraph_id": 9,
"text": "In 2009, Ridenhour wrote the foreword to the book The Love Ethic: The Reason Why You Can't Find and Keep Beautiful Black Love by Kamau and Akilah Butler. He also appeared on Brother Ali's album Us.",
"title": "Career"
},
{
"paragraph_id": 10,
"text": "In March 2011, Chuck D re-recorded vocals with The Dillinger Escape Plan for a cover of \"Fight the Power\".",
"title": "Career"
},
{
"paragraph_id": 11,
"text": "Chuck D duetted with Rock singer Meat Loaf on his 2011 album Hell in a Handbasket on the song \"Mad Mad World/The Good God Is a Woman and She Don't Like Ugly\".",
"title": "Career"
},
{
"paragraph_id": 12,
"text": "In 2016 Chuck D joined the band Prophets of Rage along with B-Real and former members of Rage Against the Machine.",
"title": "Career"
},
{
"paragraph_id": 13,
"text": "In July 2019, Ridenhour sued Terrordome Music Publishing and Reach Music Publishing for $1 million for withholding royalties.",
"title": "Career"
},
{
"paragraph_id": 14,
"text": "In 2023, Chuck D released a four-part documentary on PBS entitled \"Fight the Power: How Hip Hop Changed the World.\"",
"title": "Career"
},
{
"paragraph_id": 15,
"text": "Chuck D is known for his powerful rapping. How to Rap says he \"has a powerful, resonant voice that is often acclaimed as one of the most distinct and impressive in hip-hop\". Chuck says this was based on listening to Melle Mel and sportscasters such as Marv Albert.",
"title": "Rapping technique and creative process"
},
{
"paragraph_id": 16,
"text": "Chuck often comes up with a title for a song first. He writes on paper, though sometimes edits using a computer. He prefers to not punch in or overdub vocals.",
"title": "Rapping technique and creative process"
},
{
"paragraph_id": 17,
"text": "Chuck listed his favourite rap albums in Hip Hop Connection in March 2000:",
"title": "Rapping technique and creative process"
},
{
"paragraph_id": 18,
"text": "Chuck D identifies as Black, as opposed to African or African-American. In a 1993 issue of DIRT Magazine covering a taping of In the Mix hosted by Alimi Ballard at the Apollo, Dan Field writes,",
"title": "Politics"
},
{
"paragraph_id": 19,
"text": "At one point, Chuck bristles a bit at the term \"African-American.\" He thinks of himself as Black and sees nothing wrong with the term. Besides, he says, having been born in the United States and lived his whole life here, he doesn't consider himself African. Being in Public Enemy has given him the chance to travel around the world, an experience that really opened his eyes and his mind. He says visiting Africa and experiencing life on a continent where the majority of people are Black gave him a new perspective and helped him get in touch with his own history. He also credits a trip to the ancient Egyptian pyramids at Giza with helping him appreciate the relative smallness of man.",
"title": "Politics"
},
{
"paragraph_id": 20,
"text": "Ridenhour is politically active; he co-hosted Unfiltered on Air America Radio, testified before the United States Congress in support of peer-to-peer MP3 sharing, and was involved in a 2004 rap political convention. He has continued to be an activist, publisher, lecturer, and producer.",
"title": "Politics"
},
{
"paragraph_id": 21,
"text": "Addressing the negative views associated with rap music, he co-wrote the essay book Fight the Power: Rap, Race, and Reality with Yusuf Jah. He argues that \"music and art and culture is escapism, and escapism sometimes is healthy for people to get away from reality\", but sometimes the distinction is blurred and that's when \"things could lead a young mind in a direction.\" He also founded the record company Slam Jamz and acted as narrator in Kareem Adouard's short film Bling: Consequences and Repercussions, which examines the role of conflict diamonds in bling fashion. Despite Chuck D and Public Enemy's success, Chuck D claims that popularity or public approval was never a driving motivation behind their work. He is admittedly skeptical of celebrity status, revealing in a 1999 interview with BOMB Magazine that \"The key for the record companies is to just keep making more and more stars, and make the ones who actually challenge our way of life irrelevant. The creation of celebrity has clouded the minds of most people in America, Europe and Asia. It gets people off the path they need to be on as individuals.\"",
"title": "Politics"
},
{
"paragraph_id": 22,
"text": "In an interview with Le Monde, published January 29, 2008, Chuck D stated that rap is devolving so much into a commercial enterprise, that the relationship between the rapper and the record label is that of slave to a master. He believes that nothing has changed for African-Americans since the debut of Public Enemy and, although he thinks that an Obama-Clinton alliance is great, he does not feel that the establishment will allow anything of substance to be accomplished. He stated that French President Nicolas Sarkozy is like any other European elite: he has profited through the murder, rape, and pillaging of those less fortunate and he refuses to allow equal opportunity for those men and women from Africa. In this article, he defended a comment made by Professor Griff in the past that he says was taken out of context by the media. The real statement was a critique of the Israeli government and its treatment of the Palestinian people. Chuck D stated that it is Public Enemy's belief that all human beings are equal.",
"title": "Politics"
},
{
"paragraph_id": 23,
"text": "In an interview with the magazine N'Digo published in June 2008, he spoke of today's mainstream urban music seemingly relishing the addictive euphoria of materialism and sexism, perhaps being the primary cause of many people harboring resentment towards the genre and its future. However, he has expressed hope for its resurrection, saying \"It's only going to be dead if it doesn't talk about the messages of life as much as the messages of death and non-movement\", citing artists such as NYOil, M.I.A. and The Roots as socially conscious artists who push the envelope creatively. \"A lot of cats are out there doing it, on the Web and all over. They're just not placing their career in the hands of some major corporation.\"",
"title": "Politics"
},
{
"paragraph_id": 24,
"text": "In 2010, Chuck D released the track \"Tear Down That Wall.\" He said \"I talked about the wall not only just dividing the U.S. and Mexico but the states of California, New Mexico and Texas. But Arizona, it's like, come on. Now they're going to enforce a law that talks about basically racial profiling.\"",
"title": "Politics"
},
{
"paragraph_id": 25,
"text": "He is on the board of the TransAfrica Forum, a Pan African organization that is focused on African, Caribbean and Latin American issues.",
"title": "Politics"
},
{
"paragraph_id": 26,
"text": "He has been an activist with projects of The Revcoms, such as Refuse Fascism and Stop Mass Incarceration Network. Carl Dix interviewed Chuck D on The Revcoms' YouTube program The RNL – Revolution, Nothing Less! – Show.",
"title": "Politics"
},
{
"paragraph_id": 27,
"text": "In 2022, he endorsed Conrad Tillard, formerly the Nation of Islam Minister known as Conrad Muhammad and subsequently a Baptist Minister, in his campaign for New York State Senate in District 25 (covering part of eastern and north-central Brooklyn).",
"title": "Politics"
},
{
"paragraph_id": 28,
"text": "Chuck D has claimed on Twitter to be a maternal great-grandson of architect George Washington Foster.",
"title": "Personal life"
},
{
"paragraph_id": 29,
"text": "As of June 2023, he has three children aged 34, 30, and 10. The two oldest by his first ex-wife Deborah McClendon and the youngest by his ex-wife Gaye Theresa Johnson.",
"title": "Personal life"
},
{
"paragraph_id": 30,
"text": "Chuck D lives in California and lost his home in the Thomas Fire that occurred from December 2017 to January 2018.",
"title": "Personal life"
},
{
"paragraph_id": 31,
"text": "Studio albums",
"title": "Discography"
},
{
"paragraph_id": 32,
"text": "Studio albums",
"title": "Discography"
},
{
"paragraph_id": 33,
"text": "Studio albums",
"title": "Discography"
},
{
"paragraph_id": 34,
"text": "Studio EPs",
"title": "Discography"
},
{
"paragraph_id": 35,
"text": "Studio albums",
"title": "Discography"
},
{
"paragraph_id": 36,
"text": "Compilation albums",
"title": "Discography"
}
] | Carlton Douglas Ridenhour, known professionally as Chuck D, is an American rapper, best known as the leader and frontman of the hip hop group Public Enemy, which he co-founded in 1985 with Flavor Flav. Chuck D is also a member of the rock supergroup Prophets of Rage. He has released several solo albums, most notably Autobiography of Mistachuck (1996). His work with Public Enemy helped create politically and socially conscious hip hop music in the mid-1980s. The Source ranked him at No. 12 on its list of the Top 50 Hip-Hop Lyricists of All Time. Chuck D has been nominated for six Grammys throughout his career, and has received the Grammy Lifetime Achievement Award as a member of Public Enemy. He was also inducted into the Rock and Roll Hall of Fame in 2013 as a member of Public Enemy. | 2001-06-01T16:07:16Z | 2023-12-06T05:03:24Z | [
"Template:Authority control",
"Template:Infobox musical artist",
"Template:Commons category",
"Template:Reflist",
"Template:Cite tweet",
"Template:Official website",
"Template:Public Enemy",
"Template:Short description",
"Template:Main",
"Template:Cite news",
"Template:Cite magazine",
"Template:Cite journal",
"Template:AllMusic",
"Template:2013 Rock and Roll Hall of Fame",
"Template:External media",
"Template:Cite web",
"Template:Cite book",
"Template:Cbignore",
"Template:Wikiquote",
"Template:IMDb name",
"Template:Discogs artist",
"Template:Use mdy dates",
"Template:Rp"
] | https://en.wikipedia.org/wiki/Chuck_D |
5,719 | Cutaway (filmmaking) | In film and video, a cutaway is the interruption of a continuously filmed action by inserting a view of something else. It is usually followed by a cut back to the first shot. A cutaway scene is the interruption of a scene with the insertion of another scene, generally unrelated or only peripherally related to the original scene. The interruption is usually quick, and is usually, although not always, ended by a return to the original scene. The effect is of commentary to the original scene, frequently comic in nature.
The most common use of cutaway shots in dramatic films is to adjust the pace of the main action, to conceal the deletion of some unwanted part of the main shot, or to allow the joining of parts of two versions of that shot. For example, a scene may be improved by cutting a few frames out of an actor's pause; a brief view of a listener can help conceal the break. Or the actor may fumble some of his lines in a group shot; rather than discarding a good version of the shot, the director may just have the actor repeat the lines for a new shot, and cut to that alternate view when necessary.
Cutaways are also often used in older horror films in place of special effects. For example, a shot of a zombie getting its head cut off may start with a view of an axe being swung through the air, followed by a close-up of the actor swinging it, then a cut back to the now severed head. George A. Romero, creator of the Dead Series, and Tom Savini pioneered effects that removed the need for cutaways in horror films.
In news broadcasting and documentary work, the cutaway is used much as it would be in fiction. On location, there is usually just one camera to film an interview, and it is usually trained on the interviewee. Often, there is also only one microphone. After the interview, the interviewer usually repeats his questions while he is being filmed, pausing as if listening to the answers. These shots can be used as cutaways. Cutaways to the interviewer, called noddies, can also be used to cover cuts.
The cutaway does not necessarily contribute any dramatic content of its own, but is used to help the editor assemble a longer sequence. For that reason, editors choose cutaways related to the main action, such as another action or object in the same location. For example, if the main shot is of a man walking down an alley, possible cutaways may include a shot of a cat on a nearby dumpster or a shot of a person watching from a window overhead. | [
{
"paragraph_id": 0,
"text": "In film and video, a cutaway is the interruption of a continuously filmed action by inserting a view of something else. It is usually followed by a cut back to the first shot. A cutaway scene is the interruption of a scene with the insertion of another scene, generally unrelated or only peripherally related to the original scene. The interruption is usually quick, and is usually, although not always, ended by a return to the original scene. The effect is of commentary to the original scene, frequently comic in nature.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The most common use of cutaway shots in dramatic films is to adjust the pace of the main action, to conceal the deletion of some unwanted part of the main shot, or to allow the joining of parts of two versions of that shot. For example, a scene may be improved by cutting a few frames out of an actor's pause; a brief view of a listener can help conceal the break. Or the actor may fumble some of his lines in a group shot; rather than discarding a good version of the shot, the director may just have the actor repeat the lines for a new shot, and cut to that alternate view when necessary.",
"title": "Usage"
},
{
"paragraph_id": 2,
"text": "Cutaways are also used often in older horror films in place of special effects. For example, a shot of a zombie getting its head cut off may, for instance, start with a view of an axe being swung through the air, followed by a close-up of the actor swinging it, then followed by a cut back to the now severed head. George A. Romero, creator of the Dead Series, and Tom Savini pioneered effects that removed the need for cutaways in horror films.",
"title": "Usage"
},
{
"paragraph_id": 3,
"text": "In news broadcasting and documentary work, the cutaway is used much as it would be in fiction. On location, there is usually just one camera to film an interview, and it is usually trained on the interviewee. Often, there is also only one microphone. After the interview, the interviewer usually repeats his questions while he is being filmed, with pauses that act as if the answers are listened to. These shots can be used as cutaways. Cutaways to the interviewer, called noddies, can also be used to cover cuts.",
"title": "Usage"
},
{
"paragraph_id": 4,
"text": "The cutaway does not necessarily contribute any dramatic content of its own, but is used to help the editor assemble a longer sequence. For that reason, editors choose cutaways related to the main action, such as another action or object in the same location. For example, if the main shot is of a man walking down an alley, possible cutaways may include a shot of a cat on a nearby dumpster or a shot of a person watching from a window overhead.",
"title": "Usage"
}
] | In film and video, a cutaway is the interruption of a continuously filmed action by inserting a view of something else. It is usually followed by a cut back to the first shot. A cutaway scene is the interruption of a scene with the insertion of another scene, generally unrelated or only peripherally related to the original scene. The interruption is usually quick, and is usually, although not always, ended by a return to the original scene. The effect is of commentary to the original scene, frequently comic in nature. | 2001-09-06T05:14:42Z | 2023-12-18T21:14:15Z | [
"Template:Cite web",
"Template:Dead link",
"Template:Continuity Editing",
"Template:Cinematic techniques",
"Template:Use dmy dates",
"Template:Reflist",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Cutaway_(filmmaking) |
5,721 | Coma | A coma is a deep state of prolonged unconsciousness in which a person cannot be awakened, fails to respond normally to painful stimuli, light, or sound, lacks a normal wake-sleep cycle and does not initiate voluntary actions. The person may experience respiratory and circulatory problems due to the body's inability to maintain normal bodily functions. People in a coma often require extensive medical care to maintain their health and prevent complications such as pneumonia or blood clots. Coma patients exhibit a complete absence of wakefulness and are unable to consciously feel, speak or move. Comas can result from natural causes or can be medically induced.
Clinically, a coma can be defined as the consistent inability to follow a one-step command. It can also be defined as a score of ≤ 8 on the Glasgow Coma Scale (GCS) lasting ≥ 6 hours. For a patient to maintain consciousness, the components of wakefulness and awareness must be maintained. Wakefulness describes the quantitative degree of consciousness, whereas awareness relates to the qualitative aspects of the functions mediated by the cortex, including cognitive abilities such as attention, sensory perception, explicit memory, language, the execution of tasks, temporal and spatial orientation and reality judgment. From a neurological perspective, consciousness is maintained by the activation of the cerebral cortex—the gray matter that forms the outer layer of the brain—and by the reticular activating system (RAS), a structure located within the brainstem.
The term 'coma', from the Greek κῶμα koma, meaning deep sleep, had already been used in the Hippocratic corpus (Epidemica) and later by Galen (second century AD). Subsequently, it was hardly used in the known literature up to the middle of the 17th century. The term is found again in Thomas Willis' (1621–1675) influential De anima brutorum (1672), where lethargy (pathological sleep), 'coma' (heavy sleeping), carus (deprivation of the senses) and apoplexy (into which carus could turn and which he localized in the white matter) are mentioned. The term carus is also derived from Greek, where it can be found in the roots of several words meaning soporific or sleepy. It can still be found in the root of the term 'carotid'. Thomas Sydenham (1624–89) mentioned the term 'coma' in several cases of fever (Sydenham, 1685).
General symptoms of a person in a comatose state are:
Many types of problems can cause a coma. Forty percent of comatose states result from drug poisoning. Certain drugs, under certain conditions, can damage or weaken synaptic functioning in the ascending reticular activating system (ARAS) and keep the system from arousing the brain properly. Secondary effects of drugs, which include abnormal heart rate and blood pressure, as well as abnormal breathing and sweating, may also indirectly harm the functioning of the ARAS and lead to a coma. Given that drug poisoning is the cause in a large portion of coma cases, hospitals first test all comatose patients by observing pupil size and eye movement through the vestibulo-ocular reflex. (See Diagnosis below.)
The second most common cause of coma, which makes up about 25% of cases, is lack of oxygen, generally resulting from cardiac arrest. The Central Nervous System (CNS) requires a great deal of oxygen for its neurons. Oxygen deprivation in the brain, also known as hypoxia, causes extracellular sodium and calcium to decrease and intracellular calcium to increase, which harms neuron communication. Lack of oxygen in the brain also causes ATP exhaustion and cellular breakdown from cytoskeleton damage and nitric oxide production.
Twenty percent of comatose states result from an ischemic stroke, brain hemorrhage, or brain tumor. During a stroke, blood flow to part of the brain is restricted or blocked. An ischemic stroke, brain hemorrhage, or brain tumor may cause restriction of blood flow. Lack of blood to cells in the brain prevents oxygen from getting to the neurons, and consequently causes cells to become disrupted and die. As brain cells die, brain tissue continues to deteriorate, which may affect the functioning of the ARAS, causing unconsciousness and coma.
Comatose cases can also result from traumatic brain injury, excessive blood loss, malnutrition, hypothermia, hyperthermia, hyperammonemia, abnormal glucose levels, and many other biological disorders. Furthermore, studies show that 1 out of 8 patients with traumatic brain injury experience a comatose state.
Heart-related causes of coma include cardiac arrest, myocardial infarction, heart failure, severe arrhythmia, cardiogenic shock, myocarditis, and pericarditis. Respiratory arrest is the only lung condition that causes coma itself, though many other lung conditions can cause a decreased level of consciousness that does not reach coma.
Other causes of coma include severe or persistent seizures, kidney failure, liver failure, hyperglycemia, hypoglycemia, and infections involving the brain, like meningitis and encephalitis.
Injury to either or both of the cerebral cortex and the reticular activating system (RAS) is sufficient to cause a person to enter a coma.
The cerebral cortex is the outer layer of neural tissue of the cerebrum of the brain. It is composed of gray matter, which consists of the nuclei of neurons, whereas the inner portion of the cerebrum is composed of white matter, which consists of the axons of neurons. White matter is responsible for perception, relay of the sensory input via the thalamic pathway, and many other neurological functions, including complex thinking.
The RAS, on the other hand, is a more primitive structure in the brainstem which includes the reticular formation (RF). The RAS has two tracts, the ascending and the descending tract. The ascending tract, or ascending reticular activating system (ARAS), is made up of a system of acetylcholine-producing neurons and works to arouse and wake up the brain. Arousal of the brain begins in the RF, travels through the thalamus, and finally reaches the cerebral cortex. Any impairment in ARAS functioning (a neuronal dysfunction along this arousal pathway) prevents the body from being aware of its surroundings. Without the arousal and consciousness centers, the body cannot awaken, remaining in a comatose state.
The severity and mode of onset of coma depend on the underlying cause. There are two main subdivisions of coma: structural and diffuse neuronal. A structural cause is brought about by a mechanical force that produces cellular damage, such as physical pressure or a blockage in neural transmission. A diffuse cause, by contrast, is limited to aberrations of cellular function that fall under a metabolic or toxic subgroup. Toxin-induced comas are caused by extrinsic substances, whereas metabolic-induced comas are caused by intrinsic processes, such as body thermoregulation or ionic imbalances (e.g. sodium). For instance, severe hypoglycemia (low blood sugar) or hypercapnia (increased carbon dioxide levels in the blood) are examples of a metabolic diffuse neuronal dysfunction. Hypoglycemia or hypercapnia initially cause mild agitation and confusion, but progress to obtundation, stupor, and finally, complete unconsciousness. In contrast, coma resulting from a severe traumatic brain injury or subarachnoid hemorrhage can be instantaneous. The mode of onset may therefore be indicative of the underlying cause.
Structural and diffuse causes of coma are not isolated from one another, as one can lead to the other in some situations. For instance, coma induced by a diffuse metabolic process, such as hypoglycemia, can result in a structural coma if it is not resolved. Another example is if cerebral edema, a diffuse dysfunction, leads to ischemia of the brainstem, a structural issue, due to the blockage of the circulation in the brain.
Although diagnosis of coma is simple, investigating the underlying cause of onset can be rather challenging. As such, after stabilizing the patient's airway, breathing and circulation (the basic ABCs), various diagnostic tests, such as physical examinations and imaging tools (CT scan, MRI, etc.), are employed to assess the underlying cause of the coma.
When an unconscious person enters a hospital, the hospital utilizes a series of diagnostic steps to identify the cause of unconsciousness. According to Young, the following steps should be taken when dealing with a patient possibly in a coma:
In the initial assessment of coma, it is common to gauge the level of consciousness on the AVPU (alert, vocal stimuli, painful stimuli, unresponsive) scale by observing the patient's spontaneous actions and assessing their response to vocal and painful stimuli. More elaborate scales, such as the Glasgow Coma Scale, quantify an individual's reactions such as eye opening, movement and verbal response in order to indicate their extent of brain injury. The patient's score can vary from 3 (indicating severe brain injury and death) to 15 (indicating mild or no brain injury).
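To make the scale's arithmetic concrete, the following is a minimal illustrative sketch in Python; the function names and the exact wording of the severity labels are chosen for illustration only, while the component ranges and the severe (≤ 8), moderate (9–12) and mild (13–15) bands follow the conventional scale.

```python
# Minimal, illustrative Glasgow Coma Scale (GCS) calculator.
# Standard component ranges: eye opening 1-4, verbal response 1-5,
# motor response 1-6, so the total always falls between 3 and 15.

def gcs_total(eye: int, verbal: int, motor: int) -> int:
    """Sum the three GCS component scores after range-checking them."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component score out of range")
    return eye + verbal + motor

def severity(total: int) -> str:
    """Map a GCS total to the conventional severity bands."""
    if total <= 8:          # <= 8 is the usual threshold for coma
        return "severe"
    if total <= 12:
        return "moderate"
    return "mild"

# Example: eye opening to pain (2), incomprehensible sounds (2),
# abnormal flexion (3) gives a total of 7 -> "severe".
print(severity(gcs_total(2, 2, 3)))
```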
In those with deep unconsciousness, there is a risk of asphyxiation as the control over the muscles in the face and throat is diminished. As a result, those presenting to a hospital with coma are typically assessed for this risk ("airway management"). If the risk of asphyxiation is deemed high, doctors may use various devices (such as an oropharyngeal airway, nasopharyngeal airway or endotracheal tube) to safeguard the airway.
Imaging encompasses computed tomography (CAT or CT) scans of the brain, or MRI, for example, and is performed to identify specific causes of the coma, such as hemorrhage in the brain or herniation of the brain structures. Special tests such as an EEG can also reveal much about the activity level of the cortex, such as semantic processing and the presence of seizures, and are important available tools not only for assessing cortical activity but also for predicting the likelihood of the patient's awakening. Autonomous responses such as the skin conductance response may also provide further insight into the patient's emotional processing.
In the treatment of traumatic brain injury (TBI), there are 4 examination methods that have proved useful: skull x-ray, angiography, computed tomography (CT), and magnetic resonance imaging (MRI). The skull x-ray can detect linear fractures, impression fractures (expression fractures) and burst fractures. Angiography is used on rare occasions for TBIs, i.e. when there is suspicion of an aneurysm, carotid sinus fistula, traumatic vascular occlusion, or vascular dissection. A CT can detect changes in density between the brain tissue and hemorrhages like subdural and intracerebral hemorrhages. MRIs are not the first choice in emergencies because of their long scanning times and because fractures cannot be detected as well as with CT. MRIs are used for the imaging of soft tissues and of lesions in the posterior fossa which cannot be found with the use of CT.
The brainstem and cortical function are assessed through special reflex tests such as the oculocephalic reflex test (doll's eyes test), oculovestibular reflex test (cold caloric test), corneal reflex, and the gag reflex. Reflexes are a good indicator of which cranial nerves are still intact and functioning, and are an important part of the physical exam. Due to the unconscious status of the patient, only a limited number of the nerves can be assessed. These include the cranial nerves number 2 (CN II), number 3 (CN III), number 5 (CN V), number 7 (CN VII), and cranial nerves 9 and 10 (CN IX, CN X).
Assessment of posture and physique is the next step. It involves general observation about the patient's positioning. There are often two stereotypical postures seen in comatose patients. Decorticate posturing is a stereotypical posturing in which the patient has arms flexed at the elbow, and arms adducted toward the body, with both legs extended. Decerebrate posturing is a stereotypical posturing in which the legs are similarly extended (stretched), but the arms are also stretched (extended at the elbow). The posturing is critical since it indicates where the damage is in the central nervous system. A decorticate posturing indicates a lesion (a point of damage) at or above the red nucleus, whereas a decerebrate posturing indicates a lesion at or below the red nucleus. In other words, a decorticate lesion is closer to the cortex, as opposed to a decerebrate posturing which indicates that the lesion is closer to the brainstem.
Pupil assessment is often a critical portion of a comatose examination, as it can give information as to the cause of the coma; the following table is a technical, medical guideline for common pupil findings and their possible interpretations:
A coma can be classified as (1) supratentorial (above Tentorium cerebelli), (2) infratentorial (below Tentorium cerebelli), (3) metabolic or (4) diffused. This classification is merely dependent on the position of the original damage that caused the coma, and does not correlate with severity or the prognosis. The severity of coma impairment however is categorized into several levels. Patients may or may not progress through these levels. In the first level, the brain responsiveness lessens, normal reflexes are lost, the patient no longer responds to pain and cannot hear.
The Rancho Los Amigos Scale is a complex scale that has eight separate levels, and is often used in the first few weeks or months of coma while the patient is under closer observation, and when shifts between levels are more frequent.
Treatment for people in a coma will depend on the severity and cause of the comatose state. Upon admittance to an emergency department, coma patients will usually be placed in an Intensive Care Unit (ICU) immediately, where maintenance of the patient's respiration and circulation become a first priority. Stability of their respiration and circulation is sustained through the use of intubation, ventilation, administration of intravenous fluids or blood and other supportive care as needed.
Once a patient is stable and no longer in immediate danger, there may be a shift of priority from stabilizing the patient to maintaining the state of their physical wellbeing. Moving patients every 2–3 hours by turning them side to side is crucial to avoiding bed sores as a result of being confined to a bed. Moving patients through the use of physical therapy also aids in preventing atelectasis, contractures or other orthopedic deformities which would interfere with a coma patient's recovery.
Pneumonia is also common in coma patients due to their inability to swallow which can then lead to aspiration. A coma patient's lack of a gag reflex and use of a feeding tube can result in food, drink or other solid organic matter being lodged within their lower respiratory tract (from the trachea to the lungs). This trapping of matter in their lower respiratory tract can ultimately lead to infection, resulting in aspiration pneumonia.
Coma patients may also deal with restlessness or seizures. As such, soft cloth restraints may be used to prevent them from pulling on tubes or dressings and side rails on the bed should be kept up to prevent patients from falling.
Coma elicits a wide variety of emotional reactions from the family members of affected patients, as well as from their primary caregivers. Research has shown that the severity of the injury causing the coma has no significant impact compared with how much time has passed since the injury occurred. Common reactions, such as desperation, anger, frustration, and denial, are possible. The focus of patient care should be on creating an amicable relationship with the family members or dependents of the comatose patient, as well as creating a rapport with the medical staff. Although the primary caregiver is of central importance, secondary caregivers can play a supporting role, temporarily relieving the primary caregiver's burden of tasks.
Comas can last from several days to, in particularly extreme cases, years. Some patients eventually gradually come out of the coma, some progress to a vegetative state or a minimally conscious state, and others die. Some patients who have entered a vegetative state go on to regain a degree of awareness; and in some cases may remain in vegetative state for years or even decades (the longest recorded period is 42 years).
Predicted chances of recovery will differ depending on which techniques were used to measure the patient's severity of neurological damage. Predictions of recovery are based on statistical rates, expressed as the level of chance the person has of recovering. Time is the best general predictor of a chance of recovery. For example, after four months of coma caused by brain damage, the chance of partial recovery is less than 15%, and the chance of full recovery is very low.
The outcome for coma and vegetative state depends on the cause, location, severity and extent of neurological damage. A deeper coma alone does not necessarily mean a slimmer chance of recovery; similarly, a milder coma does not indicate a higher chance of recovery. The most common cause of death for a person in a vegetative state is secondary infection such as pneumonia, which can occur in patients who lie still for extended periods.
People may emerge from a coma with a combination of physical, intellectual, and psychological difficulties that need special attention. It is common for coma patients to awaken in a profound state of confusion and experience dysarthria, the inability to articulate any speech. Recovery is usually gradual. In the first days, the patient may only awaken for a few minutes, with increased duration of wakefulness as their recovery progresses, and they may eventually recover full awareness. That said, some patients may never progress beyond very basic responses.
There are reports of people coming out of a coma after long periods of time. After 19 years in a minimally conscious state, Terry Wallis spontaneously began speaking and regained awareness of his surroundings.
A man with brain damage, trapped in a coma-like state for six years, was brought back to consciousness in 2003 by doctors who planted electrodes deep inside his brain. The method, called deep brain stimulation (DBS), successfully roused communication, complex movement and eating ability in the 38-year-old American man with a traumatic brain injury. His injuries left him in a minimally conscious state, a condition akin to a coma but characterized by occasional, but brief, evidence of environmental and self-awareness that coma patients lack.
Research by Eelco Wijdicks on the depiction of comas in movies was published in Neurology in May 2006. Wijdicks studied 30 films (made between 1970 and 2004) that portrayed actors in prolonged comas, and he concluded that only two films accurately depicted the state of a coma patient and the agony of waiting for a patient to awaken: Reversal of Fortune (1990) and The Dreamlife of Angels (1998). The remaining 28 were criticized for portraying miraculous awakenings with no lasting side effects, unrealistic depictions of treatments and equipment required, and comatose patients remaining muscular and tanned.
A person in a coma is said to be in an unconscious state. Perspectives on personhood, identity and consciousness come into play when discussing the metaphysical and bioethical views on comas.
It has been argued that unawareness should be just as ethically relevant and important as a state of awareness, and that there should be metaphysical support for unawareness as a state.
In the ethical discussions about disorders of consciousness (DOCs), two abilities are usually considered central: experiencing well-being and having interests. Well-being can broadly be understood as the positive effect related to what makes life good (according to specific standards) for the individual in question. The only condition for well-being broadly considered is the ability to experience its 'positiveness'. That said, because experiencing positiveness is a basic emotional process with phylogenetic roots, it is likely to occur at a completely unaware level and therefore introduces the idea of an unconscious well-being. As such, the ability to have interests is crucial for describing two abilities in which those in comas are deficient. Having an interest in a certain domain can be understood as having a stake in something that can affect what makes our life good in that domain. An interest is what directly and immediately improves life from a certain point of view or within a particular domain, or greatly increases the likelihood of life improvement, enabling the subject to realize some good. That said, sensitivity to reward signals is a fundamental element in the learning process, both consciously and unconsciously. Moreover, the unconscious brain is able to interact with its surroundings in a meaningful way and to produce meaningful information processing of stimuli coming from the external environment, including other people.
According to Hawkins, "1. A life is good if the subject is able to value, or more basically if the subject is able to care. Importantly, Hawkins stresses that caring has no need for cognitive commitment, i.e. for high-level cognitive activities: it requires being able to distinguish something, track it for a while, recognize it over time, and have certain emotional dispositions vis-à-vis something. 2. A life is good if the subject has the capacity for relationship with others, i.e. for meaningfully interacting with other people." This suggests that unawareness may (at least partly) fulfill both conditions identified by Hawkins for life to be good for a subject, thus making the unconscious ethically relevant. | [
{
"paragraph_id": 0,
"text": "A coma is a deep state of prolonged unconsciousness in which a person cannot be awakened, fails to respond normally to painful stimuli, light, or sound, lacks a normal wake-sleep cycle and does not initiate voluntary actions. The person may experience respiratory and circulatory problems due to the body's inability to maintain normal bodily functions. People in a coma often require extensive medical care to maintain their health and prevent complications such as pneumonia or blood clots. Coma patients exhibit a complete absence of wakefulness and are unable to consciously feel, speak or move. Comas can be derived by natural causes, or can be medically induced.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Clinically, a coma can be defined as the consistent inability to follow a one-step command. It can also be defined as a score of ≤ 8 on the Glasgow Coma Scale (GCS) lasting ≥ 6 hours. For a patient to maintain consciousness, the components of wakefulness and awareness must be maintained. Wakefulness describes the quantitative degree of consciousness, whereas awareness relates to the qualitative aspects of the functions mediated by the cortex, including cognitive abilities such as attention, sensory perception, explicit memory, language, the execution of tasks, temporal and spatial orientation and reality judgment. From a neurological perspective, consciousness is maintained by the activation of the cerebral cortex—the gray matter that forms the outer layer of the brain—and by the reticular activating system (RAS), a structure located within the brainstem.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The term 'coma', from the Greek κῶμα koma, meaning deep sleep, had already been used in the Hippocratic corpus (Epidemica) and later by Galen (second century AD). Subsequently, it was hardly used in the known literature up to the middle of the 17th century. The term is found again in Thomas Willis' (1621–1675) influential De anima brutorum (1672), where lethargy (pathological sleep), 'coma' (heavy sleeping), carus (deprivation of the senses) and apoplexy (into which carus could turn and which he localized in the white matter) are mentioned. The term carus is also derived from Greek, where it can be found in the roots of several words meaning soporific or sleepy. It can still be found in the root of the term 'carotid'. Thomas Sydenham (1624–89) mentioned the term 'coma' in several cases of fever (Sydenham, 1685).",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "General symptoms of a person in a comatose state are:",
"title": "Signs and symptoms"
},
{
"paragraph_id": 4,
"text": "Many types of problems can cause a coma. Forty percent of comatose states result from drug poisoning. Certain drug use under certain conditions can damage or weaken the synaptic functioning in the ascending reticular activating system (ARAS) and keep the system from properly functioning to arouse the brain. Secondary effects of drugs, which include abnormal heart rate and blood pressure, as well as abnormal breathing and sweating, may also indirectly harm the functioning of the ARAS and lead to a coma. Given that drug poisoning is the cause for a large portion of patients in a coma, hospitals first test all comatose patients by observing pupil size and eye movement, through the vestibular-ocular reflex. (See Diagnosis below.)",
"title": "Causes"
},
{
"paragraph_id": 5,
"text": "The second most common cause of coma, which makes up about 25% of cases, is lack of oxygen, generally resulting from cardiac arrest. The Central Nervous System (CNS) requires a great deal of oxygen for its neurons. Oxygen deprivation in the brain, also known as hypoxia, causes sodium and calcium from outside of the neurons to decrease and intracellular calcium to increase, which harms neuron communication. Lack of oxygen in the brain also causes ATP exhaustion and cellular breakdown from cytoskeleton damage and nitric oxide production.",
"title": "Causes"
},
{
"paragraph_id": 6,
"text": "Twenty percent of comatose states result from an ischemic stroke, brain hemorrhage, or brain tumor. During a stroke, blood flow to part of the brain is restricted or blocked. An ischemic stroke, brain hemorrhage, or brain tumor may cause restriction of blood flow. Lack of blood to cells in the brain prevents oxygen from getting to the neurons, and consequently causes cells to become disrupted and die. As brain cells die, brain tissue continues to deteriorate, which may affect the functioning of the ARAS, causing unconsciousness and coma.",
"title": "Causes"
},
{
"paragraph_id": 7,
"text": "Comatose cases can also result from traumatic brain injury, excessive blood loss, malnutrition, hypothermia, hyperthermia, hyperammonemia, abnormal glucose levels, and many other biological disorders. Furthermore, studies show that 1 out of 8 patients with traumatic brain injury experience a comatose state.",
"title": "Causes"
},
{
"paragraph_id": 8,
"text": "Heart-related causes of coma include cardiac arrest, myocardial infarction, heart failure, arrhythmia when severe, cardiogenic shock, myocarditis, and pericarditis. Respiratory arrest is the only lung condition to cause coma, but many different lung conditions can cause decreased level of consciousness, but don't reach coma.",
"title": "Causes"
},
{
"paragraph_id": 9,
"text": "Other causes of coma include severe or persistent seizures, kidney failure, liver failure, hyperglycemia, hypoglycemia, and infections involving the brain, like meningitis and encephalitis.",
"title": "Causes"
},
{
"paragraph_id": 10,
"text": "Injury to either or both of the cerebral cortex or the reticular activating system (RAS) is sufficient to cause a person to enter coma.",
"title": "Pathophysiology"
},
{
"paragraph_id": 11,
"text": "The cerebral cortex is the outer layer of neural tissue of the cerebrum of the brain. The cerebral cortex is composed of gray matter which consists of the nuclei of neurons, whereas the inner portion of the cerebrum is composed of white matter and is composed of the axons of neuron. White matter is responsible for perception, relay of the sensory input via the thalamic pathway, and many other neurological functions, including complex thinking.",
"title": "Pathophysiology"
},
{
"paragraph_id": 12,
"text": "The RAS, on the other hand, is a more primitive structure in the brainstem which includes the reticular formation (RF). The RAS has two tracts, the ascending and descending tract. The ascending tract, or ascending reticular activating system (ARAS), is made up of a system of acetylcholine-producing neurons, and works to arouse and wake up the brain. Arousal of the brain begins from the RF, through the thalamus, and then finally to the cerebral cortex. Any impairment in ARAS functioning, a neuronal dysfunction, along the arousal pathway stated directly above, prevents the body from being aware of its surroundings. Without the arousal and consciousness centers, the body cannot awaken, remaining in a comatose state.",
"title": "Pathophysiology"
},
{
"paragraph_id": 13,
"text": "The severity and mode of onset of coma depends on the underlying cause. There are two main subdivisions of a coma: structural and diffuse neuronal. A structural cause, for example, is brought upon by a mechanical force that brings about cellular damage, such as physical pressure or a blockage in neural transmission. While a diffuse cause is limited to aberrations of cellular function, that fall under a metabolic or toxic subgroup. Toxin-induced comas are caused by extrinsic substances, whereas metabolic-induced comas are caused by intrinsic processes, such as body thermoregulation or ionic imbalances(e.g. sodium). For instance, severe hypoglycemia (low blood sugar) or hypercapnia (increased carbon dioxide levels in the blood) are examples of a metabolic diffuse neuronal dysfunction. Hypoglycemia or hypercapnia initially cause mild agitation and confusion, but progress to obtundation, stupor, and finally, complete unconsciousness. In contrast, coma resulting from a severe traumatic brain injury or subarachnoid hemorrhage can be instantaneous. The mode of onset may therefore be indicative of the underlying cause.",
"title": "Pathophysiology"
},
{
"paragraph_id": 14,
"text": "Structural and diffuse causes of coma are not isolated from one another, as one can lead to the other in some situations. For instance, coma induced by a diffuse metabolic process, such as hypoglycemia, can result in a structural coma if it is not resolved. Another example is if cerebral edema, a diffuse dysfunction, leads to ischemia of the brainstem, a structural issue, due to the blockage of the circulation in the brain.",
"title": "Pathophysiology"
},
{
"paragraph_id": 15,
"text": "Although diagnosis of coma is simple, investigating the underlying cause of onset can be rather challenging. As such, after gaining stabilization of the patient's airways, breathing and circulation (the basic ABCs) various diagnostic tests, such as physical examinations and imaging tools (CT scan, MRI, etc.) are employed to access the underlying cause of the coma.",
"title": "Diagnosis"
},
{
"paragraph_id": 16,
"text": "When an unconscious person enters a hospital, the hospital utilizes a series of diagnostic steps to identify the cause of unconsciousness. According to Young, the following steps should be taken when dealing with a patient possibly in a coma:",
"title": "Diagnosis"
},
{
"paragraph_id": 17,
"text": "In the initial assessment of coma, it is common to gauge the level of consciousness on the AVPU (alert, vocal stimuli, painful stimuli, unresponsive) scale by spontaneously exhibiting actions and, assessing the patient's response to vocal and painful stimuli. More elaborate scales, such as the Glasgow Coma Scale, quantify an individual's reactions such as eye opening, movement and verbal response in order to indicate their extent of brain injury. The patient's score can vary from a score of 3 (indicating severe brain injury and death) to 15 (indicating mild or no brain injury).",
"title": "Diagnosis"
},
{
"paragraph_id": 18,
"text": "In those with deep unconsciousness, there is a risk of asphyxiation as the control over the muscles in the face and throat is diminished. As a result, those presenting to a hospital with coma are typically assessed for this risk (\"airway management\"). If the risk of asphyxiation is deemed high, doctors may use various devices (such as an oropharyngeal airway, nasopharyngeal airway or endotracheal tube) to safeguard the airway.",
"title": "Diagnosis"
},
{
"paragraph_id": 19,
"text": "Imaging basically encompasses computed tomography (CAT or CT) scan of the brain, or MRI for example, and is performed to identify specific causes of the coma, such as hemorrhage in the brain or herniation of the brain structures. Special tests such as an EEG can also show a lot about the activity level of the cortex such as semantic processing, presence of seizures, and are important available tools not only for the assessment of the cortical activity but also for predicting the likelihood of the patient's awakening. The autonomous responses such as the skin conductance response may also provide further insight on the patient's emotional processing.",
"title": "Diagnosis"
},
{
"paragraph_id": 20,
"text": "In the treatment of traumatic brain injury (TBI), there are 4 examination methods that have proved useful: skull x-ray, angiography, computed tomography (CT), and magnetic resonance imaging (MRI). The skull x-ray can detect linear fractures, impression fractures (expression fractures) and burst fractures. Angiography is used on rare occasions for TBIs i.e. when there is suspicion of an aneurysm, carotid sinus fistula, traumatic vascular occlusion, and vascular dissection. A CT can detect changes in density between the brain tissue and hemorrhages like subdural and intracerebral hemorrhages. MRIs are not the first choice in emergencies because of the long scanning times and because fractures cannot be detected as well as CT. MRIs are used for the imaging of soft tissues and lesions in the posterior fossa which cannot be found with the use of CT.",
"title": "Diagnosis"
},
{
"paragraph_id": 21,
"text": "Assessment of the brainstem and cortical function through special reflex tests such as the oculocephalic reflex test (doll's eyes test), oculovestibular reflex test (cold caloric test), corneal reflex, and the gag reflex. Reflexes are a good indicator of what cranial nerves are still intact and functioning and is an important part of the physical exam. Due to the unconscious status of the patient, only a limited number of the nerves can be assessed. These include the cranial nerves number 2 (CN II), number 3 (CN III), number 5 (CN V), number 7 (CN VII), and cranial nerves 9 and 10 (CN IX, CN X).",
"title": "Diagnosis"
},
{
"paragraph_id": 22,
"text": "Assessment of posture and physique is the next step. It involves general observation about the patient's positioning. There are often two stereotypical postures seen in comatose patients. Decorticate posturing is a stereotypical posturing in which the patient has arms flexed at the elbow, and arms adducted toward the body, with both legs extended. Decerebrate posturing is a stereotypical posturing in which the legs are similarly extended (stretched), but the arms are also stretched (extended at the elbow). The posturing is critical since it indicates where the damage is in the central nervous system. A decorticate posturing indicates a lesion (a point of damage) at or above the red nucleus, whereas a decerebrate posturing indicates a lesion at or below the red nucleus. In other words, a decorticate lesion is closer to the cortex, as opposed to a decerebrate posturing which indicates that the lesion is closer to the brainstem.",
"title": "Diagnosis"
},
{
"paragraph_id": 23,
"text": "Pupil assessment is often a critical portion of a comatose examination, as it can give information as to the cause of the coma; the following table is a technical, medical guideline for common pupil findings and their possible interpretations:",
"title": "Diagnosis"
},
{
"paragraph_id": 24,
"text": "A coma can be classified as (1) supratentorial (above Tentorium cerebelli), (2) infratentorial (below Tentorium cerebelli), (3) metabolic or (4) diffused. This classification is merely dependent on the position of the original damage that caused the coma, and does not correlate with severity or the prognosis. The severity of coma impairment however is categorized into several levels. Patients may or may not progress through these levels. In the first level, the brain responsiveness lessens, normal reflexes are lost, the patient no longer responds to pain and cannot hear.",
"title": "Diagnosis"
},
{
"paragraph_id": 25,
"text": "The Rancho Los Amigos Scale is a complex scale that has eight separate levels, and is often used in the first few weeks or months of coma while the patient is under closer observation, and when shifts between levels are more frequent.",
"title": "Diagnosis"
},
{
"paragraph_id": 26,
"text": "Treatment for people in a coma will depend on the severity and cause of the comatose state. Upon admittance to an emergency department, coma patients will usually be placed in an Intensive Care Unit (ICU) immediately, where maintenance of the patient's respiration and circulation become a first priority. Stability of their respiration and circulation is sustained through the use of intubation, ventilation, administration of intravenous fluids or blood and other supportive care as needed.",
"title": "Treatment"
},
{
"paragraph_id": 27,
"text": "Once a patient is stable and no longer in immediate danger, there may be a shift of priority from stabilizing the patient to maintaining the state of their physical wellbeing. Moving patients every 2–3 hours by turning them side to side is crucial to avoiding bed sores as a result of being confined to a bed. Moving patients through the use of physical therapy also aids in preventing atelectasis, contractures or other orthopedic deformities which would interfere with a coma patient's recovery.",
"title": "Treatment"
},
{
"paragraph_id": 28,
"text": "Pneumonia is also common in coma patients due to their inability to swallow which can then lead to aspiration. A coma patient's lack of a gag reflex and use of a feeding tube can result in food, drink or other solid organic matter being lodged within their lower respiratory tract (from the trachea to the lungs). This trapping of matter in their lower respiratory tract can ultimately lead to infection, resulting in aspiration pneumonia.",
"title": "Treatment"
},
{
"paragraph_id": 29,
"text": "Coma patients may also deal with restlessness or seizures. As such, soft cloth restraints may be used to prevent them from pulling on tubes or dressings and side rails on the bed should be kept up to prevent patients from falling.",
"title": "Treatment"
},
{
"paragraph_id": 30,
"text": "Coma has a wide variety of emotional reactions from the family members of the affected patients, as well as the primary care givers taking care of the patients. Research has shown that the severity of injury causing coma was found to have no significant impact compared to how much time has passed since the injury occurred. Common reactions, such as desperation, anger, frustration, and denial are possible. The focus of the patient care should be on creating an amicable relationship with the family members or dependents of a comatose patient as well as creating a rapport with the medical staff. Although there is heavy importance of a primary care taker, secondary care takers can play a supporting role to temporarily relieve the primary care taker's burden of tasks.",
"title": "Treatment"
},
{
"paragraph_id": 31,
"text": "Comas can last from several days to, in particularly extreme cases, years. Some patients eventually gradually come out of the coma, some progress to a vegetative state or a minimally conscious state, and others die. Some patients who have entered a vegetative state go on to regain a degree of awareness; and in some cases may remain in vegetative state for years or even decades (the longest recorded period is 42 years).",
"title": "Prognosis"
},
{
"paragraph_id": 32,
"text": "Predicted chances of recovery will differ depending on which techniques were used to measure the patient's severity of neurological damage. Predictions of recovery are based on statistical rates, expressed as the level of chance the person has of recovering. Time is the best general predictor of a chance of recovery. For example, after four months of coma caused by brain damage, the chance of partial recovery is less than 15%, and the chance of full recovery is very low.",
"title": "Prognosis"
},
{
"paragraph_id": 33,
"text": "The outcome for coma and vegetative state depends on the cause, location, severity and extent of neurological damage. A deeper coma alone does not necessarily mean a slimmer chance of recovery; similarly, a milder coma does not indicate a higher chance of recovery. The most common cause of death for a person in a vegetative state is secondary infection such as pneumonia, which can occur in patients who lie still for extended periods.",
"title": "Prognosis"
},
{
"paragraph_id": 34,
"text": "People may emerge from a coma with a combination of physical, intellectual, and psychological difficulties that need special attention. It is common for coma patients to awaken in a profound state of confusion and experience dysarthria, the inability to articulate any speech. Recovery is usually gradual. In the first days, the patient may only awaken for a few minutes, with increased duration of wakefulness as their recovery progresses, and they may eventually recover full awareness. That said, some patients may never progress beyond very basic responses.",
"title": "Prognosis"
},
{
"paragraph_id": 35,
"text": "There are reports of people coming out of a coma after long periods of time. After 19 years in a minimally conscious state, Terry Wallis spontaneously began speaking and regained awareness of his surroundings.",
"title": "Prognosis"
},
{
"paragraph_id": 36,
"text": "A man with brain-damage and trapped in a coma-like state for six years was brought back to consciousness in 2003 by doctors who planted electrodes deep inside his brain. The method, called deep brain stimulation (DBS), successfully roused communication, complex movement and eating ability in the 38-year-old American man with a traumatic brain injury. His injuries left him in a minimally conscious state, a condition akin to a coma but characterized by occasional, but brief, evidence of environmental and self-awareness that coma patients lack.",
"title": "Prognosis"
},
{
"paragraph_id": 37,
"text": "Research by Eelco Wijdicks on the depiction of comas in movies was published in Neurology in May 2006. Wijdicks studied 30 films (made between 1970 and 2004) that portrayed actors in prolonged comas, and he concluded that only two films accurately depicted the state of a coma patient and the agony of waiting for a patient to awaken: Reversal of Fortune (1990) and The Dreamlife of Angels (1998). The remaining 28 were criticized for portraying miraculous awakenings with no lasting side effects, unrealistic depictions of treatments and equipment required, and comatose patients remaining muscular and tanned.",
"title": "Society and culture"
},
{
"paragraph_id": 38,
"text": "A person in a coma is said to be in an unconscious state. Perspectives on personhood, identity and consciousness come into play when discussing the metaphysical and bioethical views on comas.",
"title": "Society and culture"
},
{
"paragraph_id": 39,
"text": "It has been argued that unawareness should be just as ethically relevant and important as a state of awareness and that there should be metaphysical support of unawareness as a state.",
"title": "Society and culture"
},
{
"paragraph_id": 40,
"text": "In the ethical discussions about disorders of consciousness (DOCs), two abilities are usually considered as central: experiencing well-being and having interest. Well-being can broadly be understood as the positive effect related to what makes life good (according to specific standards) for the individual in question. The only condition for well-being broadly considered is the ability to experience its 'positiveness'. That said, because experiencing positiveness is a basic emotional process with phylogenetic roots, it is likely to occur at a completely unaware level and therefore, introduces the idea of an unconscious well-being. As such, the ability of having interests, is crucial for describing two abilities which those with comas are deficient in. Having an interest in a certain domain can be understood as having a stake in something that can affect what makes our life good in that domain. An interest is what directly and immediately improves life from a certain point of view or within a particular domain, or greatly increases the likelihood of life improvement enabling the subject to realize some good. That said, sensitivity to reward signals is a fundamental element in the learning process, both consciously and unconsciously. Moreover, the unconscious brain is able to interact with its surroundings in a meaningful way and to produce meaningful information processing of stimuli coming from the external environment, including other people.",
"title": "Society and culture"
},
{
"paragraph_id": 41,
"text": "According to Hawkins, \"1. A life is good if the subject is able to value, or more basically if the subject is able to care. Importantly, Hawkins stresses that caring has no need for cognitive commitment, i.e. for high-level cognitive activities: it requires being able to distinguish something, track it for a while, recognize it over time, and have certain emotional dispositions vis-à-vis something. 2. A life is good if the subject has the capacity for relationship with others, i.e. for meaningfully interacting with other people.\" This suggests that unawareness may (at least partly) fulfill both conditions identified by Hawkins for life to be good for a subject, thus making the unconscious ethically relevant.",
"title": "Society and culture"
}
] | A coma is a deep state of prolonged unconsciousness in which a person cannot be awakened, fails to respond normally to painful stimuli, light, or sound, lacks a normal wake-sleep cycle and does not initiate voluntary actions. The person may experience respiratory and circulatory problems due to the body's inability to maintain normal bodily functions. People in a coma often require extensive medical care to maintain their health and prevent complications such as pneumonia or blood clots. Coma patients exhibit a complete absence of wakefulness and are unable to consciously feel, speak or move. Comas can result from natural causes or can be medically induced. Clinically, a coma can be defined as the consistent inability to follow a one-step command. It can also be defined as a score of ≤ 8 on the Glasgow Coma Scale (GCS) lasting ≥ 6 hours. For a patient to maintain consciousness, the components of wakefulness and awareness must be maintained. Wakefulness describes the quantitative degree of consciousness, whereas awareness relates to the qualitative aspects of the functions mediated by the cortex, including cognitive abilities such as attention, sensory perception, explicit memory, language, the execution of tasks, temporal and spatial orientation and reality judgment. From a neurological perspective, consciousness is maintained by the activation of the cerebral cortex—the gray matter that forms the outer layer of the brain—and by the reticular activating system (RAS), a structure located within the brainstem. | 2001-06-02T18:48:48Z | 2023-11-19T11:01:49Z | [
"Template:Infobox medical condition (new)",
"Template:Reflist",
"Template:Webarchive",
"Template:Dead link",
"Template:More citations needed section",
"Template:Cite web",
"Template:Cite encyclopedia",
"Template:Scholia",
"Template:Medical resources",
"Template:Cite news",
"Template:Merriam-Webster",
"Template:Short description",
"Template:Hatgrp",
"Template:Lang",
"Template:Cite book",
"Template:Citation",
"Template:Cite journal",
"Template:Citation needed",
"Template:Main",
"Template:Portal",
"Template:Wiktionary",
"Template:Disorders of consciousness",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Coma |
5,722 | Call of Cthulhu (role-playing game) | Call of Cthulhu is a horror fiction role-playing game based on H. P. Lovecraft's story of the same name and the associated Cthulhu Mythos. The game, often abbreviated as CoC, is published by Chaosium; it was first released in 1981 and is in its seventh edition, with licensed foreign language editions available as well. Its game system is based on Chaosium's Basic Role-Playing (BRP) with additions for the horror genre. These include special rules for sanity and luck.
Call of Cthulhu is set in a darker version of our world based on H. P. Lovecraft's observation (from his essay, "Supernatural Horror in Literature") that "The oldest and strongest emotion of mankind is fear, and the strongest kind of fear is fear of the unknown." The original edition, first published in 1981, uses Basic Role-Playing as its basis and is set in the 1920s, the setting of many of Lovecraft's stories. The Cthulhu by Gaslight supplement blends the occult and Holmesian mystery and is mostly set in England during the 1890s. Cthulhu Now and Delta Green are set in a modern/1980s era and deal with conspiracies. Recent settings include 1000 AD (Cthulhu: Dark Ages), the 23rd century (Cthulhu Rising) and Ancient Rome (Cthulhu Invictus). The protagonists may also travel to places that are not of this earth, such as the Dreamlands (which can be accessed through dreams as well as being physically connected to the earth), other planets, or the voids of space. In keeping with the Lovecraftian theme, the gamemaster is called the Keeper of Arcane Lore ("the keeper"), while player characters are called Investigators of the Unknown ("investigators").
While predominantly focused on Lovecraftian fiction and horror, playing in the Cthulhu Mythos is not required. The system also includes ideas for non-Lovecraft games, such as using folk horror or the settings of other authors and horror movies, or with entirely custom settings and creatures by the gamemaster and/or players.
CoC uses the Basic Role-Playing system first developed for RuneQuest and used in other Chaosium games. It is skill-based: player characters improve their skills by succeeding at using them, so long as they stay functionally healthy and sane. They do not, however, gain hit points and do not become significantly harder to kill. The game does not use levels.
CoC uses percentile dice (with results ranging from 1 to 100) to determine success or failure. Every player statistic is intended to be compatible with the notion that there is a probability of success for a particular action given what the player is capable of doing. For example, an artist may have a 75% chance of being able to draw something (represented by having 75 in Art skill), and thus rolling a number under 75 would yield a success. Rolling 1⁄5 or less of the skill level (1-15 in the example) would be a "special success" (or an "impale" for combat skills) and would yield some extra bonus to be determined by the keeper. For example, the artist character might draw especially well or especially fast, or catch some unapparent detail in the drawing.
The players take the roles of ordinary people drawn into the realm of the mysterious: detectives, criminals, scholars, artists, war veterans, etc. Often, happenings begin innocently enough, until more and more of the workings behind the scenes are revealed. As the characters learn more of the true horrors of the world and the irrelevance of humanity, their sanity (represented by "Sanity Points", abbreviated SAN) inevitably withers away. The game includes a mechanism for determining how damaged a character's sanity is at any given point; encountering the horrific beings usually triggers a loss of SAN points. To gain the tools they need to defeat the horrors – mystic knowledge and magic – the characters may end up losing some of their sanity, though other means such as pure firepower or simply outsmarting one's opponents also exist. CoC has a reputation as a game in which it is quite common for a player character to die in gruesome circumstances or end up in a mental institution. Eventual triumph of the players is not guaranteed.
The original conception of Call of Cthulhu was Dark Worlds, a game commissioned by the publisher Chaosium but never published. Sandy Petersen contacted them regarding writing a supplement for their popular fantasy game RuneQuest set in Lovecraft's Dreamlands. He took over the writing of Call of Cthulhu, and the game was released in 1981. Petersen oversaw the first four editions with only minor changes to the system. Once he left, development was continued by Lynn Willis, who was credited as co-author in the fifth and sixth editions. After the death of Willis, Mike Mason became Call of Cthulhu line editor in 2013, continuing its development with Paul Fricker. Together they made more significant rules alterations than in any previous edition, culminating in the release of the 7th edition in 2014.
For those grounded in the RPG tradition, the very first release of Call of Cthulhu created a brand new framework for table-top gaming. Rather than the traditional format established by Dungeons & Dragons, which often involved the characters wandering through caves or tunnels and fighting different types of monsters, Sandy Petersen introduced the concept of the Onion Skin: Interlocking layers of information and nested clues that lead the player characters from seemingly minor investigations into a missing person to discovering mind-numbingly awful, global conspiracies to destroy the world. Unlike its predecessor games, CoC assumed that most investigators would not survive, alive or sane, and that the only safe way to deal with the vast majority of nasty things described in the rule books was to run away. A well-run CoC campaign should engender a sense of foreboding and inevitable doom in its players. The style and setting of the game, in a relatively modern time period, created an emphasis on real-life settings, character research, and thinking one's way around trouble.
The first book of Call of Cthulhu adventures was Shadows of Yog-Sothoth. In this work, the characters come upon a secret society's foul plot to destroy mankind, and pursue it first near to home and then in a series of exotic locations. This template was to be followed in many subsequent campaigns, including Fungi from Yuggoth (later known as Curse of Cthulhu and Day of the Beast), Spawn of Azathoth, and possibly the most highly acclaimed, Masks of Nyarlathotep.
Shadows of Yog-Sothoth is important not only because it represents the first published addition to the boxed first edition of Call of Cthulhu, but because its format defined a new way of approaching a campaign of linked RPG scenarios involving actual clues for the would-be detectives amongst the players to follow and link in order to uncover the dastardly plots afoot. Its format has been used by every other campaign-length Call of Cthulhu publication. The standard of CoC scenarios was well received by independent reviewers. The Asylum and Other Tales, a series of stand-alone articles released in 1983, was rated an overall 9/10 in Issue 47 of White Dwarf magazine.
The standard of the included 'clue' material varies from scenario to scenario, but reached its zenith in the original boxed versions of the Masks of Nyarlathotep and Horror on the Orient Express campaigns. Inside these one could find matchbooks and business cards apparently defaced by non-player characters, newspaper cuttings and (in the case of Orient Express) period passports to which players could attach their photographs, increasing the sense of immersion. Indeed, during the period that these supplements were produced, third party campaign publishers strove to emulate the quality of the additional materials, often offering separately-priced 'deluxe' clue packages for their campaigns.
Additional milieux were provided by Chaosium with the release of Dreamlands, a boxed supplement containing additional rules needed for playing within the Lovecraft Dreamlands, a large map and a scenario booklet, and Cthulhu By Gaslight, another boxed set which moved the action from the 1920s to the 1890s.
In 1987, Chaosium issued the supplement titled Cthulhu Now, a collection of rules, supplemental source materials and scenarios for playing Call of Cthulhu in the present day. This proved to be a very popular alternative milieu, so much so that much of the supplemental material is now included in the core rule book.
Lovecraft Country was a line of supplements for Call of Cthulhu released in 1990. These supplements were overseen by Keith Herber and provided backgrounds and adventures set in Lovecraft's fictional towns of Arkham, Kingsport, Innsmouth, Dunwich, and their environs. The intent was to give investigators a common base, as well as to center the action on well-drawn characters with clear motivations.
In 1987, Terror Australis: Call of Cthulhu in the Land Down Under was published. In 2018, a revised and updated version of the 1987 game was reissued, with about triple the content and two new games. It requires the Call of Cthulhu Keeper's Rulebook (7th Edition) and is usable with Pulp Cthulhu.
In the years since the collapse of the Mythos collectible card game (production ceased in 1997), the release of CoC books has been very sporadic, with up to a year between releases. Chaosium struggled with near bankruptcy for many years before finally starting their upward climb again.
2005 was Chaosium's busiest year for many years, with 10 releases for the game. Chaosium took to marketing "monographs"—short books by individual writers with editing and layout provided out-of-house—directly to the consumer, allowing the company to gauge market response to possible new works. The range of times and places in which the horrors of the Mythos can be encountered was also expanded in late 2005 onward with the addition of Cthulhu Dark Ages by Stéphane Gesbert, which gives a framework for playing games set in 11th century Europe, Secrets of Japan by Michael Dziesinski for gaming in modern-day Japan, and Secrets of Kenya by David Conyers for gaming in interwar period Africa.
In July 2011, Chaosium announced it would re-release a 30th anniversary edition of the CoC 6th edition role-playing game. This 320-page book features thick (3 mm) leatherette hardcovers with the front cover and spine stamped with gold foil. The interior pages are printed in black ink, on 90 gsm matte art paper. The binding is thread sewn, square backed. Chaosium offered a one-time printing of this Collector's Edition.
On May 28, 2013, a crowdfunding campaign on Kickstarter for the 7th edition of Call of Cthulhu was launched with a goal of $40,000; it ended on June 29 of the same year having collected $561,836. It included many more major revisions than any previous edition, and also split the core rules into two books, a Player's Guide and Keeper's Guide. Problems and delays fulfilling the Kickstarters for the 7th edition of Call of Cthulhu led Greg Stafford and Sandy Petersen (who had both left in 1998) to return to an active role at Chaosium in June 2015.
The available milieux were also expanded with the release of Cthulhu Through the Ages, a supplement containing additional rules needed for playing within the Roman Empire, Mythic Iceland, a futuristic micro-setting, and the End Times, where the monsters of the mythos attempt to subjugate or destroy the world.
Chaosium has licensed other publishers to create supplements, video, card and board games using the setting and the Call of Cthulhu brand. Many, such as Delta Green by Pagan Publishing and Arkham Horror by Fantasy Flight, have moved away completely from Call of Cthulhu. Other licensees have included Infogrames, Miskatonic River Press, Theater of the Mind Enterprises, Triad Entertainment, Games Workshop, RAFM, Goodman Games, Grenadier Models Inc. and Yog-Sothoth.com. These supplements may be set in different time frames or even different game universes from the original game.
In February 2008, Pelgrane Press published Trail of Cthulhu, a stand-alone game created by Kenneth Hite using the GUMSHOE System developed by Robin Laws. GUMSHOE is specifically designed to be used in investigative games.
In September 2008, Reality Deviant Publications published Shadows of Cthulhu, a supplement that brings Lovecraftian gaming to Green Ronin's True20 system.
In October 2009, Reality Blurs published Realms of Cthulhu, a supplement for Pinnacle Entertainment's Savage Worlds system.
Pagan Publishing published Delta Green, a series of supplements originally set in the 1990s, although later supplements add support for playing closer to the present day. In these, player characters are agents of a secret agency known as Delta Green, which fights against creatures from the Mythos and conspiracies related to them. Arc Dream Publishing released a new version of Delta Green in 2016 as a standalone game, partially using the mechanics from Call of Cthulhu.
In 2001, a stand-alone version of Call of Cthulhu was released by Wizards of the Coast, for the d20 system. Intended to preserve the feeling of the original game, the d20 conversion of the game rules was supposed to make the game more accessible to the large D&D player base. The d20 system also made it possible to use Dungeons & Dragons characters in Call of Cthulhu, as well as to introduce the Cthulhu Mythos into Dungeons & Dragons games. The d20 version of the game is no longer supported by Wizards as per their contract with Chaosium. Chaosium included d20 stats as an appendix in three releases (see Lovecraft Country), but has since dropped the "dual stat" idea.
Mythos was a collectible card game (CCG) based on the Cthulhu Mythos that Chaosium produced and marketed during the mid-1990s. While generally praised for its fast gameplay and unique mechanics, it ultimately failed to gain a very large market presence. It bears mention because its eventual failure brought the company to hard times that affected its ability to produce material for Call of Cthulhu. Call of Cthulhu: The Card Game is a second collectible card game, produced by Fantasy Flight Games.
The first licensed Call of Cthulhu 25-millimetre (1.0-inch) gaming miniatures were sculpted by Andrew Chernack and released by Grenadier Models in boxed sets and blister packs in 1983. The license was later transferred to RAFM. As of 2011, RAFM still produce licensed Call of Cthulhu models sculpted by Bob Murch. Both lines include investigator player character models and the iconic monsters of the Cthulhu mythos. In July 2015, Reaper Miniatures started its third "Bones Kickstarter", a Kickstarter intended to help the company migrate some miniatures from metal to plastic and to introduce some new ones. Among the stretch goals was the second $50 expansion, devoted to the Mythos, with miniatures such as Cultists, Deep Ones, Mi'Go, and an extra $15 Shub-Niggurath "miniature" (it is, at least, 6x4 squares). Those miniatures are expected to remain in the Reaper Miniatures catalogue after the Kickstarter project finishes. In 2020 Chaosium announced a license agreement with Ardacious for Call of Cthulhu virtual miniatures to be released on their augmented reality app Ardent Roleplay.
Shadow of the Comet (later repackaged as Call of Cthulhu: Shadow of the Comet) is an adventure game developed and released by Infogrames in 1993. The game is based on H. P. Lovecraft's Cthulhu Mythos and uses many elements from Lovecraft's The Dunwich Horror and The Shadow Over Innsmouth. A follow-up game, Prisoner of Ice, is not a direct sequel.
Prisoner of Ice (also Call of Cthulhu: Prisoner of Ice) is an adventure game developed and released by Infogrames for the PC and Macintosh computers in 1995 in America and Europe. It is based on H. P. Lovecraft's Cthulhu Mythos, particularly At the Mountains of Madness, and is a follow-up to Infogrames' earlier Shadow of the Comet. In 1997, the game was ported to the Sega Saturn and PlayStation exclusively in Japan.
Call of Cthulhu: Dark Corners of the Earth is a licensed first-person shooter adventure game by Headfirst Productions, based on the Call of Cthulhu campaign Escape from Innsmouth and released by Bethesda Softworks in 2005/2006 for the PC and Xbox.
In April 2011, Chaosium and new developer Red Wasp Design announced a joint project to produce a mobile video game based on the Call of Cthulhu RPG, entitled Call of Cthulhu: The Wasted Land. The game was released on January 30, 2012.
In 2018, Metarcade produced Cthulhu Chronicles, a game for iOS with a campaign of nine mobile interactive fiction stories set in 1920s England based on Call of Cthulhu. The first five stories were released on July 10, 2018.
Call of Cthulhu is a survival horror role-playing video game developed by Cyanide and published by Focus Home Interactive for PlayStation 4, Xbox One and Windows. The game features a semi-open world environment and incorporates themes of Lovecraftian and psychological horror into a story which includes elements of investigation and stealth. It is inspired by H. P. Lovecraft's short story "The Call of Cthulhu".
Multiple reviews of various editions appeared in Space Gamer/Fantasy Gamer.
Multiple reviews of various editions appeared in White Dwarf.
Several reviews of various editions and supplements also appeared in Dragon.
In his 1990 book The Complete Guide to Role-Playing Games, game critic Rick Swan gave the game a top rating of 4 out of 4, calling it "a masterpiece, easily the best horror RPG ever published and possibly the best RPG, period ... breathtaking in scope and as richly textured as a fine novel. All role-players owe it to themselves to experience this truly remarkable game."
In Issue 68 of Challenge, Craig Sheeley reviewed the fifth edition and liked the revisions. "The entire character generation process is highly streamlined and easily illustrated on a two-page flowchart." He also liked the inclusion of material from all three of CoC's settings (1890s, 1920s, 1990s), calling it "One of the best features of this edition." And he was very impressed with the layout of the book, commenting, "The organization and format of this book deserve special mention. I hold that every game company should study this book to learn what to do right." He concluded, "I am seriously impressed with this product. From cover to cover, it’s well done."
In a reader poll conducted by UK magazine Arcane in 1996 to determine the 50 most popular roleplaying games of all time, Call of Cthulhu was ranked 1st. Editor Paul Pettengale commented: "Call of Cthulhu is fully deserved of the title as the most popular roleplaying system ever - it's a game that doesn't age, is eminently playable, and which hangs together perfectly. The system, even though it's over ten years old, is still one of the very best you'll find in any roleplaying game. Also, there's not a referee in the land who could say they've read every Lovecraft inspired book or story going, so there's a pretty-well endless supply of scenario ideas. It's simply marvellous."
Scott Taylor for Black Gate in 2013 rated Call of Cthulhu as #4 in the top ten role-playing games of all time, saying "With various revisions, but never a full rewrite of its percentile-based system, Call of Cthulhu might be antiquated by today's standards, but remember it is supposed to be set in the 1920s, so to me that seems more than appropriate."
Following Dungeons & Dragons, Call of Cthulhu has been reported to be the game most played on the virtual tabletop platform Roll20 in 2021. It has also been reported to have found success especially in Korea and Japan, and to have overtaken D&D in Japan.
The game has won multiple awards: | [
{
"paragraph_id": 0,
"text": "Call of Cthulhu is a horror fiction role-playing game based on H. P. Lovecraft's story of the same name and the associated Cthulhu Mythos. The game, often abbreviated as CoC, is published by Chaosium; it was first released in 1981 and is in its seventh edition, with licensed foreign language editions available as well. Its game system is based on Chaosium's Basic Role-Playing (BRP) with additions for the horror genre. These include special rules for sanity and luck.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Call of Cthulhu is set in a darker version of our world based on H. P. Lovecraft's observation (from his essay, \"Supernatural Horror in Literature\") that \"The oldest and strongest emotion of mankind is fear, and the strongest kind of fear is fear of the unknown.\" The original edition, first published in 1981, uses Basic Role-Playing as its basis and is set in the 1920s, the setting of many of Lovecraft's stories. The Cthulhu by Gaslight supplement blends the occult and Holmesian mystery and is mostly set in England during the 1890s. Cthulhu Now and Delta Green are set in a modern/1980s era and deal with conspiracies. Recent settings include 1000 AD (Cthulhu: Dark Ages), the 23rd century (Cthulhu Rising) and Ancient Rome (Cthulhu Invictus). The protagonists may also travel to places that are not of this earth, such as the Dreamlands (which can be accessed through dreams as well as being physically connected to the earth), other planets, or the voids of space. In keeping with the Lovecraftian theme, the gamemaster is called the Keeper of Arcane Lore (\"the keeper\"), while player characters are called Investigators of the Unknown (\"investigators\").",
"title": "Gameplay"
},
{
"paragraph_id": 2,
"text": "While predominantly focused on Lovecraftian fiction and horror, playing in the Cthulhu Mythos is not required. The system also includes ideas for non-Lovecraft games, such as using folk horror or the settings of other authors and horror movies, or with entirely custom settings and creatures by the gamemaster and/or players.",
"title": "Gameplay"
},
{
"paragraph_id": 3,
"text": "CoC uses the Basic Role-Playing system first developed for RuneQuest and used in other Chaosium games. It is skill-based, with player characters getting better with their skills by succeeding at using them for as long as they stay functionally healthy and sane. They do not, however, gain hit points and do not become significantly harder to kill. The game does not use levels.",
"title": "Gameplay"
},
{
"paragraph_id": 4,
"text": "CoC uses percentile dice (with results ranging from 1 to 100) to determine success or failure. Every player statistic is intended to be compatible with the notion that there is a probability of success for a particular action given what the player is capable of doing. For example, an artist may have a 75% chance of being able to draw something (represented by having 75 in Art skill), and thus rolling a number under 75 would yield a success. Rolling 1⁄5 or less of the skill level (1-15 in the example) would be a \"special success\" (or an \"impale\" for combat skills) and would yield some extra bonus to be determined by the keeper. For example, the artist character might draw especially well or especially fast, or catch some unapparent detail in the drawing.",
"title": "Gameplay"
},
{
"paragraph_id": 5,
"text": "The players take the roles of ordinary people drawn into the realm of the mysterious: detectives, criminals, scholars, artists, war veterans, etc. Often, happenings begin innocently enough, until more and more of the workings behind the scenes are revealed. As the characters learn more of the true horrors of the world and the irrelevance of humanity, their sanity (represented by \"Sanity Points\", abbreviated SAN) inevitably withers away. The game includes a mechanism for determining how damaged a character's sanity is at any given point; encountering the horrific beings usually triggers a loss of SAN points. To gain the tools they need to defeat the horrors – mystic knowledge and magic – the characters may end up losing some of their sanity, though other means such as pure firepower or simply outsmarting one's opponents also exist. CoC has a reputation as a game in which it is quite common for a player character to die in gruesome circumstances or end up in a mental institution. Eventual triumph of the players is not guaranteed.",
"title": "Gameplay"
},
{
"paragraph_id": 6,
"text": "The original conception of Call of Cthulhu was Dark Worlds, a game commissioned by the publisher Chaosium but never published. Sandy Petersen contacted them regarding writing a supplement for their popular fantasy game RuneQuest set in Lovecraft's Dreamlands. He took over the writing of Call of Cthulhu, and the game was released in 1981. Petersen oversaw the first four editions with only minor changes to the system. Once he left, development was continued by Lynn Willis, who was credited as co-author in the fifth and sixth editions. After the death of Willis, Mike Mason became Call of Cthulhu line editor in 2013, continuing its development with Paul Fricker. Together they made the most significant rules alterations than in any previous edition, culminating in the release of the 7th edition in 2014.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "For those grounded in the RPG tradition, the very first release of Call of Cthulhu created a brand new framework for table-top gaming. Rather than the traditional format established by Dungeons & Dragons, which often involved the characters wandering through caves or tunnels and fighting different types of monsters, Sandy Petersen introduced the concept of the Onion Skin: Interlocking layers of information and nested clues that lead the player characters from seemingly minor investigations into a missing person to discovering mind-numbingly awful, global conspiracies to destroy the world. Unlike its predecessor games, CoC assumed that most investigators would not survive, alive or sane, and that the only safe way to deal with the vast majority of nasty things described in the rule books was to run away. A well-run CoC campaign should engender a sense of foreboding and inevitable doom in its players. The style and setting of the game, in a relatively modern time period, created an emphasis on real-life settings, character research, and thinking one's way around trouble.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The first book of Call of Cthulhu adventures was Shadows of Yog-Sothoth. In this work, the characters come upon a secret society's foul plot to destroy mankind, and pursue it first near to home and then in a series of exotic locations. This template was to be followed in many subsequent campaigns, including Fungi from Yuggoth (later known as Curse of Cthulhu and Day of the Beast), Spawn of Azathoth, and possibly the most highly acclaimed, Masks of Nyarlathotep.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Shadows of Yog-Sothoth is important not only because it represents the first published addition to the boxed first edition of Call of Cthulhu, but because its format defined a new way of approaching a campaign of linked RPG scenarios involving actual clues for the would-be detectives amongst the players to follow and link in order to uncover the dastardly plots afoot. Its format has been used by every other campaign-length Call of Cthulhu publication. The standard of CoC scenarios was well received by independent reviewers. The Asylum and Other Tales, a series of stand alone articles released in 1983, rated an overall 9/10 in Issue 47 of White Dwarf magazine.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The standard of the included 'clue' material varies from scenario to scenario, but reached its zenith in the original boxed versions of the Masks of Nyarlathotep and Horror on the Orient Express campaigns. Inside these one could find matchbooks and business cards apparently defaced by non-player characters, newspaper cuttings and (in the case of Orient Express) period passports to which players could attach their photographs, increasing the sense of immersion. Indeed, during the period that these supplements were produced, third party campaign publishers strove to emulate the quality of the additional materials, often offering separately-priced 'deluxe' clue packages for their campaigns.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Additional milieux were provided by Chaosium with the release of Dreamlands, a boxed supplement containing additional rules needed for playing within the Lovecraft Dreamlands, a large map and a scenario booklet, and Cthulhu By Gaslight, another boxed set which moved the action from the 1920s to the 1890s.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In 1987, Chaosium issued the supplement titled Cthulhu Now, a collection of rules, supplemental source materials and scenarios for playing Call of Cthulhu in the present day. This proved to be a very popular alternative milieu, so much so that much of the supplemental material is now included in the core rule book.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Lovecraft Country was a line of supplements for Call of Cthulhu released in 1990. These supplements were overseen by Keith Herber and provided backgrounds and adventures set in Lovecraft's fictional towns of Arkham, Kingsport, Innsmouth, Dunwich, and their environs. The intent was to give investigators a common base, as well as to center the action on well-drawn characters with clear motivations.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In 1987, Terror Australis: Call of Cthulhu in the Land Down Under was published. In 2018, a revised and updated version of the 1987 game was reissued, with about triple the content and two new games. It requires the Call of Cthulhu Keeper's Rulebook (7th Edition) and is usable with Pulp Cthulhu.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In the years since the collapse of the Mythos collectible card game (production ceased in 1997), the release of CoC books has been very sporadic, with up to a year between releases. Chaosium struggled with near bankruptcy for many years before finally starting their upward climb again.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "2005 was Chaosium's busiest year for many years, with 10 releases for the game. Chaosium took to marketing \"monographs\"—short books by individual writers with editing and layout provided out-of-house—directly to the consumer, allowing the company to gauge market response to possible new works. The range of times and places in which the horrors of the Mythos can be encountered was also expanded in late 2005 onward with the addition of Cthulhu Dark Ages by Stéphane Gesbert, which gives a framework for playing games set in 11th century Europe, Secrets of Japan by Michael Dziesinski for gaming in modern-day Japan, and Secrets of Kenya by David Conyers for gaming in interwar period Africa.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In July 2011, Chaosium announced it would re-release a 30th anniversary edition of the CoC 6th edition role-playing game. This 320-page book features thick (3 mm) leatherette hardcovers with the front cover and spine stamped with gold foil. The interior pages are printed in black ink, on 90 gsm matte art paper. The binding is thread sewn, square backed. Chaosium offered a one-time printing of this Collector's Edition.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "On May 28, 2013, a crowdfunding campaign on Kickstarter for the 7th edition of Call of Cthulhu was launched with a goal of $40,000; it ended on June 29 of the same year having collected $561,836. It included many more major revisions than any previous edition, and also split the core rules into two books, a Player's Guide and Keeper's Guide. Problems and delays fulfilling the Kickstarters for the 7th edition of Call of Cthulhu led Greg Stafford and Sandy Petersen (who had both left in 1998) to return to an active role at Chaosium in June 2015.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The available milieux were also expanded with the release of Cthulhu Through the Ages, a supplement containing additional rules needed for playing within the Roman Empire, Mythic Iceland, a futuristic micro-setting, and the End Times, where the monsters of the mythos attempt to subjugate or destroy the world.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Chaosium has licensed other publishers to create supplements, video, card and board games using the setting and the Call of Cthulhu brand. Many, such as Delta Green by Pagan Publishing and Arkham Horror by Fantasy Flight, have moved away completely from Call of Cthulhu. Other licensees have included Infogrames, Miskatonic River Press, Theater of the Mind Enterprises, Triad Entertainment, Games Workshop, RAFM, Goodman Games, Grenadier Models Inc. and Yog-Sothoth.com. These supplements may be set in different time frames or even different game universes from the original game.",
"title": "Licenses"
},
{
"paragraph_id": 21,
"text": "In February 2008, Pelgrane Press published Trail of Cthulhu, a stand-alone game created by Kenneth Hite using the GUMSHOE System developed by Robin Laws. GUMSHOE is specifically designed to be used in investigative games.",
"title": "Licenses"
},
{
"paragraph_id": 22,
"text": "In September 2008, Reality Deviant Publications published Shadows of Cthulhu, a supplement that brings Lovecraftian gaming to Green Ronin's True20 system.",
"title": "Licenses"
},
{
"paragraph_id": 23,
"text": "In October 2009, Reality Blurs published Realms of Cthulhu, a supplement for Pinnacle Entertainment's Savage Worlds system.",
"title": "Licenses"
},
{
"paragraph_id": 24,
"text": "Pagan Publishing published Delta Green, a series of supplements originally set in the 1990s, although later supplements add support for playing closer to the present day. In these, player characters are agents of a secret agency known as Delta Green, which fights against creatures from the Mythos and conspiracies related to them. Arc Dream Publishing released a new version of Delta Green in 2016 as a standalone game, partially using the mechanics from Call of Cthulhu.",
"title": "Licenses"
},
{
"paragraph_id": 25,
"text": "In 2001, a stand-alone version of Call of Cthulhu was released by Wizards of the Coast, for the d20 system. Intended to preserve the feeling of the original game, the d20 conversion of the game rules were supposed to make the game more accessible to the large D&D player base. The d20 system also made it possible to use Dungeons & Dragons characters in Call of Cthulhu, as well as to introduce the Cthulhu Mythos into Dungeons & Dragons games. The d20 version of the game is no longer supported by Wizards as per their contract with Chaosium. Chaosium included d20 stats as an appendix in three releases (see Lovecraft Country), but have since dropped the \"dual stat\" idea.",
"title": "Licenses"
},
{
"paragraph_id": 26,
"text": "Mythos was a collectible card game (CCG) based on the Cthulhu Mythos that Chaosium produced and marketed during the mid-1990s. While generally praised for its fast gameplay and unique mechanics, it ultimately failed to gain a very large market presence. It bears mention because its eventual failure brought the company to hard times that affected its ability to produce material for Call of Cthulhu. Call of Cthulhu: The Card Game is a second collectible card game, produced by Fantasy Flight Games.",
"title": "Licenses"
},
{
"paragraph_id": 27,
"text": "The first licensed Call of Cthulhu 25-millimetre (1.0-inch) gaming miniatures were sculpted by Andrew Chernack and released by Grenadier Models in boxed sets and blister packs in 1983. The license was later transferred to RAFM. As of 2011, RAFM still produce licensed Call of Cthulhu models sculpted by Bob Murch. Both lines include investigator player character models and the iconic monsters of the Cthulhu mythos. As of July 2015, Reaper Miniatures started its third \"Bones Kickstarter\", a Kickstarter intended to help the company migrate some miniatures from metal to plastic, and introducing some new ones. Among the stretch goals was the second $50 expansion, devoted to the Mythos, with miniatures such as Cultists, Deep Ones, Mi'Go, and an extra $15 Shub-Niggurath \"miniature\" (it is, at least, 6x4 squares). It is expected for those miniatures to remain in the Reaper Miniatures catalogue after the Kickstarter project finishes. In 2020 Chaosium announced a license agreement with Ardacious for Call of Cthulhu virtual miniatures to be released on their augmented reality app Ardent Roleplay.",
"title": "Licenses"
},
{
"paragraph_id": 28,
"text": "Shadow of the Comet (later repackaged as Call of Cthulhu: Shadow of the Comet) is an adventure game developed and released by Infogrames in 1993. The game is based on H. P. Lovecraft's Cthulhu Mythos and uses many elements from Lovecraft's The Dunwich Horror and The Shadow Over Innsmouth. A follow-up game, Prisoner of Ice, is not a direct sequel.",
"title": "Licenses"
},
{
"paragraph_id": 29,
"text": "Prisoner of Ice (also Call of Cthulhu: Prisoner of Ice) is an adventure game developed and released by Infogrames for the PC and Macintosh computers in 1995 in America and Europe. It is based on H. P. Lovecraft's Cthulhu Mythos, particularly At the Mountains of Madness, and is a follow-up to Infogrames' earlier Shadow of the Comet. In 1997, the game was ported to the Sega Saturn and PlayStation exclusively in Japan.",
"title": "Licenses"
},
{
"paragraph_id": 30,
"text": "A licensed first-person shooter adventure game by Headfirst Productions, based on Call of Cthulhu campaign Escape from Innsmouth and released by Bethesda Softworks in 2005/2006 for the PC and Xbox.",
"title": "Licenses"
},
{
"paragraph_id": 31,
"text": "In April 2011, Chaosium and new developer Red Wasp Design announced a joint project to produce a mobile video game based on the Call of Cthulhu RPG, entitled Call of Cthulhu: The Wasted Land. The game was released on January 30, 2012.",
"title": "Licenses"
},
{
"paragraph_id": 32,
"text": "In 2018, Metarcade produced Cthulhu Chronicles, a game for iOS with a campaign of nine mobile interactive fiction stories set in 1920s England based on Call of Cthulhu. The first five stories were released on July 10, 2018.",
"title": "Licenses"
},
{
"paragraph_id": 33,
"text": "Call of Cthulhu is a survival horror role-playing video game developed by Cyanide and published by Focus Home Interactive for PlayStation 4, Xbox One and Windows. The game features a semi-open world environment and incorporates themes of Lovecraftian and psychological horror into a story which includes elements of investigation and stealth. It is inspired by H. P. Lovecraft's short story \"The Call of Cthulhu\".",
"title": "Licenses"
},
{
"paragraph_id": 34,
"text": "Multiple reviews of various editions appeared in Space Gamer/Fantasy Gamer.",
"title": "Reception"
},
{
"paragraph_id": 35,
"text": "Multiple reviews of various editions appeared in White Dwarf.",
"title": "Reception"
},
{
"paragraph_id": 36,
"text": "Several reviews of various editions and supplements also appeared in Dragon.",
"title": "Reception"
},
{
"paragraph_id": 37,
"text": "In his 1990 book The Complete Guide to Role-Playing Games, game critic Rick Swan gave the game a top rating of 4 out of 4, calling it \"a masterpiece, easily the best horror RPG ever published and possibly the best RPG, period ... breathtaking in scope and as richly textured as a fine novel. All role-players owe it to themselves to experience this truly remarkable game.\"",
"title": "Reception"
},
{
"paragraph_id": 38,
"text": "In Issue 68 of Challenge, Craig Sheeley reviewed the fifth edition and liked the revisions. \"The entire character generation process is highly streamlined and easily illustrated on a two-page flowchart.\" DeJong also liked the inclusion of material from all three of CoC's settings (1890s, 1920s, 1990s), calling it \"One of the best features of this edition.\" And he was very impressed with the layout of the book, commenting, \"The organization and format of this book deserve special mention. I hold that every game company should study this book to learn what to do right.\" DeJong concluded, \"I am seriously impressed with this product. From cover to cover, it’s well done.\"",
"title": "Reception"
},
{
"paragraph_id": 39,
"text": "In a reader poll conducted by UK magazine Arcane in 1996 to determine the 50 most popular roleplaying games of all time, Call of Cthulhu was ranked 1st. Editor Paul Pettengale commented: \"Call of Cthulhu is fully deserved of the title as the most popular roleplaying system ever - it's a game that doesn't age, is eminently playable, and which hangs together perfectly. The system, even though it's over ten years old, it still one of the very best you'll find in any roleplaying game. Also, there's not a referee in the land who could say they've read every Lovecraft inspired book or story going, so there's a pretty-well endless supply of scenario ideas. It's simply marvellous.\"",
"title": "Reception"
},
{
"paragraph_id": 40,
"text": "Scott Taylor for Black Gate in 2013 rated Call of Cthulhu as #4 in the top ten role-playing games of all time, saying \"With various revisions, but never a full rewrite of its percentile-based system, Call of Cthulhu might be antiquated by today's standards, but remember it is supposed to be set in the 1920s, so to me that seems more than appropriate.\"",
"title": "Reception"
},
{
"paragraph_id": 41,
"text": "Following Dungeons & Dragons, Call of Cthulhu has been reported to be the game most played on the virtual table top platform Roll20 in 2021. It has also been reported to be have found success especially in Korea and Japan, and to have overtaken D&D in Japan.",
"title": "Reception"
},
{
"paragraph_id": 42,
"text": "The game has won multiple awards:",
"title": "Awards"
}
] | Call of Cthulhu is a horror fiction role-playing game based on H. P. Lovecraft's story of the same name and the associated Cthulhu Mythos. The game, often abbreviated as CoC, is published by Chaosium; it was first released in 1981 and is in its seventh edition, with licensed foreign language editions available as well. Its game system is based on Chaosium's Basic Role-Playing (BRP) with additions for the horror genre. These include special rules for sanity and luck. | 2001-09-12T23:06:11Z | 2023-12-11T00:28:26Z | [
"Template:Anchor",
"Template:Cite journal",
"Template:In lang",
"Template:Authority control",
"Template:Fract",
"Template:Convert",
"Template:'",
"Template:Cite book",
"Template:Cite news",
"Template:Short description",
"Template:Main",
"Template:Reflist",
"Template:Cite magazine",
"Template:Dead link",
"Template:Refbegin",
"Template:Refend",
"Template:The Call of Cthulhu",
"Template:Use mdy dates",
"Template:Italic title",
"Template:Infobox game",
"Template:Portal",
"Template:Cite web",
"Template:Official website"
] | https://en.wikipedia.org/wiki/Call_of_Cthulhu_(role-playing_game) |
5,723 | Constellations (journal) | Constellations: An International Journal of Critical and Democratic Theory is a quarterly peer-reviewed academic journal of critical post-Marxist and democratic theory and the successor of Praxis International. It is currently edited by Simone Chambers, Cristina Lafont, and Hubertus Buchstein. Ertug Tombus has been the managing editor of the journal since 2009. Seyla Benhabib, Nancy Fraser and Andrew Arato are the co-founding former editors. With editorial contributions from around the world, it is based at the New School in New York.
Nadia Urbinati, Amy Allen, Jean L. Cohen, and Andreas Kalyvas are former co-editors. | [
{
"paragraph_id": 0,
"text": "Constellations: An International Journal of Critical and Democratic Theory is a quarterly peer-reviewed academic journal of critical post-Marxist and democratic theory and successor of Praxis International. It is currently edited by Simone Chambers, Cristina Lafont, and Hubertus Buchstein. Ertug Tombus is the managing editor of the journal since 2009. Seyla Benhabib, Nancy Fraser and Andrew Arato are the co-founding former editors. With an international editorial contribution, it is based at the New School in New York.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Nadia Urbinati, Amy Allen, Jean L.Cohen, and Andreas Kalyvas are former co-editors.",
"title": ""
}
] | Constellations: An International Journal of Critical and Democratic Theory is a quarterly peer-reviewed academic journal of critical post-Marxist and democratic theory and the successor of Praxis International. It is currently edited by Simone Chambers, Cristina Lafont, and Hubertus Buchstein. Ertug Tombus has been the managing editor of the journal since 2009. Seyla Benhabib, Nancy Fraser and Andrew Arato are the co-founding former editors. With international editorial contributions, it is based at the New School in New York. Nadia Urbinati, Amy Allen, Jean L. Cohen, and Andreas Kalyvas are former co-editors. | 2001-06-04T00:53:38Z | 2023-08-17T15:22:22Z | [
"Template:Infobox journal",
"Template:Reflist",
"Template:Cite journal",
"Template:Official website",
"Template:Sociology-journal-stub",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Constellations_(journal) |
5,724 | Cape Breton Island | Cape Breton Island (French: île du Cap-Breton, formerly île Royale; Scottish Gaelic: Ceap Breatainn or Eilean Cheap Bhreatainn; Miꞌkmaq: Unamaꞌki) is a rugged and irregularly shaped island on the Atlantic coast of North America and part of the province of Nova Scotia, Canada.
The 10,311 km² (3,981 sq mi) island accounts for 18.7% of Nova Scotia's total area. Although the island is physically separated from the Nova Scotia peninsula by the Strait of Canso, the 1,385 m (4,544 ft) long Canso Causeway connects it to mainland Nova Scotia. The island is east-northeast of the mainland with its northern and western coasts fronting on the Gulf of Saint Lawrence with its western coast forming the eastern limits of the Northumberland Strait. The eastern and southern coasts front the Atlantic Ocean with its eastern coast also forming the western limits of the Cabot Strait. Its landmass slopes upward from south to north, culminating in the highlands of its northern cape. One of the world's larger saltwater lakes, Bras d'Or ("Arm of Gold" in French), dominates the island's centre.
The total population at the 2016 census numbered 132,010 Cape Bretoners, which is approximately 15% of the provincial population. Cape Breton Island has experienced a decline in population of approximately 2.9% since the 2011 census. Approximately 75% of the island's population is in the Cape Breton Regional Municipality (CBRM), which includes all of Cape Breton County and is often referred to as Industrial Cape Breton.
Cape Breton Island takes its name from its easternmost point, Cape Breton. At least two theories for this name have been put forward. The first connects it to the Bretons of northwestern France, who discovered Canada. A Portuguese mappa mundi of 1516–1520 includes the label "terra q(ue) foy descuberta por Bertomes" in the vicinity of the Gulf of St Lawrence, which means "land discovered by Bretons".
The second connects it to the Gascon fishing port of Capbreton. Basque whalers and fishermen traded with the Miꞌkmaq of this island from the early sixteenth century.
The name "Cape Breton" first appears on a map of 1516, as C(abo) dos Bretoes, and became the general name for both the island and the cape toward the end of the 16th century.
William Francis Ganong argued that the Portuguese term Bertomes referred to Britons, and that the name should be interpreted as "Cape of the English". This theory is now disputed, because the Portuguese term Bertomes denoted the Brittonic-speaking peoples of Wales, Cornwall, Brittany and Galicia, who had close ties to Portugal.
Cape Breton Island's first residents were likely archaic maritime natives, ancestors of the Mi'kmaq people. These peoples and their progeny inhabited the island (known as Unama'ki) for several thousand years and continue to live there to this day. Their traditional lifestyle centred around hunting and fishing because of the unfavourable agricultural conditions of their maritime home. This ocean-centric lifestyle did, however, make them among the first Indigenous peoples to discover European explorers and sailors fishing in the St Lawrence Estuary. Italian explorer (sailing for the British crown) John Cabot reportedly visited the island in 1497. However, European histories and maps of the period are of too poor quality to be sure whether Cabot first visited Newfoundland or Cape Breton Island. This discovery is commemorated by Cape Breton's Cabot Trail, and by the Cabot's Landing Historic Site & Provincial Park, near the village of Dingwall.
The local Mi'kmaq peoples began trading with European fishermen when the fishermen began landing in their territories as early as the 1520s. In about 1521–22, the Portuguese under João Álvares Fagundes established a fishing colony on the island. As many as two hundred settlers lived in a village, the name of which is not known, located according to some historians at what is now Ingonish on the island's northeastern peninsula. These fishermen traded with the local population but did not maintain a permanent settlement. This Portuguese colony's fate is unknown, but it is mentioned as late as 1570.
During the Anglo-French War of 1627 to 1629, under King Charles I, the Kirkes took Quebec City; James Stewart, 4th Lord Ochiltree, planted a colony on Unama'ki at Baleine, Nova Scotia; and Alexander's son, William Alexander, 1st Earl of Stirling, established the first incarnation of "New Scotland" at Port Royal. These claims, and the larger ideals of European colonization they represented, marked the first time the island was incorporated as European territory, though treaties would not actually be signed until several decades later. However, no copies of these treaties exist.
These Scottish triumphs, which left Cape Sable as the only major French holding in North America, did not last. Charles I's haste to make peace with France on the terms most beneficial to him meant the new North American gains would be bargained away in the Treaty of Saint-Germain-en-Laye, which determined which European power held claim to the territories.
The French quickly defeated the Scots at Baleine, and established the first European settlements on Île Royale, at present-day Englishtown (1629) and St. Peter's (1630). These settlements lasted only one generation, until Nicolas Denys left in 1659. The island then had no European settlers for more than fifty years, until those communities, along with Louisbourg, were re-established in 1713, after which European settlement on the island became permanent.
Known as Île Royale ("Royal Island") to the French, the island also saw active settlement by France. After the French ceded their claims to Newfoundland and the Acadian mainland to the British by the Treaty of Utrecht in 1713, the French relocated the population of Plaisance, Newfoundland, to Île Royale and the French garrison was established in the central eastern part at Sainte Anne. As the harbour at Sainte Anne experienced icing problems, it was decided to build a much larger fortification at Louisbourg to improve defences at the entrance to the Gulf of Saint Lawrence and to defend France's fishing fleet on the Grand Banks. The French also built the Louisbourg Lighthouse in 1734, the first lighthouse in Canada and one of the first in North America. In addition to Cape Breton Island, the French colony of Île Royale also included Île Saint-Jean, today called Prince Edward Island, and Les Îles-de-la-Madeleine.
Louisbourg itself was one of the most important commercial and military centres in New France. Louisbourg was captured by New Englanders with British naval assistance in the Siege of Louisbourg (1745) and by British forces in 1758. The French population of Île Royale was deported to France after each siege. While French settlers returned to their homes in Île Royale after the Treaty of Aix-la-Chapelle was signed in 1748, the fortress was demolished after the second siege in 1758. Île Royale remained formally part of New France until it was ceded to Great Britain by the Treaty of Paris in 1763. It was then merged with the adjacent British colony of Nova Scotia (present-day peninsular Nova Scotia and New Brunswick). Acadians who had been expelled from Nova Scotia and Île Royale were permitted to settle in Cape Breton beginning in 1764, and established communities in northwestern Cape Breton, near Chéticamp, and southern Cape Breton, on and near Isle Madame.
Some of the first British-sanctioned settlers on the island following the Seven Years' War were Irish, although upon settlement they merged with local French communities to form a culture rich in music and tradition. From 1763 to 1784, the island was administratively part of the colony of Nova Scotia and was governed from Halifax.
The first permanently settled Scottish community on Cape Breton Island was Judique, settled in 1775 by Michael Mor MacDonald. He spent his first winter using his upside-down boat for shelter, which is reflected in the architecture of the village's Community Centre. He composed a song about the area called "O 's àlainn an t-àite", or "O, Fair is the Place."
During the American Revolution, on 1 November 1776, John Paul Jones, the father of the American Navy, set sail in command of Alfred to free hundreds of American prisoners working in the area's coal mines. Although winter conditions prevented the freeing of the prisoners, the mission did result in the capture of Mellish, a vessel carrying a vital supply of winter clothing intended for John Burgoyne's troops in Canada.
Major Timothy Hierlihy and his regiment on board HMS Hope worked in and protected the coal mines at Sydney, Cape Breton, from privateer attacks. Sydney, Cape Breton provided a vital supply of coal for Halifax throughout the war. The British began developing the mining site at Sydney Mines in 1777. On 14 May 1778, Major Hierlihy arrived at Cape Breton. While there, Hierlihy reported that he "beat off many piratical attacks, killed some and took other prisoners."
A few years into the war, in 1781, there was also a naval engagement between French ships and a British convoy off Sydney, Nova Scotia, near Spanish River, Cape Breton. The French ships, fighting on the side of the Americans, were re-coaling there when they defeated the British convoy. Six French and 17 British sailors were killed, with many more wounded.
In 1784, Britain split the colony of Nova Scotia into three separate colonies: New Brunswick, Cape Breton Island, and present-day peninsular Nova Scotia, in addition to the adjacent colonies of St. John's Island (renamed Prince Edward Island in 1798) and Newfoundland. The colony of Cape Breton Island had its capital at Sydney on its namesake harbour fronting on Spanish Bay and the Cabot Strait. Its first Lieutenant-Governor was Joseph Frederick Wallet DesBarres (1784–1787) and his successor was William Macarmick (1787).
A number of United Empire Loyalists emigrated to the Canadian colonies, including Cape Breton. David Mathews, the former Mayor of New York City during the American Revolution, emigrated with his family to Cape Breton in 1783. He succeeded Macarmick as head of the colony and served from 1795 to 1798.
From 1799 to 1807, the military commandant was John Despard, brother of Edward Despard.
An order forbidding the granting of land in Cape Breton, issued in 1763, was removed in 1784. The mineral rights to the island were given over to the Duke of York by an order-in-council. The British government had intended that the Crown take over the operation of the mines when Cape Breton was made a colony, but this was never done, probably because of the rehabilitation cost of the mines. The mines were in a neglected state, caused by careless operations dating back at least to the time of the final fall of Louisbourg in 1758.
Large-scale shipbuilding began in the 1790s, beginning with schooners for local trade, moving in the 1820s to larger brigs and brigantines, mostly built for British ship owners. Shipbuilding peaked in the 1850s, marked in 1851 by the full-rigged ship Lord Clarendon, which was the largest wooden ship ever built in Cape Breton.
In 1820, the colony of Cape Breton Island was merged for the second time with Nova Scotia. This development is one of the factors which led to large-scale industrial development in the Sydney Coal Field of eastern Cape Breton County. By the late 19th century, as a result of faster shipping, an expanding fishery, and the industrialization of the island, exchanges of people between the island of Newfoundland and Cape Breton increased, beginning a cultural exchange that continues to this day.
The 1920s were among the most violent times in Cape Breton, marked by several severe labour disputes. The famous murder of William Davis by strikebreakers and the seizing of the New Waterford power plant by striking miners led to a strong union sentiment that persists to this day in some circles. William Davis Miners' Memorial Day continues to be celebrated in coal mining towns to commemorate the deaths of miners at the hands of the coal companies.
The turn of the 20th century saw Cape Breton Island at the forefront of scientific achievement with the now-famous activities launched by inventors Alexander Graham Bell and Guglielmo Marconi.
Following his successful invention of the telephone, and by then relatively wealthy, Bell acquired land near Baddeck in 1885. He chose the land, which he named Beinn Bhreagh, largely due to its resemblance to his early surroundings in Scotland. He established a summer estate complete with research laboratories, working with deaf people including Helen Keller, and continued to invent. Baddeck would be the site of his experiments with hydrofoil technologies as well as the Aerial Experiment Association, financed by his wife Mabel Gardiner Hubbard. These efforts resulted in the first powered flight in Canada when the AEA Silver Dart took off from the ice-covered waters of Bras d'Or Lake. Bell also built the forerunner to the iron lung and experimented with breeding sheep.
Marconi's contributions to Cape Breton Island were also quite significant, as he used the island's geography to his advantage in transmitting the first North American trans-Atlantic radio message from a station constructed at Table Head in Glace Bay to a receiving station at Poldhu in Cornwall, England. Marconi's pioneering work in Cape Breton marked the beginning of modern radio technology. Marconi's station at Marconi Towers, on the outskirts of Glace Bay, became the chief communication centre for the Royal Canadian Navy in World War I through to the early years of World War II.
Promotions for tourism beginning in the 1950s recognized the importance of the Scottish culture to the province, as the provincial government started encouraging the use of Gaelic once again. The establishment of funding for the Gaelic College of Celtic Arts and Crafts and formal Gaelic language courses in public schools are intended to address the near-loss of this culture to assimilation into Anglophone Canadian culture.
In the 1960s, the Fortress of Louisbourg was partially reconstructed by Parks Canada, using the labour of unemployed coal miners. Since 2009, this National Historic Site of Canada has attracted an average of 90,000 visitors per year.
The irregularly shaped, roughly rectangular island is about 100 km wide and 150 km long, for a total area of 10,311 square kilometres (3,981 sq mi).
It lies in the southeastern extremity of the Gulf of St. Lawrence. Cape Breton is separated from the Nova Scotia peninsula by the very deep Strait of Canso. The island is joined to the mainland by the Canso Causeway.
Cape Breton Island is composed of rocky shores, rolling farmland, glacial valleys, barren headlands, highlands, woods and plateaus.
The island is characterized by a number of elevations of ancient crystalline and metamorphic rock rising from south to north, contrasted with eroded lowlands. The bedrock consists of blocks that developed in different places around the globe, at different times, and that were then fused together by plate tectonics.
Cape Breton is formed from three terranes. These are fragments of the Earth's crust formed on a tectonic plate and attached by accretion or suture to crust lying on another plate. Each of these has its own distinctive geologic history, which is different from that of the surrounding areas. The southern half of the island formed from the Avalon terrane, which was once a microcontinent in the Paleozoic era. It is made up of volcanic rock that formed near what is now called Africa. Most of the northern half of the island is on the Bras d'Or terrane (part of the Ganderia terrane). It contains volcanic and sedimentary rock formed off the coast of what is now South America. The third terrane is the relatively small Blair River inlier on the far northwestern tip. It contains the oldest rock in the Maritimes, formed up to 1.6 billion years ago. These rocks, which can be seen in the Polletts Cove-Aspy Fault Wilderness Area north of Pleasant Bay, are likely part of the Canadian Shield, a large area of Precambrian igneous and metamorphic rock that forms the core of the North American continent.
The Avalon and Bras d'Or terranes were pushed together about 500 million years ago when the supercontinent Gondwana was formed. The Blair River inlier was sandwiched in between the two when Laurussia was formed 450–360 million years ago, at which time the land lay in the tropics. This collision also formed the Appalachian Mountains. Associated rifting and faulting is now visible as the canyons of the Cape Breton Highlands. Then, during the Carboniferous period, the area was flooded, which created sedimentary rock layers such as sandstone, shale, gypsum, and conglomerate. Later, most of the island was covered by tropical forest, which eventually formed coal deposits.
Much later, the land was shaped by repeated ice ages which left striations and till, cut U-shaped valleys, and carved Bras d'Or Lake from the bedrock. Examples of U-shaped valleys are those of the Chéticamp, Grande Anse, and Clyburn River valleys. Other valleys have been eroded by water, forming V-shaped valleys and canyons. Cape Breton has many fault lines but few earthquakes. Since the North American continent is moving westward, earthquakes tend to occur on the western edge of the continent.
The warm-summer humid continental climate is moderated by the proximity of the cold, oftentimes polar Labrador Current and its warmer counterpart the Gulf Stream, both being dominant currents in the North Atlantic Ocean.
There are lowland areas along the western shore, around Lake Ainslie, the Bras d'Or watershed, Boularderie Island, and the Sydney coalfield. They include salt marshes, coastal beaches, and freshwater wetlands.
Starting in the 1800s, many areas were cleared for farming or timber. Many farms were abandoned from the 1920s to the 1950s with fields being reclaimed by white spruce, red maple, white birch, and balsam fir. Higher slopes are dominated by yellow birch and sugar maple. In sheltered areas with sun and drainage, Acadian forest is found. Wetter areas have tamarack and black spruce. The weather station at Ingonish records more rain than anywhere else in Nova Scotia.
Behind barrier beaches and dunes at Aspy Bay are salt marshes. The Aspy, Clyburn, and Ingonish rivers have all created floodplains which support populations of black ash, fiddlehead fern, swamp loosestrife, swamp milkweed, southern twayblade, and bloodroot.
Red sandstone and white gypsum cliffs can be observed throughout this area. Bedrock is Carboniferous sedimentary with limestone, shale, and sandstone. Many fluvial remains from glaciation are found here. Mining has been ongoing for centuries, and more than 500 mine openings can be found, mainly in the east.
Karst topography is found in Dingwall, South Harbour, Plaster Provincial Park, along the Margaree and Middle Rivers, and along the north shore of Lake Ainslie. The presence of gypsum and limestone increases soil pH and produces some rich wetlands which support giant spear, tufted fen, and other mosses, as well as vascular plants like sedges.
This ecosystem is spread throughout Cape Breton and is defined as hills and slopes 150–300 m above sea level, typically covered with Acadian forest.
It includes North Mountain, Kellys Mountain, and East Bay Hills.
Forests in this area were cleared for timber and agriculture and are now a mosaic of habitats depending on the local terrain, soils and microclimate. Typical species include ironwood, white ash, beech, sugar maple, red maple, and yellow birch. The understory can include striped maple, beaked hazelnut, fly honeysuckle, club mosses and ferns. Ephemerals are visible in the spring, such as Dutchman's breeches and spring beauty.
In ravines, shade-tolerant trees such as hemlock, white pine, and red spruce are found. Less well-drained areas are forested with balsam fir and black spruce.
The Highlands comprise a tableland in the northern portions of Inverness and Victoria counties.
An extension of the Appalachian mountain chain, the plateau averages 350 metres in elevation at its edges and rises to more than 500 metres at the centre. The area has broad, gently rolling hills bisected with deep valleys and steep-walled canyons. A majority of the land is a taiga of balsam fir, with some white birch, white spruce, mountain ash, and heart-leaf birch.
The northern and western edges of the plateau, particularly at high elevations, resemble arctic tundra. Trees 30–90 cm high, overgrown with reindeer lichens, can be 150 years old. At very high elevations some areas are exposed bedrock without any vegetation apart from Cladonia lichens. There are many barrens, or heaths, dominated by bushy species of the Ericaceae family.
Spruce, killed by spruce budworm in the late 1970s, has reestablished at lower elevations, but not at higher elevations due to moose browsing. Decomposition is slow, leaving thick layers of plant litter. Ground cover includes wood aster, twinflower, liverworts, wood sorrel, bluebead lily, goldthread, various ferns, and lily-of-the-valley, with bryophytes and large-leaved goldenrod at higher elevations. The understory can include striped maple, mountain ash, ferns, and mountain maple.
Near water, bog birch, alder, and mountain ash are found. There are many open wetlands populated with stunted tamarack and black spruce. Poor drainage has led to the formation of peatlands which can support tufted clubrush, Bartram's serviceberry, coastal sedge, and bakeapple.
The eastern shore is unique in that while not at a high elevation, it has a cool climate with much rain and fog, strong winds, and low summer temperatures. It is dominated by a boreal forest of black spruce and balsam fir. Sheltered areas support tolerant hardwoods such as white birch and red maple. Many salt marshes, fens, and bogs are found there.
There are many beaches on the highly crenelated coastline. Unlike elsewhere on the island, these are rocky and support plants unlike those of sandy beaches. The coast provides habitat for common coastal bird species such as common eider, black-legged kittiwake, black guillemot, whimbrel, and great cormorant.
Land is drained into the Gulf of Saint Lawrence via the rivers Aspy, Sydney, Mira, Framboise, Margaree, and Chéticamp. The largest freshwater lake is Lake Ainslie.
Local government on the island is provided by the Cape Breton Regional Municipality, the Municipality of the County of Inverness, the Municipality of the County of Richmond, and the Municipality of the County of Victoria, along with the Town of Port Hawkesbury.
The island has five Miꞌkmaq Indian reserves: Eskasoni (the largest in population and land area), Membertou, Wagmatcook, Waycobah, and Potlotek.
The island's residents can be grouped into six main cultures: Scottish, Mi'kmaq, Acadian, Irish, English, and Deaf, with respective languages Scottish Gaelic, Mi'kmaq, French, and English alongside several sign languages including Maritime Sign Language. English is now the primary language, including a locally distinctive Cape Breton accent, while Mi'kmaq, Scottish Gaelic and Acadian French are still spoken in some communities. Amongst sign languages, it is unknown to what extent LSQ is spoken amongst Acadians, but American Sign Language is certainly predominant across the island, as it has gained significant numbers of signers, especially with the steep declines in Maritime Sign Language use.
Later migrations of Black Loyalists, Italians, and Eastern Europeans mostly settled in the island's eastern part around the industrial Cape Breton region. Cape Breton Island's population has been in decline for two decades with an increasing exodus in recent years due to economic conditions.
Population trend
Statistics Canada in 2001 reported a "religion" total of 145,525 for Cape Breton, including 5,245 with "no religious affiliation." Major categories included:
Much of the recent economic history of Cape Breton Island can be tied to the coal industry.
The island has two major coal deposits:
Sydney has traditionally been the main port, with facilities in a large, sheltered, natural harbour. It is the island's largest commercial centre and home to the Cape Breton Post daily newspaper, as well as one television station, CJCB-TV (CTV), and several radio stations. The Marine Atlantic terminal at North Sydney is the terminal for large ferries travelling to Channel-Port aux Basques and seasonally to Argentia, both on the island of Newfoundland.
Point Edward on the west side of Sydney Harbour is the location of Sydport, a former navy base (HMCS Protector) now converted to commercial use. The Canadian Coast Guard College is nearby at Westmount. Petroleum, bulk coal, and cruise ship facilities are also in Sydney Harbour.
Glace Bay, the second largest urban community in population, was the island's main coal mining centre until its last mine closed in the 1980s. Glace Bay was the hub of the Sydney & Louisburg Railway and a major fishing port. At one time, Glace Bay was known as the largest town in Nova Scotia, based on population.
Port Hawkesbury has risen to prominence since the completion of the Canso Causeway and Canso Canal created an artificial deep-water port, allowing extensive petrochemical, pulp and paper, and gypsum handling facilities to be established. The Strait of Canso is completely navigable to Seawaymax vessels, and Port Hawkesbury is open to the deepest-draught vessels on the world's oceans. Large marine vessels may also enter Bras d'Or Lake through the Great Bras d'Or channel, and small craft can use the Little Bras d'Or channel or St. Peters Canal. While commercial shipping no longer uses the St. Peters Canal, it remains an important waterway for recreational vessels.
The industrial Cape Breton area faced several challenges with the closure of the Cape Breton Development Corporation's (DEVCO) coal mines and the Sydney Steel Corporation's (SYSCO) steel mill. In recent years, the Island's residents have tried to diversify the area economy by investing in tourism developments, call centres, and small businesses, as well as manufacturing ventures in fields such as auto parts, pharmaceuticals, and window glazings.
While the Cape Breton Regional Municipality is in transition from an industrial to a service-based economy, the rest of Cape Breton Island outside the industrial area surrounding Sydney-Glace Bay has been more stable, with a mixture of fishing, forestry, small-scale agriculture, and tourism.
Tourism in particular has grown throughout the post-Second World War era, especially vehicle-based touring, which was furthered by the creation of the Cabot Trail scenic drive. The scenery of the island is rivalled in northeastern North America only by Newfoundland, and Cape Breton Island tourism marketing places a heavy emphasis on its Scottish Gaelic heritage through events such as the Celtic Colours Festival, held each October, as well as promotions through the Gaelic College of Celtic Arts and Crafts.
Whale-watching is a popular attraction for tourists. Whale-watching cruises are operated by vendors from Baddeck to Chéticamp. The most popular species of whale found in Cape Breton's waters is the pilot whale.
The Cabot Trail is a scenic road circuit around and over the Cape Breton Highlands with spectacular coastal vistas; over 400,000 visitors drive the Cabot Trail each summer and fall. Coupled with the Fortress of Louisbourg, it has driven the growth of the tourism industry on the island in recent decades. The Condé Nast travel guide has rated Cape Breton Island as one of the world's best island destinations.
The island's primary east–west road is Highway 105, the Trans-Canada Highway, although Trunk 4 is also heavily used. Highway 125 is an important arterial route around Sydney Harbour in the Cape Breton Regional Municipality. The Cabot Trail, circling the Cape Breton Highlands, and Trunk 19, along the island's western coast, are important secondary roads. The Cape Breton and Central Nova Scotia Railway maintains railway connections between the port of Sydney to the Canadian National Railway in Truro.
Cape Breton Island is served by several airports, the largest being JA Douglas McCurdy Sydney Airport, situated on Trunk 4 between the communities of Sydney and Glace Bay, as well as smaller airports at Port Hawkesbury, Margaree, and Baddeck.
Gaelic speakers in Cape Breton, as elsewhere in Nova Scotia, constituted a large proportion of the local population from the 18th century on. They brought with them a common culture of poetry, traditional songs and tales, music and dance, and used this to develop distinctive local traditions.
Most Gaelic settlement in Nova Scotia happened between 1770 and 1840, with probably over 50,000 Gaelic speakers emigrating from the Scottish Highlands and the Hebrides to Nova Scotia and Prince Edward Island. Such emigration was facilitated by changes in Gaelic society and the economy, with sharp increases in rents, confiscation of land and disruption of local customs and rights. In Nova Scotia, poetry and song in Gaelic flourished. George Emmerson argues that an "ancient and rich" tradition of storytelling, song, and Gaelic poetry emerged during the 18th century and was transplanted from the Highlands of Scotland to Nova Scotia, where the language similarly took root. The majority of those settling in Nova Scotia from the end of the 18th century through to the middle of the next were from the Scottish Highlands, rather than the Lowlands, making the Highland tradition's impact more profound on the region. Gaelic settlement in Cape Breton began in earnest in the early nineteenth century.
The Gaelic language became dominant from Colchester County in the west of Nova Scotia into Cape Breton County in the east. It was reinforced in Cape Breton in the first half of the 19th century with an influx of Highland Scots numbering approximately 50,000 as a result of the Highland Clearances.
From 1892 to 1904, Jonathon MacKinnon published the Scottish Gaelic-language biweekly newspaper Mac-Talla (lit. 'The Echo') in Sydney, Nova Scotia. During the 1920s, several Scottish Gaelic-language newspapers were printed in Sydney for distribution primarily on Cape Breton, including the Teachdaire nan Gàidheal (lit. 'The Messenger of the Gaels'), which included Gaelic-language lessons; the United Church-affiliated An Solus Iùil (lit. 'The Guiding Light'); and MacKinnon's later endeavor, Fear na Cèilidh (lit. 'The Entertainer').
Gaelic speakers, however, tended to be poor; they were largely illiterate and had little access to education. This situation persisted into the early days of the twentieth century. In 1921 Gaelic was approved as an optional subject in the curriculum of Nova Scotia, but few teachers could be found and children were discouraged from using the language in schools. By 1931 the number of Gaelic speakers in Nova Scotia had fallen to approximately 25,000, mostly in discrete pockets. In Cape Breton it was still a majority language, but the proportion was falling. Children were no longer being raised with Gaelic.
From 1939 on, attempts were made to strengthen its position in the public school system in Nova Scotia, but funding, official commitment and the availability of teachers continued to be a problem. By the 1950s the number of speakers was less than 7,000. The advent of multiculturalism in Canada in the 1960s meant that new educational opportunities became available, with a gradual strengthening of the language at secondary and tertiary level. At present several schools in Cape Breton offer Gaelic Studies and Gaelic language programs, and the language is taught at Cape Breton University.
The 2016 Canadian Census shows that there are only 40 reported speakers of Gaelic as a mother tongue in Cape Breton. On the other hand, there are families and individuals who have recommenced intergenerational transmission. They include fluent speakers from Gaelic-speaking areas of Scotland and speakers who became fluent in Nova Scotia and who in some cases studied in Scotland. Other revitalization activities include adult education, community cultural events and publishing.
Cape Breton is well known for its traditional fiddle music, which was brought to North America by Scottish immigrants during the Highland Clearances. The traditional style has been well preserved in Cape Breton, and cèilidhs have become a popular attraction for tourists. Inverness County in particular has a heavy concentration of musical activity, with regular performances in communities such as Mabou and Judique. Judique is recognized as "Baile nam Fonn" (lit. 'Village of Tunes') or the 'Home of Celtic Music', featuring the Celtic Music Interpretive Centre. The traditional fiddle music of Cape Breton is studied by musicians around the world, and its global recognition continues to rise.
Local performers who have received significant recognition outside of Cape Breton include Angus Chisholm; Buddy MacMaster; Joseph Cormier, the first Cape Breton fiddler to record an album made available in Europe (1974); Lee Cremo; Bruce Guthro; Natalie MacMaster; Ashley MacIsaac; The Rankin Family; Aselin Debison; Gordie Sampson; John Allan Cameron; and the Barra MacNeils.
The Men of the Deeps are a male choral group of current and former miners from the industrial Cape Breton area. | [
{
"paragraph_id": 0,
"text": "Cape Breton Island (French: île du Cap-Breton, formerly île Royale; Scottish Gaelic: Ceap Breatainn or Eilean Cheap Bhreatainn; Miꞌkmaq: Unamaꞌki) is a rugged and irregularly shaped island on the Atlantic coast of North America and part of the province of Nova Scotia, Canada.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The 10,311 km (3,981 sq mi) island accounts for 18.7% of Nova Scotia's total area. Although the island is physically separated from the Nova Scotia peninsula by the Strait of Canso, the 1,385 m (4,544 ft) long Canso Causeway connects it to mainland Nova Scotia. The island is east-northeast of the mainland with its northern and western coasts fronting on the Gulf of Saint Lawrence with its western coast forming the eastern limits of the Northumberland Strait. The eastern and southern coasts front the Atlantic Ocean with its eastern coast also forming the western limits of the Cabot Strait. Its landmass slopes upward from south to north, culminating in the highlands of its northern cape. One of the world's larger saltwater lakes, Bras d'Or (\"Arm of Gold\" in French), dominates the island's centre.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The total population at the 2016 census numbered 132,010 Cape Bretoners, which is approximately 15% of the provincial population. Cape Breton Island has experienced a decline in population of approximately 2.9% since the 2011 census. Approximately 75% of the island's population is in the Cape Breton Regional Municipality (CBRM), which includes all of Cape Breton County and is often referred to as Industrial Cape Breton.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Cape Breton Island takes its name from its easternmost point, Cape Breton. At least two theories for this name have been put forward. The first connects it to the Bretons of northwestern France which discovered Canada. A Portuguese mappa mundi of 1516–1520 includes the label \"terra q(ue) foy descuberta por Bertomes\" in the vicinity of the Gulf of St Lawrence, which means \"land discovered by Bretons\".",
"title": "Toponymy"
},
{
"paragraph_id": 4,
"text": "The second connects it to the Gascon fishing port of Capbreton. Basque whalers and fishermen traded with the Miꞌkmaq of this island from the early sixteenth century.",
"title": "Toponymy"
},
{
"paragraph_id": 5,
"text": "The name \"Cape Breton\" first appears on a map of 1516, as C(abo) dos Bretoes, and became the general name for both the island and the cape toward the end of the 16th century.",
"title": "Toponymy"
},
{
"paragraph_id": 6,
"text": "William Francis Ganong argued that the Portuguese term Bertomes referred to Britons, and that the name should be interpreted as \"Cape of the English\". This theory is nowadays disagreed upon, due to the Portuguese etymology of Bertomes, meaning the Brittonic speaking people of Wales, Cornwall, Brittany and Galicia, who has close ties to Portugal.",
"title": "Toponymy"
},
{
"paragraph_id": 7,
"text": "Cape Breton Island's first residents were likely archaic maritime natives, ancestors of the Mi'kmaq people. These peoples and their progeny inhabited the island (known as Unama'ki) for several thousand years and continue to live there to this day. Their traditional lifestyle centred around hunting and fishing because of the unfavourable agricultural conditions of their maritime home. This ocean-centric lifestyle did, however, make them among the first Indigenous peoples to discover European explorers and sailors fishing in the St Lawrence Estuary. Italian explorer (sailing for the British crown) John Cabot reportedly visited the island in 1497. However, European histories and maps of the period are of too poor quality to be sure whether Cabot first visited Newfoundland or Cape Breton Island. This discovery is commemorated by Cape Breton's Cabot Trail, and by the Cabot's Landing Historic Site & Provincial Park, near the village of Dingwall.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The local Mi'kmaq peoples began trading with European fishermen when the fishermen began landing in their territories as early as the 1520s. In about 1521–22, the Portuguese under João Álvares Fagundes established a fishing colony on the island. As many as two hundred settlers lived in a village, the name of which is not known, located according to some historians at what is now Ingonish on the island's northeastern peninsula. These fishermen traded with the local population but did not maintain a permanent settlement. This Portuguese colony's fate is unknown, but it is mentioned as late as 1570.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "During the Anglo-French War of 1627 to 1629, under King Charles I, the Kirkes took Quebec City, James Stewart, 4th Lord Ochiltree, planted a colony on Unama'ki at Baleine, Nova Scotia, and Alexander's son, William Alexander, 1st Earl of Stirling, established the first incarnation of \"New Scotland\" at Port Royal. These claims, and larger ideals of European colonization were the first time the island was incorporated as European territory, though it would be several decades later that treaties would actually be signed. However, no copies of these treaties exist.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "These Scottish triumphs, which left Cape Sable as the only major French holding in North America, did not last. Charles I's haste to make peace with France on the terms most beneficial to him meant the new North American gains would be bargained away in the Treaty of Saint-Germain-en-Laye, which established which European power had laid claim over the territories.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The French quickly defeated the Scots at Baleine, and established the first European settlements on Île Royale, which is present-day Englishtown (1629) and St. Peter's (1630). These settlements lasted only one generation, until Nicolas Denys left in 1659. The island did not have any European settlers for another fifty years before those communities along with Louisbourg were re-established in 1713, after which point European settlement was permanently established on the island.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Known as Île Royale (\"Royal Island\") to the French, the island also saw active settlement by France. After the French ceded their claims to Newfoundland and the Acadian mainland to the British by the Treaty of Utrecht in 1713, the French relocated the population of Plaisance, Newfoundland, to Île Royale and the French garrison was established in the central eastern part at Sainte Anne. As the harbour at Sainte Anne experienced icing problems, it was decided to build a much larger fortification at Louisbourg to improve defences at the entrance to the Gulf of Saint Lawrence and to defend France's fishing fleet on the Grand Banks. The French also built the Louisbourg Lighthouse in 1734, the first lighthouse in Canada and one of the first in North America. In addition to Cape Breton Island, the French colony of Île Royale also included Île Saint-Jean, today called Prince Edward Island, and Les Îles-de-la-Madeleine.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Louisbourg itself was one of the most important commercial and military centres in New France. Louisbourg was captured by New Englanders with British naval assistance in the Siege of Louisbourg (1745) and by British forces in 1758. The French population of Île Royale was deported to France after each siege. While French settlers returned to their homes in Île Royale after the Treaty of Aix-la-Chapelle was signed in 1748, the fortress was demolished after the second siege in 1758. Île Royale remained formally part of New France until it was ceded to Great Britain by the Treaty of Paris in 1763. It was then merged with the adjacent British colony of Nova Scotia (present-day peninsular Nova Scotia and New Brunswick). Acadians who had been expelled from Nova Scotia and Île Royale were permitted to settle in Cape Breton beginning in 1764, and established communities in northwestern Cape Breton, near Chéticamp, and southern Cape Breton, on and near Isle Madame.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Some of the first British-sanctioned settlers on the island following the Seven Years' War were Irish, although upon settlement they merged with local French communities to form a culture rich in music and tradition. From 1763 to 1784, the island was administratively part of the colony of Nova Scotia and was governed from Halifax.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The first permanently settled Scottish community on Cape Breton Island was Judique, settled in 1775 by Michael Mor MacDonald. He spent his first winter using his upside-down boat for shelter, which is reflected in the architecture of the village's Community Centre. He composed a song about the area called \"O 's àlainn an t-àite\", or \"O, Fair is the Place.\"",
"title": "History"
},
{
"paragraph_id": 16,
"text": "During the American Revolution, on 1 November 1776, John Paul Jones, the father of the American Navy, set sail in command of Alfred to free hundreds of American prisoners working in the area's coal mines. Although winter conditions prevented the freeing of the prisoners, the mission did result in the capture of Mellish, a vessel carrying a vital supply of winter clothing intended for John Burgoyne's troops in Canada.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Major Timothy Hierlihy and his regiment on board HMS Hope worked in and protected the coal mines at Sydney Cape Breton from privateer attacks. Sydney, Cape Breton provided a vital supply of coal for Halifax throughout the war. The British began developing the mining site at Sydney Mines in 1777. On 14 May 1778, Major Hierlihy arrived at Cape Breton. While there, Hierlihy reported that he \"beat off many piratical attacks, killed some and took other prisoners.\"",
"title": "History"
},
{
"paragraph_id": 18,
"text": "A few years into the war, there was also a naval engagement between French ships and a British convoy off Sydney, Nova Scotia, near Spanish River (1781), Cape Breton. French ships, fighting with the Americans, were re-coaling and defeated a British convoy. Six French and 17 British sailors were killed, with many more wounded.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In 1784, Britain split the colony of Nova Scotia into three separate colonies: New Brunswick, Cape Breton Island, and present-day peninsular Nova Scotia, in addition to the adjacent colonies of St. John's Island (renamed Prince Edward Island in 1798) and Newfoundland. The colony of Cape Breton Island had its capital at Sydney on its namesake harbour fronting on Spanish Bay and the Cabot Strait. Its first Lieutenant-Governor was Joseph Frederick Wallet DesBarres (1784–1787) and his successor was William Macarmick (1787).",
"title": "History"
},
{
"paragraph_id": 20,
"text": "A number of United Empire Loyalists emigrated to the Canadian colonies, including Cape Breton. David Mathews, the former Mayor of New York City during the American Revolution, emigrated with his family to Cape Breton in 1783. He succeeded Macarmick as head of the colony and served from 1795 to 1798.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "From 1799 to 1807, the military commandant was John Despard, brother of Edward.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "An order forbidding the granting of land in Cape Breton, issued in 1763, was removed in 1784. The mineral rights to the island were given over to the Duke of York by an order-in-council. The British government had intended that the Crown take over the operation of the mines when Cape Breton was made a colony, but this was never done, probably because of the rehabilitation cost of the mines. The mines were in a neglected state, caused by careless operations dating back at least to the time of the final fall of Louisbourg in 1758.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Large-scale shipbuilding began in the 1790s, beginning with schooners for local trade, moving in the 1820s to larger brigs and brigantines, mostly built for British ship owners. Shipbuilding peaked in the 1850s, marked in 1851 by the full-rigged ship Lord Clarendon, which was the largest wooden ship ever built in Cape Breton.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In 1820, the colony of Cape Breton Island was merged for the second time with Nova Scotia. This development is one of the factors which led to large-scale industrial development in the Sydney Coal Field of eastern Cape Breton County. By the late 19th century, as a result of the faster shipping, expanding fishery and industrialization of the island, exchanges of people between the island of Newfoundland and Cape Breton increased, beginning a cultural exchange that continues to this day.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "The 1920s were some of the most violent times in Cape Breton. They were marked by several severe labour disputes. The famous murder of William Davis by strike breakers, and the seizing of the New Waterford power plant by striking miners led to a major union sentiment that persists to this day in some circles. William Davis Miners' Memorial Day continues to be celebrated in coal mining towns to commemorate the deaths of miners at the hands of the coal companies.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "The turn of the 20th century saw Cape Breton Island at the forefront of scientific achievement with the now-famous activities launched by inventors Alexander Graham Bell and Guglielmo Marconi.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Following his successful invention of the telephone and being relatively wealthy, Bell acquired land near Baddeck in 1885. He chose the land, which he named Beinn Bhreagh, largely due to its resemblance to his early surroundings in Scotland. He established a summer estate complete with research laboratories, working with deaf people including Helen Keller, and continued to invent. Baddeck would be the site of his experiments with hydrofoil technologies as well as the Aerial Experiment Association, financed by his wife Mabel Gardiner Hubbard. These efforts resulted in the first powered flight in Canada when the AEA Silver Dart took off from the ice-covered waters of Bras d'Or Lake. Bell also built the forerunner to the iron lung and experimented with breeding sheep.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Marconi's contributions to Cape Breton Island were also quite significant, as he used the island's geography to his advantage in transmitting the first North American trans-Atlantic radio message from a station constructed at Table Head in Glace Bay to a receiving station at Poldhu in Cornwall, England. Marconi's pioneering work in Cape Breton marked the beginning of modern radio technology. Marconi's station at Marconi Towers, on the outskirts of Glace Bay, became the chief communication centre for the Royal Canadian Navy in World War I through to the early years of World War II.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Promotions for tourism beginning in the 1950s recognized the importance of the Scottish culture to the province, as the provincial government started encouraging the use of Gaelic once again. The establishment of funding for the Gaelic College of Celtic Arts and Crafts and formal Gaelic language courses in public schools are intended to address the near-loss of this culture to assimilation into Anglophone Canadian culture.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In the 1960s, the Fortress of Louisbourg was partially reconstructed by Parks Canada, using the labour of unemployed coal miners. Since 2009, this National Historic Site of Canada has attracted an average of 90 000 visitors per year.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "The irregularly-shaped rectangular island is about 100 km wide and 150 long, for a total of 10,311 square kilometres (3,981 sq mi) in area.",
"title": "Geography"
},
{
"paragraph_id": 32,
"text": "It lies in the southeastern extremity of the Gulf of St. Lawrence. Cape Breton is separated from the Nova Scotia peninsula by the very deep Strait of Canso. The island is joined to the mainland by the Canso Causeway.",
"title": "Geography"
},
{
"paragraph_id": 33,
"text": "Cape Breton Island is composed of rocky shores, rolling farmland, glacial valleys, barren headlands, highlands, woods and plateaus.",
"title": "Geography"
},
{
"paragraph_id": 34,
"text": "The island is characterized by a number of elevations of ancient crystalline and metamorphic rock rising up from the south to the north, and contrasted with eroded lowlands. The bedrock of blocks that developed in different places around the globe, at different times, and then were fused together via tectonics.",
"title": "Geography"
},
{
"paragraph_id": 35,
"text": "Cape Breton is formed from three terranes. These are fragments of the Earth's crust formed on a tectonic plate and attached by accretion or suture to crust lying on another plate. Each of these has its own distinctive geologic history, which is different from that of the surrounding areas. The southern half of the island formed from the Avalon terrane, which was once a microcontinent in the Paleozoic era. It is made up of volcanic rock that formed near what is now called Africa. Most of the northern half of the island is on the Bras d'Or terrane (part of the Ganderia terrane). It contains volcanic and sedimentary rock formed off the coast of what is now South America. The third terrane is the relatively small Blair River inlier on the far northwestern tip. It contains the oldest rock in the Maritimes, formed up to 1.6 billion years ago. These rocks, which can be seen in the Polletts Cove - Aspy Fault Wilderness Area north of Pleasant Bay, are likely part of the Canadian Shield, a large area of Precambrian igneous and metamorphic rock that forms the core of the North American continent.",
"title": "Geography"
},
{
"paragraph_id": 36,
"text": "The Avalon and Bras d'Or terranes were pushed together about 500 million years ago when the supercontinent Gondwana was formed. The Blair River inlier was sandwiched in between the two when Laurussia was formed 450-360 million years ago, at which time the land was found in the tropics. This collision also formed the Appalachian Mountains. Associated rifting and faulting is now visible as the canyons of the Cape Breton Highlands. Then, during the Carboniferous period, the area was flooded, which created sedimentary rock layers such as sandstone, shale, gypsum, and conglomerate. Later, most of the island was tropical forest which later formed coal deposits.",
"title": "Geography"
},
{
"paragraph_id": 37,
"text": "Much later, the land was shaped by repeated ice ages which left striations, till, U-shaped valleys, and carved the Bras d'Or Lake from the bedrock. Examples of U-shaped valleys are those of the Chéticamp, Grande Anse, and Clyburn River valleys. Other valleys have been eroded by water, forming V-shaped valleys and canyons. Cape Breton has many fault lines but few earthquakes. Since the North American continent is moving westward, earthquakes tend to occur on the western edge of the continent.",
"title": "Geography"
},
{
"paragraph_id": 38,
"text": "The warm summer humid continental climate is moderated by the proximity of the cold, oftentimes polar Labrador Current and its warmer counterpart the Gulf Stream, both being dominant currents in the North Atlantic Ocean.",
"title": "Geography"
},
{
"paragraph_id": 39,
"text": "There are lowland areas in along the western shore, around Lake Ainslie, the Bras d'Or watershed, Boularderie Island, and the Sydney coalfield. They include salt marshes, coastal beaches, and freshwater wetlands.",
"title": "Geography"
},
{
"paragraph_id": 40,
"text": "Starting in the 1800s, many areas were cleared for farming or timber. Many farms were abandoned from the 1920s to the 1950s with fields being reclaimed by white spruce, red maple, white birch, and balsam fir. Higher slopes are dominated by yellow birch and sugar maple. In sheltered areas with sun and drainage, Acadian forest is found. Wetter areas have tamarack, and black spruce. The weather station at Ingonish records more rain than anywhere else in Nova Scotia.",
"title": "Geography"
},
{
"paragraph_id": 41,
"text": "Behind barrier beaches and dunes at Aspy Bay are salt marshes. The Aspy, Clyburn, and Ingonish rivers have all created floodplains which support populations of black ash, fiddle head fern, swamp loosestrife, swamp milkweed, southern twayblade, and bloodroot.",
"title": "Geography"
},
{
"paragraph_id": 42,
"text": "Red sandstone and white gypsum cliffs can be observed throughout this area. Bedrock is Carboniferous sedimentary with limestone, shale, and sandstone. Many fluvial remains from are glaciation found here. Mining has been ongoing for centuries, and more than 500 mine openings can be found, mainly in the east.",
"title": "Geography"
},
{
"paragraph_id": 43,
"text": "Karst topography is found in Dingwall, South Harbour, Plaster Provincial Park, along the Margaree and Middle Rivers, and along the north shore of Lake Ainslie. The presence of gypsum and limestone increases soil pH and produces some rich wetlands which support giant spear, tufted fen, and other mosses, as well as vascular plants like sedges.",
"title": "Geography"
},
{
"paragraph_id": 44,
"text": "This ecosystem is spread throughout Cape Breton and is defined as hills and slopes 150-300m above sea level, typically covered with Acadian forest.",
"title": "Geography"
},
{
"paragraph_id": 45,
"text": "It includes North Mountain, Kellys Mountain, and East Bay Hills.",
"title": "Geography"
},
{
"paragraph_id": 46,
"text": "Forests in this area were cleared for timber and agriculture and are now a mosaic of habitats depending on the local terrain, soils and microclimate. Typical species include ironwood, white ash, beech, sugar maple, red maple, and yellow birch. The understory can include striped maple, beaked hazelnut, fly honeysuckle, club mosses and ferns. Ephemerals are visible in the spring, such as Dutchman's breeches and spring beauty.",
"title": "Geography"
},
{
"paragraph_id": 47,
"text": "In ravines, shade tolerant trees like hemlock, white pine, red spruce are found. Less well-drained areas are forested with balsam fir and black spruce.",
"title": "Geography"
},
{
"paragraph_id": 48,
"text": "The Highlands comprise a tableland in the northern portions of Inverness and Victoria counties.",
"title": "Geography"
},
{
"paragraph_id": 49,
"text": "An extension of the Appalachian mountain chain, elevations average 350 metres at the edges of the plateau and rise to more than 500 metres at the centre. The area has broad, gently rolling hills bisected with deep valleys and steep-walled canyons. A majority of the land is a taiga of balsam fir, with some white birch, white spruce, mountain ash, and heart-leaf birch.",
"title": "Geography"
},
{
"paragraph_id": 50,
"text": "The northern and western edges of the plateau, particularly at high elevations, resemble arctic tundra. Trees 30–90 high, overgrown with reindeer lichens, can be 150 years old. At very high elevations some areas are exposed bedrock without any vegetation apart from Cladonia lichens. There are many barrens, or heaths, dominated by bushy species of the Ericaceae family.",
"title": "Geography"
},
{
"paragraph_id": 51,
"text": "Spruce, killed by spruce budworm in the late 1970s, has reestablished at lower elevations, but not at higher elevations due to moose browsing. Decomposition is slow, leaving thick layers of plant litter. Ground cover includes wood aster, twinflower, liverworts, wood sorrel, bluebead lily, goldthread, various ferns, and lily-of-the-valley, with bryophyte and large-leaved goldenrod at higher elevations. The understory can include striped maple, mountain ash, ferns, and mountain maple.",
"title": "Geography"
},
{
"paragraph_id": 52,
"text": "Near water, bog birch, alder, and mountain-ash are found. There are many open wetlands populated with stunted tamarack and black spruce. Poor drainage has led to the formation of peatlands which can support tufted clubrush, Bartram's serviceberry, coastal sedge, and bakeapple.",
"title": "Geography"
},
{
"paragraph_id": 53,
"text": "The eastern shore is unique in that while not at a high elevation, it has a cool climate with much rain and fog, strong winds, and low summer temperatures. It is dominated by a boreal forest of black spruce and balsam fir. Sheltered areas support tolerant hardwoods such as white birch and red maple. Many salt marshes, fens, and bogs are found there.",
"title": "Geography"
},
{
"paragraph_id": 54,
"text": "There are many beaches on the highly crenelated coastline. Unlike elsewhere on the island, these are rocky and support plants unlike those of sandy beaches. The coast provides habitat for common coast bird species like common eider, black legged kittiwake, black guillemot, whimbrel, and great cormorant.",
"title": "Geography"
},
{
"paragraph_id": 55,
"text": "Land is drained into the Gulf of Saint Lawrence via the rivers Aspy, Sydney, Mira, Framboise, Margaree, and Chéticamp. The largest freshwater lake is Lake Ainslie.",
"title": "Geography"
},
{
"paragraph_id": 56,
"text": "Local government on the island is provided by the Cape Breton Regional Municipality, the Municipality of the County of Inverness, the Municipality of the County of Richmond, and the Municipality of the County of Victoria, along with the Town of Port Hawkesbury.",
"title": "Government"
},
{
"paragraph_id": 57,
"text": "The island has five Miꞌkmaq Indian reserves: Eskasoni (the largest in population and land area), Membertou, Wagmatcook, Waycobah, and Potlotek.",
"title": "Government"
},
{
"paragraph_id": 58,
"text": "The island's residents can be grouped into six main cultures: Scottish, Mi'kmaq, Acadian, Irish, English, and Deaf, with respective languages Scottish Gaelic, Mi'kmaq, French, and English alongside several sign languages including Maritime Sign Language. English is now the primary language, including a locally distinctive Cape Breton accent, while Mi'kmaq, Scottish Gaelic and Acadian French are still spoken in some communities. Amongst sign languages, it is unknown to what extent LSQ is spoken amongst Acadians, but American Sign Language is certainly predominant across the island, as it has gained significant numbers of signers, especially with the steep declines in Maritime Sign Language use.",
"title": "Demographics"
},
{
"paragraph_id": 59,
"text": "Later migrations of Black Loyalists, Italians, and Eastern Europeans mostly settled in the island's eastern part around the industrial Cape Breton region. Cape Breton Island's population has been in decline two decades with an increasing exodus in recent years due to economic conditions.",
"title": "Demographics"
},
{
"paragraph_id": 60,
"text": "Population trend",
"title": "Demographics"
},
{
"paragraph_id": 61,
"text": "Statistics Canada in 2001 reported a \"religion\" total of 145,525 for Cape Breton, including 5,245 with \"no religious affiliation.\" Major categories included:",
"title": "Demographics"
},
{
"paragraph_id": 62,
"text": "Much of the recent economic history of Cape Breton Island can be tied to the coal industry.",
"title": "Economy"
},
{
"paragraph_id": 63,
"text": "The island has two major coal deposits:",
"title": "Economy"
},
{
"paragraph_id": 64,
"text": "Sydney has traditionally been the main port, with facilities in a large, sheltered, natural harbour. It is the island's largest commercial centre and home to the Cape Breton Post daily newspaper, as well as one television station, CJCB-TV (CTV), and several radio stations. The Marine Atlantic terminal at North Sydney is the terminal for large ferries traveling to Channel-Port aux Basques and seasonally to Argentia, both on the island of Newfoundland.",
"title": "Economy"
},
{
"paragraph_id": 65,
"text": "Point Edward on the west side of Sydney Harbour is the location of Sydport, a former navy base (HMCS Protector) now converted to commercial use. The Canadian Coast Guard College is nearby at Westmount. Petroleum, bulk coal, and cruise ship facilities are also in Sydney Harbour.",
"title": "Economy"
},
{
"paragraph_id": 66,
"text": "Glace Bay, the second largest urban community in population, was the island's main coal mining centre until its last mine closed in the 1980s. Glace Bay was the hub of the Sydney & Louisburg Railway and a major fishing port. At one time, Glace Bay was known as the largest town in Nova Scotia, based on population.",
"title": "Economy"
},
{
"paragraph_id": 67,
"text": "Port Hawkesbury has risen to prominence since the completion of the Canso Causeway and Canso Canal created an artificial deep-water port, allowing extensive petrochemical, pulp and paper, and gypsum handling facilities to be established. The Strait of Canso is completely navigable to Seawaymax vessels, and Port Hawkesbury is open to the deepest-draught vessels on the world's oceans. Large marine vessels may also enter Bras d'Or Lake through the Great Bras d'Or channel, and small craft can use the Little Bras d'Or channel or St. Peters Canal. While commercial shipping no longer uses the St. Peters Canal, it remains an important waterway for recreational vessels.",
"title": "Economy"
},
{
"paragraph_id": 68,
"text": "The industrial Cape Breton area faced several challenges with the closure of the Cape Breton Development Corporation's (DEVCO) coal mines and the Sydney Steel Corporation's (SYSCO) steel mill. In recent years, the Island's residents have tried to diversify the area economy by investing in tourism developments, call centres, and small businesses, as well as manufacturing ventures in fields such as auto parts, pharmaceuticals, and window glazings.",
"title": "Economy"
},
{
"paragraph_id": 69,
"text": "While the Cape Breton Regional Municipality is in transition from an industrial to a service-based economy, the rest of Cape Breton Island outside the industrial area surrounding Sydney-Glace Bay has been more stable, with a mixture of fishing, forestry, small-scale agriculture, and tourism.",
"title": "Economy"
},
{
"paragraph_id": 70,
"text": "Tourism in particular has grown throughout the post-Second World War era, especially the growth in vehicle-based touring, which was furthered by the creation of the Cabot Trail scenic drive. The scenery of the island is rivalled in northeastern North America by only Newfoundland; and Cape Breton Island tourism marketing places a heavy emphasis on its Scottish Gaelic heritage through events such as the Celtic Colours Festival, held each October, as well as promotions through the Gaelic College of Celtic Arts and Crafts.",
"title": "Economy"
},
{
"paragraph_id": 71,
"text": "Whale-watching is a popular attraction for tourists. Whale-watching cruises are operated by vendors from Baddeck to Chéticamp. The most popular species of whale found in Cape Breton's waters is the pilot whale.",
"title": "Economy"
},
{
"paragraph_id": 72,
"text": "The Cabot Trail is a scenic road circuit around and over the Cape Breton Highlands with spectacular coastal vistas; over 400,000 visitors drive the Cabot Trail each summer and fall. Coupled with the Fortress of Louisbourg, it has driven the growth of the tourism industry on the island in recent decades. The Condé Nast travel guide has rated Cape Breton Island as one of the world's best island destinations.",
"title": "Economy"
},
{
"paragraph_id": 73,
"text": "The island's primary east–west road is Highway 105, the Trans-Canada Highway, although Trunk 4 is also heavily used. Highway 125 is an important arterial route around Sydney Harbour in the Cape Breton Regional Municipality. The Cabot Trail, circling the Cape Breton Highlands, and Trunk 19, along the island's western coast, are important secondary roads. The Cape Breton and Central Nova Scotia Railway maintains railway connections between the port of Sydney to the Canadian National Railway in Truro.",
"title": "Economy"
},
{
"paragraph_id": 74,
"text": "Cape Breton Island is served by several airports, the largest, the JA Douglas McCurdy Sydney Airport, situated on Trunk 4 between the communities of Sydney and Glace Bay, as well as smaller airports at Port Hawksbury, Margaree, and Baddeck.",
"title": "Economy"
},
{
"paragraph_id": 75,
"text": "Gaelic speakers in Cape Breton, as elsewhere in Nova Scotia, constituted a large proportion of the local population from the 18th century on. They brought with them a common culture of poetry, traditional songs and tales, music and dance, and used this to develop distinctive local traditions.",
"title": "Culture"
},
{
"paragraph_id": 76,
"text": "Most Gaelic settlement in Nova Scotia happened between 1770 and 1840, with probably over 50,000 Gaelic speakers emigrating from the Scottish Highlands and the Hebrides to Nova Scotia and Prince Edward Island. Such emigration was facilitated by changes in Gaelic society and the economy, with sharp increases in rents, confiscation of land and disruption of local customs and rights. In Nova Scotia, poetry and song in Gaelic flourished. George Emmerson argues that an \"ancient and rich\" tradition of storytelling, song, and Gaelic poetry emerged during the 18th century and was transplanted from the Highlands of Scotland to Nova Scotia, where the language similarly took root there. The majority of those settling in Nova Scotia from the end of the 18th century through to middle of the next were from the Scottish Highlands, rather than the Lowlands, making the Highland tradition's impact more profound on the region. Gaelic settlement in Cape Breton began in earnest in the early nineteenth century.",
"title": "Culture"
},
{
"paragraph_id": 77,
"text": "The Gaelic language became dominant from Colchester County in the west of Nova Scotia into Cape Breton County in the east. It was reinforced in Cape Breton in the first half of the 19th century with an influx of Highland Scots numbering approximately 50,000 as a result of the Highland Clearances.",
"title": "Culture"
},
{
"paragraph_id": 78,
"text": "From 1892 to 1904, Jonathon MacKinnon published the Scottish Gaelic-language biweekly newspaper Mac-Talla (lit. 'The Echo') in Sydney, Nova Scotia. During the 1920s, several Scottish Gaelic-language newspapers were printed in Sydney for distribution primarily on Cape Breton, including the Teachdaire nan Gàidheal (lit. 'The Messenger of the Gaels'), which included Gaelic-language lessons; the United Church-affiliated An Solus Iùil (lit. 'The Guiding Light'); and MacKinnon's later endeavor, Fear na Cèilidh (lit. 'The Entertainer').",
"title": "Culture"
},
{
"paragraph_id": 79,
"text": "Gaelic speakers, however, tended to be poor; they were largely illiterate and had little access to education. This situation persisted into the early days of the twentieth century. In 1921 Gaelic was approved as an optional subject in the curriculum of Nova Scotia, but few teachers could be found and children were discouraged from using the language in schools. By 1931 the number of Gaelic speakers in Nova Scotia had fallen to approximately 25,000, mostly in discrete pockets. In Cape Breton it was still a majority language, but the proportion was falling. Children were no longer being raised with Gaelic.",
"title": "Culture"
},
{
"paragraph_id": 80,
"text": "From 1939 on, attempts were made to strengthen its position in the public school system in Nova Scotia, but funding, official commitment and the availability of teachers continued to be a problem. By the 1950s the number of speakers was less than 7,000. The advent of multiculturalism in Canada in the 1960s meant that new educational opportunities became available, with a gradual strengthening of the language at secondary and tertiary level. At present several schools in Cape Breton offer Gaelic Studies and Gaelic language programs, and the language is taught at Cape Breton University.",
"title": "Culture"
},
{
"paragraph_id": 81,
"text": "The 2016 Canadian Census shows that there are only 40 reported speakers of Gaelic as a mother tongue in Cape Breton. On the other hand, there are families and individuals who have recommenced intergenerational transmission. They include fluent speakers from Gaelic-speaking areas of Scotland and speakers who became fluent in Nova Scotia and who in some cases studied in Scotland. Other revitalization activities include adult education, community cultural events and publishing.",
"title": "Culture"
},
{
"paragraph_id": 82,
"text": "Cape Breton is well known for its traditional fiddle music, which was brought to North America by Scottish immigrants during the Highland Clearances. The traditional style has been well preserved in Cape Breton, and cèilidhs have become a popular attraction for tourists. Inverness County in particular has a heavy concentration of musical activity, with regular performances in communities such as Mabou and Judique. Judique is recognized as \"Baile nam Fonn\" (lit. 'Village of Tunes') or the 'Home of Celtic Music', featuring the Celtic Music Interpretive Centre. The traditional fiddle music of Cape Breton is studied by musicians around the world, where its global recognition continues to rise.",
"title": "Culture"
},
{
"paragraph_id": 83,
"text": "Local performers who have received significant recognition outside of Cape Breton include Angus Chisholm; Buddy MacMaster; Joseph Cormier, the first Cape Breton fiddler to record an album made available in Europe (1974); Lee Cremo; Bruce Guthro; Natalie MacMaster; Ashley MacIsaac; The Rankin Family; Aselin Debison; Gordie Sampson; John Allan Cameron; and the Barra MacNeils.",
"title": "Culture"
},
{
"paragraph_id": 84,
"text": "The Men of the Deeps are a male choral group of current and former miners from the industrial Cape Breton area.",
"title": "Culture"
}
] | Cape Breton Island is a rugged and irregularly shaped island on the Atlantic coast of North America and part of the province of Nova Scotia, Canada. The 10,311 km2 (3,981 sq mi) island accounts for 18.7% of Nova Scotia's total area. Although the island is physically separated from the Nova Scotia peninsula by the Strait of Canso, the 1,385 m (4,544 ft) long Canso Causeway connects it to mainland Nova Scotia. The island is east-northeast of the mainland with its northern and western coasts fronting on the Gulf of Saint Lawrence with its western coast forming the eastern limits of the Northumberland Strait. The eastern and southern coasts front the Atlantic Ocean with its eastern coast also forming the western limits of the Cabot Strait. Its landmass slopes upward from south to north, culminating in the highlands of its northern cape. One of the world's larger saltwater lakes, Bras d'Or, dominates the island's centre. The total population at the 2016 census numbered 132,010 Cape Bretoners, which is approximately 15% of the provincial population. Cape Breton Island has experienced a decline in population of approximately 2.9% since the 2011 census. Approximately 75% of the island's population is in the Cape Breton Regional Municipality (CBRM), which includes all of Cape Breton County and is often referred to as Industrial Cape Breton. | 2001-10-28T21:43:17Z | 2023-12-10T18:10:09Z | [
"Template:Full citation needed",
"Template:Cite journal",
"Template:Wikivoyage-inline",
"Template:Use dmy dates",
"Template:Lang-gd",
"Template:Convert",
"Template:Gain",
"Template:Refn",
"Template:Lit",
"Template:Prone to spam",
"Template:Subdivisions of Nova Scotia",
"Template:HMCS",
"Template:Citation needed",
"Template:Main",
"Template:Short description",
"Template:Redirect",
"Template:Infobox islands",
"Template:Lang",
"Template:Lang-mic",
"Template:Authority control",
"Template:Loss",
"Template:Dead link",
"Template:Clear left",
"Template:Cite news",
"Template:Cite DCB",
"Template:Commons category",
"Template:Cite thesis",
"Template:Canadian colonies",
"Template:British overseas territories",
"Template:Use Canadian English",
"Template:Lang-fr",
"Template:Cite EB1911",
"Template:Cite book",
"Template:Webarchive",
"Template:Climate chart",
"Template:Reflist",
"Template:Cite web",
"Template:Celtic languages"
] | https://en.wikipedia.org/wiki/Cape_Breton_Island |
5,725 | Cthulhu Mythos | The Cthulhu Mythos is a mythopoeia and a shared fictional universe, originating in the works of American horror writer H. P. Lovecraft. The term was coined by August Derleth, a contemporary correspondent and protégé of Lovecraft, to identify the settings, tropes, and lore that were employed by Lovecraft and his literary successors. The name "Cthulhu" derives from the central creature in Lovecraft's seminal short story "The Call of Cthulhu", first published in the pulp magazine Weird Tales in 1928.
Richard L. Tierney, a writer who also wrote Mythos tales, later applied the term "Derleth Mythos" to distinguish Lovecraft's works from Derleth's later stories, which modify key tenets of the Mythos. Authors of Lovecraftian horror in particular frequently use elements of the Cthulhu Mythos.
In his essay "H. P. Lovecraft and the Cthulhu Mythos", Robert M. Price described two stages in the development of the Cthulhu Mythos. Price called the first stage the "Cthulhu Mythos proper". This stage was formulated during Lovecraft's lifetime and was subject to his guidance. The second stage was guided by August Derleth who, in addition to publishing Lovecraft's stories after his death, attempted to categorize and expand the Mythos.
An ongoing theme in Lovecraft's work is the complete irrelevance of mankind in the face of the cosmic horrors that apparently exist in the universe. Lovecraft made frequent references to the "Great Old Ones", a loose pantheon of ancient, powerful deities from space who once ruled the Earth and have since fallen into a deathlike sleep. While these monstrous deities were present in almost all of Lovecraft's published work (his second short story "Dagon", published in 1919, is considered the start of the Mythos), the first story to really expand the pantheon of Great Old Ones and its themes is "The Call of Cthulhu", which was published in 1928.
Lovecraft broke with other pulp writers of the time by having his main characters' minds deteriorate when afforded a glimpse of what exists outside their perceived reality. He emphasized the point by stating in the opening sentence of the story that "The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents."
Writer Dirk W. Mosig noted that Lovecraft was a "mechanistic materialist" who embraced the philosophy of cosmic indifferentism and believed in a purposeless, mechanical, and uncaring universe. Human beings, with their limited faculties, can never fully understand this universe, and the cognitive dissonance caused by this revelation leads to insanity, in his view.
There have been attempts at categorizing this fictional group of beings. Phillip A. Schreffler argues that by carefully scrutinizing Lovecraft's writings, a workable framework emerges that outlines the entire "pantheon"—from the unreachable "Outer Ones" (e.g., Azathoth, who occupies the centre of the universe) and "Great Old Ones" (e.g., Cthulhu, imprisoned on Earth in the sunken city of R'lyeh) to the lesser castes (the lowly slave shoggoths and the Mi-Go).
David E. Schultz said Lovecraft never meant to create a canonical Mythos but rather intended his imaginary pantheon to serve merely as a background element. Lovecraft himself humorously referred to his Mythos as "Yog Sothothery" (Dirk W. Mosig coincidentally suggested the term Yog-Sothoth Cycle of Myth be substituted for Cthulhu Mythos). At times, Lovecraft even had to remind his readers that his Mythos creations were entirely fictional.
The view that there was no rigid structure is expounded upon by S. T. Joshi, who said
Lovecraft's imaginary cosmogony was never a static system but rather a sort of aesthetic construct that remained ever adaptable to its creator's developing personality and altering interests…. There was never a rigid system that might be posthumously appropriated…. The essence of the mythos lies not in a pantheon of imaginary deities nor in a cobwebby collection of forgotten tomes, but rather in a certain convincing cosmic attitude.
Price said Lovecraft's writings could at least be divided into categories and identified three distinct themes: the "Dunsanian" (written in a similar style as Lord Dunsany), "Arkham" (occurring in Lovecraft's fictionalized New England setting), and "Cthulhu" (the cosmic tales) cycles. Writer Will Murray noted that while Lovecraft often used his fictional pantheon in the stories he ghostwrote for other authors, he reserved Arkham and its environs exclusively for those tales he wrote under his own name.
Although the Mythos was not formalized or acknowledged between them, Lovecraft did correspond, meet in person, and share story elements with other contemporary writers including Clark Ashton Smith, Robert E. Howard, Robert Bloch, Frank Belknap Long, Henry Kuttner, Henry S. Whitehead, and Fritz Leiber—a group referred to as the "Lovecraft Circle".
For example, Robert E. Howard's character Friedrich Von Junzt reads Lovecraft's Necronomicon in the short story "The Children of the Night" (1931), and in turn Lovecraft mentions Howard's Unaussprechlichen Kulten in the stories "Out of the Aeons" (1935) and "The Shadow Out of Time" (1936). Many of Howard's original unedited Conan stories also involve parts of the Cthulhu Mythos.
Price denotes the second stage's commencement with August Derleth, with the principal difference between Lovecraft and Derleth being Derleth's use of hope and development of the idea that the Cthulhu Mythos essentially represented a struggle between good and evil. Derleth is credited with creating the "Elder Gods". He stated:
As Lovecraft conceived the deities or forces of his mythos, there were, initially, the Elder Gods…. These Elder Gods were benign deities, representing the forces of good, and existed peacefully…very rarely stirring forth to intervene in the unceasing struggle between the powers of evil and the races of Earth. These powers of evil were variously known as the Great Old Ones or the Ancient Ones....
Price said the basis for Derleth's system is found in Lovecraft: "Was Derleth's use of the rubric 'Elder Gods' so alien to Lovecraft's in At the Mountains of Madness? Perhaps not. In fact, this very story, along with some hints from 'The Shadow over Innsmouth', provides the key to the origin of the 'Derleth Mythos'. For in At the Mountains of Madness is shown the history of a conflict between interstellar races, first among them the Elder Ones and the Cthulhu-spawn."
Derleth said Lovecraft wished for other authors to actively write about the Mythos as opposed to it being a discrete plot device within Lovecraft's own stories. Derleth expanded the boundaries of the Mythos by including any passing reference to another author's story elements by Lovecraft as part of the genre. Just as Lovecraft made passing reference to Clark Ashton Smith's Book of Eibon, Derleth in turn added Smith's Ubbo-Sathla to the Mythos.
Derleth also attempted to connect the deities of the Mythos to the four elements (air, earth, fire, and water), creating new beings representative of certain elements in order to legitimize his system of classification. He created "Cthugha" as a sort of fire elemental when a fan, Francis Towner Laney, complained that he had neglected to include the element in his schema. Laney, the editor of The Acolyte, had categorized the Mythos in an essay that first appeared in the Winter 1942 issue of the magazine.
Impressed by the glossary, Derleth asked Laney to rewrite it for publication in the Arkham House collection Beyond the Wall of Sleep (1943). Laney's essay ("The Cthulhu Mythos") was later republished in Crypt of Cthulhu #32 (1985). In applying the elemental theory to beings that function on a cosmic scale (e.g., Yog-Sothoth) some authors created a fifth element that they termed aethyr.
A number of fictional cults appear in the Cthulhu Mythos, the loosely connected series of horror stories written by Lovecraft and other writers inspired by his creations. Many of these cults serve the Outer God Nyarlathotep, the Crawling Chaos, a protean creature that appears in myriad guises. Other cults are dedicated to the cause of the Great Old Ones, a group of powerful alien beings currently imprisoned or otherwise resting in a deathlike sleep. These fictional cults have in some ways taken on a life of their own beyond the pages of Lovecraft's works. According to author John Engle, "The very real world of esoteric magical and occult practices has adopted Lovecraft and his works into its canon, which have informed the ritual practices, or even formed the bedrock, of certain cabals and magical circles".
The Cthulhu Mythos of H. P. Lovecraft is considered to have been highly influential for the speculative fiction genre. It has been called "the official fictional religion of fantasy, science fiction, and horror, a grab bag for writers in need of unthinkably vast, and unthinkably indifferent, eldritch entities". | [
{
"paragraph_id": 0,
"text": "The Cthulhu Mythos is a mythopoeia and a shared fictional universe, originating in the works of American horror writer H. P. Lovecraft. The term was coined by August Derleth, a contemporary correspondent and protégé of Lovecraft, to identify the settings, tropes, and lore that were employed by Lovecraft and his literary successors. The name \"Cthulhu\" derives from the central creature in Lovecraft's seminal short story \"The Call of Cthulhu\", first published in the pulp magazine Weird Tales in 1928.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Richard L. Tierney, a writer who also wrote Mythos tales, later applied the term \"Derleth Mythos\" to distinguish Lovecraft's works from Derleth's later stories, which modify key tenets of the Mythos. Authors of Lovecraftian horror in particular frequently use elements of the Cthulhu Mythos.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In his essay \"H. P. Lovecraft and the Cthulhu Mythos\", Robert M. Price described two stages in the development of the Cthulhu Mythos. Price called the first stage the \"Cthulhu Mythos proper\". This stage was formulated during Lovecraft's lifetime and was subject to his guidance. The second stage was guided by August Derleth who, in addition to publishing Lovecraft's stories after his death, attempted to categorize and expand the Mythos.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "An ongoing theme in Lovecraft's work is the complete irrelevance of mankind in the face of the cosmic horrors that apparently exist in the universe. Lovecraft made frequent references to the \"Great Old Ones\", a loose pantheon of ancient, powerful deities from space who once ruled the Earth and have since fallen into a deathlike sleep. While these monstrous deities were present in almost all of Lovecraft's published work (his second short story \"Dagon\", published in 1919, is considered the start of the Mythos), the first story to really expand the pantheon of Great Old Ones and its themes is \"The Call of Cthulhu\", which was published in 1928.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Lovecraft broke with other pulp writers of the time by having his main characters' minds deteriorate when afforded a glimpse of what exists outside their perceived reality. He emphasized the point by stating in the opening sentence of the story that \"The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents.\"",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Writer Dirk W. Mosig noted that Lovecraft was a \"mechanistic materialist\" who embraced the philosophy of cosmic indifferentism and believed in a purposeless, mechanical, and uncaring universe. Human beings, with their limited faculties, can never fully understand this universe, and the cognitive dissonance caused by this revelation leads to insanity, in his view.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "There have been attempts at categorizing this fictional group of beings. Phillip A. Schreffler argues that by carefully scrutinizing Lovecraft's writings, a workable framework emerges that outlines the entire \"pantheon\"—from the unreachable \"Outer Ones\" (e.g., Azathoth, who occupies the centre of the universe) and \"Great Old Ones\" (e.g., Cthulhu, imprisoned on Earth in the sunken city of R'lyeh) to the lesser castes (the lowly slave shoggoths and the Mi-Go).",
"title": "History"
},
{
"paragraph_id": 7,
"text": "David E. Schultz said Lovecraft never meant to create a canonical Mythos but rather intended his imaginary pantheon to serve merely as a background element. Lovecraft himself humorously referred to his Mythos as \"Yog Sothothery\" (Dirk W. Mosig coincidentally suggested the term Yog-Sothoth Cycle of Myth be substituted for Cthulhu Mythos). At times, Lovecraft even had to remind his readers that his Mythos creations were entirely fictional.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The view that there was no rigid structure is expounded upon by S. T. Joshi, who said",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Lovecraft's imaginary cosmogony was never a static system but rather a sort of aesthetic construct that remained ever adaptable to its creator's developing personality and altering interests…. There was never a rigid system that might be posthumously appropriated.…. The essence of the mythos lies not in a pantheon of imaginary deities nor in a cobwebby collection of forgotten tomes, but rather in a certain convincing cosmic attitude.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Price said Lovecraft's writings could at least be divided into categories and identified three distinct themes: the \"Dunsanian\" (written in a similar style as Lord Dunsany), \"Arkham\" (occurring in Lovecraft's fictionalized New England setting), and \"Cthulhu\" (the cosmic tales) cycles. Writer Will Murray noted that while Lovecraft often used his fictional pantheon in the stories he ghostwrote for other authors, he reserved Arkham and its environs exclusively for those tales he wrote under his own name.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Although the Mythos was not formalized or acknowledged between them, Lovecraft did correspond, meet in person, and share story elements with other contemporary writers including Clark Ashton Smith, Robert E. Howard, Robert Bloch, Frank Belknap Long, Henry Kuttner, Henry S. Whitehead, and Fritz Leiber—a group referred to as the \"Lovecraft Circle\".",
"title": "History"
},
{
"paragraph_id": 12,
"text": "For example, Robert E. Howard's character Friedrich Von Junzt reads Lovecraft's Necronomicon in the short story \"The Children of the Night\" (1931), and in turn Lovecraft mentions Howard's Unaussprechlichen Kulten in the stories \"Out of the Aeons\" (1935) and \"The Shadow Out of Time\" (1936). Many of Howard's original unedited Conan stories also involve parts of the Cthulhu Mythos.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Price denotes the second stage's commencement with August Derleth, with the principal difference between Lovecraft and Derleth being Derleth's use of hope and development of the idea that the Cthulhu Mythos essentially represented a struggle between good and evil. Derleth is credited with creating the \"Elder Gods\". He stated:",
"title": "History"
},
{
"paragraph_id": 14,
"text": "As Lovecraft conceived the deities or forces of his mythos, there were, initially, the Elder Gods…. These Elder Gods were benign deities, representing the forces of good, and existed peacefully…very rarely stirring forth to intervene in the unceasing struggle between the powers of evil and the races of Earth. These powers of evil were variously known as the Great Old Ones or the Ancient Ones....",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Price said the basis for Derleth's system is found in Lovecraft: \"Was Derleth's use of the rubric 'Elder Gods' so alien to Lovecraft's in At the Mountains of Madness? Perhaps not. In fact, this very story, along with some hints from \"The Shadow over Innsmouth\", provides the key to the origin of the 'Derleth Mythos'. For in At the Mountains of Madness is shown the history of a conflict between interstellar races, first among them the Elder Ones and the Cthulhu-spawn.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Derleth said Lovecraft wished for other authors to actively write about the Mythos as opposed to it being a discrete plot device within Lovecraft's own stories. Derleth expanded the boundaries of the Mythos by including any passing reference to another author's story elements by Lovecraft as part of the genre. Just as Lovecraft made passing reference to Clark Ashton Smith's Book of Eibon, Derleth in turn added Smith's Ubbo-Sathla to the Mythos.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Derleth also attempted to connect the deities of the Mythos to the four elements (air, earth, fire, and water), creating new beings representative of certain elements in order to legitimize his system of classification. He created \"Cthugha\" as a sort of fire elemental when a fan, Francis Towner Laney, complained that he had neglected to include the element in his schema. Laney, the editor of The Acolyte, had categorized the Mythos in an essay that first appeared in the Winter 1942 issue of the magazine.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Impressed by the glossary, Derleth asked Laney to rewrite it for publication in the Arkham House collection Beyond the Wall of Sleep (1943). Laney's essay (\"The Cthulhu Mythos\") was later republished in Crypt of Cthulhu #32 (1985). In applying the elemental theory to beings that function on a cosmic scale (e.g., Yog-Sothoth) some authors created a fifth element that they termed aethyr.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "A number of fictional cults appear in the Cthulhu Mythos, the loosely connected series of horror stories written by Lovecraft and other writers inspired by his creations. Many of these cults serve the Outer God Nyarlathotep, the Crawling Chaos, a protean creature that appears in myriad guises. Other cults are dedicated to the cause of the Great Old Ones, a group of powerful alien beings currently imprisoned or otherwise resting in a deathlike sleep. These fictional cults have in some ways taken on a life of their own beyond the pages of Lovecraft's works. According to author John Engle, \"The very real world of esoteric magical and occult practices has adopted Lovecraft and his works into its canon, which have informed the ritual practices, or even formed the bedrock, of certain cabals and magical circles\".",
"title": "Fictional cults"
},
{
"paragraph_id": 20,
"text": "The Cthulhu Mythos of H. P. Lovecraft is considered to have been highly influential for the speculative fiction genre. It has been called \"the official fictional religion of fantasy, science fiction, and horror, a grab bag for writers in need of unthinkably vast, and unthinkably indifferent, eldritch entities\".",
"title": "Significance"
}
] | The Cthulhu Mythos is a mythopoeia and a shared fictional universe, originating in the works of American horror writer H. P. Lovecraft. The term was coined by August Derleth, a contemporary correspondent and protégé of Lovecraft, to identify the settings, tropes, and lore that were employed by Lovecraft and his literary successors. The name "Cthulhu" derives from the central creature in Lovecraft's seminal short story "The Call of Cthulhu", first published in the pulp magazine Weird Tales in 1928. Richard L. Tierney, a writer who also wrote Mythos tales, later applied the term "Derleth Mythos" to distinguish Lovecraft's works from Derleth's later stories, which modify key tenets of the Mythos. Authors of Lovecraftian horror in particular frequently use elements of the Cthulhu Mythos. | 2001-11-14T23:26:59Z | 2023-12-31T12:22:14Z | [
"Template:Bquote",
"Template:Citation needed",
"Template:Wikiquote",
"Template:Cthulhu Mythos",
"Template:The Shadow Over Innsmouth",
"Template:The Call of Cthulhu",
"Template:At the Mountains of Madness",
"Template:Short description",
"Template:Mdash",
"Template:Reflist",
"Template:Commons category",
"Template:Cite journal",
"Template:Wikisource portal",
"Template:H. P. Lovecraft",
"Template:Fantasy fiction",
"Template:Rp",
"Template:Blockquote",
"Template:Cite magazine",
"Template:Cite web",
"Template:Narrative",
"Template:Mythology",
"Template:Annotated link",
"Template:Cite book",
"Template:Hugo Award Best Series"
] | https://en.wikipedia.org/wiki/Cthulhu_Mythos |
5,726 | Crane shot | In filmmaking and video production, a crane shot is a shot taken by a camera on a moving crane or jib. Filmmaker D. W. Griffith created the first crane for his 1916 epic film Intolerance, with famed special effects pioneer Eiji Tsuburaya later constructing the first iron camera crane, an adaptation of which is still used worldwide today. Most cranes accommodate both the camera and an operator, but some can be moved by remote control. Crane shots are often found in what are supposed to be emotional or suspenseful scenes. One example of this technique is the shots taken by remote cranes in the car-chase sequence of the 1985 film To Live and Die in L.A. Some filmmakers place the camera on a boom arm simply to make it easier to move around between ordinary set-ups.
D. W. Griffith designed the first camera crane for his 1916 epic film Intolerance. His crane was 140 feet tall and rode on six four-wheeled railroad trucks. In 1929, future special effects pioneer Eiji Tsuburaya constructed a smaller replica of Griffith's wooden camera crane without blueprints or manuals. Although his wooden crane collapsed shortly after its completion, Tsuburaya created the first-ever iron shooting crane in October 1934, and an adaptation of this crane is still used worldwide today.
Camera cranes may be small, medium, or large, depending on the load capacity and the length of the boom. Historically, the first camera cranes lifted the camera together with the operator, and sometimes an assistant as well. The range of motion of the boom was restricted because of the high load capacity and the need to ensure operator safety. In recent years, remotely controlled camera cranes have become popular: the boom carries only a film or television camera, without an operator, and allows shooting from otherwise difficult positions, since the small load capacity makes possible a long boom reach and relative freedom of movement. The operator controls the camera from the ground through a motorized panoramic head, using a remote control and watching the image on a video monitor. A separate category consists of telescopic camera cranes. These devices allow an arbitrary camera trajectory to be set, eliminating the radial displacement characteristic of traditional jib-crane spanning shots.
Large camera cranes are almost indistinguishable from ordinary boom-type cranes, except for special equipment for moving the boom smoothly and for controlling noise. Small camera cranes and crane-trucks have a lightweight construction, often without a mechanical drive. The boom is moved manually and balanced by a counterweight matched to the load, which makes manipulation easier. To improve usability and the repeatability of the crane's movement across different takes, the axes of rotation of the boom are fitted with graduated dials and a pointer. In some cases, the camera crane is mounted on a dolly for even greater camera mobility. Such devices are called crane trolleys. In modern films, robotic cranes with multiple actuators allow high-accuracy, repeatable camera movement for trick photography. These devices are called tap-robots; some sources use the term motion control.
The major supplier of cranes in the cinema of the United States throughout the 1940s, 1950s, and 1960s was the Chapman Company (later Chapman-Leonard of North Hollywood), supplanted by dozens of similar manufacturers around the world. The traditional design provided seats for both the director and the camera operator, and sometimes a third seat for the cinematographer as well. Large weights on the back of the crane compensate for the weight of the people riding the crane and must be adjusted carefully to avoid the possibility of accidents. During the 1960s, the tallest crane was the Chapman Titan crane, a massive design over 20 feet high that won an Academy Scientific & Engineering award.
During the last few years, camera cranes have been miniaturized and costs have dropped so dramatically that most aspiring filmmakers have access to these tools. What was once a "Hollywood" effect is now available for under $400. Manufacturers of camera cranes include ABC-Products, Cambo, Filmotechnic, Polecam, Panther and Matthews Studio Equipment, Sevenoak, and Newton Nordic.
Most such cranes were manually operated, requiring an experienced boom operator who knew how to vertically raise, lower, and "crab" the camera alongside actors while the crane platform rolled on separate tracks. The crane operator and camera operator had to precisely coordinate their moves so that focus, pan, and camera position all started and stopped at the same time, requiring great skill and rehearsal. On the back of the crane is a counterweight, which allows the boom to be moved smoothly with minimal effort.
{
"paragraph_id": 0,
"text": "In filmmaking and video production, a crane shot is a shot taken by a camera on a moving crane or jib. Filmmaker D. W. Griffith created the first crane for his 1916 epic film Intolerance, with famed special effects pioneer Eiji Tsuburaya later constructing the first iron camera crane which is still adapted worldwide today. Most cranes accommodate both the camera and an operator, but some can be moved by remote control. Crane shots are often found in what are supposed to be emotional or suspenseful scenes. One example of this technique is the shots taken by remote cranes in the car-chase sequence of the 1985 film To Live and Die in L.A. Some filmmakers place the camera on a boom arm simply to make it easier to move around between ordinary set-ups.",
"title": ""
},
{
"paragraph_id": 1,
"text": "D. W. Griffith designed the first camera crane for his 1916 epic film Intolerance. His crane measured 140 feet tall and ascended on six four-wheeled railroad trucks. In 1929, future special effects pioneer Eiji Tsuburaya constructed a smaller replica of Griffith's wooden camera crane without blueprints or manuals. Although his wooden crane collapsed shortly after its completion, Tsuburaya created the first-ever iron shooting crane in October 1934, and an adaptation of this crane is still used worldwide today.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "Camera cranes may be small, medium, or large, depending on the load capacity and length of the loading arm. Historically, the first camera crane provided for lifting the camera together with the operator, and sometimes an assistant. The range of motion of the boom was restricted because of the high load capacity and the need to ensure operator safety. In recent years a camera crane boom tripod with a remote control has become popular. It carries on the boom only a movie or television camera without an operator and allows shooting from difficult positions as a small load capacity makes it possible to achieve a long reach of the crane boom and relative freedom of movement. The operator controls the camera from the ground through a motorized panoramic head, using remote control and video surveillance by watching the image on the monitor. A separate category consists of telescopic camera cranes. These devices allow setting an arbitrary trajectory of the camera, eliminating the characteristic jib crane radial displacement that comes with traditional spanning shots.",
"title": "Camera crane types"
},
{
"paragraph_id": 3,
"text": "Large camera cranes are almost indistinguishable from the usual boom-type cranes, with the exception of special equipment for smoothly moving the boom and controlling noise. Small camera cranes and crane-trucks have a lightweight construction, often without a mechanical drive. The valves are controlled manually by balancing the load-specific counterweight, facilitating manipulation. To improve usability and repeatability of movement of the crane in different takes, the axis of rotation arrows are provided with limbs and a pointer. In some cases, the camera crane is mounted on a dolly for even greater camera mobility. Such devices are called crane trolleys. In modern films robotic cranes allow use of multiple actuators for high-accuracy repeated movement of the camera in trick photography. These devices are called tap-robots; some sources use the term motion control.",
"title": "Camera crane types"
},
{
"paragraph_id": 4,
"text": "The major supplier of cranes in the cinema of the United States throughout the 1940s, 1950s, and 1960s was the Chapman Company (later Chapman-Leonard of North Hollywood), supplanted by dozens of similar manufacturers around the world. The traditional design provided seats for both the director and the camera operator, and sometimes a third seat for the cinematographer as well. Large weights on the back of the crane compensate for the weight of the people riding the crane and must be adjusted carefully to avoid the possibility of accidents. During the 1960s, the tallest crane was the Chapman Titan crane, a massive design over 20 feet high that won an Academy Scientific & Engineering award.",
"title": "Manufacturers"
},
{
"paragraph_id": 5,
"text": "During the last few years, camera cranes have been miniaturized and costs have dropped so dramatically that most aspiring film makers have access to these tools. What was once a \"Hollywood\" effect is now available for under $400. Manufacturers of camera cranes include ABC-Products, Cambo, Filmotechnic, Polecam, Panther and Matthews Studio Equipment, Sevenoak, and Newton Nordic.",
"title": "Manufacturers"
},
{
"paragraph_id": 6,
"text": "Most such cranes were manually operated, requiring an experienced boom operator who knew how to vertically raise, lower, and \"crab\" the camera alongside actors while the crane platform rolled on separate tracks. The crane operator and camera operator had to precisely coordinate their moves so that focus, pan, and camera position all started and stopped at the same time, requiring great skill and rehearsal. On the back of the crane is a counter weight. This allows the crane to smooth action while in motion with minimal effort.",
"title": "Camera crane technique"
}
] | In filmmaking and video production, a crane shot is a shot taken by a camera on a moving crane or jib. Filmmaker D. W. Griffith created the first crane for his 1916 epic film Intolerance, with famed special effects pioneer Eiji Tsuburaya later constructing the first iron camera crane, an adaptation of which is still used worldwide today. Most cranes accommodate both the camera and an operator, but some can be moved by remote control. Crane shots are often found in what are supposed to be emotional or suspenseful scenes. One example of this technique is the shots taken by remote cranes in the car-chase sequence of the 1985 film To Live and Die in L.A. Some filmmakers place the camera on a boom arm simply to make it easier to move around between ordinary set-ups. | 2023-07-10T18:59:06Z | [
"Template:Short description",
"Template:More citations needed",
"Template:When",
"Template:Reflist",
"Template:Cite book",
"Template:Cite web",
"Template:Citation",
"Template:Cinematic techniques"
] | https://en.wikipedia.org/wiki/Crane_shot |
5,729 | Chariots of Fire | Chariots of Fire is a 1981 British historical sports drama film directed by Hugh Hudson, written by Colin Welland and produced by David Puttnam. It is based on the true story of two British athletes in the 1924 Olympics: Eric Liddell, a devout Scottish Christian who runs for the glory of God, and Harold Abrahams, an English Jew who runs to overcome prejudice. Ben Cross and Ian Charleson star as Abrahams and Liddell, alongside Nigel Havers, Ian Holm, John Gielgud, Lindsay Anderson, Cheryl Campbell, Alice Krige, Brad Davis and Dennis Christopher in supporting roles. Kenneth Branagh makes his film debut in a minor role.
Chariots of Fire was nominated for seven Academy Awards and won four, including Best Picture, Best Original Screenplay and Best Original Score for Vangelis' electronic theme tune. At the 35th British Academy Film Awards, the film was nominated in 11 categories and won in three, including Best Film. It is ranked 19th in the British Film Institute's list of Top 100 British films.
The film's title was inspired by the line "Bring me my Chariot of fire!" from the William Blake poem adapted into the British hymn and unofficial English anthem "Jerusalem"; the hymn is heard at the end of the film. The original phrase "chariot(s) of fire" is from 2 Kings 2:11 and 6:17 in the Bible.
During a 1978 memorial service in London in honour of Harold Abrahams, led by his former colleague Lord Andrew Lindsay, there is a flashback to Abrahams as a young man, running along a beach with a group of fellow athletes.
In 1919, Harold Abrahams enters the University of Cambridge, where he experiences antisemitism from the staff but enjoys participating in the Gilbert and Sullivan club. He becomes the first person ever to complete the Trinity Great Court Run, running around the college courtyard in the time it takes for the clock to strike 12, and achieves an undefeated string of victories in various national running competitions. Although focused on his running, he falls in love with Sybil Gordon, a leading Gilbert and Sullivan soprano.
Eric Liddell, born in China to Scottish missionary parents, is in Scotland. His devout sister Jennie disapproves of Liddell's plans to pursue competitive running. Still, Liddell sees running as a way of glorifying God before returning to China to work as a missionary. When they first race against each other, Liddell beats Abrahams. Abrahams takes it poorly, but Sam Mussabini, a professional trainer he had approached earlier, offers to take him on to improve his technique. This attracts criticism from the Cambridge college masters, who allege it is not gentlemanly for an amateur to "play the tradesman" by employing a professional coach. Abrahams dismisses this concern, interpreting it as cover for antisemitic and class-based prejudice. When Liddell accidentally misses a church prayer meeting because of his running, Jennie upbraids him and accuses him of no longer caring about God. Eric tells her that though he intends to return eventually to the China mission, he feels divinely inspired when running and that not to run would be to dishonour God.
After years of training and racing, the two athletes are accepted to represent Great Britain in the 1924 Olympics in Paris. Also accepted are Abrahams' Cambridge friends, Andrew Lindsay, Aubrey Montague, and Henry Stallard. While boarding the boat to France for the Olympics, Liddell discovers the heats for his 100-metre race will be on a Sunday. Despite intense pressure from the Prince of Wales and the British Olympic Committee, he refuses to run the race because his Christian convictions prevent him from running on the Lord's Day. A solution is found thanks to Liddell's teammate Lindsay, who, having already won a silver medal in the 400 metres hurdles, offers to give his place in the 400-metre race on the following Thursday to Liddell, who gratefully accepts. Liddell's religious convictions in the face of national athletic pride make headlines around the world; he delivers a sermon at the Paris Church of Scotland that Sunday, and quotes from Isaiah 40.
Abrahams is badly beaten by the heavily favoured United States runners in the 200 metre race. He knows his last chance for a medal will be the 100 metres. He competes in the race and wins. His coach Mussabini, who was barred from the stadium, is overcome that the years of dedication and training have paid off with an Olympic gold medal. Now Abrahams can get on with his life and reunite with his girlfriend Sybil, whom he had neglected for the sake of running. Before Liddell's race, the American coach remarks dismissively to his runners that Liddell has little chance of doing well in his now, far longer, 400 metre race. But one of the American runners, Jackson Scholz, hands Liddell a note of support that quotes 1 Samuel 2:30. Liddell defeats the American favourites and wins the gold medal. The British team returns home triumphant.
A textual epilogue reveals that Abrahams married Sybil and became the elder statesman of British athletics while Liddell went on to do missionary work and was mourned by all of Scotland following his death in Japanese-occupied China.
Other actors in smaller roles include John Young as Eric and Jennie's father Reverend J.D. Liddell, Yvonne Gilan as their mother Mary, Benny Young as their older brother Rob, Yves Beneyton as French runner Géo André, Philip O'Brien as American coach George Collins, Patrick Doyle as Jimmie, and Ruby Wax as Bunty. Kenneth Branagh, who worked as a set gofer, appears as an extra in the Cambridge Society Day sequence. Stephen Fry likewise has an uncredited role as a Gilbert and Sullivan Club singer.
Producer David Puttnam was looking for a story in the mould of A Man for All Seasons (1966), regarding someone who follows his conscience, and felt that sport provided clear situations in this sense. He discovered Eric Liddell's story by accident in 1977, when he happened upon An Approved History of the Olympic Games, a reference book on the Olympics, while housebound from the flu, in a rented house in Malibu.
Screenwriter Colin Welland, commissioned by Puttnam, did an enormous amount of research for his Academy Award-winning script. Among other things, he took out advertisements in London newspapers seeking memories of the 1924 Olympics, went to the National Film Archives for pictures and footage of the 1924 Olympics, and interviewed everyone involved who was still alive. Welland just missed Abrahams, who died on 14 January 1978, but he did attend Abrahams' February 1978 memorial service, which inspired the present-day framing device of the film. Aubrey Montague's son saw Welland's newspaper ad and sent him copies of the letters his father had sent home – which gave Welland something to use as a narrative bridge in the film. Except for changes in the greetings of the letters from "Darling Mummy" to "Dear Mum" and the change from Oxford to Cambridge, all of the readings from Montague's letters are from the originals.
Welland's original script also featured, in addition to Eric Liddell and Harold Abrahams, a third protagonist, 1924 Olympic gold medallist Douglas Lowe, who was presented as a privileged aristocratic athlete. However, Lowe refused to have anything to do with the film, and his character was written out and replaced by the fictional character of Lord Andrew Lindsay.
Initial financing towards development costs was provided by Goldcrest Films, who then sold the project to Mohamed Al-Fayed's Allied Stars, but kept a percentage of the profits.
Ian Charleson wrote Eric Liddell's speech to the post-race workingmen's crowd at the Scotland v. Ireland races. Charleson, who had studied the Bible intensively in preparation for the role, told director Hugh Hudson that he found the scripted speech portentous and sanctimonious, and neither authentic nor inspiring. Hudson and Welland allowed him to write words he personally found inspirational instead.
Puttnam chose Hugh Hudson, a multiple award-winning advertising and documentary filmmaker who had never helmed a feature film, to direct Chariots of Fire. Hudson and Puttnam had known each other since the 1960s when Puttnam was an advertising executive and Hudson was making films for ad agencies. In 1977, Hudson had also been second-unit director on the Puttnam-produced film Midnight Express.
Director Hugh Hudson was determined to cast young, unknown actors in all the major roles of the film, and to back them up by using veterans like John Gielgud, Lindsay Anderson, and Ian Holm as their supporting cast. Hudson and producer David Puttnam did months of fruitless searching for the perfect actor to play Eric Liddell. They then saw Scottish stage actor Ian Charleson performing the role of Pierre in the Royal Shakespeare Company's production of Piaf, and knew immediately they had found their man. Unbeknownst to them, Charleson had heard about the film from his father, and desperately wanted to play the part, feeling it would "fit like a kid glove".
Ben Cross, who plays Harold Abrahams, was discovered while playing Billy Flynn in Chicago. In addition to having a natural pugnaciousness, he had the desired ability to sing and play the piano. Cross was thrilled to be cast, and said he was moved to tears by the film's script.
20th Century-Fox, which put up half of the production budget in exchange for distribution rights outside of North America, insisted on having a couple of notable American names in the cast. Thus the small parts of the two American champion runners, Jackson Scholz and Charley Paddock, were cast with recent headliners: Brad Davis had recently starred in Midnight Express (also produced by Puttnam), and Dennis Christopher had recently starred, as a young bicycle racer, in the popular indie film Breaking Away.
All of the actors portraying runners underwent an intensive three-month training regimen with renowned running coach Tom McNab. This training and isolation of the actors also created a strong bond and sense of camaraderie among them.
The beach scenes showing the athletes running towards the Carlton Hotel at Broadstairs, Kent, were shot in Scotland on West Sands, St Andrews, next to the 18th hole of the Old Course at St Andrews Links; a plaque there now commemorates the filming. The impact of these scenes (as the athletes run in slow motion to Vangelis's music) also prompted Broadstairs town council to commemorate them with a seafront plaque of its own.
All of the Cambridge scenes were actually filmed at Hugh Hudson's alma mater Eton College, because Cambridge refused filming rights, fearing depictions of anti-Semitism. The Cambridge administration greatly regretted the decision after the film's enormous success.
Liverpool Town Hall was the setting for the scenes depicting the British Embassy in Paris. The Colombes Olympic Stadium in Paris was represented by the Oval Sports Centre, Bebington, Merseyside. The nearby Woodside ferry terminal was used to represent the embarkation scenes set in Dover. The railway station scenes were filmed in York, using locomotives from the National Railway Museum. The filming of the Scotland–France international athletic meeting took place at Goldenacre Sports Ground, owned by George Heriot's School, while the Scotland–Ireland meeting was at the nearby Inverleith Sports Ground. The scene depicting a performance of The Mikado was filmed in the Royal Court Theatre, Liverpool, with members of the D'Oyly Carte Opera Company who were on tour.
The film was slightly altered for the U.S. audience. A brief scene depicting a pre-Olympics cricket game between Abrahams, Liddell, Montague, and the rest of the British track team appears shortly after the beginning of the original film; this scene was deleted for the American audience. In the U.S., to avoid the initial G rating, which had been strongly associated with children's films and might have hindered box office sales, a different scene was used – one depicting Abrahams and Montague arriving at a Cambridge railway station and encountering two First World War veterans who use an obscenity – in order to be given a PG rating. An off-camera retort of "Win It For Israel", heard among the exhortations of Abrahams' fellow students before he takes on the challenge of the Great Court Run, was curiously absent from the final cuts theatrically distributed in the U.S., but can be heard in versions broadcast on such cable outlets as TCM.
Although the film is a period piece, set in the 1920s, the Academy Award-winning original soundtrack composed by Vangelis (credited as Vangelis Papathanassiou) uses a modern 1980s electronic sound, with a strong use of synthesizer and piano among other instruments. This was a departure from earlier period films, which employed sweeping orchestral instrumentals. The title theme of the film has been used in subsequent films and television shows during slow-motion segments.
Vangelis, a Greek-born electronic composer who moved to Paris in the late 1960s, had been living in London since 1974. Director Hugh Hudson had collaborated with him on documentaries and commercials, and was also particularly impressed with his 1979 albums Opera Sauvage and China. David Puttnam also greatly admired Vangelis's body of work, having originally selected his compositions for his previous film Midnight Express. Hudson made the choice for Vangelis and for a modern score: "I knew we needed a piece which was anachronistic to the period to give it a feel of modernity. It was a risky idea but we went with it rather than have a period symphonic score." The soundtrack had a personal significance to Vangelis: after composing the theme he told Puttnam, "My father is a runner, and this is an anthem to him."
Hudson originally wanted Vangelis's 1977 tune "L'Enfant", from his Opera Sauvage album, to be the title theme of the film, and the beach running sequence was actually filmed with "L'Enfant" playing on loudspeakers for the runners to pace to. Vangelis finally convinced Hudson he could create a new and better piece for the film's main theme – and when he played the "Chariots of Fire" theme for Hudson, it was agreed the new tune was unquestionably better. The "L'Enfant" melody still made it into the film: when the athletes reach Paris and enter the stadium, a brass band marches through the field, and first plays a modified, acoustic performance of the piece. Vangelis's electronic "L'Enfant" track eventually was used prominently in the 1982 film The Year of Living Dangerously.
Some pieces of Vangelis's music in the film did not end up on the film's soundtrack album. One of them is the background music to the race Eric Liddell runs in the Scottish highlands. This piece is a version of "Hymne", the original version of which appears on Vangelis's 1979 album, Opéra sauvage. Various versions are also included on Vangelis's compilation albums Themes, Portraits, and Odyssey: The Definitive Collection, though none of these include the version used in the film.
Five lively Gilbert and Sullivan tunes also appear in the soundtrack, and serve as jaunty period music which counterpoints Vangelis's modern electronic score. These are: "He is an Englishman" from H.M.S. Pinafore, "Three Little Maids From School Are We" from The Mikado, "With Catlike Tread" from The Pirates of Penzance, "The Soldiers of Our Queen" from Patience, and "There Lived a King" from The Gondoliers.
The film also incorporates a major traditional work: "Jerusalem", sung by a British choir at the 1978 memorial service for Harold Abrahams. The words, written by William Blake in 1804–08, were set to music by Hubert Parry in 1916 as a celebration of England. This hymn, which has been described as "England's unofficial national anthem", concludes the film and inspired its title. A handful of other traditional anthems and hymns, along with period-appropriate instrumental ballroom-dance music, round out the film's soundtrack.
The film was distributed by 20th Century-Fox and selected for the 1981 Royal Film Performance with its premiere on 30 March 1981 at the Odeon Haymarket before opening to the public the following day. It opened in Edinburgh on 4 April and in Oxford and Cambridge on 5 April with other openings in Manchester and Liverpool before expanding further in May into 20 additional London cinemas and 11 others nationally. It was shown in competition at the 1981 Cannes Film Festival on 20 May.
The film was distributed by The Ladd Company through Warner Bros. in North America. It was released on 25 September 1981 in Los Angeles, California, and at the New York Film Festival; it opened in New York on 26 September 1981 and across the United States on 9 April 1982.
Since its release, Chariots of Fire has received generally positive reviews from critics. As of 2022, the film holds an 83% "Certified Fresh" rating on the review aggregator website Rotten Tomatoes, based on 111 reviews, with a weighted average of 7.7/10. The site's consensus reads: "Decidedly slower and less limber than the Olympic runners at the center of its story, Chariots of Fire nevertheless manages to make effectively stirring use of its spiritual and patriotic themes." On Metacritic, the film has a score of 78 out of 100 based on 19 critics' reviews, indicating "generally favorable reviews".
For its 2012 re-release, Kate Muir of The Times gave the film five stars, writing: "In a time when drug tests and synthetic fibres have replaced gumption and moral fibre, the tale of two runners competing against each other in the 1924 Olympics has a simple, undiminished power. From the opening scene of pale young men racing barefoot along the beach, full of hope and elation, backed by Vangelis's now famous anthem, the film is utterly compelling."
In its first four weeks at the Odeon Haymarket it grossed £106,484. The film was the highest-grossing British film for the year with theatrical rentals of £1,859,480. Its gross of almost $59 million in the United States and Canada made it the highest-grossing film import into the US (i.e. a film without any US input) at the time, surpassing Meatballs' $43 million.
The film was nominated for seven Academy Awards, winning four (including Best Picture). When accepting his Oscar for Best Original Screenplay, Colin Welland famously announced "The British are coming". It was the first film released by Warner Bros. to win Best Picture since My Fair Lady in 1964.
American Film Institute recognition
Other honours
Chariots of Fire is a film about achieving victory through self-sacrifice and moral courage. While the producers intended to make a cinematic work that was historically authentic in spirit, the film was not intended to be historically accurate. Numerous liberties were taken with the actual historical chronology and with the inclusion and exclusion of notable people, and fictional scenes were created for dramatic purpose, plot pacing, and exposition.
The film depicts Abrahams as attending Gonville and Caius College, Cambridge, with three other Olympic athletes: Henry Stallard, Aubrey Montague, and Lord Andrew Lindsay. Abrahams and Stallard were, in fact, students there and competed in the 1924 Olympics. Montague also competed in the Olympics as depicted, but he attended Oxford, not Cambridge. Aubrey Montague sent daily letters to his mother about his time at Oxford and the Olympics; these letters were the basis of Montague's narration in the film.
The character of Lindsay was based partially on Lord Burghley, a significant figure in the history of British athletics. Although Burghley did attend Cambridge, he was not a contemporary of Harold Abrahams, as Abrahams was an undergraduate from 1919 to 1923 and Burghley was at Cambridge from 1923 to 1927. One scene in the film depicts the Burghley-based "Lindsay" as practising hurdles on his estate with full champagne glasses placed on each hurdle – this was something the wealthy Burghley did, although he used matchboxes instead of champagne glasses. The fictional character of Lindsay was created when Douglas Lowe, who was Britain's third athletics gold medallist in the 1924 Olympics, was not willing to be involved with the film.
Another scene in the film recreates the Great Court Run, in which the runners attempt to run around the perimeter of the Great Court at Trinity College, Cambridge in the time it takes the clock to strike 12 at midday. The film shows Abrahams performing the feat for the first time in history. In fact, Abrahams never attempted this race, and at the time of filming the only person on record known to have succeeded was Lord Burghley, in 1927. In Chariots of Fire, Lindsay, who is based on Lord Burghley, runs the Great Court Run with Abrahams in order to spur him on, and crosses the finish line just a moment too late. Since the film's release, the Great Court Run has also been successfully run by Trinity undergraduate Sam Dobin, in October 2007.
In the film, Eric Liddell is tripped up by a Frenchman in the 400-metre event of a Scotland–France international athletic meeting. He recovers, makes up a 20-metre deficit, and wins. This was based on fact; the actual race was the 440 yards at a Triangular Contest meet between Scotland, England, and Ireland at Stoke-on-Trent in England in July 1923. His achievement was remarkable as he had already won the 100- and 220-yard events that day. The film also leaves unmentioned that it was Liddell who introduced Abrahams to Sam Mussabini, though this is alluded to: in the film, Abrahams first encounters Mussabini while watching Liddell race.
Abrahams and Liddell did race against each other twice, but not as depicted in the film, which shows Liddell winning the final of the 100 yards against a shattered Abrahams at the 1923 AAA Championship at Stamford Bridge. In fact, they raced only in a heat of the 220 yards, which Liddell won, five yards ahead of Abrahams, who did not progress to the final. In the 100 yards, Abrahams was eliminated in the heats and did not race against Liddell, who won the finals of both races the next day. They also raced against each other in the 200 m final at the 1924 Olympics, and this was also not shown in the film.
Abrahams' fiancée is misidentified as Sybil Gordon, a soprano with the D'Oyly Carte Opera Company. In fact, in 1936, Abrahams married Sybil Evers, who also performed with D'Oyly Carte, but they did not meet until 1934. Also, in the film, Sybil is depicted as singing the role of Yum-Yum in The Mikado, but neither Gordon nor Evers ever sang that role with D'Oyly Carte, although Evers was known for her charm in singing Peep-Bo, one of the two other "little maids from school". Harold Abrahams' love of and heavy involvement with Gilbert and Sullivan, as depicted in the film, is factual.
Liddell's sister was several years younger than she was portrayed in the film. Her disapproval of Liddell's track career was creative licence; she actually fully supported his sporting work. Jenny Liddell Somerville cooperated fully with the making of the film and has a brief cameo in the Paris Church of Scotland during Liddell's sermon.
At the memorial service for Harold Abrahams, which opens the film, Lord Lindsay mentions that he and Aubrey Montague are the only members of the 1924 Olympic team still alive. However, Montague died in 1948, 30 years before Abrahams' death.
In the film, the 100m bronze medallist is a character called "Tom Watson"; the real medallist was Arthur Porritt of New Zealand, who refused permission for his name to be used in the film, allegedly out of modesty, and his wish was accepted by the film's producers, even though his permission was not necessary. However, the brief back-story given for Watson, who is called up to the New Zealand team from the University of Oxford, substantially matches Porritt's history. With the exception of Porritt, all the runners in the 100m final are identified correctly when they line up for inspection by the Prince of Wales.
Jackson Scholz is depicted as handing Liddell an inspirational Bible-quotation message before the 400 metres final: "It says in the Old Book, 'He that honors me, I will honor.' Good luck." In reality, the note was from members of the British team, and was handed to Liddell before the race by his attending masseur at the team's Paris hotel. For dramatic purposes, screenwriter Welland asked Scholz if he could be depicted handing the note, and Scholz readily agreed, saying "Yes, great, as long as it makes me look good."
The events surrounding Liddell's refusal to race on a Sunday are fictional. In the film, he does not learn that the 100-metre heat is to be held on the Christian Sabbath until he is boarding the boat to Paris. In fact, the schedule was made public several months in advance. Liddell did, however, face immense pressure to run on that Sunday and to compete in the 100 metres, being called before the British Olympic Committee, the Prince of Wales, and other grandees for a grilling, and his refusal to run made headlines around the world.
The decision to change races was, even so, made well before embarking for Paris, and Liddell spent the intervening months training for the 400 metres, an event in which he had previously excelled. It is true, nonetheless, that Liddell's success in the Olympic 400m was largely unexpected.
The film depicts Lindsay, having already won a medal in the 400-metre hurdles, giving up his place in the 400-metre race for Liddell. In fact, Burghley, on whom Lindsay is loosely based, was eliminated in the heats of the 110-metre hurdles (he would go on to win a gold medal in the 400-metre hurdles at the 1928 Olympics), and was not entered for the 400 metres.
The film reverses the order of Abrahams' 100m and 200m races at the Olympics. In reality, after winning the 100 metres race, Abrahams ran the 200 metres but finished last, Jackson Scholz taking the gold medal. In the film, before his triumph in the 100m, Abrahams is shown losing the 200m and being scolded by Mussabini. During the following scene, in which Abrahams speaks with his friend Montague while receiving a massage from Mussabini, a French newspaper clipping shows Scholz and Charley Paddock with a headline stating that the 200 metres was a triumph for the United States. In the same conversation, Abrahams laments getting "beaten out of sight" in the 200. The film thus has Abrahams overcoming the disappointment of losing the 200 by going on to win the 100, a reversal of the real order.
Eric Liddell actually also ran in the 200m race, and finished third, behind Paddock and Scholz. This was the only time in reality that Liddell and Abrahams competed in the same finals race. While their meeting in the 1923 AAA Championship in the film was fictitious, Liddell's record win in that race did spur Abrahams to train even harder.
Abrahams also won a silver medal as the opening runner for the 4 × 100 metres relay team, which is not shown in the film, and Aubrey Montague placed sixth in the steeplechase, as depicted.
Chariots of Fire became a recurring theme in promotions for the 2012 Summer Olympics in London. The film's theme was featured at the opening of the 2012 London New Year's fireworks celebrating the Olympics. The runners who first tested the new Olympic Park were spurred on by the Chariots of Fire theme, and the music was also used as a fanfare for the carriers of the Olympic flame on parts of its route through the UK. The beach-running sequence was also recreated at St Andrews and filmed as part of the Olympic torch relay.
The film's theme was also performed by the London Symphony Orchestra, conducted by Simon Rattle, during the Opening Ceremony of the games; the performance was accompanied by a comedy skit by Rowan Atkinson (as Mr. Bean) which included the opening beach-running footage from the film. The film's theme was again played during each medal ceremony of the 2012 Olympics.
As an official part of the London 2012 Festival celebrations, a new digitally re-mastered version of the film screened in 150 cinemas throughout the UK. The re-release began 13 July 2012, two weeks before the opening ceremony of the London Olympics.
A Blu-ray of the film was released on 10 July 2012 in North America, and was released 16 July 2012 in the UK. The release includes nearly an hour of special features, a CD sampler, and a 32-page "digibook".
A stage adaptation of Chariots of Fire was mounted in honour of the 2012 Olympics. The play, Chariots of Fire, which was adapted by playwright Mike Bartlett and included the Vangelis score, ran from 9 May to 16 June 2012 at London's Hampstead Theatre, and transferred to the Gielgud Theatre in the West End on 23 June, where it ran until 5 January 2013. It starred Jack Lowden as Eric Liddell and James McArdle as Harold Abrahams, and Edward Hall directed. Stage designer Miriam Buether transformed each theatre into an Olympic stadium, and composer Jason Carr wrote additional music. Vangelis also created several new pieces of music for the production.
The stage version for the London Olympic year was the idea of the film's director, Hugh Hudson, who co-produced the play; he stated, "Issues of faith, of refusal to compromise, standing up for one's beliefs, achieving something for the sake of it, with passion, and not just for fame or financial gain, are even more vital today."
Another play, Running for Glory, written by Philip Dart, based on the 1924 Olympics, and focusing on Abrahams and Liddell, toured parts of Britain from 25 February to 1 April 2012. It starred Nicholas Jacobs as Harold Abrahams, and Tom Micklem as Eric Liddell.
https://en.wikipedia.org/wiki/Chariots_of_Fire
Consequentialism

In ethical philosophy, consequentialism is a class of normative, teleological ethical theories that holds that the consequences of one's conduct are the ultimate basis for judgement about the rightness or wrongness of that conduct. Thus, from a consequentialist standpoint, a morally right act (or omission from acting) is one that will produce a good outcome. Consequentialism, along with eudaimonism, falls under the broader category of teleological ethics, a group of views which claim that the moral value of any act consists in its tendency to produce things of intrinsic value. Consequentialists hold in general that an act is right if and only if the act (or in some views, the rule under which it falls) will produce, will probably produce, or is intended to produce, a greater balance of good over evil than any available alternative. Consequentialist theories differ in how they define moral goods, with chief candidates including pleasure, the absence of pain, the satisfaction of one's preferences, and broader notions of the "general good".
Consequentialism is usually contrasted with deontological ethics (or deontology): deontology, in which rules and moral duty are central, derives the rightness or wrongness of one's conduct from the character of the behaviour itself rather than from the outcomes of the conduct. It is also contrasted with virtue ethics, which focuses on the character of the agent rather than on the nature or consequences of the act (or omission) itself, and with pragmatic ethics, which treats morality like science: advancing collectively as a society over the course of many lifetimes, such that any moral criterion is subject to revision.
Some argue that consequentialist theories (such as utilitarianism) and deontological theories (such as Kantian ethics) are not necessarily mutually exclusive. For example, T. M. Scanlon advances the idea that human rights, which are commonly considered a "deontological" concept, can only be justified with reference to the consequences of having those rights. Similarly, Robert Nozick argued for a theory that is mostly consequentialist, but incorporates inviolable "side-constraints" which restrict the sort of actions agents are permitted to do. Derek Parfit argued that in practice, when understood properly, rule consequentialism, Kantian deontology, and contractualism would all end up prescribing the same behavior.
Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do. On the one hand the standard of right and wrong, on the other the chain of causes and effects, are fastened to their throne. They govern us in all we do, in all we say, in all we think...
In summary, Jeremy Bentham states that people are driven by their interests and their fears, but their interests take precedence over their fears; their interests are carried out in accordance with how people view the consequences that might be involved with their interests. Happiness, in this account, is defined as the maximization of pleasure and the minimization of pain. It can be argued that the existence of phenomenal consciousness and "qualia" is required for the experience of pleasure or pain to have ethical significance.
Historically, hedonistic utilitarianism is the paradigmatic example of a consequentialist moral theory. This form of utilitarianism holds that what matters is the aggregate happiness; the happiness of everyone, and not the happiness of any particular person. John Stuart Mill, in his exposition of hedonistic utilitarianism, proposed a hierarchy of pleasures, meaning that the pursuit of certain kinds of pleasure is more highly valued than the pursuit of other pleasures. However, some contemporary utilitarians, such as Peter Singer, are concerned with maximizing the satisfaction of preferences, hence preference utilitarianism. Other contemporary forms of utilitarianism mirror the forms of consequentialism outlined below.
In general, consequentialist theories focus on actions. However, this need not be the case. Rule consequentialism is a theory that is sometimes seen as an attempt to reconcile consequentialism with deontology, or rules-based ethics—and in some cases, this is stated as a criticism of rule consequentialism. Like deontology, rule consequentialism holds that moral behavior involves following certain rules. However, rule consequentialism chooses rules based on the consequences that the selection of those rules has. Rule consequentialism exists in the forms of rule utilitarianism and rule egoism.
Various theorists are split as to whether the rules are the only determinant of moral behavior or not. For example, Robert Nozick held that a certain set of minimal rules, which he calls "side-constraints," are necessary to ensure appropriate actions. There are also differences as to how absolute these moral rules are. Thus, while Nozick's side-constraints are absolute restrictions on behavior, Amartya Sen proposes a theory that recognizes the importance of certain rules, but these rules are not absolute. That is, they may be violated if strict adherence to the rule would lead to much more undesirable consequences.
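The difference between absolute side-constraints and defeasible rules can be restated as two different permissibility tests. The sketch below is only a schematic illustration: the utility values and the threshold are invented for this example, and nothing here is drawn from Nozick's or Sen's actual texts.

```python
# Schematic contrast between absolute and defeasible moral rules.
# All numeric values are invented for illustration.

def absolute_constraint_permits(violates_constraint: bool) -> bool:
    """Nozick-style side-constraint: violating it is never permissible,
    no matter how good the consequences would be."""
    return not violates_constraint

def defeasible_rule_permits(value_if_followed: float, value_if_broken: float,
                            threshold: float = 50.0) -> bool:
    """Sen-style non-absolute rule: breaking it is permitted only when
    strict adherence would be drastically worse than violation."""
    return value_if_broken - value_if_followed > threshold

print(absolute_constraint_permits(violates_constraint=True))           # False, always
print(defeasible_rule_permits(value_if_followed=-100.0,
                              value_if_broken=10.0))                    # True: adherence is catastrophic
print(defeasible_rule_permits(value_if_followed=0.0,
                              value_if_broken=20.0))                    # False: the gain is too small
```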
One of the most common objections to rule-consequentialism is that it is incoherent, because it is based on the consequentialist principle that what we should be concerned with is maximizing the good, but then it tells us not to act to maximize the good, but to follow rules (even in cases where we know that breaking the rule could produce better results).
In Ideal Code, Real World, Brad Hooker avoids this objection by not basing his form of rule-consequentialism on the ideal of maximizing the good. He writes:
[T]he best argument for rule-consequentialism is not that it derives from an overarching commitment to maximise the good. The best argument for rule-consequentialism is that it does a better job than its rivals of matching and tying together our moral convictions, as well as offering us help with our moral disagreements and uncertainties.
Derek Parfit described Hooker's book as the "best statement and defence, so far, of one of the most important moral theories."
It is the business of the benevolent man to seek to promote what is beneficial to the world and to eliminate what is harmful, and to provide a model for the world. What benefits he will carry out; what does not benefit men he will leave alone.
State consequentialism, also known as Mohist consequentialism, is an ethical theory that evaluates the moral worth of an action based on how much it contributes to the welfare of a state. According to the Stanford Encyclopedia of Philosophy, Mohist consequentialism, dating back to the 5th century BCE, is the "world's earliest form of consequentialism, a remarkably sophisticated version based on a plurality of intrinsic goods taken as constitutive of human welfare."
Unlike utilitarianism, which views utility as the sole moral good, "the basic goods in Mohist consequentialist thinking are...order, material wealth, and increase in population." During the time of Mozi, war and famine were common, and population growth was seen as a moral necessity for a harmonious society. The "material wealth" of Mohist consequentialism refers to basic needs, like shelter and clothing; and "order" refers to Mozi's stance against warfare and violence, which he viewed as pointless and a threat to social stability. In The Cambridge History of Ancient China, Stanford sinologist David Shepherd Nivison writes that the moral goods of Mohism "are interrelated: more basic wealth, then more reproduction; more people, then more production and wealth...if people have plenty, they would be good, filial, kind, and so on unproblematically."
The Mohists believed that morality is based on "promoting the benefit of all under heaven and eliminating harm to all under heaven." In contrast to Jeremy Bentham's views, state consequentialism is not utilitarian because it is not hedonistic or individualistic: the importance of outcomes that are good for the community outweighs the importance of individual pleasure and pain. The term state consequentialism has also been applied to the political philosophy of the Confucian philosopher Xunzi. On the other hand, "legalist" Han Fei "is motivated almost totally from the ruler's point of view."
Ethical egoism can be understood as a consequentialist theory according to which the consequences for the individual agent are taken to matter more than any other result. Thus, egoism will prescribe actions that may be beneficial, detrimental, or neutral to the welfare of others. Some, like Henry Sidgwick, argue that a certain degree of egoism promotes the general welfare of society for two reasons: because individuals know how to please themselves best, and because if everyone were an austere altruist then general welfare would inevitably decrease.
Ethical altruism can be seen as a consequentialist theory which prescribes that an individual take actions that have the best consequences for everyone except for himself. This was advocated by Auguste Comte, who coined the term altruism, and whose ethics can be summed up in the phrase "Live for others."
The two-level approach involves engaging in critical reasoning and considering all the possible ramifications of one's actions before making an ethical decision, but reverting to generally reliable moral rules when one is not in a position to stand back and examine the dilemma as a whole. In practice, this equates to adhering to rule consequentialism when one can only reason on an intuitive level, and to act consequentialism when in a position to stand back and reason on a more critical level.
This position can be described as a reconciliation between act consequentialism—in which the morality of an action is determined by that action's effects—and rule consequentialism—in which moral behavior is derived from following rules that lead to positive outcomes.
The two-level approach to consequentialism is most often associated with R. M. Hare and Peter Singer.
Another version of consequentialism is motive consequentialism, which looks at whether the state of affairs that results from the motive to choose an action is better than, or at least as good as, each alternative state of affairs that would have resulted from alternative actions. This version gives relevance to the motive of an act and links it to its consequences. An act therefore cannot be wrong if the decision to act was based on a right motive. A possible inference is that one cannot be blamed for mistaken judgments if the motivation was to do good.
Most consequentialist theories focus on promoting some sort of good consequences. However, negative utilitarianism lays out a consequentialist theory that focuses solely on minimizing bad consequences.
One major difference between these two approaches is the agent's responsibility. Positive consequentialism demands that we bring about good states of affairs, whereas negative consequentialism requires that we avoid bad ones. Stronger versions of negative consequentialism require active intervention to prevent bad states of affairs and to ameliorate existing harm. In weaker versions, simple forbearance from acts tending to harm others is sufficient. An example of this is the slippery-slope argument, which encourages others to avoid a specified act on the grounds that it may ultimately lead to undesirable consequences.
Often "negative" consequentialist theories assert that reducing suffering is more important than increasing pleasure. Karl Popper, for example, claimed that "from the moral point of view, pain cannot be outweighed by pleasure." (While Popper is not a consequentialist per se, this is taken as a classic statement of negative utilitarianism.) When considering a theory of justice, negative consequentialists may use a statewide or global-reaching principle: the reduction of suffering (for the disadvantaged) is more valuable than increased pleasure (for the affluent or luxurious).
Since pure consequentialism holds that an action is to be judged solely by its result, most consequentialist theories hold that a deliberate action is no different from a deliberate decision not to act. This contrasts with the "acts and omissions doctrine", which is upheld by some medical ethicists and some religions: it asserts there is a significant moral distinction between acts and deliberate non-actions which lead to the same outcome. This contrast is brought out in issues such as voluntary euthanasia.
According to consequentialism, the normative status of an action depends on its consequences. The consequences of an agent's actions may include other actions by that agent. Actualism and possibilism disagree on how later possible actions impact the normative status of the current action by the same agent. Actualists assert that, in assessing the value of an alternative, only what the agent would actually do later is relevant. Possibilists, on the other hand, hold that we should also take into account what the agent could do, even if she would not do it.
For example, assume that Gifre has the choice between two alternatives, eating a cookie or not eating anything. Having eaten the first cookie, Gifre could stop eating cookies, which is the best alternative. But after having tasted one cookie, Gifre would freely decide to continue eating cookies until the whole bag is finished, which would result in a terrible stomach ache and would be the worst alternative. Not eating any cookies at all, on the other hand, would be the second-best alternative. Now the question is: should Gifre eat the first cookie or not? Actualists are only concerned with the actual consequences. According to them, Gifre should not eat any cookies at all since it is better than the alternative leading to a stomach ache. Possibilists, however, contend that the best possible course of action involves eating the first cookie and this is therefore what Gifre should do.
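The disagreement can be restated as two different scoring rules over the same decision tree. The following sketch is purely illustrative: the utility numbers and the data structure are invented, and only the contrast between the two evaluation functions reflects the text.

```python
# Toy model of the Gifre cookie case (all utility values invented).
# Each current option lists its possible continuations with utilities,
# plus the continuation the agent WOULD actually take afterwards.

options = {
    "eat the first cookie": {
        "continuations": {"stop after one": 10, "finish the bag": -20},
        "would_actually": "finish the bag",  # Gifre's imperfect self-control
    },
    "eat nothing": {
        "continuations": {"do nothing": 5},
        "would_actually": "do nothing",
    },
}

def actualist_value(option: dict) -> int:
    """Score an option by what the agent would actually do next."""
    return option["continuations"][option["would_actually"]]

def possibilist_value(option: dict) -> int:
    """Score an option by the best continuation the agent could choose."""
    return max(option["continuations"].values())

for name, opt in options.items():
    print(f"{name}: actualist={actualist_value(opt)}, "
          f"possibilist={possibilist_value(opt)}")

# The actualist ranks "eat nothing" (5) above "eat the first cookie" (-20);
# the possibilist ranks "eat the first cookie" (10) above "eat nothing" (5).
```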
One counterintuitive consequence of actualism is that agents can avoid moral obligations simply by having an imperfect moral character. For example, a lazy person might justify rejecting a request to help a friend by arguing that, due to her lazy character, she would not have done the work anyway, even if she had accepted the request. By rejecting the offer right away, she managed at least not to waste anyone's time. Actualists might even consider her behavior praiseworthy since she did what, according to actualism, she ought to have done. This seems to be a very easy way to "get off the hook" that is avoided by possibilism. But possibilism has to face the objection that in some cases it sanctions and even recommends what actually leads to the worst outcome.
Douglas W. Portmore has suggested that these and other problems of actualism and possibilism can be avoided by constraining what counts as a genuine alternative for the agent. On his view, it is a requirement that the agent has rational control over the event in question. For example, eating only one cookie and stopping afterward is an option for Gifre only if she has the rational capacity to repress her temptation to continue eating. If the temptation is irrepressible, then this course of action is not considered to be an option and is therefore not relevant when assessing what the best alternative is. Portmore suggests that, given this adjustment, we should prefer a view very closely associated with possibilism called maximalism.
One important characteristic of many normative moral theories, such as consequentialism, is the ability to produce practical moral judgements. At the very least, any moral theory needs to define the standpoint from which the goodness of the consequences is to be determined. What is primarily at stake here is the responsibility of the agent.
One common tactic among consequentialists, particularly those committed to an altruistic (selfless) account of consequentialism, is to employ an ideal, neutral observer from whose perspective moral judgements can be made. John Rawls, a critic of utilitarianism, argues that utilitarianism, in common with other forms of consequentialism, relies on the perspective of such an ideal observer. The particular characteristics of this ideal observer can vary from an omniscient observer, who would grasp all the consequences of any action, to an ideally informed observer, who knows as much as could reasonably be expected, but not necessarily all the circumstances or all the possible consequences. Consequentialist theories that adopt this paradigm hold that right action is the action that will bring about the best consequences from this ideal observer's perspective.
In practice, it is very difficult, and at times arguably impossible, to adopt the point of view of an ideal observer. Individual moral agents do not know everything about their particular situations, and thus do not know all the possible consequences of their potential actions. For this reason, some theorists have argued that consequentialist theories can only require agents to choose the best action in line with what they know about the situation. However, if this approach is naïvely adopted, then moral agents who, for example, recklessly fail to reflect on their situation, and act in a way that brings about terrible results, could be said to be acting in a morally justifiable way. Acting in a situation without first informing oneself of the circumstances of the situation can lead to even the most well-intended actions yielding miserable consequences. As a result, it could be argued that there is a moral imperative for agents to inform themselves as much as possible about a situation before judging the appropriate course of action. This imperative, of course, is derived from consequential thinking: a better-informed agent is able to bring about better consequences.
Moral action always has consequences for certain people or things. Varieties of consequentialism can be differentiated by the beneficiary of the good consequences. That is, one might ask "Consequences for whom?"
A fundamental distinction can be drawn between theories which require that agents act for ends perhaps disconnected from their own interests and drives, and theories which permit that agents act for ends in which they have some personal interest or motivation. These are called "agent-neutral" and "agent-focused" theories respectively.
Agent-neutral consequentialism ignores the specific value a state of affairs has for any particular agent. Thus, in an agent-neutral theory, an actor's personal goals do not count any more than anyone else's goals in evaluating what action the actor should take. Agent-focused consequentialism, on the other hand, focuses on the particular needs of the moral agent. Thus, in an agent-focused account, such as one that Peter Railton outlines, the agent might be concerned with the general welfare, but the agent is more concerned with the immediate welfare of herself and her friends and family.
These two approaches could be reconciled by acknowledging the tension between an agent's interests as an individual and as a member of various groups, and seeking to somehow optimize among all of these interests. For example, it may be meaningful to speak of an action as being good for someone as an individual, but bad for them as a citizen of their town.
Many consequentialist theories may seem primarily concerned with human beings and their relationships with other human beings. However, some philosophers argue that we should not limit our ethical consideration to the interests of human beings alone. Jeremy Bentham, who is regarded as the founder of utilitarianism, argues that animals can experience pleasure and pain, thus demanding that 'non-human animals' should be a serious object of moral concern.
More recently, Peter Singer has argued that it is unreasonable that we do not give equal consideration to the interests of animals as to those of human beings when we choose the way we are to treat them. Such equal consideration does not necessarily imply identical treatment of humans and non-humans, any more than it necessarily implies identical treatment of all humans.
One way to divide various consequentialisms is by the types of consequences that are taken to matter most, that is, which consequences count as good states of affairs. According to utilitarianism, a good action is one that results in an increase in pleasure, and the best action is one that results in the most pleasure for the greatest number. Closely related is eudaimonic consequentialism, according to which a full, flourishing life, which may or may not be the same as enjoying a great deal of pleasure, is the ultimate aim. Similarly, one might adopt an aesthetic consequentialism, in which the ultimate aim is to produce beauty. However, one might fix on non-psychological goods as the relevant effect. Thus, one might pursue an increase in material equality or political liberty instead of something like the more ephemeral "pleasure". Other theories adopt a package of several goods, all to be promoted equally. Because the consequentialist approach assumes that the outcomes of a moral decision can be quantified in terms of "goodness" or "badness", or at least ranked in order of preference, it is especially well suited to a probabilistic, decision-theoretic treatment.
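Since the approach presupposes that outcomes can be scored or at least ranked, a consequentialist decision rule can be phrased as an expected-value calculation. The sketch below is a generic illustration with invented probabilities and goodness scores, not a method from any particular theorist.

```python
# Illustrative expected-goodness comparison (all numbers invented).
# Each action maps to (probability, goodness) pairs over its possible outcomes.

actions = {
    "keep promise": [(0.9, 8.0), (0.1, -2.0)],     # reliably good, small risk
    "break promise": [(0.5, 12.0), (0.5, -15.0)],  # gamble on a larger gain
}

def expected_goodness(outcomes):
    """Probability-weighted sum of outcome scores."""
    return sum(p * g for p, g in outcomes)

for name, outcomes in actions.items():
    print(f"{name}: expected goodness = {expected_goodness(outcomes):+.1f}")

# A consequentialist decision rule: pick the action with the best expectation.
print("choice:", max(actions, key=lambda a: expected_goodness(actions[a])))
```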
Consequentialism can also be contrasted with aretaic moral theories such as virtue ethics. Whereas consequentialist theories posit that consequences of action should be the primary focus of our thinking about ethics, virtue ethics insists that it is the character rather than the consequences of actions that should be the focal point. Some virtue ethicists hold that consequentialist theories totally disregard the development and importance of moral character. For example, Philippa Foot argues that consequences in themselves have no ethical content, unless it has been provided by a virtue such as benevolence.
However, consequentialism and virtue ethics need not be entirely antagonistic. Iain King has developed an approach that reconciles the two schools. Other consequentialists consider effects on the character of people involved in an action when assessing consequence. Similarly, a consequentialist theory may aim at the maximization of a particular virtue or set of virtues. Finally, following Foot's lead, one might adopt a sort of consequentialism that argues that virtuous activity ultimately produces the best consequences.
The ultimate end is a concept in the moral philosophy of Max Weber, in which individuals act in a faithful, rather than rational, manner.
We must be clear about the fact that all ethically oriented conduct may be guided by one of two fundamentally differing and irreconcilably opposed maxims: conduct can be oriented to an ethic of ultimate ends or to an ethic of responsibility. [...] There is an abysmal contrast between conduct that follows the maxim of an ethic of ultimate ends — that is in religious terms, "the Christian does rightly and leaves the results with the Lord" — and conduct that follows the maxim of an ethic of responsibility, in which case one has to give an account of the foreseeable results of one's action.
Teleological ethics (Greek: telos, 'end, purpose' + logos, 'science') is the broader class of views in moral philosophy under which consequentialism falls. In general, proponents of teleological ethics argue that the moral value of any act consists in its tendency to produce things of intrinsic value, meaning that an act is right if and only if it, or the rule under which it falls, produces, will probably produce, or is intended to produce, a greater balance of good over evil than any alternative act. This concept is exemplified by the famous aphorism "the end justifies the means", variously attributed to Machiavelli or Ovid: if a goal is morally important enough, any method of achieving it is acceptable.
Teleological theories differ among themselves on the nature of the particular end that actions ought to promote. The two major families of views in teleological ethics are virtue ethics and consequentialism. Teleological ethical theories are often discussed in opposition to deontological ethical theories, which hold that acts themselves are inherently good or bad, rather than good or bad because of extrinsic factors (such as the act's consequences or the moral character of the person who acts).
The term consequentialism was coined by G. E. M. Anscombe in her essay "Modern Moral Philosophy" in 1958, to describe what she saw as the central error of certain moral theories, such as those propounded by Mill and Sidgwick.
The phrase and concept of "the end justifies the means" are at least as old as the first century BC. Ovid wrote in his Heroides that Exitus acta probat ("The result justifies the deed").
G. E. M. Anscombe objects to the consequentialism of Sidgwick on the grounds that the moral worth of an action is premised on the predictive capabilities of the individual, relieving them of the responsibility for the "badness" of an act should they "make out a case for not having foreseen" negative consequences.
The future amplification of the effects of small decisions makes it difficult to predict the ethical value of consequences, even though most would agree that moral responsibility attaches only to predictable consequences.
Bernard Williams has argued that consequentialism is alienating because it requires moral agents to put too much distance between themselves and their own projects and commitments. Williams argues that consequentialism requires moral agents to take a strictly impersonal view of all actions, since it is only the consequences, and not who produces them, that are said to matter. He maintains that this demands too much of moral agents, since (he claims) consequentialism demands that they be willing to sacrifice any and all personal projects and commitments in any given circumstance in order to pursue the most beneficent course of action possible. He argues further that consequentialism fails to make sense of intuitions that it can matter whether or not someone is personally the author of a particular consequence. For example, participating in a crime can matter, even if the crime would have been committed anyway, or would even have been worse, without the agent's participation.
Some consequentialists—most notably Peter Railton—have attempted to develop a form of consequentialism that acknowledges and avoids the objections raised by Williams. Railton argues that Williams's criticisms can be avoided by adopting a form of consequentialism in which moral decisions are to be determined by the sort of life that they express. On his account, the agent should choose the sort of life that will, on the whole, produce the best overall effects.
5,735 | Conscription | Conscription (also called the draft in the United States) is the state-mandated enlistment of people in a national service, mainly a military service. Conscription dates back to antiquity and it continues in some countries to the present day under various names. The modern system of near-universal national conscription for young men dates to the French Revolution in the 1790s, where it became the basis of a very large and powerful military. Most European nations later copied the system in peacetime, so that men at a certain age would serve 1–8 years on active duty and then transfer to the reserve force.
Conscription is controversial for a range of reasons, including conscientious objection to military engagements on religious or philosophical grounds; political objection, for example to service for a disliked government or unpopular war; sexism, in that historically men have been subject to the draft in most cases; and ideological objection, for example, to a perceived violation of individual rights. Those conscripted may evade service, sometimes by leaving the country and seeking asylum in another country. Some selection systems accommodate these attitudes by providing alternative service outside combat-operations roles or even outside the military, such as siviilipalvelus (alternative civil service) in Finland and Zivildienst (compulsory community service) in Austria, Germany and Switzerland. Several countries conscript male soldiers not only for the armed forces but also for paramilitary agencies, which are dedicated to police-like domestic-only service such as internal troops, border guards or non-combat rescue duties like civil defence.
As of 2023, many states no longer conscript their citizens, relying instead upon professional militaries with volunteers. The ability to rely on such an arrangement, however, presupposes some degree of predictability with regard to both war-fighting requirements and the scope of hostilities. Many states that have abolished conscription still, therefore, reserve the power to resume conscription during wartime or times of crisis. States involved in wars or interstate rivalries are most likely to implement conscription, and democracies are less likely than autocracies to implement conscription. With a few exceptions, such as Singapore and Egypt, former British colonies are less likely to have conscription, as they are influenced by British anti-conscription norms that can be traced back to the English Civil War; the United Kingdom abolished conscription in 1960.
Around the reign of Hammurabi (1791–1750 BC), the Babylonian Empire used a system of conscription called Ilkum. Under that system those eligible were required to serve in the royal army in time of war. During times of peace they were instead required to provide labour for other activities of the state. In return for this service, people subject to it gained the right to hold land. It is possible that this right was not to hold land per se but specific land supplied by the state.
Various forms of avoiding military service are recorded. While it was outlawed by the Code of Hammurabi, the hiring of substitutes appears to have been practiced both before and after the creation of the code. Later records show that Ilkum commitments came to be regularly traded. In other places, people simply left their towns to avoid their Ilkum service. Another option was to sell Ilkum lands and the commitments along with them. With the exception of a few exempted classes, this was forbidden by the Code of Hammurabi.
Under the feudal laws on the European continent, landowners in the medieval period enforced a system whereby all peasants, freemen commoners and noblemen aged 15 to 60, living in the countryside or in urban centers, were summoned for military duty when required by either the king or the local lord, bringing along weapons and armor according to their wealth. These levies fought as footmen, sergeants, and men-at-arms under local superiors appointed by the king or the local lord, as under the arrière-ban in France. The arrière-ban denoted a general levy, in which all able-bodied males aged 15 to 60 living in the Kingdom of France were summoned to go to war by the King (or the constable and the marshals). Men were summoned by the bailiff (or the sénéchal in the south). Bailiffs were military and political administrators installed by the King to steward and govern a specific area of a province following the king's commands and orders. The men summoned in this way were then placed under the lieutenant, who was the King's representative and military governor over an entire province comprising many bailiwicks, seneschalties and castellanies. All men, from the richest noble to the poorest commoner, were summoned under the arrière-ban, and they were supposed to present themselves to the King or his officials.
In medieval Scandinavia the leiðangr (Old Norse), leidang (Norwegian), leding, (Danish), ledung (Swedish), lichting (Dutch), expeditio (Latin) or sometimes leþing (Old English), was a levy of free farmers conscripted into coastal fleets for seasonal excursions and in defence of the realm.
The bulk of the Anglo-Saxon English army, called the fyrd, was composed of part-time English soldiers drawn from the freemen of each county. In the 690s laws of Ine of Wessex, three levels of fines are imposed on different social classes for neglecting military service.
Some modern writers claim military service in Europe was restricted to the landowning minor nobility. These thegns were the land-holding aristocracy of the time and were required to serve with their own armour and weapons for a certain number of days each year. The historian David Sturdy has cautioned about regarding the fyrd as a precursor to a modern national army composed of all ranks of society, describing it as a "ridiculous fantasy":
The persistent old belief that peasants and small farmers gathered to form a national army or fyrd is a strange delusion dreamt up by antiquarians in the late eighteenth or early nineteenth centuries to justify universal military conscription.
In feudal Japan the shogun decree of 1393 exempted money lenders from religious or military levies, in return for a yearly tax. The Ōnin War weakened the shogun and levies were imposed again on money lenders. This overlordism was arbitrary and unpredictable for commoners. While the money lenders were not poor, several overlords tapped them for income. Levies became necessary for the survival of the overlord, allowing the lord to impose taxes at will. These levies included tansen tax on agricultural land for ceremonial expenses. Yakubu takumai tax was raised on all land to rebuild the Ise Grand Shrine, and munabechisen tax was imposed on all houses. At the time, land in Kyoto was acquired by commoners through usury and in 1422 the shogun threatened to repossess the land of those commoners who failed to pay their levies.
The system of military slaves was widely used in the Middle East, beginning with the creation of the corps of Turkic slave-soldiers (ghulams or mamluks) by the Abbasid caliph al-Mu'tasim in the 820s and 830s. The Turkish troops soon came to dominate the government, establishing a pattern throughout the Islamic world of a ruling military class, often separated by ethnicity, culture and even religion from the mass of the population, a paradigm that found its apogee in the Mamluks of Egypt and the Janissary corps of the Ottoman Empire, institutions that survived until the early 19th century.
In the middle of the 14th century, Ottoman Sultan Murad I developed personal troops loyal to him, a slave army called the Kapıkulu. The new force was built by taking Christian children from newly conquered lands, especially from the far areas of his empire, in a system known as the devşirme (translated "gathering" or "converting"). The captive children were forced to convert to Islam. The Sultans had the young boys trained over several years. Those who showed special promise in fighting skills were trained in advanced warrior skills, put into the sultan's personal service, and turned into the Janissaries, the elite branch of the Kapıkulu. A number of distinguished military commanders of the Ottomans, and most of the imperial administrators and upper-level officials of the Empire, such as Pargalı İbrahim Pasha and Sokollu Mehmet Paşa, were recruited in this way. By 1609, the Sultan's Kapıkulu forces had increased to about 100,000.
In later years, Sultans turned to the Barbary Pirates to supply their Janissary corps. The pirates' attacks on ships off the coast of Africa or in the Mediterranean, and their subsequent capture of able-bodied men for ransom or sale, provided some captives for the Sultan's system. Starting in the 17th century, Christian families living under Ottoman rule began to submit their sons to the Kapıkulu system willingly, as they saw this as a potentially invaluable career opportunity for their children. Eventually the Sultans turned to foreign volunteers from the warrior clans of Circassians in southern Russia to fill the Janissary armies. As the system as a whole began to break down, the loyalty of the Janissaries became increasingly suspect, and Mahmud II forcibly disbanded the Janissary corps in 1826.
Similar to the Janissaries in origin and means of development were the Mamluks of Egypt in the Middle Ages. The Mamluks were usually captive non-Muslim Iranian and Turkish children who had been kidnapped or bought as slaves from the Barbary coasts. The Egyptians assimilated and trained the boys and young men to become Islamic soldiers who served the Muslim caliphs and the Ayyubid sultans during the Middle Ages. The first mamluks served the Abbasid caliphs in 9th-century Baghdad. Over time they became a powerful military caste. On more than one occasion, they seized power, for example, ruling Egypt from 1250 to 1517.
From 1250 Egypt had been ruled by the Bahri dynasty of Kipchak origin. Slaves from the Caucasus served in the army and formed an elite corps of troops. They eventually revolted in Egypt to form the Burgi dynasty. The Mamluks' excellent fighting abilities, massed Islamic armies, and overwhelming numbers succeeded in overcoming the Christian Crusader fortresses in the Holy Land. The Mamluks also mounted the most successful defence against the Mongol Ilkhanate of Persia and Iraq, preventing it from entering Egypt.
On the western coast of Africa, Berber Muslims captured non-Muslims to put them to work as laborers. They generally converted the younger people to Islam, and many became quite assimilated. In Morocco, the Berbers looked south rather than north. The Moroccan Sultan Moulay Ismail, called "the Bloodthirsty" (1672–1727), employed a corps of 150,000 black slaves, called his Black Guard. He used them to coerce the country into submission.
Modern conscription, the massed military enlistment of national citizens (levée en masse), was devised during the French Revolution, to enable the Republic to defend itself from the attacks of European monarchies. Deputy Jean-Baptiste Jourdan gave his name to the 5 September 1798 Act, whose first article stated: "Any Frenchman is a soldier and owes himself to the defense of the nation." It enabled the creation of the Grande Armée, what Napoleon Bonaparte called "the nation in arms", which overwhelmed European professional armies that often numbered only into the low tens of thousands. More than 2.6 million men were inducted into the French military in this way between the years 1800 and 1813.
The defeat of the Prussian Army in particular shocked the Prussian establishment, which had believed it was invincible after the victories of Frederick the Great. The Prussians were used to relying on superior organization and tactical factors such as order of battle to focus superior troops against inferior ones. Given approximately equivalent forces, as was generally the case with professional armies, these factors were of considerable importance. However, they became considerably less important when the Prussian armies faced Napoleon's forces, which outnumbered their own in some cases by more than ten to one. Scharnhorst advocated adopting the levée en masse, the military conscription used by France. The Krümpersystem was the beginning of short-term compulsory service in Prussia, as opposed to the long-term conscription previously used.
In the Russian Empire, the military service time "owed" by serfs was 25 years at the beginning of the 19th century. In 1834 it was decreased to 20 years. The recruits were to be not younger than 17 and not older than 35. In 1874 Russia introduced universal conscription in the modern pattern, an innovation made possible only by the abolition of serfdom in 1861. The new military law decreed that all male Russian subjects, when they reached the age of 20, were eligible to serve in the military for six years.
In the decades prior to World War I universal conscription along broadly Prussian lines became the norm for European armies, and those modeled on them. By 1914 the only substantial armies still completely dependent on voluntary enlistment were those of Britain and the United States. Some colonial powers such as France reserved their conscript armies for home service while maintaining professional units for overseas duties.
The range of eligible ages for conscripting was expanded to meet national demand during the World Wars. In the United States, the Selective Service System drafted men for World War I initially in an age range from 21 to 30 but expanded its eligibility in 1918 to an age range of 18 to 45. In the case of a widespread mobilization of forces where service includes homefront defense, ages of conscripts may range much higher, with the oldest conscripts serving in roles requiring lesser mobility.
Expanded-age conscription was common during the Second World War: in Britain, it was commonly known as "call-up" and extended to age 51. Nazi Germany termed it Volkssturm ("People's Storm") and included children as young as 16 and men as old as 60. During the Second World War, both Britain and the Soviet Union conscripted women. The United States was on the verge of drafting women into the Nurse Corps because it anticipated it would need the extra personnel for its planned invasion of Japan. However, the Japanese surrendered and the idea was abandoned.
During the Great Patriotic War, the Red Army conscripted nearly 30 million men.
Men's rights activists, feminists, and opponents of discrimination against men have criticized military conscription, or compulsory military service, as sexist. The National Coalition for Men, a men's rights group, sued the US Selective Service System in 2019, leading to it being declared unconstitutional by a US Federal Judge. The federal district judge's opinion was unanimously overturned on appeal to the U.S. Court of Appeals for the 5th Circuit. In September 2021, the House of Representatives passed the annual Defense Authorization Act, which included an amendment that states that "all Americans between the ages of 18 and 25 must register for selective service." This amendment omitted the word "male," which would have extended a potential draft to women; however, the amendment was removed before the National Defense Authorization Act was passed.
Feminists have argued, first, that military conscription is sexist because wars serve the interests of what they view as the patriarchy; second, that the military is a sexist institution and that conscripts are therefore indoctrinated into sexism; and third, that conscription of men normalizes violence by men as socially acceptable. Feminists have been organizers and participants in resistance to conscription in several countries.
Conscription has also been criticized on the ground that, historically, only men have been subjected to conscription. Men who opt out or are deemed unfit for military service must often perform alternative service, such as Zivildienst in Austria, Germany and Switzerland, or pay extra taxes, whereas women do not have these obligations. In the US, men who do not register with the Selective Service cannot apply for citizenship, receive federal financial aid, grants or loans, be employed by the federal government, be admitted to public colleges or universities, or, in some states, obtain a driver's license.
Many American libertarians oppose conscription and call for the abolition of the Selective Service System, arguing that impressment of individuals into the armed forces amounts to involuntary servitude. For example, Ron Paul, a former U.S. Libertarian Party presidential nominee, has said that conscription "is wrongly associated with patriotism, when it really represents slavery and involuntary servitude". The philosopher Ayn Rand opposed conscription, opining that "of all the statist violations of individual rights in a mixed economy, the military draft is the worst. It is an abrogation of rights. It negates man's fundamental right—the right to life—and establishes the fundamental principle of statism: that a man's life belongs to the state, and the state may claim it by compelling him to sacrifice it in battle."
In 1917, a number of radicals and anarchists, including Emma Goldman, challenged the new draft law in federal court, arguing that it was a violation of the Thirteenth Amendment's prohibition against slavery and involuntary servitude. However, the Supreme Court unanimously upheld the constitutionality of the draft act in the case of Arver v. United States on 7 January 1918, on the ground that the Constitution gives Congress the power to declare war and to raise and support armies. The Court also relied on the principle of the reciprocal rights and duties of citizens. "It may not be doubted that the very conception of a just government in its duty to the citizen includes the reciprocal obligation of the citizen to render military service in case of need and the right to compel."
It can be argued that, in cost-benefit terms, conscription during peacetime is not worthwhile. Months or years of service performed by the most fit and capable subtract from the productivity of the economy; add to this the cost of training them, and in some countries paying them. Compared to these extensive costs, some would argue there is very little benefit; if a war ever broke out, conscription and basic training could be completed quickly, and in any case there is little threat of war in most countries with conscription. In the United States, every male resident is required by law to register with the Selective Service System within 30 days following his 18th birthday and be available for a draft; this is often accomplished automatically by a motor vehicle department during licensing or by voter registration.
According to Milton Friedman the cost of conscription can be related to the parable of the broken window in anti-draft arguments. The cost of the work, military service, does not disappear even if no salary is paid. The work effort of the conscripts is effectively wasted, as an unwilling workforce is extremely inefficient. The impact is especially severe in wartime, when civilian professionals are forced to fight as amateur soldiers. Not only is the work effort of the conscripts wasted and productivity lost, but professionally skilled conscripts are also difficult to replace in the civilian workforce. Every soldier conscripted in the army is taken away from his civilian work, and away from contributing to the economy which funds the military. This may be less a problem in an agrarian or pre-industrialized state where the level of education is generally low, and where a worker is easily replaced by another. However, this is potentially more costly in a post-industrial society where educational levels are high and where the workforce is sophisticated and a replacement for a conscripted specialist is difficult to find. Even more dire economic consequences result if the professional conscripted as an amateur soldier is killed or maimed for life; his work effort and productivity are lost.
Jean-Jacques Rousseau argued vehemently against professional armies because he believed that it was the right and privilege of every citizen to participate in the defense of the whole society, and that it was a mark of moral decline to leave this business to professionals. He based his belief upon the development of the Roman Republic, which came to an end at the same time as the Roman Army changed from a conscript to a professional force. Similarly, Aristotle linked the division of armed service among the populace intimately with the political order of the state. Niccolò Machiavelli argued strongly for conscription and saw the professional armies, made up of mercenary units, as the cause of the failure of societal unity in Italy.
Other proponents, such as William James, consider both mandatory military and national service as ways of instilling maturity in young adults. Some proponents, such as Jonathan Alter and Mickey Kaus, support a draft in order to reinforce social equality, create social consciousness, break down class divisions and allow young adults to immerse themselves in public enterprise. Charles Rangel called for the reinstatement of the draft during the Iraq War not because he seriously expected it to be adopted but to stress how the socioeconomic restratification meant that very few children of upper-class Americans served in the all-volunteer American armed forces.
It is estimated by the British military that in a professional military, a company deployed for active duty in peacekeeping corresponds to three inactive companies at home. Salaries for each are paid from the military budget. In contrast, volunteers from a trained reserve are in their civilian jobs when they are not deployed.
One study found that, for less-educated young Portuguese men born in 1967, participating in conscription was more financially beneficial than competing in the highly competitive job market with men of the same age who had continued to higher education.
Throughout history, women have only been conscripted to join armed forces in a few countries, in contrast to the universal practice of conscription from among the male population. The traditional view has been that military service is a test of manhood and a rite of passage from boyhood into manhood. In recent years, this position has been challenged on the basis that it violates gender equality, and some countries, especially in Europe, have extended conscription obligations to women.
Nations that at present actively draft women into military service are Bolivia, Chad, Eritrea, Israel, Mozambique, Norway, North Korea and Sweden.
Norway introduced female conscription in 2015, making it the first NATO member to have a legally compulsory national service for both men and women. In practice only motivated volunteers are selected to join the army in Norway.
Sweden introduced female conscription in 2010, but it was not activated until 2017. This made Sweden the second nation in Europe to draft women, and the second in the world to draft women on the same formal terms as men.
Israel has universal female conscription, although it is possible to avoid service by claiming a religious exemption, and over a third of Israeli women do so.
Finland introduced voluntary female conscription in 1995, giving women between the ages of 18 and 29 an option to complete their military service alongside men.
Sudanese law allows for conscription of women, but this is not implemented in practice. In the United Kingdom during World War II, beginning in 1941, women were brought into the scope of conscription but, as all women with dependent children were exempt and many women were informally left in occupations such as nursing or teaching, the number conscripted was relatively few.
In the Soviet Union, there was never conscription of women for the armed forces, but the severe disruption of normal life and the high proportion of civilians affected by World War II after the German invasion attracted many volunteers for "The Great Patriotic War". Medical doctors of both sexes could and would be conscripted (as officers). Also, the Soviet university education system required Department of Chemistry students of both sexes to complete an ROTC course in NBC defense, and such female reservist officers could be conscripted in times of war. The United States came close to drafting women into the Nurse Corps in preparation for a planned invasion of Japan.
In 1981 in the United States, several men filed a lawsuit in the case Rostker v. Goldberg, alleging that the Selective Service Act of 1948 violates the Due Process Clause of the Fifth Amendment by requiring that only men register with the Selective Service System (SSS). The Supreme Court eventually upheld the Act, stating that "the argument for registering women was based on considerations of equity, but Congress was entitled, in the exercise of its constitutional powers, to focus on the question of military need, rather than 'equity.'" In 2019, Judge Gray H. Miller of the United States District Court for the Southern District of Texas ruled that the Service's men-only requirement was unconstitutional because, while women were banned from serving in combat at the time Rostker was decided, the situation had since changed with the removal of those restrictions in 2013 and 2015. Miller's opinion was reversed by the Fifth Circuit, which stated that only the Supreme Court could overturn the Supreme Court precedent set in Rostker. The Supreme Court considered but declined to review the Fifth Circuit's ruling in June 2021. In an opinion authored by Justice Sonia Sotomayor and joined by Justices Stephen Breyer and Brett Kavanaugh, the three justices agreed that the male-only draft was likely unconstitutional given the changes in the military's stance on women's roles, but because Congress had been reviewing and evaluating legislation to eliminate its male-only draft requirement via the National Commission on Military, National, and Public Service (NCMNPS) since 2016, it would have been inappropriate for the Court to act at that time.
On 1 October 1999, in Taiwan, the Judicial Yuan of the Republic of China in its Interpretation 490 considered that the physical differences between males and females and the derived role differentiation in their respective social functions and lives would not make drafting only males a violation of the Constitution of the Republic of China. Though women are not conscripted in Taiwan, transsexual persons are exempt.
In 2018, the Netherlands started including women in its draft registration system, although conscription is not currently enforced for either sex.
A conscientious objector is an individual whose personal beliefs are incompatible with military service, or, more often, with any role in the armed forces. In some countries, conscientious objectors have special legal status, which alters their conscription duties. For example, Sweden allows conscientious objectors to choose a service in the weapons-free civil defense.
The reasons for refusing to serve in the military are varied. Some people are conscientious objectors for religious reasons. In particular, the members of the historic peace churches are pacifist by doctrine, and Jehovah's Witnesses, while not strictly pacifists, refuse to participate in the armed forces on the ground that they believe that Christians should be neutral in international conflicts.
Every male citizen of the Republic of Austria from the age of 17 up to 50, or up to 65 for specialists, is liable for military service. However, apart from mobilization, the call-up for the six-month basic military training in the Bundesheer can occur only up to the age of 35. For men refusing to undergo this training, a nine-month community service is mandatory.
Belgium abolished conscription in 1994. The last conscripts left active service in February 1995. To this day (2019), a small minority of Belgian citizens supports the idea of reintroducing military conscription, for both men and women.
Bulgaria had mandatory military service for males above 18 until conscription was ended in 2008. Due to a shortfall of some 5,500 soldiers in the army, parts of the former ruling coalition have expressed their support for the return of mandatory military service, most notably Krasimir Karakachanov. Opposition to this idea from the main coalition partner, GERB, led to a compromise in 2018 under which, instead of mandatory military service, Bulgaria would possibly introduce voluntary military service by 2019, allowing young citizens to volunteer for a period of 6 to 9 months while receiving a basic wage. However, this has not gone forward.
Since the signing of the Peace Accord in 1993, there has been no official conscription in Cambodia. The National Assembly had repeatedly refused to reintroduce it due to popular resentment; however, in November 2006, conscription was reintroduced. Although it is mandatory for all males between the ages of 18 and 30 (with some sources stating up to age 35), less than 20% of those in the age group are recruited amidst a downsizing of the armed forces.
Universal conscription in China dates back to the State of Qin, which eventually became the Qin Empire of 221 BC. Following unification, historical records show that a total of 300,000 conscript soldiers and 500,000 conscript labourers constructed the Great Wall of China. In the following dynasties, universal conscription was abolished and reintroduced on numerous occasions.
As of 2011, universal military conscription is theoretically mandatory in China, and reinforced by law. However, due to the large population of China and large pool of candidates available for recruitment, the People's Liberation Army has always had sufficient volunteers, so conscription has not been required in practice.
Military service in Cyprus has a deep-rooted history entangled with the Cyprus problem. Military service in the Cypriot National Guard is mandatory for all male citizens of the Republic of Cyprus, as well as any male non-citizens born of a parent of Greek Cypriot descent, lasting from 1 January of the year in which they turn 18 to 31 December of the year in which they turn 50. All male residents of Cyprus who are of military age (16 and over) are required to obtain an exit visa from the Ministry of Defense. Currently, military conscription in Cyprus lasts up to 14 months.
Conscription has been known in Denmark since the Viking Age, when one man out of every 10 had to serve the king. Frederick IV of Denmark changed the law in 1710 to every 4th man. The men were chosen by the landowner, and being chosen was seen as a penalty.
Since 12 February 1849, every physically fit man must do military service. According to §81 in the Constitution of Denmark, which was promulgated in 1849:
Every male person able to carry arms shall be liable with his person to contribute to the defence of his country under such rules as are laid down by Statute. — Constitution of Denmark
The legislation about compulsory military service is articulated in the Danish Law of Conscription. National service takes 4–12 months. It is possible to postpone the duty when one is still in full-time education. Every male turning 18 will be drafted to the 'Day of Defence', where they will be introduced to the Danish military and their health will be tested. Physically unfit persons are not required to do military service. It is only compulsory for men, while women are free to choose to join the Danish army. Almost all of the men have been volunteers in recent years, 96.9% of the total number of recruits having been volunteers in the 2015 draft.
After the lottery, one can become a conscientious objector. Total objection (refusal of even the alternative civilian service) results in up to 4 months of jail time according to the law. However, in 2014 a Danish man who had signed up for the service and objected later received only 14 days of house arrest. In many countries the act of desertion (objecting after signing up) is punished more harshly than objecting to the compulsory service.
Estonia adopted a policy of ajateenistus (literally "timed service") in late 1991, having inherited the concept from Soviet legislation. According to §124 of the 1992 constitution, "Estonian citizens have a duty to participate in national defence on the bases and pursuant to a procedure provided by a law", which in practice means that men aged 18–27 are subject to the draft.
In the formative years, conscripts had to serve an 18-month term. An amendment passed in 1994 shortened this to 12 months. Further revisions in 2003 established an eleven-month term for draftees trained as NCOs and drivers, and an eight-month term for the rank and file. Under the current system, the yearly draft is divided into three "waves": separate batches of eleven-month conscripts start their service in January and July, while those selected for an eight-month term are brought in in October. An estimated 3,200 people go through conscript service every year.
Conscripts serve in all branches of the Estonian Defence Forces except the air force which only relies on paid professionals due to its highly technical nature and security concerns. Historically, draftees could also be assigned to the border guard (before it switched to an all-volunteer model in 2000), a special rapid response unit of the police force (disbanded in 1997) or three militarized rescue companies within the Estonian Rescue Board (disbanded in 2004).
Conscription in Finland is part of a general compulsion for national military service for all adult males (Finnish: maanpuolustusvelvollisuus; Swedish: totalförsvarsplikt) defined in §127 of the Constitution of Finland.
Conscription can take the form of military or of civilian service. According to Finnish Defence Forces data from 2011, slightly under 80% of Finnish males who had turned 30 had entered and completed military service. The number of female volunteers annually entering armed service had stabilised at approximately 300. The service period is 165, 255 or 347 days for rank-and-file conscripts and 347 days for conscripts trained as NCOs or reserve officers. The length of civilian service is always twelve months. Those electing to serve unarmed in duties where unarmed service is possible serve either nine or twelve months, depending on their training.
Any Finnish male citizen who refuses to perform both military and civilian service faces a penalty of 173 days in prison, minus any days served. Such sentences are usually served fully in prison, with no parole. Jehovah's Witnesses are no longer exempted from service as of 27 February 2019. The inhabitants of demilitarized Åland are exempt from military service. Under the Conscription Act of 1951, they are instead required to serve a term at a local institution, such as the coast guard; however, until such service has been arranged, they are freed from the service obligation. The non-military service of Åland has not been arranged since the introduction of the act, and there are no plans to institute it. The inhabitants of Åland can also volunteer for military service on the mainland. As of 1995, women are permitted to serve on a voluntary basis and pursue careers in the military after their initial voluntary military service.
The military service takes place in Finnish Defence Forces or in the Finnish Border Guard. All services of the Finnish Defence Forces train conscripts. However, the Border Guard trains conscripts only in land-based units, not in coast guard detachments or in the Border Guard Air Wing. Civilian service may take place in the Civilian Service Center in Lapinjärvi or in an accepted non-profit organization of educational, social or medical nature.
Between 1956 and 2011 conscription was mandatory for all male citizens in the German federal armed forces (German: Bundeswehr), as well as for the Federal Border Guard (Bundesgrenzschutz) in the 1970s (see Border Guard Service). With the end of the Cold War the German government drastically reduced the size of its armed forces. The low demand for conscripts led to the suspension of compulsory conscription in 2011. Since then, only volunteer professionals serve in the Bundeswehr.
Since 1914 Greece has been enforcing mandatory military service, currently lasting 12 months (but historically up to 36 months) for all adult men. Citizens discharged from active service are normally placed in the reserve and are subject to periodic recalls of 1–10 days at irregular intervals.
Universal conscription was introduced in Greece during the military reforms of 1909, although various forms of selective conscription had been in place earlier. In more recent years, conscription was associated with the state of general mobilisation declared on 20 July 1974, due to the crisis in Cyprus (the mobilisation was formally ended on 18 December 2002).
The duration of military service has historically ranged between 9 and 36 months depending on various factors either particular to the conscript or the political situation in the Eastern Mediterranean. Although women are employed by the Greek army as officers and soldiers, they are not obliged to enlist. Soldiers receive no health insurance, but they are provided with medical support during their army service, including hospitalization costs.
Greece enforces conscription for all male citizens aged between 19 and 45. In August 2009, the duration of mandatory service was reduced from 12 to 9 months for the army, but remained at 12 months for the navy and the air force. The number of conscripts allocated to the latter two has been greatly reduced, with the aim of full professionalization. Nevertheless, mandatory military service in the army was once again raised to 12 months in March 2021, unless served in units in Evros or the North Aegean islands, where the duration was kept at 9 months. Although full professionalization is under consideration, severe financial difficulties and mismanagement, including delays and reduced rates in the hiring of professional soldiers, as well as widespread abuse of the deferment process, have resulted in the postponement of such a plan.
In Iran, all men who reach the age of 18 must do about two years of compulsory military service in the Islamic Republic's police force, the Iranian army or the Islamic Revolutionary Guard Corps. Before the 1979 revolution, women could serve in the military. After the establishment of the Islamic Republic, however, some Ayatollahs considered the Pahlavi government's enlistment of women to be disrespectful to women and banned women's military service. Iranian women and girls were therefore completely exempted from military service, a disparity that Iranian men and boys have opposed.
In Iran, men who refuse to perform military service are deprived of citizenship rights such as employment, health insurance, continuing their education at university, going abroad and opening a bank account. Iranian men have long opposed mandatory military service and demanded that military service in Iran become a paid profession, as in other countries, but the Islamic Republic is opposed to this demand. Some Iranian military commanders treat the elimination of conscription, or the improvement of soldiers' conditions, as a security issue and as one of Ali Khamenei's powers as commander-in-chief of the armed forces, so they approach it with caution. In Iran, wealthy people are usually able to obtain exemption from conscription. Some other men can be exempted from conscription because their fathers served in the Iran-Iraq war.
There is a mandatory military service for all men and women in Israel who are fit and 18 years old. Men must serve 32 months while women serve 24 months, with the vast majority of conscripts being Jewish.
Some Israeli citizens are exempt from mandatory service:
All of those exempted above are eligible to volunteer for the Israel Defense Forces (IDF), as long as they declare their wish to do so.
Male Druze and male Circassian Israeli citizens are liable for conscription, in accordance with an agreement set by their community leaders (the community leaders, however, signed a clause under which all female Druze and female Circassians are exempt from service).
A few male Bedouin Israeli citizens choose to enlist in the Israeli military in every draft (despite their Muslim-Arab background, which exempts them from conscription).
There was mandatory military conscription for all white men in South Africa from 1968 until the end of apartheid in 1994. Under South African defense law, young white men had to undergo two years' continuous military training after leaving school, after which they had to serve 720 days of occasional military duty over the next 12 years. The End Conscription Campaign began in 1983 in opposition to the requirement. In the same year the National Party government announced plans to extend conscription to white immigrants in the country.
Lithuania abolished its conscription in 2008. In May 2015, the Lithuanian parliament voted to reintroduce conscription and the conscripts started their training in August 2015. From 2015 to 2017 there were enough volunteers to avoid drafting civilians.
Luxembourg practiced military conscription from 1948 until 1967.
Moldova, which currently has male conscription, has announced plans to abolish the practice. Moldova's Defense Ministry announced that a plan which stipulates the gradual elimination of military conscription will be implemented starting from the autumn of 2018.
Conscription, which was called "Service Duty" (Dutch: dienstplicht) in the Netherlands, was first employed in 1810 by French occupying forces. Napoleon's brother Louis Bonaparte, who was King of Holland from 1806 to 1810, had tried to introduce conscription a few years earlier, unsuccessfully. Every man aged 20 years or older had to enlist. By means of drawing lots it was decided who had to undertake service in the French army. It was possible to arrange a substitute against payment.
Later on, conscription was used for all men over the age of 18. Postponement was possible, due to study, for example. Conscientious objectors could perform an alternative civilian service instead of military service. For various reasons, this forced military service was criticized at the end of the twentieth century. Since the Cold War was over, so was the direct threat of a war. Instead, the Dutch army was employed in more and more peacekeeping operations. The complexity and danger of these missions made the use of conscripts controversial. Furthermore, the conscription system was thought to be unfair as only men were drafted.
In the European part of the Netherlands, compulsory attendance has been officially suspended since 1 May 1997. Between 1991 and 1996, the Dutch armed forces phased out their conscript personnel and converted to an all-professional force. The last conscript troops were inducted in 1995 and demobilized in 1996. The suspension means that citizens are no longer forced to serve in the armed forces as long as it is not required for the safety of the country. Since then, the Dutch army has become an all-professional force. However, to this day, every male and – from January 2020 onward – female citizen aged 17 gets a letter informing them that they have been registered but do not have to present themselves for service.
Conscription was constitutionally established on 12 April 1907 with Kongeriket Norges Grunnlov § 119. As of March 2016, Norway employs a weak form of mandatory military service for men and women. In practice recruits are not forced to serve; instead, only those who are motivated are selected. About 60,000 Norwegians are available for conscription every year, but only 8,000 to 10,000 are conscripted. Since 1985, women have been able to enlist for voluntary service as regular recruits. On 14 June 2013 the Norwegian Parliament voted to extend conscription to women, making Norway the first NATO member and first European country to make national service compulsory for both sexes. Previously, up until at least the early 2000s, all men aged 19–44 were subject to mandatory service, with good reasons required to avoid becoming drafted. There is a right of conscientious objection.
In addition to military service, the Norwegian government drafts a total of 8,000 men and women between 18 and 55 for non-military Civil defence duty (not to be confused with alternative civilian service). Former service in the military does not exclude anyone from later being drafted to the Civil defence, but an upper limit of 19 months of total service applies. Neglecting mobilisation orders for training exercises or actual incidents may result in fines.
As of 1 January 2011, Serbia no longer practises mandatory military service. Prior to this, mandatory military service lasted 6 months for men. Conscientious objectors could however opt for 9 months of civil service instead.
On 15 December 2010, the Parliament of Serbia voted to suspend mandatory military service. The decision fully came into force on 1 January 2011.
Sweden had conscription (Swedish: värnplikt) for men between 1901 and 2010. During the last few decades it was selective. Since 1980, women have been allowed to sign up by choice, and, if passing the tests, do military training together with male conscripts. Since 1989 women have been allowed to serve in all military positions and units, including combat.
In 2010, conscription was made gender-neutral, meaning both women and men would be conscripted on equal terms. The conscription system was simultaneously deactivated in peacetime. Seven years later, referencing increased military threat, the Swedish Government reactivated military conscription. Beginning in 2018, both men and women are conscripted.
Taiwan, officially the Republic of China (ROC), maintains an active conscription system. All qualified male citizens of military age are currently obligated to receive four months of military training. In December 2022, President Tsai Ing-wen announced the reinstatement of mandatory one-year active-duty military service from January 2024.
The United Kingdom introduced conscription to full-time military service for the first time in January 1916 (the eighteenth month of World War I) and abolished it in 1920. Ireland, then part of the United Kingdom, was exempted from the original 1916 military service legislation, and although further legislation in 1918 gave power for an extension of conscription to Ireland, the power was never put into effect.
Conscription was reintroduced in 1939, in the lead up to World War II, and continued in force until 1963. Northern Ireland was exempted from conscription legislation throughout the whole period.
In all, eight million men were conscripted during both World Wars, as well as several hundred thousand younger single women. The introduction of conscription in May 1939, before the war began, was partly due to pressure from the French, who emphasized the need for a large British army to oppose the Germans. From early 1942 unmarried women aged 19–30 were conscripted. Most were sent to the factories, but they could volunteer for the Auxiliary Territorial Service (ATS) and other women's services. Some women served in the Women's Land Army, initially as volunteers, though conscription was later introduced. However, women who were already working in a skilled job considered helpful to the war effort, such as a General Post Office telephonist, were told to continue working as before. None was assigned to combat roles unless she volunteered. By 1943 women were liable to some form of directed labour up to age 51. During the Second World War, 1.4 million British men volunteered for service and 3.2 million were conscripted. Conscripts comprised 50% of the Royal Air Force, 60% of the Royal Navy and 80% of the British Army.
The abolition of conscription in Britain was announced on 4 April 1957, by new prime minister Harold Macmillan, with the last conscripts being recruited three years later.
Conscription in the United States ended in 1973, but males aged between 18 and 25 are required to register with the Selective Service System to enable a reintroduction of conscription if necessary. President Gerald Ford had suspended mandatory draft registration in 1975, but President Jimmy Carter reinstated that requirement when the Soviet Union intervened in Afghanistan five years later. Consequently, Selective Service registration is still required of almost all young men. There have been no prosecutions for violations of the draft registration law since 1986. Males between the ages of 17 and 45, and female members of the US National Guard may be conscripted for federal militia service pursuant to 10 U.S. Code § 246 and the Militia Clauses of the United States Constitution.
In February 2019, the United States District Court for the Southern District of Texas ruled that male-only conscription registration breached the Fifth Amendment's guarantee of equal protection. In National Coalition for Men v. Selective Service System, a case brought by the non-profit men's rights organisation the National Coalition for Men against the U.S. Selective Service System, judge Gray H. Miller issued a declaratory judgement that the male-only registration requirement is unconstitutional, though he did not specify what action the government should take. That ruling was reversed by the Fifth Circuit. In June 2021, the U.S. Supreme Court declined to review the decision by the Court of Appeals. | [
{
"paragraph_id": 0,
"text": "Conscription (also called the draft in the United States) is the state-mandated enlistment of people in a national service, mainly a military service. Conscription dates back to antiquity and it continues in some countries to the present day under various names. The modern system of near-universal national conscription for young men dates to the French Revolution in the 1790s, where it became the basis of a very large and powerful military. Most European nations later copied the system in peacetime, so that men at a certain age would serve 1–8 years on active duty and then transfer to the reserve force.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Conscription is controversial for a range of reasons, including conscientious objection to military engagements on religious or philosophical grounds; political objection, for example to service for a disliked government or unpopular war; sexism, in that historically men have been subject to the draft in the most cases; and ideological objection, for example, to a perceived violation of individual rights. Those conscripted may evade service, sometimes by leaving the country, and seeking asylum in another country. Some selection systems accommodate these attitudes by providing alternative service outside combat-operations roles or even outside the military, such as siviilipalvelus (alternative civil service) in Finland, Zivildienst (compulsory community service) in Austria, Germany and Switzerland. Several countries conscript male soldiers not only for armed forces, but also for paramilitary agencies, which are dedicated to police-like domestic only service like internal troops, border guards or non-combat rescue duties like civil defence.",
"title": ""
},
{
"paragraph_id": 2,
"text": "As of 2023, many states no longer conscript their citizens, relying instead upon professional militaries with volunteers. The ability to rely on such an arrangement, however, presupposes some degree of predictability with regard to both war-fighting requirements and the scope of hostilities. Many states that have abolished conscription still, therefore, reserve the power to resume conscription during wartime or times of crisis. States involved in wars or interstate rivalries are most likely to implement conscription, and democracies are less likely than autocracies to implement conscription. With a few exceptions, such as Singapore and Egypt, former British colonies are less likely to have conscription, as they are influenced by British anti-conscription norms that can be traced back to the English Civil War; the United Kingdom abolished conscription in 1960.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Around the reign of Hammurabi (1791–1750 BC), the Babylonian Empire used a system of conscription called Ilkum. Under that system those eligible were required to serve in the royal army in time of war. During times of peace they were instead required to provide labour for other activities of the state. In return for this service, people subject to it gained the right to hold land. It is possible that this right was not to hold land per se but specific land supplied by the state.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Various forms of avoiding military service are recorded. While it was outlawed by the Code of Hammurabi, the hiring of substitutes appears to have been practiced both before and after the creation of the code. Later records show that Ilkum commitments could become regularly traded. In other places, people simply left their towns to avoid their Ilkum service. Another option was to sell Ilkum lands and the commitments along with them. With the exception of a few exempted classes, this was forbidden by the Code of Hammurabi.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Under the feudal laws on the European continent, landowners in the medieval period enforced a system whereby all peasants, freemen commoners and noblemen aged 15 to 60 living in the countryside or in urban centers, were summoned for military duty when required by either the king or the local lord, bringing along the weapons and armor according to their wealth. These levies fought as footmen, sergeants, and men at arms under local superiors appointed by the king or the local lord such as the arrière-ban in France. Arrière-ban denoted a general levy, where all able-bodied males age 15 to 60 living in the Kingdom of France were summoned to go to war by the King (or the constable and the marshals). Men were summoned by the bailiff (or the sénéchal in the south). Bailiffs were military and political administrators installed by the King to steward and govern a specific area of a province following the king's commands and orders. The men summoned in this way were then summoned by the lieutenant who was the King's representative and military governor over an entire province comprising many bailiwicks, seneschalties and castellanies. All men from the richest noble to the poorest commoner were summoned under the arrière-ban and they were supposed to present themselves to the King or his officials.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In medieval Scandinavia the leiðangr (Old Norse), leidang (Norwegian), leding, (Danish), ledung (Swedish), lichting (Dutch), expeditio (Latin) or sometimes leþing (Old English), was a levy of free farmers conscripted into coastal fleets for seasonal excursions and in defence of the realm.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The bulk of the Anglo-Saxon English army, called the fyrd, was composed of part-time English soldiers drawn from the freemen of each county. In the 690s laws of Ine of Wessex, three levels of fines are imposed on different social classes for neglecting military service.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Some modern writers claim military service in Europe was restricted to the landowning minor nobility. These thegns were the land-holding aristocracy of the time and were required to serve with their own armour and weapons for a certain number of days each year. The historian David Sturdy has cautioned about regarding the fyrd as a precursor to a modern national army composed of all ranks of society, describing it as a \"ridiculous fantasy\":",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The persistent old belief that peasants and small farmers gathered to form a national army or fyrd is a strange delusion dreamt up by antiquarians in the late eighteenth or early nineteenth centuries to justify universal military conscription.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In feudal Japan the shogun decree of 1393 exempted money lenders from religious or military levies, in return for a yearly tax. The Ōnin War weakened the shogun and levies were imposed again on money lenders. This overlordism was arbitrary and unpredictable for commoners. While the money lenders were not poor, several overlords tapped them for income. Levies became necessary for the survival of the overlord, allowing the lord to impose taxes at will. These levies included tansen tax on agricultural land for ceremonial expenses. Yakubu takumai tax was raised on all land to rebuild the Ise Grand Shrine, and munabechisen tax was imposed on all houses. At the time, land in Kyoto was acquired by commoners through usury and in 1422 the shogun threatened to repossess the land of those commoners who failed to pay their levies.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The system of military slaves was widely used in the Middle East, beginning with the creation of the corps of Turkic slave-soldiers (ghulams or mamluks) by the Abbasid caliph al-Mu'tasim in the 820s and 830s. The Turkish troops soon came to dominate the government, establishing a pattern throughout the Islamic world of a ruling military class, often separated by ethnicity, culture and even religion by the mass of the population, a paradigm that found its apogee in the Mamluks of Egypt and the Janissary corps of the Ottoman Empire, institutions that survived until the early 19th century.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In the middle of the 14th century, Ottoman Sultan Murad I developed personal troops to be loyal to him, with a slave army called the Kapıkulu. The new force was built by taking Christian children from newly conquered lands, especially from the far areas of his empire, in a system known as the devşirme (translated \"gathering\" or \"converting\"). The captive children were forced to convert to Islam. The Sultans had the young boys trained over several years. Those who showed special promise in fighting skills were trained in advanced warrior skills, put into the sultan's personal service, and turned into the Janissaries, the elite branch of the Kapıkulu. A number of distinguished military commanders of the Ottomans, and most of the imperial administrators and upper-level officials of the Empire, such as Pargalı İbrahim Pasha and Sokollu Mehmet Paşa, were recruited in this way. By 1609, the Sultan's Kapıkulu forces increased to about 100,000.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In later years, Sultans turned to the Barbary Pirates to supply their Jannissaries corps. Their attacks on ships off the coast of Africa or in the Mediterranean, and subsequent capture of able-bodied men for ransom or sale provided some captives for the Sultan's system. Starting in the 17th century, Christian families living under the Ottoman rule began to submit their sons into the Kapikulu system willingly, as they saw this as a potentially invaluable career opportunity for their children. Eventually the Sultan turned to foreign volunteers from the warrior clans of Circassians in southern Russia to fill his Janissary armies. As a whole the system began to break down, the loyalty of the Jannissaries became increasingly suspect. Mahmud II forcibly disbanded the Janissary corps in 1826.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Similar to the Janissaries in origin and means of development were the Mamluks of Egypt in the Middle Ages. The Mamluks were usually captive non-Muslim Iranian and Turkish children who had been kidnapped or bought as slaves from the Barbary coasts. The Egyptians assimilated and trained the boys and young men to become Islamic soldiers who served the Muslim caliphs and the Ayyubid sultans during the Middle Ages. The first mamluks served the Abbasid caliphs in 9th-century Baghdad. Over time they became a powerful military caste. On more than one occasion, they seized power, for example, ruling Egypt from 1250 to 1517.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "From 1250 Egypt had been ruled by the Bahri dynasty of Kipchak origin. Slaves from the Caucasus served in the army and formed an elite corps of troops. They eventually revolted in Egypt to form the Burgi dynasty. The Mamluks' excellent fighting abilities, massed Islamic armies, and overwhelming numbers succeeded in overcoming the Christian Crusader fortresses in the Holy Land. The Mamluks were the most successful defence against the Mongol Ilkhanate of Persia and Iraq from entering Egypt.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "On the western coast of Africa, Berber Muslims captured non-Muslims to put to work as laborers. They generally converted the younger people to Islam and many became quite assimilated. In Morocco, the Berber looked south rather than north. The Moroccan Sultan Moulay Ismail, called \"the Bloodthirsty\" (1672–1727), employed a corps of 150,000 black slaves, called his Black Guard. He used them to coerce the country into submission.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Modern conscription, the massed military enlistment of national citizens (levée en masse), was devised during the French Revolution, to enable the Republic to defend itself from the attacks of European monarchies. Deputy Jean-Baptiste Jourdan gave its name to the 5 September 1798 Act, whose first article stated: \"Any Frenchman is a soldier and owes himself to the defense of the nation.\" It enabled the creation of the Grande Armée, what Napoleon Bonaparte called \"the nation in arms\", which overwhelmed European professional armies that often numbered only into the low tens of thousands. More than 2.6 million men were inducted into the French military in this way between the years 1800 and 1813.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The defeat of the Prussian Army in particular shocked the Prussian establishment, which had believed it was invincible after the victories of Frederick the Great. The Prussians were used to relying on superior organization and tactical factors such as order of battle to focus superior troops against inferior ones. Given approximately equivalent forces, as was generally the case with professional armies, these factors showed considerable importance. However, they became considerably less important when the Prussian armies faced Napoleon's forces that outnumbered their own in some cases by more than ten to one. Scharnhorst advocated adopting the levée en masse, the military conscription used by France. The Krümpersystem was the beginning of short-term compulsory service in Prussia, as opposed to the long-term conscription previously used.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In the Russian Empire, the military service time \"owed\" by serfs was 25 years at the beginning of the 19th century. In 1834 it was decreased to 20 years. The recruits were to be not younger than 17 and not older than 35. In 1874 Russia introduced universal conscription in the modern pattern, an innovation only made possible by the abolition of serfdom in 1861. New military law decreed that all male Russian subjects, when they reached the age of 20, were eligible to serve in the military for six years.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In the decades prior to World War I universal conscription along broadly Prussian lines became the norm for European armies, and those modeled on them. By 1914 the only substantial armies still completely dependent on voluntary enlistment were those of Britain and the United States. Some colonial powers such as France reserved their conscript armies for home service while maintaining professional units for overseas duties.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The range of eligible ages for conscripting was expanded to meet national demand during the World Wars. In the United States, the Selective Service System drafted men for World War I initially in an age range from 21 to 30 but expanded its eligibility in 1918 to an age range of 18 to 45. In the case of a widespread mobilization of forces where service includes homefront defense, ages of conscripts may range much higher, with the oldest conscripts serving in roles requiring lesser mobility.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Expanded-age conscription was common during the Second World War: in Britain, it was commonly known as \"call-up\" and extended to age 51. Nazi Germany termed it Volkssturm (\"People's Storm\") and included children as young as 16 and men as old as 60. During the Second World War, both Britain and the Soviet Union conscripted women. The United States was on the verge of drafting women into the Nurse Corps because it anticipated it would need the extra personnel for its planned invasion of Japan. However, the Japanese surrendered and the idea was abandoned.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "During the Great Patriotic War, the Red Army conscripted nearly 30 million men.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Men's rights activists, feminists, and opponents of discrimination against men have criticized military conscription, or compulsory military service, as sexist. The National Coalition for Men, a men's rights group, sued the US Selective Service System in 2019, leading to it being declared unconstitutional by a US Federal Judge. The federal district judge's opinion was unanimously overturned on appeal to the U.S. Court of Appeals for the 5th Circuit. In September 2021, the House of Representatives passed the annual Defense Authorization Act, which included an amendment that states that \"all Americans between the ages of 18 and 25 must register for selective service.\" This amendment omitted the word \"male,\" which would have extended a potential draft to women; however, the amendment was removed before the National Defense Authorization Act was passed.",
"title": "Arguments against conscription"
},
{
"paragraph_id": 25,
"text": "Feminists have argued, first, that military conscription is sexist because wars serve the interests of what they view as the patriarchy; second, that the military is a sexist institution and that conscripts are therefore indoctrinated into sexism; and third, that conscription of men normalizes violence by men as socially acceptable. Feminists have been organizers and participants in resistance to conscription in several countries.",
"title": "Arguments against conscription"
},
{
"paragraph_id": 26,
"text": "Conscription has also been criticized on the ground that, historically, only men have been subjected to conscription. Men who opt out or are deemed unfit for military service must often perform alternative service, such as Zivildienst in Austria, Germany and Switzerland, or pay extra taxes, whereas women do not have these obligations. In the US, men who do not register with the Selective Service cannot apply for citizenship, receive federal financial aid, grants or loans, be employed by the federal government, be admitted to public colleges or universities, or, in some states, obtain a driver's license.",
"title": "Arguments against conscription"
},
{
"paragraph_id": 27,
"text": "Many American libertarians oppose conscription and call for the abolition of the Selective Service System, arguing that impressment of individuals into the armed forces amounts to involuntary servitude. For example, Ron Paul, a former U.S. Libertarian Party presidential nominee, has said that conscription \"is wrongly associated with patriotism, when it really represents slavery and involuntary servitude\". The philosopher Ayn Rand opposed conscription, opining that \"of all the statist violations of individual rights in a mixed economy, the military draft is the worst. It is an abrogation of rights. It negates man's fundamental right—the right to life—and establishes the fundamental principle of statism: that a man's life belongs to the state, and the state may claim it by compelling him to sacrifice it in battle.\"",
"title": "Arguments against conscription"
},
{
"paragraph_id": 28,
"text": "In 1917, a number of radicals and anarchists, including Emma Goldman, challenged the new draft law in federal court, arguing that it was a violation of the Thirteenth Amendment's prohibition against slavery and involuntary servitude. However, the Supreme Court unanimously upheld the constitutionality of the draft act in the case of Arver v. United States on 7 January 1918, on the ground that the Constitution gives Congress the power to declare war and to raise and support armies. The Court also relied on the principle of the reciprocal rights and duties of citizens. \"It may not be doubted that the very conception of a just government in its duty to the citizen includes the reciprocal obligation of the citizen to render military service in case of need and the right to compel.\"",
"title": "Arguments against conscription"
},
{
"paragraph_id": 29,
"text": "It can be argued that in a cost-to-benefit ratio, conscription during peacetime is not worthwhile. Months or years of service performed by the most fit and capable subtract from the productivity of the economy; add to this the cost of training them, and in some countries paying them. Compared to these extensive costs, some would argue there is very little benefit; if there ever was a war then conscription and basic training could be completed quickly, and in any case there is little threat of a war in most countries with conscription. In the United States, every male resident is required by law to register with the Selective Service System within 30 days following his 18th birthday and be available for a draft; this is often accomplished automatically by a motor vehicle department during licensing or by voter registration.",
"title": "Arguments against conscription"
},
{
"paragraph_id": 30,
"text": "According to Milton Friedman the cost of conscription can be related to the parable of the broken window in anti-draft arguments. The cost of the work, military service, does not disappear even if no salary is paid. The work effort of the conscripts is effectively wasted, as an unwilling workforce is extremely inefficient. The impact is especially severe in wartime, when civilian professionals are forced to fight as amateur soldiers. Not only is the work effort of the conscripts wasted and productivity lost, but professionally skilled conscripts are also difficult to replace in the civilian workforce. Every soldier conscripted in the army is taken away from his civilian work, and away from contributing to the economy which funds the military. This may be less a problem in an agrarian or pre-industrialized state where the level of education is generally low, and where a worker is easily replaced by another. However, this is potentially more costly in a post-industrial society where educational levels are high and where the workforce is sophisticated and a replacement for a conscripted specialist is difficult to find. Even more dire economic consequences result if the professional conscripted as an amateur soldier is killed or maimed for life; his work effort and productivity are lost.",
"title": "Arguments against conscription"
},
{
"paragraph_id": 31,
"text": "Jean Jacques Rousseau argued vehemently against professional armies since he believed that it was the right and privilege of every citizen to participate to the defense of the whole society and that it was a mark of moral decline to leave the business to professionals. He based his belief upon the development of the Roman Republic, which came to an end at the same time as the Roman Army changed from a conscript to a professional force. Similarly, Aristotle linked the division of armed service among the populace intimately with the political order of the state. Niccolò Machiavelli argued strongly for conscription and saw the professional armies, made up of mercenary units, as the cause of the failure of societal unity in Italy.",
"title": "Arguments for conscription"
},
{
"paragraph_id": 32,
"text": "Other proponents, such as William James, consider both mandatory military and national service as ways of instilling maturity in young adults. Some proponents, such as Jonathan Alter and Mickey Kaus, support a draft in order to reinforce social equality, create social consciousness, break down class divisions and allow young adults to immerse themselves in public enterprise. Charles Rangel called for the reinstatement of the draft during the Iraq War not because he seriously expected it to be adopted but to stress how the socioeconomic restratification meant that very few children of upper-class Americans served in the all-volunteer American armed forces.",
"title": "Arguments for conscription"
},
{
"paragraph_id": 33,
"text": "It is estimated by the British military that in a professional military, a company deployed for active duty in peacekeeping corresponds to three inactive companies at home. Salaries for each are paid from the military budget. In contrast, volunteers from a trained reserve are in their civilian jobs when they are not deployed.",
"title": "Arguments for conscription"
},
{
"paragraph_id": 34,
"text": "It was more financially beneficial for less-educated young Portuguese men born in 1967 to participate in conscription than to participate in the highly competitive job market with men of the same age who continued to higher education.",
"title": "Arguments for conscription"
},
{
"paragraph_id": 35,
"text": "Throughout history, women have only been conscripted to join armed forces in a few countries, in contrast to the universal practice of conscription from among the male population. The traditional view has been that military service is a test of manhood and a rite of passage from boyhood into manhood. In recent years, this position has been challenged on the basis that it violates gender equality, and some countries, especially in Europe, have extended conscription obligations to women.",
"title": "Drafting of women"
},
{
"paragraph_id": 36,
"text": "Nations that in present-day actively draft women into military service are Bolivia, Chad, Eritrea, Israel, Mozambique, Norway, North Korea and Sweden.",
"title": "Drafting of women"
},
{
"paragraph_id": 37,
"text": "Norway introduced female conscription in 2015, making it the first NATO member to have a legally compulsory national service for both men and women. In practice only motivated volunteers are selected to join the army in Norway.",
"title": "Drafting of women"
},
{
"paragraph_id": 38,
"text": "Sweden introduced female conscription in 2010, but it was not activated until 2017. This made Sweden the second nation in Europe to draft women, and the second in the world to draft women on the same formal terms as men.",
"title": "Drafting of women"
},
{
"paragraph_id": 39,
"text": "Israel has universal female conscription, although it is possible to avoid service by claiming a religious exemption and over a third of Israeli women do so.",
"title": "Drafting of women"
},
{
"paragraph_id": 40,
"text": "Finland introduced voluntary female conscription in 1995, giving women between the ages of 18 and 29 an option to complete their military service alongside men.",
"title": "Drafting of women"
},
{
"paragraph_id": 41,
"text": "Sudanese law allows for conscription of women, but this is not implemented in practice. In the United Kingdom during World War II, beginning in 1941, women were brought into the scope of conscription but, as all women with dependent children were exempt and many women were informally left in occupations such as nursing or teaching, the number conscripted was relatively few.",
"title": "Drafting of women"
},
{
"paragraph_id": 42,
"text": "In the Soviet Union, there was never conscription of women for the armed forces, but the severe disruption of normal life and the high proportion of civilians affected by World War II after the German invasion attracted many volunteers for \"The Great Patriotic War\". Medical doctors of both sexes could and would be conscripted (as officers). Also, the Soviet university education system required Department of Chemistry students of both sexes to complete an ROTC course in NBC defense, and such female reservist officers could be conscripted in times of war. The United States came close to drafting women into the Nurse Corps in preparation for a planned invasion of Japan.",
"title": "Drafting of women"
},
{
"paragraph_id": 43,
"text": "In 1981 in the United States, several men filed lawsuit in the case Rostker v. Goldberg, alleging that the Selective Service Act of 1948 violates the Due Process Clause of the Fifth Amendment by requiring that only men register with the Selective Service System (SSS). The Supreme Court eventually upheld the Act, stating that \"the argument for registering women was based on considerations of equity, but Congress was entitled, in the exercise of its constitutional powers, to focus on the question of military need, rather than 'equity.'\" In 2013, Judge Gray H. Miller of the United States District Court for the Southern District of Texas ruled that the Service's men-only requirement was unconstitutional, as while at the time Rostker was decided, women were banned from serving in combat, the situation had since changed with the 2013 and 2015 restriction removals. Miller's opinion was reversed by the Fifth Circuit, stating that only the Supreme Court could overturn the Supreme Court precedence from Rostker. The Supreme Court considered but declined to review the Fifth Circuit's ruling in June 2021. In an opinion authored by Justice Sonia Sotomayor and joined by Justices Stephen Breyer and Brett Kavanaugh, the three justices agreed that the male-only draft was likely unconstitutional given the changes in the military's stance on the roles, but because Congress had been reviewing and evaluating legislation to eliminate its male-only draft requirement via the National Commission on Military, National, and Public Service (NCMNPS) since 2016, it would have been inappropriate for the Court to act at that time.",
"title": "Drafting of women"
},
{
"paragraph_id": 44,
"text": "On 1 October 1999, in Taiwan, the Judicial Yuan of the Republic of China in its Interpretation 490 considered that the physical differences between males and females and the derived role differentiation in their respective social functions and lives would not make drafting only males a violation of the Constitution of the Republic of China. Though women are not conscripted in Taiwan, transsexual persons are exempt.",
"title": "Drafting of women"
},
{
"paragraph_id": 45,
"text": "In 2018, the Netherlands started including women in its draft registration system, although conscription is not currently enforced for either sex.",
"title": "Drafting of women"
},
{
"paragraph_id": 46,
"text": "A conscientious objector is an individual whose personal beliefs are incompatible with military service, or, more often, with any role in the armed forces. In some countries, conscientious objectors have special legal status, which augments their conscription duties. For example, Sweden allows conscientious objectors to choose a service in the weapons-free civil defense.",
"title": "Conscientious objection"
},
{
"paragraph_id": 47,
"text": "The reasons for refusing to serve in the military are varied. Some people are conscientious objectors for religious reasons. In particular, the members of the historic peace churches are pacifist by doctrine, and Jehovah's Witnesses, while not strictly pacifists, refuse to participate in the armed forces on the ground that they believe that Christians should be neutral in international conflicts.",
"title": "Conscientious objection"
},
{
"paragraph_id": 48,
"text": "Every male citizen of the Republic of Austria from the age of 17 up to 50, specialists up to 65 years is liable to military service. However, besides mobilization, conscription calls to a six-month long basic military training in the Bundesheer can be done up to the age of 35. For men refusing to undergo this training, a nine-month lasting community service is mandatory.",
"title": "By country"
},
{
"paragraph_id": 49,
"text": "Belgium abolished the conscription in 1994. The last conscripts left active service in February 1995. To this day (2019), a small minority of the Belgian citizens supports the idea of reintroducing military conscription, for both men and women.",
"title": "By country"
},
{
"paragraph_id": 50,
"text": "Bulgaria had mandatory military service for males above 18 until conscription was ended in 2008. Due to a shortfall in the army of some 5500 soldiers, parts of the former ruling coalition have expressed their support for the return of mandatory military service, most notably Krasimir Karakachanov. Opposition towards this idea from the main coalition partner, GERB, saw a compromise in 2018, where instead of mandatory military service, Bulgaria could have possibly introduced a voluntary military service by 2019 where young citizens can volunteer for a period of 6 to 9 months, receiving a basic wage. However this has not gone forward.",
"title": "By country"
},
{
"paragraph_id": 51,
"text": "Since the signing of the Peace Accord in 1993, there has been no official conscription in Cambodia. Also the National Assembly has repeatedly rejected to reintroduce it due to popular resentment. However, in November 2006, it was reintroduced. Although mandatory for all males between the ages of 18 and 30 (with some sources stating up to age 35), less than 20% of those in the age group are recruited amidst a downsizing of the armed forces.",
"title": "By country"
},
{
"paragraph_id": 52,
"text": "Universal conscription in China dates back to the State of Qin, which eventually became the Qin Empire of 221 BC. Following unification, historical records show that a total of 300,000 conscript soldiers and 500,000 conscript labourers constructed the Great Wall of China. In the following dynasties, universal conscription was abolished and reintroduced on numerous occasions.",
"title": "By country"
},
{
"paragraph_id": 53,
"text": "As of 2011, universal military conscription is theoretically mandatory in China, and reinforced by law. However, due to the large population of China and large pool of candidates available for recruitment, the People's Liberation Army has always had sufficient volunteers, so conscription has not been required in practice.",
"title": "By country"
},
{
"paragraph_id": 54,
"text": "Military service in Cyprus has a deep rooted history entangled with the Cyprus problem. Military service in the Cypriot National Guard is mandatory for all male citizens of the Republic of Cyprus, as well as any male non-citizens born of a parent of Greek Cypriot descent, lasting from the 1 January of the year in which they turn 18 years of age to 31 December, of the year in which they turn 50. All male residents of Cyprus who are of military age (16 and over) are required to obtain an exit visa from the Ministry of Defense. Currently, military conscription in Cyprus lasts up to 14 months.",
"title": "By country"
},
{
"paragraph_id": 55,
"text": "Conscription is known in Denmark since the Viking Age, where one man out of every 10 had to serve the king. Frederick IV of Denmark changed the law in 1710 to every 4th man. The men were chosen by the landowner and it was seen as a penalty.",
"title": "By country"
},
{
"paragraph_id": 56,
"text": "Since 12 February 1849, every physically fit man must do military service. According to §81 in the Constitution of Denmark, which was promulgated in 1849:",
"title": "By country"
},
{
"paragraph_id": 57,
"text": "Every male person able to carry arms shall be liable with his person to contribute to the defence of his country under such rules as are laid down by Statute. — Constitution of Denmark",
"title": "By country"
},
{
"paragraph_id": 58,
"text": "The legislation about compulsory military service is articulated in the Danish Law of Conscription. National service takes 4–12 months. It is possible to postpone the duty when one is still in full-time education. Every male turning 18 will be drafted to the 'Day of Defence', where they will be introduced to the Danish military and their health will be tested. Physically unfit persons are not required to do military service. It is only compulsory for men, while women are free to choose to join the Danish army. Almost all of the men have been volunteers in recent years, 96.9% of the total number of recruits having been volunteers in the 2015 draft.",
"title": "By country"
},
{
"paragraph_id": 59,
"text": "After lottery, one can become a conscientious objector. Total objection (refusal from alternative civilian service) results in up to 4 months jailtime according to the law. However, in 2014 a Danish man, who signed up for the service and objected later, got only 14 days of home arrest. In many countries the act of desertion (objection after signing up) is punished harder than objecting the compulsory service.",
"title": "By country"
},
{
"paragraph_id": 60,
"text": "Estonia adopted a policy of ajateenistus (literally \"timed service\") in late 1991, having inherited the concept from Soviet legislature. According to §124 of the 1992 constitution, \"Estonian citizens have a duty to participate in national defence on the bases and pursuant to a procedure provided by a law\", which in practice means that men aged 18–27 are subject to the draft.",
"title": "By country"
},
{
"paragraph_id": 61,
"text": "In the formative years, conscripts had to serve an 18-month term. An amendment passed in 1994 shortened this to 12 months. Further revisions in 2003 established an eleven-month term for draftees trained as NCOs and drivers, and an eight-month term for rank & file. Under the current system, the yearly draft is divided into three \"waves\" - separate batches of eleven-month conscripts start their service in January and July while those selected for an eight-month term are brought in on October. An estimated 3200 people go through conscript service every year.",
"title": "By country"
},
{
"paragraph_id": 62,
"text": "Conscripts serve in all branches of the Estonian Defence Forces except the air force which only relies on paid professionals due to its highly technical nature and security concerns. Historically, draftees could also be assigned to the border guard (before it switched to an all-volunteer model in 2000), a special rapid response unit of the police force (disbanded in 1997) or three militarized rescue companies within the Estonian Rescue Board (disbanded in 2004).",
"title": "By country"
},
{
"paragraph_id": 63,
"text": "Conscription in Finland is part of a general compulsion for national military service for all adult males (Finnish: maanpuolustusvelvollisuus; Swedish: totalförsvarsplikt) defined in the 127§ of the Constitution of Finland.",
"title": "By country"
},
{
"paragraph_id": 64,
"text": "Conscription can take the form of military or of civilian service. According to Finnish Defence Forces 2011 data slightly under 80% of Finnish males turned 30 had entered and finished the military service. The number of female volunteers to annually enter armed service had stabilised at approximately 300. The service period is 165, 255 or 347 days for the rank and file conscripts and 347 days for conscripts trained as NCOs or reserve officers. The length of civilian service is always twelve months. Those electing to serve unarmed in duties where unarmed service is possible serve either nine or twelve months, depending on their training.",
"title": "By country"
},
{
"paragraph_id": 65,
"text": "Any Finnish male citizen who refuses to perform both military and civilian service faces a penalty of 173 days in prison, minus any served days. Such sentences are usually served fully in prison, with no parole. Jehovah's Witnesses are no longer exempted from service as of 27 February 2019. The inhabitants of demilitarized Åland are exempt from military service. By the Conscription Act of 1951, they are, however, required to serve a time at a local institution, like the coast guard. However, until such service has been arranged, they are freed from service obligation. The non-military service of Åland has not been arranged since the introduction of the act, and there are no plans to institute it. The inhabitants of Åland can also volunteer for military service on the mainland. As of 1995, women are permitted to serve on a voluntary basis and pursue careers in the military after their initial voluntary military service.",
"title": "By country"
},
{
"paragraph_id": 66,
"text": "The military service takes place in Finnish Defence Forces or in the Finnish Border Guard. All services of the Finnish Defence Forces train conscripts. However, the Border Guard trains conscripts only in land-based units, not in coast guard detachments or in the Border Guard Air Wing. Civilian service may take place in the Civilian Service Center in Lapinjärvi or in an accepted non-profit organization of educational, social or medical nature.",
"title": "By country"
},
{
"paragraph_id": 67,
"text": "Between 1956 and 2011 conscription was mandatory for all male citizens in the German federal armed forces (German: Bundeswehr), as well as for the Federal Border Guard (Bundesgrenzschutz) in the 1970s (see Border Guard Service). With the end of the Cold War the German government drastically reduced the size of its armed forces. The low demand for conscripts led to the suspension of compulsory conscription in 2011. Since then, only volunteer professionals serve in the Bundeswehr.",
"title": "By country"
},
{
"paragraph_id": 68,
"text": "Since 1914 Greece has been enforcing mandatory military service, currently lasting 12 months (but historically up to 36 months) for all adult men. Citizens discharged from active service are normally placed in the reserve and are subject to periodic recalls of 1–10 days at irregular intervals.",
"title": "By country"
},
{
"paragraph_id": 69,
"text": "Universal conscription was introduced in Greece during the military reforms of 1909, although various forms of selective conscription had been in place earlier. In more recent years, conscription was associated with the state of general mobilisation declared on 20 July 1974, due to the crisis in Cyprus (the mobilisation was formally ended on 18 December 2002).",
"title": "By country"
},
{
"paragraph_id": 70,
"text": "The duration of military service has historically ranged between 9 and 36 months depending on various factors either particular to the conscript or the political situation in the Eastern Mediterranean. Although women are employed by the Greek army as officers and soldiers, they are not obliged to enlist. Soldiers receive no health insurance, but they are provided with medical support during their army service, including hospitalization costs.",
"title": "By country"
},
{
"paragraph_id": 71,
"text": "Greece enforces conscription for all male citizens aged between 19 and 45. In August 2009, duration of the mandatory service was reduced from 12 months as it was before to 9 months for the army, but remained at 12 months for the navy and the air force. The number of conscripts allocated to the latter two has been greatly reduced aiming at full professionalization. Nevertheless, mandatory military service at the army was once again raised to 12 months in March 2021, unless served in units in Evros or the North Aegean islands where duration was kept at 9 months. Although full professionalization is under consideration, severe financial difficulties and mismanagement, including delays and reduced rates in the hiring of professional soldiers, as well as widespread abuse of the deferment process, has resulted in the postponement of such a plan.",
"title": "By country"
},
{
"paragraph_id": 72,
"text": "In Iran, all men who reach the age of 18 must do about two years of compulsory military service in the IR police department or Iranian army or Islamic Revolutionary Guard Corps. Before the 1979 revolution, women could serve in the military. However, after the establishment of the Islamic Republic, some Ayatollahs considered women's military service to be disrespectful to women by the Pahlavi government and banned women's military service in Iran. Therefore, Iranian women and girls were completely exempted from military service, which caused Iranian men and boys to oppose.",
"title": "By country"
},
{
"paragraph_id": 73,
"text": "In Iran, men who refuse to go to military service are deprived of their citizenship rights, such as employment, health insurance, continuing their education at university, finding a job, going abroad, opening a bank account, etc. Iranian men have so far opposed mandatory military service and demanded that military service in Iran become a job like in other countries, but the Islamic Republic is opposed to this demand. Some Iranian military commanders consider the elimination of conscription or improving the condition of soldiers as a security issue and one of Ali Khamenei's powers as the commander-in-chief of the armed forces, so they treat it with caution. In Iran, usually wealthy people are exempted from conscription. Some other men can be exempted from conscription due to their fathers serving in the Iran-Iraq war.",
"title": "By country"
},
{
"paragraph_id": 74,
"text": "There is a mandatory military service for all men and women in Israel who are fit and 18 years old. Men must serve 32 months while women serve 24 months, with the vast majority of conscripts being Jewish.",
"title": "By country"
},
{
"paragraph_id": 75,
"text": "Some Israeli citizens are exempt from mandatory service:",
"title": "By country"
},
{
"paragraph_id": 76,
"text": "All of the exempt above are eligible to volunteer to the Israel Defense Forces (IDF), as long as they declare so.",
"title": "By country"
},
{
"paragraph_id": 77,
"text": "Male Druze and male Circassian Israeli citizens are liable for conscription, in accordance with agreement set by their community leaders (their community leaders however signed a clause in which all female Druze and female Circassian are exempt from service).",
"title": "By country"
},
{
"paragraph_id": 78,
"text": "A few male Bedouin Israeli citizens choose to enlist to the Israeli military in every draft (despite their Muslim-Arab background that exempt them from conscription).",
"title": "By country"
},
{
"paragraph_id": 79,
"text": "There was mandatory military conscription for all white men in South Africa from 1968 until the end of apartheid in 1994. Under South African defense law, young white men had to undergo two years' continuous military training after they leave school, after which they had to serve 720 days in occasional military duty over the next 12 years. The End Conscription Campaign began in 1983 in opposition to the requirement. In the same year the National Party government announced plans to extend conscription to white immigrants in the country.",
"title": "By country"
},
{
"paragraph_id": 80,
"text": "Lithuania abolished its conscription in 2008. In May 2015, the Lithuanian parliament voted to reintroduce conscription and the conscripts started their training in August 2015. From 2015 to 2017 there were enough volunteers to avoid drafting civilians.",
"title": "By country"
},
{
"paragraph_id": 81,
"text": "Luxembourg practiced military conscription from 1948 until 1967.",
"title": "By country"
},
{
"paragraph_id": 82,
"text": "Moldova, which currently has male conscription, has announced plans to abolish the practice. Moldova's Defense Ministry announced that a plan which stipulates the gradual elimination of military conscription will be implemented starting from the autumn of 2018.",
"title": "By country"
},
{
"paragraph_id": 83,
"text": "Conscription, which was called \"Service Duty\" (Dutch: dienstplicht) in the Netherlands, was first employed in 1810 by French occupying forces. Napoleon's brother Louis Bonaparte, who was King of Holland from 1806 to 1810, had tried to introduce conscription a few years earlier, unsuccessfully. Every man aged 20 years or older had to enlist. By means of drawing lots it was decided who had to undertake service in the French army. It was possible to arrange a substitute against payment.",
"title": "By country"
},
{
"paragraph_id": 84,
"text": "Later on, conscription was used for all men over the age of 18. Postponement was possible, due to study, for example. Conscientious objectors could perform an alternative civilian service instead of military service. For various reasons, this forced military service was criticized at the end of the twentieth century. Since the Cold War was over, so was the direct threat of a war. Instead, the Dutch army was employed in more and more peacekeeping operations. The complexity and danger of these missions made the use of conscripts controversial. Furthermore, the conscription system was thought to be unfair as only men were drafted.",
"title": "By country"
},
{
"paragraph_id": 85,
"text": "In the European part of Netherlands, compulsory attendance has been officially suspended since 1 May 1997. Between 1991 and 1996, the Dutch armed forces phased out their conscript personnel and converted to an all-professional force. The last conscript troops were inducted in 1995, and demobilized in 1996. The suspension means that citizens are no longer forced to serve in the armed forces, as long as it is not required for the safety of the country. Since then, the Dutch army has become an all-professional force. However, to this day, every male and – from January 2020 onward – female citizen aged 17 gets a letter in which they are told that they have been registered but do not have to present themselves for service.",
"title": "By country"
},
{
"paragraph_id": 86,
"text": "Conscription was constitutionally established the 12 April 1907 with Kongeriket Norges Grunnlov § 119.. As of March 2016, Norway currently employs a weak form of mandatory military service for men and women. In practice recruits are not forced to serve, instead only those who are motivated are selected. About 60,000 Norwegians are available for conscription every year, but only 8,000 to 10,000 are conscripted. Since 1985, women have been able to enlist for voluntary service as regular recruits. On 14 June 2013 the Norwegian Parliament voted to extend conscription to women, making Norway the first NATO member and first European country to make national service compulsory for both sexes. In earlier times, up until at least the early 2000s, all men aged 19–44 were subject to mandatory service, with good reasons required to avoid becoming drafted. There is a right of conscientious objection.",
"title": "By country"
},
{
"paragraph_id": 87,
"text": "In addition to the military service, the Norwegian government draft a total of 8,000 men and women between 18 and 55 to non-military Civil defence duty. (Not to be confused with Alternative civilian service.) Former service in the military does not exclude anyone from later being drafted to the Civil defence, but an upper limit of total 19 months of service applies. Neglecting mobilisation orders to training exercises and actual incidents, may impose fines.",
"title": "By country"
},
{
"paragraph_id": 88,
"text": "As of 1 January 2011, Serbia no longer practises mandatory military service. Prior to this, mandatory military service lasted 6 months for men. Conscientious objectors could however opt for 9 months of civil service instead.",
"title": "By country"
},
{
"paragraph_id": 89,
"text": "On 15 December 2010, the Parliament of Serbia voted to suspend mandatory military service. The decision fully came into force on 1 January 2011.",
"title": "By country"
},
{
"paragraph_id": 90,
"text": "Sweden had conscription (Swedish: värnplikt) for men between 1901 and 2010. During the last few decades it was selective. Since 1980, women have been allowed to sign up by choice, and, if passing the tests, do military training together with male conscripts. Since 1989 women have been allowed to serve in all military positions and units, including combat.",
"title": "By country"
},
{
"paragraph_id": 91,
"text": "In 2010, conscription was made gender-neutral, meaning both women and men would be conscripted on equal terms. The conscription system was simultaneously deactivated in peacetime. Seven years later, referencing increased military threat, the Swedish Government reactivated military conscription. Beginning in 2018, both men and women are conscripted.",
"title": "By country"
},
{
"paragraph_id": 92,
"text": "Taiwan, officially the Republic of China (ROC), maintains an active conscription system. All qualified male citizens of military age are now obligated to receive 4-month of military training. In December 2022, President Tsai Ing-wen led the government to announce the reinstatement of the mandatory 1-year active duty military service from January 2024.",
"title": "By country"
},
{
"paragraph_id": 93,
"text": "The United Kingdom introduced conscription to full-time military service for the first time in January 1916 (the eighteenth month of World War I) and abolished it in 1920. Ireland, then part of the United Kingdom, was exempted from the original 1916 military service legislation, and although further legislation in 1918 gave power for an extension of conscription to Ireland, the power was never put into effect.",
"title": "By country"
},
{
"paragraph_id": 94,
"text": "Conscription was reintroduced in 1939, in the lead up to World War II, and continued in force until 1963. Northern Ireland was exempted from conscription legislation throughout the whole period.",
"title": "By country"
},
{
"paragraph_id": 95,
"text": "In all, eight million men were conscripted during both World Wars, as well as several hundred thousand younger single women. The introduction of conscription in May 1939, before the war began, was partly due to pressure from the French, who emphasized the need for a large British army to oppose the Germans. From early 1942 unmarried women age 19–30 were conscripted. Most were sent to the factories, but they could volunteer for the Auxiliary Territorial Service (ATS) and other women's services. Some women served in the Women's Land Army: initially volunteers but later conscription was introduced. However, women who were already working in a skilled job considered helpful to the war effort, such as a General Post Office telephonist, were told to continue working as before. None was assigned to combat roles unless she volunteered. By 1943 women were liable to some form of directed labour up to age 51. During the Second World War, 1.4 million British men volunteered for service and 3.2 million were conscripted. Conscripts comprised 50% of the Royal Air Force, 60% of the Royal Navy and 80% of the British Army.",
"title": "By country"
},
{
"paragraph_id": 96,
"text": "The abolition of conscription in Britain was announced on 4 April 1957, by new prime minister Harold Macmillan, with the last conscripts being recruited three years later.",
"title": "By country"
},
{
"paragraph_id": 97,
"text": "Conscription in the United States ended in 1973, but males aged between 18 and 25 are required to register with the Selective Service System to enable a reintroduction of conscription if necessary. President Gerald Ford had suspended mandatory draft registration in 1975, but President Jimmy Carter reinstated that requirement when the Soviet Union intervened in Afghanistan five years later. Consequently, Selective Service registration is still required of almost all young men. There have been no prosecutions for violations of the draft registration law since 1986. Males between the ages of 17 and 45, and female members of the US National Guard may be conscripted for federal militia service pursuant to 10 U.S. Code § 246 and the Militia Clauses of the United States Constitution.",
"title": "By country"
},
{
"paragraph_id": 98,
"text": "In February 2019, the United States District Court for the Southern District of Texas ruled that male-only conscription registration breached the Fourteenth Amendment's equal protection clause. In National Coalition for Men v. Selective Service System, a case brought by non-profit men's rights organisation the National Coalition for Men against the U.S. Selective Service System, judge Gray H. Miller issued a declaratory judgement that the male-only registration requirement is unconstitutional, though did not specify what action the government should take. That ruling was reversed by the Fifth Circuit. In June 2021, the U.S. Supreme Court declined to review the decision by the Court of Appeals.",
"title": "By country"
}
] | Conscription is the state-mandated enlistment of people in a national service, mainly a military service. Conscription dates back to antiquity and it continues in some countries to the present day under various names. The modern system of near-universal national conscription for young men dates to the French Revolution in the 1790s, where it became the basis of a very large and powerful military. Most European nations later copied the system in peacetime, so that men at a certain age would serve 1–8 years on active duty and then transfer to the reserve force. Conscription is controversial for a range of reasons, including conscientious objection to military engagements on religious or philosophical grounds; political objection, for example to service for a disliked government or unpopular war; sexism, in that historically men have been subject to the draft in most cases; and ideological objection, for example, to a perceived violation of individual rights. Those conscripted may evade service, sometimes by leaving the country and seeking asylum in another country. Some selection systems accommodate these attitudes by providing alternative service outside combat-operations roles or even outside the military, such as siviilipalvelus in Finland, Zivildienst in Austria, Germany and Switzerland. Several countries conscript male soldiers not only for the armed forces, but also for paramilitary agencies dedicated to police-like, domestic-only service, such as internal troops and border guards, or to non-combat rescue duties such as civil defence. As of 2023, many states no longer conscript their citizens, relying instead upon professional militaries with volunteers. The ability to rely on such an arrangement, however, presupposes some degree of predictability with regard to both war-fighting requirements and the scope of hostilities. Many states that have abolished conscription still, therefore, reserve the power to resume conscription during wartime or times of crisis. States involved in wars or interstate rivalries are most likely to implement conscription, and democracies are less likely than autocracies to implement conscription. With a few exceptions, such as Singapore and Egypt, former British colonies are less likely to have conscription, as they are influenced by British anti-conscription norms that can be traced back to the English Civil War; the United Kingdom abolished conscription in 1960. | 2001-06-08T09:07:43Z | 2023-12-22T14:39:21Z | [
"Template:Flag",
"Template:Cite periodical",
"Template:Dead link",
"Template:Refbegin",
"Template:Short description",
"Template:Conscription",
"Template:Legend",
"Template:Multiple issues",
"Template:Lang-fi",
"Template:Unreferenced section",
"Template:Wiktionary inline",
"Template:Lang",
"Template:Further",
"Template:As of",
"Template:Excerpt",
"Template:Cbignore",
"Template:More citations needed section",
"Template:See also",
"Template:Hatnote",
"Template:Clear",
"Template:In lang",
"Template:Lang-nl",
"Template:Cite book",
"Template:Cite web",
"Template:Citation",
"Template:Div col",
"Template:Div col end",
"Template:Cite magazine",
"Template:Authority control",
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:ISBN",
"Template:Cite CIA World Factbook",
"Template:IRL",
"Template:Lang-sv",
"Template:Lang-de",
"Template:Update section",
"Template:Webarchive",
"Template:Refn",
"Template:Cite news",
"Template:Redirect2",
"Template:Citation needed",
"Template:Who",
"Template:Verify source",
"Template:Commons category-inline",
"Template:Main",
"Template:Rp",
"Template:Cite journal",
"Template:Refend"
] | https://en.wikipedia.org/wiki/Conscription |
5,736 | Catherine Coleman | Catherine Grace "Cady" Coleman (born December 14, 1960) is an American chemist, engineer, former United States Air Force colonel, and retired NASA astronaut. She is a veteran of two Space Shuttle missions, and departed the International Space Station on May 23, 2011, as a crew member of Expedition 27 after logging 159 days in space.
Coleman graduated from Wilbert Tucker Woodson High School, Fairfax, Virginia, in 1978. In 1978–1979, she was an exchange student at Røyken Upper Secondary School in Norway with the AFS Intercultural Programs. She received a B.S. degree in chemistry from the Massachusetts Institute of Technology (MIT) in 1983 and was commissioned as a graduate of the Air Force Reserve Officer Training Corps (Air Force ROTC). She then received a Ph.D. degree in polymer science and engineering from the University of Massachusetts Amherst in 1991, where she was advised by Professor Thomas J. McCarthy. As an undergraduate, she was a member of the intercollegiate rowing crew and was a resident of Baker House.
Coleman continued to pursue her PhD at the University of Massachusetts Amherst as a second lieutenant. In 1988, she entered active duty at Wright-Patterson Air Force Base as a research chemist. During her work, she participated as a surface analysis consultant on the NASA Long Duration Exposure Facility experiment. In 1991, she received her doctorate in polymer science and engineering. She retired from the Air Force in November 2009 as a colonel.
Coleman was selected by NASA in 1992 to join the NASA Astronaut Corps. In 1995, she was a member of the STS-73 crew on the scientific mission USML-2, with experiments including biotechnology, combustion science, and the physics of fluids. During the flight, she reported to Houston Mission Control that she had spotted an unidentified flying object (UFO). She also trained for the mission STS-83 as the backup for Donald A. Thomas; however, as he recovered on time, she did not fly that mission. STS-93, in 1999, was Coleman's second space flight. She was the mission specialist in charge of deploying the Chandra X-ray Observatory and its Inertial Upper Stage out of the shuttle's cargo bay.
Coleman served as Chief of Robotics for the Astronaut Office, a role that included robotic arm operations and training for all Space Shuttle and International Space Station missions. In October 2004, Coleman served as an aquanaut during the NEEMO 7 mission aboard the Aquarius underwater laboratory, living and working underwater for eleven days.
Coleman was assigned as a backup U.S. crew member for Expeditions 19, 20 and 21 and served as a backup crew member for Expeditions 24 and 25 as part of her training for Expedition 26.
Coleman launched on December 15, 2010 (December 16, 2010 Baikonur time), aboard Soyuz TMA-20 to join the Expedition 26 mission aboard the International Space Station. She retired from NASA on December 1, 2016.
STS-73 on Space Shuttle Columbia (October 20 to November 5, 1995) was the second United States Microgravity Laboratory (USML-2) mission. The mission focused on materials science, biotechnology, combustion science, the physics of fluids, and numerous scientific experiments housed in the pressurized Spacelab module. In completing her first space flight, Coleman orbited the Earth 256 times, traveled over 6 million miles, and logged a total of 15 days, 21 hours, 52 minutes and 21 seconds in space.
STS-93 on Columbia (July 22 to 27, 1999) was a five-day mission during which Coleman was the lead mission specialist for the deployment of the Chandra X-ray Observatory. Designed to conduct comprehensive studies of the universe, the telescope will enable scientists to study exotic phenomena such as exploding stars, quasars, and black holes. Mission duration was 118 hours and 50 minutes.
Soyuz TMA-20 / Expedition 26/27 (December 15, 2010, to May 23, 2011) was an extended duration mission to the International Space Station.
Coleman is married to glass artist Josh Simpson, who lives in Massachusetts. They have one son. She is part of the band Bandella, which also includes fellow NASA astronaut Stephen Robinson, Canadian astronaut Chris Hadfield, and Micki Pettit (wife of the astronaut Donald Pettit). Coleman is a flute player and has taken several flutes with her to the ISS, including a pennywhistle from Paddy Moloney of The Chieftains, an old Irish flute from Matt Molloy of The Chieftains, and a flute from Ian Anderson of the band Jethro Tull. On February 15, 2011, she played one of the instruments live from orbit on National Public Radio. On April 12, 2011, she played live via video link for the audience of Jethro Tull's show in Russia in honour of the 50th anniversary of Yuri Gagarin's flight, playing in orbit while Anderson played on the ground. On May 13 of that year, Coleman delivered a taped commencement address to the class of 2011 at the University of Massachusetts Amherst.
As do many other astronauts, Coleman holds an amateur radio license (callsign: KC5ZTH).
As of 2015, she has also been a guest speaker at the Baylor College of Medicine for the children's program 'Saturday Morning Science'.
In 2018, she gave a graduation address to Carter Lynch, the sole graduate of Cuttyhunk Elementary School, on Cuttyhunk Island, Massachusetts.
In 2019, the Irish postal service An Post issued a set of commemorative stamps for the 50th anniversary of the Apollo Moon landings; Coleman is featured alongside fellow astronauts Neil Armstrong, Michael Collins, and Eileen Collins. | [
{
"paragraph_id": 0,
"text": "Catherine Grace \"Cady\" Coleman (born December 14, 1960) is an American chemist, engineer, former United States Air Force colonel, and retired NASA astronaut. She is a veteran of two Space Shuttle missions, and departed the International Space Station on May 23, 2011, as a crew member of Expedition 27 after logging 159 days in space.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Coleman graduated from Wilbert Tucker Woodson High School, Fairfax, Virginia, in 1978. In 1978–1979, she was an exchange student at Røyken Upper Secondary School in Norway with the AFS Intercultural Programs. She received a B.S. degree in chemistry from the Massachusetts Institute of Technology (MIT) in 1983 and was commissioned as graduate of the Air Force Reserve Officer Training Corps (Air Force ROTC)., then received a Ph.D. degree in polymer science and engineering from the University of Massachusetts Amherst in 1991. She was advised by Professor Thomas J. McCarthy on her doctorate. As an undergraduate, she was a member of the intercollegiate rowing crew and was a resident of Baker House.",
"title": "Education"
},
{
"paragraph_id": 2,
"text": "Coleman continued to pursue her PhD at the University of Massachusetts Amherst as a second lieutenant. In 1988, she entered active duty at Wright-Patterson Air Force Base as a research chemist. During her work, she participated as a surface analysis consultant on the NASA Long Duration Exposure Facility experiment. In 1991, she received her doctorate in polymer science and engineering. She retired from the Air Force in November 2009 as a colonel.",
"title": "Military career"
},
{
"paragraph_id": 3,
"text": "Coleman was selected by NASA in 1992 to join the NASA Astronaut Corps. In 1995, she was a member of the STS-73 crew on the scientific mission USML-2 with experiments including biotechnology, combustion science, and the physics of fluids. During the flight, she reported to Houston Mission Control that she had spotted an Unidentified flying object (UFO). She also trained for the mission STS-83 to be the backup for Donald A. Thomas; however, as he recovered on time, she did not fly that mission. STS-93 was Coleman's second space flight in 1999. She was mission specialist in charge of deploying the Chandra X-ray Observatory and its Inertial Upper Stage out of the shuttle's cargo bay.",
"title": "NASA career"
},
{
"paragraph_id": 4,
"text": "Coleman served as Chief of Robotics for the Astronaut Office, to include robotic arm operations and training for all Space Shuttle and International Space Station missions. In October 2004, Coleman served as an aquanaut during the NEEMO 7 mission aboard the Aquarius underwater laboratory, living and working underwater for eleven days.",
"title": "NASA career"
},
{
"paragraph_id": 5,
"text": "Coleman was assigned as a backup U.S. crew member for Expeditions 19, 20 and 21 and served as a backup crew member for Expeditions 24 and 25 as part of her training for Expedition 26.",
"title": "NASA career"
},
{
"paragraph_id": 6,
"text": "Coleman launched on December 15, 2010 (December 16, 2010 Baikonur time), aboard Soyuz TMA-20 to join the Expedition 26 mission aboard the International Space Station. She retired from NASA on December 1, 2016.",
"title": "NASA career"
},
{
"paragraph_id": 7,
"text": "STS-73 on Space Shuttle Columbia (October 20 to November 5, 1995) was the second United States Microgravity Laboratory (USML-2) mission. The mission focused on materials science, biotechnology, combustion science, the physics of fluids, and numerous scientific experiments housed in the pressurized Spacelab module. In completing her first space flight, Coleman orbited the Earth 256 times, traveled over 6 million miles, and logged a total of 15 days, 21 hours, 52 minutes and 21 seconds in space.",
"title": "NASA career"
},
{
"paragraph_id": 8,
"text": "STS-93 on Columbia (July 22 to 27, 1999) was a five-day mission during which Coleman was the lead mission specialist for the deployment of the Chandra X-ray Observatory. Designed to conduct comprehensive studies of the universe, the telescope will enable scientists to study exotic phenomena such as exploding stars, quasars, and black holes. Mission duration was 118 hours and 50 minutes.",
"title": "NASA career"
},
{
"paragraph_id": 9,
"text": "Soyuz TMA-20 / Expedition 26/27 (December 15, 2010, to May 23, 2011) was an extended duration mission to the International Space Station.",
"title": "NASA career"
},
{
"paragraph_id": 10,
"text": "Coleman is married to glass artist Josh Simpson who lives in Massachusetts. They have one son. She is part of the band Bandella, which also includes fellow NASA astronaut Stephen Robinson, Canadian astronaut Chris Hadfield, and Micki Pettit (wife of the astronaut Donald Pettit). Coleman is a flute player and has taken several flutes with her to the ISS, including a pennywhistle from Paddy Moloney of The Chieftains, an old Irish flute from Matt Molloy of The Chieftains, and a flute from Ian Anderson of Jethro Tull (band). On February 15, 2011, she played one of the instruments live from orbit on National Public Radio. On April 12, 2011, she played live via video link for the audience of Jethro Tull's show in Russia in honour of the 50th anniversary of Yuri Gagarin's flight, playing in orbit while Anderson played on the ground. On May 13 of that year, Coleman delivered a taped commencement address to the class of 2011 at the University of Massachusetts Amherst.",
"title": "Personal"
},
{
"paragraph_id": 11,
"text": "As do many other astronauts, Coleman holds an amateur radio license (callsign: KC5ZTH).",
"title": "Personal"
},
{
"paragraph_id": 12,
"text": "As of 2015, she is also known to be working as a guest speaker at the Baylor College of Medicine, for the children's program 'Saturday Morning Science'.",
"title": "Personal"
},
{
"paragraph_id": 13,
"text": "In 2018, she gave a graduation address to Carter Lynch, the sole graduate of Cuttyhunk Elementary School, on Cuttyhunk Island, Massachusetts.",
"title": "Personal"
},
{
"paragraph_id": 14,
"text": "In 2019 the Irish postal service An Post issued a set of commemorative stamps for the 50th anniversary of the Apollo Moon landings, Catherine Coleman is featured alongside fellow astronauts Neil Armstrong, Michael Collins, and Eileen Collins.",
"title": "Personal"
}
] | Catherine Grace "Cady" Coleman is an American chemist, engineer, former United States Air Force colonel, and retired NASA astronaut. She is a veteran of two Space Shuttle missions, and departed the International Space Station on May 23, 2011, as a crew member of Expedition 27 after logging 159 days in space. | 2001-06-08T13:13:29Z | 2023-11-20T05:26:13Z | [
"Template:Infobox astronaut",
"Template:Reflist",
"Template:Cite news",
"Template:Cbignore",
"Template:Commons category",
"Template:Authority control",
"Template:Use American English",
"Template:Use mdy dates",
"Template:Cite web",
"Template:Portal",
"Template:NASA Astronaut Group 14",
"Template:Short description",
"Template:Citation-attribution"
] | https://en.wikipedia.org/wiki/Catherine_Coleman |
5,738 | Cervix | The cervix (pl.: cervices) or cervix uteri (Latin, "neck of the uterus") is the lower part of the uterus (womb) in the human female reproductive system. The cervix is usually 2 to 3 cm long (~1 inch) and roughly cylindrical in shape, which changes during pregnancy. The narrow, central cervical canal runs along its entire length, connecting the uterine cavity and the lumen of the vagina. The opening into the uterus is called the internal os, and the opening into the vagina is called the external os. The lower part of the cervix, known as the vaginal portion of the cervix (or ectocervix), bulges into the top of the vagina. The cervix has been documented anatomically since at least the time of Hippocrates, over 2,000 years ago.
The cervical canal is a passage through which sperm must travel to fertilize an egg cell after sexual intercourse. Several methods of contraception, including cervical caps and cervical diaphragms, aim to block or prevent the passage of sperm through the cervical canal. Cervical mucus is used in several methods of fertility awareness, such as the Creighton model and Billings method, due to its changes in consistency throughout the menstrual period. During vaginal childbirth, the cervix must flatten and dilate to allow the fetus to progress along the birth canal. Midwives and doctors use the extent of the dilation of the cervix to assist decision-making during childbirth.
The cervical canal is lined with a single layer of column-shaped cells, while the ectocervix is covered with multiple layers of cells topped with flat cells. The two types of epithelia meet at the squamocolumnar junction. Infection with the human papillomavirus (HPV) can cause changes in the epithelium, which can lead to cancer of the cervix. Cervical cytology tests can often detect cervical cancer and its precursors, and enable early successful treatment. Ways to avoid HPV include avoiding sex, using condoms, and HPV vaccination. HPV vaccines, developed in the early 21st century, reduce the risk of cervical cancer by preventing infections from the main cancer-causing strains of HPV.
The cervix is part of the female reproductive system. Around 2–3 centimetres (0.8–1.2 in) in length, it is the lower narrower part of the uterus continuous above with the broader upper part—or body—of the uterus. The lower end of the cervix bulges through the anterior wall of the vagina, and is referred to as the vaginal portion of cervix (or ectocervix) while the rest of the cervix above the vagina is called the supravaginal portion of cervix. A central canal, known as the cervical canal, runs along its length and connects the cavity of the body of the uterus with the lumen of the vagina. The openings are known as the internal os and external orifice of the uterus (or external os), respectively. The mucosa lining the cervical canal is known as the endocervix, and the mucosa covering the ectocervix is known as the exocervix. The cervix has an inner mucosal layer, a thick layer of smooth muscle, and posteriorly the supravaginal portion has a serosal covering consisting of connective tissue and overlying peritoneum.
In front of the upper part of the cervix lies the bladder, separated from it by cellular connective tissue known as parametrium, which also extends over the sides of the cervix. To the rear, the supravaginal cervix is covered by peritoneum, which runs onto the back of the vaginal wall and then turns upwards and onto the rectum, forming the recto-uterine pouch. The cervix is more tightly connected to surrounding structures than the rest of the uterus.
The cervical canal varies greatly in length and width between women or over the course of a woman's life, and it can measure 8 mm (0.3 inch) at its widest diameter in premenopausal adults. It is wider in the middle and narrower at each end. The anterior and posterior walls of the canal each have a vertical fold, from which ridges run diagonally upwards and laterally. These are known as palmate folds, due to their resemblance to a palm leaf. The anterior and posterior ridges are arranged in such a way that they interlock with each other and close the canal. They are often effaced after pregnancy.
The ectocervix (also known as the vaginal portion of the cervix) has a convex, elliptical shape and projects into the vagina between the anterior and posterior vaginal fornices. On the rounded part of the ectocervix is a small, depressed external opening, connecting the cervix with the vagina. The size and shape of the ectocervix and the external opening (external os) can vary according to age, hormonal state, and whether childbirth has taken place. In women who have not had a vaginal delivery, the external opening is small and circular, and in women who have had a vaginal delivery, it is slit-like. On average, the ectocervix is 3 cm (1.2 in) long and 2.5 cm (1 in) wide.
Blood is supplied to the cervix by the descending branch of the uterine artery and drains into the uterine vein. The pelvic splanchnic nerves, emerging as S2–S3, transmit the sensation of pain from the cervix to the brain. These nerves travel along the uterosacral ligaments, which pass from the uterus to the anterior sacrum.
Three channels facilitate lymphatic drainage from the cervix. The anterior and lateral cervix drains to nodes along the uterine arteries, travelling along the cardinal ligaments at the base of the broad ligament to the external iliac lymph nodes and ultimately the paraaortic lymph nodes. The posterior and lateral cervix drains along the uterine arteries to the internal iliac lymph nodes and ultimately the paraaortic lymph nodes, and the posterior section of the cervix drains to the obturator and presacral lymph nodes. However, there are variations as lymphatic drainage from the cervix travels to different sets of pelvic nodes in some people. This has implications in scanning nodes for involvement in cervical cancer.
After menstruation and directly under the influence of estrogen, the cervix undergoes a series of changes in position and texture. During most of the menstrual cycle, the cervix remains firm, and is positioned low and closed. However, as ovulation approaches, the cervix becomes softer and rises to open in response to the higher levels of estrogen present. These changes are also accompanied by changes in cervical mucus, described below.
As a component of the female reproductive system, the cervix is derived from the two paramesonephric ducts (also called Müllerian ducts), which develop around the sixth week of embryogenesis. During development, the outer parts of the two ducts fuse, forming a single urogenital canal that will become the vagina, cervix and uterus. The cervix grows at a slower rate than the body of the uterus, so its relative size decreases over time: much larger than the body of the uterus in fetal life and twice as large during childhood, it reaches its adult size, smaller than the uterus, after puberty. Previously, it was thought that, during fetal development, the original squamous epithelium of the cervix was derived from the urogenital sinus and the original columnar epithelium from the paramesonephric duct. The point at which these two original epithelia meet is called the original squamocolumnar junction. Newer studies show, however, that all of the cervical epithelium, as well as a large part of the vaginal epithelium, is derived from Müllerian duct tissue, and that phenotypic differences might be due to other causes.
The endocervical mucosa is about 3 mm (0.12 in) thick and lined with a single layer of columnar mucous cells. It contains numerous tubular mucous glands, which empty viscous alkaline mucus into the lumen. In contrast, the ectocervix is covered with nonkeratinized stratified squamous epithelium, which resembles the squamous epithelium lining the vagina. The junction between these two types of epithelia is called the squamocolumnar junction. Underlying both types of epithelium is a tough layer of collagen. The mucosa of the endocervix is not shed during menstruation. The cervix has more fibrous tissue, including collagen and elastin, than the rest of the uterus.
In prepubertal girls, the functional squamocolumnar junction is present just within the cervical canal. Upon entering puberty, due to hormonal influence, and during pregnancy, the columnar epithelium extends outward over the ectocervix as the cervix everts. Hence, this also causes the squamocolumnar junction to move outwards onto the vaginal portion of the cervix, where it is exposed to the acidic vaginal environment. The exposed columnar epithelium can undergo physiological metaplasia and change to tougher metaplastic squamous epithelium in days or weeks, which is very similar to the original squamous epithelium when mature. The new squamocolumnar junction is therefore internal to the original squamocolumnar junction, and the zone of unstable epithelium between the two junctions is called the transformation zone of the cervix. Histologically, the transformation zone is generally defined as surface squamous epithelium with surface columnar epithelium or stromal glands/crypts, or both.
After menopause, the uterine structures involute and the functional squamocolumnar junction moves into the cervical canal.
Nabothian cysts (or Nabothian follicles) form in the transformation zone where the lining of metaplastic epithelium has replaced mucous epithelium and caused a strangulation of the outlet of some of the mucous glands. A buildup of mucus in the glands forms Nabothian cysts, usually less than about 5 mm (0.20 in) in diameter, which are considered physiological rather than pathological. Both gland openings and Nabothian cysts are helpful to identify the transformation zone.
The cervical canal is a pathway through which sperm enter the uterus after sexual intercourse, and in some forms of artificial insemination; their passage is aided by estradiol-induced changes in the cervical mucus. Some sperm remain in cervical crypts, infoldings of the endocervix, which act as a reservoir, releasing sperm over several hours and maximising the chances of fertilisation. One theory holds that cervical and uterine contractions during orgasm draw semen into the uterus. Although this "upsuck theory" has been generally accepted for some years, it has been disputed due to lack of evidence, small sample size, and methodological errors.
Some methods of fertility awareness, such as the Creighton model and the Billings method, involve estimating a woman's periods of fertility and infertility by observing physiological changes in her body. Among these changes are several involving the quality of her cervical mucus: the sensation it causes at the vulva, its elasticity (Spinnbarkeit), its transparency, and the presence of ferning.
Several hundred glands in the endocervix produce 20–60 mg of cervical mucus a day, increasing to 600 mg around the time of ovulation. It is viscous because it contains large proteins known as mucins. The viscosity and water content vary during the menstrual cycle; mucus is composed of around 93% water, reaching 98% at midcycle. These changes allow it to function either as a barrier or a transport medium to spermatozoa. It contains electrolytes such as calcium, sodium, and potassium; organic components such as glucose, amino acids, and soluble proteins; trace elements including zinc, copper, iron, manganese, and selenium; free fatty acids; enzymes such as amylase; and prostaglandins. Its consistency is determined by the influence of the hormones estrogen and progesterone. At midcycle around the time of ovulation—a period of high estrogen levels—the mucus is thin and serous to allow sperm to enter the uterus and is more alkaline and hence more hospitable to sperm. It is also higher in electrolytes, which results in the "ferning" pattern that can be observed in drying mucus under low magnification; as the mucus dries, the salts crystallize, resembling the leaves of a fern. The mucus has a stretchy character described as Spinnbarkeit, which is most prominent around the time of ovulation.
At other times in the cycle, the mucus is thick and more acidic due to the effects of progesterone. This "infertile" mucus acts as a barrier to keep sperm from entering the uterus. Women taking an oral contraceptive pill also have thick mucus from the effects of progesterone. Thick mucus also prevents pathogens from interfering with a nascent pregnancy.
A cervical mucus plug, called the operculum, forms inside the cervical canal during pregnancy. This provides a protective seal for the uterus against the entry of pathogens and against leakage of uterine fluids. The mucus plug is also known to have antibacterial properties. This plug is released as the cervix dilates, either during the first stage of childbirth or shortly before. It is visible as a blood-tinged mucous discharge.
The cervix plays a major role in childbirth. As the fetus descends within the uterus in preparation for birth, the presenting part, usually the head, rests on and is supported by the cervix. As labour progresses, the cervix becomes softer and shorter, begins to dilate, and withdraws to face the anterior of the body. The support the cervix provides to the fetal head starts to give way when the uterus begins its contractions. During childbirth, the cervix must dilate to a diameter of more than 10 cm (3.9 in) to accommodate the head of the fetus as it descends from the uterus to the vagina. In becoming wider, the cervix also becomes shorter, a phenomenon known as effacement.
Along with other factors, midwives and doctors use the extent of cervical dilation to assist decision-making during childbirth. Generally, the active first stage of labour, when the uterine contractions become strong and regular, begins when the cervical dilation is more than 3–5 cm (1.2–2.0 in). The second stage of labour begins when the cervix has dilated to 10 cm (4 in), which is regarded as full dilation, and is when active pushing and contractions move the baby along the birth canal, leading to birth. The number of past vaginal deliveries strongly influences how rapidly the cervix is able to dilate in labour. The time taken for the cervix to dilate and efface is one factor used in reporting systems such as the Bishop score, which helps determine whether interventions such as forceps delivery, induction, or Caesarean section should be used in childbirth.
Cervical incompetence is a condition in which shortening of the cervix, due to dilation and thinning, occurs before term pregnancy. Short cervical length is the strongest predictor of preterm birth.
Several methods of contraception involve the cervix. Cervical diaphragms are reusable, firm-rimmed plastic devices inserted by a woman prior to intercourse that cover the cervix. Pressure against the walls of the vagina maintains the position of the diaphragm, and it acts as a physical barrier to prevent the entry of sperm into the uterus, preventing fertilisation. Cervical caps are a similar method, although they are smaller and adhere to the cervix by suction. Diaphragms and caps are often used in conjunction with spermicides. In one year, 12% of women using the diaphragm will experience an unintended pregnancy, and with optimal use this falls to 6%. Efficacy rates are lower for the cap, with 18% of women experiencing an unintended pregnancy, and 10–13% with optimal use. Most types of progestogen-only pills are effective as a contraceptive because they thicken cervical mucus, making it difficult for sperm to pass along the cervical canal. They may also sometimes prevent ovulation. In contrast, contraceptive pills that contain both oestrogen and progesterone, the combined oral contraceptive pills, work mainly by preventing ovulation. They also thicken cervical mucus and thin the lining of the uterus, enhancing their effectiveness.
In 2008, cervical cancer was the third-most common cancer in women worldwide, with rates varying geographically from less than one to more than 50 cases per 100,000 women. It is a leading cause of cancer-related death in poor countries, where delayed diagnosis leading to poor outcomes is common. The introduction of routine screening has resulted in fewer cases of (and deaths from) cervical cancer; however, this has mainly taken place in developed countries. Most developing countries have limited or no screening, and 85% of the global burden occurs there.
Cervical cancer nearly always involves human papillomavirus (HPV) infection. HPV is a virus with numerous strains, several of which predispose to precancerous changes in the cervical epithelium, particularly in the transformation zone, which is the most common area for cervical cancer to start. HPV vaccines, such as Gardasil and Cervarix, reduce the incidence of cervical cancer, by inoculating against the viral strains involved in cancer development.
Potentially precancerous changes in the cervix can be detected by cervical screening, using methods including a Pap smear (also called a cervical smear), in which epithelial cells are scraped from the surface of the cervix and examined under a microscope. The colposcope, an instrument used to see a magnified view of the cervix, was invented in 1925. The Pap smear was developed by Georgios Papanikolaou in 1928. A LEEP procedure using a heated loop of platinum to excise a patch of cervical tissue was developed by Aurel Babes in 1927. In some parts of the developed world, including the UK, the Pap test has been superseded by liquid-based cytology.
A cheap, cost-effective and practical alternative in poorer countries is visual inspection with acetic acid (VIA). Instituting and sustaining cytology-based programs in these regions can be difficult, due to the need for trained personnel, equipment and facilities and difficulties in follow-up. With VIA, results and treatment can be available on the same day. As a screening test, VIA is comparable to cervical cytology in accurately identifying precancerous lesions.
A result of dysplasia is usually further investigated, such as by taking a cone biopsy, which may also remove the cancerous lesion. Cervical intraepithelial neoplasia is a possible result of the biopsy and represents dysplastic changes that may eventually progress to invasive cancer. Most cases of cervical cancer are detected in this way, without having caused any symptoms. When symptoms occur, they may include vaginal bleeding, discharge, or discomfort.
Inflammation of the cervix is referred to as cervicitis. This inflammation may be of the endocervix or ectocervix. When it involves the endocervix, it is associated with a mucous vaginal discharge and sexually transmitted infections such as chlamydia and gonorrhoea. As many as half of pregnant women with a gonorrheal infection of the cervix are asymptomatic. Other causes include overgrowth of the commensal flora of the vagina. When it involves the ectocervix, inflammation may be caused by the herpes simplex virus. Inflammation is often investigated by directly visualising the cervix using a speculum (the cervix may appear whitish due to exudate) and by taking a Pap smear and examining for causal bacteria. Special tests may be used to identify particular bacteria. If the inflammation is due to a bacterium, then antibiotics may be given as treatment.
Cervical stenosis is an abnormally narrow cervical canal, typically associated with trauma caused by removal of tissue for investigation or treatment of cancer, or cervical cancer itself. Diethylstilbestrol, used from 1938 to 1971 to prevent preterm labour and miscarriage, is also strongly associated with the development of cervical stenosis and other abnormalities in the daughters of the exposed women. Other abnormalities include: vaginal adenosis, in which the squamous epithelium of the ectocervix becomes columnar; cancers such as clear cell adenocarcinomas; cervical ridges and hoods; and development of a cockscomb cervix appearance, which is the condition wherein, as the name suggests, the cervix of the uterus is shaped like a cockscomb. About one third of women born to diethylstilbestrol-treated mothers (i.e. in-utero exposure) develop a cockscomb cervix.
Enlarged folds or ridges of cervical stroma (fibrous tissue) and epithelium constitute a cockscomb cervix. Similarly, cockscomb polyps lining the cervix are usually grouped under the same overarching description. It is in and of itself considered a benign abnormality; its presence, however, is usually indicative of DES exposure, and as such women who experience these abnormalities should be aware of their increased risk of associated pathologies.
Cervical agenesis is a rare congenital condition in which the cervix completely fails to develop, often associated with the concurrent failure of the vagina to develop. Other congenital cervical abnormalities exist, often associated with abnormalities of the vagina and uterus. The cervix may be duplicated in situations such as bicornuate uterus and uterine didelphys.
Cervical polyps, which are benign overgrowths of endocervical tissue, if present, may cause bleeding, or a benign overgrowth may be present in the cervical canal. Cervical ectropion refers to the horizontal overgrowth of the endocervical columnar lining in a one-cell-thick layer over the ectocervix.
Female marsupials have paired uteri and cervices. Most eutherian (placental) mammal species have a single cervix and single, bipartite or bicornuate uterus. Lagomorphs, rodents, aardvarks and hyraxes have a duplex uterus and two cervices. Lagomorphs and rodents share many morphological characteristics and are grouped together in the clade Glires. Anteaters of the family Myrmecophagidae are unusual in that they lack a defined cervix; they are thought to have lost the characteristic, rather than other mammals having developed a cervix on more than one lineage. In domestic pigs, the cervix contains a series of five interdigitating pads that hold the boar's corkscrew-shaped penis during copulation.
The word cervix (/ˈsɜːrvɪks/) came to English from Latin, where it means "neck", and like its Germanic counterpart, it can refer not only to the neck [of the body] but also to an analogous narrowed part of an object. The cervix uteri (neck of the uterus) is thus the uterine cervix, but in English the word cervix used alone usually refers to it. Thus the adjective cervical may refer either to the neck (as in cervical vertebrae or cervical lymph nodes) or to the uterine cervix (as in cervical cap or cervical cancer).
Latin cervix came from the Proto-Indo-European root ker-, referring to a "structure that projects". Thus, the word cervix is linguistically related to the English word "horn", the Persian word for "head" (Persian: سر sar), the Greek word for "head" (Greek: κορυφή koruphe), and the Welsh and Romanian words for "deer" (Welsh: carw, Romanian: cerb).
The cervix was documented in anatomical literature since at least the time of Hippocrates; cervical cancer was first described more than 2,000 years ago, with descriptions provided by both Hippocrates and Aretaeus. However, there was some variation in word sense among early writers, who used the term to refer to both the cervix and the internal uterine orifice. The first attested use of the word to refer to the cervix of the uterus was in 1702. | [
{
"paragraph_id": 0,
"text": "The cervix (pl.: cervices) or cervix uteri (Latin, \"neck of the uterus\") is the lower part of the uterus (womb) in the human female reproductive system. The cervix is usually 2 to 3 cm long (~1 inch) and roughly cylindrical in shape, which changes during pregnancy. The narrow, central cervical canal runs along its entire length, connecting the uterine cavity and the lumen of the vagina. The opening into the uterus is called the internal os, and the opening into the vagina is called the external os. The lower part of the cervix, known as the vaginal portion of the cervix (or ectocervix), bulges into the top of the vagina. The cervix has been documented anatomically since at least the time of Hippocrates, over 2,000 years ago.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The cervical canal is a passage through which sperm must travel to fertilize an egg cell after sexual intercourse. Several methods of contraception, including cervical caps and cervical diaphragms, aim to block or prevent the passage of sperm through the cervical canal. Cervical mucus is used in several methods of fertility awareness, such as the Creighton model and Billings method, due to its changes in consistency throughout the menstrual period. During vaginal childbirth, the cervix must flatten and dilate to allow the fetus to progress along the birth canal. Midwives and doctors use the extent of the dilation of the cervix to assist decision-making during childbirth.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The cervical canal is lined with a single layer of column-shaped cells, while the ectocervix is covered with multiple layers of cells topped with flat cells. The two types of epithelia meet at the squamocolumnar junction. Infection with the human papillomavirus (HPV) can cause changes in the epithelium, which can lead to cancer of the cervix. Cervical cytology tests can often detect cervical cancer and its precursors, and enable early successful treatment. Ways to avoid HPV include avoiding sex, using condoms, and HPV vaccination. HPV vaccines, developed in the early 21st century, reduce the risk of cervical cancer by preventing infections from the main cancer-causing strains of HPV.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The cervix is part of the female reproductive system. Around 2–3 centimetres (0.8–1.2 in) in length, it is the lower narrower part of the uterus continuous above with the broader upper part—or body—of the uterus. The lower end of the cervix bulges through the anterior wall of the vagina, and is referred to as the vaginal portion of cervix (or ectocervix) while the rest of the cervix above the vagina is called the supravaginal portion of cervix. A central canal, known as the cervical canal, runs along its length and connects the cavity of the body of the uterus with the lumen of the vagina. The openings are known as the internal os and external orifice of the uterus (or external os), respectively. The mucosa lining the cervical canal is known as the endocervix, and the mucosa covering the ectocervix is known as the exocervix. The cervix has an inner mucosal layer, a thick layer of smooth muscle, and posteriorly the supravaginal portion has a serosal covering consisting of connective tissue and overlying peritoneum.",
"title": "Structure"
},
{
"paragraph_id": 4,
"text": "In front of the upper part of the cervix lies the bladder, separated from it by cellular connective tissue known as parametrium, which also extends over the sides of the cervix. To the rear, the supravaginal cervix is covered by peritoneum, which runs onto the back of the vaginal wall and then turns upwards and onto the rectum, forming the recto-uterine pouch. The cervix is more tightly connected to surrounding structures than the rest of the uterus.",
"title": "Structure"
},
{
"paragraph_id": 5,
"text": "The cervical canal varies greatly in length and width between women or over the course of a woman's life, and it can measure 8 mm (0.3 inch) at its widest diameter in premenopausal adults. It is wider in the middle and narrower at each end. The anterior and posterior walls of the canal each have a vertical fold, from which ridges run diagonally upwards and laterally. These are known as palmate folds, due to their resemblance to a palm leaf. The anterior and posterior ridges are arranged in such a way that they interlock with each other and close the canal. They are often effaced after pregnancy.",
"title": "Structure"
},
{
"paragraph_id": 6,
"text": "The ectocervix (also known as the vaginal portion of the cervix) has a convex, elliptical shape and projects into the cervix between the anterior and posterior vaginal fornices. On the rounded part of the ectocervix is a small, depressed external opening, connecting the cervix with the vagina. The size and shape of the ectocervix and the external opening (external os) can vary according to age, hormonal state, and whether childbirth has taken place. In women who have not had a vaginal delivery, the external opening is small and circular, and in women who have had a vaginal delivery, it is slit-like. On average, the ectocervix is 3 cm (1.2 in) long and 2.5 cm (1 in) wide.",
"title": "Structure"
},
{
"paragraph_id": 7,
"text": "Blood is supplied to the cervix by the descending branch of the uterine artery and drains into the uterine vein. The pelvic splanchnic nerves, emerging as S2–S3, transmit the sensation of pain from the cervix to the brain. These nerves travel along the uterosacral ligaments, which pass from the uterus to the anterior sacrum.",
"title": "Structure"
},
{
"paragraph_id": 8,
"text": "Three channels facilitate lymphatic drainage from the cervix. The anterior and lateral cervix drains to nodes along the uterine arteries, travelling along the cardinal ligaments at the base of the broad ligament to the external iliac lymph nodes and ultimately the paraaortic lymph nodes. The posterior and lateral cervix drains along the uterine arteries to the internal iliac lymph nodes and ultimately the paraaortic lymph nodes, and the posterior section of the cervix drains to the obturator and presacral lymph nodes. However, there are variations as lymphatic drainage from the cervix travels to different sets of pelvic nodes in some people. This has implications in scanning nodes for involvement in cervical cancer.",
"title": "Structure"
},
{
"paragraph_id": 9,
"text": "After menstruation and directly under the influence of estrogen, the cervix undergoes a series of changes in position and texture. During most of the menstrual cycle, the cervix remains firm, and is positioned low and closed. However, as ovulation approaches, the cervix becomes softer and rises to open in response to the higher levels of estrogen present. These changes are also accompanied by changes in cervical mucus, described below.",
"title": "Structure"
},
{
"paragraph_id": 10,
"text": "As a component of the female reproductive system, the cervix is derived from the two paramesonephric ducts (also called Müllerian ducts), which develop around the sixth week of embryogenesis. During development, the outer parts of the two ducts fuse, forming a single urogenital canal that will become the vagina, cervix and uterus. The cervix grows in size at a smaller rate than the body of the uterus, so the relative size of the cervix over time decreases, decreasing from being much larger than the body of the uterus in fetal life, twice as large during childhood, and decreasing to its adult size, smaller than the uterus, after puberty. Previously, it was thought that during fetal development, the original squamous epithelium of the cervix is derived from the urogenital sinus and the original columnar epithelium is derived from the paramesonephric duct. The point at which these two original epithelia meet is called the original squamocolumnar junction. New studies show, however, that all the cervical as well as large part of the vaginal epithelium are derived from Müllerian duct tissue and that phenotypic differences might be due to other causes.",
"title": "Structure"
},
{
"paragraph_id": 11,
"text": "",
"title": "Structure"
},
{
"paragraph_id": 12,
"text": "The endocervical mucosa is about 3 mm (0.12 in) thick and lined with a single layer of columnar mucous cells. It contains numerous tubular mucous glands, which empty viscous alkaline mucus into the lumen. In contrast, the ectocervix is covered with nonkeratinized stratified squamous epithelium, which resembles the squamous epithelium lining the vagina. The junction between these two types of epithelia is called the squamocolumnar junction. Underlying both types of epithelium is a tough layer of collagen. The mucosa of the endocervix is not shed during menstruation. The cervix has more fibrous tissue, including collagen and elastin, than the rest of the uterus.",
"title": "Structure"
},
{
"paragraph_id": 13,
"text": "In prepubertal girls, the functional squamocolumnar junction is present just within the cervical canal. Upon entering puberty, due to hormonal influence, and during pregnancy, the columnar epithelium extends outward over the ectocervix as the cervix everts. Hence, this also causes the squamocolumnar junction to move outwards onto the vaginal portion of the cervix, where it is exposed to the acidic vaginal environment. The exposed columnar epithelium can undergo physiological metaplasia and change to tougher metaplastic squamous epithelium in days or weeks, which is very similar to the original squamous epithelium when mature. The new squamocolumnar junction is therefore internal to the original squamocolumnar junction, and the zone of unstable epithelium between the two junctions is called the transformation zone of the cervix. Histologically, the transformation zone is generally defined as surface squamous epithelium with surface columnar epithelium or stromal glands/crypts, or both.",
"title": "Structure"
},
{
"paragraph_id": 14,
"text": "After menopause, the uterine structures involute and the functional squamocolumnar junction moves into the cervical canal.",
"title": "Structure"
},
{
"paragraph_id": 15,
"text": "Nabothian cysts (or Nabothian follicles) form in the transformation zone where the lining of metaplastic epithelium has replaced mucous epithelium and caused a strangulation of the outlet of some of the mucous glands. A buildup of mucus in the glands forms Nabothian cysts, usually less than about 5 mm (0.20 in) in diameter, which are considered physiological rather than pathological. Both gland openings and Nabothian cysts are helpful to identify the transformation zone.",
"title": "Structure"
},
{
"paragraph_id": 16,
"text": "The cervical canal is a pathway through which sperm enter the uterus after being induced by estradiol after sexual intercourse, and some forms of artificial insemination. Some sperm remains in cervical crypts, infoldings of the endocervix, which act as a reservoir, releasing sperm over several hours and maximising the chances of fertilisation. A theory states the cervical and uterine contractions during orgasm draw semen into the uterus. Although the \"upsuck theory\" has been generally accepted for some years, it has been disputed due to lack of evidence, small sample size, and methodological errors.",
"title": "Function"
},
{
"paragraph_id": 17,
"text": "Some methods of fertility awareness, such as the Creighton model and the Billings method involve estimating a woman's periods of fertility and infertility by observing physiological changes in her body. Among these changes are several involving the quality of her cervical mucus: the sensation it causes at the vulva, its elasticity (Spinnbarkeit), its transparency, and the presence of ferning.",
"title": "Function"
},
{
"paragraph_id": 18,
"text": "Several hundred glands in the endocervix produce 20–60 mg of cervical mucus a day, increasing to 600 mg around the time of ovulation. It is viscous because it contains large proteins known as mucins. The viscosity and water content varies during the menstrual cycle; mucus is composed of around 93% water, reaching 98% at midcycle. These changes allow it to function either as a barrier or a transport medium to spermatozoa. It contains electrolytes such as calcium, sodium, and potassium; organic components such as glucose, amino acids, and soluble proteins; trace elements including zinc, copper, iron, manganese, and selenium; free fatty acids; enzymes such as amylase; and prostaglandins. Its consistency is determined by the influence of the hormones estrogen and progesterone. At midcycle around the time of ovulation—a period of high estrogen levels— the mucus is thin and serous to allow sperm to enter the uterus and is more alkaline and hence more hospitable to sperm. It is also higher in electrolytes, which results in the \"ferning\" pattern that can be observed in drying mucus under low magnification; as the mucus dries, the salts crystallize, resembling the leaves of a fern. The mucus has a stretchy character described as Spinnbarkeit most prominent around the time of ovulation.",
"title": "Function"
},
{
"paragraph_id": 19,
"text": "At other times in the cycle, the mucus is thick and more acidic due to the effects of progesterone. This \"infertile\" mucus acts as a barrier to keep sperm from entering the uterus. Women taking an oral contraceptive pill also have thick mucus from the effects of progesterone. Thick mucus also prevents pathogens from interfering with a nascent pregnancy.",
"title": "Function"
},
{
"paragraph_id": 20,
"text": "A cervical mucus plug, called the operculum, forms inside the cervical canal during pregnancy. This provides a protective seal for the uterus against the entry of pathogens and against leakage of uterine fluids. The mucus plug is also known to have antibacterial properties. This plug is released as the cervix dilates, either during the first stage of childbirth or shortly before. It is visible as a blood-tinged mucous discharge.",
"title": "Function"
},
{
"paragraph_id": 21,
"text": "The cervix plays a major role in childbirth. As the fetus descends within the uterus in preparation for birth, the presenting part, usually the head, rests on and is supported by the cervix. As labour progresses, the cervix becomes softer and shorter, begins to dilate, and withdraws to face the anterior of the body. The support the cervix provides to the fetal head starts to give way when the uterus begins its contractions. During childbirth, the cervix must dilate to a diameter of more than 10 cm (3.9 in) to accommodate the head of the fetus as it descends from the uterus to the vagina. In becoming wider, the cervix also becomes shorter, a phenomenon known as effacement.",
"title": "Function"
},
{
"paragraph_id": 22,
"text": "Along with other factors, midwives and doctors use the extent of cervical dilation to assist decision making during childbirth. Generally, the active first stage of labour, when the uterine contractions become strong and regular, begins when the cervical dilation is more than 3–5 cm (1.2–2.0 in). The second phase of labor begins when the cervix has dilated to 10 cm (4 in), which is regarded as its fullest dilation, and is when active pushing and contractions push the baby along the birth canal leading to the birth of the baby. The number of past vaginal deliveries is a strong factor in influencing how rapidly the cervix is able to dilate in labour. The time taken for the cervix to dilate and efface is one factor used in reporting systems such as the Bishop score, used to recommend whether interventions such as a forceps delivery, induction, or Caesarean section should be used in childbirth.",
"title": "Function"
},
{
"paragraph_id": 23,
"text": "Cervical incompetence is a condition in which shortening of the cervix due to dilation and thinning occurs, before term pregnancy. Short cervical length is the strongest predictor of preterm birth.",
"title": "Function"
},
{
"paragraph_id": 24,
"text": "Several methods of contraception involve the cervix. Cervical diaphragms are reusable, firm-rimmed plastic devices inserted by a woman prior to intercourse that cover the cervix. Pressure against the walls of the vagina maintain the position of the diaphragm, and it acts as a physical barrier to prevent the entry of sperm into the uterus, preventing fertilisation. Cervical caps are a similar method, although they are smaller and adhere to the cervix by suction. Diaphragms and caps are often used in conjunction with spermicides. In one year, 12% of women using the diaphragm will undergo an unintended pregnancy, and with optimal use this falls to 6%. Efficacy rates are lower for the cap, with 18% of women undergoing an unintended pregnancy, and 10–13% with optimal use. Most types of progestogen-only pills are effective as a contraceptive because they thicken cervical mucus, making it difficult for sperm to pass along the cervical canal. In addition, they may also sometimes prevent ovulation. In contrast, contraceptive pills that contain both oestrogen and progesterone, the combined oral contraceptive pills, work mainly by preventing ovulation. They also thicken cervical mucus and thin the lining of the uterus, enhancing their effectiveness.",
"title": "Function"
},
{
"paragraph_id": 25,
"text": "In 2008, cervical cancer was the third-most common cancer in women worldwide, with rates varying geographically from less than one to more than 50 cases per 100,000 women. It is a leading cause of cancer-related death in poor countries, where delayed diagnosis leading to poor outcomes is common. The introduction of routine screening has resulted in fewer cases of (and deaths from) cervical cancer, however this has mainly taken place in developed countries. Most developing countries have limited or no screening, and 85% of the global burden occurring there.",
"title": "Clinical significance"
},
{
"paragraph_id": 26,
"text": "Cervical cancer nearly always involves human papillomavirus (HPV) infection. HPV is a virus with numerous strains, several of which predispose to precancerous changes in the cervical epithelium, particularly in the transformation zone, which is the most common area for cervical cancer to start. HPV vaccines, such as Gardasil and Cervarix, reduce the incidence of cervical cancer, by inoculating against the viral strains involved in cancer development.",
"title": "Clinical significance"
},
{
"paragraph_id": 27,
"text": "Potentially precancerous changes in the cervix can be detected by cervical screening, using methods including a Pap smear (also called a cervical smear), in which epithelial cells are scraped from the surface of the cervix and examined under a microscope. The colposcope, an instrument used to see a magnified view of the cervix, was invented in 1925. The Pap smear was developed by Georgios Papanikolaou in 1928. A LEEP procedure using a heated loop of platinum to excise a patch of cervical tissue was developed by Aurel Babes in 1927. In some parts of the developed world including the UK, the Pap test has been superseded with liquid-based cytology.",
"title": "Clinical significance"
},
{
"paragraph_id": 28,
"text": "A cheap, cost-effective and practical alternative in poorer countries is visual inspection with acetic acid (VIA). Instituting and sustaining cytology-based programs in these regions can be difficult, due to the need for trained personnel, equipment and facilities and difficulties in follow-up. With VIA, results and treatment can be available on the same day. As a screening test, VIA is comparable to cervical cytology in accurately identifying precancerous lesions.",
"title": "Clinical significance"
},
{
"paragraph_id": 29,
"text": "A result of dysplasia is usually further investigated, such as by taking a cone biopsy, which may also remove the cancerous lesion. Cervical intraepithelial neoplasia is a possible result of the biopsy and represents dysplastic changes that may eventually progress to invasive cancer. Most cases of cervical cancer are detected in this way, without having caused any symptoms. When symptoms occur, they may include vaginal bleeding, discharge, or discomfort.",
"title": "Clinical significance"
},
{
"paragraph_id": 30,
"text": "Inflammation of the cervix is referred to as cervicitis. This inflammation may be of the endocervix or ectocervix. When associated with the endocervix, it is associated with a mucous vaginal discharge and sexually transmitted infections such as chlamydia and gonorrhoea. As many as half of pregnant women having a gonorrheal infection of the cervix are asymptomatic. Other causes include overgrowth of the commensal flora of the vagina. When associated with the ectocervix, inflammation may be caused by the herpes simplex virus. Inflammation is often investigated through directly visualising the cervix using a speculum, which may appear whiteish due to exudate, and by taking a Pap smear and examining for causal bacteria. Special tests may be used to identify particular bacteria. If the inflammation is due to a bacterium, then antibiotics may be given as treatment.",
"title": "Clinical significance"
},
{
"paragraph_id": 31,
"text": "Cervical stenosis is an abnormally narrow cervical canal, typically associated with trauma caused by removal of tissue for investigation or treatment of cancer, or cervical cancer itself. Diethylstilbestrol, used from 1938 to 1971 to prevent preterm labour and miscarriage, is also strongly associated with the development of cervical stenosis and other abnormalities in the daughters of the exposed women. Other abnormalities include: vaginal adenosis, in which the squamous epithelium of the ectocervix becomes columnar; cancers such as clear cell adenocarcinomas; cervical ridges and hoods; and development of a cockscomb cervix appearance, which is the condition wherein, as the name suggests, the cervix of the uterus is shaped like a cockscomb. About one third of women born to diethylstilbestrol-treated mothers (i.e. in-utero exposure) develop a cockscomb cervix.",
"title": "Clinical significance"
},
{
"paragraph_id": 32,
"text": "Enlarged folds or ridges of cervical stroma (fibrous tissues) and epithelium constitute a cockscomb cervix. Similarly, cockscomb polyps lining the cervix are usually considered or grouped into the same overarching description. It is in and of itself considered a benign abnormality; its presence, however is usually indicative of DES exposure, and as such women who experience these abnormalities should be aware of their increased risk of associated pathologies.",
"title": "Clinical significance"
},
{
"paragraph_id": 33,
"text": "Cervical agenesis is a rare congenital condition in which the cervix completely fails to develop, often associated with the concurrent failure of the vagina to develop. Other congenital cervical abnormalities exist, often associated with abnormalities of the vagina and uterus. The cervix may be duplicated in situations such as bicornuate uterus and uterine didelphys.",
"title": "Clinical significance"
},
{
"paragraph_id": 34,
"text": "Cervical polyps, which are benign overgrowths of endocervical tissue, if present, may cause bleeding, or a benign overgrowth may be present in the cervical canal. Cervical ectropion refers to the horizontal overgrowth of the endocervical columnar lining in a one-cell-thick layer over the ectocervix.",
"title": "Clinical significance"
},
{
"paragraph_id": 35,
"text": "Female marsupials have paired uteri and cervices. Most eutherian (placental) mammal species have a single cervix and single, bipartite or bicornuate uterus. Lagomorphs, rodents, aardvarks and hyraxes have a duplex uterus and two cervices. Lagomorphs and rodents share many morphological characteristics and are grouped together in the clade Glires. Anteaters of the family myrmecophagidae are unusual in that they lack a defined cervix; they are thought to have lost the characteristic rather than other mammals developing a cervix on more than one lineage. In domestic pigs, the cervix contains a series of five interdigitating pads that hold the boar's corkscrew-shaped penis during copulation.",
"title": "Other animals"
},
{
"paragraph_id": 36,
"text": "The word cervix (/ˈsɜːrvɪks/) came to English from Latin, where it means \"neck\", and like its Germanic counterpart, it can refer not only to the neck [of the body] but also to an analogous narrowed part of an object. The cervix uteri (neck of the uterus) is thus the uterine cervix, but in English the word cervix used alone usually refers to it. Thus the adjective cervical may refer either to the neck (as in cervical vertebrae or cervical lymph nodes) or to the uterine cervix (as in cervical cap or cervical cancer).",
"title": "Etymology and pronunciation"
},
{
"paragraph_id": 37,
"text": "Latin cervix came from the Proto-Indo-European root ker-, referring to a \"structure that projects\". Thus, the word cervix is linguistically related to the English word \"horn\", the Persian word for \"head\" (Persian: سر sar), the Greek word for \"head\" (Greek: κορυφή koruphe), and the Welsh and Romanian words for \"deer\" (Welsh: carw, Romanian: cerb).",
"title": "Etymology and pronunciation"
},
{
"paragraph_id": 38,
"text": "The cervix was documented in anatomical literature in at least the time of Hippocrates; cervical cancer was first described more than 2,000 years ago, with descriptions provided by both Hippocrates and Aretaeus. However, there was some variation in word sense among early writers, who used the term to refer to both the cervix and the internal uterine orifice. The first attested use of the word to refer to the cervix of the uterus was in 1702.",
"title": "Etymology and pronunciation"
}
] | The cervix or cervix uteri is the lower part of the uterus (womb) in the human female reproductive system. The cervix is usually 2 to 3 cm long and roughly cylindrical in shape, which changes during pregnancy. The narrow, central cervical canal runs along its entire length, connecting the uterine cavity and the lumen of the vagina. The opening into the uterus is called the internal os, and the opening into the vagina is called the external os. The lower part of the cervix, known as the vaginal portion of the cervix, bulges into the top of the vagina. The cervix has been documented anatomically since at least the time of Hippocrates, over 2,000 years ago. The cervical canal is a passage through which sperm must travel to fertilize an egg cell after sexual intercourse. Several methods of contraception, including cervical caps and cervical diaphragms, aim to block or prevent the passage of sperm through the cervical canal. Cervical mucus is used in several methods of fertility awareness, such as the Creighton model and Billings method, due to its changes in consistency throughout the menstrual cycle. During vaginal childbirth, the cervix must flatten and dilate to allow the fetus to progress along the birth canal. Midwives and doctors use the extent of the dilation of the cervix to assist decision-making during childbirth. The cervical canal is lined with a single layer of column-shaped cells, while the ectocervix is covered with multiple layers of cells topped with flat cells. The two types of epithelia meet at the squamocolumnar junction. Infection with the human papillomavirus (HPV) can cause changes in the epithelium, which can lead to cancer of the cervix. Cervical cytology tests can often detect cervical cancer and its precursors, and enable early successful treatment. Ways to avoid HPV include avoiding sex, using condoms, and HPV vaccination. HPV vaccines, developed in the early 21st century, reduce the risk of cervical cancer by preventing infections from the main cancer-causing strains of HPV. | 2001-06-08T17:51:13Z | 2023-12-19T11:38:37Z | [
"Template:Cite web",
"Template:Cite news",
"Template:Convert",
"Template:Rp",
"Template:Anchor",
"Template:Lang-fa",
"Template:Cite journal",
"Template:Commons category-inline",
"Template:Short description",
"Template:Infobox anatomy",
"Template:Multiple image",
"Template:Lang-cy",
"Template:IPAc-en",
"Template:Lang-grc-gre",
"Template:Female reproductive system",
"Template:Authority control",
"Template:Old fact",
"Template:Reflist",
"Template:Cite book",
"Template:Women's health",
"Template:Other uses",
"Template:Good article",
"Template:Plural form",
"Template:Main"
] | https://en.wikipedia.org/wiki/Cervix |
5,739 | Compiler | In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program.
There are many different types of compilers which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language.
Related software includes decompilers, programs that translate from low-level languages to higher level ones; programs that translate between high-level languages, usually called source-to-source compilers or transpilers; language rewriters, usually programs that translate the form of expressions without a change of language; and compiler-compilers, compilers that produce compilers (or parts of them), often in a generic and reusable way so as to be able to produce many differing compilers.
A compiler is likely to perform some or all of the following operations, often called phases: preprocessing, lexical analysis, parsing, semantic analysis (syntax-directed translation), conversion of input programs to an intermediate representation, code optimization and machine specific code generation. Compilers generally implement these phases as modular components, promoting efficient design and correctness of transformations of source input to target output. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness.
Compilers are not the only language processor used to transform source programs. An interpreter is computer software that transforms and then executes the indicated operations. The translation process influences the design of computer languages, which leads to a preference for compilation or interpretation. In theory, a programming language can have both a compiler and an interpreter. In practice, programming languages tend to be associated with just one (a compiler or an interpreter).
Theoretical computing concepts developed by scientists, mathematicians, and engineers formed the basis of modern digital computing development during World War II. Primitive binary languages evolved because digital devices only understand ones and zeros and the circuit patterns in the underlying machine architecture. In the late 1940s, assembly languages were created to offer a more workable abstraction of the computer architectures. Limited memory capacity of early computers led to substantial technical challenges when the first compilers were designed. Therefore, the compilation process needed to be divided into several small programs. The front end programs produce the analysis products used by the back end programs to generate target code. As computer technology provided more resources, compiler designs could align better with the compilation process.
It is usually more productive for a programmer to use a high-level language, so the development of high-level languages followed naturally from the capabilities offered by digital computers. High-level languages are formal languages that are strictly defined by their syntax and semantics which form the high-level language architecture. Elements of these formal languages include:
The sentences in a language may be defined by a set of rules called a grammar.
Backus–Naur form (BNF) describes the syntax of "sentences" of a language and was used for the syntax of Algol 60 by John Backus. The ideas derive from the context-free grammar concepts by Noam Chomsky, a linguist. "BNF and its extensions have become standard tools for describing the syntax of programming notations, and in many cases parts of compilers are generated automatically from a BNF description."
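For illustration, a small grammar for arithmetic expressions could be written in BNF as follows (a constructed example, not taken from the Algol 60 report):
    <expr>   ::= <term> | <expr> "+" <term> | <expr> "-" <term>
    <term>   ::= <factor> | <term> "*" <factor> | <term> "/" <factor>
    <factor> ::= <number> | "(" <expr> ")"
    <number> ::= <digit> | <number> <digit>
    <digit>  ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
Parser generators such as Yacc accept grammars in closely related notations and emit the corresponding parsing code automatically.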
Between 1942 and 1945, Konrad Zuse designed the first (algorithmic) programming language for computers called Plankalkül ("Plan Calculus"). Zuse also envisioned a Planfertigungsgerät ("Plan assembly device") to automatically translate the mathematical formulation of a program into machine-readable punched film stock. While no actual implementation occurred until the 1970s, it presented concepts later seen in APL designed by Ken Iverson in the late 1950s. APL is a language for mathematical computations.
Between 1949 and 1951, Heinz Rutishauser proposed Superplan, a high-level language and automatic translator. His ideas were later refined by Friedrich L. Bauer and Klaus Samelson.
High-level language design during the formative years of digital computing provided useful programming tools for a variety of applications:
Compiler technology evolved from the need for a strictly defined transformation of the high-level source program into a low-level target program for the digital computer. The compiler could be viewed as a front end to deal with the analysis of the source code and a back end to synthesize the analysis into the target code. Optimization between the front end and back end could produce more efficient target code.
Some early milestones in the development of compiler technology:
Early operating systems and software were written in assembly language. In the 1960s and early 1970s, the use of high-level languages for system programming was still controversial due to resource limitations. However, several research and industry efforts began the shift toward high-level systems programming languages, for example, BCPL, BLISS, B, and C.
BCPL (Basic Combined Programming Language), designed in 1966 by Martin Richards at the University of Cambridge, was originally developed as a compiler writing tool. Several compilers have been implemented; Richards' book provides insights into the language and its compiler. BCPL was not only an influential systems programming language that is still used in research but also provided a basis for the design of the B and C languages.
BLISS (Basic Language for Implementation of System Software) was developed for a Digital Equipment Corporation (DEC) PDP-10 computer by W. A. Wulf's Carnegie Mellon University (CMU) research team. The CMU team went on to develop the BLISS-11 compiler one year later, in 1970.
Multics (Multiplexed Information and Computing Service), a time-sharing operating system project, involved MIT, Bell Labs, General Electric (later Honeywell) and was led by Fernando Corbató from MIT. Multics was written in the PL/I language developed by IBM and the IBM User Group. IBM's goal was to satisfy business, scientific, and systems programming requirements. There were other languages that could have been considered but PL/I offered the most complete solution even though it had not been implemented. For the first few years of the Multics project, a subset of the language could be compiled to assembly language with the Early PL/I (EPL) compiler by Doug McIlroy and Bob Morris from Bell Labs. EPL supported the project until a boot-strapping compiler for the full PL/I could be developed.
Bell Labs left the Multics project in 1969, and developed a system programming language B based on BCPL concepts, written by Dennis Ritchie and Ken Thompson. Ritchie created a boot-strapping compiler for B and wrote the Unics (Uniplexed Information and Computing Service) operating system for a PDP-7 in B. Unics eventually came to be spelled Unix.
Bell Labs started the development and expansion of C based on B and BCPL. The BCPL compiler had been transported to Multics by Bell Labs and BCPL was a preferred language at Bell Labs. Initially, a front-end program to Bell Labs' B compiler was used while a C compiler was developed. In 1971, a new PDP-11 provided the resources to define extensions to B and rewrite the compiler. By 1973 the design of the C language was essentially complete and the Unix kernel for a PDP-11 was rewritten in C. Steve Johnson started development of the Portable C Compiler (PCC) to support retargeting of C compilers to new machines.
Object-oriented programming (OOP) offered some interesting possibilities for application development and maintenance. OOP concepts go further back, having been part of the LISP and Simula languages. Bell Labs became interested in OOP with the development of C++. C++ was first used in 1980 for systems programming. The initial design leveraged C language systems programming capabilities with Simula concepts. Object-oriented facilities were added in 1983. The Cfront program implemented a C++ front-end for the C84 language compiler. In subsequent years several C++ compilers were developed as C++ popularity grew.
In many application domains, the idea of using a higher-level language quickly caught on. Because of the expanding functionality supported by newer programming languages and the increasing complexity of computer architectures, compilers became more complex.
DARPA (Defense Advanced Research Projects Agency) sponsored a compiler project with Wulf's CMU research team in 1970. The Production Quality Compiler-Compiler (PQCC) design would produce a Production Quality Compiler (PQC) from formal definitions of the source language and the target. PQCC tried to extend the term compiler-compiler beyond the traditional meaning as a parser generator (e.g., Yacc) without much success. PQCC might more properly be referred to as a compiler generator.
PQCC research into the code generation process sought to build a truly automatic compiler-writing system. The effort discovered and designed the phase structure of the PQC. The BLISS-11 compiler provided the initial structure. The phases included analyses (front end), intermediate translation to virtual machine (middle end), and translation to the target (back end). TCOL was developed for the PQCC research to handle language-specific constructs in the intermediate representation. Variations of TCOL supported various languages. The PQCC project investigated techniques of automated compiler construction. The design concepts proved useful in optimizing compilers and compilers for the (since 1995, object-oriented) programming language Ada.
The Ada STONEMAN document formalized the program support environment (APSE) along with the kernel (KAPSE) and minimal (MAPSE). An Ada interpreter, NYU/ED, supported development and standardization efforts with the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO). Initial Ada compiler development by the U.S. Military Services included the compilers in a complete integrated design environment along the lines of the STONEMAN document. The Army and Navy worked on the Ada Language System (ALS) project targeted to the DEC/VAX architecture, while the Air Force started on the Ada Integrated Environment (AIE) targeted to the IBM 370 series. While the projects did not provide the desired results, they did contribute to the overall effort on Ada development.
Other Ada compiler efforts got underway in Britain at the University of York and in Germany at the University of Karlsruhe. In the U.S., Verdix (later acquired by Rational) delivered the Verdix Ada Development System (VADS) to the Army. VADS provided a set of development tools including a compiler. Unix/VADS could be hosted on a variety of Unix platforms such as DEC Ultrix and the Sun 3/60 Solaris targeted to Motorola 68020 in an Army CECOM evaluation. There were soon many Ada compilers available that passed the Ada Validation tests. The Free Software Foundation GNU project developed the GNU Compiler Collection (GCC) which provides a core capability to support multiple languages and targets. The Ada version GNAT is one of the most widely used Ada compilers. GNAT is free, but there is also commercial support; for example, AdaCore was founded in 1994 to provide commercial software solutions for Ada. GNAT Pro includes the GNU GCC based GNAT with a tool suite to provide an integrated development environment.
High-level languages continued to drive compiler research and development. Focus areas included optimization and automatic code generation. Trends in programming languages and development environments influenced compiler technology. More compilers were included in language distributions (Perl, the Java Development Kit) and as components of IDEs (VADS, Eclipse, Ada Pro). The interrelationship and interdependence of technologies grew. The advent of web services promoted growth of web languages and scripting languages. Scripts trace back to the early days of command-line interfaces (CLI), where the user could enter commands to be executed by the system. User shell concepts developed with languages to write shell programs. Early Windows designs offered a simple batch programming capability. The conventional transformation of these languages used an interpreter. While not widely used, Bash and Batch compilers have been written. More recently, sophisticated interpreted languages became part of the developer's toolkit. Modern scripting languages include PHP, Python, Ruby and Lua. (Lua is widely used in game development.) All of these have interpreter and compiler support.
"When the field of compiling began in the late 50s, its focus was limited to the translation of high-level language programs into machine code ... The compiler field is increasingly intertwined with other disciplines including computer architecture, programming languages, formal methods, software engineering, and computer security." The "Compiler Research: The Next 50 Years" article noted the importance of object-oriented languages and Java. Security and parallel computing were cited among the future research targets.
A compiler implements a formal transformation from a high-level source program to a low-level target program. Compiler design can define an end-to-end solution or tackle a defined subset that interfaces with other compilation tools e.g. preprocessors, assemblers, linkers. Design requirements include rigorously defined interfaces both internally between compiler components and externally between supporting toolsets.
In the early days, the approach taken to compiler design was directly affected by the complexity of the computer language to be processed, the experience of the person(s) designing it, and the resources available. Resource limitations led to the need to pass through the source code more than once.
A compiler for a relatively simple language written by one person might be a single, monolithic piece of software. However, as the source language grows in complexity the design may be split into a number of interdependent phases. Separate phases provide design improvements that focus development on the functions in the compilation process.
Classifying compilers by number of passes has its background in the hardware resource limitations of computers. Compiling involves a great deal of work, and early computers did not have enough memory to contain one program that did all of this work. So compilers were split up into smaller programs which each made a pass over the source (or some representation of it) performing some of the required analysis and translations.
The ability to compile in a single pass has classically been seen as a benefit because it simplifies the job of writing a compiler and one-pass compilers generally perform compilations faster than multi-pass compilers. Thus, partly driven by the resource limitations of early systems, many early languages were specifically designed so that they could be compiled in a single pass (e.g., Pascal).
In some cases, the design of a language feature may require a compiler to perform more than one pass over the source. For instance, consider a declaration appearing on line 20 of the source which affects the translation of a statement appearing on line 10. In this case, the first pass needs to gather information about declarations appearing after statements that they affect, with the actual translation happening during a subsequent pass.
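A minimal sketch of this idea in Python (a constructed mini-language with hypothetical "var" and "print" statements, not any particular compiler) shows the first pass building a symbol table that the second pass consults:
    # Pass 1 records every declaration, wherever it appears; pass 2 then
    # translates statements, so a use may precede its declaration.
    def two_pass_translate(lines):
        declared_types = {}                        # symbol table from pass 1
        for line in lines:                         # pass 1: declarations only
            if line.startswith("var "):            # e.g. "var x int"
                _, name, type_name = line.split()
                declared_types[name] = type_name
        output = []
        for line in lines:                         # pass 2: translate uses
            if line.startswith("print "):          # e.g. "print x"
                _, name = line.split()
                output.append("PRINT %s AS %s" % (name, declared_types[name]))
        return output
    # The use on the first line is translated with help of the declaration below it.
    print(two_pass_translate(["print x", "var x int"]))   # ['PRINT x AS int']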
The disadvantage of compiling in a single pass is that it is not possible to perform many of the sophisticated optimizations needed to generate high quality code. It can be difficult to count exactly how many passes an optimizing compiler makes. For instance, different phases of optimization may analyse one expression many times but only analyse another expression once.
Splitting a compiler up into small programs is a technique used by researchers interested in producing provably correct compilers. Proving the correctness of a set of small programs often requires less effort than proving the correctness of a larger, single, equivalent program.
Regardless of the exact number of phases in the compiler design, the phases can be assigned to one of three stages. The stages include a front end, a middle end, and a back end.
This front/middle/back-end approach makes it possible to combine front ends for different languages with back ends for different CPUs while sharing the optimizations of the middle end. Practical examples of this approach are the GNU Compiler Collection, Clang (LLVM-based C/C++ compiler), and the Amsterdam Compiler Kit, which have multiple front-ends, shared optimizations and multiple back-ends.
The front end analyzes the source code to build an internal representation of the program, called the intermediate representation (IR). It also manages the symbol table, a data structure mapping each symbol in the source code to associated information such as location, type and scope.
While the frontend can be a single monolithic function or program, as in a scannerless parser, it was traditionally implemented and analyzed as several phases, which may execute sequentially or concurrently. This method is favored due to its modularity and separation of concerns. Most commonly, the frontend is broken into three phases: lexical analysis (also known as lexing or scanning), syntax analysis (also known as parsing), and semantic analysis. Lexing and parsing comprise the syntactic analysis (word syntax and phrase syntax, respectively), and in simple cases, these modules (the lexer and parser) can be automatically generated from a grammar for the language, though in more complex cases these require manual modification. The lexical grammar and phrase grammar are usually context-free grammars, which simplifies analysis significantly, with context-sensitivity handled at the semantic analysis phase. The semantic analysis phase is generally more complex and written by hand, but can be partially or fully automated using attribute grammars. These phases themselves can be further broken down: lexing as scanning and evaluating, and parsing as building a concrete syntax tree (CST, parse tree) and then transforming it into an abstract syntax tree (AST, syntax tree). In some cases additional phases are used, notably line reconstruction and preprocessing, but these are rare.
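As an illustrative sketch (in Python, for a toy expression language; the token and node shapes are invented for the example), a front end can be reduced to a lexer that produces tokens and a recursive-descent parser that produces an abstract syntax tree:
    import re

    TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")     # integers or single symbols

    def lex(source):
        """Lexical analysis: turn characters into (kind, value) tokens."""
        tokens = []
        for number, char in TOKEN_RE.findall(source):
            tokens.append(("NUM", int(number)) if number else ("OP", char))
        return tokens

    def parse(tokens):
        """Syntax analysis: expr := term (('+' | '-') term)* -> nested tuples."""
        pos = 0
        def peek():
            return tokens[pos] if pos < len(tokens) else (None, None)
        def term():                                # a factor is just a number here
            nonlocal pos
            value = tokens[pos][1]
            pos += 1
            return ("num", value)
        def expr():
            nonlocal pos
            node = term()
            while peek() in (("OP", "+"), ("OP", "-")):
                op = tokens[pos][1]
                pos += 1
                node = (op, node, term())          # left-associative tree
            return node
        return expr()

    ast = parse(lex("1 + 2 - 3"))                  # assumes well-formed input
    print(ast)   # ('-', ('+', ('num', 1), ('num', 2)), ('num', 3))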
The main phases of the front end include the following:
The middle end, also known as the optimizer, performs optimizations on the intermediate representation in order to improve the performance and the quality of the produced machine code. The middle end contains those optimizations that are independent of the CPU architecture being targeted.
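One classic machine-independent optimization is constant folding, in which subexpressions whose operands are all known at compile time are replaced by their values. A minimal sketch over the nested-tuple AST from the front-end example above:
    def fold(node):
        """Constant folding: evaluate '+'/'-' nodes whose operands are known."""
        if node[0] == "num":
            return node
        op, left, right = node
        left, right = fold(left), fold(right)
        if left[0] == "num" and right[0] == "num":
            value = left[1] + right[1] if op == "+" else left[1] - right[1]
            return ("num", value)                  # replaced by a constant
        return (op, left, right)

    print(fold(("-", ("+", ("num", 1), ("num", 2)), ("num", 3))))   # ('num', 0)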
The main phases of the middle end include the following:
Compiler analysis is the prerequisite for any compiler optimization, and the two work tightly together. For example, dependence analysis is crucial for loop transformation.
The scope of compiler analysis and optimizations varies greatly; it may range from operating within a basic block, to whole procedures, or even the whole program. There is a trade-off between the granularity of the optimizations and the cost of compilation. For example, peephole optimizations are fast to perform during compilation but only affect a small local fragment of the code, and can be performed independently of the context in which the code fragment appears. In contrast, interprocedural optimization requires more compilation time and memory space, but enables optimizations that are only possible by considering the behavior of multiple functions simultaneously.
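A peephole optimizer can be sketched as a scan over adjacent instructions that rewrites a known wasteful pattern; here, over a hypothetical accumulator-machine instruction list, a LOAD that immediately follows a STORE of the same name is dropped because the value is already in the accumulator:
    def peephole(instructions):
        out, i = [], 0
        while i < len(instructions):
            cur = instructions[i]
            nxt = instructions[i + 1] if i + 1 < len(instructions) else None
            if nxt and cur.startswith("STORE ") and nxt == "LOAD " + cur.split()[1]:
                out.append(cur)                    # keep the store ...
                i += 2                             # ... and skip the redundant load
            else:
                out.append(cur)
                i += 1
        return out

    print(peephole(["LOAD a", "STORE t", "LOAD t", "ADD b"]))
    # ['LOAD a', 'STORE t', 'ADD b']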
Interprocedural analysis and optimizations are common in modern commercial compilers from HP, IBM, SGI, Intel, Microsoft, and Sun Microsystems. The free software GCC was criticized for a long time for lacking powerful interprocedural optimizations, but it is changing in this respect. Another open source compiler with full analysis and optimization infrastructure is Open64, which is used by many organizations for research and commercial purposes.
Due to the extra time and space needed for compiler analysis and optimizations, some compilers skip them by default. Users have to use compilation options to explicitly tell the compiler which optimizations should be enabled.
The back end is responsible for the CPU architecture specific optimizations and for code generation.
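Code generation can be sketched as a final walk over the (optimized) intermediate representation that emits target instructions; here the toy AST from the earlier examples is translated into instructions for a hypothetical stack machine:
    def codegen(node, out):
        """Emit stack-machine code: operands are pushed, then combined."""
        if node[0] == "num":
            out.append("PUSH %d" % node[1])
        else:
            op, left, right = node
            codegen(left, out)
            codegen(right, out)
            out.append("ADD" if op == "+" else "SUB")
        return out

    print(codegen(("-", ("+", ("num", 1), ("num", 2)), ("num", 3)), []))
    # ['PUSH 1', 'PUSH 2', 'ADD', 'PUSH 3', 'SUB']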
The main phases of the back end include the following:
Compiler correctness is the branch of software engineering that deals with trying to show that a compiler behaves according to its language specification. Techniques include developing the compiler using formal methods and using rigorous testing (often called compiler validation) on an existing compiler.
Higher-level programming languages usually appear with a type of translation in mind: either designed as compiled language or interpreted language. However, in practice there is rarely anything about a language that requires it to be exclusively compiled or exclusively interpreted, although it is possible to design languages that rely on re-interpretation at run time. The categorization usually reflects the most popular or widespread implementations of a language – for instance, BASIC is sometimes called an interpreted language, and C a compiled one, despite the existence of BASIC compilers and C interpreters.
Interpretation does not replace compilation completely. It only hides it from the user and makes it gradual. Even though an interpreter can itself be interpreted, a set of directly executed machine instructions is needed somewhere at the bottom of the execution stack (see machine language).
Furthermore, for optimization, compilers can contain interpreter functionality, and interpreters may include ahead-of-time compilation techniques. For example, where an expression can be evaluated during compilation and the result inserted into the output program, it does not have to be recalculated each time the program runs, which can greatly speed up the final program. Modern trends toward just-in-time compilation and bytecode interpretation at times blur the traditional categorizations of compilers and interpreters even further.
Some language specifications spell out that implementations must include a compilation facility; for example, Common Lisp. However, there is nothing inherent in the definition of Common Lisp that stops it from being interpreted. Other languages have features that are very easy to implement in an interpreter, but make writing a compiler much harder; for example, APL, SNOBOL4, and many scripting languages allow programs to construct arbitrary source code at runtime with regular string operations, and then execute that code by passing it to a special evaluation function. To implement these features in a compiled language, programs must usually be shipped with a runtime library that includes a version of the compiler itself.
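The point can be made concrete with Python's built-in eval, which compiles and runs source text constructed at run time; an ahead-of-time compiled language offering the same feature must ship a compiler in its runtime library (the operator string below is a contrived stand-in for data not known until run time):
    operator = "+"                                   # only known at run time
    source = "lambda a, b: a " + operator + " b"     # build source as a string
    add = eval(source)                               # compile and load it now
    print(add(2, 3))                                 # 5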
One classification of compilers is by the platform on which their generated code executes. This is known as the target platform.
A native or hosted compiler is one whose output is intended to directly run on the same type of computer and operating system that the compiler itself runs on. The output of a cross compiler is designed to run on a different platform. Cross compilers are often used when developing software for embedded systems that are not intended to support a software development environment.
The output of a compiler that produces code for a virtual machine (VM) may or may not be executed on the same platform as the compiler that produced it. For this reason, such compilers are not usually classified as native or cross compilers.
The lower level language that is the target of a compiler may itself be a high-level programming language. C, viewed by some as a sort of portable assembly language, is frequently the target language of such compilers. For example, Cfront, the original compiler for C++, used C as its target language. The C code generated by such a compiler is usually not intended to be readable and maintained by humans, so indent style and creating pretty C intermediate code are ignored. Some of the features of C that make it a good target language include the #line directive, which can be generated by the compiler to support debugging of the original source, and the wide platform support available with C compilers.
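A sketch of the idea in Python (the source file name prog.src and the statement format are invented for the example; #line itself is a standard C preprocessor directive): a compiler targeting C can tag each emitted line so that C-level tools report positions in the original source:
    def emit_c(statements):
        """Emit a C program, mapping each line back to the original source."""
        lines = ["#include <stdio.h>", "int main(void) {"]
        for source_line, value in statements:
            lines.append('#line %d "prog.src"' % source_line)   # original position
            lines.append('    printf("%%d\\n", %d);' % value)
        lines.append("    return 0;")
        lines.append("}")
        return "\n".join(lines)

    print(emit_c([(10, 42), (11, 7)]))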
While a common compiler type outputs machine code, there are many other types: | [
{
"paragraph_id": 0,
"text": "In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name \"compiler\" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program.",
"title": ""
},
{
"paragraph_id": 1,
"text": "There are many different types of compilers which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Related software include decompilers, programs that translate from low-level languages to higher level ones; programs that translate between high-level languages, usually called source-to-source compilers or transpilers; language rewriters, usually programs that translate the form of expressions without a change of language; and compiler-compilers, compilers that produce compilers (or parts of them), often in a generic and reusable way so as to be able to produce many differing compilers.",
"title": ""
},
{
"paragraph_id": 3,
"text": "A compiler is likely to perform some or all of the following operations, often called phases: preprocessing, lexical analysis, parsing, semantic analysis (syntax-directed translation), conversion of input programs to an intermediate representation, code optimization and machine specific code generation. Compilers generally implement these phases as modular components, promoting efficient design and correctness of transformations of source input to target output. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Compilers are not the only language processor used to transform source programs. An interpreter is computer software that transforms and then executes the indicated operations. The translation process influences the design of computer languages, which leads to a preference of compilation or interpretation. In theory, a programming language can have both a compiler and an interpreter. In practice, programming languages tend to be associated with just one (a compiler or an interpreter).",
"title": ""
},
{
"paragraph_id": 5,
"text": "Theoretical computing concepts developed by scientists, mathematicians, and engineers formed the basis of digital modern computing development during World War II. Primitive binary languages evolved because digital devices only understand ones and zeros and the circuit patterns in the underlying machine architecture. In the late 1940s, assembly languages were created to offer a more workable abstraction of the computer architectures. Limited memory capacity of early computers led to substantial technical challenges when the first compilers were designed. Therefore, the compilation process needed to be divided into several small programs. The front end programs produce the analysis products used by the back end programs to generate target code. As computer technology provided more resources, compiler designs could align better with the compilation process.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "It is usually more productive for a programmer to use a high-level language, so the development of high-level languages followed naturally from the capabilities offered by digital computers. High-level languages are formal languages that are strictly defined by their syntax and semantics which form the high-level language architecture. Elements of these formal languages include:",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The sentences in a language may be defined by a set of rules called a grammar.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Backus–Naur form (BNF) describes the syntax of \"sentences\" of a language and was used for the syntax of Algol 60 by John Backus. The ideas derive from the context-free grammar concepts by Noam Chomsky, a linguist. \"BNF and its extensions have become standard tools for describing the syntax of programming notations, and in many cases parts of compilers are generated automatically from a BNF description.\"",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Between 1942 and 1945, Konrad Zuse designed the first (algorithmic) programming language for computers called Plankalkül (\"Plan Calculus\"). Zuse also envisioned a Planfertigungsgerät (\"Plan assembly device\") to automatically translate the mathematical formulation of a program into machine-readable punched film stock. While no actual implementation occurred until the 1970s, it presented concepts later seen in APL designed by Ken Iverson in the late 1950s. APL is a language for mathematical computations.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Between 1949 and 1951, Heinz Rutishauser proposed Superplan, a high-level language and automatic translator. His ideas were later refined by Friedrich L. Bauer and Klaus Samelson.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "High-level language design during the formative years of digital computing provided useful programming tools for a variety of applications:",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Compiler technology evolved from the need for a strictly defined transformation of the high-level source program into a low-level target program for the digital computer. The compiler could be viewed as a front end to deal with the analysis of the source code and a back end to synthesize the analysis into the target code. Optimization between the front end and back end could produce more efficient target code.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Some early milestones in the development of compiler technology:",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Early operating systems and software were written in assembly language. In the 1960s and early 1970s, the use of high-level languages for system programming was still controversial due to resource limitations. However, several research and industry efforts began the shift toward high-level systems programming languages, for example, BCPL, BLISS, B, and C.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "BCPL (Basic Combined Programming Language) designed in 1966 by Martin Richards at the University of Cambridge was originally developed as a compiler writing tool. Several compilers have been implemented, Richards' book provides insights to the language and its compiler. BCPL was not only an influential systems programming language that is still used in research but also provided a basis for the design of B and C languages.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "BLISS (Basic Language for Implementation of System Software) was developed for a Digital Equipment Corporation (DEC) PDP-10 computer by W. A. Wulf's Carnegie Mellon University (CMU) research team. The CMU team went on to develop BLISS-11 compiler one year later in 1970.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Multics (Multiplexed Information and Computing Service), a time-sharing operating system project, involved MIT, Bell Labs, General Electric (later Honeywell) and was led by Fernando Corbató from MIT. Multics was written in the PL/I language developed by IBM and IBM User Group. IBM's goal was to satisfy business, scientific, and systems programming requirements. There were other languages that could have been considered but PL/I offered the most complete solution even though it had not been implemented. For the first few years of the Multics project, a subset of the language could be compiled to assembly language with the Early PL/I (EPL) compiler by Doug McIlory and Bob Morris from Bell Labs. EPL supported the project until a boot-strapping compiler for the full PL/I could be developed.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Bell Labs left the Multics project in 1969, and developed a system programming language B based on BCPL concepts, written by Dennis Ritchie and Ken Thompson. Ritchie created a boot-strapping compiler for B and wrote Unics (Uniplexed Information and Computing Service) operating system for a PDP-7 in B. Unics eventually became spelled Unix.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Bell Labs started the development and expansion of C based on B and BCPL. The BCPL compiler had been transported to Multics by Bell Labs and BCPL was a preferred language at Bell Labs. Initially, a front-end program to Bell Labs' B compiler was used while a C compiler was developed. In 1971, a new PDP-11 provided the resource to define extensions to B and rewrite the compiler. By 1973 the design of C language was essentially complete and the Unix kernel for a PDP-11 was rewritten in C. Steve Johnson started development of Portable C Compiler (PCC) to support retargeting of C compilers to new machines.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Object-oriented programming (OOP) offered some interesting possibilities for application development and maintenance. OOP concepts go further back but were part of LISP and Simula language science. Bell Labs became interested in OOP with the development of C++. C++ was first used in 1980 for systems programming. The initial design leveraged C language systems programming capabilities with Simula concepts. Object-oriented facilities were added in 1983. The Cfront program implemented a C++ front-end for C84 language compiler. In subsequent years several C++ compilers were developed as C++ popularity grew.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In many application domains, the idea of using a higher-level language quickly caught on. Because of the expanding functionality supported by newer programming languages and the increasing complexity of computer architectures, compilers became more complex.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "DARPA (Defense Advanced Research Projects Agency) sponsored a compiler project with Wulf's CMU research team in 1970. The Production Quality Compiler-Compiler PQCC design would produce a Production Quality Compiler (PQC) from formal definitions of source language and the target. PQCC tried to extend the term compiler-compiler beyond the traditional meaning as a parser generator (e.g., Yacc) without much success. PQCC might more properly be referred to as a compiler generator.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "PQCC research into code generation process sought to build a truly automatic compiler-writing system. The effort discovered and designed the phase structure of the PQC. The BLISS-11 compiler provided the initial structure. The phases included analyses (front end), intermediate translation to virtual machine (middle end), and translation to the target (back end). TCOL was developed for the PQCC research to handle language specific constructs in the intermediate representation. Variations of TCOL supported various languages. The PQCC project investigated techniques of automated compiler construction. The design concepts proved useful in optimizing compilers and compilers for the (since 1995, object-oriented) programming language Ada.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The Ada STONEMAN document formalized the program support environment (APSE) along with the kernel (KAPSE) and minimal (MAPSE). An Ada interpreter NYU/ED supported development and standardization efforts with the American National Standards Institute (ANSI) and the International Standards Organization (ISO). Initial Ada compiler development by the U.S. Military Services included the compilers in a complete integrated design environment along the lines of the STONEMAN document. Army and Navy worked on the Ada Language System (ALS) project targeted to DEC/VAX architecture while the Air Force started on the Ada Integrated Environment (AIE) targeted to IBM 370 series. While the projects did not provide the desired results, they did contribute to the overall effort on Ada development.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Other Ada compiler efforts got underway in Britain at the University of York and in Germany at the University of Karlsruhe. In the U. S., Verdix (later acquired by Rational) delivered the Verdix Ada Development System (VADS) to the Army. VADS provided a set of development tools including a compiler. Unix/VADS could be hosted on a variety of Unix platforms such as DEC Ultrix and the Sun 3/60 Solaris targeted to Motorola 68020 in an Army CECOM evaluation. There were soon many Ada compilers available that passed the Ada Validation tests. The Free Software Foundation GNU project developed the GNU Compiler Collection (GCC) which provides a core capability to support multiple languages and targets. The Ada version GNAT is one of the most widely used Ada compilers. GNAT is free but there is also commercial support, for example, AdaCore, was founded in 1994 to provide commercial software solutions for Ada. GNAT Pro includes the GNU GCC based GNAT with a tool suite to provide an integrated development environment.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "High-level languages continued to drive compiler research and development. Focus areas included optimization and automatic code generation. Trends in programming languages and development environments influenced compiler technology. More compilers became included in language distributions (PERL, Java Development Kit) and as a component of an IDE (VADS, Eclipse, Ada Pro). The interrelationship and interdependence of technologies grew. The advent of web services promoted growth of web languages and scripting languages. Scripts trace back to the early days of Command Line Interfaces (CLI) where the user could enter commands to be executed by the system. User Shell concepts developed with languages to write shell programs. Early Windows designs offered a simple batch programming capability. The conventional transformation of these language used an interpreter. While not widely used, Bash and Batch compilers have been written. More recently sophisticated interpreted languages became part of the developers tool kit. Modern scripting languages include PHP, Python, Ruby and Lua. (Lua is widely used in game development.) All of these have interpreter and compiler support.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "\"When the field of compiling began in the late 50s, its focus was limited to the translation of high-level language programs into machine code ... The compiler field is increasingly intertwined with other disciplines including computer architecture, programming languages, formal methods, software engineering, and computer security.\" The \"Compiler Research: The Next 50 Years\" article noted the importance of object-oriented languages and Java. Security and parallel computing were cited among the future research targets.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "A compiler implements a formal transformation from a high-level source program to a low-level target program. Compiler design can define an end-to-end solution or tackle a defined subset that interfaces with other compilation tools e.g. preprocessors, assemblers, linkers. Design requirements include rigorously defined interfaces both internally between compiler components and externally between supporting toolsets.",
"title": "Compiler construction"
},
{
"paragraph_id": 29,
"text": "In the early days, the approach taken to compiler design was directly affected by the complexity of the computer language to be processed, the experience of the person(s) designing it, and the resources available. Resource limitations led to the need to pass through the source code more than once.",
"title": "Compiler construction"
},
{
"paragraph_id": 30,
"text": "A compiler for a relatively simple language written by one person might be a single, monolithic piece of software. However, as the source language grows in complexity the design may be split into a number of interdependent phases. Separate phases provide design improvements that focus development on the functions in the compilation process.",
"title": "Compiler construction"
},
{
"paragraph_id": 31,
"text": "Classifying compilers by number of passes has its background in the hardware resource limitations of computers. Compiling involves performing much work and early computers did not have enough memory to contain one program that did all of this work. So compilers were split up into smaller programs which each made a pass over the source (or some representation of it) performing some of the required analysis and translations.",
"title": "Compiler construction"
},
{
"paragraph_id": 32,
"text": "The ability to compile in a single pass has classically been seen as a benefit because it simplifies the job of writing a compiler and one-pass compilers generally perform compilations faster than multi-pass compilers. Thus, partly driven by the resource limitations of early systems, many early languages were specifically designed so that they could be compiled in a single pass (e.g., Pascal).",
"title": "Compiler construction"
},
{
"paragraph_id": 33,
"text": "In some cases, the design of a language feature may require a compiler to perform more than one pass over the source. For instance, consider a declaration appearing on line 20 of the source which affects the translation of a statement appearing on line 10. In this case, the first pass needs to gather information about declarations appearing after statements that they affect, with the actual translation happening during a subsequent pass.",
"title": "Compiler construction"
},
{
"paragraph_id": 34,
"text": "The disadvantage of compiling in a single pass is that it is not possible to perform many of the sophisticated optimizations needed to generate high quality code. It can be difficult to count exactly how many passes an optimizing compiler makes. For instance, different phases of optimization may analyse one expression many times but only analyse another expression once.",
"title": "Compiler construction"
},
{
"paragraph_id": 35,
"text": "Splitting a compiler up into small programs is a technique used by researchers interested in producing provably correct compilers. Proving the correctness of a set of small programs often requires less effort than proving the correctness of a larger, single, equivalent program.",
"title": "Compiler construction"
},
{
"paragraph_id": 36,
"text": "Regardless of the exact number of phases in the compiler design, the phases can be assigned to one of three stages. The stages include a front end, a middle end, and a back end.",
"title": "Compiler construction"
},
{
"paragraph_id": 37,
"text": "This front/middle/back-end approach makes it possible to combine front ends for different languages with back ends for different CPUs while sharing the optimizations of the middle end. Practical examples of this approach are the GNU Compiler Collection, Clang (LLVM-based C/C++ compiler), and the Amsterdam Compiler Kit, which have multiple front-ends, shared optimizations and multiple back-ends.",
"title": "Compiler construction"
},
{
"paragraph_id": 38,
"text": "The front end analyzes the source code to build an internal representation of the program, called the intermediate representation (IR). It also manages the symbol table, a data structure mapping each symbol in the source code to associated information such as location, type and scope.",
"title": "Compiler construction"
},
{
"paragraph_id": 39,
"text": "While the frontend can be a single monolithic function or program, as in a scannerless parser, it was traditionally implemented and analyzed as several phases, which may execute sequentially or concurrently. This method is favored due to its modularity and separation of concerns. Most commonly, the frontend is broken into three phases: lexical analysis (also known as lexing or scanning), syntax analysis (also known as scanning or parsing), and semantic analysis. Lexing and parsing comprise the syntactic analysis (word syntax and phrase syntax, respectively), and in simple cases, these modules (the lexer and parser) can be automatically generated from a grammar for the language, though in more complex cases these require manual modification. The lexical grammar and phrase grammar are usually context-free grammars, which simplifies analysis significantly, with context-sensitivity handled at the semantic analysis phase. The semantic analysis phase is generally more complex and written by hand, but can be partially or fully automated using attribute grammars. These phases themselves can be further broken down: lexing as scanning and evaluating, and parsing as building a concrete syntax tree (CST, parse tree) and then transforming it into an abstract syntax tree (AST, syntax tree). In some cases additional phases are used, notably line reconstruction and preprocessing, but these are rare.",
"title": "Compiler construction"
},
{
"paragraph_id": 40,
"text": "The main phases of the front end include the following:",
"title": "Compiler construction"
},
{
"paragraph_id": 41,
"text": "The middle end, also known as optimizer, performs optimizations on the intermediate representation in order to improve the performance and the quality of the produced machine code. The middle end contains those optimizations that are independent of the CPU architecture being targeted.",
"title": "Compiler construction"
},
{
"paragraph_id": 42,
"text": "The main phases of the middle end include the following:",
"title": "Compiler construction"
},
{
"paragraph_id": 43,
"text": "Compiler analysis is the prerequisite for any compiler optimization, and they tightly work together. For example, dependence analysis is crucial for loop transformation.",
"title": "Compiler construction"
},
{
"paragraph_id": 44,
"text": "The scope of compiler analysis and optimizations vary greatly; their scope may range from operating within a basic block, to whole procedures, or even the whole program. There is a trade-off between the granularity of the optimizations and the cost of compilation. For example, peephole optimizations are fast to perform during compilation but only affect a small local fragment of the code, and can be performed independently of the context in which the code fragment appears. In contrast, interprocedural optimization requires more compilation time and memory space, but enable optimizations that are only possible by considering the behavior of multiple functions simultaneously.",
"title": "Compiler construction"
},
{
"paragraph_id": 45,
"text": "Interprocedural analysis and optimizations are common in modern commercial compilers from HP, IBM, SGI, Intel, Microsoft, and Sun Microsystems. The free software GCC was criticized for a long time for lacking powerful interprocedural optimizations, but it is changing in this respect. Another open source compiler with full analysis and optimization infrastructure is Open64, which is used by many organizations for research and commercial purposes.",
"title": "Compiler construction"
},
{
"paragraph_id": 46,
"text": "Due to the extra time and space needed for compiler analysis and optimizations, some compilers skip them by default. Users have to use compilation options to explicitly tell the compiler which optimizations should be enabled.",
"title": "Compiler construction"
},
{
"paragraph_id": 47,
"text": "The back end is responsible for the CPU architecture specific optimizations and for code generation.",
"title": "Compiler construction"
},
{
"paragraph_id": 48,
"text": "The main phases of the back end include the following:",
"title": "Compiler construction"
},
{
"paragraph_id": 49,
"text": "Compiler correctness is the branch of software engineering that deals with trying to show that a compiler behaves according to its language specification. Techniques include developing the compiler using formal methods and using rigorous testing (often called compiler validation) on an existing compiler.",
"title": "Compiler construction"
},
{
"paragraph_id": 50,
"text": "Higher-level programming languages usually appear with a type of translation in mind: either designed as compiled language or interpreted language. However, in practice there is rarely anything about a language that requires it to be exclusively compiled or exclusively interpreted, although it is possible to design languages that rely on re-interpretation at run time. The categorization usually reflects the most popular or widespread implementations of a language – for instance, BASIC is sometimes called an interpreted language, and C a compiled one, despite the existence of BASIC compilers and C interpreters.",
"title": "Compiled versus interpreted languages"
},
{
"paragraph_id": 51,
"text": "Interpretation does not replace compilation completely. It only hides it from the user and makes it gradual. Even though an interpreter can itself be interpreted, a set of directly executed machine instructions is needed somewhere at the bottom of the execution stack (see machine language).",
"title": "Compiled versus interpreted languages"
},
{
"paragraph_id": 52,
"text": "Furthermore, for optimization compilers can contain interpreter functionality, and interpreters may include ahead of time compilation techniques. For example, where an expression can be executed during compilation and the results inserted into the output program, then it prevents it having to be recalculated each time the program runs, which can greatly speed up the final program. Modern trends toward just-in-time compilation and bytecode interpretation at times blur the traditional categorizations of compilers and interpreters even further.",
"title": "Compiled versus interpreted languages"
},
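The compile-time evaluation described in the paragraph above is usually called constant folding. As a hedged sketch, the snippet below folds constant subexpressions using Python's own ast module; real compilers apply the same idea to their intermediate representation rather than to source text.

```python
import ast

def fold(expr: str) -> str:
    """Evaluate constant subexpressions once, at 'compile' time."""
    class Folder(ast.NodeTransformer):
        def visit_BinOp(self, node):
            self.generic_visit(node)  # fold children first, bottom-up
            if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
                wrapper = ast.Expression(body=node)
                ast.fix_missing_locations(wrapper)
                value = eval(compile(wrapper, "<fold>", "eval"))  # operands are literals only
                return ast.copy_location(ast.Constant(value), node)
            return node

    return ast.unparse(Folder().visit(ast.parse(expr, mode="eval")))

print(fold("60 * 60 * 24"))  # '86400' - never recomputed at run time
```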
{
"paragraph_id": 53,
"text": "Some language specifications spell out that implementations must include a compilation facility; for example, Common Lisp. However, there is nothing inherent in the definition of Common Lisp that stops it from being interpreted. Other languages have features that are very easy to implement in an interpreter, but make writing a compiler much harder; for example, APL, SNOBOL4, and many scripting languages allow programs to construct arbitrary source code at runtime with regular string operations, and then execute that code by passing it to a special evaluation function. To implement these features in a compiled language, programs must usually be shipped with a runtime library that includes a version of the compiler itself.",
"title": "Compiled versus interpreted languages"
},
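Python happens to be a convenient place to demonstrate the runtime-evaluation feature just described, since it exposes the same mechanism through the built-in eval(); the field name and data below are made up for illustration.

```python
# Program text is assembled with ordinary string operations at run time,
# then handed to the language's evaluation function - the pattern that
# forces compiled implementations to ship an embedded compiler/interpreter.
field = "price"                     # hypothetical column chosen at run time
source = f"row['{field}'] * 1.2"    # source code built as a string
row = {"price": 10.0}
print(eval(source))                 # 12.0
```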
{
"paragraph_id": 54,
"text": "One classification of compilers is by the platform on which their generated code executes. This is known as the target platform.",
"title": "Types"
},
{
"paragraph_id": 55,
"text": "A native or hosted compiler is one whose output is intended to directly run on the same type of computer and operating system that the compiler itself runs on. The output of a cross compiler is designed to run on a different platform. Cross compilers are often used when developing software for embedded systems that are not intended to support a software development environment.",
"title": "Types"
},
{
"paragraph_id": 56,
"text": "The output of a compiler that produces code for a virtual machine (VM) may or may not be executed on the same platform as the compiler that produced it. For this reason, such compilers are not usually classified as native or cross compilers.",
"title": "Types"
},
{
"paragraph_id": 57,
"text": "The lower level language that is the target of a compiler may itself be a high-level programming language. C, viewed by some as a sort of portable assembly language, is frequently the target language of such compilers. For example, Cfront, the original compiler for C++, used C as its target language. The C code generated by such a compiler is usually not intended to be readable and maintained by humans, so indent style and creating pretty C intermediate code are ignored. Some of the features of C that make it a good target language include the #line directive, which can be generated by the compiler to support debugging of the original source, and the wide platform support available with C compilers.",
"title": "Types"
},
{
"paragraph_id": 58,
"text": "While a common compiler type outputs machine code, there are many other types:",
"title": "Types"
}
] | In computing, a compiler is a computer program that translates computer code written in one programming language into another language. The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language to create an executable program. There are many different types of compilers which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language. Related software includes decompilers, programs that translate from low-level languages to higher level ones; programs that translate between high-level languages, usually called source-to-source compilers or transpilers; language rewriters, usually programs that translate the form of expressions without a change of language; and compiler-compilers, compilers that produce compilers, often in a generic and reusable way so as to be able to produce many differing compilers. A compiler is likely to perform some or all of the following operations, often called phases: preprocessing, lexical analysis, parsing, semantic analysis, conversion of input programs to an intermediate representation, code optimization and machine-specific code generation. Compilers generally implement these phases as modular components, promoting efficient design and correctness of transformations of source input to target output. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness. Compilers are not the only language processor used to transform source programs. An interpreter is computer software that transforms and then executes the indicated operations. The translation process influences the design of computer languages, which leads to a preference for compilation or interpretation. In theory, a programming language can have both a compiler and an interpreter. In practice, programming languages tend to be associated with just one. | 2001-10-26T06:08:03Z | 2023-12-27T18:20:05Z | [
"Template:Section link",
"Template:Snd",
"Template:Short description",
"Template:Anchor",
"Template:Reflist",
"Template:Curlie",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Main",
"Template:Div col",
"Template:Cite web",
"Template:YouTube",
"Template:Rp",
"Template:Program execution",
"Template:Citation needed",
"Template:ISBN",
"Template:Cite journal",
"Template:Commons category",
"Template:Computer science",
"Template:Redirect2",
"Template:Cite book",
"Template:Visible anchor",
"Template:Better source",
"Template:Div col end",
"Template:Refbegin",
"Template:Refend",
"Template:About",
"Template:Cn",
"Template:More footnotes needed",
"Template:Color",
"Template:Unreferenced section",
"Template:Primary source inline",
"Template:Portal",
"Template:Citation",
"Template:Lang",
"Template:Wikibooks",
"Template:Webarchive",
"Template:Wiktionary"
] | https://en.wikipedia.org/wiki/Compiler |
5,742 | Castrato | A castrato (Italian; pl.: castrati) is a male singer who underwent castration before puberty in order to retain a singing voice equivalent to that of a soprano, mezzo-soprano, or contralto. The voice can also occur in one who, due to an endocrinological condition, never reaches sexual maturity.
Castration before puberty (or in its early stages) prevents the larynx from being transformed by the normal physiological events of puberty. As a result, the vocal range of prepubescence (shared by both sexes) is largely retained, and the voice develops into adulthood in a unique way. Prepubescent castration for this purpose diminished greatly in the late 18th century.
Methods of castration used to prevent the onset of puberty varied. They involved using opium to medically induce a coma, then submerging the boy in an ice or milk bath, where either the vas deferens was severed (similar to a vasectomy), the testicles were twisted until they atrophied, or the testicles were removed entirely by surgical cutting (complete removal, however, was not a widely used technique). The procedure was usually performed on boys around the age of 8–10; recovery took around two weeks. The means by which future singers were prepared could lead to premature death. To prevent the child from experiencing the intense pain of castration, many were inadvertently administered lethal doses of opium or some other narcotic, or were killed by overlong compression of the carotid artery in the neck (intended to render them unconscious during the castration procedure).
The geographical locations where these procedures took place are not specifically known. During the 18th century itself, the music historian Charles Burney was sent from pillar to post in search of places where the operation was carried out:
I enquired throughout Italy at what place boys were chiefly qualified for singing by castration, but could get no certain intelligence. I was told at Milan that it was at Venice; at Venice that it was at Bologna; but at Bologna the fact was denied, and I was referred to Florence; from Florence to Rome, and from Rome I was sent to Naples ... it is said that there are shops in Naples with this inscription: 'QUI SI CASTRANO RAGAZZI' ("Here boys are castrated"); but I was utterly unable to see or hear of any such shops during my residence in that city.
As the castrato's body grew, his lack of testosterone meant that his epiphyses (bone-joints) did not harden in the normal manner. Thus the limbs of the castrati often grew unusually long, as did their ribs. This, combined with intensive training, gave them unrivalled lung power and breath capacity. Operating through small, child-sized vocal cords, their voices were also extraordinarily flexible, and quite different from the equivalent adult female voice. Their vocal range was higher than that of the uncastrated adult male. Listening to the only surviving recordings of a castrato (see below), one can hear that the lower part of the voice sounds like a "super-high" tenor, with a more falsetto-like upper register above that.
Castrati were rarely referred to as such: in the 18th century, the euphemism musico (pl.: musici) was much more generally used, although it usually carried derogatory implications; another synonym was evirato, literally meaning "emasculated". Eunuch is a more general term since, historically, many eunuchs were castrated after puberty and thus the castration had no impact on their voices.
Castration as a means of subjugation, enslavement or other punishment has a very long history, dating back to ancient Sumer. In a Western context, eunuch singers are known to have existed from the early Byzantine Empire. In Constantinople around 400 AD, the empress Aelia Eudoxia had a eunuch choir-master, Brison, who may have established the use of castrati in Byzantine choirs, though whether Brison himself was a singer and whether he had colleagues who were eunuch singers is not certain. By the 9th century, eunuch singers were well-known (not least in the choir of Hagia Sophia) and remained so until the sack of Constantinople by the Western forces of the Fourth Crusade in 1204. Their fate from then until their reappearance in Italy more than three hundred years later is not clear. It seems likely that the Spanish tradition of soprano falsettists may have hidden castrati. Much of Spain was under Muslim rulers during the Middle Ages, and castration had a history going back to the ancient Near East. Stereotypically, eunuchs served as harem guards, but they were also valued as high-level political appointees since they could not start a dynasty which would threaten the ruler.
Castrati first appeared in Italy in the mid-16th century, though at first the terms describing them were not always clear. The phrase soprano maschio (male soprano), which could also mean falsettist, occurs in the Due Dialoghi della Musica (Two dialogues upon music) of Luigi Dentice, an Oratorian priest, published in Rome in 1553. On 9 November 1555 Cardinal Ippolito II d'Este (famed as the builder of the Villa d'Este at Tivoli) wrote to Guglielmo Gonzaga, Duke of Mantua (1538–1587), that he had heard that the Duke was interested in his cantoretti (little singers) and offered to send him two, so that he could choose one for his own service. This is a rare term but probably does equate to castrato. The Cardinal's nephew, Alfonso II d'Este, Duke of Ferrara, was another early enthusiast, inquiring about castrati in 1556. There were certainly castrati in the Sistine Chapel choir in 1558, although not described as such: on 27 April of that year, Hernando Bustamante, a Spaniard from Palencia, was admitted (the first castrati so termed who joined the Sistine choir were Pietro Paolo Folignato and Girolamo Rossini, admitted in 1599). Surprisingly, considering the later French distaste for castrati, they certainly existed in France at this time also, being known of in Paris, Orléans, Picardy and Normandy, though they were not abundant: the King of France himself had difficulty in obtaining them. By 1574, there were castrati in the Ducal court chapel at Munich, where the Kapellmeister (music director) was the famous Orlando di Lasso. In 1589, by the bull Cum pro nostro pastorali munere, Pope Sixtus V re-organised the choir of St Peter's, Rome specifically to include castrati.
Thus the castrati came to supplant both boys (whose voices broke after only a few years) and falsettists (whose voices were weaker and less reliable) from the top line in such choirs. Women were banned by the Pauline dictum mulieres in ecclesiis taceant ("let women keep silent in the churches"; see I Corinthians, ch. 14, v. 34).
The Italian castrati were often rumored to have unusually long lives, but a 1993 study found that their lifespans were average.
Although the castrato (or musico) predates opera, there is some evidence that castrati had parts in the earliest operas. In the first performance of Monteverdi's Orfeo (1607), for example, they played subsidiary roles, including Speranza and (possibly) that of Euridice. Although female roles were performed by castrati in some of the papal states, this was increasingly rare; by 1680, they had supplanted "normal" male voices in lead roles, and retained their position as primo uomo for about a hundred years; an Italian opera not featuring at least one renowned castrato in a lead part would be doomed to fail. Because of the popularity of Italian opera throughout 18th-century Europe (except France), singers such as Ferri, Farinelli, Senesino and Pacchierotti became the first operatic superstars, earning enormous fees and hysterical public adulation. The strictly hierarchical organisation of opera seria favoured their high voices as symbols of heroic virtue, though they were frequently mocked for their strange appearance and bad acting. In his 1755 Reflections upon theatrical expression in tragedy, Roger Pickering wrote:
Farinelli drew every Body to the Haymarket. What a Pipe! What Modulation! What Extasy to the Ear! But, Heavens! What Clumsiness! What Stupidity! What Offence to the Eye! Reader, if of the City, thou mayest probably have seen in the Fields of Islington or Mile-End or, If thou art in the environs of St James', thou must have observed in the Park with what Ease and Agility a cow, heavy with calf, has rose up at the command of the Milk-woman's foot: thus from the mossy bank sprang the DIVINE FARINELLI.
The training of the boys was rigorous. The regimen of one singing school in Rome (c. 1700) consisted of one hour of singing difficult and awkward pieces, one hour practising trills, one hour practising ornamented passaggi, one hour of singing exercises in their teacher's presence and in front of a mirror so as to avoid unnecessary movement of the body or facial grimaces, and one hour of literary study; all this, moreover, before lunch. After, half an hour would be devoted to musical theory, another to writing counterpoint, an hour copying down the same from dictation, and another hour of literary study. During the remainder of the day, the young castrati had to find time to practice their harpsichord playing, and to compose vocal music, either sacred or secular depending on their inclination. This demanding schedule meant that, if sufficiently talented, they were able to make a debut in their mid-teens with a perfect technique and a voice of a flexibility and power no woman or ordinary male singer could match.
In the 1720s and 1730s, at the height of the craze for these voices, it has been estimated that upwards of 4,000 boys were castrated annually in the service of art. Many came from poor homes and were castrated by their parents in the hope that their child might be successful and lift them from poverty (this was the case with Senesino). There are, though, records of some young boys asking to be operated on to preserve their voices (e.g. Caffarelli, who was from a wealthy family: his grandmother gave him the income from two vineyards to pay for his studies). Caffarelli was also typical of many castrati in being famous for tantrums on and off-stage, and for amorous adventures with noble ladies. Some, as described by Casanova, preferred gentlemen (noble or otherwise). Only a small percentage of boys castrated to preserve their voices had successful careers on the operatic stage; the better "also-rans" sang in cathedral or church choirs, but because of their marked appearance and the ban on their marrying, there was little room for them in society outside a musical context.
The castrati came in for a great amount of scurrilous and unkind abuse, and as their fame increased, so did the hatred of them. They were often castigated as malign creatures who lured men into homosexuality. There were homosexual castrati, as Casanova's accounts of 18th-century Italy bear witness. He mentions meeting an abbé whom he took for a girl in disguise, only later discovering that "she" was a famous castrato. In Rome in 1762 he attended a performance at which the prima donna was a castrato, "the favourite pathic" of Cardinal Borghese, who dined every evening with his protector. From his behaviour on stage "it was obvious that he hoped to inspire the love of those who liked him as a man, and probably would not have done so as a woman".
By the late 18th century, changes in operatic taste and social attitudes spelled the end for castrati. They lingered on past the end of the ancien régime (which their style of opera parallels), and two of their number, Pacchierotti and Crescentini, performed before Napoleon. The last great operatic castrato was Giovanni Battista Velluti (1781–1861), who performed the last operatic castrato role ever written: Armando in Il crociato in Egitto by Meyerbeer (Venice, 1824). Soon after this they were replaced definitively as the first men of the operatic stage by a new breed of heroic tenor, as first incarnated by the Frenchman Gilbert-Louis Duprez, the earliest so-called "king of the high Cs". His successors have included such singers as Enrico Tamberlik, Jean de Reszke, Francesco Tamagno, Enrico Caruso, Giovanni Martinelli, Beniamino Gigli, Jussi Björling, Franco Corelli and Luciano Pavarotti, among others.
After the unification of Italy in 1861, "eviration" was officially made illegal (the new Italian state had adopted the previous penal code of the Kingdom of Sardinia which expressly forbade the practice). In 1878, Pope Leo XIII prohibited the hiring of new castrati by the church: only in the Sistine Chapel and in other papal basilicas in Rome did a few castrati linger. A group photo of the Sistine Choir taken in 1898 shows that by then only six remained (plus the Direttore Perpetuo, the fine soprano castrato Domenico Mustafà), and in 1902 a ruling was extracted from Pope Leo that no further castrati should be admitted. The official end to the castrati came on St. Cecilia's Day, 22 November 1903, when the new pope, Pius X, issued his motu proprio, Tra le Sollecitudini ('Amongst the Cares'), which contained this instruction: "Whenever ... it is desirable to employ the high voices of sopranos and contraltos, these parts must be taken by boys, according to the most ancient usage of the Church."
The last Sistine castrato to survive was Alessandro Moreschi, the only castrato to have made solo recordings. While an interesting historical record, these discs of his give us only a glimpse of the castrato voice – although he had been renowned as "The Angel of Rome" at the beginning of his career, some would say he was past his prime when the recordings were made in 1902 and 1904 and he never attempted to sing opera. Domenico Salvatori, a castrato who was contemporary with Moreschi, made some ensemble recordings with him but has no surviving solo recordings. The recording technology of the day was not of modern high quality. Salvatori died in 1909; Moreschi retired officially in March 1913, and died in 1922.
The Catholic Church's involvement in the castrato phenomenon has long been controversial, and there have recently been calls for it to issue an official apology for its role. As early as 1748, Pope Benedict XIV tried to ban castrati from churches, but such was their popularity at the time that he realised that doing so might result in a drastic decline in church attendance.
The rumours of another castrato sequestered in the Vatican for the personal delectation of the Pontiff until as recently as 1959 have been proven false. The singer in question was a pupil of Moreschi's, Domenico Mancini, such a successful imitator of his teacher's voice that even Lorenzo Perosi, Direttore Perpetuo of the Sistine Choir from 1898 to 1956 and a strenuous opponent of the practice of castrato singers, thought he was a castrato. Mancini was in fact a moderately skilful falsettist and professional double bass player.
A male can retain his child voice if it never changes during puberty. The retained voice can be the treble voice shared by both sexes in childhood and is the same as a boy soprano voice. As evidence shows, however, many castrati, such as Senesino and Caffarelli, were actually altos (mezzo-sopranos), not sopranos. So-called "natural" or "endocrinological castrati" are born with hormonal anomalies, such as Klinefelter's syndrome and Kallmann's syndrome, or have undergone unusual physical or medical events during their early lives that reproduce the vocal effects of castration without being castrated.
Jimmy Scott, Radu Marian and Javier Medina are examples of this type of high male voice via endocrinological diseases. Michael Maniaci is somewhat different, in that he has no hormonal or other anomalies, but claims that his voice did not "break" in the usual manner, leaving him still able to sing in the soprano register. Other uncastrated male adults sing soprano, generally using some form of falsetto but in a much higher range than most countertenors. Examples are Aris Christofellis, Jörg Waschinski, and Ghio Nannini.
However, it is believed the castrati possessed more of a tenorial chest register (the aria "Navigante che non spera" in Leonardo Vinci's opera Il Medo, written for Farinelli, requires notes down to C3, 131 Hz). Similar low-voiced singing can be heard from the jazz vocalist Jimmy Scott, whose range matches approximately that used by female blues singers. High-pitched singer Jordan Smith has demonstrated having more of a tenorial chest register.
Actor Chris Colfer has stated in interviews that when his voice began to change at puberty, he sang in a high voice "constantly" in an effort to retain his range. Actor and singer Alex Newell has a soprano range. Voice actor Walter Tetley may or may not have been a castrato; Bill Scott, a co-worker of Tetley's during their later work in television, once half-jokingly quipped that Tetley's mother "had him fixed" to protect the child star's voice-acting career. Tetley never personally divulged the exact reason for his condition, which left him with the voice of a preteen boy for his entire adult life. Botanist George Washington Carver was noted for his high voice, believed to be the result of pertussis and croup infections in his childhood that stunted his growth.
{
"paragraph_id": 0,
"text": "A castrato (Italian; pl.: castrati) is a male singer who underwent castration before puberty in order to retain singing voice equivalent to that of a soprano, mezzo-soprano, or contralto. The voice can also occur in one who, due to an endocrinological condition, never reaches sexual maturity.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Castration before puberty (or in its early stages) prevents the larynx from being transformed by the normal physiological events of puberty. As a result, the vocal range of prepubescence (shared by both sexes) is largely retained, and the voice develops into adulthood in a unique way. Prepubescent castration for this purpose diminished greatly in the late 18th century.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Methods of castration used to terminate the onset of puberty varied. Methods involved using opium to medically induce a coma, then submerging the boy into an ice or milk bath where the procedure of either severing the vas deferens (similar to a vasectomy), twisting the testicles until they atrophied, or complete removal via surgical cutting was performed (however the complete removal of the testicles was not a popularly used technique). The procedure was usually done to boys around the age of 8–10; recovery time from the procedure took around two weeks. The means by which future singers were prepared could lead to premature death. To prevent the child from experiencing the intense pain of castration, many were inadvertently administered lethal doses of opium or some other narcotic, or were killed by overlong compression of the carotid artery in the neck (intended to render them unconscious during the castration procedure).",
"title": ""
},
{
"paragraph_id": 3,
"text": "The geographical locations of where these procedures took place is not known specifically. During the 18th century itself, the music historian Charles Burney was sent from pillar to post in search of places where the operation was carried out:",
"title": ""
},
{
"paragraph_id": 4,
"text": "I enquired throughout Italy at what place boys were chiefly qualified for singing by castration, but could get no certain intelligence. I was told at Milan that it was at Venice; at Venice that it was at Bologna; but at Bologna the fact was denied, and I was referred to Florence; from Florence to Rome, and from Rome I was sent to Naples ... it is said that there are shops in Naples with this inscription: 'QUI SI CASTRANO RAGAZZI' (\"Here boys are castrated\"); but I was utterly unable to see or hear of any such shops during my residence in that city.",
"title": ""
},
{
"paragraph_id": 5,
"text": "As the castrato's body grew, his lack of testosterone meant that his epiphyses (bone-joints) did not harden in the normal manner. Thus the limbs of the castrati often grew unusually long, as did their ribs. This, combined with intensive training, gave them unrivalled lung power and breath capacity. Operating through small, child-sized vocal cords, their voices were also extraordinarily flexible, and quite different from the equivalent adult female voice. Their vocal range was higher than that of the uncastrated adult male. Listening to the only surviving recordings of a castrato (see below), one can hear that the lower part of the voice sounds like a \"super-high\" tenor, with a more falsetto-like upper register above that.",
"title": ""
},
{
"paragraph_id": 6,
"text": "Castrati were rarely referred to as such: in the 18th century, the euphemism musico (pl.: musici) was much more generally used, although it usually carried derogatory implications; another synonym was evirato, literally meaning \"emasculated\". Eunuch is a more general term since, historically, many eunuchs were castrated after puberty and thus the castration had no impact on their voices.",
"title": ""
},
{
"paragraph_id": 7,
"text": "Castration as a means of subjugation, enslavement or other punishment has a very long history, dating back to ancient Sumer. In a Western context, eunuch singers are known to have existed from the early Byzantine Empire. In Constantinople around 400 AD, the empress Aelia Eudoxia had a eunuch choir-master, Brison, who may have established the use of castrati in Byzantine choirs, though whether Brison himself was a singer and whether he had colleagues who were eunuch singers is not certain. By the 9th century, eunuch singers were well-known (not least in the choir of Hagia Sophia) and remained so until the sack of Constantinople by the Western forces of the Fourth Crusade in 1204. Their fate from then until their reappearance in Italy more than three hundred years later is not clear. It seems likely that the Spanish tradition of soprano falsettists may have hidden castrati. Much of Spain was under Muslim rulers during the Middle Ages, and castration had a history going back to the ancient Near East. Stereotypically, eunuchs served as harem guards, but they were also valued as high-level political appointees since they could not start a dynasty which would threaten the ruler.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Castrati first appeared in Italy in the mid-16th century, though at first the terms describing them were not always clear. The phrase soprano maschio (male soprano), which could also mean falsettist, occurs in the Due Dialoghi della Musica (Two dialogues upon music) of Luigi Dentice, an Oratorian priest, published in Rome in 1553. On 9 November 1555 Cardinal Ippolito II d'Este (famed as the builder of the Villa d'Este at Tivoli), wrote to Guglielmo Gonzaga, Duke of Mantua (1538–1587), that he has heard that the Duke was interested in his cantoretti (little singers) and offered to send him two, so that he could choose one for his own service. This is a rare term but probably does equate to castrato. The Cardinal's nephew, Alfonso II d'Este, Duke of Ferrara, was another early enthusiast, inquiring about castrati in 1556. There were certainly castrati in the Sistine Chapel choir in 1558, although not described as such: on 27 April of that year, Hernando Bustamante, a Spaniard from Palencia, was admitted (the first castrati so termed who joined the Sistine choir were Pietro Paolo Folignato and Girolamo Rossini, admitted in 1599). Surprisingly, considering the later French distaste for castrati, they certainly existed in France at this time also, being known of in Paris, Orléans, Picardy and Normandy, though they were not abundant: the King of France himself had difficulty in obtaining them. By 1574, there were castrati in the Ducal court chapel at Munich, where the Kapellmeister (music director) was the famous Orlando di Lasso. In 1589, by the bull Cum pro nostro pastorali munere, Pope Sixtus V re-organised the choir of St Peter's, Rome specifically to include castrati.",
"title": "European classical tradition"
},
{
"paragraph_id": 9,
"text": "Thus the castrati came to supplant both boys (whose voices broke after only a few years) and falsettists (whose voices were weaker and less reliable) from the top line in such choirs. Women were banned by the Pauline dictum mulieres in ecclesiis taceant (\"let women keep silent in the churches\"; see I Corinthians, ch. 14, v. 34).",
"title": "European classical tradition"
},
{
"paragraph_id": 10,
"text": "The Italian castrati were often rumored to have unusually long lives, but a 1993 study found that their lifespans were average.",
"title": "European classical tradition"
},
{
"paragraph_id": 11,
"text": "Although the castrato (or musico) predates opera, there is some evidence that castrati had parts in the earliest operas. In the first performance of Monteverdi's Orfeo (1607), for example, they played subsidiary roles, including Speranza and (possibly) that of Euridice. Although female roles were performed by castrati in some of the papal states, this was increasingly rare; by 1680, they had supplanted \"normal\" male voices in lead roles, and retained their position as primo uomo for about a hundred years; an Italian opera not featuring at least one renowned castrato in a lead part would be doomed to fail. Because of the popularity of Italian opera throughout 18th-century Europe (except France), singers such as Ferri, Farinelli, Senesino and Pacchierotti became the first operatic superstars, earning enormous fees and hysterical public adulation. The strictly hierarchical organisation of opera seria favoured their high voices as symbols of heroic virtue, though they were frequently mocked for their strange appearance and bad acting. In his 1755 Reflections upon theatrical expression in tragedy, Roger Pickering wrote:",
"title": "Opera"
},
{
"paragraph_id": 12,
"text": "Farinelli drew every Body to the Haymarket. What a Pipe! What Modulation! What Extasy to the Ear! But, Heavens! What Clumsiness! What Stupidity! What Offence to the Eye! Reader, if of the City, thou mayest probably have seen in the Fields of Islington or Mile-End or, If thou art in the environs of St James', thou must have observed in the Park with what Ease and Agility a cow, heavy with calf, has rose up at the command of the Milk-woman's foot: thus from the mossy bank sprang the DIVINE FARINELLI.",
"title": "Opera"
},
{
"paragraph_id": 13,
"text": "The training of the boys was rigorous. The regimen of one singing school in Rome (c. 1700) consisted of one hour of singing difficult and awkward pieces, one hour practising trills, one hour practising ornamented passaggi, one hour of singing exercises in their teacher's presence and in front of a mirror so as to avoid unnecessary movement of the body or facial grimaces, and one hour of literary study; all this, moreover, before lunch. After, half an hour would be devoted to musical theory, another to writing counterpoint, an hour copying down the same from dictation, and another hour of literary study. During the remainder of the day, the young castrati had to find time to practice their harpsichord playing, and to compose vocal music, either sacred or secular depending on their inclination. This demanding schedule meant that, if sufficiently talented, they were able to make a debut in their mid-teens with a perfect technique and a voice of a flexibility and power no woman or ordinary male singer could match.",
"title": "Opera"
},
{
"paragraph_id": 14,
"text": "In the 1720s and 1730s, at the height of the craze for these voices, it has been estimated that upwards of 4,000 boys were castrated annually in the service of art. Many came from poor homes and were castrated by their parents in the hope that their child might be successful and lift them from poverty (this was the case with Senesino). There are, though, records of some young boys asking to be operated on to preserve their voices (e.g. Caffarelli, who was from a wealthy family: his grandmother gave him the income from two vineyards to pay for his studies). Caffarelli was also typical of many castrati in being famous for tantrums on and off-stage, and for amorous adventures with noble ladies. Some, as described by Casanova, preferred gentlemen (noble or otherwise). Only a small percentage of boys castrated to preserve their voices had successful careers on the operatic stage; the better \"also-rans\" sang in cathedral or church choirs, but because of their marked appearance and the ban on their marrying, there was little room for them in society outside a musical context.",
"title": "Opera"
},
{
"paragraph_id": 15,
"text": "The castrati came in for a great amount of scurrilous and unkind abuse, and as their fame increased, so did the hatred of them. They were often castigated as malign creatures who lured men into homosexuality. There were homosexual castrati, as Casanova's accounts of 18th-century Italy bear witness. He mentions meeting an abbé whom he took for a girl in disguise, only later discovering that \"she\" was a famous castrato. In Rome in 1762 he attended a performance at which the prima donna was a castrato, \"the favourite pathic\" of Cardinal Borghese, who dined every evening with his protector. From his behaviour on stage \"it was obvious that he hoped to inspire the love of those who liked him as a man, and probably would not have done so as a woman\".",
"title": "Opera"
},
{
"paragraph_id": 16,
"text": "By the late 18th century, changes in operatic taste and social attitudes spelled the end for castrati. They lingered on past the end of the ancien régime (which their style of opera parallels), and two of their number, Pacchierotti and Crescentini, performed before Napoleon. The last great operatic castrato was Giovanni Battista Velluti (1781–1861), who performed the last operatic castrato role ever written: Armando in Il crociato in Egitto by Meyerbeer (Venice, 1824). Soon after this they were replaced definitively as the first men of the operatic stage by a new breed of heroic tenor, as first incarnated by the Frenchman Gilbert-Louis Duprez, the earliest so-called \"king of the high Cs\". His successors have included such singers as Enrico Tamberlik, Jean de Reszke, Francesco Tamagno, Enrico Caruso, Giovanni Martinelli, Beniamino Gigli, Jussi Björling, Franco Corelli and Luciano Pavarotti, among others.",
"title": "Decline"
},
{
"paragraph_id": 17,
"text": "After the unification of Italy in 1861, \"eviration\" was officially made illegal (the new Italian state had adopted the previous penal code of the Kingdom of Sardinia which expressly forbade the practice). In 1878, Pope Leo XIII prohibited the hiring of new castrati by the church: only in the Sistine Chapel and in other papal basilicas in Rome did a few castrati linger. A group photo of the Sistine Choir taken in 1898 shows that by then only six remained (plus the Direttore Perpetuo, the fine soprano castrato Domenico Mustafà), and in 1902 a ruling was extracted from Pope Leo that no further castrati should be admitted. The official end to the castrati came on St. Cecilia's Day, 22 November 1903, when the new pope, Pius X, issued his motu proprio, Tra le Sollecitudini ('Amongst the Cares'), which contained this instruction: \"Whenever ... it is desirable to employ the high voices of sopranos and contraltos, these parts must be taken by boys, according to the most ancient usage of the Church.\"",
"title": "Decline"
},
{
"paragraph_id": 18,
"text": "The last Sistine castrato to survive was Alessandro Moreschi, the only castrato to have made solo recordings. While an interesting historical record, these discs of his give us only a glimpse of the castrato voice – although he had been renowned as \"The Angel of Rome\" at the beginning of his career, some would say he was past his prime when the recordings were made in 1902 and 1904 and he never attempted to sing opera. Domenico Salvatori, a castrato who was contemporary with Moreschi, made some ensemble recordings with him but has no surviving solo recordings. The recording technology of the day was not of modern high quality. Salvatori died in 1909; Moreschi retired officially in March 1913, and died in 1922.",
"title": "Decline"
},
{
"paragraph_id": 19,
"text": "The Catholic Church's involvement in the castrato phenomenon has long been controversial, and there have recently been calls for it to issue an official apology for its role. As early as 1748, Pope Benedict XIV tried to ban castrati from churches, but such was their popularity at the time that he realised that doing so might result in a drastic decline in church attendance.",
"title": "Decline"
},
{
"paragraph_id": 20,
"text": "The rumours of another castrato sequestered in the Vatican for the personal delectation of the Pontiff until as recently as 1959 have been proven false. The singer in question was a pupil of Moreschi's, Domenico Mancini, such a successful imitator of his teacher's voice that even Lorenzo Perosi, Direttore Perpetuo of the Sistine Choir from 1898 to 1956 and a strenuous opponent of the practice of castrato singers, thought he was a castrato. Mancini was in fact a moderately skilful falsettist and professional double bass player.",
"title": "Decline"
},
{
"paragraph_id": 21,
"text": "A male can retain his child voice if it never changes during puberty. The retained voice can be the treble voice shared by both sexes in childhood and is the same as a boy soprano voice. But as evidence shows, many castratos, such as Senesino and Caffarelli, were actually altos (mezzo-soprano) – not sopranos. So-called \"natural\" or \"endocrinological castrati\" are born with hormonal anomalies, such as Klinefelter's syndrome and Kallmann's syndrome, or have undergone unusual physical or medical events during their early lives that reproduce the vocal effects of castration without being castrated.",
"title": "Modern castrati and similar voices"
},
{
"paragraph_id": 22,
"text": "Jimmy Scott, Radu Marian and Javier Medina are examples of this type of high male voice via endocrinological diseases. Michael Maniaci is somewhat different, in that he has no hormonal or other anomalies, but claims that his voice did not \"break\" in the usual manner, leaving him still able to sing in the soprano register. Other uncastrated male adults sing soprano, generally using some form of falsetto but in a much higher range than most countertenors. Examples are Aris Christofellis, Jörg Waschinski, and Ghio Nannini.",
"title": "Modern castrati and similar voices"
},
{
"paragraph_id": 23,
"text": "However, it is believed the castrati possessed more of a tenorial chest register (the aria \"Navigante che non spera\" in Leonardo Vinci's opera Il Medo, written for Farinelli, requires notes down to C3, 131 Hz). Similar low-voiced singing can be heard from the jazz vocalist Jimmy Scott, whose range matches approximately that used by female blues singers. High-pitched singer Jordan Smith has demonstrated having more of a tenorial chest register.",
"title": "Modern castrati and similar voices"
},
{
"paragraph_id": 24,
"text": "Actor Chris Colfer has stated in interviews that when his voice began to change at puberty, he sang in a high voice \"constantly\" in an effort to retain his range. Actor and singer Alex Newell has soprano range. Voice actor Walter Tetley may or may not have been a castrato; Bill Scott, a co-worker of Tetley's during their later work in television, once half-jokingly quipped that Tetley's mother \"had him fixed\" to protect the child star's voice-acting career. Tetley did never personally divulge the exact reason for his condition, which left him with the voice of a preteen boy for his entire adult life. Botanist George Washington Carver was noted for his high voice, believed to be the result of pertussis and croup infections in his childhood that stunted his growth.",
"title": "Modern castrati and similar voices"
}
] | A castrato is a male singer who underwent castration before puberty in order to retain a singing voice equivalent to that of a soprano, mezzo-soprano, or contralto. The voice can also occur in one who, due to an endocrinological condition, never reaches sexual maturity. Castration before puberty prevents the larynx from being transformed by the normal physiological events of puberty. As a result, the vocal range of prepubescence is largely retained, and the voice develops into adulthood in a unique way. Prepubescent castration for this purpose diminished greatly in the late 18th century. Methods of castration used to prevent the onset of puberty varied. They involved using opium to medically induce a coma, then submerging the boy in an ice or milk bath, where either the vas deferens was severed, the testicles were twisted until they atrophied, or the testicles were removed entirely by surgical cutting. The procedure was usually performed on boys around the age of 8–10; recovery took around two weeks. The means by which future singers were prepared could lead to premature death. To prevent the child from experiencing the intense pain of castration, many were inadvertently administered lethal doses of opium or some other narcotic, or were killed by overlong compression of the carotid artery in the neck. The geographical locations where these procedures took place are not specifically known. During the 18th century itself, the music historian Charles Burney was sent from pillar to post in search of places where the operation was carried out: As the castrato's body grew, his lack of testosterone meant that his epiphyses (bone-joints) did not harden in the normal manner. Thus the limbs of the castrati often grew unusually long, as did their ribs. This, combined with intensive training, gave them unrivalled lung power and breath capacity. Operating through small, child-sized vocal cords, their voices were also extraordinarily flexible, and quite different from the equivalent adult female voice. Their vocal range was higher than that of the uncastrated adult male. Listening to the only surviving recordings of a castrato, one can hear that the lower part of the voice sounds like a "super-high" tenor, with a more falsetto-like upper register above that. Castrati were rarely referred to as such: in the 18th century, the euphemism musico was much more generally used, although it usually carried derogatory implications; another synonym was evirato, literally meaning "emasculated". Eunuch is a more general term since, historically, many eunuchs were castrated after puberty and thus the castration had no impact on their voices. | 2001-06-10T00:09:00Z | 2023-12-31T12:03:52Z | [
"Template:Listen",
"Template:Cite magazine",
"Template:Castrati",
"Template:Short description",
"Template:Vocal range",
"Template:Plural form",
"Template:Circa",
"Template:Range (music)",
"Template:See also",
"Template:Cite book",
"Template:Cite web",
"Template:Opera terms",
"Template:Cite news",
"Template:More citations needed section",
"Template:Reflist",
"Template:Webarchive",
"Template:Cite journal",
"Template:Use dmy dates",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Castrato |
5,743 | Counting-out game | A counting-out game or counting-out rhyme is a simple method of 'randomly' selecting a person from a group, often used by children for the purpose of playing another game. It usually requires no materials, and is achieved with spoken words or hand gestures. The historian Henry Carrington Bolton suggested in his 1888 book Counting Out Rhymes of Children that the custom of counting out originated in the "superstitious practices of divination by lots."
Many such methods involve one person pointing at each participant in a circle of players while reciting a rhyme. A new person is pointed at as each word is said. The player who is selected at the conclusion of the rhyme is "it" or "out". In an alternate version, the circle of players may each put two feet in and at the conclusion of the rhyme, that player removes one foot and the rhyme starts over with the next person. In this case, the first player that has both feet removed is "it" or "out". In theory the outcome of a counting rhyme is determined entirely by the starting selection (and can be computed with a modulo operation), but in practice it is usually accepted as random because the number of words has not been counted beforehand, so the result is unknown until someone is selected.
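A quick worked example of the modulo point above: with n players in a circle and a rhyme of w words, counting from a fixed starting player always lands on the same position, so the "randomness" lies only in nobody having done the arithmetic beforehand. A minimal sketch in Python (player positions are 0-based):

```python
def selected(n, w, start=0):
    # Player reached by the w-th word of the rhyme, counting around a circle of n.
    return (start + w - 1) % n

# 5 players and an 8-word rhyme always pick the player at position 2.
print(selected(n=5, w=8))  # 2
```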
A variant of the counting-out game, known as the Josephus problem, represents a famous theoretical problem in mathematics and computer science.
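For reference, the survivor's position in the Josephus problem can be computed with the standard recurrence J(1) = 0, J(n) = (J(n-1) + k) mod n; the sketch below is one common formulation.

```python
def josephus(n, k):
    # 0-based position of the survivor when every k-th person is eliminated.
    survivor = 0
    for m in range(2, n + 1):
        survivor = (survivor + k) % m
    return survivor

print(josephus(7, 3))  # 3, i.e. the 4th person in the circle survives
```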
Several simple games can be played to select one person from a group, either as a straightforward winner, or as someone who is eliminated. Rock, Paper, Scissors, Odd or Even and Blue Shoe require no materials and are played using hand gestures, although with the first of these it is possible for a player to win or lose through skill rather than luck. Coin flipping and drawing straws are fair methods of randomly determining a player. Fizz Buzz is a spoken word game where if a player slips up and speaks a word out of sequence, they are eliminated.
(These rhymes may have many local or regional variants.)
A scene in the Marx Brothers movie Duck Soup plays on the fact that counting-out games are not really random. Faced with selecting someone to go on a dangerous mission, the character Chicolini (Chico Marx) chants:
only to stop as he realizes he is about to select himself. He then says, "I did it wrong. Wait, wait, I start here", and repeats the chant—with the same result. After that, he says, "That's no good too. I got it!" and reduces the chant to
And with this version he finally manages to "randomly" select someone else.
A version of a counting game "ink-a-dink" features in the Seinfeld episode "The Statue." The relevant scene includes a discussion between the characters of Jerry and George about whether the person who is "it" is the "winner" or the "loser":
{
"paragraph_id": 0,
"text": "A counting-out game or counting-out rhyme is a simple method of 'randomly' selecting a person from a group, often used by children for the purpose of playing another game. It usually requires no materials, and is achieved with spoken words or hand gestures. The historian Henry Carrington Bolton suggested in his 1888 book Counting Out Rhymes of Children that the custom of counting out originated in the \"superstitious practices of divination by lots.\"",
"title": ""
},
{
"paragraph_id": 1,
"text": "Many such methods involve one person pointing at each participant in a circle of players while reciting a rhyme. A new person is pointed at as each word is said. The player who is selected at the conclusion of the rhyme is \"it\" or \"out\". In an alternate version, the circle of players may each put two feet in and at the conclusion of the rhyme, that player removes one foot and the rhyme starts over with the next person. In this case, the first player that has both feet removed is \"it\" or \"out\". In theory a counting rhyme is determined entirely by the starting selection (and would result in a modulo operation), but in practice they are often accepted as random selections because the number of words has not been calculated beforehand, so the result is unknown until someone is selected.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A variant of counting-out game, known as the Josephus problem, represents a famous theoretical problem in mathematics and computer science.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Several simple games can be played to select one person from a group, either as a straightforward winner, or as someone who is eliminated. Rock, Paper, Scissors, Odd or Even and Blue Shoe require no materials and are played using hand gestures, although with the former it is possible for a player to win or lose through skill rather than luck. Coin flipping and drawing straws are fair methods of randomly determining a player. Fizz Buzz is a spoken word game where if a player slips up and speaks a word out of sequence, they are eliminated.",
"title": "Examples"
},
{
"paragraph_id": 4,
"text": "(These rhymes may have many local or regional variants.)",
"title": "Examples"
},
{
"paragraph_id": 5,
"text": "A scene in the Marx Brothers movie Duck Soup plays on the fact that counting-out games are not really random. Faced with selecting someone to go on a dangerous mission, the character Chicolini (Chico Marx) chants:",
"title": "Cultural references"
},
{
"paragraph_id": 6,
"text": "only to stop as he realizes he is about to select himself. He then says, \"I did it wrong. Wait, wait, I start here\", and repeats the chant—with the same result. After that, he says, \"That's no good too. I got it!\" and reduces the chant to",
"title": "Cultural references"
},
{
"paragraph_id": 7,
"text": "And with this version he finally manages to \"randomly\" select someone else.",
"title": "Cultural references"
},
{
"paragraph_id": 8,
"text": "A version of a counting game \"ink-a-dink\" features in the Seinfeld episode \"The Statue.\" The relevant scene includes a discussion between the characters of Jerry and George if the person who is \"it\" is the \"winner\" or the \"loser\":",
"title": "Cultural references"
}
] | A counting-out game or counting-out rhyme is a simple method of 'randomly' selecting a person from a group, often used by children for the purpose of playing another game. It usually requires no materials, and is achieved with spoken words or hand gestures. The historian Henry Carrington Bolton suggested in his 1888 book Counting Out Rhymes of Children that the custom of counting out originated in the "superstitious practices of divination by lots." Many such methods involve one person pointing at each participant in a circle of players while reciting a rhyme. A new person is pointed at as each word is said. The player who is selected at the conclusion of the rhyme is "it" or "out". In an alternate version, the circle of players may each put two feet in and at the conclusion of the rhyme, that player removes one foot and the rhyme starts over with the next person. In this case, the first player that has both feet removed is "it" or "out". In theory the outcome of a counting rhyme is determined entirely by the starting selection, but in practice it is usually accepted as random because the number of words has not been counted beforehand, so the result is unknown until someone is selected. A variant of the counting-out game, known as the Josephus problem, represents a famous theoretical problem in mathematics and computer science. | 2001-06-10T04:00:11Z | 2023-11-12T14:39:13Z | [
"Template:Lang",
"Template:Cite book",
"Template:Pre-ISBN",
"Template:Cite web",
"Template:Authority control",
"Template:Page needed",
"Template:'",
"Template:Reflist",
"Template:Citation",
"Template:Portal bar",
"Template:Singing games",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Counting-out_game |
5,749 | Key size | In cryptography, key size or key length refers to the number of bits in a key used by a cryptographic algorithm (such as a cipher).
Key length defines the upper bound on an algorithm's security (i.e. a logarithmic measure of the fastest known attack against an algorithm), because the security of all algorithms can be violated by brute-force attacks. Ideally, the lower bound on an algorithm's security is by design equal to the key length (that is, the algorithm's design does not detract from the degree of security inherent in the key length).
Most symmetric-key algorithms are designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance, Triple DES was designed to have a 168-bit key, but an attack of complexity 2^112 is now known (i.e. Triple DES now only has 112 bits of security, and of the 168 bits in the key the attack has rendered 56 'ineffective' towards security). Nevertheless, as long as the security (understood as "the amount of effort it would take to gain access") is sufficient for a particular application, then it does not matter if key length and security coincide. This is important for asymmetric-key algorithms, because no such algorithm is known to satisfy this property; elliptic curve cryptography comes the closest with an effective security of roughly half its key length.
Keys are used to control the operation of a cipher so that only the correct key can convert encrypted text (ciphertext) to plaintext. All commonly used ciphers are based on publicly known algorithms or are open source, and so it is only the difficulty of obtaining the key that determines the security of the system, provided that there is no analytic attack (i.e. a "structural weakness" in the algorithms or protocols used), and assuming that the key is not otherwise available (such as via theft, extortion, or compromise of computer systems). The widely accepted notion that the security of the system should depend on the key alone has been explicitly formulated by Auguste Kerckhoffs (in the 1880s) and Claude Shannon (in the 1940s); the statements are known as Kerckhoffs' principle and Shannon's Maxim respectively.
A key should, therefore, be large enough that a brute-force attack (possible against any encryption algorithm) is infeasible – i.e. would take too long and/or would take too much memory to execute. Shannon's work on information theory showed that to achieve so-called 'perfect secrecy', the key length must be at least as large as the message and only used once (this algorithm is called the one-time pad). In light of this, and the practical difficulty of managing such long keys, modern cryptographic practice has discarded the notion of perfect secrecy as a requirement for encryption, and instead focuses on computational security, under which the computational requirements of breaking an encrypted text must be infeasible for an attacker.
Encryption systems are often grouped into families. Common families include symmetric systems (e.g. AES) and asymmetric systems (e.g. RSA and elliptic-curve cryptography). They may be grouped according to the central algorithm used (e.g. elliptic curve cryptography and Feistel ciphers). Because each of these has a different level of cryptographic complexity, it is usual to have different key sizes for the same level of security, depending upon the algorithm used. For example, the security available with a 1024-bit key using asymmetric RSA is considered approximately equal in security to an 80-bit key in a symmetric algorithm.
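The rough equivalences alluded to above are tabulated, for example, in NIST SP 800-57 Part 1; the figures below follow that table, though the exact numbers vary between sources.

```python
# Comparable strengths (bits): symmetric key vs. RSA/DH modulus vs. ECC key.
comparable = [(80, 1024, 160), (112, 2048, 224), (128, 3072, 256),
              (192, 7680, 384), (256, 15360, 512)]
for sym, rsa, ecc in comparable:
    print(f"{sym:>3}-bit symmetric  ~  RSA-{rsa:<5}  ~  {ecc}-bit ECC")
```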
The actual degree of security achieved over time varies, as more computational power and more powerful mathematical analytic methods become available. For this reason, cryptologists tend to look at indicators that an algorithm or key length shows signs of potential vulnerability, to move to longer key sizes or more difficult algorithms. For example, as of May 2007, a 1039-bit integer was factored with the special number field sieve using 400 computers over 11 months. The factored number was of a special form; the special number field sieve cannot be used on RSA keys. The computation is roughly equivalent to breaking a 700-bit RSA key. However, this might be an advance warning that 1024-bit RSA keys used in secure online commerce should be deprecated, since they may become breakable in the foreseeable future. Cryptography professor Arjen Lenstra observed that "Last time, it took nine years for us to generalize from a special to a nonspecial, hard-to-factor number" and when asked whether 1024-bit RSA keys are dead, said: "The answer to that question is an unqualified yes."
The 2015 Logjam attack revealed additional dangers in using Diffie-Hellman key exchange when only one or a few common 1024-bit or smaller prime moduli are in use. This practice, somewhat common at the time, allows large amounts of communications to be compromised at the expense of attacking a small number of primes.
Even if a symmetric cipher is currently unbreakable by exploiting structural weaknesses in its algorithm, it may be possible to run through the entire space of keys in what is known as a brute-force attack. Because longer symmetric keys require exponentially more work to search by brute force, a sufficiently long symmetric key makes this line of attack impractical.
With a key of length n bits, there are 2^n possible keys. This number grows very rapidly as n increases. The large number of operations (2^128) required to try all possible 128-bit keys is widely considered out of reach for conventional digital computing techniques for the foreseeable future. However, a quantum computer capable of running Grover's algorithm would be able to search the possible keys more efficiently: a suitably sized quantum computer would reduce the effective security of a 128-bit key down to 64 bits, roughly a DES equivalent. This is one of the reasons why AES supports a 256-bit key length.
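The scale of these numbers is easy to check directly. The sketch below assumes an arbitrary guess rate of one trillion keys per second; this rate is purely illustrative, not a benchmark of any real hardware:

```python
# Back-of-the-envelope brute-force estimates; the guess rate is an
# assumption chosen only to make the orders of magnitude visible.
GUESSES_PER_SECOND = 1e12
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for n in (56, 128, 256):
    keyspace = 2 ** n  # an n-bit key has 2^n possible values
    years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{n}-bit key: 2^{n} keys, about {years:.3g} years to exhaust")
```

At this assumed rate a 56-bit key space is exhausted in under a day, while a 128-bit key space requires on the order of 10^19 years, which is why the 128-bit figure is considered out of reach for conventional computing.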
IBM's Lucifer cipher was selected in 1974 as the base for what would become the Data Encryption Standard. Lucifer's key length was reduced from 128 bits to 56 bits, which the NSA and the National Bureau of Standards (NIST's predecessor) argued was sufficient for non-governmental protection at the time. The NSA has major computing resources and a large budget; some cryptographers, including Whitfield Diffie and Martin Hellman, complained that this made the cipher so weak that NSA computers would be able to break a DES key in a day through brute-force parallel computing. The NSA disputed this, claiming that brute-forcing DES would take them "something like 91 years".
However, by the late 1990s, it became clear that DES could be cracked in a few days' time-frame with custom-built hardware such as could be purchased by a large corporation or government. The book Cracking DES (O'Reilly and Associates) recounts the successful 1998 effort to break 56-bit DES by a brute-force attack mounted by a digital civil-liberties group with limited resources; see EFF DES cracker. Even before that demonstration, 56 bits was considered insufficient length for symmetric algorithm keys for general use. Because of this, DES was replaced in most security applications by Triple DES, which has 112 bits of security when using 168-bit keys (triple key).
The Advanced Encryption Standard published in 2001 uses key sizes of 128, 192 or 256 bits. Many observers consider 128 bits sufficient for the foreseeable future for symmetric algorithms of AES's quality until quantum computers become available. However, as of 2015, the U.S. National Security Agency has issued guidance that it plans to switch to quantum computing resistant algorithms and now requires 256-bit AES keys for data classified up to Top Secret.
In 2003, the U.S. National Institute of Standards and Technology (NIST) proposed phasing out 80-bit keys by 2015. By 2005, 80-bit keys were allowed only until 2010.
Since 2015, NIST guidance says that "the use of keys that provide less than 112 bits of security strength for key agreement is now disallowed." NIST approved symmetric encryption algorithms include three-key Triple DES, and AES. Approvals for two-key Triple DES and Skipjack were withdrawn in 2015; the NSA's Skipjack algorithm used in its Fortezza program employs 80-bit keys.
The effectiveness of public key cryptosystems depends on the intractability (computational and theoretical) of certain mathematical problems such as integer factorization. These problems are time-consuming to solve, but usually faster than trying all possible keys by brute force. Thus, asymmetric keys must be longer than symmetric keys to offer equivalent resistance to attack. The most common methods are assumed to be weak against sufficiently powerful quantum computers in the future.
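The gap between solving the underlying problem and exhaustive key search can be seen even in a toy example. The sketch below uses naive trial division on a small semiprime; it is purely illustrative, since real RSA moduli are thousands of bits long and real attacks use far better methods such as the number field sieve:

```python
def trial_division(n: int) -> tuple[int, int]:
    # Naive factoring: the loop runs at most ~sqrt(n) times, i.e. about
    # 2^(b/2) steps for a b-bit modulus -- already far fewer than the
    # 2^b guesses of a brute-force search over b-bit keys.
    f = 2
    while n % f:
        f += 1
    return f, n // f

# Toy modulus of roughly 32 bits, the product of two small primes;
# real RSA moduli are 2048 bits or more.
p, q = trial_division(65537 * 65539)
assert (p, q) == (65537, 65539)
```

Because the best known attacks on these problems are much faster than exhaustive search, the key must be made much longer to push the attack cost back up to the desired security level.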
Since 2015, NIST recommends a minimum of 2048-bit keys for RSA, an update to the widely-accepted recommendation of a 1024-bit minimum since at least 2002.
1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys, 3072-bit RSA keys to 128-bit symmetric keys, and 15360-bit RSA keys to 256-bit symmetric keys. In 2003, RSA Security claimed that 1024-bit keys were likely to become crackable some time between 2006 and 2010, while 2048-bit keys would be sufficient until 2030. As of 2020, the largest RSA key publicly known to be cracked is RSA-250, with 829 bits.
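These published equivalences amount to a small lookup table. The helper below simply encodes the figures quoted above; the function name is illustrative and not part of any standard library:

```python
# NIST strength equivalences quoted above: RSA modulus size (bits)
# mapped to the comparable symmetric key size (bits of security).
RSA_TO_SYMMETRIC = {1024: 80, 2048: 112, 3072: 128, 15360: 256}

def symmetric_equivalent(rsa_bits: int) -> int:
    """Return the symmetric security level comparable to an RSA key size."""
    return RSA_TO_SYMMETRIC[rsa_bits]

assert symmetric_equivalent(2048) == 112  # current NIST minimum for RSA
```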
The Finite Field Diffie-Hellman algorithm has roughly the same key strength as RSA for the same key sizes. The work factor for breaking Diffie-Hellman is based on the discrete logarithm problem, which is related to the integer factorization problem on which RSA's strength is based. Thus, a 2048-bit Diffie-Hellman key has about the same strength as a 2048-bit RSA key.
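The exchange itself is short enough to sketch with toy numbers. The parameters below are deliberately tiny and insecure, chosen only to show the algebra; as discussed above, real deployments need moduli of 2048 bits or more and should avoid relying on a handful of shared common primes:

```python
import secrets

p = 23  # toy prime modulus (real deployments: 2048+ bits)
g = 5   # toy generator

a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent

A = pow(g, a, p)  # Alice publishes g^a mod p
B = pow(g, b, p)  # Bob publishes g^b mod p

# Each side combines its own secret with the other's public value;
# both arrive at the same shared secret g^(a*b) mod p.
assert pow(B, a, p) == pow(A, b, p)
```

Breaking the exchange requires recovering a or b from the public values, i.e. solving the discrete logarithm problem described above.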
Elliptic-curve cryptography (ECC) is an alternative set of asymmetric algorithms that is equivalently secure with shorter keys, requiring only approximately twice as many bits as the equivalent symmetric algorithm. A 256-bit ECDH key has approximately the same safety factor as a 128-bit AES key. A message encrypted with an elliptic key algorithm using a 109-bit key was broken in 2004.
The NSA previously recommended 256-bit ECC for protecting classified information up to the SECRET level, and 384-bit for TOP SECRET; in 2015 it announced plans to transition to quantum-resistant algorithms by 2024, and until then recommends 384-bit for all classified information.
The two best known quantum computing attacks are based on Shor's algorithm and Grover's algorithm. Of the two, Shor's offers the greater risk to current security systems.
Derivatives of Shor's algorithm are widely conjectured to be effective against all mainstream public-key algorithms including RSA, Diffie-Hellman and elliptic curve cryptography. According to Professor Gilles Brassard, an expert in quantum computing: "The time needed to factor an RSA integer is the same order as the time needed to use that same integer as modulus for a single RSA encryption. In other words, it takes no more time to break RSA on a quantum computer (up to a multiplicative constant) than to use it legitimately on a classical computer." The general consensus is that these public key algorithms are insecure at any key size if sufficiently large quantum computers capable of running Shor's algorithm become available. The implication of this attack is that all data encrypted using current standards-based security systems, such as the ubiquitous SSL used to protect e-commerce and Internet banking and SSH used to protect access to sensitive computing systems, is at risk. Encrypted data protected using public-key algorithms can be archived and may be broken at a later time, commonly known as retroactive/retrospective decryption or "harvest and decrypt".
Mainstream symmetric ciphers (such as AES or Twofish) and collision resistant hash functions (such as SHA) are widely conjectured to offer greater security against known quantum computing attacks. Among quantum attacks, they are widely thought most vulnerable to Grover's algorithm. Bennett, Bernstein, Brassard, and Vazirani proved in 1996 that a brute-force key search on a quantum computer cannot be faster than roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case. Thus in the presence of large quantum computers an n-bit key can provide at least n/2 bits of security. Quantum brute force is easily defeated by doubling the key length, which has little extra computational cost in ordinary use. This implies that at least a 256-bit symmetric key is required to achieve a 128-bit security rating against a quantum computer. As mentioned above, the NSA announced in 2015 that it plans to transition to quantum-resistant algorithms.
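The doubling rule is simple arithmetic on the exponents: if Grover's algorithm needs about 2^(n/2) invocations, an n-bit key yields about n/2 bits of security, so doubling n restores the original target. A minimal check (the figures are the theoretical bounds from the result above, not hardware estimates):

```python
# Grover's algorithm reduces brute-force search from ~2^n to ~2^(n/2)
# invocations, so quantum security is roughly half the key length.
def quantum_security_bits(key_bits: int) -> int:
    return key_bits // 2

assert quantum_security_bits(128) == 64   # AES-128 falls to roughly DES level
assert quantum_security_bits(256) == 128  # doubling the key restores 128-bit security
```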
According to the NSA:
"A sufficiently large quantum computer, if built, would be capable of undermining all widely-deployed public key algorithms used for key establishment and digital signatures. ... It is generally accepted that quantum computing techniques are much less effective against symmetric algorithms than against current widely used public key algorithms. While public key cryptography requires changes in the fundamental design to protect against a potential future quantum computer, symmetric key algorithms are believed to be secure provided a sufficiently large key size is used. ... In the longer term, NSA looks to NIST to identify a broadly accepted, standardized suite of commercial public key algorithms that are not vulnerable to quantum attacks."
As of 2016, the NSA's Commercial National Security Algorithm Suite includes AES with 256-bit keys, elliptic-curve Diffie-Hellman and ECDSA with curve P-384, SHA-384, and Diffie-Hellman and RSA with minimum 3072-bit moduli.
https://en.wikipedia.org/wiki/Key_size
5,750 | Cognitive behavioral therapy | Cognitive behavioral therapy (CBT) is a psycho-social intervention that aims to reduce symptoms of various mental health conditions, primarily depression and anxiety disorders. Cognitive behavioral therapy is one of the most effective means of treatment for substance abuse and co-occurring mental health disorders. CBT focuses on challenging and changing cognitive distortions (such as thoughts, beliefs, and attitudes) and their associated behaviors to improve emotional regulation and develop personal coping strategies that target solving current problems. Though it was originally designed to treat depression, its uses have been expanded to include many issues and the treatment of many mental health conditions, including anxiety, substance use disorders, marital problems, ADHD, and eating disorders. CBT includes a number of cognitive or behavioral psychotherapies that treat defined psychopathologies using evidence-based techniques and strategies.
CBT is a common form of talk therapy based on the combination of the basic principles from behavioral and cognitive psychology. It is different from historical approaches to psychotherapy, such as the psychoanalytic approach where the therapist looks for the unconscious meaning behind the behaviors, and then formulates a diagnosis. Instead, CBT is a "problem-focused" and "action-oriented" form of therapy, meaning it is used to treat specific problems related to a diagnosed mental disorder. The therapist's role is to assist the client in finding and practicing effective strategies to address the identified goals and to alleviate symptoms of the disorder. CBT is based on the belief that thought distortions and maladaptive behaviors play a role in the development and maintenance of many psychological disorders and that symptoms and associated distress can be reduced by teaching new information-processing skills and coping mechanisms.
Review studies have found CBT alone to be as effective as psychoactive medications for treating less severe forms of depression, anxiety, post-traumatic stress disorder (PTSD), tics, substance use disorders, eating disorders, and borderline personality disorder. Some research suggests that CBT is most effective when combined with medication for treating mental disorders, such as major depressive disorder. CBT is recommended as the first line of treatment for the majority of psychological disorders in children and adolescents, including aggression and conduct disorder. Researchers have found that other bona fide therapeutic interventions were equally effective for treating certain conditions in adults. Along with interpersonal psychotherapy (IPT), CBT is recommended in treatment guidelines as a psychosocial treatment of choice.
For thousands of years, humans have looked to faith and religious belief for answers to their emotional problems. The majority of studies show that having a faith or belief is, in general, good for one's mental health. Religions have initiated charities specifically to help with mental health problems, such as the Samaritans. CBT was developed from empirical studies that did not initially consider faith as a variable. However, as investigations on the role of religious belief and practice have grown in popularity, evidence has been gathered in various religious groups, including randomized controlled trials of CBT adapted for Judaism, Taoism and, most commonly, Christianity.
Concepts drawn from Buddhism have influenced the development of several newer forms of CBT such as Dialectical Behavior Therapy, Mindfulness-Based Cognitive Therapy, Spirituality-Based CBT and Compassion Focused Therapy. Generic spiritual concepts (such as hope and well-being) and the importance of virtues such as fortitude and humility have been researched and reviewed.
Islamic psychology within the Sufi tradition was established as far back as the 11th century by Al Ghazali, who described the self as made up of four elements: heart, spirit, soul, and intellect. These can be respectively linked to CBT domains like emotions, behaviors, thoughts, and the capacity for reflection.
Pentecostal Christians often talk about three levels – body, soul, and spirit – with the body earthly, the spirit holy, and the soul somewhere in between. Used helpfully, this framework allows the therapist to encourage such persons to think that (like the body) the mind/brain can become ill and make them feel depressed, but the soul can still be there – enabled to retain a degree of objectivity because it is still in touch with the spirit.
Precursors of certain fundamental aspects of CBT have been identified in various ancient philosophical traditions, particularly Stoicism. Stoic philosophers, particularly Epictetus, believed logic could be used to identify and discard false beliefs that lead to destructive emotions, which has influenced the way modern cognitive-behavioral therapists identify cognitive distortions that contribute to depression and anxiety. Aaron T. Beck's original treatment manual for depression states, "The philosophical origins of cognitive therapy can be traced back to the Stoic philosophers". Another example of Stoic influence on cognitive theorists is Epictetus's influence on Albert Ellis. A key philosophical figure who influenced the development of CBT was John Stuart Mill, through his work on associationism, a predecessor of classical conditioning and behavioral theory.
The modern roots of CBT can be traced to the development of behavior therapy in the early 20th century, the development of cognitive therapy in the 1960s, and the subsequent merging of the two.
Groundbreaking work of behaviorism began with John B. Watson and Rosalie Rayner's studies of conditioning in 1920. Behaviorally-centered therapeutic approaches appeared as early as 1924 with Mary Cover Jones' work dedicated to the unlearning of fears in children. These were the antecedents of the development of Joseph Wolpe's behavioral therapy in the 1950s. It was the work of Wolpe and Watson, which was based on Ivan Pavlov's work on learning and conditioning, that influenced Hans Eysenck and Arnold Lazarus to develop new behavioral therapy techniques based on classical conditioning.
During the 1950s and 1960s, behavioral therapy became widely used by researchers in the United States, the United Kingdom, and South Africa, who drew inspiration from the behaviorist learning theory of Ivan Pavlov, John B. Watson, and Clark L. Hull.
In Britain, Joseph Wolpe applied the findings of animal experiments to his method of systematic desensitization, bringing behavioral research to the treatment of neurotic disorders. Wolpe's therapeutic efforts were precursors to today's fear-reduction techniques. British psychologist Hans Eysenck presented behavior therapy as a constructive alternative.
At the same time as Eysenck's work, B. F. Skinner and his associates were beginning to have an impact with their work on operant conditioning. Skinner's work was referred to as radical behaviorism and avoided anything related to cognition. However, Julian Rotter in 1954 and Albert Bandura in 1969 contributed to behavior therapy with their works on social learning theory by demonstrating the effects of cognition on learning and behavior modification. The work of Claire Weekes in dealing with anxiety disorders in the 1960s is also seen as a prototype of behavior therapy.
The emphasis on behavioral factors has been described as the "first wave" of CBT.
One of the first therapists to address cognition in psychotherapy was Alfred Adler, notably with his idea of basic mistakes and how they contributed to the creation of unhealthy behavioral and life goals. Abraham Low believed that someone's thoughts were best changed by changing their actions. Adler and Low influenced the work of Albert Ellis, who developed the earliest cognitive-based psychotherapy, called rational emotive behavior therapy, or REBT. The first version of REBT was announced to the public in 1956.
In the late 1950s, Aaron T. Beck was conducting free association sessions in his psychoanalytic practice. During these sessions, Beck noticed that thoughts were not as unconscious as Freud had previously theorized, and that certain types of thinking may be the culprits of emotional distress. It was from this hypothesis that Beck developed cognitive therapy, and called these thoughts "automatic thoughts". He first published his new methodology in 1967, and his first treatment manual in 1979. Beck has been referred to as "the father of cognitive behavioral therapy".
It was these two therapies, rational emotive therapy and cognitive therapy, that started the "second wave" of CBT, which emphasized cognitive factors.
Although the early behavioral approaches were successful in many so-called neurotic disorders, they had little success in treating depression. Behaviorism was also losing popularity due to the cognitive revolution. The therapeutic approaches of Albert Ellis and Aaron T. Beck gained popularity among behavior therapists, despite the earlier behaviorist rejection of mentalistic concepts like thoughts and cognitions. Both of these systems included behavioral elements and interventions, with the primary focus being on problems in the present.
In initial studies, cognitive therapy was often contrasted with behavioral treatments to see which was most effective. During the 1980s and 1990s, cognitive and behavioral techniques were merged into cognitive behavioral therapy. Pivotal to this merging was the successful development of treatments for panic disorder by David M. Clark in the UK and David H. Barlow in the US.
Over time, cognitive behavior therapy came to be known not only as a therapy, but as an umbrella term for all cognitive-based psychotherapies. These therapies include, but are not limited to, REBT, cognitive therapy, acceptance and commitment therapy, dialectical behavior therapy, metacognitive therapy, metacognitive training, reality therapy/choice theory, cognitive processing therapy, EMDR, and multimodal therapy.
This blending of theoretical and technical foundations from both behavior and cognitive therapies constituted the "third wave" of CBT. The most prominent therapies of this third wave are dialectical behavior therapy and acceptance and commitment therapy. Despite the increasing popularity of third-wave treatment approaches, reviews of studies reveal there may be no difference in the effectiveness compared with non-third wave CBT for the treatment of depression.
In adults, CBT has been shown to be an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression, eating disorders, chronic low back pain, personality disorders, psychosis, schizophrenia, substance use disorders, and bipolar disorder. It is also effective as part of treatment plans in the adjustment, depression, and anxiety associated with fibromyalgia, and with post-spinal cord injuries.
In children or adolescents, CBT is an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression and suicidality, eating disorders and obesity, obsessive–compulsive disorder (OCD), and post-traumatic stress disorder (PTSD), as well as tic disorders, trichotillomania, and other repetitive behavior disorders. CBT has also been applied to a variety of childhood disorders, including depressive disorders and various anxiety disorders. CBT has shown to be the most effective intervention for people exposed to adverse childhood experiences in the form of abuse or neglect.
Criticism of CBT sometimes focuses on implementations (such as the UK IAPT) which may initially result in low-quality therapy being offered by poorly trained practitioners. However, evidence supports the effectiveness of CBT for anxiety and depression.
Evidence suggests that the addition of hypnotherapy as an adjunct to CBT improves treatment efficacy for a variety of clinical issues.
The United Kingdom's National Institute for Health and Care Excellence (NICE) recommends CBT in the treatment plans for a number of mental health difficulties, including PTSD, OCD, bulimia nervosa, and clinical depression.
Cognitive behavioral therapy has been shown as an effective treatment for clinical depression. The American Psychiatric Association Practice Guidelines (April 2000) indicated that, among psychotherapeutic approaches, cognitive behavioral therapy and interpersonal psychotherapy had the best-documented efficacy for treatment of major depressive disorder.
A 2001 meta-analysis comparing CBT and psychodynamic psychotherapy suggested the approaches were equally effective in the short term for depression. In contrast, a 2013 meta-analysis suggested that CBT, interpersonal therapy, and problem-solving therapy outperformed psychodynamic psychotherapy and behavioral activation in the treatment of depression.
According to a 2004 INSERM review of three psychotherapy approaches, cognitive behavioral therapy was either proven or presumed to be an effective therapy for several mental disorders, including depression, panic disorder, post-traumatic stress, and other anxiety disorders.
CBT has been shown to be effective in the treatment of adults with anxiety disorders.
A 2018 systematic review found high-strength evidence that CBT-exposure therapy can reduce PTSD symptoms and lead to the loss of a PTSD diagnosis. CBT has also been shown to be effective for post-traumatic stress disorder in very young children (3 to 6 years of age). A Cochrane review found low-quality evidence that CBT may be more effective than other psychotherapies in reducing symptoms of posttraumatic stress disorder in children and adolescents.
A systematic review of CBT in depression and anxiety disorders concluded that "CBT delivered in primary care, especially including computer- or Internet-based self-help programs, is potentially more effective than usual care and could be delivered effectively by primary care therapists."
Some meta-analyses find CBT more effective than psychodynamic therapy and equal to other therapies in treating anxiety and depression.
One etiological theory of depression is Aaron T. Beck's cognitive theory of depression. His theory states that depressed people think the way they do because their thinking is biased towards negative interpretations. According to this theory, depressed people acquire a negative schema of the world in childhood and adolescence as an effect of stressful life events, and the negative schema is activated later in life when the person encounters similar situations.
Beck also described a negative cognitive triad. The cognitive triad is made up of the depressed individual's negative evaluations of themselves, the world, and the future. Beck suggested that these negative evaluations derive from the negative schemata and cognitive biases of the person. According to this theory, depressed people have views such as "I never do a good job", "It is impossible to have a good day", and "things will never get better". A negative schema helps give rise to the cognitive bias, and the cognitive bias helps fuel the negative schema. Beck further proposed that depressed people often have the following cognitive biases: arbitrary inference, selective abstraction, overgeneralization, magnification, and minimization. These cognitive biases are quick to make negative, generalized, and personal inferences of the self, thus fueling the negative schema.
A basic concept in some CBT treatments used in anxiety disorders is in vivo exposure. CBT-exposure therapy refers to the direct confrontation of feared objects, activities, or situations by a patient. For example, a woman with PTSD who fears the location where she was assaulted may be assisted by her therapist in going to that location and directly confronting those fears. Likewise, a person with a social anxiety disorder who fears public speaking may be instructed to directly confront those fears by giving a speech. According to the "two-factor" model often credited to O. Hobart Mowrer, such fears are acquired through classical conditioning and maintained through avoidance. Through exposure to the stimulus, this harmful conditioning can be "unlearned" (referred to as extinction and habituation).
CBT for children with phobias is normally delivered over multiple sessions, but one-session treatment has been shown to be equally effective and is cheaper.
CBT-SP, an adaptation of CBT for suicide prevention (SP), was specifically designed for treating severely depressed youths who have attempted suicide within the past 90 days, and was found to be effective, feasible, and acceptable.
Acceptance and commitment therapy (ACT) is a specialist branch of CBT (sometimes referred to as contextual CBT). ACT uses mindfulness and acceptance interventions and has been found to produce longer-lasting therapeutic outcomes. In a study with anxiety, CBT and ACT improved similarly across all outcomes from pre- to post-treatment. However, during a 12-month follow-up, ACT proved to be more effective, showing that it is a highly viable lasting treatment model for anxiety disorders.
Computerized CBT (CCBT) has been proven to be effective by randomized controlled and other trials in treating depression and anxiety disorders, including in children. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be equally effective as face-to-face CBT in adolescent anxiety.
Studies of animals and humans have provided evidence that glucocorticoids may lead to more successful extinction learning during exposure therapy for anxiety disorders. For instance, glucocorticoids can prevent aversive learning episodes from being retrieved and heighten reinforcement of memory traces, creating a non-fearful reaction in feared situations. A combination of glucocorticoids and exposure therapy may therefore be an improved treatment for people with anxiety disorders.
For anxiety disorders, use of CBT with people at risk has significantly reduced the number of episodes of generalized anxiety disorder and other anxiety symptoms, and also given significant improvements in explanatory style, hopelessness, and dysfunctional attitudes. In another study, 3% of the group receiving the CBT intervention developed generalized anxiety disorder by 12 months postintervention compared with 14% in the control group. Individuals with subthreshold levels of panic disorder significantly benefitted from use of CBT. Use of CBT was found to significantly reduce social anxiety prevalence.
For depressive disorders, a stepped-care intervention (watchful waiting, CBT and medication if appropriate) achieved a 50% lower incidence rate in a patient group aged 75 or older. Another depression study found a neutral effect compared to personal, social, and health education, and usual school provision, and included a comment on the potential for increased depression scores in people who received CBT, owing to greater self-recognition and acknowledgement of existing symptoms of depression and negative thinking styles. A further study also saw a neutral result. A meta-study of the Coping with Depression course, a cognitive behavioral intervention delivered by a psychoeducational method, saw a 38% reduction in risk of major depression.
Many studies show that CBT combined with pharmacotherapy is effective in improving depressive symptoms, mania severity, and psychosocial functioning, with mild to moderate effects, and that it is better than medication alone.
INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including schizophrenia, depression, bipolar disorder, panic disorder, post-traumatic stress, anxiety disorders, bulimia, anorexia, personality disorders, and alcohol dependency.
In long-term psychoses, CBT is used to complement medication and is adapted to meet individual needs. Interventions particularly related to these conditions include exploring reality testing, changing delusions and hallucinations, examining factors which precipitate relapse, and managing relapses. Meta-analyses confirm the effectiveness of metacognitive training (MCT) for the improvement of positive symptoms (e.g., delusions).
For people at risk of psychosis, in 2014 the UK National Institute for Health and Care Excellence (NICE) recommended preventive CBT.
INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including schizophrenia.
A Cochrane review reported CBT had "no effect on long-term risk of relapse" and no additional effect above standard care. A 2015 systematic review investigated the effects of CBT compared with other psychosocial therapies for people with schizophrenia and determined that there is no clear advantage over other, often less expensive, interventions but acknowledged that better quality evidence is needed before firm conclusions can be drawn.
CBT is also used for pathological and problem gambling. The percentage of people worldwide who gamble problematically is 1–3%. Cognitive behavioral therapy develops skills for relapse prevention, and a person can learn to control their mind and manage high-risk situations. There is evidence of the efficacy of CBT for treating pathological and problem gambling at immediate follow-up; however, the longer-term efficacy of CBT for it is currently unknown.
CBT looks at the habit of smoking cigarettes as a learned behavior, which later evolves into a coping strategy to handle daily stressors. Since smoking is often easily accessible and quickly allows the user to feel good, it can take precedence over other coping strategies, and eventually work its way into everyday life during non-stressful events as well. CBT aims to target the function of the behavior, as it can vary between individuals, and works to inject other coping mechanisms in place of smoking. CBT also aims to support individuals with strong cravings, which are a major reported reason for relapse during treatment.
A 2008 controlled study out of Stanford University School of Medicine suggested CBT may be an effective tool to help maintain abstinence. The results of 304 randomly assigned adult participants were tracked over the course of one year. During this program, some participants were provided medication, CBT, 24-hour phone support, or some combination of the three methods. At 20 weeks, the participants who received CBT had a 45% abstinence rate, versus non-CBT participants, who had a 29% abstinence rate. Overall, the study concluded that emphasizing cognitive and behavioral strategies to support smoking cessation can help individuals build tools for long-term smoking abstinence.
Mental health history can affect the outcomes of treatment. Individuals with a history of depressive disorders had a lower rate of success when using CBT alone to combat smoking addiction.
A Cochrane review was unable to find evidence of any difference between CBT and hypnosis for smoking cessation. While this may be evidence of no effect, further research may uncover an effect of CBT for smoking cessation.
Studies have shown CBT to be an effective treatment for substance use disorders. For individuals with substance use disorders, CBT aims to reframe maladaptive thoughts, such as denial, minimizing and catastrophizing thought patterns, with healthier narratives. Specific techniques include identifying potential triggers and developing coping mechanisms to manage high-risk situations. Research has shown CBT to be particularly effective when combined with other therapy-based treatments or medication.
INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including alcohol dependency.
Research has identified Internet addiction as a new clinical disorder that causes relational, occupational, and social problems. Cognitive behavioral therapy (CBT) has been suggested as the treatment of choice for Internet addiction, and addiction recovery in general has used CBT as part of treatment planning. There is also evidence for the efficacy of CBT in multicenter randomized controlled trials such as STICA (Short-Term Treatment of Internet and Computer Game Addiction).
Though many forms of treatment can support individuals with eating disorders, CBT is proven to be a more effective treatment than medications and interpersonal psychotherapy alone. CBT aims to combat major causes of distress such as negative cognitions surrounding body weight, shape, and size. CBT therapists also work with individuals to regulate strong emotions and thoughts that lead to dangerous compensatory behaviors. CBT is the first line of treatment for bulimia nervosa and non-specific eating disorders. While there is evidence to support the efficacy of CBT for bulimia nervosa and binging, the evidence is somewhat variable and limited by small study sizes. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bulimia and anorexia nervosa.
Emerging evidence for cognitive behavioral interventions aimed at reducing symptoms of depression, anxiety, and obsessive-compulsive disorder in autistic adults without intellectual disability has been identified through a systematic review. While the research was focused on adults, cognitive behavioral interventions have also been beneficial to autistic children.
A Cochrane review in 2022 found that adults with dementia and mild cognitive impairment (MCI) who experience symptoms of depression may benefit from CBT, whereas other counselling or supportive interventions might not improve symptoms significantly. Across five different psychometric scales, on which higher scores indicate greater severity of depression, adults receiving CBT reported somewhat lower scores than those receiving usual care for dementia and MCI overall. In this review, a sub-group analysis found clinically significant benefits only among those diagnosed with dementia, rather than MCI.
The likelihood of remission from depression also appeared to be 84% higher following CBT, though the evidence for this was less certain. Anxiety, cognition and other neuropsychiatric symptoms were not significantly improved following CBT, however this review did find moderate evidence of improved quality of life and daily living activity scores in those with dementia and MCI.
Cognitive behavioural therapy interventions may have some benefits for people who have post-traumatic stress related to surviving rape, sexual abuse, or sexual assault.
Evidence suggests a possible role for CBT in the treatment of attention deficit hyperactivity disorder (ADHD), hypochondriasis, and bipolar disorder, but more study is needed and results should be interpreted with caution. CBT has been studied as an aid in the treatment of anxiety associated with stuttering. Initial studies have shown CBT to be effective in reducing social anxiety in adults who stutter, but not in reducing stuttering frequency.
There is some evidence that CBT is superior in the long term to benzodiazepines and the nonbenzodiazepines in the treatment and management of insomnia. Computerized CBT (CCBT) has been proven to be effective by randomized controlled and other trials in treating insomnia. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be equally effective as face-to-face CBT in insomnia.
A Cochrane review of interventions aimed at preventing psychological stress in healthcare workers found that CBT was more effective than no intervention but no more effective than alternative stress-reduction interventions.
Cochrane Reviews have found no convincing evidence that CBT training helps foster care providers manage difficult behaviors in the youths under their care, nor was it helpful in treating people who abuse their intimate partners.
CBT has been applied in both clinical and non-clinical environments to treat disorders such as personality disorders and behavioral problems. INSERM's 2004 review found that CBT is an effective therapy for personality disorders.
In the case of people with metastatic breast cancer, data is limited but CBT and other psychosocial interventions might help with psychological outcomes and pain management. A 2015 Cochrane review also found that CBT for symptomatic management of non-specific chest pain is probably effective in the short term. However, the findings were limited by small trials and the evidence was considered of questionable quality. Cochrane reviews have found no evidence that CBT is effective for tinnitus, although there appears to be an effect on management of associated depression and quality of life in this condition. CBT combined with hypnosis and distraction reduces self-reported pain in children.
There is limited evidence to support CBT's use in managing the impact of multiple sclerosis, sleep disturbances related to aging, and dysmenorrhea, but more study is needed and results should be interpreted with caution.
Previously, CBT was considered moderately effective for treating chronic fatigue syndrome; however, a National Institutes of Health Pathways to Prevention Workshop stated that, with respect to improving treatment options for ME/CFS, the modest benefit from cognitive behavioral therapy should be studied as an adjunct to other methods. The Centers for Disease Control and Prevention's advice on the treatment of ME/CFS makes no reference to CBT, while the National Institute for Health and Care Excellence states that cognitive behavioral therapy (CBT) has sometimes been assumed to be a cure for ME/CFS; however, it should only be offered to support people who live with ME/CFS to manage their symptoms, improve their functioning and reduce the distress associated with having a chronic illness.
CBT is used to help people of all ages, but the therapy should be adjusted based on the age of the patient with whom the therapist is dealing. Older individuals in particular have certain characteristics that need to be acknowledged, and the therapy altered to account for these age-related differences. The small number of studies that have examined CBT for the management of depression in older people currently provides no strong support.
Mainstream cognitive behavioral therapy assumes that changing maladaptive thinking leads to change in behavior and affect, but recent variants emphasize changes in one's relationship to maladaptive thinking rather than changes in thinking itself.
Therapists use CBT techniques to help people challenge their patterns and beliefs and replace errors in thinking, known as cognitive distortions, with "more realistic and effective thoughts, thus decreasing emotional distress and self-defeating behavior". Cognitive distortions can take the form of, for example, a pseudo-discriminative belief or an overgeneralization of a specific experience. CBT techniques may also be used to help individuals take a more open, mindful, and aware posture toward cognitive distortions so as to diminish their impact.
Mainstream CBT helps individuals replace "maladaptive... coping skills, cognitions, emotions and behaviors with more adaptive ones" by challenging an individual's way of thinking and the way they react to certain habits or behaviors. However, there is still controversy about the degree to which these traditional cognitive elements account for the effects seen with CBT over and above the earlier behavioral elements, such as exposure and skills training.
CBT can be seen as having six phases:

1. Assessment or psychological assessment;
2. Reconceptualization;
3. Skills acquisition;
4. Skills consolidation and application training;
5. Generalization and maintenance;
6. Post-treatment assessment follow-up.
These steps are based on a system created by Kanfer and Saslow. After identifying the behaviors that need changing, whether they be in excess or deficit, and treatment has occurred, the psychologist must identify whether or not the intervention succeeded. For example, "If the goal was to decrease the behavior, then there should be a decrease relative to the baseline. If the critical behavior remains at or above the baseline, then the intervention has failed."
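The quoted success criterion is simple enough to state mechanically. The following Python sketch is only an illustration of that rule, assuming the critical behavior is counted per observation period; the function name and the example values are hypothetical, not part of Kanfer and Saslow's system.

```python
# Minimal sketch of the baseline comparison rule quoted above.
# Assumes the critical behavior is counted per observation period;
# all names and numbers here are illustrative.

def intervention_succeeded(baseline: float, post: float,
                           goal: str = "decrease") -> bool:
    """Compare post-treatment behavior frequency with the pre-treatment baseline."""
    if goal == "decrease":
        # "If the critical behavior remains at or above the baseline,
        # then the intervention has failed."
        return post < baseline
    # For a deficit behavior, the goal is an increase above the baseline.
    return post > baseline

print(intervention_succeeded(baseline=12, post=7))                   # True
print(intervention_succeeded(baseline=3, post=3, goal="increase"))   # False
```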
The steps in the assessment phase include:

1. Identify critical behaviors;
2. Determine whether the critical behaviors are excesses or deficits;
3. Evaluate the critical behaviors for frequency, duration, or intensity (obtain a baseline);
4. If excess, attempt to decrease the frequency, duration, or intensity of the behaviors; if deficit, attempt to increase the behaviors.
The re-conceptualization phase makes up much of the "cognitive" portion of CBT.
There are different protocols for delivering cognitive behavioral therapy, with important similarities among them. Use of the term CBT may refer to different interventions, including "self-instructions (e.g. distraction, imagery, motivational self-talk), relaxation and/or biofeedback, development of adaptive coping strategies (e.g. minimizing negative or self-defeating thoughts), changing maladaptive beliefs about pain, and goal setting". Treatment is sometimes manualized, with brief, direct, and time-limited treatments for individual psychological disorders that are technique-driven and disorder-specific. CBT is used in both individual and group settings, and the techniques are often adapted for self-help applications. Some clinicians and researchers are cognitively oriented (e.g. cognitive restructuring), while others are more behaviorally oriented (e.g. in vivo exposure therapy). Interventions such as imaginal exposure therapy combine both approaches.
CBT may be delivered in conjunction with a variety of diverse but related techniques such as exposure therapy, stress inoculation, cognitive processing therapy, cognitive therapy, metacognitive therapy, metacognitive training, relaxation training, dialectical behavior therapy, and acceptance and commitment therapy. Some practitioners promote a form of mindful cognitive therapy which includes a greater emphasis on self-awareness as part of the therapeutic process.
A typical CBT programme would consist of face-to-face sessions between patient and therapist, made up of 6–18 sessions of around an hour each with a gap of 1–3 weeks between sessions. This initial programme might be followed by some booster sessions, for instance after one month and three months. CBT has also been found to be effective if patient and therapist type in real time to each other over computer links.
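As a rough illustration of this structure, the sketch below generates a session calendar in Python, assuming a 12-session weekly programme (within the stated 6–18 range) and booster sessions roughly one and three months after the final session; these numbers are assumptions for the example, not a clinical prescription.

```python
# Illustrative CBT session calendar; the session count, gap, and booster
# offsets are assumptions within the ranges described above.
from datetime import date, timedelta

def cbt_schedule(start: date, sessions: int = 12, gap_weeks: int = 1):
    """Return (label, date) pairs for an initial programme plus two boosters."""
    plan = [(f"session {i + 1}", start + timedelta(weeks=i * gap_weeks))
            for i in range(sessions)]
    last_session = plan[-1][1]
    plan.append(("booster 1", last_session + timedelta(days=30)))  # ~1 month
    plan.append(("booster 2", last_session + timedelta(days=90)))  # ~3 months
    return plan

for label, day in cbt_schedule(date(2024, 1, 8)):
    print(label, day.isoformat())
```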
Cognitive-behavioral therapy is most closely allied with the scientist–practitioner model in which clinical practice and research are informed by a scientific perspective, clear operationalization of the problem, and an emphasis on measurement, including measuring changes in cognition and behavior and the attainment of goals. These are often met through "homework" assignments in which the patient and the therapist work together to craft an assignment to complete before the next session. The completion of these assignments – which can be as simple as a person with depression attending some kind of social event – indicates a dedication to treatment compliance and a desire to change. The therapists can then logically gauge the next step of treatment based on how thoroughly the patient completes the assignment. Effective cognitive behavioral therapy is dependent on a therapeutic alliance between the healthcare practitioner and the person seeking assistance. Unlike many other forms of psychotherapy, the patient is very involved in CBT. For example, an anxious patient may be asked to talk to a stranger as a homework assignment, but if that is too difficult, he or she can work out an easier assignment first. The therapist needs to be flexible and willing to listen to the patient rather than acting as an authority figure.
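One common vehicle for this emphasis on measurement is the homework "thought record", in which emotion intensity is rated before and after a more balanced thought is formulated. The sketch below shows one hypothetical way such a record might be captured as data; the field names and the example entry are illustrative, not a standard clinical instrument.

```python
# Hypothetical data structure for a CBT homework "thought record".
# Field names and the example entry are illustrative only.
from dataclasses import dataclass

@dataclass
class ThoughtRecord:
    situation: str            # triggering event
    automatic_thought: str    # initial interpretation
    emotion: str              # dominant emotion
    intensity_before: int     # 0-100 self-rating
    alternative_thought: str  # more balanced reappraisal
    intensity_after: int      # 0-100 self-rating after reappraisal

    def improvement(self) -> int:
        """Drop in rated intensity: a crude per-entry outcome measure."""
        return self.intensity_before - self.intensity_after

record = ThoughtRecord(
    situation="Invited to a party",
    automatic_thought="Everyone will think I'm boring",
    emotion="anxiety",
    intensity_before=80,
    alternative_thought="Some people enjoy talking with me",
    intensity_after=45,
)
print(record.improvement())  # 35
```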
Computerized cognitive behavioral therapy (CCBT) has been described by NICE as a "generic term for delivering CBT via an interactive computer interface delivered by a personal computer, internet, or interactive voice response system", instead of face-to-face with a human therapist. It is also known as internet-delivered cognitive behavioral therapy or ICBT. CCBT has potential to improve access to evidence-based therapies, and to overcome the prohibitive costs and lack of availability sometimes associated with retaining a human therapist. In this context, it is important not to confuse CBT with 'computer-based training', which nowadays is more commonly referred to as e-Learning.
Although improvements in both research quality and treatment adherence are required before advocating the global dissemination of CCBT, it has been found in meta-studies to be cost-effective and often cheaper than usual care, including for anxiety and PTSD. Studies have shown that individuals with social anxiety and depression experienced improvement with online CBT-based methods. A study assessing an online version of CBT for people with mild-to-moderate PTSD found that the online approach was as effective as, and cheaper than, the same therapy given face-to-face. A review of current CCBT research on the treatment of OCD in children found this interface to hold great potential for the future treatment of OCD in youth and adolescent populations. Additionally, most internet interventions for post-traumatic stress disorder use CCBT. CCBT is also well suited to treating mood disorders among non-heterosexual populations, who may avoid face-to-face therapy for fear of stigma. At present, however, CCBT programs seldom cater to these populations.
In February 2006 NICE recommended that CCBT be made available for use within the NHS across England and Wales for patients presenting with mild-to-moderate depression, rather than immediately opting for antidepressant medication, and CCBT is made available by some health systems. The 2009 NICE guideline recognized that there are likely to be a number of computerized CBT products that are useful to patients, but removed endorsement of any specific product.
Another new method of access is the use of smartphone applications (mobile apps) to deliver self-help or guided CBT. Technology companies are developing mobile-based artificial intelligence (AI) chatbot applications to deliver CBT as an early intervention to support mental health, build psychological resilience, and promote emotional well-being. AI text-based conversational applications delivered securely and privately over smartphones have the ability to scale globally and offer contextual, always-available support. Active research is underway, including real-world data studies that measure the effectiveness and engagement of text-based smartphone chatbot apps for delivering CBT through a conversational interface. Recent market research and analysis of over 500 online mental healthcare solutions identified three key challenges in this market: quality of content, guidance of the user, and personalisation.
A study compared CBT alone with a mindfulness-based therapy combined with CBT, both delivered via an app. It found that mindfulness-based self-help reduced the severity of depression more than CBT self-help in the short-term. Overall, NHS costs for the mindfulness approach were £500 less per person than for CBT.
Enabling patients to read self-help CBT guides has been shown to be effective by some studies. However, one study found a negative effect in patients who tended to ruminate, and another meta-analysis found that the benefit was only significant when the self-help was guided (e.g. by a medical professional).
Patient participation in group courses has been shown to be effective. In a meta-analysis reviewing evidence-based treatment of OCD in children, individual CBT was found to be more efficacious than group CBT.
Brief cognitive behavioral therapy (BCBT) is a form of CBT developed for situations in which there are time constraints on the therapy sessions, and specifically for people struggling with suicidal ideation or suicide attempts. BCBT was based on Rudd's proposed "suicidal mode", an elaboration of Beck's modal theory. By design, BCBT takes place over a small number of sessions totaling up to 12 hours. The technique was first developed and implemented with active-duty soldiers by Dr. M. David Rudd to prevent suicide.
Cognitive emotional behavioral therapy (CEBT) is a form of CBT developed initially for individuals with eating disorders but now used with a range of problems including anxiety, depression, obsessive compulsive disorder (OCD), post-traumatic stress disorder (PTSD) and anger problems. It combines aspects of CBT and dialectical behavioral therapy and aims to improve understanding and tolerance of emotions in order to facilitate the therapeutic process. It is frequently used as a "pretreatment" to prepare and better equip individuals for longer-term therapy.
Structured cognitive-behavioral training (SCBT) is a cognitive-based process with core philosophies that draw heavily from CBT. Like CBT, SCBT asserts that behavior is inextricably related to beliefs, thoughts, and emotions. SCBT also builds on core CBT philosophy by incorporating other well-known modalities in the fields of behavioral health and psychology: most notably, Albert Ellis's rational emotive behavior therapy. SCBT differs from CBT in two distinct ways. First, SCBT is delivered in a highly regimented format. Second, SCBT is a predetermined and finite training process that becomes personalized by the input of the participant. SCBT is designed to bring a participant to a specific result in a specific period of time. SCBT has been used to challenge addictive behavior, particularly with substances such as tobacco, alcohol and food, and to manage diabetes and subdue stress and anxiety. SCBT has also been used in the field of criminal psychology in the effort to reduce recidivism.
Moral reconation therapy, a type of CBT used to help felons overcome antisocial personality disorder (ASPD), slightly decreases the risk of further offending. It is generally implemented in a group format, because of the risk that one-on-one therapy could reinforce narcissistic behavioral characteristics in offenders with ASPD, and can be used in correctional or outpatient settings. Groups usually meet weekly for two to six months.
Stress inoculation training, a blend of cognitive, behavioral, and certain humanistic training techniques, targets the stressors of the client. It is usually used to help clients better cope with stress or anxiety after stressful events, through a three-phase process that trains the client to use skills that they already have to better adapt to their current stressors. The first phase is an interview phase that includes psychological testing, client self-monitoring, and a variety of reading materials; this allows the therapist to tailor the training process to the individual client. Clients learn to categorize problems as emotion-focused or problem-focused so that they can better treat their negative situations. This phase ultimately prepares the client to confront and reflect upon their current reactions to stressors, before looking at ways to change those reactions and emotions. The focus of this phase is conceptualization.
The second phase emphasizes the aspect of skills acquisition and rehearsal that continues from the earlier phase of conceptualization. The client is taught skills that help them cope with their stressors. These skills are then practised in the space of therapy. These skills involve self-regulation, problem-solving, interpersonal communication skills, etc.
The third and final phase is the application and following through of the skills learned in the training process. This gives the client opportunities to apply their learned skills to a wide range of stressors. Activities include role-playing, imagery, modeling, and so on. In the end, the client will have been trained on a preventive basis to inoculate themselves against personal, chronic, and future stressors, by breaking down their stressors into problems they can address through long-term, short-term, and intermediate coping goals.
A newly developed group therapy model based on CBT integrates knitting into the therapeutic process and has shown reliable and promising results. The foundation for this novel approach is the frequently emphasized notion that therapy success depends on how well the therapy method is embedded in the patients' natural routine. Similar to standard group-based CBT, patients meet once a week in a group of 10 to 15 and knit together under the instruction of a trained psychologist or mental health professional. Central to the therapy is the patient's imaginative ability to assign each part of the wool to a certain thought. During the therapy, the wool is carefully knitted into a piece of any form. This process teaches the patient to meaningfully align thoughts by (physically) creating a coherent knitted piece. Moreover, since CBT treats behavior as a result of cognition, the knitting illustrates how thoughts (imaginatively tied to the wool) materialize into the reality surrounding us.
Mindfulness-based cognitive behavioral hypnotherapy (MCBH) is a form of CBT that focuses on awareness through a reflective approach, addressing subconscious tendencies. It is a three-phase process for achieving desired goals that integrates the principles of mindfulness and cognitive-behavioral techniques with the transformative potential of hypnotherapy.
The Unified Protocol for Transdiagnostic Treatment of Emotional Disorders (UP) is a form of CBT, developed by David H. Barlow and researchers at Boston University, that can be applied to a range of depressive and anxiety disorders. The rationale is that anxiety and depressive disorders often occur together because of common underlying causes, and can efficiently be treated together.
The UP includes a common set of components:
The UP has been shown to produce results equivalent to single-diagnosis protocols for specific disorders, such as OCD and social anxiety disorder. Several studies have shown that the UP is easier to disseminate than single-diagnosis protocols.
The research conducted on CBT has been a topic of sustained controversy. While some researchers write that CBT is more effective than other treatments, many other researchers and practitioners have questioned the validity of such claims. For example, one study determined CBT to be superior to other treatments in treating anxiety and depression. However, researchers responding directly to that study conducted a re-analysis and found no evidence of CBT being superior to other bona fide treatments, and an analysis of thirteen other CBT clinical trials determined that they failed to provide evidence of CBT's superiority. In cases where CBT has been reported to be statistically better than other psychological interventions in terms of primary outcome measures, effect sizes were small, suggesting that those differences were clinically insignificant. Moreover, on secondary outcomes (i.e., measures of general functioning), no significant differences have typically been found between CBT and other treatments.
A major criticism has been that clinical studies of CBT efficacy (or of any psychotherapy) are not double-blind (i.e., neither the subjects nor the therapists in psychotherapy studies are blind to the type of treatment). They may be single-blinded, i.e. the rater may not know the treatment the patient received, but neither the patients nor the therapists are blinded to the type of therapy given; two of the three parties involved in the trial, that is, everyone involved in the treatment itself, are unblinded. The patient is an active participant in correcting negative distorted thoughts, and is therefore quite aware of the treatment group they are in.
The importance of double-blinding was shown in a meta-analysis that examined the effectiveness of CBT when placebo control and blinding were factored in. Pooled data from published trials of CBT in schizophrenia, major depressive disorder (MDD), and bipolar disorder that used controls for non-specific effects of intervention were analyzed. The study concluded that CBT is no better than non-specific control interventions in the treatment of schizophrenia and does not reduce relapse rates; that treatment effects are small in studies of MDD; and that CBT is not an effective treatment strategy for preventing relapse in bipolar disorder. For MDD, the authors note that the pooled effect size was very low.
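For readers unfamiliar with how trial results are pooled, the sketch below shows a standard fixed-effect (inverse-variance) pooling of standardized mean differences in Python. It is a generic illustration of the method, not a reproduction of the cited meta-analysis; all study values are invented.

```python
# Generic fixed-effect meta-analysis of standardized mean differences
# (Cohen's d). All study values below are invented for illustration.

def smd_variance(d: float, n1: int, n2: int) -> float:
    """Large-sample approximation of the sampling variance of Cohen's d."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def pooled_effect(studies):
    """Inverse-variance weighted pooled effect size and its standard error."""
    weights = [1.0 / smd_variance(d, n1, n2) for d, n1, n2 in studies]
    total = sum(weights)
    pooled = sum(w * d for w, (d, _, _) in zip(weights, studies)) / total
    se = (1.0 / total) ** 0.5
    return pooled, se

# (d, n_treatment, n_control) per trial -- hypothetical values
studies = [(0.30, 40, 42), (0.12, 55, 50), (0.05, 80, 78)]
d, se = pooled_effect(studies)
print(f"pooled d = {d:.2f} (SE {se:.2f})")  # a small pooled effect
```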
Additionally, a 2015 meta-analysis revealed that the positive effects of CBT on depression have been declining since 1977. The overall results showed two different declines in effect sizes: 1) an overall decline between 1977 and 2014, and 2) a steeper decline between 1995 and 2014. Additional sub-analysis revealed that CBT studies where therapists in the test group were instructed to adhere to the Beck CBT manual had a steeper decline in effect sizes since 1977 than studies where therapists in the test group were instructed to use CBT without a manual. The authors reported that they were unsure why the effects were declining but did list inadequate therapist training, failure to adhere to a manual, lack of therapist experience, and patients' hope and faith in its efficacy waning as potential reasons. The authors did mention that the current study was limited to depressive disorders only.
Furthermore, other researchers write that CBT studies have high drop-out rates compared to other treatments. One meta-analysis found that CBT drop-out rates were 17% higher than those of other therapies. This high drop-out rate is also evident in the treatment of several disorders, particularly the eating disorder anorexia nervosa, which is commonly treated with CBT. Those treated with CBT have a high chance of dropping out of therapy before completion and reverting to their anorexia behaviors.
Other researchers analyzing treatments for youths who self-injure found similar drop-out rates in CBT and DBT groups. In this study, the researchers analyzed several clinical trials that measured the efficacy of CBT administered to youths who self-injure, and concluded that none of the treatments were efficacious.
The methods employed in CBT research have not been the only criticisms; some individuals have called its theory and therapy into question.
Slife and Williams write that one of the hidden assumptions in CBT is that of determinism, or the absence of free will. They argue that CBT holds that external stimuli from the environment enter the mind, causing different thoughts that cause emotional states: nowhere in CBT theory is agency, or free will, accounted for.
Another criticism of CBT theory, especially as applied to major depressive disorder (MDD), is that it confounds the symptoms of the disorder with its causes.
CBT is generally regarded as having very few, if any, side effects, but some have called for more appraisal of its possible adverse effects. Many randomized trials of psychological interventions like CBT do not monitor potential harms to the patient. In contrast, randomized trials of pharmacological interventions are much more likely to take adverse effects into consideration.
A 2017 meta-analysis revealed that adverse events are not common in children receiving CBT and, furthermore, that CBT is associated with fewer dropouts than either placebo or medications. Nevertheless, CBT therapists do sometimes report "unwanted events" and side effects in their outpatients, with "negative wellbeing/distress" being the most frequent.
The writer and group analyst Farhad Dalal questions the socio-political assumptions behind the introduction of CBT. According to one reviewer, Dalal connects the rise of CBT with "the parallel rise of neoliberalism, with its focus on marketization, efficiency, quantification and managerialism", and he questions the scientific basis of CBT, suggesting that "the 'science' of psychological treatment is often less a scientific than a political contest". In his book, Dalal also questions the ethical basis of CBT.
The UK's National Health Service announced in 2008 that more therapists would be trained to provide CBT at government expense as part of an initiative called Improving Access to Psychological Therapies (IAPT). The NICE said that CBT would become the mainstay of treatment for non-severe depression, with medication used only in cases where CBT had failed. Therapists complained that the data does not fully support the attention and funding CBT receives. Psychotherapist and professor Andrew Samuels stated that this constitutes "a coup, a power play by a community that has suddenly found itself on the brink of corralling an enormous amount of money ... Everyone has been seduced by CBT's apparent cheapness."
The UK Council for Psychotherapy issued a press release in 2012 saying that the IAPT's policies were undermining traditional psychotherapy and criticized proposals that would limit some approved therapies to CBT, claiming that they restricted patients to "a watered down version of cognitive behavioural therapy (CBT), often delivered by very lightly trained staff".
{
"paragraph_id": 0,
"text": "Cognitive behavioral therapy (CBT) is a psycho-social intervention that aims to reduce symptoms of various mental health conditions, primarily depression and anxiety disorders. Cognitive behavioral therapy is one of the most effective means of treatment for substance abuse and co-occurring mental health disorders. CBT focuses on challenging and changing cognitive distortions (such as thoughts, beliefs, and attitudes) and their associated behaviors to improve emotional regulation and develop personal coping strategies that target solving current problems. Though it was originally designed to treat depression, its uses have been expanded to include many issues and the treatment of many mental health conditions, including anxiety, substance use disorders, marital problems, ADHD, and eating disorders. CBT includes a number of cognitive or behavioral psychotherapies that treat defined psychopathologies using evidence-based techniques and strategies.",
"title": ""
},
{
"paragraph_id": 1,
"text": "CBT is a common form of talk therapy based on the combination of the basic principles from behavioral and cognitive psychology. It is different from historical approaches to psychotherapy, such as the psychoanalytic approach where the therapist looks for the unconscious meaning behind the behaviors, and then formulates a diagnosis. Instead, CBT is a \"problem-focused\" and \"action-oriented\" form of therapy, meaning it is used to treat specific problems related to a diagnosed mental disorder. The therapist's role is to assist the client in finding and practicing effective strategies to address the identified goals and to alleviate symptoms of the disorder. CBT is based on the belief that thought distortions and maladaptive behaviors play a role in the development and maintenance of many psychological disorders and that symptoms and associated distress can be reduced by teaching new information-processing skills and coping mechanisms.",
"title": ""
},
{
"paragraph_id": 2,
"text": "When compared to psychoactive medications, review studies have found CBT alone to be as effective for treating less severe forms of depression, anxiety, post-traumatic stress disorder (PTSD), tics, substance use disorders, eating disorders, and borderline personality disorder. Some research suggests that CBT is most effective when combined with medication for treating mental disorders, such as major depressive disorder. CBT is recommended as the first line of treatment for the majority of psychological disorders in children and adolescents, including aggression and conduct disorder. Researchers have found that other bona fide therapeutic interventions were equally effective for treating certain conditions in adults. Along with interpersonal psychotherapy (IPT), CBT is recommended in treatment guidelines as a psychosocial treatment of choice.",
"title": ""
},
{
"paragraph_id": 3,
"text": "For thousands of years, humans have looked to faith and religious belief for answers to their emotional problems. The majority of studies show that having a faith or belief is, in general, good for your mental health. Religions have initiated charities specifically to help mental health problems, such as the Samaritans. CBT was developed from empirical studies that did not initially consider faith as a variable. However, as investigations on the role of religious belief and practice have grown in popularity, evidence has been gathered in various religious groups: randomized controlled trials of CBT adapted for Judaism, Taoism and, most commonly, Christianity.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Concepts drawn from Buddhism have influenced the development of several newer forms of CBT such as Dialectical Behavior Therapy, Mindfulness-Based Cognitive Therapy, Spirituality-Based CBT and Compassion Focused Therapy. Generic spiritual concepts (such as hope and well-being) and the importance of virtues such as fortitude and humility have been researched and reviewed.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Islamic psychology within the Sufi tradition was established as far back as the 11th century by Al Ghazali, who described the self as made up of four elements: heart, spirit, soul, and intellect. These can be respectively linked to CBT domains like emotions, behaviors, thoughts, and the capacity for reflection.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Pentecostal Christians often talk about three levels of body, soul, and spirit – the former is earthly and the latter holy, with the soul somewhere in between. Used helpfully, the therapist can encourage such persons to think that (like the body) the mind/brain can become ill and make them feel depressed, but the soul can still be there – enabled to retain a degree of objectivity because it is still in touch with the spirit.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Precursors of certain fundamental aspects of CBT have been identified in various ancient philosophical traditions, particularly Stoicism. Stoic philosophers, particularly Epictetus, believed logic could be used to identify and discard false beliefs that lead to destructive emotions, which has influenced the way modern cognitive-behavioral therapists identify cognitive distortions that contribute to depression and anxiety. Aaron T. Beck's original treatment manual for depression states, \"The philosophical origins of cognitive therapy can be traced back to the Stoic philosophers\". Another example of Stoic influence on cognitive theorists is Epictetus on Albert Ellis. A key philosophical figure who influenced the development of CBT was John Stuart Mill through his creation of Associationism, a predecessor of classical conditioning and behavioral theory.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The modern roots of CBT can be traced to the development of behavior therapy in the early 20th century, the development of cognitive therapy in the 1960s, and the subsequent merging of the two.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Groundbreaking work of behaviorism began with John B. Watson and Rosalie Rayner's studies of conditioning in 1920. Behaviorally-centered therapeutic approaches appeared as early as 1924 with Mary Cover Jones' work dedicated to the unlearning of fears in children. These were the antecedents of the development of Joseph Wolpe's behavioral therapy in the 1950s. It was the work of Wolpe and Watson, which was based on Ivan Pavlov's work on learning and conditioning, that influenced Hans Eysenck and Arnold Lazarus to develop new behavioral therapy techniques based on classical conditioning.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "During the 1950s and 1960s, behavioral therapy became widely used by researchers in the United States, the United Kingdom, and South Africa. Their inspiration was by the behaviorist learning theory of Ivan Pavlov, John B. Watson, and Clark L. Hull.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In Britain, Joseph Wolpe, who applied the findings of animal experiments to his method of systematic desensitization, applied behavioral research to the treatment of neurotic disorders. Wolpe's therapeutic efforts were precursors to today's fear reduction techniques. British psychologist Hans Eysenck presented behavior therapy as a constructive alternative.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "At the same time as Eysenck's work, B. F. Skinner and his associates were beginning to have an impact with their work on operant conditioning. Skinner's work was referred to as radical behaviorism and avoided anything related to cognition. However, Julian Rotter in 1954 and Albert Bandura in 1969 contributed to behavior therapy with their works on social learning theory by demonstrating the effects of cognition on learning and behavior modification. The work of Claire Weekes in dealing with anxiety disorders in the 1960s is also seen as a prototype of behavior therapy.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The emphasis on behavioral factors has been described as the \"first wave\" of CBT.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "One of the first therapists to address cognition in psychotherapy was Alfred Adler, notably with his idea of basic mistakes and how they contributed to creation of unhealthy behavioral and life goals.Abraham Low believed that someone's thoughts were best changed by changing their actions. Adler and Low influenced the work of Albert Ellis, who developed the earliest cognitive-based psychotherapy called rational emotive behavioral therapy, or REBT. The first version of REBT was announced to the public in 1956.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In the late 1950s, Aaron T. Beck was conducting free association sessions in his psychoanalytic practice. During these sessions, Beck noticed that thoughts were not as unconscious as Freud had previously theorized, and that certain types of thinking may be the culprits of emotional distress. It was from this hypothesis that Beck developed cognitive therapy, and called these thoughts \"automatic thoughts\". He first published his new methodology in 1967, and his first treatment manual in 1979. Beck has been referred to as \"the father of cognitive behavioral therapy\".",
"title": "History"
},
{
"paragraph_id": 16,
"text": "It was these two therapies, rational emotive therapy, and cognitive therapy, that started the \"second wave\" of CBT, which emphasised cognitive factors.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Although the early behavioral approaches were successful in many so-called neurotic disorders, they had little success in treating depression. Behaviorism was also losing popularity due to the cognitive revolution. The therapeutic approaches of Albert Ellis and Aaron T. Beck gained popularity among behavior therapists, despite the earlier behaviorist rejection of mentalistic concepts like thoughts and cognitions. Both of these systems included behavioral elements and interventions, with the primary focus being on problems in the present.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In initial studies, cognitive therapy was often contrasted with behavioral treatments to see which was most effective. During the 1980s and 1990s, cognitive and behavioral techniques were merged into cognitive behavioral therapy. Pivotal to this merging was the successful development of treatments for panic disorder by David M. Clark in the UK and David H. Barlow in the US.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Over time, cognitive behavior therapy came to be known not only as a therapy, but as an umbrella term for all cognitive-based psychotherapies. These therapies include, but are not limited to, REBT, cognitive therapy, acceptance and commitment therapy, dialectical behavior therapy, metacognitive therapy, metacognitive training, reality therapy/choice theory, cognitive processing therapy, EMDR, and multimodal therapy.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "This blending of theoretical and technical foundations from both behavior and cognitive therapies constituted the \"third wave\" of CBT. The most prominent therapies of this third wave are dialectical behavior therapy and acceptance and commitment therapy. Despite the increasing popularity of third-wave treatment approaches, reviews of studies reveal there may be no difference in the effectiveness compared with non-third wave CBT for the treatment of depression.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In adults, CBT has been shown to be an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression, eating disorders, chronic low back pain, personality disorders, psychosis, schizophrenia, substance use disorders, and bipolar disorder. It is also effective as part of treatment plans in the adjustment, depression, and anxiety associated with fibromyalgia, and with post-spinal cord injuries.",
"title": "Medical uses"
},
{
"paragraph_id": 22,
"text": "In children or adolescents, CBT is an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression and suicidality, eating disorders and obesity, obsessive–compulsive disorder (OCD), and post-traumatic stress disorder (PTSD), as well as tic disorders, trichotillomania, and other repetitive behavior disorders. CBT has also been applied to a variety of childhood disorders, including depressive disorders and various anxiety disorders. CBT has shown to be the most effective intervention for people exposed to adverse childhood experiences in the form of abuse or neglect.",
"title": "Medical uses"
},
{
"paragraph_id": 23,
"text": "Criticism of CBT sometimes focuses on implementations (such as the UK IAPT) which may result initially in low quality therapy being offered by poorly trained practitioners. However, evidence supports the effectiveness of CBT for anxiety and depression.",
"title": "Medical uses"
},
{
"paragraph_id": 24,
"text": "Evidence suggests that the addition of hypnotherapy as an adjunct to CBT improves treatment efficacy for a variety of clinical issues.",
"title": "Medical uses"
},
{
"paragraph_id": 25,
"text": "The United Kingdom's National Institute for Health and Care Excellence (NICE) recommends CBT in the treatment plans for a number of mental health difficulties, including PTSD, OCD, bulimia nervosa, and clinical depression.",
"title": "Medical uses"
},
{
"paragraph_id": 26,
"text": "Cognitive behavioral therapy has been shown as an effective treatment for clinical depression. The American Psychiatric Association Practice Guidelines (April 2000) indicated that, among psychotherapeutic approaches, cognitive behavioral therapy and interpersonal psychotherapy had the best-documented efficacy for treatment of major depressive disorder.",
"title": "Medical uses"
},
{
"paragraph_id": 27,
"text": "A 2001 meta-analysis comparing CBT and psychodynamic psychotherapy suggested the approaches were equally effective in the short term for depression. In contrast, a 2013 meta-analyses suggested that CBT, interpersonal therapy, and problem-solving therapy outperformed psychodynamic psychotherapy and behavioral activation in the treatment of depression.",
"title": "Medical uses"
},
{
"paragraph_id": 28,
"text": "According to a 2004 review by INSERM of three methods, cognitive behavioral therapy was either proven or presumed to be an effective therapy on several mental disorders. This included depression, panic disorder, post-traumatic stress, and other anxiety disorders.",
"title": "Medical uses"
},
{
"paragraph_id": 29,
"text": "CBT has been shown to be effective in the treatment of adults with anxiety disorders.",
"title": "Medical uses"
},
{
"paragraph_id": 30,
"text": "Results from a 2018 systematic review found a high strength of evidence that CBT-exposure therapy can reduce PTSD symptoms and lead to the loss of a PTSD diagnosis. CBT has also been shown to be effective for post-traumatic stress disorder in very young children (3 to 6 years of age). A Cochrane review found low quality evidence that CBT may be more effective than other psychotherapies in reducing symptoms of posttraumatic stress disorder in children and adolescents.",
"title": "Medical uses"
},
{
"paragraph_id": 31,
"text": "A systematic review of CBT in depression and anxiety disorders concluded that \"CBT delivered in primary care, especially including computer- or Internet-based self-help programs, is potentially more effective than usual care and could be delivered effectively by primary care therapists.\"",
"title": "Medical uses"
},
{
"paragraph_id": 32,
"text": "Some meta-analyses find CBT more effective than psychodynamic therapy and equal to other therapies in treating anxiety and depression.",
"title": "Medical uses"
},
{
"paragraph_id": 33,
"text": "One etiological theory of depression is Aaron T. Beck's cognitive theory of depression. His theory states that depressed people think the way they do because their thinking is biased towards negative interpretations. According to this theory, depressed people acquire a negative schema of the world in childhood and adolescence as an effect of stressful life events, and the negative schema is activated later in life when the person encounters similar situations.",
"title": "Medical uses"
},
{
"paragraph_id": 34,
"text": "Beck also described a negative cognitive triad. The cognitive triad is made up of the depressed individual's negative evaluations of themselves, the world, and the future. Beck suggested that these negative evaluations derive from the negative schemata and cognitive biases of the person. According to this theory, depressed people have views such as \"I never do a good job\", \"It is impossible to have a good day\", and \"things will never get better\". A negative schema helps give rise to the cognitive bias, and the cognitive bias helps fuel the negative schema. Beck further proposed that depressed people often have the following cognitive biases: arbitrary inference, selective abstraction, overgeneralization, magnification, and minimization. These cognitive biases are quick to make negative, generalized, and personal inferences of the self, thus fueling the negative schema.",
"title": "Medical uses"
},
{
"paragraph_id": 35,
"text": "A basic concept in some CBT treatments used in anxiety disorders is in vivo exposure. CBT-exposure therapy refers to the direct confrontation of feared objects, activities, or situations by a patient. For example, a woman with PTSD who fears the location where she was assaulted may be assisted by her therapist in going to that location and directly confronting those fears. Likewise, a person with a social anxiety disorder who fears public speaking may be instructed to directly confront those fears by giving a speech. This \"two-factor\" model is often credited to O. Hobart Mowrer. Through exposure to the stimulus, this harmful conditioning can be \"unlearned\" (referred to as extinction and habituation).",
"title": "Medical uses"
},
{
"paragraph_id": 36,
"text": "CBT for children with phobias is normally delivered over multiple sessions, but one-session treatment has been shown to be equally effective and is cheaper.",
"title": "Medical uses"
},
{
"paragraph_id": 37,
"text": "CBT-SP, an adaptation of CBT for suicide prevention (SP), was specifically designed for treating youths who are severely depressed and who have recently attempted suicide within the past 90 days, and was found to be effective, feasible, and acceptable.",
"title": "Medical uses"
},
{
"paragraph_id": 38,
"text": "Acceptance and commitment therapy (ACT) is a specialist branch of CBT (sometimes referred to as contextual CBT). ACT uses mindfulness and acceptance interventions and has been found to have a greater longevity in therapeutic outcomes. In a study with anxiety, CBT and ACT improved similarly across all outcomes from pre- to post-treatment. However, during a 12-month follow-up, ACT proved to be more effective, showing that it is a highly viable lasting treatment model for anxiety disorders.",
"title": "Medical uses"
},
{
"paragraph_id": 39,
"text": "Computerized CBT (CCBT) has been proven to be effective by randomized controlled and other trials in treating depression and anxiety disorders, including children. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be equally effective as face-to-face CBT in adolescent anxiety.",
"title": "Medical uses"
},
{
"paragraph_id": 40,
"text": "Studies have provided evidence that when examining animals and humans, that glucocorticoids may lead to a more successful extinction learning during exposure therapy for anxiety disorders. For instance, glucocorticoids can prevent aversive learning episodes from being retrieved and heighten reinforcement of memory traces creating a non-fearful reaction in feared situations. A combination of glucocorticoids and exposure therapy may be a better-improved treatment for treating people with anxiety disorders.",
"title": "Medical uses"
},
{
"paragraph_id": 41,
"text": "For anxiety disorders, use of CBT with people at risk has significantly reduced the number of episodes of generalized anxiety disorder and other anxiety symptoms, and also given significant improvements in explanatory style, hopelessness, and dysfunctional attitudes. In another study, 3% of the group receiving the CBT intervention developed generalized anxiety disorder by 12 months postintervention compared with 14% in the control group. Individuals with subthreshold levels of panic disorder significantly benefitted from use of CBT. Use of CBT was found to significantly reduce social anxiety prevalence.",
"title": "Medical uses"
},
{
"paragraph_id": 42,
"text": "For depressive disorders, a stepped-care intervention (watchful waiting, CBT and medication if appropriate) achieved a 50% lower incidence rate in a patient group aged 75 or older. Another depression study found a neutral effect compared to personal, social, and health education, and usual school provision, and included a comment on potential for increased depression scores from people who have received CBT due to greater self recognition and acknowledgement of existing symptoms of depression and negative thinking styles. A further study also saw a neutral result. A meta-study of the Coping with Depression course, a cognitive behavioral intervention delivered by a psychoeducational method, saw a 38% reduction in risk of major depression.",
"title": "Medical uses"
},
{
"paragraph_id": 43,
"text": "Many studies show CBT, combined with pharmacotherapy, is effective in improving depressive symptoms, mania severity and psychosocial functioning with mild to moderate effects, and that it is better than medication alone.",
"title": "Medical uses"
},
{
"paragraph_id": 44,
"text": "INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bipolar disorder. This included schizophrenia, depression, bipolar disorder, panic disorder, post-traumatic stress, anxiety disorders, bulimia, anorexia, personality disorders and alcohol dependency.",
"title": "Medical uses"
},
{
"paragraph_id": 45,
"text": "In long-term psychoses, CBT is used to complement medication and is adapted to meet individual needs. Interventions particularly related to these conditions include exploring reality testing, changing delusions and hallucinations, examining factors which precipitate relapse, and managing relapses. Meta-analyses confirm the effectiveness of metacognitive training (MCT) for the improvement of positive symptoms (e.g., delusions).",
"title": "Medical uses"
},
{
"paragraph_id": 46,
"text": "For people at risk of psychosis, in 2014 the UK National Institute for Health and Care Excellence (NICE) recommended preventive CBT.",
"title": "Medical uses"
},
{
"paragraph_id": 47,
"text": "INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including schizophrenia.",
"title": "Medical uses"
},
{
"paragraph_id": 48,
"text": "A Cochrane review reported CBT had \"no effect on long‐term risk of relapse\" and no additional effect above standard care. A 2015 systematic review investigated the effects of CBT compared with other psychosocial therapies for people with schizophrenia and determined that there is no clear advantage over other, often less expensive, interventions but acknowledged that better quality evidence is needed before firm conclusions can be drawn.",
"title": "Medical uses"
},
{
"paragraph_id": 49,
"text": "CBT is also used for pathological and problem gambling. The percentage of people who problem gamble is 1–3% around the world. Cognitive behavioral therapy develops skills for relapse prevention and someone can learn to control their mind and manage high-risk cases. There is evidence of efficacy of CBT for treating pathological and problem gambling at immediate follow up, however the longer term efficacy of CBT for it is currently unknown.",
"title": "Medical uses"
},
{
"paragraph_id": 50,
"text": "CBT looks at the habit of smoking cigarettes as a learned behavior, which later evolves into a coping strategy to handle daily stressors. Since smoking is often easily accessible and quickly allows the user to feel good, it can take precedence over other coping strategies, and eventually work its way into everyday life during non-stressful events as well. CBT aims to target the function of the behavior, as it can vary between individuals, and works to inject other coping mechanisms in place of smoking. CBT also aims to support individuals with strong cravings, which are a major reported reason for relapse during treatment.",
"title": "Medical uses"
},
{
"paragraph_id": 51,
"text": "In a 2008 controlled study out of Stanford University School of Medicine suggested CBT may be an effective tool to help maintain abstinence. The results of 304 random adult participants were tracked over the course of one year. During this program, some participants were provided medication, CBT, 24-hour phone support, or some combination of the three methods. At 20 weeks, the participants who received CBT had a 45% abstinence rate, versus non-CBT participants, who had a 29% abstinence rate. Overall, the study concluded that emphasizing cognitive and behavioral strategies to support smoking cessation can help individuals build tools for long term smoking abstinence.",
"title": "Medical uses"
},
{
"paragraph_id": 52,
"text": "Mental health history can affect the outcomes of treatment. Individuals with a history of depressive disorders had a lower rate of success when using CBT alone to combat smoking addiction.",
"title": "Medical uses"
},
{
"paragraph_id": 53,
"text": "A Cochrane review was unable to find evidence of any difference between CBT and hypnosis for smoking cessation. While this may be evidence of no effect, further research may uncover an effect of CBT for smoking cessation.",
"title": "Medical uses"
},
{
"paragraph_id": 54,
"text": "Studies have shown CBT to be an effective treatment for substance use disorders. For individuals with substance use disorders, CBT aims to reframe maladaptive thoughts, such as denial, minimizing and catastrophizing thought patterns, with healthier narratives. Specific techniques include identifying potential triggers and developing coping mechanisms to manage high-risk situations. Research has shown CBT to be particularly effective when combined with other therapy-based treatments or medication.",
"title": "Medical uses"
},
{
"paragraph_id": 55,
"text": "INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including alcohol dependency.",
"title": "Medical uses"
},
{
"paragraph_id": 56,
"text": "Research has identified Internet addiction as a new clinical disorder that causes relational, occupational, and social problems. Cognitive behavioral therapy (CBT) has been suggested as the treatment of choice for Internet addiction, and addiction recovery in general has used CBT as part of treatment planning. There is also evidence for the efficacy of CBT in multicenter randomized controlled trials such as STICA (Short-Term Treatment of Internet and Computer Game Addiction).",
"title": "Medical uses"
},
{
"paragraph_id": 57,
"text": "Though many forms of treatment can support individuals with eating disorders, CBT is proven to be a more effective treatment than medications and interpersonal psychotherapy alone. CBT aims to combat major causes of distress such as negative cognitions surrounding body weight, shape and size. CBT therapists also work with individuals to regulate strong emotions and thoughts that lead to dangerous compensatory behaviors. CBT is the first line of treatment for bulimia nervosa, and Eating Disorder Non-Specific. While there is evidence to support the efficacy of CBT for bulimia nervosa and binging, the evidence is somewhat variable and limited by small study sizes. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bulimia and anorexia nervosa.",
"title": "Medical uses"
},
{
"paragraph_id": 58,
"text": "Emerging evidence for cognitive behavioral interventions aimed at reducing symptoms of depression, anxiety, and obsessive-compulsive disorder in autistic adults without intellectual disability has been identified through a systematic review. While the research was focused on adults, cognitive behavioral interventions have also been beneficial to autistic children.",
"title": "Medical uses"
},
{
"paragraph_id": 59,
"text": "A Cochrane review in 2022 found that adults with dementia and mild cognitive impairment (MCI) who experience symptoms of depression may benefit from CBT, whereas other counselling or supportive interventions might not improve symptoms significantly. Across 5 different psychometric scales, where higher scores indicate severity of depression, adults receiving CBT reported somewhat lower mood scores than those receiving usual care for dementia and MCI overall. In this review, a sub-group analysis found clinically significant benefits only among those diagnosed with dementia, rather than MCI.",
"title": "Medical uses"
},
{
"paragraph_id": 60,
"text": "The likelihood of remission from depression also appeared to be 84% higher following CBT, though the evidence for this was less certain. Anxiety, cognition and other neuropsychiatric symptoms were not significantly improved following CBT, however this review did find moderate evidence of improved quality of life and daily living activity scores in those with dementia and MCI.",
"title": "Medical uses"
},
{
"paragraph_id": 61,
"text": "Cognitive behavioural therapy interventions may have some benefits for people who have post-traumatic stress related to surviving rape, sexual abuse, or sexual assault.",
"title": "Medical uses"
},
{
"paragraph_id": 62,
"text": "Evidence suggests a possible role for CBT in the treatment of attention deficit hyperactivity disorder (ADHD), hypochondriasis, and bipolar disorder, but more study is needed and results should be interpreted with caution. CBT has been studied as an aid in the treatment of anxiety associated with stuttering. Initial studies have shown CBT to be effective in reducing social anxiety in adults who stutter, but not in reducing stuttering frequency.",
"title": "Medical uses"
},
{
"paragraph_id": 63,
"text": "There is some evidence that CBT is superior in the long-term to benzodiazepines and the nonbenzodiazepines in the treatment and management of insomnia. Computerized CBT (CCBT) has been proven to be effective by randomized controlled and other trials in treating insomnia. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be equally effective as face-to-face CBT in insomnia.",
"title": "Medical uses"
},
{
"paragraph_id": 64,
"text": "A Cochrane review of interventions aimed at preventing psychological stress in healthcare workers found that CBT was more effective than no intervention but no more effective than alternative stress-reduction interventions.",
"title": "Medical uses"
},
{
"paragraph_id": 65,
"text": "Cochrane Reviews have found no convincing evidence that CBT training helps foster care providers manage difficult behaviors in the youths under their care, nor was it helpful in treating people who abuse their intimate partners.",
"title": "Medical uses"
},
{
"paragraph_id": 66,
"text": "CBT has been applied in both clinical and non-clinical environments to treat disorders such as personality disorders and behavioral problems. INSERM's 2004 review found that CBT is an effective therapy for personality disorders.",
"title": "Medical uses"
},
{
"paragraph_id": 67,
"text": "In the case of people with metastatic breast cancer, data is limited but CBT and other psychosocial interventions might help with psychological outcomes and pain management. A 2015 Cochrane review also found that CBT for symptomatic management of non-specific chest pain is probably effective in the short term. However, the findings were limited by small trials and the evidence was considered of questionable quality. Cochrane reviews have found no evidence that CBT is effective for tinnitus, although there appears to be an effect on management of associated depression and quality of life in this condition. CBT combined with hypnosis and distraction reduces self-reported pain in children.",
"title": "Medical uses"
},
{
"paragraph_id": 68,
"text": "There is limited evidence to support CBT's use in managing the impact of multiple sclerosis, sleep disturbances related to aging, and dysmenorrhea, but more study is needed and results should be interpreted with caution.",
"title": "Medical uses"
},
{
"paragraph_id": 69,
"text": "Previously CBT has been considered as moderately effective for treating chronic fatigue syndrome, however a National Institutes of Health Pathways to Prevention Workshop stated that in respect of improving treatment options for ME/CFS that the modest benefit from cognitive behavioral therapy should be studied as an adjunct to other methods. The Centres for Disease Control advice on the treatment of ME/CFS makes no reference to CBT while the National Institute for Health and Care Excellence states that cognitive behavioral therapy (CBT) has sometimes been assumed to be a cure for ME/CFS, however, it should only be offered to support people who live with ME/CFS to manage their symptoms, improve their functioning and reduce the distress associated with having a chronic illness.\"",
"title": "Medical uses"
},
{
"paragraph_id": 70,
"text": "CBT is used to help people of all ages, but the therapy should be adjusted based on the age of the patient with whom the therapist is dealing. Older individuals in particular have certain characteristics that need to be acknowledged and the therapy altered to account for these differences thanks to age. Of the small number of studies examining CBT for the management of depression in older people, there is currently no strong support.",
"title": "Medical uses"
},
{
"paragraph_id": 71,
"text": "Mainstream cognitive behavioral therapy assumes that changing maladaptive thinking leads to change in behavior and affect, but recent variants emphasize changes in one's relationship to maladaptive thinking rather than changes in thinking itself.",
"title": "Description"
},
{
"paragraph_id": 72,
"text": "Therapists use CBT techniques to help people challenge their patterns and beliefs and replace errors in thinking, known as cognitive distortions with \"more realistic and effective thoughts, thus decreasing emotional distress and self-defeating behavior\". Cognitive distortions can be either a pseudo-discrimination belief or an overgeneralization of something. CBT techniques may also be used to help individuals take a more open, mindful, and aware posture toward cognitive distortions so as to diminish their impact.",
"title": "Description"
},
{
"paragraph_id": 73,
"text": "Mainstream CBT helps individuals replace \"maladaptive... coping skills, cognitions, emotions and behaviors with more adaptive ones\", by challenging an individual's way of thinking and the way that they react to certain habits or behaviors, but there is still controversy about the degree to which these traditional cognitive elements account for the effects seen with CBT over and above the earlier behavioral elements such as exposure and skills training.",
"title": "Description"
},
{
"paragraph_id": 74,
"text": "CBT can be seen as having six phases:",
"title": "Description"
},
{
"paragraph_id": 75,
"text": "These steps are based on a system created by Kanfer and Saslow. After identifying the behaviors that need changing, whether they be in excess or deficit, and treatment has occurred, the psychologist must identify whether or not the intervention succeeded. For example, \"If the goal was to decrease the behavior, then there should be a decrease relative to the baseline. If the critical behavior remains at or above the baseline, then the intervention has failed.\"",
"title": "Description"
},
{
"paragraph_id": 76,
"text": "The steps in the assessment phase include:",
"title": "Description"
},
{
"paragraph_id": 77,
"text": "The re-conceptualization phase makes up much of the \"cognitive\" portion of CBT.",
"title": "Description"
},
{
"paragraph_id": 78,
"text": "There are different protocols for delivering cognitive behavioral therapy, with important similarities among them. Use of the term CBT may refer to different interventions, including \"self-instructions (e.g. distraction, imagery, motivational self-talk), relaxation and/or biofeedback, development of adaptive coping strategies (e.g. minimizing negative or self-defeating thoughts), changing maladaptive beliefs about pain, and goal setting\". Treatment is sometimes manualized, with brief, direct, and time-limited treatments for individual psychological disorders that are specific technique-driven. CBT is used in both individual and group settings, and the techniques are often adapted for self-help applications. Some clinicians and researchers are cognitively oriented (e.g. cognitive restructuring), while others are more behaviorally oriented (e.g. in vivo exposure therapy). Interventions such as imaginal exposure therapy combine both approaches.",
"title": "Description"
},
{
"paragraph_id": 79,
"text": "CBT may be delivered in conjunction with a variety of diverse but related techniques such as exposure therapy, stress inoculation, cognitive processing therapy, cognitive therapy, metacognitive therapy, metacognitive training, relaxation training, dialectical behavior therapy, and acceptance and commitment therapy. Some practitioners promote a form of mindful cognitive therapy which includes a greater emphasis on self-awareness as part of the therapeutic process.",
"title": "Description"
},
{
"paragraph_id": 80,
"text": "A typical CBT programme would consist of face-to-face sessions between patient and therapist, made up of 6–18 sessions of around an hour each with a gap of 1–3 weeks between sessions. This initial programme might be followed by some booster sessions, for instance after one month and three months. CBT has also been found to be effective if patient and therapist type in real time to each other over computer links.",
"title": "Methods of access"
},
{
"paragraph_id": 81,
"text": "Cognitive-behavioral therapy is most closely allied with the scientist–practitioner model in which clinical practice and research are informed by a scientific perspective, clear operationalization of the problem, and an emphasis on measurement, including measuring changes in cognition and behavior and the attainment of goals. These are often met through \"homework\" assignments in which the patient and the therapist work together to craft an assignment to complete before the next session. The completion of these assignments – which can be as simple as a person with depression attending some kind of social event – indicates a dedication to treatment compliance and a desire to change. The therapists can then logically gauge the next step of treatment based on how thoroughly the patient completes the assignment. Effective cognitive behavioral therapy is dependent on a therapeutic alliance between the healthcare practitioner and the person seeking assistance. Unlike many other forms of psychotherapy, the patient is very involved in CBT. For example, an anxious patient may be asked to talk to a stranger as a homework assignment, but if that is too difficult, he or she can work out an easier assignment first. The therapist needs to be flexible and willing to listen to the patient rather than acting as an authority figure.",
"title": "Methods of access"
},
{
"paragraph_id": 82,
"text": "Computerized cognitive behavioral therapy (CCBT) has been described by NICE as a \"generic term for delivering CBT via an interactive computer interface delivered by a personal computer, internet, or interactive voice response system\", instead of face-to-face with a human therapist. It is also known as internet-delivered cognitive behavioral therapy or ICBT. CCBT has potential to improve access to evidence-based therapies, and to overcome the prohibitive costs and lack of availability sometimes associated with retaining a human therapist. In this context, it is important not to confuse CBT with 'computer-based training', which nowadays is more commonly referred to as e-Learning.",
"title": "Methods of access"
},
{
"paragraph_id": 83,
"text": "Although improvements in both research quality and treatment adherence is required before advocating for the global dissemination of CCBT, it has been found in meta-studies to be cost-effective and often cheaper than usual care, including for anxiety and PTSD. Studies have shown that individuals with social anxiety and depression experienced improvement with online CBT-based methods. A study assessing an online version of CBT for people with mild-to-moderate PTSD found that the online approach was as effective as, and cheaper than, the same therapy given face-to-face. A review of current CCBT research in the treatment of OCD in children found this interface to hold great potential for future treatment of OCD in youths and adolescent populations. Additionally, most internet interventions for post-traumatic stress disorder use CCBT. CCBT is also predisposed to treating mood disorders amongst non-heterosexual populations, who may avoid face-to-face therapy from fear of stigma. However presently CCBT programs seldom cater to these populations.",
"title": "Methods of access"
},
{
"paragraph_id": 84,
"text": "In February 2006 NICE recommended that CCBT be made available for use within the NHS across England and Wales for patients presenting with mild-to-moderate depression, rather than immediately opting for antidepressant medication, and CCBT is made available by some health systems. The 2009 NICE guideline recognized that there are likely to be a number of computerized CBT products that are useful to patients, but removed endorsement of any specific product.",
"title": "Methods of access"
},
{
"paragraph_id": 85,
"text": "Another new method of access is the use of mobile app or smartphone applications to deliver self-help or guided CBT. Technology companies are developing mobile-based artificial intelligence chatbot applications in delivering CBT as an early intervention to support mental health, to build psychological resilience, and to promote emotional well-being. Artificial intelligence (AI) text-based conversational application delivered securely and privately over smartphone devices have the ability to scale globally and offer contextual and always-available support. Active research is underway including real-world data studies that measure effectiveness and engagement of text-based smartphone chatbot apps for delivery of CBT using a text-based conversational interface. Recent market research and analysis of over 500 online mental healthcare solutions identified 3 key challenges in this market: quality of the content, guidance of the user and personalisation.",
"title": "Methods of access"
},
{
"paragraph_id": 86,
"text": "A study compared CBT alone with a mindfulness-based therapy combined with CBT, both delivered via an app. It found that mindfulness-based self-help reduced the severity of depression more than CBT self-help in the short-term. Overall, NHS costs for the mindfulness approach were £500 less per person than for CBT.",
"title": "Methods of access"
},
{
"paragraph_id": 87,
"text": "Enabling patients to read self-help CBT guides has been shown to be effective by some studies. However one study found a negative effect in patients who tended to ruminate, and another meta-analysis found that the benefit was only significant when the self-help was guided (e.g. by a medical professional).",
"title": "Methods of access"
},
{
"paragraph_id": 88,
"text": "Patient participation in group courses has been shown to be effective. In a meta-analysis reviewing evidence-based treatment of OCD in children, individual CBT was found to be more efficacious than group CBT.",
"title": "Methods of access"
},
{
"paragraph_id": 89,
"text": "Brief cognitive behavioral therapy (BCBT) is a form of CBT which has been developed for situations in which there are time constraints on the therapy sessions and specifically for those struggling with suicidal ideation and/or making suicide attempts. BCBT was based on Rudd's proposed \"suicidal mode\", an elaboration of Beck's modal theory. BCBT takes place over a couple of sessions that can last up to 12 accumulated hours by design. This technique was first implemented and developed with soldiers on active duty by Dr. M. David Rudd to prevent suicide.",
"title": "Types"
},
{
"paragraph_id": 90,
"text": "Breakdown of treatment",
"title": "Types"
},
{
"paragraph_id": 91,
"text": "Cognitive emotional behavioral therapy (CEBT) is a form of CBT developed initially for individuals with eating disorders but now used with a range of problems including anxiety, depression, obsessive compulsive disorder (OCD), post-traumatic stress disorder (PTSD) and anger problems. It combines aspects of CBT and dialectical behavioral therapy and aims to improve understanding and tolerance of emotions in order to facilitate the therapeutic process. It is frequently used as a \"pretreatment\" to prepare and better equip individuals for longer-term therapy.",
"title": "Types"
},
{
"paragraph_id": 92,
"text": "Structured cognitive-behavioral training (SCBT) is a cognitive-based process with core philosophies that draw heavily from CBT. Like CBT, SCBT asserts that behavior is inextricably related to beliefs, thoughts, and emotions. SCBT also builds on core CBT philosophy by incorporating other well-known modalities in the fields of behavioral health and psychology: most notably, Albert Ellis's rational emotive behavior therapy. SCBT differs from CBT in two distinct ways. First, SCBT is delivered in a highly regimented format. Second, SCBT is a predetermined and finite training process that becomes personalized by the input of the participant. SCBT is designed to bring a participant to a specific result in a specific period of time. SCBT has been used to challenge addictive behavior, particularly with substances such as tobacco, alcohol and food, and to manage diabetes and subdue stress and anxiety. SCBT has also been used in the field of criminal psychology in the effort to reduce recidivism.",
"title": "Types"
},
{
"paragraph_id": 93,
"text": "Moral reconation therapy, a type of CBT used to help felons overcome antisocial personality disorder (ASPD), slightly decreases the risk of further offending. It is generally implemented in a group format because of the risk of offenders with ASPD being given one-on-one therapy reinforces narcissistic behavioral characteristics, and can be used in correctional or outpatient settings. Groups usually meet weekly for two to six months.",
"title": "Types"
},
{
"paragraph_id": 94,
"text": "This type of therapy uses a blend of cognitive, behavioral, and certain humanistic training techniques to target the stressors of the client. This usually is used to help clients better cope with their stress or anxiety after stressful events. This is a three-phase process that trains the client to use skills that they already have to better adapt to their current stressors. The first phase is an interview phase that includes psychological testing, client self-monitoring, and a variety of reading materials. This allows the therapist to individually tailor the training process to the client. Clients learn how to categorize problems into emotion-focused or problem-focused so that they can better treat their negative situations. This phase ultimately prepares the client to eventually confront and reflect upon their current reactions to stressors, before looking at ways to change their reactions and emotions to their stressors. The focus is conceptualization.",
"title": "Types"
},
{
"paragraph_id": 95,
"text": "The second phase emphasizes the aspect of skills acquisition and rehearsal that continues from the earlier phase of conceptualization. The client is taught skills that help them cope with their stressors. These skills are then practised in the space of therapy. These skills involve self-regulation, problem-solving, interpersonal communication skills, etc.",
"title": "Types"
},
{
"paragraph_id": 96,
"text": "The third and final phase is the application and following through of the skills learned in the training process. This gives the client opportunities to apply their learned skills to a wide range of stressors. Activities include role-playing, imagery, modeling, etc. In the end, the client will have been trained on a preventive basis to inoculate personal, chronic, and future stressors by breaking down their stressors into problems they will address in long-term, short-term, and intermediate coping goals.",
"title": "Types"
},
{
"paragraph_id": 97,
"text": "A newly developed group therapy model based on CBT integrates knitting into the therapeutical process and has been proven to yield reliable and promising results. The foundation for this novel approach to CBT is the frequently emphasized notion that therapy success depends on the embeddedness of the therapy method in the patients' natural routine. Similar to standard group-based CBT, patients meet once a week in a group of 10 to 15 patients and knit together under the instruction of a trained psychologist or mental health professional. Central for the therapy is the patient's imaginative ability to assign each part of the wool to a certain thought. During the therapy, the wool is carefully knitted, creating a knitted piece of any form. This therapeutical process teaches the patient to meaningfully align thought, by (physically) creating a coherent knitted piece. Moreover, since CBT emphasizes the behavior as a result of cognition, the knitting illustrates how thoughts (which are tried to be imaginary tight to the wool) materialize into the reality surrounding us.",
"title": "Types"
},
{
"paragraph_id": 98,
"text": "Mindfulness-based cognitive behavioral hypnotherapy (MCBH) is a form of CBT that focuses on awareness in a reflective approach, addressing subconscious tendencies. It is more the process that contains three phases for achieving wanted goals and integrates the principles of mindfulness and cognitive-behavioral techniques with the transformative potential of hypnotherapy.",
"title": "Types"
},
{
"paragraph_id": 99,
"text": "The Unified Protocol for Transdiagnostic Treatment of Emotional Disorders (UP) is a form of CBT, developed by David H. Barlow and researchers at Boston University, that can be applied to a range of and anxiety disorders. The rationale is that anxiety and depression disorders often occur together due to common underlying causes and can efficiently be treated together.",
"title": "Types"
},
{
"paragraph_id": 100,
"text": "The UP includes a common set of components:",
"title": "Types"
},
{
"paragraph_id": 101,
"text": "The UP has been shown to produce equivalent results to single-diagnosis protocols for specific disorders, such as OCD and social anxiety disorder. Several studies have shown that the UP is easier to disseminate as compared to single-diagnosis protocols.",
"title": "Types"
},
{
"paragraph_id": 102,
"text": "The research conducted for CBT has been a topic of sustained controversy. While some researchers write that CBT is more effective than other treatments, many other researchers and practitioners have questioned the validity of such claims. For example, one study determined CBT to be superior to other treatments in treating anxiety and depression. However, researchers responding directly to that study conducted a re-analysis and found no evidence of CBT being superior to other bona fide treatments, and conducted an analysis of thirteen other CBT clinical trials and determined that they failed to provide evidence of CBT superiority. In cases where CBT has been reported to be statistically better than other psychological interventions in terms of primary outcome measures, effect sizes were small and suggested that those differences were clinically meaningless and insignificant. Moreover, on secondary outcomes (i.e., measures of general functioning) no significant differences have been typically found between CBT and other treatments.",
"title": "Criticisms"
},
{
"paragraph_id": 103,
"text": "A major criticism has been that clinical studies of CBT efficacy (or any psychotherapy) are not double-blind (i.e., either the subjects or the therapists in psychotherapy studies are not blind to the type of treatment). They may be single-blinded, i.e. the rater may not know the treatment the patient received, but neither the patients nor the therapists are blinded to the type of therapy given (two out of three of the persons involved in the trial, i.e., all of the persons involved in the treatment, are unblinded). The patient is an active participant in correcting negative distorted thoughts, thus quite aware of the treatment group they are in.",
"title": "Criticisms"
},
{
"paragraph_id": 104,
"text": "The importance of double-blinding was shown in a meta-analysis that examined the effectiveness of CBT when placebo control and blindness were factored in. Pooled data from published trials of CBT in schizophrenia, major depressive disorder (MDD), and bipolar disorder that used controls for non-specific effects of intervention were analyzed. This study concluded that CBT is no better than non-specific control interventions in the treatment of schizophrenia and does not reduce relapse rates; treatment effects are small in treatment studies of MDD, and it is not an effective treatment strategy for prevention of relapse in bipolar disorder. For MDD, the authors note that the pooled effect size was very low.",
"title": "Criticisms"
},
{
"paragraph_id": 105,
"text": "Additionally, a 2015 meta-analysis revealed that the positive effects of CBT on depression have been declining since 1977. The overall results showed two different declines in effect sizes: 1) an overall decline between 1977 and 2014, and 2) a steeper decline between 1995 and 2014. Additional sub-analysis revealed that CBT studies where therapists in the test group were instructed to adhere to the Beck CBT manual had a steeper decline in effect sizes since 1977 than studies where therapists in the test group were instructed to use CBT without a manual. The authors reported that they were unsure why the effects were declining but did list inadequate therapist training, failure to adhere to a manual, lack of therapist experience, and patients' hope and faith in its efficacy waning as potential reasons. The authors did mention that the current study was limited to depressive disorders only.",
"title": "Criticisms"
},
{
"paragraph_id": 106,
"text": "Furthermore, other researchers write that CBT studies have high drop-out rates compared to other treatments. One meta-analysis found that CBT drop-out rates were 17% higher than those of other therapies. This high drop-out rate is also evident in the treatment of several disorders, particularly the eating disorder anorexia nervosa, which is commonly treated with CBT. Those treated with CBT have a high chance of dropping out of therapy before completion and reverting to their anorexia behaviors.",
"title": "Criticisms"
},
{
"paragraph_id": 107,
"text": "Other researchers analyzing treatments for youths who self-injure found similar drop-out rates in CBT and DBT groups. In this study, the researchers analyzed several clinical trials that measured the efficacy of CBT administered to youths who self-injure. The researchers concluded that none of them were found to be efficacious.",
"title": "Criticisms"
},
{
"paragraph_id": 108,
"text": "The methods employed in CBT research have not been the only criticisms; some individuals have called its theory and therapy into question.",
"title": "Criticisms"
},
{
"paragraph_id": 109,
"text": "Slife and Williams write that one of the hidden assumptions in CBT is that of determinism, or the absence of free will. They argue that CBT holds that external stimuli from the environment enter the mind, causing different thoughts that cause emotional states: nowhere in CBT theory is agency, or free will, accounted for.",
"title": "Criticisms"
},
{
"paragraph_id": 110,
"text": "Another criticism of CBT theory, especially as applied to major depressive disorder (MDD), is that it confounds the symptoms of the disorder with its causes.",
"title": "Criticisms"
},
{
"paragraph_id": 111,
"text": "CBT is generally regarded as having very few if any side effects. Calls have been made by some for more appraisal of possible side effects of CBT. Many randomized trials of psychological interventions like CBT do not monitor potential harms to the patient. In contrast, randomized trials of pharmacological interventions are much more likely to take adverse effects into consideration.",
"title": "Criticisms"
},
{
"paragraph_id": 112,
"text": "A 2017 meta-analysis revealed that adverse events are not common in children receiving CBT and, furthermore, that CBT is associated with fewer dropouts than either placebo or medications. Nevertheless, CBT therapists do sometimes report 'unwanted events' and side effects in their outpatients with \"negative wellbeing/distress\" being the most frequent.",
"title": "Criticisms"
},
{
"paragraph_id": 113,
"text": "The writer and group analyst Farhad Dalal questions the socio-political assumptions behind the introduction of CBT. According to one reviewer, Dalal connects the rise of CBT with \"the parallel rise of neoliberalism, with its focus on marketization, efficiency, quantification and managerialism\", and he questions the scientific basis of CBT, suggesting that \"the 'science' of psychological treatment is often less a scientific than a political contest\". In his book, Dalal also questions the ethical basis of CBT.",
"title": "Criticisms"
},
{
"paragraph_id": 114,
"text": "The UK's National Health Service announced in 2008 that more therapists would be trained to provide CBT at government expense as part of an initiative called Improving Access to Psychological Therapies (IAPT). The NICE said that CBT would become the mainstay of treatment for non-severe depression, with medication used only in cases where CBT had failed. Therapists complained that the data does not fully support the attention and funding CBT receives. Psychotherapist and professor Andrew Samuels stated that this constitutes \"a coup, a power play by a community that has suddenly found itself on the brink of corralling an enormous amount of money ... Everyone has been seduced by CBT's apparent cheapness.\"",
"title": "Society and culture"
},
{
"paragraph_id": 115,
"text": "The UK Council for Psychotherapy issued a press release in 2012 saying that the IAPT's policies were undermining traditional psychotherapy and criticized proposals that would limit some approved therapies to CBT, claiming that they restricted patients to \"a watered down version of cognitive behavioural therapy (CBT), often delivered by very lightly trained staff\".",
"title": "Society and culture"
}
] | Cognitive behavioral therapy (CBT) is a psycho-social intervention that aims to reduce symptoms of various mental health conditions, primarily depression and anxiety disorders. Cognitive behavioral therapy is one of the most effective means of treatment for substance abuse and co-occurring mental health disorders. CBT focuses on challenging and changing cognitive distortions and their associated behaviors to improve emotional regulation and develop personal coping strategies that target solving current problems. Though it was originally designed to treat depression, its uses have been expanded to include many issues and the treatment of many mental health conditions, including anxiety, substance use disorders, marital problems, ADHD, and eating disorders. CBT includes a number of cognitive or behavioral psychotherapies that treat defined psychopathologies using evidence-based techniques and strategies. CBT is a common form of talk therapy based on the combination of the basic principles from behavioral and cognitive psychology. It is different from historical approaches to psychotherapy, such as the psychoanalytic approach where the therapist looks for the unconscious meaning behind the behaviors, and then formulates a diagnosis. Instead, CBT is a "problem-focused" and "action-oriented" form of therapy, meaning it is used to treat specific problems related to a diagnosed mental disorder. The therapist's role is to assist the client in finding and practicing effective strategies to address the identified goals and to alleviate symptoms of the disorder. CBT is based on the belief that thought distortions and maladaptive behaviors play a role in the development and maintenance of many psychological disorders and that symptoms and associated distress can be reduced by teaching new information-processing skills and coping mechanisms. When compared to psychoactive medications, review studies have found CBT alone to be as effective for treating less severe forms of depression, anxiety, post-traumatic stress disorder (PTSD), tics, substance use disorders, eating disorders, and borderline personality disorder. Some research suggests that CBT is most effective when combined with medication for treating mental disorders, such as major depressive disorder. CBT is recommended as the first line of treatment for the majority of psychological disorders in children and adolescents, including aggression and conduct disorder. Researchers have found that other bona fide therapeutic interventions were equally effective for treating certain conditions in adults. Along with interpersonal psychotherapy (IPT), CBT is recommended in treatment guidelines as a psychosocial treatment of choice. | 2001-10-31T00:22:46Z | 2023-12-30T13:27:14Z | [
"Template:Commons category-inline",
"Template:Authority control",
"Template:Anchor",
"Template:Cite magazine",
"Template:Clarify",
"Template:Multiple issues",
"Template:Citation",
"Template:Cite news",
"Template:Library resources box",
"Template:Addiction",
"Template:Use dmy dates",
"Template:Page needed",
"Template:Cite book",
"Template:Refbegin",
"Template:Refend",
"Template:Cognitive behavioral therapy",
"Template:Psychotherapy",
"Template:Use American English",
"Template:Infobox medical intervention",
"Template:Cite report",
"Template:Reflist",
"Template:Cite press release",
"Template:Short description",
"Template:Main",
"Template:About",
"Template:Further",
"Template:See also",
"Template:Cite web",
"Template:Psychology",
"Template:Cite journal",
"Template:Unreliable medical source"
] | https://en.wikipedia.org/wiki/Cognitive_behavioral_therapy |
5,751 | Chinese language | Chinese (simplified Chinese: 汉语; traditional Chinese: 漢語; pinyin: Hànyǔ; lit. 'Han language' or 中文; Zhōngwén; 'Chinese writing') is a group of languages spoken natively by the ethnic Han Chinese majority and many minority ethnic groups in Greater China. Approximately 1.3 billion people, or around 16% of the global population, speak a variety of Chinese as their first language.
Chinese languages form the Sinitic branch of the Sino-Tibetan language family. The spoken varieties of Chinese are usually considered by native speakers to be dialects of a single language. However, their lack of mutual intelligibility means they are sometimes considered to be separate languages in a family. Investigation of the historical relationships among the varieties of Chinese is ongoing. Currently, most classifications posit 7 to 13 main regional groups based on phonetic developments from Middle Chinese, of which the most spoken by far is Mandarin with 66%, or around 800 million speakers, followed by Min (75 million, e.g. Southern Min), Wu (74 million, e.g. Shanghainese), and Yue (68 million, e.g. Cantonese). These branches are unintelligible to each other, and many of their subgroups are unintelligible with the other varieties within the same branch (e.g. Southern Min). There are, however, transitional areas where varieties from different branches share enough features for some limited intelligibility, including New Xiang with Southwestern Mandarin, Xuanzhou Wu Chinese with Lower Yangtze Mandarin, Jin with Central Plains Mandarin and certain divergent dialects of Hakka with Gan (though these are unintelligible with mainstream Hakka). All varieties of Chinese are tonal to at least some degree, and are largely analytic.
The earliest Chinese written records are oracle bone inscriptions dating from the Shang dynasty c. 1250 BCE. The phonetic categories of Old Chinese can be reconstructed from the rhymes of ancient poetry. During the Northern and Southern dynasties period, Middle Chinese went through several sound changes and split into several varieties following prolonged geographic and political separation. The Qieyun, a rime dictionary, recorded a compromise between the pronunciations of different regions. The royal courts of the Ming and early Qing dynasties operated using a koiné language known as Guanhua, based on the Nanjing dialect of Mandarin.
Standard Chinese is an official language of both the People's Republic of China and the Republic of China on Taiwan, one of the four official languages of Singapore, and one of the six official languages of the United Nations. Standard Chinese is based on the Beijing dialect of Mandarin, and was first officially adopted in the 1930s. The language is written primarily using a logography of Chinese characters, largely shared by readers who may otherwise speak mutually unintelligible varieties. Since the 1950s, the use of simplified characters has been promoted by the government of the People's Republic of China, with Singapore officially adopting them in 1976. Traditional characters are used in Taiwan, Hong Kong, Macau, and among Chinese-speaking communities overseas. Traditional characters are also in use in mainland China, although they are not the first choice in daily use. For example, practicing Chinese calligraphy requires knowledge of traditional Chinese characters.
Linguists classify all varieties of Chinese as part of the Sino-Tibetan language family, together with Burmese, Tibetan and many other languages spoken in the Himalayas and the Southeast Asian Massif. Although the relationship was first proposed in the early 19th century and is now broadly accepted, reconstruction of Sino-Tibetan is much less developed than that of families such as Indo-European or Austroasiatic. Difficulties have included the great diversity of the languages, the lack of inflection in many of them, and the effects of language contact. In addition, many of the smaller languages are spoken in mountainous areas that are difficult to reach and are often also sensitive border zones. Without a secure reconstruction of proto-Sino-Tibetan, the higher-level structure of the family remains unclear. A top-level branching into Chinese and Tibeto-Burman languages is often assumed, but has not been convincingly demonstrated.
The first written records appeared over 3,000 years ago during the Shang dynasty. As the language evolved over this period, the various local varieties became mutually unintelligible. In reaction, central governments have repeatedly sought to promulgate a unified standard.
The earliest examples of Old Chinese are divinatory inscriptions on oracle bones dated to c. 1250 BCE, during the late Shang. The next attested stage came from inscriptions on bronze artifacts of the Western Zhou period (1046–771 BCE), the Classic of Poetry and portions of the Book of Documents and I Ching. Scholars have attempted to reconstruct the phonology of Old Chinese by comparing later varieties of Chinese with the rhyming practice of the Classic of Poetry and the phonetic elements found in the majority of Chinese characters. Although many of the finer details remain unclear, most scholars agree that Old Chinese differs from Middle Chinese in lacking retroflex and palatal obstruents but having initial consonant clusters of some sort, and in having voiceless nasals and liquids. Most recent reconstructions also describe an atonal language with consonant clusters at the end of the syllable, developing into tone distinctions in Middle Chinese. Several derivational affixes have also been identified, but the language lacked inflection, indicating grammatical relationships using word order and grammatical particles.
Middle Chinese was the language used during Northern and Southern dynasties and the Sui, Tang, and Song dynasties (6th–10th centuries CE). It can be divided into an early period, reflected by the Qieyun rime book (601 CE), and a late period in the 10th century, reflected by rhyme tables such as the Yunjing constructed by ancient Chinese philologists as a guide to the Qieyun system. These works define phonological categories, but with little hint of what sounds they represent. Linguists have identified these sounds by comparing the categories with pronunciations in modern varieties of Chinese, borrowed Chinese words in Japanese, Vietnamese, and Korean, and transcription evidence. The resulting system is very complex, with a large number of consonants and vowels, but they are probably not all distinguished in any single dialect. Most linguists now believe it represents a diasystem encompassing 6th-century northern and southern standards for reading the classics.
The complex relationship between spoken and written Chinese is an example of diglossia: as spoken, Chinese varieties have evolved at different rates, while the written language used throughout China changed comparatively little, crystallizing into a prestige form known as Classical or Literary Chinese. Literature written distinctly in the Classical form began to emerge during the Spring and Autumn period. Its use in writing remained nearly universal until the late 19th century, culminating with the widespread adoption of written vernacular Chinese with the May Fourth Movement beginning in 1919.
After the fall of the Northern Song dynasty and subsequent reign of the Jurchen Jin and Mongol Yuan dynasties in northern China, a common speech (now called Old Mandarin) developed based on the dialects of the North China Plain around the capital. The 1324 Zhongyuan Yinyun was a dictionary that codified the rhyming conventions of the new sanqu verse form in this language. Together with the slightly later Menggu Ziyun, this dictionary describes a language with many of the features characteristic of modern Mandarin dialects.
Up to the early 20th century, most Chinese people only spoke their local variety. Thus, as a practical measure, officials of the Ming and Qing dynasties carried out the administration of the empire using a common language based on Mandarin varieties, known as 官话; 官話; Guānhuà; 'language of officials'. For most of this period, this language was a koiné based on dialects spoken in the Nanjing area, though not identical to any single dialect. By the middle of the 19th century, the Beijing dialect had become dominant and was essential for any business with the imperial court.
In the 1930s, a standard national language, 国语; 國語; Guóyǔ; 'national language', was adopted. After much dispute between proponents of northern and southern dialects and an abortive attempt at an artificial pronunciation, the National Language Unification Commission finally settled on the Beijing dialect in 1932. The People's Republic founded in 1949 retained this standard but renamed it 普通话; 普通話; pǔtōnghuà; 'common speech'. The national language is now used in education, the media, and formal situations in both mainland China and Taiwan. Because of their colonial and linguistic history, the language used in education, the media, formal speech, and everyday life in Hong Kong and Macau is the local Cantonese, although the standard language, Mandarin, has become very influential and is being taught in schools.
Historically, the Chinese language has spread to its neighbors through a variety of means. Northern Vietnam was incorporated into the Han empire in 111 BCE, marking the beginning of a period of Chinese control that ran almost continuously for a millennium. The Four Commanderies were established in northern Korea in the first century BCE, but disintegrated in the following centuries. Chinese Buddhism spread over East Asia between the 2nd and 5th centuries CE, and with it the study of scriptures and literature in Literary Chinese. Later, strong central governments modeled on Chinese institutions were established in Korea, Japan, and Vietnam, with Literary Chinese serving as the language of administration and scholarship, a position it would retain until the late 19th century in Korea and (to a lesser extent) Japan, and the early 20th century in Vietnam. Scholars from different lands could communicate, albeit only in writing, using Literary Chinese.
Although they used Chinese solely for written communication, each country had its own tradition of reading texts aloud, the so-called Sino-Xenic pronunciations. Chinese words with these pronunciations were also extensively imported into the Korean, Japanese and Vietnamese languages, and today comprise over half of their vocabularies. This massive influx led to changes in the phonological structure of the languages, contributing to the development of moraic structure in Japanese and the disruption of vowel harmony in Korean.
Borrowed Chinese morphemes have been used extensively in all these languages to coin compound words for new concepts, in a similar way to the use of Latin and Ancient Greek roots in European languages. Many new compounds, or new meanings for old phrases, were created in the late 19th and early 20th centuries to name Western concepts and artifacts. These coinages, written in shared Chinese characters, have then been borrowed freely between languages. They have even been accepted into Chinese, a language usually resistant to loanwords, because their foreign origin was hidden by their written form. Often different compounds for the same concept were in circulation for some time before a winner emerged, and sometimes the final choice differed between countries. The proportion of vocabulary of Chinese origin thus tends to be greater in technical, abstract, or formal language. For example, in Japan, Sino-Japanese words account for about 35% of the words in entertainment magazines, over half the words in newspapers, and 60% of the words in science magazines.
Vietnam, Korea, and Japan each developed writing systems for their own languages, initially based on Chinese characters, but later replaced with the hangul alphabet for Korean and supplemented with kana syllabaries for Japanese, while Vietnamese continued to be written with the complex chữ Nôm script. However, these were limited to popular literature until the late 19th century. Today Japanese is written with a composite script using both Chinese characters called kanji, and kana. Korean is written exclusively with hangul in North Korea (although knowledge of the supplementary Chinese characters, called hanja, is still required), and hanja are used increasingly rarely in South Korea. As a result of former French colonization, Vietnamese switched to a Latin-based alphabet.
Examples of loan words in English include 'tea' from Hokkien 茶; tê, 'dim sum' from Cantonese 點心; dim sam, and 'kumquat' from Cantonese 金橘; gamgwat.
Jerry Norman estimated that there are hundreds of mutually unintelligible varieties of Chinese. These varieties form a dialect continuum, in which differences in speech generally become more pronounced as distances increase, though the rate of change varies immensely. Generally, mountainous South China exhibits more linguistic diversity than the North China Plain. In parts of South China, a major city's dialect may only be marginally intelligible to close neighbours. For instance, Wuzhou is about 190 kilometres (120 mi) upstream from Guangzhou, but the Yue variety spoken there is more like that of Guangzhou than is that of Taishan, 95 kilometres (60 mi) southwest of Guangzhou and separated from it by several rivers. In parts of Fujian the speech of neighbouring counties or even villages may be mutually unintelligible.
Until the late 20th century, Chinese emigrants to Southeast Asia and North America came from southeast coastal areas, where Min, Hakka, and Yue dialects are spoken. The vast majority of Chinese immigrants to North America up to the mid-20th century spoke the Taishan dialect, from a small coastal area southwest of Guangzhou.
Proportions of first-language speakers
Local varieties of Chinese are conventionally classified into seven dialect groups, largely based on the different evolution of Middle Chinese voiced initials:
The classification of Li Rong, which is used in the Language Atlas of China (1987), distinguishes three further groups:
Some varieties remain unclassified, including the Danzhou dialect on Hainan, Waxianghua spoken in western Hunan, and Shaozhou Tuhua spoken in northern Guangdong.
Standard Chinese is the official standard language of China (where it is called 普通话; pǔtōnghuà) and Taiwan, and one of the four official languages of Singapore (where it is called either 华语; 華語; Huáyŭ or 汉语; 漢語; Hànyǔ). Standard Chinese is based on the Beijing dialect of Mandarin. The governments of both China and Taiwan intend for speakers of all Chinese speech varieties to use it as a common language of communication. Therefore, it is used in government agencies, in the media, and as a language of instruction in schools.
In China, diglossia has been a common feature. For example, in addition to Standard Chinese, a resident of Shanghai may speak Shanghainese; if they grew up elsewhere, then they are also likely to be fluent in the particular dialect of that local area. A native of Guangzhou may speak both Cantonese and Standard Chinese. In addition to Mandarin, most Taiwanese people also speak Taiwanese Hokkien (commonly 台語; 'Taiwanese'), Hakka, or an Austronesian language. A Taiwanese may commonly mix pronunciations, phrases, and words from Mandarin and other languages of Taiwan, and this mixture is considered normal in daily or informal speech.
Because of their traditional cultural ties to Guangdong and their histories of outside colonization, Hong Kong and Macau use Cantonese as a standard language.
The designation of various Chinese branches remains controversial. Some linguists and most ordinary Chinese people consider all the spoken varieties as one single language, as speakers share a common national identity and a common written form. Others instead argue that it is inappropriate to refer to major branches of Chinese such as Mandarin, Wu and so on as "dialects" because the mutual unintelligibility between them is too great. However, calling major Chinese branches "languages" would also be wrong under the same criterion, since a branch such as Wu itself contains many mutually unintelligible varieties, and could not be properly called a single language.
Others point out that linguists often ignore mutual intelligibility when varieties share intelligibility with a central variety (i.e. a prestige variety, such as Standard Mandarin), as the issue requires careful handling when mutual intelligibility is inconsistent with language identity.
The Chinese government's official Chinese designation for the major branches of Chinese is 方言; fāngyán; 'regional speech', whereas the more closely related varieties within these are called 地点方言; 地點方言; dìdiǎn fāngyán; 'local speech'.
Because of the difficulties involved in determining the difference between language and dialect, other terms have been proposed. These include topolect, lect, vernacular, regionalect, and variety.
Syllables in the Chinese languages have some unique characteristics. They are tightly related to the morphology and also to the characters of the writing system, and phonologically they are structured according to fixed rules.
The structure of each syllable consists of a nucleus that has a vowel (which can be a monophthong, diphthong, or even a triphthong in certain varieties), preceded by an onset (a single consonant, or consonant + glide; a zero onset is also possible), and followed (optionally) by a coda consonant; a syllable also carries a tone. There are some instances where a vowel is not used as a nucleus. An example of this is in Cantonese, where the nasal sonorant consonants /m/ and /ŋ/ can stand alone as their own syllable.
In Mandarin much more than in other spoken varieties, most syllables tend to be open syllables, meaning they have no coda (assuming that a final glide is not analyzed as a coda), but syllables that do have codas are restricted to nasals /m/, /n/, /ŋ/, the retroflex approximant /ɻ/, and voiceless stops /p/, /t/, /k/, or /ʔ/. Some varieties allow most of these codas, whereas others, such as Standard Chinese, are limited to only /n/, /ŋ/, and /ɻ/.
The number of sounds in the different spoken dialects varies, but in general there has been a tendency to a reduction in sounds from Middle Chinese. The Mandarin dialects in particular have experienced a dramatic decrease in sounds and so have far more polysyllabic words than most other spoken varieties. The total number of syllables in some varieties is therefore only about a thousand, including tonal variation, which is only about an eighth as many as English.
All varieties of spoken Chinese use tones to distinguish words. A few dialects of north China may have as few as three tones, while some dialects in south China have up to six or twelve tones, depending on how one counts. One exception to this is Shanghainese, which has reduced the set of tones to a two-toned pitch accent system much like modern Japanese.
A very common example used to illustrate the use of tones in Chinese is the application of the four tones of Standard Chinese, along with the neutral tone, to the syllable ma. The tones are exemplified by the following five Chinese words: 妈; 媽; mā; 'mother' (first tone), 麻; má; 'hemp' (second tone), 马; 馬; mǎ; 'horse' (third tone), 骂; 罵; mà; 'scold' (fourth tone), and the question particle 吗; 嗎; ma (neutral tone).
In contrast, Standard Cantonese has six tones. Historically, finals that end in a stop consonant were considered to be "checked tones" and thus counted separately for a total of nine tones. However, they are considered to be duplicates in modern linguistics and are no longer counted as such:
Chinese is often described as a 'monosyllabic' language. However, this is only partially correct. It is largely accurate when describing Old and Middle Chinese; in Classical Chinese, around 90% of words consist of a single character that corresponds one-to-one with a morpheme, the smallest unit of meaning in a language. In modern varieties, it usually remains the case that morphemes are monosyllabic—in contrast, English has many multi-syllable morphemes, both bound and free, such as 'seven', 'elephant', 'para-' and '-able'. Some of the more conservative modern varieties, usually found in the south, have largely monosyllabic words, especially with basic vocabulary. However, most nouns, adjectives and verbs in modern Mandarin are disyllabic. A significant cause of this is phonological attrition: sound changes over time have steadily reduced the number of possible syllables in the language's inventory. In modern Mandarin, there are only around 1,200 possible syllables, including the tonal distinctions, compared with about 5,000 in Vietnamese (still a largely monosyllabic language), and over 8,000 in English.
Most modern varieties have the tendency to form new words through polysyllabic compounds. In some cases, monosyllabic words have become disyllabic, formed from different characters without the use of compounding, as in 窟窿; kūlong from 孔; kǒng; this is especially common in Jin varieties. This phonological collapse has led to a corresponding increase in the number of homophones. As an example, the small Langenscheidt Pocket Chinese Dictionary lists six words that are commonly pronounced as shí in Standard Chinese: 十; 'ten', 实; 實; 'actual', 识; 識; 'know', 石; 'stone', 时; 時; 'time', and 食; 'food'.
In modern spoken Mandarin, however, tremendous ambiguity would result if all of these words could be used as-is. The 20th-century Yuen Ren Chao poem Lion-Eating Poet in the Stone Den exploits this, consisting of 92 characters all pronounced shi. As such, most of these words have been replaced in speech, if not in writing, with less ambiguous disyllabic compounds. Only the first one, 十, normally appears in monosyllabic form in spoken Mandarin; the rest are normally used in the polysyllabic forms of 实际; 實際; shíjì, 认识; 認識; rènshi, 石头; 石頭; shítou, 时间; 時間; shíjiān, and 食物; shíwù, respectively. In each, the homophone was disambiguated by addition of another morpheme, typically either a near-synonym or some sort of generic word (e.g. 'head', 'thing'), the purpose of which is to indicate which of the possible meanings of the other, homophonic syllable is specifically meant.
However, when one of the above words forms part of a compound, the disambiguating syllable is generally dropped and the resulting word is still disyllabic. For example, 石; shí alone, and not 石头; 石頭; shítou, appears in compounds as meaning 'stone', such as 石膏; shígāo; 'plaster', 石灰; shíhuī; 'lime', 石窟; shíkū; 'grotto', 石英; shíyīng; 'quartz', and 石油; shíyóu; 'petroleum'. Although many single-syllable morphemes (字; zì) can stand alone as individual words, they more often than not form multi-syllable compounds known as 词; 詞; cí, which more closely resemble the traditional Western notion of a word. A Chinese cí can consist of more than one character–morpheme, usually two, but there can be three or more.
Examples of Chinese words of more than two syllables include 汉堡包; 漢堡包; hànbǎobāo; 'hamburger', 守门员; 守門員; shǒuményuán; 'goalkeeper', and 电子邮件; 電子郵件; diànzǐyóujiàn; 'e-mail'.
All varieties of modern Chinese are analytic languages: they depend on syntax (word order and sentence structure), rather than inflectional morphology (changes in the form of a word), to indicate a word's function within a sentence. In other words, Chinese has very few grammatical inflections—it possesses no tenses, no voices, no grammatical number, and only a few articles. They make heavy use of grammatical particles to indicate aspect and mood. In Mandarin, this involves the use of particles such as 了; le; 'PFV', 还; 還; hái; 'still', and 已经; 已經; yǐjīng; 'already'.
Chinese has a subject–verb–object word order, and like many other languages of East Asia, makes frequent use of the topic–comment construction to form sentences. Chinese also has an extensive system of classifiers and measure words, another trait shared with neighboring languages such as Japanese and Korean. Other notable grammatical features common to all the spoken varieties of Chinese include the use of serial verb construction, pronoun dropping and the related subject dropping. Although the grammars of the spoken varieties share many traits, they do possess differences.
The entire Chinese character corpus since antiquity comprises well over 50,000 characters, of which only roughly 10,000 are in use and only about 3,000 are frequently used in Chinese media and newspapers. However, Chinese characters should not be confused with Chinese words. Because most Chinese words are made up of two or more characters, there are many more Chinese words than characters. A more accurate equivalent for a Chinese character is the morpheme, as characters represent the smallest grammatical units with individual meanings in the Chinese language.
Estimates of the total number of Chinese words and lexicalized phrases vary greatly. The Hanyu Da Zidian, a compendium of Chinese characters, includes 54,678 head entries for characters, including oracle bone versions. The Zhonghua Zihai (1994) contains 85,568 head entries for character definitions, and is the largest reference work based purely on characters and their literary variants. The CC-CEDICT project (2010) contains 97,404 contemporary entries including idioms, technology terms and names of political figures, businesses and products. The 2009 version of the Webster's Digital Chinese Dictionary (WDCD), based on CC-CEDICT, contains over 84,000 entries.
The most comprehensive pure linguistic Chinese-language dictionary, the 12-volume Hanyu Da Cidian, records more than 23,000 head Chinese characters and gives over 370,000 definitions. The 1999 revised Cihai, a multi-volume encyclopedic dictionary reference work, gives 122,836 vocabulary entry definitions under 19,485 Chinese characters, including proper names, phrases and common zoological, geographical, sociological, scientific and technical terms.
The 2016 edition of Xiandai Hanyu Cidian, an authoritative one-volume dictionary on modern standard Chinese language as used in mainland China, has 13,000 head characters and defines 70,000 words.
Like many other languages, Chinese has absorbed a sizable number of loanwords from other cultures. Most Chinese words are formed out of native Chinese morphemes, including words describing imported objects and ideas. However, direct phonetic borrowing of foreign words has gone on since ancient times.
Some early Indo-European loanwords in Chinese have been proposed, notably 蜜; mì; 'honey', 狮; 獅; shī; 'lion', and perhaps also 马; 馬; mǎ; 'horse', 猪; 豬; zhū; 'pig', 犬; quǎn; 'dog', and 鹅; 鵝; é; 'goose'. Ancient words borrowed from along the Silk Road during the Old Chinese period include 葡萄; pútáo; 'grape', 石榴; shíliu, shíliú; 'pomegranate', and 狮子; 獅子; shīzi; 'lion'. Some words were borrowed from Buddhist scriptures, including 佛; Fó; 'Buddha' and 菩萨; 菩薩; Púsà; 'bodhisattva'. Other words came from nomadic peoples to the north, such as 胡同; hútòng; 'hutong'. Words borrowed from the peoples along the Silk Road, such as 葡萄; 'grape', generally have Persian etymologies. Buddhist terminology is generally derived from Sanskrit or Pāli, the liturgical languages of northern India. Words borrowed from the nomadic tribes of the Gobi, Mongolian or northeast regions generally have Altaic etymologies, such as 琵琶; pípá, the Chinese lute, or 酪; lào, luò; 'cheese or yogurt', but from exactly which source is not always clear.
Modern neologisms are primarily translated into Chinese in one of three ways: free translation (calques), phonetic translation (by sound), or a combination of the two. Today, it is much more common to use existing Chinese morphemes to coin new words to represent imported concepts, such as technical expressions and international scientific vocabulary, wherein the Latin and Greek components are usually converted one-for-one into the corresponding Chinese characters. The word 'telephone' was initially loaned phonetically as 德律风; 德律風; délǜfēng (Shanghainese télífon [təlɪfoŋ])—this word was widely used in Shanghai during the 1920s, but the later 电话; 電話; diànhuà; 'electric speech', built out of native Chinese morphemes, became prevalent. Other examples include
Occasionally, compromises between the transliteration and translation approaches become accepted, such as 汉堡包; 漢堡包; hànbǎobāo; 'hamburger' from 汉堡; 'Hamburg' + 包; 'bun'. Sometimes translations are designed so that they sound like the original while incorporating Chinese morphemes (phono-semantic matching), such as 马利奥; 馬利奧; Mǎlì'ào for the video game character 'Mario'. This is often done for commercial purposes, for example 奔腾; 奔騰; bēnténg; 'dashing-leaping' for 'Pentium' and 赛百味; 賽百味; Sàibǎiwèi; 'better-than hundred tastes' for 'Subway'.
Foreign words, mainly proper nouns, continue to enter the Chinese language by transcription according to their pronunciations. This is done by employing Chinese characters with similar pronunciations. For example, 'Israel' becomes 以色列; Yǐsèliè, and 'Paris' becomes 巴黎; Bālí. A rather small number of direct transliterations have survived as common words, including 沙发; 沙發; shāfā; 'sofa', 马达; 馬達; mǎdá; 'motor', 幽默; yōumò; 'humor', 逻辑; 邏輯; luóji, luójí; 'logic', 时髦; 時髦; shímáo; 'smart', 'fashionable', and 歇斯底里; xiēsīdǐlǐ; 'hysterics'. The bulk of these words were originally coined in Shanghai during the early 20th century and later loaned from there into Mandarin, hence their Mandarin pronunciations are occasionally quite divergent from the English originals. For example, in Shanghainese, 沙发; 沙發; 'sofa' and 马达; 馬達; 'motor' sound more like their English counterparts. Cantonese differs from Mandarin in some transliterations, such as 梳化; so faa; 'sofa' and 摩打; mo daa; 'motor'.
Foreign words representing Western concepts have influenced Chinese since the 20th century through transcription. From French, 芭蕾; bālěi and 香槟; 香檳; xiāngbīn were borrowed for 'ballet' and 'champagne' respectively; 咖啡; kāfēi was borrowed from Italian caffè; 'coffee'. The influence of English is particularly pronounced: from the early 20th century, many English words were borrowed into Shanghainese, such as 高尔夫; 高爾夫; gāo'ěrfū; 'golf' and the aforementioned 沙发; 沙發; shāfā; 'sofa'. Later, American soft power gave rise to 迪斯科; dísīkē; 'disco', 可乐; 可樂; kělè; 'cola', and 迷你; mínǐ; 'mini (skirt)'. Contemporary colloquial Cantonese has distinct loanwords from English, such as 卡通; kaa tung; 'cartoon', 基佬; gei lou; 'gay people', 的士; dik si; 'taxi', and 巴士; baa si; 'bus'. With the rising popularity of the Internet, there is a current vogue in China for coining English transliterations, for example, 粉丝; 粉絲; fěnsī; 'fans', 黑客; hēikè; 'hacker', and 博客; bókè; 'blog'. In Taiwan, some of these transliterations differ, such as 駭客; hàikè; 'hacker' and 部落格; bùluògé; 'interconnected tribes' for 'blog'.
Another result of English influence on Chinese is the appearance of so-called 字母词; 字母詞; zìmǔcí; 'lettered words', spelled with letters from the English alphabet. These have appeared in colloquial usage, as well as in magazines and newspapers, and on websites and television.
Since the 20th century, another source of words has been Japanese, using existing kanji: Japan re-molded European concepts and inventions into 和製漢語; wasei-kango; 'Japanese-made Chinese', and many of these words have been re-loaned into modern Chinese. Other terms were coined by the Japanese by giving new senses to existing Chinese terms or by referring to expressions used in classical Chinese literature. For example, 经济; 經濟; jīngjì (経済, keizai in Japanese), which in the original Chinese meant 'the workings of the state', narrowed to 'economy' in Japanese; this narrowed definition was then reimported into Chinese. As a result, these terms are virtually indistinguishable from native Chinese words: indeed, there is some dispute over some of these terms as to whether the Japanese or the Chinese coined them first. As a result of this loaning, Chinese, Korean, Japanese, and Vietnamese share a corpus of linguistic terms describing modern terminology, paralleling the similar corpus of terms built from Greco-Latin roots and shared among European languages.
Chinese orthography centers on Chinese characters, which are written within imaginary square blocks and traditionally arranged in vertical columns, read from top to bottom down a column and right to left across columns, although an alternative arrangement in horizontal rows read from left to right within a row and from top to bottom across rows (like English and other Western writing systems) has become more popular since the 20th century. Chinese characters denote morphemes independent of phonetic variation in different languages. Thus the character 一; 'one' is pronounced yī in Standard Chinese, yat in Cantonese, and it in Hokkien, a form of Min.
Most written Chinese documents in modern times, especially the more formal ones, are created using the grammar and syntax of Standard Chinese, regardless of the dialectal background of the author or the target audience. This replaced Literary Chinese, the written-language standard used before the 20th century. However, vocabularies from different Chinese-speaking areas have diverged, and the divergence can be observed in written Chinese.
Meanwhile, colloquial forms of various Chinese language varieties have also been written down by their users, especially in less formal settings. The most prominent example of this is Written Cantonese, which has become quite popular in tabloids, instant messaging applications, and on the internet among Hong Kongers and Cantonese speakers elsewhere.
Because some Chinese varieties have diverged and developed a number of unique morphemes that are not found in Standard Mandarin, in addition to the morphemes they share, unique characters rarely used in Standard Chinese have also been created or inherited from the archaic literary standard to represent these unique morphemes. For example, characters like 冇 and 係 are actively used in Cantonese and Hakka, while being archaic or unused in standard written Chinese.
Chinese had no uniform phonetic transcription system for most of its speakers until the mid-20th century, although enunciation patterns were recorded in early rime books and dictionaries. Early Indian translators, working in Sanskrit and Pali, were the first to attempt to describe the sounds and enunciation patterns of Chinese in a foreign language. After the 15th century, the efforts of Jesuits and Western court missionaries resulted in a number of Latin-character transcription and writing systems based on various varieties of Chinese. Some of these Latin-character systems are still used to write various Chinese varieties in the modern era.
In Hunan, women in certain areas write their local Chinese language variant in Nüshu, a syllabary derived from Chinese characters. The Dungan language, considered by many a dialect of Mandarin, is nowadays written in Cyrillic, and was previously written in the Arabic script. The Dungan people are primarily Muslim and live mainly in Kazakhstan, Kyrgyzstan, and Russia; many Hui people, living mainly in China, also speak the language.
Each Chinese character represents a monosyllabic Chinese word or morpheme. In 100 CE, the famed Han dynasty scholar Xu Shen classified characters into six categories: pictographs, simple ideographs, compound ideographs, phonetic loans, phonetic compounds and derivative characters. Only 4% were categorized as pictographs, including many of the simplest characters, such as 人; rén; 'human', 日; rì; 'the Sun', 山; shān; 'mountain', and 水; shuǐ; 'water'. Between 80% and 90% were classified as phonetic compounds such as 沖; chōng; 'pour', combining a phonetic component 中; zhōng with a semantic component of the radical 氵, a reduced form of 水; 'water'. Almost all characters created since have been made using this format. The 18th-century Kangxi Dictionary classified characters under a now-common set of 214 radicals.
Modern characters are styled after the regular script. Various other written styles are also used in Chinese calligraphy, including seal script, cursive script and clerical script. Calligraphy artists can write in Traditional and Simplified characters, but they tend to use Traditional characters for traditional art.
There are currently two systems for Chinese characters. Traditional characters, used in Hong Kong, Taiwan, Macau, and many overseas Chinese-speaking communities, largely take their form from received character forms dating back to the late Han dynasty and standardized during the Ming. Simplified characters, introduced by the PRC in 1954 to promote mass literacy, simplify most complex traditional glyphs to fewer strokes, many to common cursive shorthand variants. Singapore, which has a large Chinese community, was the second nation to officially adopt simplified characters; they have also become the de facto standard for younger ethnic Chinese in Malaysia.
The Internet provides ample opportunity to practice reading each of these systems, and most Chinese readers are capable of, if not necessarily comfortable with, reading the alternative system through experience and guesswork.
A well-educated Chinese reader today recognizes approximately 4,000 to 6,000 characters; approximately 3,000 characters are required to read a mainland newspaper. The PRC defines literacy amongst workers as knowledge of 2,000 characters, though this would be only functional literacy. Schoolchildren typically learn around 2,000 characters, whereas scholars may memorize up to 10,000. A large unabridged dictionary like the Kangxi Dictionary contains over 40,000 characters, including obscure, variant, rare, and archaic characters; fewer than a quarter of these characters are now commonly used.
Romanization is the process of transcribing a language into the Latin script. There are many systems of romanization for the Chinese varieties, due to the lack of a native phonetic transcription until modern times. Chinese is first known to have been written in Latin characters by Western Christian missionaries in the 16th century.
Today the most common romanization standard for Standard Mandarin is Hanyu Pinyin, introduced in 1956 by the PRC, and later adopted by Singapore and Taiwan. Pinyin is almost universally employed now for teaching standard spoken Chinese in schools and universities across the Americas, Australia, and Europe. Chinese parents also use Pinyin to teach their children the sounds and tones of new words. In school books that teach Chinese, the pinyin romanization is often shown below a picture of the thing the word represents, with the Chinese character alongside.
The second-most common romanization system, Wade–Giles, was invented by Thomas Wade in 1859 and modified by Herbert Giles in 1892. As this system approximates the phonology of Mandarin Chinese into English consonants and vowels (it is largely an anglicization), it may be particularly helpful for beginner Chinese speakers from an English-speaking background. Wade–Giles was found in academic use in the United States, particularly before the 1980s, and was widely used in Taiwan until 2009.
When used within European texts, the tone transcriptions in both pinyin and Wade–Giles are often left out for simplicity; Wade–Giles's extensive use of apostrophes is also usually omitted. Thus, most Western readers will be much more familiar with Beijing than with Běijīng (pinyin), and with Taipei than T'ai-pei (Wade–Giles). This simplification presents syllables as homophones which really are not: since Mandarin distinguishes four main tones, dropping the tone marks conflates up to four distinct syllables in one written form, exaggerating the number of homophones by almost a factor of four.
Other systems include Gwoyeu Romatzyh, the French EFEO, the Yale system (invented for use by US troops during World War II), as well as distinct systems for the phonetic requirements of Cantonese, Min Nan, Hakka, and other varieties.
Chinese varieties have been phonetically transcribed into many other writing systems over the centuries. The 'Phags-pa script, for example, has been very helpful in reconstructing the pronunciations of premodern forms of Chinese.
Zhuyin (colloquially bopomofo), a semi-syllabary, is still widely used in Taiwan's elementary schools to aid standard pronunciation. Although zhuyin characters are reminiscent of the katakana script, there is no source to substantiate the claim that katakana was the basis for the zhuyin system. A comparison table of zhuyin to pinyin exists in the zhuyin article.
There are also at least two systems of cyrillization for Chinese. The most widespread is the Palladius system.
With the growing importance and influence of China's economy globally, Standard Chinese instruction has been gaining popularity in schools throughout East Asia, Southeast Asia, and the Western world.
Besides Mandarin, Cantonese is the only other Chinese language that is widely taught as a foreign language, largely due to the economic and cultural influence of Hong Kong and its widespread usage among significant Overseas Chinese communities.
In 1991, there were 2,000 foreign learners taking China's official Chinese Proficiency Test, the Hanyu Shuiping Kaoshi (HSK), comparable to the English Cambridge Certificate; by 2005 the number of candidates had risen sharply to 117,660, and by 2010 to 750,000.
The current iteration of the HSK exams is termed HSK 2.0; the release date of HSK 3.0 remains undefined, despite its announcement by the Chinese Ministry of Education in March 2021. The new HSK system is thought to be a response to criticism that the current HSK levels do not match the CEFR (Common European Framework of Reference for Languages) levels, contrary to the Chinese Ministry of Education's claims.
{
"paragraph_id": 0,
"text": "Chinese (simplified Chinese: 汉语; traditional Chinese: 漢語; pinyin: Hànyǔ; lit. 'Han language' or 中文; Zhōngwén; 'Chinese writing') is a group of languages spoken natively by the ethnic Han Chinese majority and many minority ethnic groups in Greater China. Approximately 1.3 billion people, or around 16% of the global population, speak a variety of Chinese as their first language.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Chinese languages form the Sinitic branch of the Sino-Tibetan language family. The spoken varieties of Chinese are usually considered by native speakers to be dialects of a single language. However, their lack of mutual intelligibility means they are sometimes considered to be separate languages in a family. Investigation of the historical relationships among the varieties of Chinese is ongoing. Currently, most classifications posit 7 to 13 main regional groups based on phonetic developments from Middle Chinese, of which the most spoken by far is Mandarin with 66%, or around 800 million speakers, followed by Min (75 million, e.g. Southern Min), Wu (74 million, e.g. Shanghainese), and Yue (68 million, e.g. Cantonese). These branches are unintelligible to each other, and many of their subgroups are unintelligible with the other varieties within the same branch (e.g. Southern Min). There are, however, transitional areas where varieties from different branches share enough features for some limited intelligibility, including New Xiang with Southwestern Mandarin, Xuanzhou Wu Chinese with Lower Yangtze Mandarin, Jin with Central Plains Mandarin and certain divergent dialects of Hakka with Gan (though these are unintelligible with mainstream Hakka). All varieties of Chinese are tonal to at least some degree, and are largely analytic.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The earliest Chinese written records are oracle bone inscriptions dating from the Shang dynasty c. 1250 BCE. The phonetic categories of Old Chinese can be reconstructed from the rhymes of ancient poetry. During the Northern and Southern period, Middle Chinese went through several sound changes and split into several varieties following prolonged geographic and political separation. The Qieyun, a rime dictionary, recorded a compromise between the pronunciations of different regions. The royal courts of the Ming and early Qing dynasties operated using a koiné language known as Guanhua, based on the Nanjing dialect of Mandarin.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Standard Chinese is an official language of both the People's Republic of China and the Republic of China on Taiwan, one of the four official languages of Singapore, and one of the six official languages of the United Nations. Standard Chinese is based on the Beijing dialect of Mandarin, and was first officially adopted in the 1930s. The language is written primarily using a logography of Chinese characters, largely shared by readers who may otherwise speak mutually unintelligible varieties. Since the 1950s, the use of Simplified characters has been promoted by the government of the People's Republic of China, with Singapore officially adopting them in 1976. Traditional characters are used in Taiwan, Hong Kong, Macau, and among Chinese-speaking communities overseas. Traditional characters are also in use in mainland China, despite them not being the first choice in daily use. For example, practising Chinese calligraphy requires the knowledge of traditional Chinese characters.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Linguists classify all varieties of Chinese as part of the Sino-Tibetan language family, together with Burmese, Tibetan and many other languages spoken in the Himalayas and the Southeast Asian Massif. Although the relationship was first proposed in the early 19th century and is now broadly accepted, reconstruction of Sino-Tibetan is much less developed than that of families such as Indo-European or Austroasiatic. Difficulties have included the great diversity of the languages, the lack of inflection in many of them, and the effects of language contact. In addition, many of the smaller languages are spoken in mountainous areas that are difficult to reach and are often also sensitive border zones. Without a secure reconstruction of proto-Sino-Tibetan, the higher-level structure of the family remains unclear. A top-level branching into Chinese and Tibeto-Burman languages is often assumed, but has not been convincingly demonstrated.",
"title": "Classification"
},
{
"paragraph_id": 5,
"text": "The first written records appeared over 3,000 years ago during the Shang dynasty. As the language evolved over this period, the various local varieties became mutually unintelligible. In reaction, central governments have repeatedly sought to promulgate a unified standard.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The earliest examples of Old Chinese are divinatory inscriptions on oracle bones dated to c. 1250 BCE, during the late Shang. The next attested stage came from inscriptions on bronze artifacts of the Western Zhou period (1046–771 BCE), the Classic of Poetry and portions of the Book of Documents and I Ching. Scholars have attempted to reconstruct the phonology of Old Chinese by comparing later varieties of Chinese with the rhyming practice of the Classic of Poetry and the phonetic elements found in the majority of Chinese characters. Although many of the finer details remain unclear, most scholars agree that Old Chinese differs from Middle Chinese in lacking retroflex and palatal obstruents but having initial consonant clusters of some sort, and in having voiceless nasals and liquids. Most recent reconstructions also describe an atonal language with consonant clusters at the end of the syllable, developing into tone distinctions in Middle Chinese. Several derivational affixes have also been identified, but the language lacks inflection, and indicated grammatical relationships using word order and grammatical particles.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Middle Chinese was the language used during Northern and Southern dynasties and the Sui, Tang, and Song dynasties (6th–10th centuries CE). It can be divided into an early period, reflected by the Qieyun rime book (601 CE), and a late period in the 10th century, reflected by rhyme tables such as the Yunjing constructed by ancient Chinese philologists as a guide to the Qieyun system. These works define phonological categories, but with little hint of what sounds they represent. Linguists have identified these sounds by comparing the categories with pronunciations in modern varieties of Chinese, borrowed Chinese words in Japanese, Vietnamese, and Korean, and transcription evidence. The resulting system is very complex, with a large number of consonants and vowels, but they are probably not all distinguished in any single dialect. Most linguists now believe it represents a diasystem encompassing 6th-century northern and southern standards for reading the classics.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The complex relationship between spoken and written Chinese is an example of diglossia: as spoken, Chinese varieties have evolved at different rates, while the written language used throughout China changed comparatively little, crystallizing into a prestige form known as Classical or Literary Chinese. Literature written distinctly in the Classical form began to emerge during the Spring and Autumn period. Its use in writing remained nearly universal until the late 19th century, culminating with the widespread adoption of written vernacular Chinese with the May Fourth Movement beginning in 1919.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "After the fall of the Northern Song dynasty and subsequent reign of the Jurchen Jin and Mongol Yuan dynasties in northern China, a common speech (now called Old Mandarin) developed based on the dialects of the North China Plain around the capital. The 1324 Zhongyuan Yinyun was a dictionary that codified the rhyming conventions of new sanqu verse form in this language. Together with the slightly later Menggu Ziyun, this dictionary describes a language with many of the features characteristic of modern Mandarin dialects.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Up to the early 20th century, most Chinese people only spoke their local variety. Thus, as a practical measure, officials of the Ming and Qing dynasties carried out the administration of the empire using a common language based on Mandarin varieties, known as 官话; 官話; Guānhuà; 'language of officials'. For most of this period, this language was a koiné based on dialects spoken in the Nanjing area, though not identical to any single dialect. By the middle of the 19th century, the Beijing dialect had become dominant and was essential for any business with the imperial court.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In the 1930s, a standard national language, 国语; 國語; Guóyǔ; 'national language', was adopted. After much dispute between proponents of northern and southern dialects and an abortive attempt at an artificial pronunciation, the National Language Unification Commission finally settled on the Beijing dialect in 1932. The People's Republic founded in 1949 retained this standard but renamed it 普通话; 普通話; pǔtōnghuà; 'common speech'. The national language is now used in education, the media, and formal situations in both mainland China and Taiwan. Because of their colonial and linguistic history, the language used in education, the media, formal speech, and everyday life in Hong Kong and Macau is the local Cantonese, although the standard language, Mandarin, has become very influential and is being taught in schools.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Historically, the Chinese language has spread to its neighbors through a variety of means. Northern Vietnam was incorporated into the Han empire in 111 BCE, marking the beginning of a period of Chinese control that ran almost continuously for a millennium. The Four Commanderies were established in northern Korea in the first century BCE, but disintegrated in the following centuries. Chinese Buddhism spread over East Asia between the 2nd and 5th centuries CE, and with it the study of scriptures and literature in Literary Chinese. Later, strong central governments modeled on Chinese institutions were established in Korea, Japan, and Vietnam, with Literary Chinese serving as the language of administration and scholarship, a position it would retain until the late 19th century in Korea and (to a lesser extent) Japan, and the early 20th century in Vietnam. Scholars from different lands could communicate, albeit only in writing, using Literary Chinese.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Although they used Chinese solely for written communication, each country had its own tradition of reading texts aloud, the so-called Sino-Xenic pronunciations. Chinese words with these pronunciations were also extensively imported into the Korean, Japanese and Vietnamese languages, and today comprise over half of their vocabularies. This massive influx led to changes in the phonological structure of the languages, contributing to the development of moraic structure in Japanese and the disruption of vowel harmony in Korean.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Borrowed Chinese morphemes have been used extensively in all these languages to coin compound words for new concepts, in a similar way to the use of Latin and Ancient Greek roots in European languages. Many new compounds, or new meanings for old phrases, were created in the late 19th and early 20th centuries to name Western concepts and artifacts. These coinages, written in shared Chinese characters, have then been borrowed freely between languages. They have even been accepted into Chinese, a language usually resistant to loanwords, because their foreign origin was hidden by their written form. Often different compounds for the same concept were in circulation for some time before a winner emerged, and sometimes the final choice differed between countries. The proportion of vocabulary of Chinese origin thus tends to be greater in technical, abstract, or formal language. For example, in Japan, Sino-Japanese words account for about 35% of the words in entertainment magazines, over half the words in newspapers, and 60% of the words in science magazines.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Vietnam, Korea, and Japan each developed writing systems for their own languages, initially based on Chinese characters, but later replaced with the hangul alphabet for Korean and supplemented with kana syllabaries for Japanese, while Vietnamese continued to be written with the complex chữ Nôm script. However, these were limited to popular literature until the late 19th century. Today Japanese is written with a composite script using both Chinese characters called kanji, and kana. Korean is written exclusively with hangul in North Korea (although knowledge of the supplementary Chinese characters (called hanja) is still required), and hanja are increasingly rarely used in South Korea. As a result of former French colonization, Vietnamese switched to a Latin-based alphabet.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Examples of loan words in English include 'tea' from Hokkien 茶; tê, 'dim sum' from Cantonese 點心; dim sam, and 'kumquat' from Cantonese 金橘; gamgwat.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Jerry Norman estimated that there are hundreds of mutually unintelligible varieties of Chinese. These varieties form a dialect continuum, in which differences in speech generally become more pronounced as distances increase, though the rate of change varies immensely. Generally, mountainous South China exhibits more linguistic diversity than the North China Plain. In parts of South China, a major city's dialect may only be marginally intelligible to close neighbours. For instance, Wuzhou is about 190 kilometres (120 mi) upstream from Guangzhou, but the Yue variety spoken there is more like that of Guangzhou than is that of Taishan, 95 kilometres (60 mi) southwest of Guangzhou and separated from it by several rivers. In parts of Fujian the speech of neighbouring counties or even villages may be mutually unintelligible.",
"title": "Varieties"
},
{
"paragraph_id": 18,
"text": "Until the late 20th century, Chinese emigrants to Southeast Asia and North America came from southeast coastal areas, where Min, Hakka, and Yue dialects are spoken. The vast majority of Chinese immigrants to North America up to the mid-20th century spoke the Taishan dialect, from a small coastal area southwest of Guangzhou.",
"title": "Varieties"
},
{
"paragraph_id": 19,
"text": "Proportions of first-language speakers",
"title": "Varieties"
},
{
"paragraph_id": 20,
"text": "Local varieties of Chinese are conventionally classified into seven dialect groups, largely based on the different evolution of Middle Chinese voiced initials:",
"title": "Varieties"
},
{
"paragraph_id": 21,
"text": "The classification of Li Rong, which is used in the Language Atlas of China (1987), distinguishes three further groups:",
"title": "Varieties"
},
{
"paragraph_id": 22,
"text": "Some varieties remain unclassified, including the Danzhou dialect on Hainan, Waxianghua spoken in western Hunan, and Shaozhou Tuhua spoken in northern Guangdong.",
"title": "Varieties"
},
{
"paragraph_id": 23,
"text": "Standard Chinese is the official standard language of China (where it is called 普通话; pǔtōnghuà) and Taiwan, and one of the four official languages of Singapore (where it is called either 华语; 華語; Huáyŭ or 汉语; 漢語; Hànyǔ). Standard Chinese is based on the Beijing dialect of Mandarin. The governments of both China and Taiwan intend for speakers of all Chinese speech varieties to use it as a common language of communication. Therefore, it is used in government agencies, in the media, and as a language of instruction in schools.",
"title": "Varieties"
},
{
"paragraph_id": 24,
"text": "In China, diglossia has been a common feature. For example, in addition to Standard Chinese, a resident of Shanghai may speak Shanghainese; if they grew up elsewhere, then they are also likely to be fluent in the particular dialect of that local area. A native of Guangzhou may speak both Cantonese and Standard Chinese. In addition to Mandarin, most Taiwanese people also speak Taiwanese Hokkien (commonly 台語; 'Taiwanese'), Hakka, or an Austronesian language. A Taiwanese may commonly mix pronunciations, phrases, and words from Mandarin and other languages of Taiwan, and this mixture is considered normal in daily or informal speech.",
"title": "Varieties"
},
{
"paragraph_id": 25,
"text": "Due to their traditional cultural ties to Guangdong amid a history of outside colonization, Cantonese is used as a standard language in Hong Kong and Macau.",
"title": "Varieties"
},
{
"paragraph_id": 26,
"text": "The designation of various Chinese branches remains controversial. Some linguists and most ordinary Chinese people consider all the spoken varieties as one single language, as speakers share a common national identity and a common written form. Others instead argue that it is inappropriate to refer to major branches of Chinese such as Mandarin, Wu and so on as \"dialects\" because the mutual unintelligibility between them is too great. However, calling major Chinese branches \"languages\" would also be wrong under the same criterion, since a branch such as Wu, itself contains many mutually unintelligible varieties, and could not be properly called a single language.",
"title": "Varieties"
},
{
"paragraph_id": 27,
"text": "There are also viewpoints pointing out that linguists often ignore mutual intelligibility when varieties share intelligibility with a central variety (i.e. prestige variety, such as Standard Mandarin), as the issue requires some careful handling when mutual intelligibility is inconsistent with language identity.",
"title": "Varieties"
},
{
"paragraph_id": 28,
"text": "The Chinese government's official Chinese designation for the major branches of Chinese is 方言; fāngyán; 'regional speech', whereas the more closely related varieties within these are called 地点方言; 地點方言; dìdiǎn fāngyán; 'local speech'.",
"title": "Varieties"
},
{
"paragraph_id": 29,
"text": "Because of the difficulties involved in determining the difference between language and dialect, other terms have been proposed. These include topolect, lect, vernacular, regional, and variety.",
"title": "Varieties"
},
{
"paragraph_id": 30,
"text": "Syllables in the Chinese languages have some unique characteristics. They are tightly related to the morphology and also to the characters of the writing system; and phonologically they are structured according to fixed rules.",
"title": "Phonology"
},
{
"paragraph_id": 31,
"text": "The structure of each syllable consists of a nucleus that has a vowel (which can be a monophthong, diphthong, or even a triphthong in certain varieties), preceded by an onset (a single consonant, or consonant + glide; a zero onset is also possible), and followed (optionally) by a coda consonant; a syllable also carries a tone. There are some instances where a vowel is not used as a nucleus. An example of this is in Cantonese, where the nasal sonorant consonants /m/ and /ŋ/ can stand alone as their own syllable.",
"title": "Phonology"
},
{
"paragraph_id": 32,
"text": "In Mandarin much more than in other spoken varieties, most syllables tend to be open syllables, meaning they have no coda (assuming that a final glide is not analyzed as a coda), but syllables that do have codas are restricted to nasals /m/, /n/, /ŋ/, the retroflex approximant /ɻ/, and voiceless stops /p/, /t/, /k/, or /ʔ/. Some varieties allow most of these codas, whereas others, such as Standard Chinese, are limited to only /n/, /ŋ/, and /ɻ/.",
"title": "Phonology"
},
{
"paragraph_id": 33,
"text": "The number of sounds in the different spoken dialects varies, but in general there has been a tendency to a reduction in sounds from Middle Chinese. The Mandarin dialects in particular have experienced a dramatic decrease in sounds and so have far more polysyllabic words than most other spoken varieties. The total number of syllables in some varieties is therefore only about a thousand, including tonal variation, which is only about an eighth as many as English.",
"title": "Phonology"
},
{
"paragraph_id": 34,
"text": "All varieties of spoken Chinese use tones to distinguish words. A few dialects of north China may have as few as three tones, while some dialects in south China have up to 6 or 12 tones, depending on how one counts. One exception from this is Shanghainese which has reduced the set of tones to a two-toned pitch accent system much like modern Japanese.",
"title": "Phonology"
},
{
"paragraph_id": 35,
"text": "A very common example used to illustrate the use of tones in Chinese is the application of the four tones of Standard Chinese, along with the neutral tone, to the syllable ma. The tones are exemplified by the following five Chinese words:",
"title": "Phonology"
},
{
"paragraph_id": 36,
"text": "In contrast, Standard Cantonese has six tones. Historically, finals that end in a stop consonant were considered to be \"checked tones\" and thus counted separately for a total of nine tones. However, they are considered to be duplicates in modern linguistics and are no longer counted as such:",
"title": "Phonology"
},
{
"paragraph_id": 37,
"text": "Chinese is often described as a 'monosyllabic' language. However, this is only partially correct. It is largely accurate when describing Old and Middle Chinese; in Classical Chinese, around 90% of words consist of a single character that corresponds one-to-one with a morpheme, the smallest unit of meaning in a language. In modern varieties, it usually remains the case that a morphemes are monosyllabic—in contrast, English has many multi-syllable morphemes, both bound and free, such as 'seven', 'elephant', 'para-' and '-able'. Some of the more conservative modern varieties, usually found in the south, have largely monosyllabic words, especially with basic vocabulary. However, most nouns, adjectives and verbs in modern Mandarin are disyllabic. A significant cause of this is phonological attrition: sound changes over time have steadily reduced the number of possible syllables in the language's inventory. In modern Mandarin, there are only around 1,200 possible syllables, including the tonal distinctions, compared with about 5,000 in Vietnamese (still a largely monosyllabic language), and over 8,000 in English.",
"title": "Grammar"
},
{
"paragraph_id": 38,
"text": "Most modern varieties have the tendency to form new words through polysyllabic compounds. In some cases, monosyllabic words have become disyllabic formed from different characters without the use of compounding, as in 窟窿; kūlong from 孔; kǒng; this is especially common in Jin varieties. This phonological collapse has led to a corresponding increase in the number of homophones. As an example, the small Langenscheidt Pocket Chinese Dictionary lists six words that are commonly pronounced as shí in Standard Chinese:",
"title": "Grammar"
},
{
"paragraph_id": 39,
"text": "In modern spoken Mandarin, however, tremendous ambiguity would result if all of these words could be used as-is. The 20th century Yuen Ren Chao poem Lion-Eating Poet in the Stone Den exploits this, consisting of 92 characters all pronounced shi. As such, most of these words have been replaced in speech, if not in writing, with less ambiguous disyllabic compounds. Only the first one, 十, normally appears in monosyllabic form in spoken Mandarin; the rest are normally used in the polysyllabic forms of",
"title": "Grammar"
},
{
"paragraph_id": 40,
"text": "respectively. In each, the homophone was disambiguated by addition of another morpheme, typically either a near-synonym or some sort of generic word (e.g. 'head', 'thing'), the purpose of which is to indicate which of the possible meanings of the other, homophonic syllable is specifically meant.",
"title": "Grammar"
},
{
"paragraph_id": 41,
"text": "However, when one of the above words forms part of a compound, the disambiguating syllable is generally dropped and the resulting word is still disyllabic. For example, 石; shí alone, and not 石头; 石頭; shítou, appears in compounds as meaning 'stone' such as 石膏; shígāo; 'plaster', 石灰; shíhuī; 'lime', 石窟; shíkū; 'grotto', 石英; 'quartz', and 石油; shíyóu; 'petroleum'. Although many single-syllable morphemes (字; zì) can stand alone as individual words, they more often than not form multi-syllable compounds known as 词; 詞; cí, which more closely resembles the traditional Western notion of a word. A Chinese cí can consist of more than one character–morpheme, usually two, but there can be three or more.",
"title": "Grammar"
},
{
"paragraph_id": 42,
"text": "Examples of Chinese words of more than two syllables include 汉堡包; 漢堡包; hànbǎobāo; 'hamburger', 守门员; 守門員; shǒuményuán; 'goalkeeper', and 电子邮件; 電子郵件; diànzǐyóujiàn; 'e-mail'.",
"title": "Grammar"
},
{
"paragraph_id": 43,
"text": "All varieties of modern Chinese are analytic languages: they depend on syntax (word order and sentence structure), rather than inflectional morphology (changes in the form of a word), to indicate a word's function within a sentence. In other words, Chinese has very few grammatical inflections—it possesses no tenses, no voices, no grammatical number, and only a few articles. They make heavy use of grammatical particles to indicate aspect and mood. In Mandarin, this involves the use of particles such as 了; le; 'PFV', 还; 還; hái; 'still', and 已经; 已經; yǐjīng; 'already'.",
"title": "Grammar"
},
{
"paragraph_id": 44,
"text": "Chinese has a subject–verb–object word order, and like many other languages of East Asia, makes frequent use of the topic–comment construction to form sentences. Chinese also has an extensive system of classifiers and measure words, another trait shared with neighboring languages such as Japanese and Korean. Other notable grammatical features common to all the spoken varieties of Chinese include the use of serial verb construction, pronoun dropping and the related subject dropping. Although the grammars of the spoken varieties share many traits, they do possess differences.",
"title": "Grammar"
},
{
"paragraph_id": 45,
"text": "The entire Chinese character corpus since antiquity comprises well over 50,000 characters, of which only roughly 10,000 are in use and only about 3,000 are frequently used in Chinese media and newspapers. However, Chinese characters should not be confused with Chinese words. Because most Chinese words are made up of two or more characters, there are many more Chinese words than characters. A more accurate equivalent for a Chinese character is the morpheme, as characters represent the smallest grammatical units with individual meanings in the Chinese language.",
"title": "Vocabulary"
},
{
"paragraph_id": 46,
"text": "Estimates of the total number of Chinese words and lexicalized phrases vary greatly. The Hanyu Da Zidian, a compendium of Chinese characters, includes 54,678 head entries for characters, including oracle bone versions. The Zhonghua Zihai (1994) contains 85,568 head entries for character definitions, and is the largest reference work based purely on character and its literary variants. The CC-CEDICT project (2010) contains 97,404 contemporary entries including idioms, technology terms and names of political figures, businesses and products. The 2009 version of the Webster's Digital Chinese Dictionary (WDCD), based on CC-CEDICT, contains over 84,000 entries.",
"title": "Vocabulary"
},
{
"paragraph_id": 47,
"text": "The most comprehensive pure linguistic Chinese-language dictionary, the 12-volume Hanyu Da Cidian, records more than 23,000 head Chinese characters and gives over 370,000 definitions. The 1999 revised Cihai, a multi-volume encyclopedic dictionary reference work, gives 122,836 vocabulary entry definitions under 19,485 Chinese characters, including proper names, phrases and common zoological, geographical, sociological, scientific and technical terms.",
"title": "Vocabulary"
},
{
"paragraph_id": 48,
"text": "The 2016 edition of Xiandai Hanyu Cidian, an authoritative one-volume dictionary on modern standard Chinese language as used in mainland China, has 13,000 head characters and defines 70,000 words.",
"title": "Vocabulary"
},
{
"paragraph_id": 49,
"text": "Like many other languages, Chinese has absorbed a sizable number of loanwords from other cultures. Most Chinese words are formed out of native Chinese morphemes, including words describing imported objects and ideas. However, direct phonetic borrowing of foreign words has gone on since ancient times.",
"title": "Vocabulary"
},
{
"paragraph_id": 50,
"text": "Some early Indo-European loanwords in Chinese have been proposed, notably 蜜; mì; 'honey', 狮; 獅; shī; 'lion', and perhaps also 马; 馬; mǎ; 'horse', 猪; 豬; zhū; 'pig', 犬; quǎn; 'dog', and 鹅; 鵝; é; 'goose'. Ancient words borrowed from along the Silk Road during the Old Chinese period include 葡萄; pútáo; 'grape', 石榴; shíliu, shíliú; 'pomegranate', and 狮子; 獅子; shīzi; 'lion'. Some words were borrowed from Buddhist scriptures, including 佛; Fó; 'Buddha' and 菩萨; 菩薩; Púsà; 'bodhisattva'. Other words came from nomadic peoples to the north, such as 胡同; hútòng; 'hutong'. Words borrowed from the peoples along the Silk Road, such as 葡萄; 'grape', generally have Persian etymologies. Buddhist terminology is generally derived from Sanskrit or Pāli, the liturgical languages of northern India. Words borrowed from the nomadic tribes of the Gobi, Mongolian or northeast regions generally have Altaic etymologies, such as 琵琶; pípá, the Chinese lute, or 酪; lào, luò; 'cheese or yogurt', but from exactly which source is not always clear.",
"title": "Vocabulary"
},
{
"paragraph_id": 51,
"text": "Modern neologisms are primarily translated into Chinese in one of three ways: free translation (calques), phonetic translation (by sound), or a combination of the two. Today, it is much more common to use existing Chinese morphemes to coin new words to represent imported concepts, such as technical expressions and international scientific vocabulary, wherein the Latin and Greek components usually converted one-for-one into the corresponding Chinese characters. The word 'telephone' was initially loaned phonetically as 德律风; 德律風; délǜfēng (Shanghainese télífon [təlɪfoŋ])—this word was widely used in Shanghai during the 1920s, but the later 电话; 電話; diànhuà; 'electric speech', built out of native Chinese morphemes, became prevalent. Other examples include",
"title": "Vocabulary"
},
{
"paragraph_id": 52,
"text": "Occasionally, compromises between the transliteration and translation approaches become accepted, such as 汉堡包; 漢堡包; hànbǎobāo; 'hamburger' from 汉堡; 'Hamburg' + 包; 'bun'. Sometimes translations are designed so that they sound like the original while incorporating Chinese morphemes (phono-semantic matching), such as 马利奥; 馬利奧; Mǎlì'ào for the video game character 'Mario'. This is often done for commercial purposes, for example 奔腾; 奔騰; bēnténg; 'dashing-leaping' for 'Pentium' and 赛百味; 賽百味; Sàibǎiwèi; 'better-than hundred tastes' for 'Subway'.",
"title": "Vocabulary"
},
{
"paragraph_id": 53,
"text": "Foreign words, mainly proper nouns, continue to enter the Chinese language by transcription according to their pronunciations. This is done by employing Chinese characters with similar pronunciations. For example, 'Israel' becomes 以色列; Yǐsèliè, and 'Paris' becomes 巴黎; Bālí. A rather small number of direct transliterations have survived as common words, including 沙发; 沙發; shāfā; 'sofa', 马达; 馬達; mǎdá; 'motor', 幽默; yōumò; 'humor', 逻辑; 邏輯; luóji, luójí; 'logic', 时髦; 時髦; shímáo; 'smart'', 'fashionable', and 歇斯底里; xiēsīdǐlǐ; 'hysterics'. The bulk of these words were originally coined in Shanghai during the early 20th century, and later loaned from there into Mandarin, hence their Mandarin pronunciations occasionally being quite divergent from the English. For example, in Shanghainese 沙发; 沙發; sofa and 马达; 馬達; 'motor' sound more like their English counterparts. Cantonese differs from Mandarin with some transliterations, such as 梳化; so faa; 'sofa' and 摩打; mo daa; 'motor'.",
"title": "Vocabulary"
},
{
"paragraph_id": 54,
"text": "Western foreign words representing Western concepts have influenced Chinese since the 20th century through transcription. From French, 芭蕾; bālěi and 香槟; 香檳; xiāngbīn were borrowed for 'ballet' and 'champagne' respectively; 咖啡; kāfēi was borrowed from Italian caffè; 'coffee'. The influence of English is particularly pronounced: from the early 20th century, many English words were borrowed into Shanghainese, such as 高尔夫; 高爾夫; gāo'ěrfū; 'golf' and the aforementioned 沙发; 沙發; shāfā; 'sofa'. Later, American soft power gave rise to 迪斯科; dísīkē; 'disco', 可乐; 可樂; kělè; 'cola', and mínǐ; 'miniskirt'. Contemporary colloquial Cantonese has distinct loanwords from English, such as 卡通; kaa tung1; 'cartoon', 基佬; gei lou; 'gay people', 的士; dik si; 'taxi', and 巴士; baa si; 'bus'. With the rising popularity of the Internet, there is a current vogue in China for coining English transliterations, for example, 粉丝; 粉絲; fěnsī; 'fans', 黑客; hēikè; 'hacker', and 博客; bókè; 'blog'. In Taiwan, some of these transliterations are different, such as 駭客; hàikè; 'hacker' and 部落格; bùluògé; 'interconnected tribes' for 'blog'.",
"title": "Vocabulary"
},
{
"paragraph_id": 55,
"text": "Another result of English influence on Chinese is the appearance in of so-called 字母词; 字母詞; zìmǔcí; 'lettered words' spelled with letters from the English alphabet. These have appeared in colloquial usage, as well as in magazines and newspapers, and on websites and television:",
"title": "Vocabulary"
},
{
"paragraph_id": 56,
"text": "Since the 20th century, another source of words has been kanji: Japan re-molded European concepts and inventions into 和製漢語, wasei-kango, 'Japanese-made Chinese', and many of these words have been re-loaned into modern Chinese. Other terms were coined by the Japanese by giving new senses to existing Chinese terms or by referring to expressions used in classical Chinese literature. For example, 经济; 經濟; jīngjì; 経済, keizai in Japanese, which in the original Chinese meant 'the workings of the state', narrowed to 'economy' in Japanese; this narrowed definition was then reimported into Chinese. As a result, these terms are virtually indistinguishable from native Chinese words: indeed, there is some dispute over some of these terms as to whether the Japanese or Chinese coined them first. As a result of this loaning, Chinese, Korean, Japanese, and Vietnamese share a corpus of linguistic terms describing modern terminology, paralleling the similar corpus of terms built from Greco-Latin and shared among European languages.",
"title": "Vocabulary"
},
{
"paragraph_id": 57,
"text": "The Chinese orthography centers on Chinese characters, which are written within imaginary square blocks, traditionally arranged in vertical columns, read from top to bottom down a column, and right to left across columns, despite alternative arrangement with rows of characters from left to right within a row and from top to bottom across rows (like English and other Western writing systems) having become more popular since the 20th century. Chinese characters denote morphemes independent of phonetic variation in different languages. Thus the character 一; 'one' is pronounced as yī in Standard Chinese, yat in Cantonese and it in Hokkien, a form of Min.",
"title": "Writing system"
},
{
"paragraph_id": 58,
"text": "Most written Chinese documents in the modern time, especially the more formal ones, are created using the grammar and syntax of the Standard Chinese variants, regardless of dialectical background of the author or targeted audience. This replaced the old writing language standard of Literary Chinese before the 20th century. However, vocabularies from different Chinese-speaking areas have diverged, and the divergence can be observed in written Chinese.",
"title": "Writing system"
},
{
"paragraph_id": 59,
"text": "Meanwhile, colloquial forms of various Chinese language variants have also been written down by their users, especially in less formal settings. The most prominent example of this is Written Cantonese, which has become quite popular in tabloids, instant messaging applications, and on the internet amongst Hong-Kongers and Cantonese-speakers elsewhere.",
"title": "Writing system"
},
{
"paragraph_id": 60,
"text": "Because some Chinese variants have diverged and developed a number of unique morphemes that are not found in Standard Mandarin (despite all other common morphemes), unique characters rarely used in Standard Chinese have also been created or inherited from archaic literary standard to represent these unique morphemes. For example, characters like 冇 and 係 are actively used in Cantonese and Hakka, while being archaic or unused in standard written Chinese.",
"title": "Writing system"
},
{
"paragraph_id": 61,
"text": "The Chinese had no uniform phonetic transcription system for most of its speakers until the mid-20th century, although enunciation patterns were recorded in early rime books and dictionaries. Early Indian translators, working in Sanskrit and Pali, were the first to attempt to describe the sounds and enunciation patterns of Chinese in a foreign language. After the 15th century, the efforts of Jesuits and Western court missionaries resulted in some Latin character transcription/writing systems, based on various variants of Chinese languages. Some of these Latin character based systems are still being used to write various Chinese variants in the modern era.",
"title": "Writing system"
},
{
"paragraph_id": 62,
"text": "In Hunan, women in certain areas write their local Chinese language variant in Nüshu, a syllabary derived from Chinese characters. The Dungan language, considered by many a dialect of Mandarin, is nowadays written in Cyrillic, and was previously written in the Arabic script. The Dungan people are primarily Muslim and live mainly in Kazakhstan, Kyrgyzstan, and Russia; many Hui people, living mainly in China, also speak the language.",
"title": "Writing system"
},
{
"paragraph_id": 63,
"text": "Each Chinese character represents a monosyllabic Chinese word or morpheme. In 100 CE, the famed Han dynasty scholar Xu Shen classified characters into six categories: pictographs, simple ideographs, compound ideographs, phonetic loans, phonetic compounds and derivative characters. Only 4% were categorized as pictographs, including many of the simplest characters, such as 人; rén; 'human', 日; rì; 'the Sun', 山; shān; 'mountain', and 水; shuǐ; 'water'. Between 80% and 90% were classified as phonetic compounds such as 沖; chōng; 'pour', combining a phonetic component 中; zhōng with a semantic component of the radical 氵, a reduced form of 水; 'water'. Almost all characters created since have been made using this format. The 18th-century Kangxi Dictionary classified characters under a now-common set of 214 radicals.",
"title": "Writing system"
},
{
"paragraph_id": 64,
"text": "Modern characters are styled after the regular script. Various other written styles are also used in Chinese calligraphy, including seal script, cursive script and clerical script. Calligraphy artists can write in Traditional and Simplified characters, but they tend to use Traditional characters for traditional art.",
"title": "Writing system"
},
{
"paragraph_id": 65,
"text": "There are currently two systems for Chinese characters. Traditional characters, used in Hong Kong, Taiwan, Macau, and many overseas Chinese speaking communities, largely takes their form from received character forms dating back to the late Han dynasty and standardized during the Ming. Simplified characters, introduced by the PRC in 1954 to promote mass literacy, simplifies most complex traditional glyphs to fewer strokes, many to common cursive shorthand variants. Singapore, which has a large Chinese community, was the second nation to officially adopt simplified characters, although it has also become the de facto standard for younger ethnic Chinese in Malaysia.",
"title": "Writing system"
},
{
"paragraph_id": 66,
"text": "The Internet provides practice reading each of these systems, and most Chinese readers are capable of, if not necessarily comfortable with, reading the alternative system through experience and guesswork.",
"title": "Writing system"
},
{
"paragraph_id": 67,
"text": "A well-educated Chinese reader today recognizes approximately 4,000 to 6,000 characters; approximately 3,000 characters are required to read a mainland newspaper. The PRC defines literacy amongst workers as a knowledge of 2,000 characters, though this would be only functional literacy. School-children typically learn around 2,000 characters whereas scholars may memorize up to 10,000. A large unabridged dictionary like the Kangxi dictionary, contains over 40,000 characters, including obscure, variant, rare, and archaic characters; fewer than a quarter of these characters are now commonly used.",
"title": "Writing system"
},
{
"paragraph_id": 68,
"text": "Romanization is the process of transcribing a language into the Latin script. There are many systems of romanization for the Chinese varieties, due to the lack of a native phonetic transcription until modern times. Chinese is first known to have been written in Latin characters by Western Christian missionaries in the 16th century.",
"title": "Writing system"
},
{
"paragraph_id": 69,
"text": "Today the most common romanization standard for Standard Mandarin is Hanyu Pinyin, introduced in 1956 by the PRC, and later adopted by Singapore and Taiwan. Pinyin is almost universally employed now for teaching standard spoken Chinese in schools and universities across the Americas, Australia, and Europe. Chinese parents also use Pinyin to teach their children the sounds and tones of new words. In school books that teach Chinese, the pinyin romanization is often shown below a picture of the thing the word represents, with the Chinese character alongside.",
"title": "Writing system"
},
{
"paragraph_id": 70,
"text": "The second-most common romanization system, the Wade–Giles, was invented by Thomas Wade in 1859 and modified by Herbert Giles in 1892. As this system approximates the phonology of Mandarin Chinese into English consonants and vowels–it is largely an anglicization, it may be particularly helpful for beginner Chinese speakers of an English-speaking background. Wade–Giles was found in academic use in the United States, particularly before the 1980s, and until 2009 was widely used in Taiwan.",
"title": "Writing system"
},
{
"paragraph_id": 71,
"text": "When used within European texts, the tone transcriptions in both pinyin and Wade–Giles are often left out for simplicity; Wade–Giles's extensive use of apostrophes is also usually omitted. Thus, most Western readers will be much more familiar with Beijing than they will be with Běijīng (pinyin), and with Taipei than T'ai-pei (Wade–Giles). This simplification presents syllables as homophones which really are none, and therefore exaggerates the number of homophones almost by a factor of four.",
"title": "Writing system"
},
{
"paragraph_id": 72,
"text": "For comparison:",
"title": "Writing system"
},
{
"paragraph_id": 73,
"text": "Other systems include Gwoyeu Romatzyh, the French EFEO, the Yale system (invented for use by US troops during World War II), as well as distinct systems for the phonetic requirements of Cantonese, Min Nan, Hakka, and other varieties.",
"title": "Writing system"
},
{
"paragraph_id": 74,
"text": "Chinese varieties have been phonetically transcribed into many other writing systems over the centuries. The 'Phags-pa script, for example, has been very helpful in reconstructing the pronunciations of premodern forms of Chinese.",
"title": "Writing system"
},
{
"paragraph_id": 75,
"text": "Zhuyin (colloquially bopomofo), a semi-syllabary is still widely used in Taiwan's elementary schools to aid standard pronunciation. Although zhuyin characters are reminiscent of katakana script, there is no source to substantiate the claim that Katakana was the basis for the zhuyin system. A comparison table of zhuyin to pinyin exists in the zhuyin article. Syllables based on pinyin and zhuyin can also be compared by looking at the following articles:",
"title": "Writing system"
},
{
"paragraph_id": 76,
"text": "There are also at least two systems of cyrillization for Chinese. The most widespread is the Palladius system.",
"title": "Writing system"
},
{
"paragraph_id": 77,
"text": "With the growing importance and influence of China's economy globally, Standard Chinese instruction has been gaining popularity in schools throughout East Asia, Southeast Asia, and the Western world.",
"title": "As a foreign language"
},
{
"paragraph_id": 78,
"text": "Besides Mandarin, Cantonese is the only other Chinese language that is widely taught as a foreign language, largely due to the economic and cultural influence of Hong Kong and its widespread usage among significant Overseas Chinese communities.",
"title": "As a foreign language"
},
{
"paragraph_id": 79,
"text": "In 1991 there were 2,000 foreign learners taking China's official Chinese Proficiency Test, called Hanyu Shuiping Kaoshi (HSK), comparable to the English Cambridge Certificate, but by 2005 the number of candidates had risen sharply to 117,660 and in 2010 to 750,000.",
"title": "As a foreign language"
},
{
"paragraph_id": 80,
"text": "The current iteration of the HSK exams is termed HSK 2.0, with the release of HSK 3.0 still undefined despite being announced by the Chinese Ministry of Education in March 2021. The new HSK system is thought to be in response to criticism of the current HSK levels not matching with the CEFR levels (Common European Framework of Reference for Languages), contrary to the Chinese Ministry of Education's claims.",
"title": "As a foreign language"
}
] | Chinese is a group of languages spoken natively by the ethnic Han Chinese majority and many minority ethnic groups in Greater China. Approximately 1.3 billion people, or around 16% of the global population, speak a variety of Chinese as their first language. Chinese languages form the Sinitic branch of the Sino-Tibetan language family. The spoken varieties of Chinese are usually considered by native speakers to be dialects of a single language. However, their lack of mutual intelligibility means they are sometimes considered to be separate languages in a family. Investigation of the historical relationships among the varieties of Chinese is ongoing. Currently, most classifications posit 7 to 13 main regional groups based on phonetic developments from Middle Chinese, of which the most spoken by far is Mandarin with 66%, or around 800 million speakers, followed by Min, Wu, and Yue. These branches are unintelligible to each other, and many of their subgroups are unintelligible with the other varieties within the same branch. There are, however, transitional areas where varieties from different branches share enough features for some limited intelligibility, including New Xiang with Southwestern Mandarin, Xuanzhou Wu Chinese with Lower Yangtze Mandarin, Jin with Central Plains Mandarin and certain divergent dialects of Hakka with Gan. All varieties of Chinese are tonal to at least some degree, and are largely analytic. The earliest Chinese written records are oracle bone inscriptions dating from the Shang dynasty c. 1250 BCE. The phonetic categories of Old Chinese can be reconstructed from the rhymes of ancient poetry. During the Northern and Southern period, Middle Chinese went through several sound changes and split into several varieties following prolonged geographic and political separation. The Qieyun, a rime dictionary, recorded a compromise between the pronunciations of different regions. The royal courts of the Ming and early Qing dynasties operated using a koiné language known as Guanhua, based on the Nanjing dialect of Mandarin. Standard Chinese is an official language of both the People's Republic of China and the Republic of China on Taiwan, one of the four official languages of Singapore, and one of the six official languages of the United Nations. Standard Chinese is based on the Beijing dialect of Mandarin, and was first officially adopted in the 1930s. The language is written primarily using a logography of Chinese characters, largely shared by readers who may otherwise speak mutually unintelligible varieties. Since the 1950s, the use of Simplified characters has been promoted by the government of the People's Republic of China, with Singapore officially adopting them in 1976. Traditional characters are used in Taiwan, Hong Kong, Macau, and among Chinese-speaking communities overseas. Traditional characters are also in use in mainland China, though they are not the first choice in daily use. For example, practising Chinese calligraphy requires knowledge of traditional Chinese characters. | 2001-09-27T15:56:11Z | 2023-12-29T21:55:53Z | [
"Template:Transliteration",
"Template:Lang-ja",
"Template:Wikiquote",
"Template:Infobox Chinese",
"Template:Zhi",
"Template:Sfnp",
"Template:IPA",
"Template:Cite press release",
"Template:Portal bar",
"Template:Harvcoltxt",
"Template:Commons category",
"Template:About",
"Template:Pie chart",
"Template:Midsize",
"Template:Em",
"Template:Notelist",
"Template:Reflist",
"Template:InterWiki",
"Template:Chinese language",
"Template:Main",
"Template:Cite news",
"Template:Cite journal",
"Template:Cite thesis",
"Template:Citation",
"Template:Refend",
"Template:Abbr",
"Template:Refbegin",
"Template:Short description",
"Template:Merge from",
"Template:Infobox language",
"Template:Lang",
"Template:See also",
"Template:Convert",
"Template:Cite web",
"Template:In lang",
"Template:Zh",
"Template:Gcl",
"Template:Cite magazine",
"Template:Wikivoyage",
"Template:Webarchive",
"Template:Authority control",
"Template:Navboxes",
"Template:Use dmy dates",
"Template:Efn",
"Template:Circa",
"Template:Further",
"Template:Chinese tones",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Chinese_language |
5,759 | Complex analysis | Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, applied mathematics; as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering.
As a differentiable function of a complex variable is equal to its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable, that is, holomorphic functions. The concept can be extended to functions of several complex variables.
Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Gösta Mittag-Leffler, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory which examines conformal invariants in quantum field theory.
A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a subset of the complex numbers as a domain and the complex numbers as a codomain. Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane.
For any complex function, the values z {\displaystyle z} from the domain and their images f ( z ) {\displaystyle f(z)} in the range may be separated into real and imaginary parts: z = x + i y {\displaystyle z=x+iy} and f ( z ) = f ( x + i y ) = u ( x , y ) + i v ( x , y ) , {\displaystyle f(z)=f(x+iy)=u(x,y)+iv(x,y),}
where x , y , u ( x , y ) , v ( x , y ) {\displaystyle x,y,u(x,y),v(x,y)} are all real-valued.
In other words, a complex function f : C → C {\displaystyle f:\mathbb {C} \to \mathbb {C} } may be decomposed into f ( x + i y ) = u ( x , y ) + i v ( x , y ) , {\displaystyle f(x+iy)=u(x,y)+iv(x,y),}
i.e., into two real-valued functions ( u {\displaystyle u} , v {\displaystyle v} ) of two real variables ( x {\displaystyle x} , y {\displaystyle y} ).
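A minimal worked illustration of this decomposition (the particular function here is an illustrative choice, not drawn from the surrounding text): taking f(z) = z² and writing z = x + iy,

\[
f(x+iy) = (x+iy)^{2} = \underbrace{(x^{2}-y^{2})}_{u(x,y)} + i\,\underbrace{(2xy)}_{v(x,y)},
\]

so here u(x, y) = x² − y² and v(x, y) = 2xy.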
Similarly, any complex-valued function f on an arbitrary set X is isomorphic to, and can therefore in that sense be considered as, an ordered pair of two real-valued functions (Re f, Im f), or, alternatively, as a vector-valued function from X into R 2 . {\displaystyle \mathbb {R} ^{2}.}
Some properties of complex-valued functions (such as continuity) are nothing more than the corresponding properties of vector-valued functions of two real variables. Other concepts of complex analysis, such as differentiability, are direct generalizations of the similar concepts for real functions, but may have very different properties. In particular, every differentiable complex function is analytic (see next section), and two differentiable functions that are equal in a neighborhood of a point are equal on the intersection of their domain (if the domains are connected). The latter property is the basis of the principle of analytic continuation, which allows every real analytic function to be extended in a unique way to a complex analytic function whose domain is the whole complex plane with a finite number of curve arcs removed. Many basic and special complex functions are defined in this way, including the complex exponential function, complex logarithm functions, and trigonometric functions.
Complex functions that are differentiable at every point of an open subset Ω {\displaystyle \Omega } of the complex plane are said to be holomorphic on Ω {\displaystyle \Omega } . In the context of complex analysis, the derivative of f {\displaystyle f} at z 0 {\displaystyle z_{0}} is defined to be f ′ ( z 0 ) = lim z → z 0 f ( z ) − f ( z 0 ) z − z 0 . {\displaystyle f'(z_{0})=\lim _{z\to z_{0}}{\frac {f(z)-f(z_{0})}{z-z_{0}}}.}
Superficially, this definition is formally analogous to that of the derivative of a real function. However, complex derivatives and differentiable functions behave in significantly different ways compared to their real counterparts. In particular, for this limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which we approach z 0 {\displaystyle z_{0}} in the complex plane. Consequently, complex differentiability has much stronger implications than real differentiability. For instance, holomorphic functions are infinitely differentiable, whereas the existence of the nth derivative need not imply the existence of the (n + 1)th derivative for real functions. Furthermore, all holomorphic functions satisfy the stronger condition of analyticity, meaning that the function is, at every point in its domain, locally given by a convergent power series. In essence, this means that functions holomorphic on Ω {\displaystyle \Omega } can be approximated arbitrarily well by polynomials in some neighborhood of every point in Ω {\displaystyle \Omega } . This stands in sharp contrast to differentiable real functions; there are infinitely differentiable real functions that are nowhere analytic; see Non-analytic smooth function § A smooth function which is nowhere real analytic.
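A short computation makes the direction-dependence concrete; as an illustrative (non-holomorphic) case, consider the conjugation map f(z) = z̄. Its difference quotient at any z₀ is

\[
\frac{\overline{z_{0}+h}-\overline{z_{0}}}{h}=\frac{\bar{h}}{h},
\]

which equals 1 along the real direction (h = t) but −1 along the imaginary direction (h = it); since these limits disagree, z̄ is complex-differentiable nowhere, consistent with the Cauchy–Riemann discussion below.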
Most elementary functions, including the exponential function, the trigonometric functions, and all polynomial functions, extended appropriately to complex arguments as functions C → C {\displaystyle \mathbb {C} \to \mathbb {C} } , are holomorphic over the entire complex plane, making them entire functions, while rational functions p / q {\displaystyle p/q} , where p and q are polynomials, are holomorphic on domains that exclude points where q is zero. Such functions that are holomorphic everywhere except a set of isolated points are known as meromorphic functions. On the other hand, the functions z ↦ ℜ ( z ) {\displaystyle z\mapsto \Re (z)} , z ↦ | z | {\displaystyle z\mapsto |z|} , and z ↦ z ¯ {\displaystyle z\mapsto {\bar {z}}} are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below).
An important property of holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If f : C → C {\displaystyle f:\mathbb {C} \to \mathbb {C} } , defined by f ( z ) = f ( x + i y ) = u ( x , y ) + i v ( x , y ) {\displaystyle f(z)=f(x+iy)=u(x,y)+iv(x,y)} , where x , y , u ( x , y ) , v ( x , y ) ∈ R {\displaystyle x,y,u(x,y),v(x,y)\in \mathbb {R} } , is holomorphic on a region Ω {\displaystyle \Omega } , then for all z 0 ∈ Ω {\displaystyle z_{0}\in \Omega } , ∂ f ∂ x ( z 0 ) = − i ∂ f ∂ y ( z 0 ) . {\displaystyle {\frac {\partial f}{\partial x}}(z_{0})=-i{\frac {\partial f}{\partial y}}(z_{0}).}
In terms of the real and imaginary parts of the function, u and v, this is equivalent to the pair of equations u x = v y {\displaystyle u_{x}=v_{y}} and u y = − v x {\displaystyle u_{y}=-v_{x}} , where the subscripts indicate partial differentiation. However, the Cauchy–Riemann conditions do not characterize holomorphic functions without additional continuity conditions (see Looman–Menchoff theorem).
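As a quick illustrative check of these equations (again using the example f(z) = z² from above, so u = x² − y² and v = 2xy):

\[
u_{x}=2x=v_{y},\qquad u_{y}=-2y=-v_{x},
\]

so the Cauchy–Riemann conditions hold at every point, consistent with z² being entire.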
Holomorphic functions exhibit some remarkable features. For instance, Picard's theorem asserts that the range of an entire function can take only three possible forms: C {\displaystyle \mathbb {C} } , C ∖ { z 0 } {\displaystyle \mathbb {C} \setminus \{z_{0}\}} , or { z 0 } {\displaystyle \{z_{0}\}} for some z 0 ∈ C {\displaystyle z_{0}\in \mathbb {C} } . In other words, if two distinct complex numbers z {\displaystyle z} and w {\displaystyle w} are not in the range of an entire function f {\displaystyle f} , then f {\displaystyle f} is a constant function. Moreover, a holomorphic function on a connected open set is determined by its restriction to any nonempty open subset.
In mathematics, a conformal map is a function that locally preserves angles, but not necessarily lengths.
More formally, let U {\displaystyle U} and V {\displaystyle V} be open subsets of R n {\displaystyle \mathbb {R} ^{n}} . A function f : U → V {\displaystyle f:U\to V} is called conformal (or angle-preserving) at a point u 0 ∈ U {\displaystyle u_{0}\in U} if it preserves angles between directed curves through u 0 {\displaystyle u_{0}} , as well as preserving orientation. Conformal maps preserve both angles and the shapes of infinitesimally small figures, but not necessarily their size or curvature.
The conformal property may be described in terms of the Jacobian derivative matrix of a coordinate transformation. The transformation is conformal whenever the Jacobian at each point is a positive scalar times a rotation matrix (orthogonal with determinant one). Some authors define conformality to include orientation-reversing mappings whose Jacobians can be written as any scalar times any orthogonal matrix.
For mappings in two dimensions, the (orientation-preserving) conformal mappings are precisely the locally invertible complex analytic functions. In three and higher dimensions, Liouville's theorem sharply limits the conformal mappings to a few types.
One of the central tools in complex analysis is the line integral. The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues among others is applicable (see methods of contour integration). A "pole" (or isolated singularity) of a function is a point where the function's value becomes unbounded, or "blows up". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent to Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of more well understood functions, such as polynomials.
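A minimal worked instance of these ideas: integrating 1/z once counterclockwise around the unit circle, with the standard parametrization z = e^{iθ} and dz = ie^{iθ} dθ, gives

\[
\oint_{|z|=1}\frac{dz}{z}=\int_{0}^{2\pi}\frac{ie^{i\theta}}{e^{i\theta}}\,d\theta=2\pi i,
\]

which matches the residue theorem: 1/z has a single simple pole at 0 with residue 1, so the integral is 2πi · 1.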
A bounded function that is holomorphic in the entire complex plane must be constant; this is Liouville's theorem. It can be used to provide a natural and short proof for the fundamental theorem of algebra which states that the field of complex numbers is algebraically closed.
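A sketch of that short proof: if a nonconstant polynomial p had no zero, then 1/p would be entire; since

\[
|p(z)|\to \infty \quad \text{as } |z|\to \infty ,
\]

1/p would also be bounded on the whole plane, so Liouville's theorem would force 1/p, and hence p, to be constant, a contradiction.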
If a function is holomorphic throughout a connected domain then its values are fully determined by its values on any smaller subdomain. The function on the larger domain is said to be analytically continued from its values on the smaller domain. This allows the extension of the definition of functions, such as the Riemann zeta function, which are initially defined in terms of infinite sums that converge only on limited domains to almost the entire complex plane. Sometimes, as in the case of the natural logarithm, it is impossible to analytically continue a holomorphic function to a non-simply connected domain in the complex plane but it is possible to extend it to a holomorphic function on a closely related surface known as a Riemann surface.
All this refers to complex analysis in one variable. There is also a very rich theory of complex analysis in more than one complex dimension in which the analytic properties such as power series expansion carry over whereas most of the geometric properties of holomorphic functions in one complex dimension (such as conformality) do not carry over. The Riemann mapping theorem about the conformal relationship of certain domains in the complex plane, which may be the most important result in the one-dimensional theory, fails dramatically in higher dimensions.
A major application of certain complex spaces is in quantum mechanics as wave functions. | [
{
"paragraph_id": 0,
"text": "Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, applied mathematics; as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering.",
"title": ""
},
{
"paragraph_id": 1,
"text": "As a differentiable function of a complex variable is equal to its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable, that is, holomorphic functions. The concept can be extended to functions of several complex variables.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Gösta Mittag-Leffler, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory which examines conformal invariants in quantum field theory.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a subset of the complex numbers as a domain and the complex numbers as a codomain. Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane.",
"title": "Complex functions"
},
{
"paragraph_id": 4,
"text": "For any complex function, the values z {\\displaystyle z} from the domain and their images f ( z ) {\\displaystyle f(z)} in the range may be separated into real and imaginary parts:",
"title": "Complex functions"
},
{
"paragraph_id": 5,
"text": "where x , y , u ( x , y ) , v ( x , y ) {\\displaystyle x,y,u(x,y),v(x,y)} are all real-valued.",
"title": "Complex functions"
},
{
"paragraph_id": 6,
"text": "In other words, a complex function f : C → C {\\displaystyle f:\\mathbb {C} \\to \\mathbb {C} } may be decomposed into",
"title": "Complex functions"
},
{
"paragraph_id": 7,
"text": "i.e., into two real-valued functions ( u {\\displaystyle u} , v {\\displaystyle v} ) of two real variables ( x {\\displaystyle x} , y {\\displaystyle y} ).",
"title": "Complex functions"
},
{
"paragraph_id": 8,
"text": "Similarly, any complex-valued function f on an arbitrary set X (is isomorphic to, and therefore, in that sense, it) can be considered as an ordered pair of two real-valued functions: (Re f, Im f) or, alternatively, as a vector-valued function from X into R 2 . {\\displaystyle \\mathbb {R} ^{2}.}",
"title": "Complex functions"
},
{
"paragraph_id": 9,
"text": "Some properties of complex-valued functions (such as continuity) are nothing more than the corresponding properties of vector valued functions of two real variables. Other concepts of complex analysis, such as differentiability, are direct generalizations of the similar concepts for real functions, but may have very different properties. In particular, every differentiable complex function is analytic (see next section), and two differentiable functions that are equal in a neighborhood of a point are equal on the intersection of their domain (if the domains are connected). The latter property is the basis of the principle of analytic continuation which allows extending every real analytic function in a unique way for getting a complex analytic function whose domain is the whole complex plane with a finite number of curve arcs removed. Many basic and special complex functions are defined in this way, including the complex exponential function, complex logarithm functions, and trigonometric functions.",
"title": "Complex functions"
},
{
"paragraph_id": 10,
"text": "Complex functions that are differentiable at every point of an open subset Ω {\\displaystyle \\Omega } of the complex plane are said to be holomorphic on Ω {\\displaystyle \\Omega } . In the context of complex analysis, the derivative of f {\\displaystyle f} at z 0 {\\displaystyle z_{0}} is defined to be",
"title": "Holomorphic functions"
},
{
"paragraph_id": 11,
"text": "Superficially, this definition is formally analogous to that of the derivative of a real function. However, complex derivatives and differentiable functions behave in significantly different ways compared to their real counterparts. In particular, for this limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which we approach z 0 {\\displaystyle z_{0}} in the complex plane. Consequently, complex differentiability has much stronger implications than real differentiability. For instance, holomorphic functions are infinitely differentiable, whereas the existence of the nth derivative need not imply the existence of the (n + 1)th derivative for real functions. Furthermore, all holomorphic functions satisfy the stronger condition of analyticity, meaning that the function is, at every point in its domain, locally given by a convergent power series. In essence, this means that functions holomorphic on Ω {\\displaystyle \\Omega } can be approximated arbitrarily well by polynomials in some neighborhood of every point in Ω {\\displaystyle \\Omega } . This stands in sharp contrast to differentiable real functions; there are infinitely differentiable real functions that are nowhere analytic; see Non-analytic smooth function § A smooth function which is nowhere real analytic.",
"title": "Holomorphic functions"
},
{
"paragraph_id": 12,
"text": "Most elementary functions, including the exponential function, the trigonometric functions, and all polynomial functions, extended appropriately to complex arguments as functions C → C {\\displaystyle \\mathbb {C} \\to \\mathbb {C} } , are holomorphic over the entire complex plane, making them entire functions, while rational functions p / q {\\displaystyle p/q} , where p and q are polynomials, are holomorphic on domains that exclude points where q is zero. Such functions that are holomorphic everywhere except a set of isolated points are known as meromorphic functions. On the other hand, the functions z ↦ ℜ ( z ) {\\displaystyle z\\mapsto \\Re (z)} , z ↦ | z | {\\displaystyle z\\mapsto |z|} , and z ↦ z ¯ {\\displaystyle z\\mapsto {\\bar {z}}} are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below).",
"title": "Holomorphic functions"
},
{
"paragraph_id": 13,
"text": "An important property of holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If f : C → C {\\displaystyle f:\\mathbb {C} \\to \\mathbb {C} } , defined by f ( z ) = f ( x + i y ) = u ( x , y ) + i v ( x , y ) {\\displaystyle f(z)=f(x+iy)=u(x,y)+iv(x,y)} , where x , y , u ( x , y ) , v ( x , y ) ∈ R {\\displaystyle x,y,u(x,y),v(x,y)\\in \\mathbb {R} } , is holomorphic on a region Ω {\\displaystyle \\Omega } , then for all z 0 ∈ Ω {\\displaystyle z_{0}\\in \\Omega } ,",
"title": "Holomorphic functions"
},
{
"paragraph_id": 14,
"text": "In terms of the real and imaginary parts of the function, u and v, this is equivalent to the pair of equations u x = v y {\\displaystyle u_{x}=v_{y}} and u y = − v x {\\displaystyle u_{y}=-v_{x}} , where the subscripts indicate partial differentiation. However, the Cauchy–Riemann conditions do not characterize holomorphic functions, without additional continuity conditions (see Looman–Menchoff theorem).",
"title": "Holomorphic functions"
},
{
"paragraph_id": 15,
"text": "Holomorphic functions exhibit some remarkable features. For instance, Picard's theorem asserts that the range of an entire function can take only three possible forms: C {\\displaystyle \\mathbb {C} } , C ∖ { z 0 } {\\displaystyle \\mathbb {C} \\setminus \\{z_{0}\\}} , or { z 0 } {\\displaystyle \\{z_{0}\\}} for some z 0 ∈ C {\\displaystyle z_{0}\\in \\mathbb {C} } . In other words, if two distinct complex numbers z {\\displaystyle z} and w {\\displaystyle w} are not in the range of an entire function f {\\displaystyle f} , then f {\\displaystyle f} is a constant function. Moreover, a holomorphic function on a connected open set is determined by its restriction to any nonempty open subset.",
"title": "Holomorphic functions"
},
{
"paragraph_id": 16,
"text": "In mathematics, a conformal map is a function that locally preserves angles, but not necessarily lengths.",
"title": "Conformal map"
},
{
"paragraph_id": 17,
"text": "More formally, let U {\\displaystyle U} and V {\\displaystyle V} be open subsets of R n {\\displaystyle \\mathbb {R} ^{n}} . A function f : U → V {\\displaystyle f:U\\to V} is called conformal (or angle-preserving) at a point u 0 ∈ U {\\displaystyle u_{0}\\in U} if it preserves angles between directed curves through u 0 {\\displaystyle u_{0}} , as well as preserving orientation. Conformal maps preserve both angles and the shapes of infinitesimally small figures, but not necessarily their size or curvature.",
"title": "Conformal map"
},
{
"paragraph_id": 18,
"text": "The conformal property may be described in terms of the Jacobian derivative matrix of a coordinate transformation. The transformation is conformal whenever the Jacobian at each point is a positive scalar times a rotation matrix (orthogonal with determinant one). Some authors define conformality to include orientation-reversing mappings whose Jacobians can be written as any scalar times any orthogonal matrix.",
"title": "Conformal map"
},
{
"paragraph_id": 19,
"text": "For mappings in two dimensions, the (orientation-preserving) conformal mappings are precisely the locally invertible complex analytic functions. In three and higher dimensions, Liouville's theorem sharply limits the conformal mappings to a few types.",
"title": "Conformal map"
},
{
"paragraph_id": 20,
"text": "One of the central tools in complex analysis is the line integral. The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues among others is applicable (see methods of contour integration). A \"pole\" (or isolated singularity) of a function is a point where the function's value becomes unbounded, or \"blows up\". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent to Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of more well understood functions, such as polynomials.",
"title": "Major results"
},
{
"paragraph_id": 21,
"text": "A bounded function that is holomorphic in the entire complex plane must be constant; this is Liouville's theorem. It can be used to provide a natural and short proof for the fundamental theorem of algebra which states that the field of complex numbers is algebraically closed.",
"title": "Major results"
},
{
"paragraph_id": 22,
"text": "If a function is holomorphic throughout a connected domain then its values are fully determined by its values on any smaller subdomain. The function on the larger domain is said to be analytically continued from its values on the smaller domain. This allows the extension of the definition of functions, such as the Riemann zeta function, which are initially defined in terms of infinite sums that converge only on limited domains to almost the entire complex plane. Sometimes, as in the case of the natural logarithm, it is impossible to analytically continue a holomorphic function to a non-simply connected domain in the complex plane but it is possible to extend it to a holomorphic function on a closely related surface known as a Riemann surface.",
"title": "Major results"
},
{
"paragraph_id": 23,
"text": "All this refers to complex analysis in one variable. There is also a very rich theory of complex analysis in more than one complex dimension in which the analytic properties such as power series expansion carry over whereas most of the geometric properties of holomorphic functions in one complex dimension (such as conformality) do not carry over. The Riemann mapping theorem about the conformal relationship of certain domains in the complex plane, which may be the most important result in the one-dimensional theory, fails dramatically in higher dimensions.",
"title": "Major results"
},
{
"paragraph_id": 24,
"text": "A major application of certain complex spaces is in quantum mechanics as wave functions.",
"title": "Major results"
}
] | Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, applied mathematics; as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering. As a differentiable function of a complex variable is equal to its Taylor series, complex analysis is particularly concerned with analytic functions of a complex variable, that is, holomorphic functions. The concept can be extended to functions of several complex variables. | 2001-11-08T21:52:10Z | 2023-12-31T12:10:41Z | [
"Template:Short description",
"Template:Main",
"Template:Reflist",
"Template:Sister project links",
"Template:By whom",
"Template:Analysis-footer",
"Template:Distinguish",
"Template:Math",
"Template:Mvar",
"Template:Nowrap",
"Template:Cite web",
"Template:More footnotes",
"Template:Complex analysis sidebar",
"Template:Slink",
"Template:Excerpt",
"Template:Cite book",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Complex_analysis |
5,760 | History of China | The history of China spans several millennia across a wide geographical area. Each region now considered part of the Chinese world has experienced periods of unity, fracture, prosperity, and strife. Chinese civilization first emerged in the Yellow River valley, which along with the Yangtze basin constitutes the geographic core of the Chinese cultural sphere. China maintains a rich diversity of ethnic and linguistic people groups. The traditional lens for viewing Chinese history is the dynastic cycle: imperial dynasties rise and fall, and are ascribed certain achievements. Throughout pervades the narrative that Chinese civilization can be traced as an unbroken thread many thousands of years into the past, making it one of the cradles of civilization. At various times, states representative of a dominant Chinese culture have directly controlled areas stretching as far west as the Tian Shan, the Tarim Basin, and the Himalayas, as far north as the Sayan Mountains, and as far south as the delta of the Red River.
The Neolithic period saw increasingly complex polities begin to emerge along the Yellow and Yangtze rivers. The Erlitou culture in the central plains of China is sometimes identified with the Xia dynasty (3rd millennium BCE) of traditional Chinese historiography. The earliest surviving written Chinese dates to roughly 1250 BCE, consisting of divinations inscribed on oracle bones. Chinese bronze inscriptions, ritual texts dedicated to ancestors, form another large corpus of early Chinese writing. The earliest strata of received literature in Chinese include poetry, divination, and records of official speeches. China is believed to be one of a very few loci of independent invention of writing, and the earliest surviving records display an already-mature written language. The culture remembered by the earliest extant literature is that of the Zhou dynasty (c. 1046–256 BCE), China's Axial age, during which the Mandate of Heaven was introduced, and foundations laid for philosophies such as Confucianism, Taoism, Legalism, and Wuxing.
China was first united under a single imperial state by Qin Shi Huang in 221 BCE. Orthography, weights, measures, and law were all standardized. Shortly thereafter, China entered its classical era with the Han dynasty (206 BCE – CE 220), marking a critical period. A term for the Chinese language is still "Han language", and the dominant Chinese ethnic group is known as Han Chinese. The Chinese empire reached some of its farthest geographical extents during this period. Confucianism was officially sanctioned and its core texts were edited into their received forms. Wealthy landholding families independent of the ancient aristocracy began to wield significant power. Han technology can be considered on par with that of the contemporaneous Roman Empire: mass production of paper aided the proliferation of written documents, and the written language of this period was employed for millennia afterwards. China became known internationally for its sericulture. When the Han imperial order finally collapsed after four centuries, China entered an equally lengthy period of disunity, during which Buddhism began to have a significant impact on Chinese culture, while calligraphy, art, historiography, and storytelling flourished. Wealthy families in some cases became more powerful than the central government. The Yangtze River valley was incorporated into the dominant cultural sphere.
A period of unity began in 581 with the Sui dynasty, which soon gave way to the long-lived Tang dynasty (618–907), regarded as another Chinese golden age. The Tang dynasty saw flourishing developments in science, technology, poetry, economics, and geographical influence. China's first officially recognized empress, Wu Zetian, reigned during the dynasty's first century. Buddhism was adopted by Tang emperors. "Tang people" is the other common demonym for the Han ethnic group. After the Tang fractured, the Song dynasty (960–1279) saw the maximal extent of imperial Chinese cosmopolitan development. Mechanical printing was introduced, and many of the earliest surviving witnesses of certain texts are wood-block prints from this era. Song scientific advancement led the world, on par with the contemporaneous Khwarazmian Empire, and the imperial examination system gave ideological structure to the political bureaucracy. Confucianism and Taoism were fully knit together in Neo-Confucianism.
Eventually, the Mongol Empire conquered all of China, establishing the Yuan dynasty in 1271. Contact with Europe began to increase during this time. Achievements under the subsequent Ming dynasty (1368–1644) include global exploration, fine porcelain, and many extant public works projects, such as those restoring the Grand Canal and Great Wall. Three of the four Classic Chinese Novels were written during the Ming. The Qing dynasty that succeeded the Ming was ruled by ethnic Manchu people. The Qianlong emperor (r. 1735–1796) commissioned a complete encyclopaedia of imperial libraries, totaling nearly a billion words. Imperial China reached its greatest territorial extent during the Qing, but China came into increasing conflict with European powers, culminating in the Opium Wars and subsequent unequal treaties.
The 1911 Xinhai Revolution, led by Sun Yat-sen and others, created the modern Republic of China. From 1927, a costly civil war raged between the Republican government under Chiang Kai-shek and the Chinese Red Army, and the industrialized Empire of Japan also invaded the divided country. After the Communist victory, Mao Zedong proclaimed the People's Republic of China (PRC) in 1949, with the Republic retreating to Taiwan. Both governments still claim sole legitimacy. The PRC has slowly accumulated the majority of diplomatic recognition, and Taiwan's status remains disputed. From 1966 to 1976, the Cultural Revolution in mainland China helped consolidate Mao's power towards the end of his life. After his death, the government began economic reforms under Deng Xiaoping, and became the world's fastest-growing major economy. China had been the most populous nation in the world for decades, until it was surpassed by India in 2023.
The archaic human species of Homo erectus arrived in Eurasia sometime between 1.3 and 1.8 million years ago (Ma) and numerous remains of its subspecies have been found in what is now China. The oldest of these is the southwestern Yuanmou Man (元谋人; in Yunnan), dated to c. 1.7 Ma, which lived in a mixed bushland-forest environment alongside chalicotheres, deer, the elephant Stegodon, rhinos, cattle, pigs, and the giant short-faced hyaena. The better-known Peking Man (北京猿人; near Beijing) of 700,000–400,000 BP was discovered in the Zhoukoudian cave alongside scrapers, choppers, and, dated slightly later, points, burins, and awls. Other Homo erectus fossils have been found widely throughout the region, including the northwestern Lantian Man (蓝田人; in Shaanxi) as well as minor specimens in northeastern Liaoning and southern Guangdong. The dates of most Paleolithic sites were long debated but have been more reliably established based on modern magnetostratigraphy: Majuangou at 1.66–1.55 Ma, Lanpo at 1.6 Ma, Xiaochangliang at 1.36 Ma, Xiantai at 1.36 Ma, Banshan at 1.32 Ma, Feiliang at 1.2 Ma and Donggutuo at 1.1 Ma. Evidence of fire use by Homo erectus, dating to between 1.8 and 1 million years BP, has been found at the archaeological site of Xihoudu, Shanxi Province.
The circumstances surrounding the evolution of Homo erectus to contemporary H. sapiens are debated; the three main theories include the dominant "Out of Africa" theory (OOA), the regional continuity model, and the admixture variant of the OOA hypothesis. Regardless, the earliest modern humans have been dated to China at 120,000–80,000 BP based on fossilized teeth discovered in Fuyan Cave of Dao County, Hunan. The larger animals which lived alongside these humans include the extinct Ailuropoda baconi panda, the Crocuta ultima hyena, the Stegodon, and the giant tapir. Evidence of Middle Palaeolithic Levallois technology has been found in the lithic assemblage of Guanyindong Cave site in southwest China, dated to approximately 170,000–80,000 years ago.
The Neolithic age in China is considered to have begun about 10,000 years ago. Because the Neolithic is conventionally defined by the presence of agriculture, it follows that the Neolithic began at different times in the various regions of what is now China. Agriculture in China developed gradually, with initial domestication of a few grains and animals gradually expanding with the addition of many others over subsequent millennia. The earliest evidence of cultivated rice, found by the Yangtze River, was carbon-dated to 8,000 years ago. Early evidence for millet agriculture in the Yellow River valley was radiocarbon-dated to about 7000 BC. The Jiahu site is one of the best preserved early agricultural villages (7000 to 5800 BC). At Damaidi in Ningxia, 3,172 cliff carvings dating to 6000–5000 BC have been discovered, "featuring 8,453 individual characters such as the sun, moon, stars, gods and scenes of hunting or grazing", according to researcher Li Xiangshi. Written symbols, sometimes called proto-writing, were found at the site of Jiahu, which is dated around 7000 BC, Damaidi around 6000 BC, Dadiwan from 5800 BC to 5400 BC, and Banpo dating from the 5th millennium BC. With agriculture came increased population, the ability to store and redistribute crops, and the potential to support specialist craftsmen and administrators, which may have existed at late Neolithic sites like Taosi and the Liangzhu culture in the Yangtze delta. The cultures of the middle and late Neolithic in the central Yellow River valley are known respectively as the Yangshao culture (5000 BC to 3000 BC) and the Longshan culture (3000 BC to 2000 BC). Pigs and dogs were the earliest domesticated animals in the region, and after about 3000 BC domesticated cattle and sheep arrived from Western Asia. Wheat also arrived at this time but remained a minor crop. Fruit such as peaches, cherries and oranges, as well as chickens and various vegetables, were also domesticated in Neolithic China.
Bronze artifacts have been found at the Majiayao culture site (between 3100 and 2700 BC). The Bronze Age is also represented at the Lower Xiajiadian culture (2200–1600 BC) site in northeast China. Sanxingdui located in what is now Sichuan is believed to be the site of a major ancient city, of a previously unknown Bronze Age culture (between 2000 and 1200 BC). The site was first discovered in 1929 and then re-discovered in 1986. Chinese archaeologists have identified the Sanxingdui culture to be part of the ancient kingdom of Shu, linking the artifacts found at the site to its early legendary kings.
Ferrous metallurgy begins to appear in the late 6th century BC in the Yangtze Valley. A bronze hatchet with a blade of meteoric iron excavated near the city of Gaocheng in Shijiazhuang (now Hebei) has been dated to the 14th century BC. An Iron Age culture of the Tibetan Plateau has tentatively been associated with the Zhang Zhung culture described in early Tibetan writings.
Chinese historians in later periods were accustomed to the notion of one dynasty succeeding another, but the political situation in early China was much more complicated. Hence, as some scholars of China suggest, the Xia and the Shang can refer to political entities that existed concurrently, just as the early Zhou existed at the same time as the Shang. This bears similarities to how China, both contemporaneously and later, has been divided into states that were not one region, legally or culturally.
The earliest period once considered historical was the legendary era of the sage-emperors Yao, Shun, and Yu. Traditionally, the abdication system was prominent in this period, with Yao yielding his throne to Shun, who abdicated to Yu, who founded the Xia dynasty.
The Xia dynasty of China (from c. 2070 – c. 1600 BC) is the earliest of the Three Dynasties described in ancient historical records such as Sima Qian's Records of the Grand Historian and Bamboo Annals. The dynasty is generally considered mythical by Western scholars, but in China it is usually associated with the early Bronze Age site at Erlitou that was excavated in Henan in 1959. Since no writing was excavated at Erlitou or any other contemporaneous site, there is not enough evidence to prove whether the Xia dynasty ever existed. Some archaeologists claim that the Erlitou site was the capital of the Xia Dynasty. In any case, the site of Erlitou had a level of political organization that would not be incompatible with the legends of Xia recorded in later texts. More importantly, the Erlitou site has the earliest evidence for an elite who conducted rituals using cast bronze vessels, which would later be adopted by the Shang and Zhou.
Archaeological evidence, such as oracle bones and bronzes, as well as transmitted texts attest to the historical existence of the Shang dynasty (c. 1600–1046 BC). Findings from the earlier Shang period come from excavations at Erligang, in present-day Zhengzhou. Findings from the later Shang or Yin (殷) period, were found in profusion at Anyang, in modern-day Henan, the last of the Shang's capitals. The findings at Anyang include the earliest written record of the Chinese so far discovered: inscriptions of divination records in ancient Chinese writing on the bones or shells of animals—the "oracle bones", dating from around 1250 to 1046 BC.
A series of at least twenty-nine kings reigned over the Shang dynasty. Throughout their reigns, according to the Shiji, the capital city was moved six times. The final and most important move was to Yin during the reign of Pan Geng, around 1300 BC. The term Yin dynasty has been synonymous with the Shang dynasty in history, although it has lately been used to refer specifically to the latter half of the Shang dynasty.
Although written records found at Anyang confirm the existence of the Shang dynasty, Western scholars are often hesitant to associate settlements that are contemporaneous with the Anyang settlement with the Shang dynasty. For example, archaeological findings at Sanxingdui suggest a technologically advanced civilization culturally unlike Anyang. The evidence is inconclusive in proving how far the Shang realm extended from Anyang. The leading hypothesis is that Anyang, ruled by the same Shang in the official history, coexisted and traded with numerous other culturally diverse settlements in the area that is now referred to as China proper.
The Zhou dynasty (1046 BC to about 256 BC) is the longest-lasting dynasty in Chinese history, though its power declined steadily over the almost eight centuries of its existence. In the late 2nd millennium BC, the Zhou dynasty arose in the Wei River valley of modern western Shaanxi Province, where they were appointed Western Protectors by the Shang. A coalition led by the ruler of the Zhou, King Wu, defeated the Shang at the Battle of Muye. They took over most of the central and lower Yellow River valley and enfeoffed their relatives and allies in semi-independent states across the region. Several of these states eventually became more powerful than the Zhou kings.
The kings of Zhou invoked the concept of the Mandate of Heaven to legitimize their rule, a concept that was influential for almost every succeeding dynasty. Like Shangdi, Heaven (tian) ruled over all the other gods, and it decided who would rule China. It was believed that a ruler lost the Mandate of Heaven when natural disasters occurred in great number, and when, more realistically, the sovereign had apparently lost his concern for the people. In response, the royal house would be overthrown, and a new house would rule, having been granted the Mandate of Heaven.
The Zhou established two capitals, Zongzhou (near modern Xi'an) and Chengzhou (Luoyang), with the king's court moving between them regularly. The Zhou alliance gradually expanded eastward into Shandong, southeastward into the Huai River valley, and southward into the Yangtze River valley.
In 771 BC, King You and his forces were defeated in the Battle of Mount Li by rebel states and Quanrong barbarians. The rebel aristocrats established a new ruler, King Ping, in Luoyang, beginning the second major phase of the Zhou dynasty: the Eastern Zhou period, which is divided into the Spring and Autumn and Warring States periods. The former period is named after the famous Spring and Autumn Annals. The decline of central power left a vacuum. The Zhou empire now consisted of hundreds of tiny states, some of them only as large as a walled town and surrounding land. These states began to fight against one another and vie for hegemony. The more powerful states tended to conquer and incorporate the weaker ones, so the number of states declined over time. By the 6th century BC most small states had disappeared by being annexed and just a few large and powerful principalities remained. Some southern states, such as Chu and Wu, claimed independence from the Zhou, who undertook wars against some of them (Wu and Yue). Many new cities were established in this period and society gradually became more urbanized and commercialized. Many famous individuals such as Laozi, Confucius and Sun Tzu lived during this chaotic period.
Conflict in this period occurred both between and within states. Warfare between states forced the surviving states to develop better administrations to mobilize more soldiers and resources. Within states there was constant jockeying between elite families. For example, the three most powerful families in the Jin state—Zhao, Wei and Han—eventually overthrew the ruling family and partitioned the state between them.
The Hundred Schools of Thought of classical Chinese philosophy began blossoming during this period and the subsequent Warring States period. Such influential intellectual movements as Confucianism, Taoism, Legalism and Mohism were founded, partly in response to the changing political world. The first two of these schools of thought would have an enormous influence on Chinese culture.
After further political consolidations, seven prominent states remained during the 5th century BC. The years in which these states battled each other is known as the Warring States period. Though the Zhou king nominally remained as such until 256 BC, he was largely a figurehead that held little real power.
Numerous developments were made during this period in the areas of culture and mathematics—including the Zuo Zhuan within the Spring and Autumn Annals (a literary work summarizing the preceding Spring and Autumn period), and the bundle of 21 bamboo slips from the Tsinghua collection, dated to 305 BC, the world's earliest known example of a two-digit, base-10 multiplication table. The Tsinghua collection indicates that sophisticated commercial arithmetic was already established during this period.
As neighboring territories of the seven states were annexed (including areas of modern Sichuan and Liaoning), they were now to be governed under an administrative system of commanderies and prefectures. This system had been in use elsewhere since the Spring and Autumn period, and its influence on administration would prove resilient—its terminology can still be seen in the sheng and xian ("provinces" and "counties") of contemporary China.
The state of Qin became dominant in the waning decades of the Warring States period, conquering the Shu capital of Jinsha on the Chengdu Plain; and then eventually driving Chu from its place in the Han River valley. Qin imitated the administrative reforms of the other states, thereby becoming a powerhouse. Its final expansion began during the reign of Ying Zheng, ultimately unifying the other six regional powers, and enabling him to proclaim himself as China's first emperor—known to history as Qin Shi Huang.
Ying Zheng's establishment of the Qin dynasty (秦朝) in 221 BC effectively formalized the region as an empire, rather than a state, and its pivotal status probably led to "Qin" (秦) later evolving into the Western term "China". To emphasize his sole rule, Zheng proclaimed himself Shi Huangdi (始皇帝; "First August Emperor"); the Huangdi title, derived from Chinese mythology, became the standard for subsequent rulers. Based in Xianyang, the empire was a centralized bureaucratic monarchy, a governing scheme which dominated the future of Imperial China. In an effort to improve on the Zhou's perceived failures, this system consisted of more than 36 commanderies (郡; jun), made up of counties (县; xian) and progressively smaller divisions, each with a local leader.
Many aspects of society were informed by Legalism, a state ideology introduced earlier by Shang Yang and promoted by the emperor and his chancellor Li Si. In legal matters this philosophy emphasized mutual responsibility in disputes and severe punishments, while economic practices included the general encouragement of agriculture and repression of trade. Reforms occurred in weights and measures, writing styles (seal script) and metal currency (Ban Liang), all of which were standardized. Traditionally, Qin Shi Huang is regarded as having ordered a mass burning of books and the live burial of scholars under the guise of Legalism, though contemporary scholars express considerable doubt on the historicity of this event. Despite its importance, Legalism was probably supplemented in non-political matters by Confucianism for social and moral beliefs and the five-element Wuxing (五行) theories for cosmological thought.
The Qin administration kept exhaustive records on their population, collecting information on their sex, age, social status and residence. Commoners, who made up over 90% of the population, "suffered harsh treatment" according to the historian Patricia Buckley Ebrey, as they were often conscripted into forced labor for the empire's construction projects. This included a massive system of imperial highways in 220 BC, which ranged around 4,250 miles (6,840 km) altogether. Other major construction projects were assigned to the general Meng Tian, who concurrently led a successful campaign against the northern Xiongnu peoples (210s BC), reportedly with 300,000 troops. Under Qin Shi Huang's orders, Meng supervised the combining of numerous ancient walls into what came to be known as the Great Wall of China and oversaw the building of a 500 miles (800 km) straight highway between northern and southern China.
After Qin Shi Huang's death the Qin government drastically deteriorated and eventually capitulated in 207 BC after the Qin capital was captured and sacked by rebels, which would ultimately lead to the establishment of the Han Empire.
The Han dynasty was founded by Liu Bang, who emerged victorious from the Chu–Han Contention that followed the fall of the Qin dynasty. A golden age in Chinese history, the Han dynasty's long period of stability and prosperity consolidated the foundation of China as a unified state under a central imperial bureaucracy, which was to last intermittently for most of the next two millennia. During the Han dynasty, the territory of China was extended to most of China proper and to areas far to the west. Confucianism was officially elevated to orthodox status and was to shape the subsequent Chinese civilization. Art, culture and science all advanced to unprecedented heights. With the profound and lasting impacts of this period of Chinese history, the dynasty name "Han" has been taken as the name of the Chinese people, now the dominant ethnic group in modern China, and is commonly used to refer to the Chinese language and written characters.
After the initial laissez-faire policies of Emperors Wen and Jing, the ambitious Emperor Wu brought the empire to its zenith. To consolidate his power, he disenfranchised the majority of imperial relatives, appointing military governors to control their former lands. As a further step, he extended patronage to Confucianism, which emphasizes stability and order in a well-structured society. Imperial Universities were established to support its study. At the urging of his Legalist advisors, however, he also strengthened the fiscal structure of the dynasty with government monopolies.
Major military campaigns were launched to weaken the nomadic Xiongnu Empire, limiting its influence north of the Great Wall. Together with the diplomatic efforts of Zhang Qian, these campaigns extended the Han Empire's sphere of influence to the states of the Tarim Basin and opened up the Silk Road, which connected China to the west and stimulated bilateral trade and cultural exchange. To the south, various small kingdoms far beyond the Yangtze River Valley were formally incorporated into the empire.
Emperor Wu also dispatched a series of military campaigns against the Baiyue tribes. The Han annexed Minyue in 135 BC and 111 BC, Nanyue in 111 BC, and Dian in 109 BC. Migration and military expeditions led to the cultural assimilation of the south. It also brought the Han into contact with kingdoms in Southeast Asia, introducing diplomacy and trade.
After Emperor Wu the empire slipped into gradual stagnation and decline. Economically, the state treasury was strained by excessive campaigns and projects, while land acquisitions by elite families gradually drained the tax base. Various consort clans exerted increasing control over a string of incompetent emperors, and eventually the dynasty was briefly interrupted by the usurpation of Wang Mang.
In AD 9 the usurper Wang Mang claimed that the Mandate of Heaven called for the end of the Han dynasty and the rise of his own, and he founded the short-lived Xin dynasty. Wang Mang started an extensive program of land and other economic reforms, including the outlawing of slavery and the nationalization and redistribution of land. These programs, however, were never supported by the landholding families, because the reforms favored the peasants. The resulting instability brought about chaos, uprisings, and loss of territories. This was compounded by mass flooding of the Yellow River, whose silt buildup caused it to split into two channels and displaced large numbers of farmers. Wang Mang was eventually killed in Weiyang Palace by an enraged peasant mob in AD 23.
Emperor Guangwu reinstated the Han dynasty with the support of landholding and merchant families at Luoyang, east of the former capital Xi'an. This new era is thus termed the Eastern Han dynasty. Under the capable administrations of Emperors Ming and Zhang, the dynasty's former glories were reclaimed, with brilliant military and cultural achievements. The Xiongnu Empire was decisively defeated. The diplomat and general Ban Chao further expanded the conquests across the Pamirs to the shores of the Caspian Sea, reopening the Silk Road and bringing trade and foreign cultures, along with the arrival of Buddhism. With these extensive connections to the west, the first of several Roman embassies to China was recorded in Chinese sources, arriving by the sea route in AD 166, with a second in AD 284.
The Eastern Han dynasty was one of the most prolific eras of science and technology in ancient China, notably the historic invention of papermaking by Cai Lun, and the numerous scientific and mathematical contributions by the famous polymath Zhang Heng.
By the 2nd century, the empire declined amidst land acquisitions, invasions, and feuding between consort clans and eunuchs. The Yellow Turban Rebellion broke out in AD 184, ushering in an era of warlords. In the ensuing turmoil, three states emerged, trying to gain predominance and reunify the land, giving this historical period its name. The classic historical novel Romance of the Three Kingdoms dramatizes events of this period.
The warlord Cao Cao reunified the north in 208, and in 220 his son accepted the abdication of Emperor Xian of Han, thus initiating the Wei dynasty. Soon, Wei's rivals Shu and Wu proclaimed their independence. This period was characterized by a gradual decentralization of the state that had existed during the Qin and Han dynasties, and an increase in the power of great families.
In 266, the Jin dynasty overthrew the Wei and later unified the country in 280, but this union was short-lived.
The Jin dynasty was severely weakened by the War of the Eight Princes and lost control of northern China after non-Han Chinese settlers rebelled and captured Luoyang and Chang'an. In 317, the Jin prince Sima Rui, based in modern-day Nanjing, became emperor and continued the dynasty, now known as the Eastern Jin, which held southern China for another century. Prior to this move, historians refer to the dynasty as the Western Jin.
Northern China fragmented into a series of independent states known as the Sixteen Kingdoms, most of which were founded by Xiongnu, Xianbei, Jie, Di and Qiang rulers. These non-Han peoples were ancestors of the Turks, Mongols, and Tibetans. Many had, to some extent, been "sinicized" long before their ascent to power. In fact, some of them, notably the Qiang and the Xiongnu, had already been allowed to live in the frontier regions within the Great Wall since late Han times. During this period, warfare ravaged the north and prompted large-scale Han Chinese migration south to the Yangtze River Basin and Delta.
In the early 5th century China entered a period known as the Northern and Southern dynasties, in which parallel regimes ruled the northern and southern halves of the country. In the south, the Eastern Jin gave way to the Liu Song, Southern Qi, Liang and finally Chen. Each of these Southern dynasties was led by a Han Chinese ruling family and used Jiankang (modern Nanjing) as its capital. They held off attacks from the north and preserved many aspects of Chinese civilization, while northern barbarian regimes began to sinify.
In the north the last of the Sixteen Kingdoms was extinguished in 439 by the Northern Wei, a kingdom founded by the Xianbei, a nomadic people who unified northern China. The Northern Wei eventually split into the Eastern and Western Wei, which then became the Northern Qi and Northern Zhou. These regimes were dominated by Xianbei or Han Chinese who had married into Xianbei families. During this period most Xianbei people adopted Han surnames, eventually leading to complete assimilation into the Han.
Despite the division of the country, Buddhism spread throughout the land. In southern China, fierce debates about whether Buddhism should be allowed were held frequently by the royal court and nobles. By the end of the era, Buddhists and Taoists had become much more tolerant of each other.
The short-lived Sui dynasty was a pivotal period in Chinese history. Founded by Emperor Wen in 581 in succession to the Northern Zhou, the Sui went on to conquer the Southern Chen in 589, reunifying China and ending three centuries of political division. The Sui pioneered many new institutions, including the government system of Three Departments and Six Ministries and imperial examinations for selecting officials from among commoners, while improving the fubing system of army conscription and the equal-field system of land distribution. These policies, which were adopted by later dynasties, brought enormous population growth and amassed great wealth for the state. Standardized coinage was enforced throughout the unified empire. Buddhism took root as a prominent religion and was supported officially. Sui China was known for its numerous mega-construction projects. Intended for grain shipment and transporting troops, the Grand Canal was constructed, linking the capitals Daxing (Chang'an) and Luoyang to the wealthy southeast region and, by another route, to the northeast border. The Great Wall was also expanded, while a series of military conquests and diplomatic maneuvers further pacified the borders. However, the massive invasions of the Korean Peninsula during the Goguryeo–Sui War failed disastrously, triggering widespread revolts that led to the fall of the dynasty.
The Tang dynasty was a golden age of Chinese civilization, a prosperous, stable, and creative period with significant developments in culture, art, literature, particularly poetry, and technology. Buddhism became the predominant religion for the common people. Chang'an (modern Xi'an), the national capital, was the largest city in the world during its time.
The first emperor, Emperor Gaozu, came to the throne on 18 June 618, placed there by his son, Li Shimin, who became the second emperor, Taizong, one of the greatest emperors in Chinese history. Combined military conquests and diplomatic maneuvers reduced threats from Central Asian tribes, extended the border, and brought neighboring states into a tributary system. Military victories in the Tarim Basin kept the Silk Road open, connecting Chang'an to Central Asia and areas far to the west. In the south, lucrative maritime trade routes from port cities such as Guangzhou connected with distant countries, and foreign merchants settled in China, encouraging a cosmopolitan culture. Tang culture and social systems were observed and adapted by neighboring countries, most notably Japan. Internally the Grand Canal linked the political heartland in Chang'an to the agricultural and economic centers in the eastern and southern parts of the empire. Xuanzang, a Chinese Buddhist monk, scholar, traveller, and translator, travelled to India on his own and returned with "over six hundred Mahayana and Hinayana texts, seven statues of the Buddha and more than a hundred sarira relics."
The prosperity of the early Tang dynasty was abetted by a centralized bureaucracy. The government was organized as "Three Departments and Six Ministries" to separately draft, review, and implement policies. These departments were run by royal family members and landed aristocrats who, as the dynasty wore on, were joined or replaced by scholar-officials selected through imperial examinations, setting patterns for later dynasties.
Under the Tang "equal-field system" all land was owned by the Emperor and granted to each family according to household size. Men granted land were conscripted for military service for a fixed period each year, a military policy known as the fubing system. These policies stimulated rapid growth in productivity and supported a significant army without much burden on the state treasury. By the dynasty's midpoint, however, standing armies had replaced conscription, and land was continually falling into the hands of private owners and religious institutions granted exemptions.
The dynasty continued to flourish under the rule of Empress Wu Zetian, the only official empress regnant in Chinese history, and reached its zenith during the long reign of Emperor Xuanzong, who oversaw an empire that stretched from the Pacific to the Aral Sea with at least 50 million people. There were vibrant artistic and cultural creations, including works of the greatest Chinese poets, Li Bai and Du Fu.
At the zenith of the empire's prosperity, the An Lushan Rebellion of 755–763 was a watershed event. War, disease, and economic disruption devastated the population and drastically weakened the central imperial government. Upon suppression of the rebellion, regional military governors, known as jiedushi, gained increasingly autonomous status. With the loss of revenue from the land tax, the central imperial government came to rely heavily on its salt monopoly. Externally, formerly submissive states raided the empire, and vast border territories were lost for centuries. Nevertheless, civil society recovered and thrived amidst the weakened imperial bureaucracy.
In the late Tang period the empire was worn out by recurring revolts of the regional military governors, while scholar-officials engaged in fierce factional strife and corrupt eunuchs amassed immense power. Catastrophically, the Huang Chao Rebellion, from 874 to 884, devastated the entire empire for a decade. The sack of the southern port of Guangzhou in 879 was followed by the massacre of most of its inhabitants, especially the large foreign merchant enclaves. By 881, both capitals, Luoyang and Chang'an, had fallen in succession. The reliance on ethnic Han and Turkic warlords to suppress the rebellion increased their power and influence. Consequently, the fall of the dynasty following Zhu Wen's usurpation led to an era of division.
The period of political disunity between the Tang and the Song, known as the Five Dynasties and Ten Kingdoms period, lasted from 907 to 960. During this half-century, China was in all respects a multi-state system. Five regimes, namely the (Later) Liang, Tang, Jin, Han and Zhou, rapidly succeeded one another in control of the traditional imperial heartland in northern China. The rulers of the (Later) Tang, Jin and Han were sinicized Shatuo Turks, who ruled over an ethnic Han Chinese majority. Smaller, more stable regimes under mostly ethnic Han rulers coexisted in southern and western China over the period, cumulatively constituting the "Ten Kingdoms".
Amidst the political chaos in the north, the strategic Sixteen Prefectures (the region along today's Great Wall) were ceded to the emerging Khitan Liao dynasty, which drastically weakened the defense of China proper against northern nomadic empires. To the south, Vietnam gained lasting independence after being a Chinese prefecture for many centuries. With wars dominating northern China, there were mass southward migrations of population, which further accelerated the southward shift of China's cultural and economic centers. The era ended with the coup of the Later Zhou general Zhao Kuangyin and the establishment of the Song dynasty in 960, which eventually annihilated the remains of the "Ten Kingdoms" and reunified China.
In 960, the Song dynasty was founded by Emperor Taizu, with its capital established at Kaifeng (then known as Bianjing). In 979, the Song dynasty reunified most of China proper, while large swaths of the outer territories were occupied by sinicized nomadic empires. The Khitan Liao dynasty, which lasted from 907 to 1125, ruled over Manchuria, Mongolia, and parts of Northern China. Meanwhile, in what are now the north-western Chinese provinces of Gansu, Shaanxi, and Ningxia, the Tangut tribes founded the Western Xia dynasty, which lasted from 1032 to 1227.
Aiming to recover the strategic Sixteen Prefectures lost under the previous dynasty, the early Song launched campaigns against the Liao dynasty, all of which ended in failure. Then in 1004, the Liao cavalry swept over the exposed North China Plain and reached the outskirts of Kaifeng, forcing the Song to submit to the Chanyuan Treaty, which imposed heavy annual tribute payments from the Song treasury. The treaty was a significant reversal of Chinese dominance of the traditional tributary system. Yet the annual outflow of Song silver to the Liao was recouped through Liao purchases of Chinese goods and products, which expanded the Song economy and replenished its treasury. This dampened the incentive for the Song to campaign further against the Liao. Meanwhile, this cross-border trade and contact induced further sinicization within the Liao Empire, at the expense of its military might, which was derived from its nomadic lifestyle. Similar treaties and socio-economic consequences occurred in the Song's relations with the Jin dynasty.
Within the Liao Empire the Jurchen tribes revolted against their overlords to establish the Jin dynasty in 1115. In 1125, the devastating Jin cataphracts annihilated the Liao dynasty, while remnants of the Liao court fled to Central Asia to found the Qara Khitai Empire (Western Liao dynasty). The Jin invasion of the Song dynasty followed swiftly. In 1127, Kaifeng was sacked, a massive catastrophe known as the Jingkang Incident that ended the Northern Song dynasty; the entire north of China was subsequently conquered. The surviving members of the Song court regrouped in the new capital city of Hangzhou and initiated the Southern Song dynasty, which ruled the territories south of the Huai River. In the ensuing years, the territory and population of China were divided between the Song dynasty, the Jin dynasty and the Western Xia dynasty. The era ended with the Mongol conquest, as Western Xia fell in 1227, the Jin dynasty in 1234, and finally the Southern Song dynasty in 1279.
Despite its military weakness, the Song dynasty is widely considered to be the high point of classical Chinese civilization. The Song economy, facilitated by technological advancement, reached a level of sophistication probably unseen in world history before its time. The population soared to over 100 million, and the living standards of common people improved tremendously due to improvements in rice cultivation and the wide availability of coal for production. The capital cities of Kaifeng and subsequently Hangzhou were both the most populous cities in the world for their time, and encouraged vibrant civil societies unmatched by previous Chinese dynasties. Although land trading routes to the far west were blocked by nomadic empires, there was extensive maritime trade with neighboring states, which facilitated the use of Song coinage as a de facto currency of exchange. Giant wooden vessels equipped with compasses traveled throughout the China Seas and the northern Indian Ocean. The concept of insurance was practised by merchants to hedge the risks of such long-haul maritime shipments. With these prosperous economic activities, the first use of paper currency in history emerged in the western city of Chengdu, as a supplement to the existing copper coins.
The Song dynasty is considered a golden age of Chinese science and technology, thanks to innovative scholar-officials such as Su Song (1020–1101) and Shen Kuo (1031–1095). Inventions such as the hydro-mechanical astronomical clock, the first continuous and endless power-transmitting chain, woodblock printing and paper money all appeared during the Song dynasty.
There was court intrigue between the political reformers and conservatives, led by the chancellors Wang Anshi and Sima Guang, respectively. By the mid-to-late 13th century, the Chinese had adopted the dogma of Neo-Confucian philosophy formulated by Zhu Xi. Monumental literary works were compiled during the Song dynasty, such as the innovative historical narrative Zizhi Tongjian ("Comprehensive Mirror to Aid in Government"). The invention of movable-type printing further facilitated the spread of knowledge. Culture and the arts flourished, with grandiose artworks such as Along the River During the Qingming Festival and Eighteen Songs of a Nomad Flute, along with great Buddhist painters such as the prolific Lin Tinggui.
The Song dynasty was also a period of major innovation in the history of warfare. Gunpowder, while invented in the Tang dynasty, was first put to use on battlefields by the Song army, inspiring a succession of new firearm and siege engine designs. During the Southern Song dynasty, as its survival hinged decisively on guarding the Yangtze and Huai Rivers against the cavalry forces from the north, the first standing navy in China was assembled in 1132, with its admiral's headquarters established at Dinghai. Paddle-wheel warships equipped with trebuchets could launch incendiary bombs made of gunpowder and lime, as recorded in the Song victories over the invading Jin forces at the Battle of Tangdao in the East China Sea and the Battle of Caishi on the Yangtze River in 1161.
The advances in civilization during the Song dynasty came to an abrupt end following the devastating Mongol conquest, during which the population sharply dwindled and the economy markedly contracted. Although the Southern Song fiercely resisted the Mongol advance for more than three decades, its capital Hangzhou fell in 1276, followed by the final annihilation of the Song standing navy at the Battle of Yamen in 1279.
The Yuan dynasty was formally proclaimed in 1271, when Kublai Khan, Great Khan of the Mongols and a grandson of Genghis Khan, assumed the additional title of Emperor of China and established his inherited portion of the Mongol Empire as a Chinese dynasty. In the preceding decades, the Mongols had conquered the Jin dynasty in Northern China, and the Southern Song dynasty fell in 1279 after a protracted and bloody war. The Mongol Yuan dynasty became the first conquest dynasty in Chinese history to rule the entirety of China proper and its population as an ethnic minority. The dynasty also directly controlled the Mongol heartland and other regions, inheriting the largest share of territory of the divided Mongol Empire, which roughly coincided with the modern area of China and nearby regions in East Asia. Further expansion of the empire was halted after defeats in the invasions of Japan and Vietnam. Following the precedent of the previous Jin dynasty, the capital of the Yuan dynasty was established at Khanbaliq (also known as Dadu, modern-day Beijing). The Grand Canal was reconstructed to connect the remote capital to the economic hubs in the southern part of China, setting the precedent and laying the foundation for Beijing to remain the capital of the successive regimes that unified mainland China.
A series of Mongol civil wars in the late 13th century led to the division of the Mongol Empire. In 1304 the Yuan emperor was upheld as nominal Khagan over the western khanates (the Chagatai Khanate, the Golden Horde and the Ilkhanate), which nonetheless remained de facto autonomous. The era was known as the Pax Mongolica, when much of the Asian continent was ruled by the Mongols. For the first and only time in history, the Silk Road was controlled entirely by a single state, facilitating the flow of people, trade, and cultural exchange. A network of roads and a postal system were established to connect the vast empire. Lucrative maritime trade, developed under the previous Song dynasty, continued to flourish, with Quanzhou and Hangzhou emerging as the largest ports in the world. Adventurous travelers from the far west, most notably the Venetian Marco Polo, settled in China for decades; upon his return, his detailed travel record inspired generations of medieval Europeans with the splendors of the Far East. The Yuan dynasty was the first economy in which paper currency, known at the time as Jiaochao, was used as the predominant medium of exchange. Its unrestricted issuance in the late Yuan dynasty caused hyperinflation, which eventually brought about the downfall of the dynasty.
While the Mongol rulers of the Yuan dynasty adapted substantially to Chinese culture, their sinicization was of a lesser extent than that of earlier conquest dynasties in Chinese history. To preserve their superiority as the conquering and ruling class, they held traditional nomadic customs and heritage from the Mongolian Steppe in high regard. On the other hand, the Mongol rulers also adapted flexibly to a variety of cultures from the many advanced civilizations within the vast empire. Traditional social structure and culture in China underwent an immense transformation under Mongol dominance. Large groups of foreign migrants settled in China, enjoying elevated social status over the majority Han Chinese while enriching Chinese culture with foreign elements. The class of scholar-officials and intellectuals, traditional bearers of elite Chinese culture, lost substantial social status. This stimulated the development of the culture of the common folk. There were prolific works in zaju variety shows and literary songs (sanqu), which were written in a distinctive poetry style known as qu. Novels in the vernacular style gained unprecedented status and popularity.
Before the Mongol invasion, Chinese dynasties reported approximately 120 million inhabitants; after the conquest had been completed in 1279, the 1300 census reported roughly 60 million people. This major decline is not necessarily due only to Mongol killings. Scholars such as Frederick W. Mote argue that the wide drop in numbers reflects an administrative failure to record rather than an actual decrease; others such as Timothy Brook argue that the Mongols created a system of enserfment among a huge portion of the Chinese populace, causing many to disappear from the census altogether; other historians including William McNeill and David Morgan consider that plague was the main factor behind the demographic decline during this period. In the 14th century China suffered additional depredations from epidemics of plague, estimated to have killed around a quarter of the population of China.
Throughout the Yuan dynasty there was some general sentiment among the populace against Mongol dominance. Yet it was mainly a string of natural disasters and incompetent governance, rather than nationalist causes, that triggered widespread peasant uprisings from the 1340s onward. After the massive naval engagement at Lake Poyang, Zhu Yuanzhang prevailed over the other rebel forces in the south. He proclaimed himself emperor and founded the Ming dynasty in 1368. The same year his northern expedition army captured the capital Khanbaliq. The Yuan remnants fled back to Mongolia and sustained the regime there. Other Mongol khanates in Central Asia continued to exist after the fall of the Yuan dynasty in China.
The Ming dynasty was founded by Zhu Yuanzhang in 1368, who proclaimed himself the Hongwu Emperor. The capital was initially set at Nanjing, and was later moved to Beijing from the Yongle Emperor's reign onward.
Urbanization increased as the population grew and as the division of labor grew more complex. Large urban centers, such as Nanjing and Beijing, also contributed to the growth of private industry. In particular, small-scale industries grew up, often specializing in paper, silk, cotton, and porcelain goods. For the most part, however, relatively small urban centers with markets proliferated around the country. Town markets mainly traded food, with some necessary manufactures such as pins or oil.
Despite the xenophobia and intellectual introspection characteristic of the increasingly popular new school of neo-Confucianism, China under the early Ming dynasty was not isolated. Foreign trade and other contacts with the outside world, particularly Japan, increased considerably. Chinese merchants explored all of the Indian Ocean, reaching East Africa with the voyages of Zheng He.
The Hongwu Emperor, one of the few founders of a Chinese dynasty of peasant origin, laid the foundation of a state that relied fundamentally on agriculture. Commerce and trade, which had flourished in the previous Song and Yuan dynasties, were less emphasized. Neo-feudal landholdings of the Song and Mongol periods were expropriated by the Ming rulers. Land estates were confiscated by the government, fragmented, and rented out. Private slavery was forbidden. Consequently, after the death of the Yongle Emperor, independent peasant landholders predominated in Chinese agriculture. These laws may have paved the way to removing the worst of the poverty of the previous regimes. Toward the later era of the Ming dynasty, with declining government control, commerce, trade and private industries revived.
The dynasty had a strong and complex central government that unified and controlled the empire. The emperor's role became more autocratic, although the Hongwu Emperor necessarily continued to use what he called the "Grand Secretariat" to assist with the immense paperwork of the bureaucracy, including memorials (petitions and recommendations to the throne), imperial edicts in reply, reports of various kinds, and tax records. It was this same bureaucracy that later prevented the Ming government from being able to adapt to changes in society, and eventually led to its decline.
The Yongle Emperor strenuously tried to extend China's influence beyond its borders by demanding other rulers send ambassadors to China to present tribute. A large navy was built, including four-masted ships displacing 1,500 tons. A standing army of 1 million troops was created. The Chinese armies conquered and occupied Vietnam for around 20 years, while the Chinese fleet sailed the China seas and the Indian Ocean, cruising as far as the east coast of Africa. The Chinese gained influence in eastern Moghulistan. Several maritime Asian nations sent envoys with tribute for the Chinese emperor. Domestically, the Grand Canal was expanded and became a stimulus to domestic trade. Over 100,000 tons of iron per year were produced. Many books were printed using movable type. The imperial palace in Beijing's Forbidden City reached its current splendor. It was also during these centuries that the potential of south China came to be fully exploited. New crops were widely cultivated and industries such as those producing porcelain and textiles flourished.
In 1449 Esen Tayisi led an Oirat Mongol invasion of northern China which culminated in the capture of the Zhengtong Emperor at Tumu. From then on, the Ming were on the defensive on the northern frontier, which led to the building of the Ming Great Wall. Most of what remains of the Great Wall of China today was either built or repaired by the Ming. The brick and granite work was enlarged, the watchtowers were redesigned, and cannons were placed along its length.
At sea the Ming became increasingly isolationist after the death of the Yongle Emperor. The treasure voyages which had sailed the Indian Ocean were discontinued, and maritime prohibition laws were set in place banning the Chinese from sailing abroad. European traders who reached China in the midst of the Age of Discovery were repeatedly rebuked in their requests for trade, with the Portuguese being repulsed by the Ming navy at Tuen Mun in 1521 and again in 1522. Domestic and foreign demands for overseas trade, deemed illegal by the state, led to widespread wokou piracy attacking the southeastern coastline during the rule of the Jiajing Emperor (1507–1567), which only subsided after the opening of ports in Guangdong and Fujian and much military suppression. In addition to raids from Japan by the wokou, raids from Taiwan and the Philippines by the Pisheye also ravaged the southern coasts. The Portuguese were allowed to settle in Macau in 1557 for trade, and it remained in Portuguese hands until 1999. After the Spanish invasion of the Philippines, trade with the Spanish at Manila brought large quantities of Mexican and Peruvian silver from the Spanish Americas to China. The Dutch entry into the Chinese seas was also met with fierce resistance: the Dutch were chased off the Penghu islands in the Sino-Dutch conflicts of 1622–1624 and forced to settle in Taiwan instead. The Dutch in Taiwan fought the Ming in the Battle of Liaoluo Bay in 1633 and lost, and eventually surrendered to the Ming loyalist Koxinga in 1662, after the fall of the Ming dynasty.
In 1556, during the rule of the Jiajing Emperor, the Shaanxi earthquake killed about 830,000 people, the deadliest earthquake of all time.
The Ming dynasty intervened deeply in the Japanese invasions of Korea (1592–98), which ended with the withdrawal of all invading Japanese forces from Korea and the restoration of the Joseon dynasty, its traditional ally and tributary state. The regional hegemony of the Ming dynasty was preserved, but at a heavy toll on its resources. Meanwhile, with Ming control of Manchuria in decline, the Manchu (Jurchen) tribes, under their chieftain Nurhaci, broke away from Ming rule and emerged as a powerful, unified state, which was later proclaimed as the Qing dynasty. It went on to subdue the much-weakened Korea as its tributary, conquered Mongolia, and expanded its territory to the outskirts of the Great Wall. The most elite army of the Ming dynasty was stationed at the Shanhai Pass to guard this last stronghold against the Manchus, which weakened the Ming's suppression of internal peasant uprisings.
The Qing dynasty (1644–1912) was the last imperial dynasty in China. Founded by the Manchus, it was the second conquest dynasty to rule the entirety of China proper, and it roughly doubled the territory controlled by the Ming. The Manchus were formerly known as Jurchens, residing in the northeastern part of the Ming territory outside the Great Wall. They emerged as the major threat to the late Ming dynasty after Nurhaci united all the Jurchen tribes and his son, Hong Taiji, declared the founding of the Qing dynasty in 1636. The Qing dynasty set up the Eight Banners system that provided the basic framework for the Qing military conquest. Li Zicheng's peasant rebellion captured Beijing in 1644, and the Chongzhen Emperor, the last Ming emperor, committed suicide. The Manchus allied with the Ming general Wu Sangui to seize Beijing, which was made the capital of the Qing dynasty, and then proceeded to subdue the Ming remnants in the south. During the Ming–Qing transition, the Ming dynasty (and later the Southern Ming), the emerging Qing dynasty, and several other factions, such as the Shun and Xi dynasties founded by peasant revolt leaders, fought one another. This conflict, along with the innumerable natural disasters of the time, such as those caused by the Little Ice Age, and epidemics like the Great Plague during the last decade of the Ming dynasty, caused enormous loss of life and significant harm to the economy. In total, these decades saw the loss of as many as 25 million lives, but the Qing went on to restore China's imperial power and inaugurate another flowering of the arts. The early Manchu emperors combined traditions of Inner Asian rule with Confucian norms of traditional Chinese government and were considered a Chinese dynasty.
The Manchus enforced a 'queue order', forcing Han Chinese men to adopt the Manchu queue hairstyle. Officials were required to wear Manchu-style clothing, such as the changshan (bannermen dress and Tangzhuang), but ordinary Han civilians were allowed to wear traditional Han clothing. Bannermen could not undertake trade or manual labor; they had to petition to be removed from banner status. They were considered aristocracy and were given annual pensions, land, and allotments of cloth. The Kangxi Emperor ordered the creation of the Kangxi Dictionary, the most complete dictionary of Chinese characters that had yet been compiled.
Over the next half-century, all areas previously under the Ming dynasty were consolidated under the Qing. Conquests in Central Asia in the eighteenth century extended territorial control. Between 1673 and 1681, the Kangxi Emperor suppressed the Revolt of the Three Feudatories, an uprising of three generals in Southern China who had been denied hereditary rule of large fiefdoms granted by the previous emperor. In 1683, the Qing staged an amphibious assault on southern Taiwan, bringing down the rebel Kingdom of Tungning, which was founded by the Ming loyalist Koxinga (Zheng Chenggong) in 1662 after the fall of the Southern Ming, and had served as a base for continued Ming resistance in Southern China. The Qing defeated the Russians at Albazin, resulting in the Treaty of Nerchinsk.
By the end of Qianlong Emperor's long reign in 1796, the Qing Empire was at its zenith. The Qing ruled more than one-third of the world's population, and had the largest economy in the world. By area it was one of the largest empires ever.
In the 19th century the empire was internally restive and externally threatened by western powers. The defeat by the British Empire in the First Opium War (1840) led to the Treaty of Nanking (1842), under which Hong Kong was ceded to Britain and importation of opium (produced by British Empire territories) was allowed. Opium usage continued to grow in China, adversely affecting societal stability. Subsequent military defeats and unequal treaties with other western powers continued even after the fall of the Qing dynasty.
Internally, the Taiping Rebellion (1851–1864), a Christian religious movement led by the "Heavenly King" Hong Xiuquan, swept up from the south to establish the Taiping Heavenly Kingdom, which controlled roughly a third of China proper for over a decade. The court in desperation empowered Han Chinese officials such as Zeng Guofan to raise local armies. After initial defeats, Zeng crushed the rebels in the Third Battle of Nanking in 1864. This was one of the largest wars of the 19th century in troop involvement; there was massive loss of life, with a death toll of about 20 million. A string of civil disturbances followed, including the Punti–Hakka Clan Wars, the Nian Rebellion, the Dungan Revolt, and the Panthay Rebellion. All the rebellions were ultimately put down, but at enormous cost and with millions dead, seriously weakening the central imperial authority. China never rebuilt a strong central army, and many local officials used their military power to effectively rule independently in their provinces.
Yet the dynasty appeared to recover during the Tongzhi Restoration (1860–1872), led by Manchu royal family reformers and Han Chinese officials such as Zeng Guofan and his protégés Li Hongzhang and Zuo Zongtang. Their Self-Strengthening Movement made effective institutional reforms and imported Western factories and communications technology, with prime emphasis on strengthening the military. However, the reform was undermined by official rivalries, cynicism, and quarrels within the imperial family. The defeat of the modernized Beiyang Fleet in the First Sino-Japanese War (1894–1895) led to the formation of the New Army under Yuan Shikai. The Guangxu Emperor, advised by Kang Youwei, then launched a comprehensive reform effort, the Hundred Days' Reform (1898). Empress Dowager Cixi, however, feared that precipitous change would lead to bureaucratic opposition and foreign intervention, and quickly suppressed it.
In the summer of 1900, the Boxer Uprising opposed foreign influence and murdered Chinese Christians and foreign missionaries. When Boxers entered Beijing, the Qing government ordered all foreigners to leave, but they and many Chinese Christians were besieged in the foreign legations quarter. An Eight-Nation Alliance sent the Seymour Expedition of Japanese, Russian, British, Italian, German, French, American, and Austrian troops to relieve the siege, but they were forced to retreat by Boxer and Qing troops at the Battle of Langfang. After the Alliance's attack on the Dagu Forts, the court declared war on the Alliance and authorized the Boxers to join with imperial armies. After fierce fighting at Tianjin, the Alliance formed the second, much larger Gaselee Expedition and finally reached Beijing; the Empress Dowager evacuated to Xi'an. The Boxer Protocol ended the war, exacting a tremendous indemnity.
The Qing court then instituted "New Policies" of administrative and legal reform, including abolition of the examination system. But young officials, military officers, and students debated reform, perhaps a constitutional monarchy, or the overthrow of the dynasty and the creation of a republic. They were inspired by an emerging public opinion formed by intellectuals such as Liang Qichao and the revolutionary ideas of Sun Yat-sen. A localised military uprising, the Wuchang uprising, began on 10 October 1911, in Wuchang (today part of Wuhan), and soon spread. The Republic of China was proclaimed on 1 January 1912, ending 2,000 years of dynastic rule.
The provisional government of the Republic of China was formed in Nanjing on 12 March 1912. Sun Yat-sen became President of the Republic of China, but he turned power over to Yuan Shikai, who commanded the New Army. Over the next few years, Yuan proceeded to abolish the national and provincial assemblies, and declared himself emperor of the Empire of China in late 1915. Yuan's imperial ambitions were fiercely opposed by his subordinates; faced with the prospect of rebellion, he abdicated in March 1916 and died of natural causes in June.
Yuan's death in 1916 left a power vacuum; the republican government was all but shattered. This opened the way for the Warlord Era, during which much of China was ruled by shifting coalitions of competing provincial military leaders and the Beiyang government. Intellectuals, disappointed in the failure of the Republic, launched the New Culture Movement.
In 1919, the May Fourth Movement began as a response to the pro-Japanese terms imposed on China by the Treaty of Versailles following World War I. It quickly became a nationwide protest movement. The protests were a moral success as the cabinet fell and China refused to sign the Treaty of Versailles, which had awarded German holdings of Shandong to Japan. Memory of the mistreatment at Versailles fuels resentment into the 21st century.
Political and intellectual ferment waxed strong throughout the 1920s and 1930s, as the historian Patricia Ebrey has observed.
In the 1920s Sun Yat-sen established a revolutionary base in Guangzhou and set out to unite the fragmented nation. He welcomed assistance from the Soviet Union (itself fresh from Lenin's Communist takeover) and entered into an alliance with the fledgling Chinese Communist Party (CCP). After Sun's death from cancer in 1925, one of his protégés, Chiang Kai-shek, seized control of the Nationalist Party (KMT) and succeeded in bringing most of south and central China under its rule in the Northern Expedition (1926–1927). Having defeated the warlords of south and central China by military force, Chiang was able to secure the nominal allegiance of the warlords in the north and establish the Nationalist government in Nanking. In 1927, Chiang turned on the CCP and relentlessly purged the Communist elements from his National Revolutionary Army (NRA). In 1934, driven from their mountain bases such as the Chinese Soviet Republic, the CCP forces embarked on the Long March across China's most desolate terrain to the northwest, where they established a guerrilla base at Yan'an in Shaanxi. During the Long March, the communists reorganized under a new leader, Mao Zedong (Mao Tse-tung).
The bitter Chinese Civil War between the Nationalists and the Communists continued, openly or clandestinely, through the 14-year-long Japanese occupation of various parts of the country (1931–1945). The two Chinese parties nominally formed a United Front to oppose the Japanese in 1937, during the Second Sino-Japanese War (1937–1945), which became a part of World War II. Japanese forces committed numerous war atrocities against the civilian population, including biological warfare (see Unit 731) and the Three Alls Policy (Sankō Sakusen), the three alls being: "Kill All, Burn All and Loot All". During the war, China was recognized as one of the Allied "Big Four" in the Declaration by United Nations. China was one of the four major Allies of World War II, and was later considered one of the primary victors in the war.
Following the defeat of Japan in 1945, the war between the Nationalist government forces and the CCP resumed, after failed attempts at reconciliation and a negotiated settlement. By 1949, the CCP had established control over most of the country. Odd Arne Westad says the Communists won the Civil War because they made fewer military mistakes than Chiang, and because in his search for a powerful centralized government, Chiang antagonized too many interest groups in China. Furthermore, his party was weakened in the war against the Japanese. Meanwhile, the Communists told different groups, such as peasants, exactly what they wanted to hear, and cloaked themselves in the cover of Chinese Nationalism. During the civil war both the Nationalists and Communists carried out mass atrocities, with millions of non-combatants killed by both sides. These included deaths from forced conscription and massacres. When the Nationalist government forces were defeated by CCP forces in mainland China in 1949, the Nationalist government retreated to Taiwan with its forces, along with Chiang and a large number of their supporters; the Nationalist government had taken effective control of Taiwan at the end of WWII as part of the overall Japanese surrender, when Japanese troops in Taiwan surrendered to the Republic of China troops.
Until the early 1970s the ROC was recognized as the sole legitimate government of China by the United Nations, the United States, and most Western nations, which refused to recognize the PRC on account of the Cold War. This changed in 1971 when the PRC was seated in the United Nations, replacing the ROC. The KMT ruled Taiwan under martial law until 1987, with the stated goals of being vigilant against Communist infiltration and preparing to retake mainland China; political dissent was therefore not tolerated during that period.
In the 1990s the ROC underwent major democratic reform, beginning with the 1991 resignation of the members of the Legislative Yuan and National Assembly elected in 1947. These groups had originally been created to represent mainland China constituencies. The restrictions on the use of Taiwanese languages in the broadcast media and in schools were also lifted. This culminated in the first direct presidential election in 1996, won by Lee Teng-hui against the Democratic Progressive Party (DPP) candidate and former dissident Peng Ming-min. In 2000, the KMT's status as the ruling party ended when the DPP took power, only for the KMT to regain that status in the 2008 election, won by Ma Ying-jeou.
Due to the controversial nature of Taiwan's political status, as of 2023 the ROC is recognized by 12 UN member states and the Holy See as the legitimate government of "China".
Major combat in the Chinese Civil War ended in 1949 with the KMT pulling out of the mainland, with the government relocating to Taipei and maintaining control only over a few islands. The CCP was left in control of mainland China. On 1 October 1949, Mao Zedong proclaimed the People's Republic of China. "Communist China" and "Red China" were two common names for the PRC.
The PRC was shaped by a series of campaigns and five-year plans. The economic and social plan known as the Great Leap Forward caused an estimated 45 million deaths. Mao's government carried out mass executions of landowners, instituted collectivisation and implemented the Laogai camp system. Execution, deaths from forced labor and other atrocities resulted in millions of deaths under Mao. In 1966 Mao and his allies launched the Cultural Revolution, which continued until Mao's death a decade later. The Cultural Revolution, motivated by power struggles within the Party and a fear of the Soviet Union, led to a major upheaval in Chinese society.
In 1972, at the peak of the Sino-Soviet split, Mao and Zhou Enlai met U.S. president Richard Nixon in Beijing to establish relations with the US. In the same year, the PRC was admitted to the United Nations in place of the Republic of China, with permanent membership of the Security Council.
A power struggle followed Mao's death in 1976. The Gang of Four were arrested and blamed for the excesses of the Cultural Revolution, marking the end of a turbulent political era in China. Deng Xiaoping outmaneuvered Mao's anointed successor, Chairman Hua Guofeng, and gradually emerged as the de facto leader over the next few years.
Deng Xiaoping was the paramount leader of China from 1978 to 1992; although he never became head of the party or state, his influence within the Party led the country to significant economic reforms. The CCP subsequently loosened governmental control over citizens' personal lives, and the communes were disbanded, with many peasants receiving multiple land leases, which greatly increased incentives and agricultural production. In addition, many free-market areas were opened; the most successful of these, Shenzhen in Guangdong, still exists today. This turn of events marked China's transition from a planned economy to a mixed economy with an increasingly open market environment, a system termed by some as "market socialism" and officially by the CCP as "Socialism with Chinese characteristics". The PRC adopted its current constitution on 4 December 1982.
In 1989 the death of former general secretary Hu Yaobang helped to spark the Tiananmen Square protests of that year, during which students and others campaigned for several months, speaking out against corruption and in favour of greater political reform, including democratic rights and freedom of speech. However, they were eventually put down on 4 June when Army troops and vehicles entered and forcibly cleared the square, with considerable numbers of fatalities. This event was widely reported, and brought worldwide condemnation and sanctions against the government.
CCP general secretary and PRC president Jiang Zemin and PRC premier Zhu Rongji, both former mayors of Shanghai, led the post-Tiananmen PRC in the 1990s. Under Jiang and Zhu's ten years of administration, the PRC's economic performance pulled an estimated 150 million peasants out of poverty and sustained an average annual gross domestic product growth rate of 11.2%. The country formally joined the World Trade Organization in 2001. In 1997 and 1999, the former European colonies of British Hong Kong and Portuguese Macau became the Hong Kong and Macau special administrative regions of the People's Republic of China, respectively.
Although the PRC needed economic growth to spur its development, the government began to worry that rapid economic growth was degrading the country's resources and environment. Another concern is that certain sectors of society are not sufficiently benefiting from the PRC's economic development; one example of this is the wide gap between urban and rural areas. As a result, under former CCP general secretary and President Hu Jintao and Premier Wen Jiabao, the PRC initiated policies to address issues of equitable distribution of resources, but the outcome was not known as of 2014. More than 40 million farmers were displaced from their land, usually for economic development, contributing to 87,000 demonstrations and riots across China in 2005. For much of the PRC's population, living standards improved very substantially and freedom increased, but political controls remained tight and rural areas poor.
According to the U.S. Department of Defense, as many as 3 million Uyghurs and members of other Muslim minority groups are being held in China's internment camps, located in the Xinjiang region, which American news reports often label "concentration camps". The camps were established in the late 2010s under Xi Jinping's administration. Human Rights Watch says that they have been used to indoctrinate Uyghurs and other Muslims since 2017 as part of a "people's war on terror", a policy announced in 2014. The camps have been criticized by the governments of many countries and by human rights organizations for alleged human rights abuses, including mistreatment, rape, and torture, with some alleging genocide.
The novel coronavirus SARS-CoV-2, which causes the disease COVID-19, was first detected in Wuhan, Hubei in 2019 and led to a global pandemic.
"title": "Prehistory"
},
{
"paragraph_id": 8,
"text": "The Neolithic age in China is considered to have begun about 10,000 years ago. Because the Neolithic is conventionally defined by the presence of agriculture, it follows that the Neolithic began at different times in the various regions of what is now China. Agriculture in China developed gradually, with initial domestication of a few grains and animals gradually expanding with the addition of many others over subsequent millennia. The earliest evidence of cultivated rice, found by the Yangtze River, was carbon-dated to 8,000 years ago. Early evidence for millet agriculture in the Yellow River valley was radiocarbon-dated to about 7000 BC. The Jiahu site is one of the best preserved early agricultural villages (7000 to 5800 BC). At Damaidi in Ningxia, 3,172 cliff carvings dating to 6000–5000 BC have been discovered, \"featuring 8,453 individual characters such as the sun, moon, stars, gods and scenes of hunting or grazing\", according to researcher Li Xiangshi. Written symbols, sometimes called proto-writing, were found at the site of Jiahu, which is dated around 7000 BC, Damaidi around 6000 BC, Dadiwan from 5800 BC to 5400 BC, and Banpo dating from the 5th millennium BC. With agriculture came increased population, the ability to store and redistribute crops, and the potential to support specialist craftsmen and administrators, which may have existed at late Neolithic sites like Taosi and the Liangzhu culture in the Yangtze delta. The cultures of the middle and late Neolithic in the central Yellow River valley are known respectively as the Yangshao culture (5000 BC to 3000 BC) and the Longshan culture (3000 BC to 2000 BC). Pigs and dogs were the earliest domesticated animals in the region, and after about 3000 BC domesticated cattle and sheep arrived from Western Asia. Wheat also arrived at this time but remained a minor crop. Fruit such as peaches, cherries and oranges, as well as chickens and various vegetables, were also domesticated in Neolithic China.",
"title": "Prehistory"
},
{
"paragraph_id": 9,
"text": "Bronze artifacts have been found at the Majiayao culture site (between 3100 and 2700 BC). The Bronze Age is also represented at the Lower Xiajiadian culture (2200–1600 BC) site in northeast China. Sanxingdui located in what is now Sichuan is believed to be the site of a major ancient city, of a previously unknown Bronze Age culture (between 2000 and 1200 BC). The site was first discovered in 1929 and then re-discovered in 1986. Chinese archaeologists have identified the Sanxingdui culture to be part of the ancient kingdom of Shu, linking the artifacts found at the site to its early legendary kings.",
"title": "Prehistory"
},
{
"paragraph_id": 10,
"text": "Ferrous metallurgy begins to appear in the late 6th century in the Yangzi Valley. A bronze hatchet with a blade of meteoric iron excavated near the city of Gaocheng in Shijiazhuang (now Hebei) has been dated to the 14th century BC. An Iron Age culture of the Tibetan Plateau has tentatively been associated with the Zhang Zhung culture described in early Tibetan writings.",
"title": "Prehistory"
},
{
"paragraph_id": 11,
"text": "Chinese historians in later periods were accustomed to the notion of one dynasty succeeding another, but the political situation in early China was much more complicated. Hence, as some scholars of China suggest, the Xia and the Shang can refer to political entities that existed concurrently, just as the early Zhou existed at the same time as the Shang. This bears similarities to how China, both contemporaneously and later, has been divided into states that were not one region, legally or culturally.",
"title": "Ancient China"
},
{
"paragraph_id": 12,
"text": "The earliest period once considered historical was the legendary era of the sage-emperors Yao, Shun, and Yu. Traditionally, the abdication system was prominent in this period, with Yao yielding his throne to Shun, who abdicated to Yu, who founded the Xia dynasty.",
"title": "Ancient China"
},
{
"paragraph_id": 13,
"text": "The Xia dynasty of China (from c. 2070 – c. 1600 BC) is the earliest of the Three Dynasties described in ancient historical records such as Sima Qian's Records of the Grand Historian and Bamboo Annals. The dynasty is generally considered mythical by Western scholars, but in China it is usually associated with the early Bronze Age site at Erlitou that was excavated in Henan in 1959. Since no writing was excavated at Erlitou or any other contemporaneous site, there is not enough evidence to prove whether the Xia dynasty ever existed. Some archaeologists claim that the Erlitou site was the capital of the Xia Dynasty. In any case, the site of Erlitou had a level of political organization that would not be incompatible with the legends of Xia recorded in later texts. More importantly, the Erlitou site has the earliest evidence for an elite who conducted rituals using cast bronze vessels, which would later be adopted by the Shang and Zhou.",
"title": "Ancient China"
},
{
"paragraph_id": 14,
"text": "Archaeological evidence, such as oracle bones and bronzes, as well as transmitted texts attest to the historical existence of the Shang dynasty (c. 1600–1046 BC). Findings from the earlier Shang period come from excavations at Erligang, in present-day Zhengzhou. Findings from the later Shang or Yin (殷) period, were found in profusion at Anyang, in modern-day Henan, the last of the Shang's capitals. The findings at Anyang include the earliest written record of the Chinese so far discovered: inscriptions of divination records in ancient Chinese writing on the bones or shells of animals—the \"oracle bones\", dating from around 1250 to 1046 BC.",
"title": "Ancient China"
},
{
"paragraph_id": 15,
"text": "A series of at least twenty-nine kings reigned over the Shang dynasty. Throughout their reigns, according to the Shiji, the capital city was moved six times. The final and most important move was to Yin during the reign of Pan Geng, around 1300 BC. The term Yin dynasty has been synonymous with the Shang dynasty in history, although it has lately been used to refer specifically to the latter half of the Shang dynasty.",
"title": "Ancient China"
},
{
"paragraph_id": 16,
"text": "Although written records found at Anyang confirm the existence of the Shang dynasty, Western scholars are often hesitant to associate settlements that are contemporaneous with the Anyang settlement with the Shang dynasty. For example, archaeological findings at Sanxingdui suggest a technologically advanced civilization culturally unlike Anyang. The evidence is inconclusive in proving how far the Shang realm extended from Anyang. The leading hypothesis is that Anyang, ruled by the same Shang in the official history, coexisted and traded with numerous other culturally diverse settlements in the area that is now referred to as China proper.",
"title": "Ancient China"
},
{
"paragraph_id": 17,
"text": "The Zhou dynasty (1046 BC to about 256 BC) is the longest-lasting dynasty in Chinese history, though its power declined steadily over the almost eight centuries of its existence. In the late 2nd millennium BC, the Zhou dynasty arose in the Wei River valley of modern western Shaanxi Province, where they were appointed Western Protectors by the Shang. A coalition led by the ruler of the Zhou, King Wu, defeated the Shang at the Battle of Muye. They took over most of the central and lower Yellow River valley and enfeoffed their relatives and allies in semi-independent states across the region. Several of these states eventually became more powerful than the Zhou kings.",
"title": "Ancient China"
},
{
"paragraph_id": 18,
"text": "The kings of Zhou invoked the concept of the Mandate of Heaven to legitimize their rule, a concept that was influential for almost every succeeding dynasty. Like Shangdi, Heaven (tian) ruled over all the other gods, and it decided who would rule China. It was believed that a ruler lost the Mandate of Heaven when natural disasters occurred in great number, and when, more realistically, the sovereign had apparently lost his concern for the people. In response, the royal house would be overthrown, and a new house would rule, having been granted the Mandate of Heaven.",
"title": "Ancient China"
},
{
"paragraph_id": 19,
"text": "The Zhou established two capitals Zongzhou (near modern Xi'an) and Chengzhou (Luoyang), with the king's court moving between them regularly. The Zhou alliance gradually expanded eastward into Shandong, southeastward into the Huai River valley, and southward into the Yangtze River valley.",
"title": "Ancient China"
},
{
"paragraph_id": 20,
"text": "In 771 BC, King You and his forces were defeated in the Battle of Mount Li by rebel states and Quanrong barbarians. The rebel aristocrats established a new ruler, King Ping, in Luoyang, beginning the second major phase of the Zhou dynasty: the Eastern Zhou period, which is divided into the Spring and Autumn and Warring States periods. The former period is named after the famous Spring and Autumn Annals. The decline of central power left a vacuum. The Zhou empire now consisted of hundreds of tiny states, some of them only as large as a walled town and surrounding land. These states began to fight against one another and vie for hegemony. The more powerful states tended to conquer and incorporate the weaker ones, so the number of states declined over time. By the 6th century BC most small states had disappeared by being annexed and just a few large and powerful principalities remained. Some southern states, such as Chu and Wu, claimed independence from the Zhou, who undertook wars against some of them (Wu and Yue). Many new cities were established in this period and society gradually became more urbanized and commercialized. Many famous individuals such as Laozi, Confucius and Sun Tzu lived during this chaotic period.",
"title": "Ancient China"
},
{
"paragraph_id": 21,
"text": "Conflict in this period occurred both between and within states. Warfare between states forced the surviving states to develop better administrations to mobilize more soldiers and resources. Within states there was constant jockeying between elite families. For example, the three most powerful families in the Jin state—Zhao, Wei and Han—eventually overthrew the ruling family and partitioned the state between them.",
"title": "Ancient China"
},
{
"paragraph_id": 22,
"text": "The Hundred Schools of Thought of classical Chinese philosophy began blossoming during this period and the subsequent Warring States period. Such influential intellectual movements as Confucianism, Taoism, Legalism and Mohism were founded, partly in response to the changing political world. The first two philosophical thoughts would have an enormous influence on Chinese culture.",
"title": "Ancient China"
},
{
"paragraph_id": 23,
"text": "After further political consolidations, seven prominent states remained during the 5th century BC. The years in which these states battled each other is known as the Warring States period. Though the Zhou king nominally remained as such until 256 BC, he was largely a figurehead that held little real power.",
"title": "Ancient China"
},
{
"paragraph_id": 24,
"text": "Numerous developments were made during this period in the areas of culture and mathematics—including the Zuo Zhuan within the Spring and Autumn Annals (a literary work summarizing the preceding Spring and Autumn period), and the bundle of 21 bamboo slips from the Tsinghua collection, dated to 305 BC—being the world's earliest known example of a two-digit, base-10 multiplication table. The Tsinghua collection indicates that sophisticated commercial arithmetic was already established during this period.",
"title": "Ancient China"
},
{
"paragraph_id": 25,
"text": "As neighboring territories of the seven states were annexed (including areas of modern Sichuan and Liaoning), they were now to be governed under an administrative system of commanderies and prefectures. This system had been in use elsewhere since the Spring and Autumn period, and its influence on administration would prove resilient—its terminology can still be seen in the contemporaneous sheng and xian (\"provinces\" and \"counties\") of contemporary China.",
"title": "Ancient China"
},
{
"paragraph_id": 26,
"text": "The state of Qin became dominant in the waning decades of the Warring States period, conquering the Shu capital of Jinsha on the Chengdu Plain; and then eventually driving Chu from its place in the Han River valley. Qin imitated the administrative reforms of the other states, thereby becoming a powerhouse. Its final expansion began during the reign of Ying Zheng, ultimately unifying the other six regional powers, and enabling him to proclaim himself as China's first emperor—known to history as Qin Shi Huang.",
"title": "Ancient China"
},
{
"paragraph_id": 27,
"text": "Ying Zheng's establishment of the Qin dynasty (秦朝) in 221 BC effectively formalized the region as an empire, rather than a state, and its pivotal status probably led to \"Qin\" (秦) later evolving into the Western term \"China\". To emphasize his sole rule, Zheng proclaimed himself Shi Huangdi (始皇帝; \"First August Emperor\"); the Huangdi title, derived from Chinese mythology, become the standard for subsequent rulers. Based in Xianyang, the empire was a centralized bureaucratic monarchy, a governing scheme which dominated the future of Imperial China. In an effort to improve the Zhou's perceived failures, this system consisted of more than 36 commanderies (郡; jun), made up of counties (县; xian) and progressively smaller divisions, each with a local leader.",
"title": "Imperial China"
},
{
"paragraph_id": 28,
"text": "Many aspects of society were informed by Legalism, a state ideology promoted by the emperor and his chancellor Li Si that was introduced at an earlier time by Shang Yang. In legal matters this philosophy emphasized mutual responsibility in disputes and severe punishments, while economic practices included the general encouragement of agriculture and repression of trade. Reforms occurred in weights and measures, writing styles (seal script) and metal currency (Ban Liang), all of which were standardized. Traditionally, Qin Shi Huang is regarded as ordering a mass burning of books and the live burial of scholars under the guise of Legalism, though contemporary scholars express considerable doubt on the historicity of this event. Despite its importance, Legalism was probably supplemented in non-political matters by Confucianism for social and moral beliefs and the five-element Wuxing (五行) theories for cosmological thought.",
"title": "Imperial China"
},
{
"paragraph_id": 29,
"text": "The Qin administration kept exhaustive records on their population, collecting information on their sex, age, social status and residence. Commoners, who made up over 90% of the population, \"suffered harsh treatment\" according to the historian Patricia Buckley Ebrey, as they were often conscripted into forced labor for the empire's construction projects. This included a massive system of imperial highways in 220 BC, which ranged around 4,250 miles (6,840 km) altogether. Other major construction projects were assigned to the general Meng Tian, who concurrently led a successful campaign against the northern Xiongnu peoples (210s BC), reportedly with 300,000 troops. Under Qin Shi Huang's orders, Meng supervised the combining of numerous ancient walls into what came to be known as the Great Wall of China and oversaw the building of a 500 miles (800 km) straight highway between northern and southern China.",
"title": "Imperial China"
},
{
"paragraph_id": 30,
"text": "After Qin Shi Huang's death the Qin government drastically deteriorated and eventually capitulated in 207 BC after the Qin capital was captured and sacked by rebels, which would ultimately lead to the establishment of the Han Empire.",
"title": "Imperial China"
},
{
"paragraph_id": 31,
"text": "The Han dynasty was founded by Liu Bang, who emerged victorious in the Chu–Han Contention that followed the fall of the Qin dynasty. A golden age in Chinese history, the Han dynasty's long period of stability and prosperity consolidated the foundation of China as a unified state under a central imperial bureaucracy, which was to last intermittently for most of the next two millennia. During the Han dynasty, territory of China was extended to most of the China proper and to areas far west. Confucianism was officially elevated to orthodox status and was to shape the subsequent Chinese civilization. Art, culture and science all advanced to unprecedented heights. With the profound and lasting impacts of this period of Chinese history, the dynasty name \"Han\" had been taken as the name of the Chinese people, now the dominant ethnic group in modern China, and had been commonly used to refer to Chinese language and written characters.",
"title": "Imperial China"
},
{
"paragraph_id": 32,
"text": "After the initial laissez-faire policies of Emperors Wen and Jing, the ambitious Emperor Wu brought the empire to its zenith. To consolidate his power, he disenfranchised the majority of imperial relatives, appointing military governors to control their former lands. As a further step, he extended patronage to Confucianism, which emphasizes stability and order in a well-structured society. Imperial Universities were established to support its study. At the urging of his Legalist advisors, however, he also strengthened the fiscal structure of the dynasty with government monopolies.",
"title": "Imperial China"
},
{
"paragraph_id": 33,
"text": "Major military campaigns were launched to weaken the nomadic Xiongnu Empire, limiting their influence north of the Great Wall. Along with the diplomatic efforts led by Zhang Qian, the sphere of influence of the Han Empire extended to the states in the Tarim Basin, opened up the Silk Road that connected China to the west, stimulating bilateral trade and cultural exchange. To the south, various small kingdoms far beyond the Yangtze River Valley were formally incorporated into the empire.",
"title": "Imperial China"
},
{
"paragraph_id": 34,
"text": "Emperor Wu also dispatched a series of military campaigns against the Baiyue tribes. The Han annexed Minyue in 135 BC and 111 BC, Nanyue in 111 BC, and Dian in 109 BC. Migration and military expeditions led to the cultural assimilation of the south. It also brought the Han into contact with kingdoms in Southeast Asia, introducing diplomacy and trade.",
"title": "Imperial China"
},
{
"paragraph_id": 35,
"text": "After Emperor Wu the empire slipped into gradual stagnation and decline. Economically, the state treasury was strained by excessive campaigns and projects, while land acquisitions by elite families gradually drained the tax base. Various consort clans exerted increasing control over strings of incompetent emperors and eventually the dynasty was briefly interrupted by the usurpation of Wang Mang.",
"title": "Imperial China"
},
{
"paragraph_id": 36,
"text": "In AD 9 the usurper Wang Mang claimed that the Mandate of Heaven called for the end of the Han dynasty and the rise of his own, and he founded the short-lived Xin dynasty. Wang Mang started an extensive program of land and other economic reforms, including the outlawing of slavery and land nationalization and redistribution. These programs, however, were never supported by the landholding families, because they favored the peasants. The instability of power brought about chaos, uprisings, and loss of territories. This was compounded by mass flooding of the Yellow River; silt buildup caused it to split into two channels and displaced large numbers of farmers. Wang Mang was eventually killed in Weiyang Palace by an enraged peasant mob in AD 23.",
"title": "Imperial China"
},
{
"paragraph_id": 37,
"text": "Emperor Guangwu reinstated the Han dynasty with the support of landholding and merchant families at Luoyang, east of the former capital Xi'an. Thus, this new era is termed the Eastern Han dynasty. With the capable administrations of Emperors Ming and Zhang, former glories of the dynasty were reclaimed, with brilliant military and cultural achievements. The Xiongnu Empire was decisively defeated. The diplomat and general Ban Chao further expanded the conquests across the Pamirs to the shores of the Caspian Sea, thus reopening the Silk Road, and bringing trade, foreign cultures, along with the arrival of Buddhism. With extensive connections with the west, the first of several Roman embassies to China were recorded in Chinese sources, coming from the sea route in AD 166, and a second one in AD 284.",
"title": "Imperial China"
},
{
"paragraph_id": 38,
"text": "The Eastern Han dynasty was one of the most prolific eras of science and technology in ancient China, notably the historic invention of papermaking by Cai Lun, and the numerous scientific and mathematical contributions by the famous polymath Zhang Heng.",
"title": "Imperial China"
},
{
"paragraph_id": 39,
"text": "By the 2nd century, the empire declined amidst land acquisitions, invasions, and feuding between consort clans and eunuchs. The Yellow Turban Rebellion broke out in AD 184, ushering in an era of warlords. In the ensuing turmoil, three states emerged, trying to gain predominance and reunify the land, giving this historical period its name. The classic historical novel Romance of the Three Kingdoms dramatizes events of this period.",
"title": "Imperial China"
},
{
"paragraph_id": 40,
"text": "The warlord Cao Cao reunified the north in 208, and in 220 his son accepted the abdication of Emperor Xian of Han, thus initiating the Wei dynasty. Soon, Wei's rivals Shu and Wu proclaimed their independence. This period was characterized by a gradual decentralization of the state that had existed during the Qin and Han dynasties, and an increase in the power of great families.",
"title": "Imperial China"
},
{
"paragraph_id": 41,
"text": "In 266, the Jin dynasty overthrew the Wei and later unified the country in 280, but this union was short-lived.",
"title": "Imperial China"
},
{
"paragraph_id": 42,
"text": "The Jin dynasty was severely weakened by War of the Eight Princes and lost control of northern China after non-Han Chinese settlers rebelled and captured Luoyang and Chang'an. In 317, the Jin prince Sima Rui, based in modern-day Nanjing, became emperor and continued the dynasty, now known as the Eastern Jin, which held southern China for another century. Prior to this move, historians refer to the Jin dynasty as the Western Jin.",
"title": "Imperial China"
},
{
"paragraph_id": 43,
"text": "Northern China fragmented into a series of independent states known as the Sixteen Kingdoms, most of which were founded by Xiongnu, Xianbei, Jie, Di and Qiang rulers. These non-Han peoples were ancestors of the Turks, Mongols, and Tibetans. Many had, to some extent, been \"sinicized\" long before their ascent to power. In fact, some of them, notably the Qiang and the Xiongnu, had already been allowed to live in the frontier regions within the Great Wall since late Han times. During this period, warfare ravaged the north and prompted large-scale Han Chinese migration south to the Yangtze River Basin and Delta.",
"title": "Imperial China"
},
{
"paragraph_id": 44,
"text": "In the early 5th century China entered a period known as the Northern and Southern dynasties, in which parallel regimes ruled the northern and southern halves of the country. In the south, the Eastern Jin gave way to the Liu Song, Southern Qi, Liang and finally Chen. Each of these Southern dynasties were led by Han Chinese ruling families and used Jiankang (modern Nanjing) as the capital. They held off attacks from the north and preserved many aspects of Chinese civilization, while northern barbarian regimes began to sinify.",
"title": "Imperial China"
},
{
"paragraph_id": 45,
"text": "In the north the last of the Sixteen Kingdoms was extinguished in 439 by the Northern Wei, a kingdom founded by the Xianbei, a nomadic people who unified northern China. The Northern Wei eventually split into the Eastern and Western Wei, which then became the Northern Qi and Northern Zhou. These regimes were dominated by Xianbei or Han Chinese who had married into Xianbei families. During this period most Xianbei people adopted Han surnames, eventually leading to complete assimilation into the Han.",
"title": "Imperial China"
},
{
"paragraph_id": 46,
"text": "Despite the division of the country, Buddhism spread throughout the land. In southern China, fierce debates about whether Buddhism should be allowed were held frequently by the royal court and nobles. By the end of the era, Buddhists and Taoists had become much more tolerant of each other.",
"title": "Imperial China"
},
{
"paragraph_id": 47,
"text": "The short-lived Sui dynasty was a pivotal period in Chinese history. Founded by Emperor Wen in 581 in succession of the Northern Zhou, the Sui went on to conquer the Southern Chen in 589 to reunify China, ending three centuries of political division. The Sui pioneered many new institutions, including the government system of Three Departments and Six Ministries, imperial examinations for selecting officials from commoners, while improved on the systems of fubing system of the army conscription and the equal-field system of land distributions. These policies, which were adopted by later dynasties, brought enormous population growth, and amassed excessive wealth to the state. Standardized coinage was enforced throughout the unified empire. Buddhism took root as a prominent religion and was supported officially. Sui China was known for its numerous mega-construction projects. Intended for grains shipment and transporting troops, the Grand Canal was constructed, linking the capitals Daxing (Chang'an) and Luoyang to the wealthy southeast region, and in another route, to the northeast border. The Great Wall was also expanded, while series of military conquests and diplomatic maneuvers further pacified its borders. However, the massive invasions of the Korean Peninsula during the Goguryeo–Sui War failed disastrously, triggering widespread revolts that led to the fall of the dynasty.",
"title": "Imperial China"
},
{
"paragraph_id": 48,
"text": "The Tang dynasty was a golden age of Chinese civilization, a prosperous, stable, and creative period with significant developments in culture, art, literature, particularly poetry, and technology. Buddhism became the predominant religion for the common people. Chang'an (modern Xi'an), the national capital, was the largest city in the world during its time.",
"title": "Imperial China"
},
{
"paragraph_id": 49,
"text": "The first emperor, Emperor Gaozu, came to the throne on 18 June 618, placed there by his son, Li Shimin, who became the second emperor, Taizong, one of the greatest emperors in Chinese history. Combined military conquests and diplomatic maneuvers reduced threats from Central Asian tribes, extended the border, and brought neighboring states into a tributary system. Military victories in the Tarim Basin kept the Silk Road open, connecting Chang'an to Central Asia and areas far to the west. In the south, lucrative maritime trade routes from port cities such as Guangzhou connected with distant countries, and foreign merchants settled in China, encouraging a cosmopolitan culture. The Tang culture and social systems were observed and adapted by neighboring countries, most notably Japan. Internally the Grand Canal linked the political heartland in Chang'an to the agricultural and economic centers in the eastern and southern parts of the empire. Xuanzang, a Chinese Buddhist monk, scholar, traveller, and translator travelled to India on his own and returned with \"over six hundred Mahayana and Hinayana texts, seven statues of the Buddha and more than a hundred sarira relics.\"",
"title": "Imperial China"
},
{
"paragraph_id": 50,
"text": "The prosperity of the early Tang dynasty was abetted by a centralized bureaucracy. The government was organized as \"Three Departments and Six Ministries\" to separately draft, review, and implement policies. These departments were run by royal family members and landed aristocrats, but as the dynasty wore on, were joined or replaced by scholar officials selected by imperial examinations, setting patterns for later dynasties.",
"title": "Imperial China"
},
{
"paragraph_id": 51,
"text": "Under the Tang \"equal-field system\" all land was owned by the Emperor and granted to each family according to household size. Men granted land were conscripted for military service for a fixed period each year, a military policy known as the fubing system. These policies stimulated a rapid growth in productivity and a significant army without much burden on the state treasury. By the dynasty's midpoint, however, standing armies had replaced conscription, and land was continuously falling into the hands of private owners and religious institutions granted exemptions.",
"title": "Imperial China"
},
{
"paragraph_id": 52,
"text": "The dynasty continued to flourish under the rule of Empress Wu Zetian, the only official empress regnant in Chinese history, and reached its zenith during the long reign of Emperor Xuanzong, who oversaw an empire that stretched from the Pacific to the Aral Sea with at least 50 million people. There were vibrant artistic and cultural creations, including works of the greatest Chinese poets, Li Bai and Du Fu.",
"title": "Imperial China"
},
{
"paragraph_id": 53,
"text": "At the zenith of prosperity of the empire, the An Lushan Rebellion from 755 to 763 was a watershed event. War, disease, and economic disruption devastated the population and drastically weakened the central imperial government. Upon suppression of the rebellion, regional military governors, known as jiedushi, gained increasingly autonomous status. With loss of revenue from land tax, the central imperial government came to rely heavily on salt monopoly. Externally, former submissive states raided the empire and the vast border territories were lost for centuries. Nevertheless, civil society recovered and thrived amidst the weakened imperial bureaucracy.",
"title": "Imperial China"
},
{
"paragraph_id": 54,
"text": "In late Tang period the empire was worn out by recurring revolts of the regional military governors, while scholar-officials engaged in fierce factional strife and corrupted eunuchs amassed immense power. Catastrophically, the Huang Chao Rebellion, from 874 to 884, devastated the entire empire for a decade. The sack of the southern port Guangzhou in 879 was followed by the massacre of most of its inhabitants, especially the large foreign merchant enclaves. By 881, both capitals, Luoyang and Chang'an, fell successively. The reliance on ethnic Han and Turkic warlords in suppressing the rebellion increased their power and influence. Consequently, the fall of the dynasty following Zhu Wen's usurpation led to an era of division.",
"title": "Imperial China"
},
{
"paragraph_id": 55,
"text": "The period of political disunity between the Tang and the Song, known as the Five Dynasties and Ten Kingdoms period, lasted from 907 to 960. During this half-century, China was in all respects a multi-state system. Five regimes, namely, (Later) Liang, Tang, Jin, Han and Zhou, rapidly succeeded one another in control of the traditional Imperial heartland in northern China. Among the regimes, rulers of (Later) Tang, Jin and Han were sinicized Shatuo Turks, which ruled over the ethnic majority of Han Chinese. More stable and smaller regimes of mostly ethnic Han rulers coexisted in south and western China over the period, cumulatively constituted the \"Ten Kingdoms\".",
"title": "Imperial China"
},
{
"paragraph_id": 56,
"text": "Amidst political chaos in the north, the strategic Sixteen Prefectures (region along today's Great Wall) were ceded to the emerging Khitan Liao dynasty, which drastically weakened the defense of China proper against northern nomadic empires. To the south, Vietnam gained lasting independence after being a Chinese prefecture for many centuries. With wars dominating in Northern China, there were mass southward migrations of population, which further enhanced the southward shift of cultural and economic centers in China. The era ended with the coup of Later Zhou general Zhao Kuangyin, and the establishment of the Song dynasty in 960, which eventually annihilated the remains of the \"Ten Kingdoms\" and reunified China.",
"title": "Imperial China"
},
{
"paragraph_id": 57,
"text": "In 960, the Song dynasty was founded by Emperor Taizu, with its capital established in Kaifeng (then known as Bianjing). In 979, the Song dynasty reunified most of China proper, while large swaths of the outer territories were occupied by sinicized nomadic empires. The Khitan Liao dynasty, which lasted from 907 to 1125, ruled over Manchuria, Mongolia, and parts of Northern China. Meanwhile, in what are now the north-western Chinese provinces of Gansu, Shaanxi, and Ningxia, the Tangut tribes founded the Western Xia dynasty from 1032 to 1227.",
"title": "Imperial China"
},
{
"paragraph_id": 58,
"text": "Aiming to recover the strategic sixteen prefectures lost in the previous dynasty, campaigns were launched against the Liao dynasty in the early Song period, which all ended in failure. Then in 1004, the Liao cavalry swept over the exposed North China Plain and reached the outskirts of Kaifeng, forcing the Song's submission and then agreement to the Chanyuan Treaty, which imposed heavy annual tributes from the Song treasury. The treaty was a significant reversal of Chinese dominance of the traditional tributary system. Yet the annual outflow of Song's silver to the Liao was paid back through the purchase of Chinese goods and products, which expanded the Song economy, and replenished its treasury. This dampened the incentive for the Song to further campaign against the Liao. Meanwhile, this cross-border trade and contact induced further sinicization within the Liao Empire, at the expense of its military might which was derived from its nomadic lifestyle. Similar treaties and social-economical consequences occurred in Song's relations with the Jin dynasty.",
"title": "Imperial China"
},
{
"paragraph_id": 59,
"text": "Within the Liao Empire the Jurchen tribes revolted against their overlords to establish the Jin dynasty in 1115. In 1125, the devastating Jin cataphract annihilated the Liao dynasty, while remnants of Liao court members fled to Central Asia to found the Qara Khitai Empire (Western Liao dynasty). Jin's invasion of the Song dynasty followed swiftly. In 1127, Kaifeng was sacked, a massive catastrophe known as the Jingkang Incident, ending the Northern Song dynasty. Later the entire north of China was conquered. The survived members of Song court regrouped in the new capital city of Hangzhou, and initiated the Southern Song dynasty, which ruled territories south of the Huai River. In the ensuing years, the territory and population of China were divided between the Song dynasty, the Jin dynasty and the Western Xia dynasty. The era ended with the Mongol conquest, as Western Xia fell in 1227, the Jin dynasty in 1234, and finally the Southern Song dynasty in 1279.",
"title": "Imperial China"
},
{
"paragraph_id": 60,
"text": "Despite its military weakness, the Song dynasty is widely considered to be the high point of classical Chinese civilization. The Song economy, facilitated by technology advancement, had reached a level of sophistication probably unseen in world history before its time. The population soared to over 100 million and the living standards of common people improved tremendously due to improvements in rice cultivation and the wide availability of coal for production. The capital cities of Kaifeng and subsequently Hangzhou were both the most populous cities in the world for their time, and encouraged vibrant civil societies unmatched by previous Chinese dynasties. Although land trading routes to the far west were blocked by nomadic empires, there was extensive maritime trade with neighboring states, which facilitated the use of Song coinage as the de facto currency of exchange. Giant wooden vessels equipped with compasses traveled throughout the China Seas and northern Indian Ocean. The concept of insurance was practised by merchants to hedge the risks of such long-haul maritime shipments. With prosperous economic activities, the historically first use of paper currency emerged in the western city of Chengdu, as a supplement to the existing copper coins.",
"title": "Imperial China"
},
{
"paragraph_id": 61,
"text": "The Song dynasty was considered to be the golden age of great advancements in science and technology of China, thanks to innovative scholar-officials such as Su Song (1020–1101) and Shen Kuo (1031–1095). Inventions such as the hydro-mechanical astronomical clock, the first continuous and endless power-transmitting chain, woodblock printing and paper money were all invented during the Song dynasty.",
"title": "Imperial China"
},
{
"paragraph_id": 62,
"text": "There was court intrigue between the political reformers and conservatives, led by the chancellors Wang Anshi and Sima Guang, respectively. By the mid-to-late 13th century, the Chinese had adopted the dogma of Neo-Confucian philosophy formulated by Zhu Xi. Enormous literary works were compiled during the Song dynasty, such as the innovative historical narrative Zizhi Tongjian (\"Comprehensive Mirror to Aid in Government\"). The invention of movable-type printing further facilitated the spread of knowledge. Culture and the arts flourished, with grandiose artworks such as Along the River During the Qingming Festival and Eighteen Songs of a Nomad Flute, along with great Buddhist painters such as the prolific Lin Tinggui.",
"title": "Imperial China"
},
{
"paragraph_id": 63,
"text": "The Song dynasty was also a period of major innovation in the history of warfare. Gunpowder, while invented in the Tang dynasty, was first put into use in battlefields by the Song army, inspiring a succession of new firearms and siege engines designs. During the Southern Song dynasty, as its survival hinged decisively on guarding the Yangtze and Huai River against the cavalry forces from the north, the first standing navy in China was assembled in 1132, with its admiral's headquarters established at Dinghai. Paddle-wheel warships equipped with trebuchets could launch incendiary bombs made of gunpowder and lime, as recorded in Song's victory over the invading Jin forces at the Battle of Tangdao in the East China Sea, and the Battle of Caishi on the Yangtze River in 1161.",
"title": "Imperial China"
},
{
"paragraph_id": 64,
"text": "The advances in civilization during the Song dynasty came to an abrupt end following the devastating Mongol conquest, during which the population sharply dwindled, with a marked contraction in economy. Despite viciously halting Mongol advance for more than three decades, the Southern Song capital Hangzhou fell in 1276, followed by the final annihilation of the Song standing navy at the Battle of Yamen in 1279.",
"title": "Imperial China"
},
{
"paragraph_id": 65,
"text": "The Yuan dynasty was formally proclaimed in 1271, when the Great Khan of Mongol, Kublai Khan, one of the grandsons of Genghis Khan, assumed the additional title of Emperor of China, and considered his inherited part of the Mongol Empire as a Chinese dynasty. In the preceding decades, the Mongols had conquered the Jin dynasty in Northern China, and the Southern Song dynasty fell in 1279 after a protracted and bloody war. The Mongol Yuan dynasty became the first conquest dynasty in Chinese history to rule the entire China proper and its population as an ethnic minority. The dynasty also directly controlled the Mongol heartland and other regions, inheriting the largest share of territory of the eastern Mongol empire, which roughly coincided with the modern area of China and nearby regions in East Asia. Further expansion of the empire was halted after defeats in the invasions of Japan and Vietnam. Following the previous Jin dynasty, the capital of Yuan dynasty was established at Khanbaliq (also known as Dadu, modern-day Beijing). The Grand Canal was reconstructed to connect the remote capital city to economic hubs in southern part of China, setting the precedence and foundation where Beijing would largely remain as the capital of the successive regimes that unified China mainland.",
"title": "Imperial China"
},
{
"paragraph_id": 66,
"text": "A series of Mongol civil wars in the late 13th century led to the division of the Mongol Empire. In 1304 the emperors of the Yuan dynasty were upheld as the nominal Khagan over western khanates (the Chagatai Khanate, the Golden Horde and the Ilkhanate), which nonetheless remained de facto autonomous. The era was known as Pax Mongolica, when much of the Asian continent was ruled by the Mongols. For the first and only time in history, the Silk Road was controlled entirely by a single state, facilitating the flow of people, trade, and cultural exchange. A network of roads and a postal system were established to connect the vast empire. Lucrative maritime trade, developed from the previous Song dynasty, continued to flourish, with Quanzhou and Hangzhou emerging as the largest ports in the world. Adventurous travelers from the far west, most notably the Venetian, Marco Polo, would settle in China for decades. Upon his return, his detail travel record inspired generations of medieval Europeans with the splendors of the far East. The Yuan dynasty was the first ancient economy, where paper currency, known at the time as Jiaochao, was used as the predominant medium of exchange. Its unrestricted issuance in the late Yuan dynasty inflicted hyperinflation, which eventually brought the downfall of the dynasty.",
"title": "Imperial China"
},
{
"paragraph_id": 67,
"text": "While the Mongol rulers of the Yuan dynasty adopted substantially to Chinese culture, their sinicization was of lesser extent compared to earlier conquest dynasties in Chinese history. For preserving racial superiority as the conqueror and ruling class, traditional nomadic customs and heritage from the Mongolian Steppe were held in high regard. On the other hand, the Mongol rulers also adopted flexibly to a variety of cultures from many advanced civilizations within the vast empire. Traditional social structure and culture in China underwent immense transform during the Mongol dominance. Large groups of foreign migrants settled in China, who enjoyed elevated social status over the majority Han Chinese, while enriching Chinese culture with foreign elements. The class of scholar officials and intellectuals, traditional bearers of elite Chinese culture, lost substantial social status. This stimulated the development of culture of the common folks. There were prolific works in zaju variety shows and literary songs (sanqu), which were written in a distinctive poetry style known as qu. Novels of vernacular style gained unprecedented status and popularity.",
"title": "Imperial China"
},
{
"paragraph_id": 68,
"text": "Before the Mongol invasion, Chinese dynasties reported approximately 120 million inhabitants; after the conquest had been completed in 1279, the 1300 census reported roughly 60 million people. This major decline is not necessarily due only to Mongol killings. Scholars such as Frederick W. Mote argue that the wide drop in numbers reflects an administrative failure to record rather than an actual decrease; others such as Timothy Brook argue that the Mongols created a system of enserfment among a huge portion of the Chinese populace, causing many to disappear from the census altogether; other historians including William McNeill and David Morgan consider that plague was the main factor behind the demographic decline during this period. In the 14th century China suffered additional depredations from epidemics of plague, estimated to have killed around a quarter of the population of China.",
"title": "Imperial China"
},
{
"paragraph_id": 69,
"text": "Throughout the Yuan dynasty, there was some general sentiment among the populace against the Mongol dominance. Yet rather than the nationalist cause, it was mainly strings of natural disasters and incompetent governance that triggered widespread peasant uprisings since the 1340s. After the massive naval engagement at Lake Poyang, Zhu Yuanzhang prevailed over other rebel forces in the south. He proclaimed himself emperor and founded the Ming dynasty in 1368. The same year his northern expedition army captured the capital Khanbaliq. The Yuan remnants fled back to Mongolia and sustained the regime. Other Mongol Khanates in Central Asia continued to exist after the fall of Yuan dynasty in China.",
"title": "Imperial China"
},
{
"paragraph_id": 70,
"text": "The Ming dynasty was founded by Zhu Yuanzhang in 1368, who proclaimed himself as the Hongwu Emperor. The capital was initially set at Nanjing, and was later moved to Beijing from Yongle Emperor's reign onward.",
"title": "Imperial China"
},
{
"paragraph_id": 71,
"text": "Urbanization increased as the population grew and as the division of labor grew more complex. Large urban centers, such as Nanjing and Beijing, also contributed to the growth of private industry. In particular, small-scale industries grew up, often specializing in paper, silk, cotton, and porcelain goods. For the most part, however, relatively small urban centers with markets proliferated around the country. Town markets mainly traded food, with some necessary manufactures such as pins or oil.",
"title": "Imperial China"
},
{
"paragraph_id": 72,
"text": "Despite the xenophobia and intellectual introspection characteristic of the increasingly popular new school of neo-Confucianism, China under the early Ming dynasty was not isolated. Foreign trade and other contacts with the outside world, particularly Japan, increased considerably. Chinese merchants explored all of the Indian Ocean, reaching East Africa with the voyages of Zheng He.",
"title": "Imperial China"
},
{
"paragraph_id": 73,
"text": "The Hongwu Emperor, being the only founder of a Chinese dynasty who was also of peasant origin, had laid the foundation of a state that relied fundamentally in agriculture. Commerce and trade, which flourished in the previous Song and Yuan dynasties, were less emphasized. Neo-feudal landholdings of the Song and Mongol periods were expropriated by the Ming rulers. Land estates were confiscated by the government, fragmented, and rented out. Private slavery was forbidden. Consequently, after the death of the Yongle Emperor, independent peasant landholders predominated in Chinese agriculture. These laws might have paved the way to removing the worst of the poverty during the previous regimes. Towards later era of the Ming dynasty, with declining government control, commerce, trade and private industries revived.",
"title": "Imperial China"
},
{
"paragraph_id": 74,
"text": "The dynasty had a strong and complex central government that unified and controlled the empire. The emperor's role became more autocratic, although Hongwu Emperor necessarily continued to use what he called the \"Grand Secretariat\" to assist with the immense paperwork of the bureaucracy, including memorials (petitions and recommendations to the throne), imperial edicts in reply, reports of various kinds, and tax records. It was this same bureaucracy that later prevented the Ming government from being able to adapt to changes in society, and eventually led to its decline.",
"title": "Imperial China"
},
{
"paragraph_id": 75,
"text": "The Yongle Emperor strenuously tried to extend China's influence beyond its borders by demanding other rulers send ambassadors to China to present tribute. A large navy was built, including four-masted ships displacing 1,500 tons. A standing army of 1 million troops was created. The Chinese armies conquered and occupied Vietnam for around 20 years, while the Chinese fleet sailed the China seas and the Indian Ocean, cruising as far as the east coast of Africa. The Chinese gained influence in eastern Moghulistan. Several maritime Asian nations sent envoys with tribute for the Chinese emperor. Domestically, the Grand Canal was expanded and became a stimulus to domestic trade. Over 100,000 tons of iron per year were produced. Many books were printed using movable type. The imperial palace in Beijing's Forbidden City reached its current splendor. It was also during these centuries that the potential of south China came to be fully exploited. New crops were widely cultivated and industries such as those producing porcelain and textiles flourished.",
"title": "Imperial China"
},
{
"paragraph_id": 76,
"text": "In 1449 Esen Tayisi led an Oirat Mongol invasion of northern China which culminated in the capture of the Zhengtong Emperor at Tumu. Since then, the Ming became on the defensive on the northern frontier, which led to the Ming Great Wall being built. Most of what remains of the Great Wall of China today was either built or repaired by the Ming. The brick and granite work was enlarged, the watchtowers were redesigned, and cannons were placed along its length.",
"title": "Imperial China"
},
{
"paragraph_id": 77,
"text": "At sea the Ming became increasingly isolationist after the death of the Yongle Emperor. The treasure voyages which sailed the Indian Ocean were discontinued, and the maritime prohibition laws were set in place banning the Chinese from sailing abroad. European traders who reached China in the midst of the Age of Discovery were repeatedly rebuked in their requests for trade, with the Portuguese being repulsed by the Ming navy at Tuen Mun in 1521 and again in 1522. Domestic and foreign demands for overseas trade, deemed illegal by the state, led to widespread wokou piracy attacking the southeastern coastline during the rule of the Jiajing Emperor (1507–1567), which only subsided after the opening of ports in Guangdong and Fujian and much military suppression. In addition to raids from Japan by the wokou, raids from Taiwan and the Philippines by the Pisheye also ravaged the southern coasts. The Portuguese were allowed to settle in Macau in 1557 for trade, which remained in Portuguese hands until 1999. After the Spanish invasion of the Philippines, trade with the Spanish at Manila imported large quantities of Mexican and Peruvian silver from the Spanish Americas to China. The Dutch entry into the Chinese seas was also met with fierce resistance, with the Dutch being chased off the Penghu islands in the Sino-Dutch conflicts of 1622–1624 and were forced to settle in Taiwan instead. The Dutch in Taiwan fought with the Ming in the Battle of Liaoluo Bay in 1633 and lost, and eventually surrendered to the Ming loyalist Koxinga in 1662, after the fall of the Ming dynasty.",
"title": "Imperial China"
},
{
"paragraph_id": 78,
"text": "In 1556, during the rule of the Jiajing Emperor, the Shaanxi earthquake killed about 830,000 people, the deadliest earthquake of all time.",
"title": "Imperial China"
},
{
"paragraph_id": 79,
"text": "The Ming dynasty intervened deeply in the Japanese invasions of Korea (1592–98), which ended with the withdrawal of all invading Japanese forces in Korea, and the restoration of the Joseon dynasty, its traditional ally and tributary state. The regional hegemony of the Ming dynasty was preserved at a toll on its resources. Coincidentally, with Ming's control in Manchuria in decline, the Manchu (Jurchen) tribes, under their chieftain Nurhaci, broke away from Ming's rule, and emerged as a powerful, unified state, which was later proclaimed as the Qing dynasty. It went on to subdue the much weakened Korea as its tributary, conquered Mongolia, and expanded its territory to the outskirt of the Great Wall. The most elite army of the Ming dynasty was to station at the Shanhai Pass to guard the last stronghold against the Manchus, which weakened its suppression of internal peasants uprisings.",
"title": "Imperial China"
},
{
"paragraph_id": 80,
"text": "The Qing dynasty (1644–1912) was the last imperial dynasty in China. Founded by the Manchus, it was the second conquest dynasty to rule the entirety of China proper, and roughly doubled the territory controlled by the Ming. The Manchus were formerly known as Jurchens, residing in the northeastern part of the Ming territory outside the Great Wall. They emerged as the major threat to the late Ming dynasty after Nurhaci united all Jurchen tribes and his son, Hong Taiji, declared the founding of the Qing dynasty in 1636. The Qing dynasty set up the Eight Banners system that provided the basic framework for the Qing military conquest. Li Zicheng's peasant rebellion captured Beijing in 1644 and the Chongzhen Emperor, the last Ming emperor, committed suicide. The Manchus allied with the Ming general Wu Sangui to seize Beijing, which was made the capital of the Qing dynasty, and then proceeded to subdue the Ming remnants in the south. During the Ming-Qing transition, when the Ming dynasty and later the Southern Ming, the emerging Qing dynasty, and several other factions like the Shun dynasty and Xi dynasty founded by peasant revolt leaders fought against each another, which, along with innumerable natural disasters at that time such as those caused by the Little Ice Age and epidemics like the Great Plague during the last decade of the Ming dynasty, caused enormous loss of lives and significant harm to the economy. In total, these decades saw the loss of as many as 25 million lives, but the Qing appeared to have restored China's imperial power and inaugurate another flowering of the arts. The early Manchu emperors combined traditions of Inner Asian rule with Confucian norms of traditional Chinese government and were considered a Chinese dynasty.",
"title": "Imperial China"
},
{
"paragraph_id": 81,
"text": "The Manchus enforced a 'queue order', forcing Han Chinese men to adopt the Manchu queue hairstyle. Officials were required to wear Manchu-style clothing Changshan (bannermen dress and Tangzhuang), but ordinary Han civilians were allowed to wear traditional Han clothing. Bannermen could not undertake trade or manual labor; they had to petition to be removed from banner status. They were considered aristocracy and were given annual pensions, land, and allotments of cloth. The Kangxi Emperor ordered the creation of the Kangxi Dictionary, the most complete dictionary of Chinese characters that had been compiled.",
"title": "Imperial China"
},
{
"paragraph_id": 82,
"text": "Over the next half-century, all areas previously under the Ming dynasty were consolidated under the Qing. Conquests in Central Asia in the eighteenth century extended territorial control. Between 1673 and 1681, the Kangxi Emperor suppressed the Revolt of the Three Feudatories, an uprising of three generals in Southern China who had been denied hereditary rule of large fiefdoms granted by the previous emperor. In 1683, the Qing staged an amphibious assault on southern Taiwan, bringing down the rebel Kingdom of Tungning, which was founded by the Ming loyalist Koxinga (Zheng Chenggong) in 1662 after the fall of the Southern Ming, and had served as a base for continued Ming resistance in Southern China. The Qing defeated the Russians at Albazin, resulting in the Treaty of Nerchinsk.",
"title": "Imperial China"
},
{
"paragraph_id": 83,
"text": "By the end of Qianlong Emperor's long reign in 1796, the Qing Empire was at its zenith. The Qing ruled more than one-third of the world's population, and had the largest economy in the world. By area it was one of the largest empires ever.",
"title": "Imperial China"
},
{
"paragraph_id": 84,
"text": "In the 19th century the empire was internally restive and externally threatened by western powers. The defeat by the British Empire in the First Opium War (1840) led to the Treaty of Nanking (1842), under which Hong Kong was ceded to Britain and importation of opium (produced by British Empire territories) was allowed. Opium usage continued to grow in China, adversely affecting societal stability. Subsequent military defeats and unequal treaties with other western powers continued even after the fall of the Qing dynasty.",
"title": "Imperial China"
},
{
"paragraph_id": 85,
"text": "Internally the Taiping Rebellion (1851–1864), a Christian religious movement led by the \"Heavenly King\" Hong Xiuquan swept from the south to establish the Taiping Heavenly Kingdom and controlled roughly a third of China proper for over a decade. The court in desperation empowered Han Chinese officials such as Zeng Guofan to raise local armies. After initial defeats, Zeng crushed the rebels in the Third Battle of Nanking in 1864. This was one of the largest wars in the 19th century in troop involvement; there was massive loss of life, with a death toll of about 20 million. A string of civil disturbances followed, including the Punti–Hakka Clan Wars, Nian Rebellion, Dungan Revolt, and Panthay Rebellion. All rebellions were ultimately put down, but at enormous cost and with millions dead, seriously weakening the central imperial authority. China never rebuilt a strong central army, and many local officials used their military power to effectively rule independently in their provinces.",
"title": "Imperial China"
},
{
"paragraph_id": 86,
"text": "Yet the dynasty appeared to recover in the Tongzhi Restoration (1860–1872), led by Manchu royal family reformers and Han Chinese officials such as Zeng Guofan and his proteges Li Hongzhang and Zuo Zongtang. Their Self-Strengthening Movement made effective institutional reforms, imported Western factories and communications technology, with prime emphasis on strengthening the military. However, the reform was undermined by official rivalries, cynicism, and quarrels within the imperial family. The defeat of Yuan Shikai's modernized \"Beiyang Fleet\" in the First Sino-Japanese War (1894–1895) led to the formation of the New Army. The Guangxu Emperor, advised by Kang Youwei, then launched a comprehensive reform effort, the Hundred Days' Reform (1898). Empress Dowager Cixi, however, feared that precipitous change would lead to bureaucratic opposition and foreign intervention and quickly suppressed it.",
"title": "Imperial China"
},
{
"paragraph_id": 87,
"text": "In the summer of 1900, the Boxer Uprising opposed foreign influence and murdered Chinese Christians and foreign missionaries. When Boxers entered Beijing, the Qing government ordered all foreigners to leave, but they and many Chinese Christians were besieged in the foreign legations quarter. An Eight-Nation Alliance sent the Seymour Expedition of Japanese, Russian, British, Italian, German, French, American, and Austrian troops to relieve the siege, but they were forced to retreat by Boxer and Qing troops at the Battle of Langfang. After the Alliance's attack on the Dagu Forts, the court declared war on the Alliance and authorized the Boxers to join with imperial armies. After fierce fighting at Tianjin, the Alliance formed the second, much larger Gaselee Expedition and finally reached Beijing; the Empress Dowager evacuated to Xi'an. The Boxer Protocol ended the war, exacting a tremendous indemnity.",
"title": "Imperial China"
},
{
"paragraph_id": 88,
"text": "The Qing court then instituted \"New Policies\" of administrative and legal reform, including abolition of the examination system. But young officials, military officers, and students debated reform, perhaps a constitutional monarchy, or the overthrow of the dynasty and the creation of a republic. They were inspired by an emerging public opinion formed by intellectuals such as Liang Qichao and the revolutionary ideas of Sun Yat-sen. A localised military uprising, the Wuchang uprising, began on 10 October 1911, in Wuchang (today part of Wuhan), and soon spread. The Republic of China was proclaimed on 1 January 1912, ending 2,000 years of dynastic rule.",
"title": "Imperial China"
},
{
"paragraph_id": 89,
"text": "The provisional government of the Republic of China was formed in Nanjing on 12 March 1912. Sun Yat-sen became President of the Republic of China, but he turned power over to Yuan Shikai, who commanded the New Army. Over the next few years, Yuan proceeded to abolish the national and provincial assemblies and declared himself as the emperor of Empire of China in late 1915. Yuan's imperial ambitions were fiercely opposed by his subordinates; faced with the prospect of rebellion, he abdicated in March 1916 and died of natural causes in June.",
"title": "Modern China"
},
{
"paragraph_id": 90,
"text": "Yuan's death in 1916 left a power vacuum; the republican government was all but shattered. This opened the way for the Warlord Era, during which much of China was ruled by shifting coalitions of competing provincial military leaders and the Beiyang government. Intellectuals, disappointed in the failure of the Republic, launched the New Culture Movement.",
"title": "Modern China"
},
{
"paragraph_id": 91,
"text": "In 1919, the May Fourth Movement began as a response to the pro-Japanese terms imposed on China by the Treaty of Versailles following World War I. It quickly became a nationwide protest movement. The protests were a moral success as the cabinet fell and China refused to sign the Treaty of Versailles, which had awarded German holdings of Shandong to Japan. Memory of the mistreatment at Versailles fuels resentment into the 21st century.",
"title": "Modern China"
},
{
"paragraph_id": 92,
"text": "Political and intellectual ferment waxed strong throughout the 1920s and 1930s. According to Patricia Ebrey:",
"title": "Modern China"
},
{
"paragraph_id": 93,
"text": "In the 1920s Sun Yat-sen established a revolutionary base in Guangzhou and set out to unite the fragmented nation. He welcomed assistance from the Soviet Union (itself fresh from Lenin's Communist takeover) and he entered into an alliance with the fledgling Chinese Communist Party (CCP). After Sun's death from cancer in 1925, one of his protégés, Chiang Kai-shek, seized control of the Nationalist Party (KMT) and succeeded in bringing most of south and central China under its rule in the Northern Expedition (1926–1927). Having defeated the warlords in the south and central China by military force, Chiang was able to secure the nominal allegiance of the warlords in the North and establish the Nationalist government in Nanking. In 1927, Chiang turned on the CCP and relentlessly purged the Communists elements in his NRA. In 1934, driven from their mountain bases such as the Chinese Soviet Republic, the CCP forces embarked on the Long March across China's most desolate terrain to the northwest, where they established a guerrilla base at Yan'an in Shaanxi. During the Long March, the communists reorganized under a new leader, Mao Zedong (Mao Tse-tung).",
"title": "Modern China"
},
{
"paragraph_id": 94,
"text": "The bitter Chinese Civil War between the Nationalists and the Communists continued, openly or clandestinely, through the 14-year-long Japanese occupation of various parts of the country (1931–1945). The two Chinese parties nominally formed a United Front to oppose the Japanese in 1937, during the Second Sino-Japanese War (1937–1945), which became a part of World War II. Japanese forces committed numerous war atrocities against the civilian population, including biological warfare (see Unit 731) and the Three Alls Policy (Sankō Sakusen), the three alls being: \"Kill All, Burn All and Loot All\". During the war, China was recognized as one of the Allied \"Big Four\" in the Declaration by United Nations. China was one of the four major Allies of World War II, and was later considered one of the primary victors in the war.",
"title": "Modern China"
},
{
"paragraph_id": 95,
"text": "Following the defeat of Japan in 1945, the war between the Nationalist government forces and the CCP resumed, after failed attempts at reconciliation and a negotiated settlement. By 1949, the CCP had established control over most of the country. Odd Arne Westad says the Communists won the Civil War because they made fewer military mistakes than Chiang, and because in his search for a powerful centralized government, Chiang antagonized too many interest groups in China. Furthermore, his party was weakened in the war against the Japanese. Meanwhile, the Communists told different groups, such as peasants, exactly what they wanted to hear, and cloaked themselves in the cover of Chinese Nationalism. During the civil war both the Nationalists and Communists carried out mass atrocities, with millions of non-combatants killed by both sides. These included deaths from forced conscription and massacres. When the Nationalist government forces were defeated by CCP forces in mainland China in 1949, the Nationalist government retreated to Taiwan with its forces, along with Chiang and a large number of their supporters; the Nationalist government had taken effective control of Taiwan at the end of WWII as part of the overall Japanese surrender, when Japanese troops in Taiwan surrendered to the Republic of China troops.",
"title": "Modern China"
},
{
"paragraph_id": 96,
"text": "Until the early 1970s the ROC was recognized as the sole legitimate government of China by the United Nations, the United States and most Western nations, refusing to recognize the PRC on account of the Cold War. This changed in 1971 when the PRC was seated in the United Nations, replacing the ROC. The KMT ruled Taiwan under martial law until 1987, with the stated goal of being vigilant against Communist infiltration and preparing to retake mainland China. Therefore, political dissent was not tolerated during that period.",
"title": "Modern China"
},
{
"paragraph_id": 97,
"text": "In the 1990s the ROC underwent a major democratic reform, beginning with the 1991 resignation of the members of the Legislative Yuan and National Assembly elected in 1947. These groups were originally created to represent mainland China constituencies. Also lifted were the restrictions on the use of Taiwanese languages in the broadcast media and in schools. This culminated with the first direct presidential election in 1996 against the Democratic Progressive Party (DPP) candidate and former dissident, Peng Ming-min. In 2000, the KMT status as the ruling party ended when the DPP took power, only to regain its status in the 2008 election by Ma Ying-jeou.",
"title": "Modern China"
},
{
"paragraph_id": 98,
"text": "Due to the controversial nature of Taiwan's political status, the ROC is currently recognized by 12 UN member states and Holy See as of 2023 as the legitimate government of \"China\".",
"title": "Modern China"
},
{
"paragraph_id": 99,
"text": "Major combat in the Chinese Civil War ended in 1949 with the KMT pulling out of the mainland, with the government relocating to Taipei and maintaining control only over a few islands. The CCP was left in control of mainland China. On 1 October 1949, Mao Zedong proclaimed the People's Republic of China. \"Communist China\" and \"Red China\" were two common names for the PRC.",
"title": "Modern China"
},
{
"paragraph_id": 100,
"text": "The PRC was shaped by a series of campaigns and five-year plans. The economic and social plan known as the Great Leap Forward caused an estimated 45 million deaths. Mao's government carried out mass executions of landowners, instituted collectivisation and implemented the Laogai camp system. Execution, deaths from forced labor and other atrocities resulted in millions of deaths under Mao. In 1966 Mao and his allies launched the Cultural Revolution, which continued until Mao's death a decade later. The Cultural Revolution, motivated by power struggles within the Party and a fear of the Soviet Union, led to a major upheaval in Chinese society.",
"title": "Modern China"
},
{
"paragraph_id": 101,
"text": "In 1972, at the peak of the Sino-Soviet split, Mao and Zhou Enlai met U.S. president Richard Nixon in Beijing to establish relations with the US. In the same year, the PRC was admitted to the United Nations in place of the Republic of China, with permanent membership of the Security Council.",
"title": "Modern China"
},
{
"paragraph_id": 102,
"text": "A power struggle followed Mao's death in 1976. The Gang of Four were arrested and blamed for the excesses of the Cultural Revolution, marking the end of a turbulent political era in China. Deng Xiaoping outmaneuvered Mao's anointed successor chairman Hua Guofeng, and gradually emerged as the de facto leader over the next few years.",
"title": "Modern China"
},
{
"paragraph_id": 103,
"text": "Deng Xiaoping was the Paramount Leader of China from 1978 to 1992, although he never became the head of the party or state, and his influence within the Party led the country to significant economic reforms. The CCP subsequently loosened governmental control over citizens' personal lives and the communes were disbanded with many peasants receiving multiple land leases, which greatly increased incentives and agricultural production. In addition, there were many free market areas opened. The most successful free market area was Shenzhen. It is located in Guangdong and the property tax free area still exists today. This turn of events marked China's transition from a planned economy to a mixed economy with an increasingly open market environment, a system termed by some as \"market socialism\", and officially by the CCP as \"Socialism with Chinese characteristics\". The PRC adopted its current constitution on 4 December 1982.",
"title": "Modern China"
},
{
"paragraph_id": 104,
"text": "In 1989 the death of former general secretary Hu Yaobang helped to spark the Tiananmen Square protests of that year, during which students and others campaigned for several months, speaking out against corruption and in favour of greater political reform, including democratic rights and freedom of speech. However, they were eventually put down on 4 June when Army troops and vehicles entered and forcibly cleared the square, with considerable numbers of fatalities. This event was widely reported, and brought worldwide condemnation and sanctions against the government.",
"title": "Modern China"
},
{
"paragraph_id": 105,
"text": "CCP general secretary and PRC president Jiang Zemin and PRC premier Zhu Rongji, both former mayors of Shanghai, led post-Tiananmen PRC in the 1990s. Under Jiang and Zhu's ten years of administration, the PRC's economic performance pulled an estimated 150 million peasants out of poverty and sustained an average annual gross domestic product growth rate of 11.2%. The country formally joined the World Trade Organization in 2001. By 1997 and 1999, former European colonies of British Hong Kong and Portuguese Macau became the Hong Kong and Macau special administrative regions of the People's Republic of China respectively.",
"title": "Modern China"
},
{
"paragraph_id": 106,
"text": "Although the PRC needed economic growth to spur its development, the government began to worry that rapid economic growth was degrading the country's resources and environment. Another concern is that certain sectors of society are not sufficiently benefiting from the PRC's economic development; one example of this is the wide gap between urban and rural areas. As a result, under former CCP general secretary and President Hu Jintao and Premier Wen Jiabao, the PRC initiated policies to address issues of equitable distribution of resources, but the outcome was not known as of 2014. More than 40 million farmers were displaced from their land, usually for economic development, contributing to 87,000 demonstrations and riots across China in 2005. For much of the PRC's population, living standards improved very substantially and freedom increased, but political controls remained tight and rural areas poor.",
"title": "Modern China"
},
{
"paragraph_id": 107,
"text": "According to the U.S. Department of Defense, as many as 3 million Uyghurs and members of other Muslim minority groups are being held in China's internment camps which are located in the Xinjiang region and which American news reports often label as \"concentration camps\". The camps were established in late 2010s under Xi Jinping's administration. Human Rights Watch says that they have been used to indoctrinate Uyghurs and other Muslims since 2017 as part of a \"people's war on terror\", a policy announced in 2014. The camps have been criticized by the governments of many countries and human rights organizations for alleged human rights abuses, including mistreatment, rape, and torture, with some of them alleging genocide.",
"title": "Modern China"
},
{
"paragraph_id": 108,
"text": "The novel coronavirus SARS-CoV-2, which causes the disease COVID-19, was first detected in Wuhan, Hubei in 2019 and led to a global pandemic.",
"title": "Modern China"
}
] | The history of China spans several millennia across a wide geographical area. Each region now considered part of the Chinese world has experienced periods of unity, fracture, prosperity, and strife. Chinese civilization first emerged in the Yellow River valley, which along with the Yangtze basin constitutes the geographic core of the Chinese cultural sphere. China maintains a rich diversity of ethnic and linguistic groups. The traditional lens for viewing Chinese history is the dynastic cycle: imperial dynasties rise and fall, and are ascribed certain achievements. A pervasive narrative holds that Chinese civilization can be traced as an unbroken thread many thousands of years into the past, making it one of the cradles of civilization. At various times, states representative of a dominant Chinese culture have directly controlled areas stretching as far west as the Tian Shan, the Tarim Basin, and the Himalayas, as far north as the Sayan Mountains, and as far south as the delta of the Red River. The Neolithic period saw increasingly complex polities begin to emerge along the Yellow and Yangtze rivers. The Erlitou culture in the central plains of China is sometimes identified with the Xia dynasty of traditional Chinese historiography. The earliest surviving written Chinese dates to roughly 1250 BCE, consisting of divinations inscribed on oracle bones. Chinese bronze inscriptions, ritual texts dedicated to ancestors, form another large corpus of early Chinese writing. The earliest strata of received literature in Chinese include poetry, divination, and records of official speeches. China is believed to be one of a very few loci of independent invention of writing, and the earliest surviving records display an already-mature written language. The culture remembered by the earliest extant literature is that of the Zhou dynasty, China's Axial age, during which the Mandate of Heaven was introduced, and foundations laid for philosophies such as Confucianism, Taoism, Legalism, and Wuxing. China was first united under a single imperial state by Qin Shi Huang in 221 BCE. Orthography, weights, measures, and law were all standardized. Shortly thereafter, China entered its classical era with the Han dynasty, marking a critical period. A term for the Chinese language is still "Han language", and the dominant Chinese ethnic group is known as Han Chinese. The Chinese empire reached some of its farthest geographical extents during this period. Confucianism was officially sanctioned and its core texts were edited into their received forms. Wealthy landholding families independent of the ancient aristocracy began to wield significant power. Han technology can be considered on par with that of the contemporaneous Roman Empire: mass production of paper aided the proliferation of written documents, and the written language of this period was employed for millennia afterwards. China became known internationally for its sericulture. When the Han imperial order finally collapsed after four centuries, China entered an equally lengthy period of disunity, during which Buddhism began to have a significant impact on Chinese culture, while calligraphy, art, historiography, and storytelling flourished. Wealthy families in some cases became more powerful than the central government. The Yangtze River valley was incorporated into the dominant cultural sphere. 
A period of unity began in 581 with the Sui dynasty, which soon gave way to the long-lived Tang dynasty (618–907), regarded as another Chinese golden age. The Tang dynasty saw flourishing developments in science, technology, poetry, economics, and geographical influence. China's first officially recognized empress, Wu Zetian, reigned during the dynasty's first century. Buddhism was adopted by Tang emperors. "Tang people" is the other common demonym for the Han ethnic group. After the Tang fractured, the Song dynasty (960–1279) saw the maximal extent of imperial Chinese cosmopolitan development. Mechanical printing was introduced, and many of the earliest surviving witnesses of certain texts are wood-block prints from this era. Song scientific advancement led the world, on par with the contemporaneous Khwarazmian Empire, and the imperial examination system gave ideological structure to the political bureaucracy. Confucianism and Taoism were fully knit together in Neo-Confucianism. Eventually, the Mongol Empire conquered all of China, establishing the Yuan dynasty in 1271. Contact with Europe began to increase during this time. Achievements under the subsequent Ming dynasty (1368–1644) include global exploration, fine porcelain, and many extant public works projects, such as those restoring the Grand Canal and Great Wall. Three of the four Classic Chinese Novels were written during the Ming. The Qing dynasty that succeeded the Ming was ruled by ethnic Manchu people. The Qianlong Emperor commissioned a complete encyclopaedia of imperial libraries, totaling nearly a billion words. Imperial China reached its greatest territorial extent during the Qing, but China came into increasing conflict with European powers, culminating in the Opium Wars and subsequent unequal treaties. The 1911 Xinhai Revolution, led by Sun Yat-sen and others, created the modern Republic of China. From 1927, a costly civil war raged between the Republican government under Chiang Kai-shek and the Chinese Red Army, and the industrialized Empire of Japan also invaded the divided country. After the Communist victory, Mao Zedong proclaimed the People's Republic of China (PRC) in 1949, with the Republic retreating to Taiwan. Both governments still claim sole legitimacy. The PRC has slowly accumulated the majority of diplomatic recognition, and Taiwan's status remains disputed. From 1966 to 1976, the Cultural Revolution in mainland China helped consolidate Mao's power towards the end of his life. After his death, the government began economic reforms under Deng Xiaoping, and became the world's fastest-growing major economy. China had been the most populous nation in the world for decades, until it was surpassed by India in 2023. | 2001-10-09T18:29:44Z | 2023-12-20T16:10:33Z | [
"Template:Pp-vandalism",
"Template:See also",
"Template:Linktext",
"Template:Clear",
"Template:Short description",
"Template:Taiwan topics",
"Template:Sfn",
"Template:Cite Cambridge History of China",
"Template:Subscription required",
"Template:Refbegin",
"Template:Notelist",
"Template:ISBN",
"Template:History of China",
"Template:Reign",
"Template:Abbr",
"Template:Lang",
"Template:Circa",
"Template:Convert",
"Template:Nbsp",
"Template:Currentyear",
"Template:Refend",
"Template:Sfnp",
"Template:Harvc",
"Template:Open access",
"Template:BCE",
"Template:Rp",
"Template:Center",
"Template:As of",
"Template:Pb",
"Template:Main list",
"Template:China topics",
"Template:Pp-move",
"Template:Cite web",
"Template:Cite magazine",
"Template:Better source needed",
"Template:Further",
"Template:Anchor",
"Template:Main",
"Template:Efn",
"Template:Use dmy dates",
"Template:Nowrap",
"Template:Div col end",
"Template:Cite book",
"Template:History of Asia",
"Template:Authority control",
"Template:Transliteration",
"Template:Cite journal",
"Template:Cite encyclopedia",
"Template:Multiref2",
"Template:When",
"Template:Reflist",
"Template:Cite news",
"Template:Citation",
"Template:Subject bar",
"Template:Div col",
"Template:Redirect2",
"Template:Webarchive",
"Template:Multiple image",
"Template:Page needed",
"Template:About"
] | https://en.wikipedia.org/wiki/History_of_China |
5,762 | Civil engineering | Civil engineering is a professional engineering discipline that deals with the design, construction, and maintenance of the physical and naturally built environment, including public works such as roads, bridges, canals, dams, airports, sewage systems, pipelines, structural components of buildings, and railways.
Civil engineering is traditionally broken into a number of sub-disciplines. It is considered the second-oldest engineering discipline after military engineering, and it is defined to distinguish non-military engineering from military engineering. Civil engineering can take place in the public sector from municipal public works departments through to federal government agencies, and in the private sector from locally based firms to global Fortune 500 companies.
Civil engineering is the application of physical and scientific principles for solving the problems of society, and its history is intricately linked to advances in the understanding of physics and mathematics throughout history. Because civil engineering is a broad profession, including several specialized sub-disciplines, its history is linked to knowledge of structures, materials science, geography, geology, soils, hydrology, environmental science, mechanics, project management, and other fields.
Throughout ancient and medieval history most architectural design and construction was carried out by artisans, such as stonemasons and carpenters, rising to the role of master builder. Knowledge was retained in guilds and seldom supplanted by advances. Structures, roads, and infrastructure that existed were repetitive, and increases in scale were incremental.
One of the earliest examples of a scientific approach to physical and mathematical problems applicable to civil engineering is the work of Archimedes in the 3rd century BC, including Archimedes' principle, which underpins our understanding of buoyancy, and practical solutions such as Archimedes' screw. Brahmagupta, an Indian mathematician, used arithmetic in the 7th century AD, based on Hindu-Arabic numerals, for excavation (volume) computations.
Engineering has been an aspect of life since the beginnings of human existence. The earliest practice of civil engineering may have commenced between 4000 and 2000 BC in ancient Egypt, the Indus Valley civilization, and Mesopotamia (ancient Iraq) when humans started to abandon a nomadic existence, creating a need for the construction of shelter. During this time, transportation became increasingly important, leading to the development of the wheel and sailing.
Until modern times there was no clear distinction between civil engineering and architecture, and the terms engineer and architect were mainly geographical variations referring to the same occupation, and often used interchangeably. The construction of pyramids in Egypt (c. 2700–2500 BC) was among the first instances of large structure construction. Other ancient historic civil engineering constructions include the Qanat water management system in modern-day Iran (the oldest of which is more than 3,000 years old and longer than 71 kilometres (44 mi)), the Parthenon by Iktinos in Ancient Greece (447–438 BC), the Appian Way by Roman engineers (c. 312 BC), the Great Wall of China by General Meng T'ien under orders from Ch'in Emperor Shih Huang Ti (c. 220 BC) and the stupas constructed in ancient Sri Lanka like the Jetavanaramaya and the extensive irrigation works in Anuradhapura. The Romans developed civil structures throughout their empire, including especially aqueducts, insulae, harbors, bridges, dams and roads.
In the 18th century, the term civil engineering was coined to incorporate all things civilian as opposed to military engineering. In 1747, the first institution for the teaching of civil engineering, the École Nationale des Ponts et Chaussées was established in France; and more examples followed in other European countries, like Spain. The first self-proclaimed civil engineer was John Smeaton, who constructed the Eddystone Lighthouse. In 1771 Smeaton and some of his colleagues formed the Smeatonian Society of Civil Engineers, a group of leaders of the profession who met informally over dinner. Though there was evidence of some technical meetings, it was little more than a social society.
In 1818 the Institution of Civil Engineers was founded in London, and in 1820 the eminent engineer Thomas Telford became its first president. The institution received a Royal charter in 1828, formally recognising civil engineering as a profession. Its charter defined civil engineering as:
the art of directing the great sources of power in nature for the use and convenience of man, as the means of production and of traffic in states, both for external and internal trade, as applied in the construction of roads, bridges, aqueducts, canals, river navigation and docks for internal intercourse and exchange, and in the construction of ports, harbours, moles, breakwaters and lighthouses, and in the art of navigation by artificial power for the purposes of commerce, and in the construction and application of machinery, and in the drainage of cities and towns.
The first private college to teach civil engineering in the United States was Norwich University, founded in 1819 by Captain Alden Partridge. The first degree in civil engineering in the United States was awarded by Rensselaer Polytechnic Institute in 1835. The first such degree to be awarded to a woman was granted by Cornell University to Nora Stanton Blatch in 1905.
In the UK during the early 19th century, the division between civil engineering and military engineering (served by the Royal Military Academy, Woolwich), coupled with the demands of the Industrial Revolution, spawned new engineering education initiatives: the Class of Civil Engineering and Mining was founded at King's College London in 1838, mainly as a response to the growth of the railway system and the need for more qualified engineers, the private College for Civil Engineers in Putney was established in 1839, and the UK's first Chair of Engineering was established at the University of Glasgow in 1840.
Civil engineers typically possess an academic degree in civil engineering. The length of study is three to five years, and the completed degree is designated as a bachelor of technology, or a bachelor of engineering. The curriculum generally includes classes in physics, mathematics, project management, design and specific topics in civil engineering. After taking basic courses in most sub-disciplines of civil engineering, students move on to specialize in one or more sub-disciplines at advanced levels. While an undergraduate degree (BEng/BSc) normally provides successful students with an industry-accredited qualification, some academic institutions offer post-graduate degrees (MEng/MSc), which allow students to further specialize in their particular area of interest.
In most countries, a bachelor's degree in engineering represents the first step towards professional certification, and a professional body certifies the degree program. After completing a certified degree program, the engineer must satisfy a range of requirements including work experience and exam requirements before being certified. Once certified, the engineer is designated as a professional engineer (in the United States, Canada and South Africa), a chartered engineer (in most Commonwealth countries), a chartered professional engineer (in Australia and New Zealand), or a European engineer (in most countries of the European Union). There are international agreements between relevant professional bodies to allow engineers to practice across national borders.
The benefits of certification vary depending upon location. For example, in the United States and Canada, "only a licensed professional engineer may prepare, sign and seal, and submit engineering plans and drawings to a public authority for approval, or seal engineering work for public and private clients." This requirement is enforced under provincial law such as the Engineers Act in Quebec. No such legislation has been enacted in other countries including the United Kingdom. In Australia, state licensing of engineers is limited to the state of Queensland. Almost all certifying bodies maintain a code of ethics which all members must abide by.
Engineers must obey contract law in their contractual relationships with other parties. In cases where an engineer's work fails, they may be subject to the law of tort of negligence, and in extreme cases, criminal charges. An engineer's work must also comply with numerous other rules and regulations such as building codes and environmental law.
There are a number of sub-disciplines within the broad field of civil engineering. General civil engineers work closely with surveyors and specialized civil engineers to design grading, drainage, pavement, water supply, sewer service, dams, electric and communications supply. General civil engineering is also referred to as site engineering, a branch of civil engineering that primarily focuses on converting a tract of land from one usage to another. Site engineers spend time visiting project sites, meeting with stakeholders, and preparing construction plans. Civil engineers apply the principles of geotechnical engineering, structural engineering, environmental engineering, transportation engineering and construction engineering to residential, commercial, industrial and public works projects of all sizes and levels of construction.
Coastal engineering is concerned with managing coastal areas. In some jurisdictions, the terms sea defense and coastal protection mean defense against flooding and erosion, respectively. Coastal defense is the more traditional term, but coastal management has become popular as well.
Construction engineering involves planning and execution, the transportation of materials, and site development based on hydraulic, environmental, structural and geotechnical engineering. As construction firms tend to have higher business risk than other types of civil engineering firms do, construction engineers often engage in more business-like transactions, for example, drafting and reviewing contracts, evaluating logistical operations, and monitoring prices of supplies.
Earthquake engineering involves designing structures to withstand hazardous earthquake exposures. Earthquake engineering is a sub-discipline of structural engineering. The main objectives of earthquake engineering are to understand the interaction of structures with the shaking ground; foresee the consequences of possible earthquakes; and design, construct and maintain structures that perform during earthquakes in compliance with building codes.
Environmental engineering is the contemporary term for sanitary engineering, though sanitary engineering traditionally had not included much of the hazardous waste management and environmental remediation work covered by environmental engineering. Public health engineering and environmental health engineering are other terms being used.
Environmental engineering deals with treatment of chemical, biological, or thermal wastes, purification of water and air, and remediation of contaminated sites after waste disposal or accidental contamination. Among the topics covered by environmental engineering are pollutant transport, water purification, waste water treatment, air pollution, solid waste treatment, recycling, and hazardous waste management. Environmental engineers administer pollution reduction, green engineering, and industrial ecology. Environmental engineers also compile information on environmental consequences of proposed actions.
Forensic engineering is the investigation of materials, products, structures or components that fail or do not operate or function as intended, causing personal injury or damage to property. The consequences of failure are dealt with by the law of product liability. The field also deals with retracing processes and procedures leading to accidents in operation of vehicles or machinery. The subject is applied most commonly in civil law cases, although it may be of use in criminal law cases. Generally the purpose of a forensic engineering investigation is to locate the cause or causes of failure with a view to improving the performance or life of a component, or to assist a court in determining the facts of an accident. It can also involve the investigation of intellectual property claims, especially patents.
Geotechnical engineering studies rock and soil supporting civil engineering systems. Knowledge from the fields of soil science, materials science, mechanics, and hydraulics is applied to safely and economically design foundations, retaining walls, and other structures. Environmental efforts to protect groundwater and safely maintain landfills have spawned a new area of research called geo-environmental engineering.
Identification of soil properties presents challenges to geotechnical engineers. Boundary conditions are often well defined in other branches of civil engineering, but unlike steel or concrete, the material properties and behavior of soil are difficult to predict due to its variability and the limitations of investigation. Furthermore, soil exhibits nonlinear (stress-dependent) strength, stiffness, and dilatancy (volume change associated with the application of shear stress), making the study of soil mechanics all the more difficult. Geotechnical engineers frequently work with professional geologists, geological engineering professionals and soil scientists.
Materials science is closely related to civil engineering. It studies fundamental characteristics of materials, and deals with ceramics such as concrete and asphalt concrete, strong metals such as aluminum and steel, and thermosetting polymers including polymethylmethacrylate (PMMA) and carbon fibers.
Materials engineering involves protection and prevention (paints and finishes). Alloying combines two types of metals to produce another metal with desired properties. It incorporates elements of applied physics and chemistry. With recent media attention on nanoscience and nanotechnology, materials engineering has been at the forefront of academic research. It is also an important part of forensic engineering and failure analysis.
Site development, also known as site planning, is focused on the planning and development potential of a site as well as addressing possible impacts from permitting issues and environmental challenges.
Structural engineering is concerned with the structural design and structural analysis of buildings, bridges, towers, flyovers (overpasses), tunnels, offshore structures such as oil and gas platforms at sea, aerostructures and other structures. This involves identifying the loads which act upon a structure and the forces and stresses which arise within that structure due to those loads, and then designing the structure to successfully support and resist those loads. The loads can be the self-weight of the structure, other dead loads, live loads, moving (wheel) loads, wind loads, earthquake loads, loads from temperature change, etc. The structural engineer must design structures to be safe for their users and to successfully fulfill the function they are designed for (to be serviceable). Due to the nature of some loading conditions, sub-disciplines within structural engineering have emerged, including wind engineering and earthquake engineering.
Design considerations will include strength, stiffness, and stability of the structure when subjected to loads which may be static, such as furniture or self-weight, or dynamic, such as wind, seismic, crowd or vehicle loads, or transitory, such as temporary construction loads or impact. Other considerations include cost, constructibility, safety, aesthetics and sustainability.
Surveying is the process by which a surveyor measures certain dimensions that occur on or near the surface of the Earth. Surveying equipment such as levels and theodolites are used for accurate measurement of angular deviation, horizontal, vertical and slope distances. With computerisation, electronic distance measurement (EDM), total stations, GPS surveying and laser scanning have to a large extent supplanted traditional instruments. Data collected by survey measurement is converted into a graphical representation of the Earth's surface in the form of a map. This information is then used by civil engineers, contractors and realtors to design from, build on, and trade, respectively. Elements of a structure must be sized and positioned in relation to each other and to site boundaries and adjacent structures.
Although surveying is a distinct profession with separate qualifications and licensing arrangements, civil engineers are trained in the basics of surveying and mapping, as well as geographic information systems. Surveyors also lay out the routes of railways, tramway tracks, highways, roads, pipelines and streets as well as position other infrastructure, such as harbors, before construction.
In the United States, Canada, the United Kingdom and most Commonwealth countries land surveying is considered to be a separate and distinct profession. Land surveyors are not considered to be engineers, and have their own professional associations and licensing requirements. The services of a licensed land surveyor are generally required for boundary surveys (to establish the boundaries of a parcel using its legal description) and subdivision plans (a plot or map based on a survey of a parcel of land, with boundary lines drawn inside the larger parcel to indicate the creation of new boundary lines and roads), both of which are generally referred to as Cadastral surveying.
Construction surveying is generally performed by specialized technicians. Unlike the plans produced by land surveyors, the resulting plan does not have legal status. Construction surveyors perform the following tasks:
Transportation engineering is concerned with moving people and goods efficiently, safely, and in a manner conducive to a vibrant community. This involves specifying, designing, constructing, and maintaining transportation infrastructure which includes streets, canals, highways, rail systems, airports, ports, and mass transit. It includes areas such as transportation design, transportation planning, traffic engineering, some aspects of urban engineering, queueing theory, pavement engineering, Intelligent Transportation System (ITS), and infrastructure management.
Municipal engineering is concerned with municipal infrastructure. This involves specifying, designing, constructing, and maintaining streets, sidewalks, water supply networks, sewers, street lighting, municipal solid waste management and disposal, storage depots for various bulk materials used for maintenance and public works (salt, sand, etc.), public parks and cycling infrastructure. In the case of underground utility networks, it may also include the civil portion (conduits and access chambers) of the local distribution networks of electrical and telecommunications services. It can also include the optimizing of waste collection and bus service networks. Some of these disciplines overlap with other civil engineering specialties; however, municipal engineering focuses on the coordination of these infrastructure networks and services, as they are often built simultaneously and managed by the same municipal authority. Municipal engineers may also design the site civil works for large buildings, industrial plants or campuses (i.e. access roads, parking lots, potable water supply, treatment or pretreatment of waste water, site drainage, etc.)
Water resources engineering is concerned with the collection and management of water (as a natural resource). As a discipline it therefore combines elements of hydrology, environmental science, meteorology, conservation, and resource management. This area of civil engineering relates to the prediction and management of both the quality and the quantity of water in both underground (aquifers) and above ground (lakes, rivers, and streams) resources. Water resource engineers analyze and model very small to very large areas of the earth to predict the amount and content of water as it flows into, through, or out of a facility, although the actual design of the facility may be left to other engineers.
Hydraulic engineering is concerned with the flow and conveyance of fluids, principally water. This area of civil engineering is intimately related to the design of pipelines, water supply network, drainage facilities (including bridges, dams, channels, culverts, levees, storm sewers), and canals. Hydraulic engineers design these facilities using the concepts of fluid pressure, fluid statics, fluid dynamics, and hydraulics, among others.
Civil engineering systems is a discipline that promotes the use of systems thinking to manage complexity and change in civil engineering within its wider public context. It posits that the proper development of civil engineering infrastructure requires a holistic, coherent understanding of the relationships between all of the important factors that contribute to successful projects while at the same time emphasizing the importance of attention to technical detail. Its purpose is to help integrate the entire civil engineering project life cycle from conception, through planning, designing, making, operating to decommissioning. | [
{
"paragraph_id": 0,
"text": "Civil engineering is a professional engineering discipline that deals with the design, construction, and maintenance of the physical and naturally built environment, including public works such as roads, bridges, canals, dams, airports, sewage systems, pipelines, structural components of buildings, and railways.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Civil engineering is traditionally broken into a number of sub-disciplines. It is considered the second-oldest engineering discipline after military engineering, and it is defined to distinguish non-military engineering from military engineering. Civil engineering can take place in the public sector from municipal public works departments through to federal government agencies, and in the private sector from locally based firms to global Fortune 500 companies.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Civil engineering is the application of physical and scientific principles for solving the problems of society, and its history is intricately linked to advances in the understanding of physics and mathematics throughout history. Because civil engineering is a broad profession, including several specialized sub-disciplines, its history is linked to knowledge of structures, materials science, geography, geology, soils, hydrology, environmental science, mechanics, project management, and other fields.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Throughout ancient and medieval history most architectural design and construction was carried out by artisans, such as stonemasons and carpenters, rising to the role of master builder. Knowledge was retained in guilds and seldom supplanted by advances. Structures, roads, and infrastructure that existed were repetitive, and increases in scale were incremental.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "One of the earliest examples of a scientific approach to physical and mathematical problems applicable to civil engineering is the work of Archimedes in the 3rd century BC, including Archimedes' principle, which underpins our understanding of buoyancy, and practical solutions such as Archimedes' screw. Brahmagupta, an Indian mathematician, used arithmetic in the 7th century AD, based on Hindu-Arabic numerals, for excavation (volume) computations.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Engineering has been an aspect of life since the beginnings of human existence. The earliest practice of civil engineering may have commenced between 4000 and 2000 BC in ancient Egypt, the Indus Valley civilization, and Mesopotamia (ancient Iraq) when humans started to abandon a nomadic existence, creating a need for the construction of shelter. During this time, transportation became increasingly important leading to the development of the wheel and sailing.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Until modern times there was no clear distinction between civil engineering and architecture, and the term engineer and architect were mainly geographical variations referring to the same occupation, and often used interchangeably. The construction of pyramids in Egypt (c. 2700–2500 BC) were some of the first instances of large structure constructions. Other ancient historic civil engineering constructions include the Qanat water management system in modern-day Iran (the oldest is older than 3000 years and longer than 71 kilometres (44 mi),) the Parthenon by Iktinos in Ancient Greece (447–438 BC), the Appian Way by Roman engineers (c. 312 BC), the Great Wall of China by General Meng T'ien under orders from Ch'in Emperor Shih Huang Ti (c. 220 BC) and the stupas constructed in ancient Sri Lanka like the Jetavanaramaya and the extensive irrigation works in Anuradhapura. The Romans developed civil structures throughout their empire, including especially aqueducts, insulae, harbors, bridges, dams and roads.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In the 18th century, the term civil engineering was coined to incorporate all things civilian as opposed to military engineering. In 1747, the first institution for the teaching of civil engineering, the École Nationale des Ponts et Chaussées was established in France; and more examples followed in other European countries, like Spain. The first self-proclaimed civil engineer was John Smeaton, who constructed the Eddystone Lighthouse. In 1771 Smeaton and some of his colleagues formed the Smeatonian Society of Civil Engineers, a group of leaders of the profession who met informally over dinner. Though there was evidence of some technical meetings, it was little more than a social society.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In 1818 the Institution of Civil Engineers was founded in London, and in 1820 the eminent engineer Thomas Telford became its first president. The institution received a Royal charter in 1828, formally recognising civil engineering as a profession. Its charter defined civil engineering as:",
"title": "History"
},
{
"paragraph_id": 9,
"text": "the art of directing the great sources of power in nature for the use and convenience of man, as the means of production and of traffic in states, both for external and internal trade, as applied in the construction of roads, bridges, aqueducts, canals, river navigation and docks for internal intercourse and exchange, and in the construction of ports, harbours, moles, breakwaters and lighthouses, and in the art of navigation by artificial power for the purposes of commerce, and in the construction and application of machinery, and in the drainage of cities and towns.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The first private college to teach civil engineering in the United States was Norwich University, founded in 1819 by Captain Alden Partridge. The first degree in civil engineering in the United States was awarded by Rensselaer Polytechnic Institute in 1835. The first such degree to be awarded to a woman was granted by Cornell University to Nora Stanton Blatch in 1905.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In the UK during the early 19th century, the division between civil engineering and military engineering (served by the Royal Military Academy, Woolwich), coupled with the demands of the Industrial Revolution, spawned new engineering education initiatives: the Class of Civil Engineering and Mining was founded at King's College London in 1838, mainly as a response to the growth of the railway system and the need for more qualified engineers, the private College for Civil Engineers in Putney was established in 1839, and the UK's first Chair of Engineering was established at the University of Glasgow in 1840.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Civil engineers typically possess an academic degree in civil engineering. The length of study is three to five years, and the completed degree is designated as a bachelor of technology, or a bachelor of engineering. The curriculum generally includes classes in physics, mathematics, project management, design and specific topics in civil engineering. After taking basic courses in most sub-disciplines of civil engineering, they move on to specialize in one or more sub-disciplines at advanced levels. While an undergraduate degree (BEng/BSc) normally provides successful students with industry-accredited qualification, some academic institutions offer post-graduate degrees (MEng/MSc), which allow students to further specialize in their particular area of interest.",
"title": "Education"
},
{
"paragraph_id": 13,
"text": "In most countries, a bachelor's degree in engineering represents the first step towards professional certification, and a professional body certifies the degree program. After completing a certified degree program, the engineer must satisfy a range of requirements including work experience and exam requirements before being certified. Once certified, the engineer is designated as a professional engineer (in the United States, Canada and South Africa), a chartered engineer (in most Commonwealth countries), a chartered professional engineer (in Australia and New Zealand), or a European engineer (in most countries of the European Union). There are international agreements between relevant professional bodies to allow engineers to practice across national borders.",
"title": "Practicing engineers"
},
{
"paragraph_id": 14,
"text": "The benefits of certification vary depending upon location. For example, in the United States and Canada, \"only a licensed professional engineer may prepare, sign and seal, and submit engineering plans and drawings to a public authority for approval, or seal engineering work for public and private clients.\" This requirement is enforced under provincial law such as the Engineers Act in Quebec. No such legislation has been enacted in other countries including the United Kingdom. In Australia, state licensing of engineers is limited to the state of Queensland. Almost all certifying bodies maintain a code of ethics which all members must abide by.",
"title": "Practicing engineers"
},
{
"paragraph_id": 15,
"text": "Engineers must obey contract law in their contractual relationships with other parties. In cases where an engineer's work fails, they may be subject to the law of tort of negligence, and in extreme cases, criminal charges. An engineer's work must also comply with numerous other rules and regulations such as building codes and environmental law.",
"title": "Practicing engineers"
},
{
"paragraph_id": 16,
"text": "There are a number of sub-disciplines within the broad field of civil engineering. General civil engineers work closely with surveyors and specialized civil engineers to design grading, drainage, pavement, water supply, sewer service, dams, electric and communications supply. General civil engineering is also referred to as site engineering, a branch of civil engineering that primarily focuses on converting a tract of land from one usage to another. Site engineers spend time visiting project sites, meeting with stakeholders, and preparing construction plans. Civil engineers apply the principles of geotechnical engineering, structural engineering, environmental engineering, transportation engineering and construction engineering to residential, commercial, industrial and public works projects of all sizes and levels of construction.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 17,
"text": "Coastal engineering is concerned with managing coastal areas. In some jurisdictions, the terms sea defense and coastal protection mean defense against flooding and erosion, respectively. Coastal defense is the more traditional term, but coastal management has become popular as well.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 18,
"text": "Construction engineering involves planning and execution, transportation of materials, site development based on hydraulic, environmental, structural and geotechnical engineering. As construction firms tend to have higher business risk than other types of civil engineering firms do, construction engineers often engage in more business-like transactions, for example, drafting and reviewing contracts, evaluating logistical operations, and monitoring prices of supplies.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 19,
"text": "Earthquake engineering involves designing structures to withstand hazardous earthquake exposures. Earthquake engineering is a sub-discipline of structural engineering. The main objectives of earthquake engineering are to understand interaction of structures on the shaky ground; foresee the consequences of possible earthquakes; and design, construct and maintain structures to perform at earthquake in compliance with building codes.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 20,
"text": "Environmental engineering is the contemporary term for sanitary engineering, though sanitary engineering traditionally had not included much of the hazardous waste management and environmental remediation work covered by environmental engineering. Public health engineering and environmental health engineering are other terms being used.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 21,
"text": "Environmental engineering deals with treatment of chemical, biological, or thermal wastes, purification of water and air, and remediation of contaminated sites after waste disposal or accidental contamination. Among the topics covered by environmental engineering are pollutant transport, water purification, waste water treatment, air pollution, solid waste treatment, recycling, and hazardous waste management. Environmental engineers administer pollution reduction, green engineering, and industrial ecology. Environmental engineers also compile information on environmental consequences of proposed actions.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 22,
"text": "Forensic engineering is the investigation of materials, products, structures or components that fail or do not operate or function as intended, causing personal injury or damage to property. The consequences of failure are dealt with by the law of product liability. The field also deals with retracing processes and procedures leading to accidents in operation of vehicles or machinery. The subject is applied most commonly in civil law cases, although it may be of use in criminal law cases. Generally the purpose of a Forensic engineering investigation is to locate cause or causes of failure with a view to improve performance or life of a component, or to assist a court in determining the facts of an accident. It can also involve investigation of intellectual property claims, especially patents.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 23,
"text": "Geotechnical engineering studies rock and soil supporting civil engineering systems. Knowledge from the field of soil science, materials science, mechanics, and hydraulics is applied to safely and economically design foundations, retaining walls, and other structures. Environmental efforts to protect groundwater and safely maintain landfills have spawned a new area of research called geo-environmental engineering.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 24,
"text": "Identification of soil properties presents challenges to geotechnical engineers. Boundary conditions are often well defined in other branches of civil engineering, but unlike steel or concrete, the material properties and behavior of soil are difficult to predict due to its variability and limitation on investigation. Furthermore, soil exhibits nonlinear (stress-dependent) strength, stiffness, and dilatancy (volume change associated with application of shear stress), making studying soil mechanics all the more difficult. Geotechnical engineers frequently work with professional geologists, Geological Engineering professionals and soil scientists.",
"title": "Sub-disciplines"
},
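A brief aside, not part of the original article: the stress-dependent strength mentioned above is commonly captured by the Mohr–Coulomb failure criterion, a standard textbook relation in soil mechanics:

\[ \tau_f = c' + \sigma'_n \tan \varphi' \]

where \(\tau_f\) is the shear strength on the failure plane, \(c'\) the effective cohesion, \(\sigma'_n\) the effective normal stress, and \(\varphi'\) the effective angle of internal friction. Because strength rises with confining stress, the same soil behaves very differently at different depths, which is one reason soil properties are harder to pin down than those of steel or concrete.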
{
"paragraph_id": 25,
"text": "Materials science is closely related to civil engineering. It studies fundamental characteristics of materials, and deals with ceramics such as concrete and mix asphalt concrete, strong metals such as aluminum and steel, and thermosetting polymers including polymethylmethacrylate (PMMA) and carbon fibers.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 26,
"text": "Materials engineering involves protection and prevention (paints and finishes). Alloying combines two types of metals to produce another metal with desired properties. It incorporates elements of applied physics and chemistry. With recent media attention on nanoscience and nanotechnology, materials engineering has been at the forefront of academic research. It is also an important part of forensic engineering and failure analysis.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 27,
"text": "Site development, also known as site planning, is focused on the planning and development potential of a site as well as addressing possible impacts from permitting issues and environmental challenges.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 28,
"text": "Structural engineering is concerned with the structural design and structural analysis of buildings, bridges, towers, flyovers (overpasses), tunnels, off shore structures like oil and gas fields in the sea, aerostructure and other structures. This involves identifying the loads which act upon a structure and the forces and stresses which arise within that structure due to those loads, and then designing the structure to successfully support and resist those loads. The loads can be self weight of the structures, other dead load, live loads, moving (wheel) load, wind load, earthquake load, load from temperature change etc. The structural engineer must design structures to be safe for their users and to successfully fulfill the function they are designed for (to be serviceable). Due to the nature of some loading conditions, sub-disciplines within structural engineering have emerged, including wind engineering and earthquake engineering.",
"title": "Sub-disciplines"
},
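As a hedged illustration of the design step described above (a representative strength-design load combination of the kind used in US practice, not drawn from the original text): for dead load \(D\) and live load \(L\), the factored demand is often taken as

\[ U = 1.2D + 1.6L \]

and the member is then proportioned so that its design resistance meets or exceeds \(U\) for every governing combination of the loads listed above.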
{
"paragraph_id": 29,
"text": "Design considerations will include strength, stiffness, and stability of the structure when subjected to loads which may be static, such as furniture or self-weight, or dynamic, such as wind, seismic, crowd or vehicle loads, or transitory, such as temporary construction loads or impact. Other considerations include cost, constructibility, safety, aesthetics and sustainability.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 30,
"text": "Surveying is the process by which a surveyor measures certain dimensions that occur on or near the surface of the Earth. Surveying equipment such as levels and theodolites are used for accurate measurement of angular deviation, horizontal, vertical and slope distances. With computerisation, electronic distance measurement (EDM), total stations, GPS surveying and laser scanning have to a large extent supplanted traditional instruments. Data collected by survey measurement is converted into a graphical representation of the Earth's surface in the form of a map. This information is then used by civil engineers, contractors and realtors to design from, build on, and trade, respectively. Elements of a structure must be sized and positioned in relation to each other and to site boundaries and adjacent structures.",
"title": "Sub-disciplines"
},
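As a minimal sketch of the measurement-to-coordinates conversion described above, the snippet below reduces a single total-station observation (slope distance, zenith angle, azimuth) to plane coordinates. The function name and station values are illustrative assumptions, not from any particular surveying library:

```python
import math

def reduce_observation(station_e, station_n, azimuth_deg, slope_dist, zenith_deg):
    """Reduce one total-station observation to plane (easting, northing).

    azimuth_deg: horizontal direction to the target, clockwise from grid north.
    zenith_deg:  zenith angle of the sight (90 degrees = horizontal).
    """
    az = math.radians(azimuth_deg)
    zen = math.radians(zenith_deg)
    horiz_dist = slope_dist * math.sin(zen)        # slope distance reduced to horizontal
    east = station_e + horiz_dist * math.sin(az)   # east component of the vector
    north = station_n + horiz_dist * math.cos(az)  # north component of the vector
    return east, north

# Example: point sighted 150.00 m away at azimuth 45 degrees, near-level sight
print(reduce_observation(1000.0, 2000.0, 45.0, 150.0, 89.0))
```

This is the textbook "radiation" computation; production survey software additionally applies instrument and target heights, atmospheric corrections, and map-projection scale factors.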
{
"paragraph_id": 31,
"text": "Although surveying is a distinct profession with separate qualifications and licensing arrangements, civil engineers are trained in the basics of surveying and mapping, as well as geographic information systems. Surveyors also lay out the routes of railways, tramway tracks, highways, roads, pipelines and streets as well as position other infrastructure, such as harbors, before construction.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 32,
"text": "In the United States, Canada, the United Kingdom and most Commonwealth countries land surveying is considered to be a separate and distinct profession. Land surveyors are not considered to be engineers, and have their own professional associations and licensing requirements. The services of a licensed land surveyor are generally required for boundary surveys (to establish the boundaries of a parcel using its legal description) and subdivision plans (a plot or map based on a survey of a parcel of land, with boundary lines drawn inside the larger parcel to indicate the creation of new boundary lines and roads), both of which are generally referred to as Cadastral surveying.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 33,
"text": "Construction surveying is generally performed by specialized technicians. Unlike land surveyors, the resulting plan does not have legal status. Construction surveyors perform the following tasks:",
"title": "Sub-disciplines"
},
{
"paragraph_id": 34,
"text": "Transportation engineering is concerned with moving people and goods efficiently, safely, and in a manner conducive to a vibrant community. This involves specifying, designing, constructing, and maintaining transportation infrastructure which includes streets, canals, highways, rail systems, airports, ports, and mass transit. It includes areas such as transportation design, transportation planning, traffic engineering, some aspects of urban engineering, queueing theory, pavement engineering, Intelligent Transportation System (ITS), and infrastructure management.",
"title": "Sub-disciplines"
},
{
"paragraph_id": 35,
"text": "Municipal engineering is concerned with municipal infrastructure. This involves specifying, designing, constructing, and maintaining streets, sidewalks, water supply networks, sewers, street lighting, municipal solid waste management and disposal, storage depots for various bulk materials used for maintenance and public works (salt, sand, etc.), public parks and cycling infrastructure. In the case of underground utility networks, it may also include the civil portion (conduits and access chambers) of the local distribution networks of electrical and telecommunications services. It can also include the optimizing of waste collection and bus service networks. Some of these disciplines overlap with other civil engineering specialties, however municipal engineering focuses on the coordination of these infrastructure networks and services, as they are often built simultaneously, and managed by the same municipal authority. Municipal engineers may also design the site civil works for large buildings, industrial plants or campuses (i.e. access roads, parking lots, potable water supply, treatment or pretreatment of waste water, site drainage, etc.)",
"title": "Sub-disciplines"
},
{
"paragraph_id": 36,
"text": "Water resources engineering is concerned with the collection and management of water (as a natural resource). As a discipline it therefore combines elements of hydrology, environmental science, meteorology, conservation, and resource management. This area of civil engineering relates to the prediction and management of both the quality and the quantity of water in both underground (aquifers) and above ground (lakes, rivers, and streams) resources. Water resource engineers analyze and model very small to very large areas of the earth to predict the amount and content of water as it flows into, through, or out of a facility. Although the actual design of the facility may be left to other engineers.",
"title": "Sub-disciplines"
},
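A small worked illustration of the kind of prediction described above (the classic rational method for peak stormwater runoff, a textbook formula not taken from the original article):

\[ Q = C\,i\,A \]

where \(Q\) is the peak discharge, \(C\) a dimensionless runoff coefficient for the surface, \(i\) the design rainfall intensity, and \(A\) the drainage area (in US customary units, \(i\) in in/hr and \(A\) in acres gives \(Q\) approximately in cubic feet per second). Larger or more complex watersheds are modeled with continuous-simulation software rather than a single formula.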
{
"paragraph_id": 37,
"text": "Hydraulic engineering is concerned with the flow and conveyance of fluids, principally water. This area of civil engineering is intimately related to the design of pipelines, water supply network, drainage facilities (including bridges, dams, channels, culverts, levees, storm sewers), and canals. Hydraulic engineers design these facilities using the concepts of fluid pressure, fluid statics, fluid dynamics, and hydraulics, among others.",
"title": "Sub-disciplines"
},
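For a concrete sense of the hydraulics concepts just listed, one standard open-channel relation (a textbook formula added here for illustration) is Manning's equation, in SI units:

\[ Q = \frac{1}{n} A R^{2/3} S^{1/2} \]

where \(Q\) is the discharge, \(n\) the Manning roughness coefficient, \(A\) the flow cross-sectional area, \(R\) the hydraulic radius (area divided by wetted perimeter), and \(S\) the slope of the energy grade line. Designers size channels, culverts and storm sewers by solving such relations for the required geometry.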
{
"paragraph_id": 38,
"text": "Civil engineering systems is a discipline that promotes the use of systems thinking to manage complexity and change in civil engineering within its wider public context. It posits that the proper development of civil engineering infrastructure requires a holistic, coherent understanding of the relationships between all of the important factors that contribute to successful projects while at the same time emphasizing the importance of attention to technical detail. Its purpose is to help integrate the entire civil engineering project life cycle from conception, through planning, designing, making, operating to decommissioning.",
"title": "Sub-disciplines"
}
] | Civil engineering is a professional engineering discipline that deals with the design, construction, and maintenance of the physical and naturally built environment, including public works such as roads, bridges, canals, dams, airports, sewage systems, pipelines, structural components of buildings, and railways. Civil engineering is traditionally broken into a number of sub-disciplines. It is considered the second-oldest engineering discipline after military engineering, and it is defined to distinguish non-military engineering from military engineering. Civil engineering can take place in the public sector from municipal public works departments through to federal government agencies, and in the private sector from locally based firms to global Fortune 500 companies. | 2001-06-18T05:26:55Z | 2023-12-28T03:42:13Z | [
"Template:Engineering fields",
"Template:Authority control",
"Template:More citations needed",
"Template:Blockquote",
"Template:Div col",
"Template:Cite encyclopedia",
"Template:Wikiquote",
"Template:Library resources box",
"Template:Use dmy dates",
"Template:Main",
"Template:Portal",
"Template:Page needed",
"Template:Cite magazine",
"Template:Citation",
"Template:Glossaries of science and engineering",
"Template:Short description",
"Template:Use American English",
"Template:Div col end",
"Template:Reflist",
"Template:Cite web",
"Template:Cite news",
"Template:Cite book",
"Template:Construction overview",
"Template:See also",
"Template:Circa",
"Template:Convert"
] | https://en.wikipedia.org/wiki/Civil_engineering |
5,763 | Cantonese (disambiguation) | Cantonese is a language originating in Canton, Guangdong.
Cantonese may also refer to: | [
{
"paragraph_id": 0,
"text": "Cantonese is a language originating in Canton, Guangdong.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cantonese may also refer to:",
"title": ""
}
] | Cantonese is a language originating in Canton, Guangdong. Cantonese may also refer to: Yue Chinese, Chinese languages that include Cantonese
Cantonese cuisine, the cuisine of Guangdong Province
Cantonese people, the native people of Guangdong and Guangxi
Lingnan culture, the regional culture often referred to as Cantonese culture | 2022-02-26T03:17:02Z | [
"Template:Wiktionary",
"Template:In title",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Cantonese_(disambiguation) |
|
5,765 | Çatalhöyük | Çatalhöyük (Turkish pronunciation: [tʃaˈtaɫhœjyc]; also Çatal Höyük and Çatal Hüyük; from Turkish çatal "fork" + höyük "tumulus") is a tell (a mounded accretion due to long-term human settlement) of a very large Neolithic and Chalcolithic proto-city settlement in southern Anatolia, which existed from approximately 7500 BC to 6400 BC, and flourished around 7000 BC. In July 2012, it was inscribed as a UNESCO World Heritage Site.
Çatalhöyük is located overlooking the Konya Plain, southeast of the present-day city of Konya (ancient Iconium) in Turkey, approximately 140 km (87 mi) from the twin-coned volcano of Mount Hasan. The eastern settlement forms a mound that would have risen about 20 m (66 ft) above the plain at the time of the latest Neolithic occupation. There is also a smaller settlement mound to the west and a Byzantine settlement a few hundred meters to the east. The prehistoric mound settlements were abandoned before the Bronze Age. A channel of the Çarşamba River once flowed between the two mounds, and the settlement was built on alluvial clay which may have been favorable for early agriculture. Currently the closest river to it is the Çarşamba.
The site was first excavated by James Mellaart in 1958. He later led a team which further excavated there for four seasons between 1961 and 1965. These excavations revealed this section of Anatolia as a centre of advanced culture in the Neolithic period. Excavation revealed 18 successive layers of buildings signifying various stages of the settlement and eras of history. The bottom layer of buildings can be dated as early as 7100 BC while the top layer is from 5600 BC.
Mellaart was banned from Turkey for his involvement in the Dorak affair, in which he published drawings of supposedly important Bronze Age artifacts that later went missing. After this scandal, the site lay idle until 1993, when investigations began under the leadership of Ian Hodder, then at the University of Cambridge. The Hodder-led excavations ended in 2018. Hodder, a former student of Mellaart, chose the site as the first "real world" test of his then-controversial theory of post-processual archaeology. The site has always had a strong research emphasis upon engagement with digital methodologies, driven by the project's experimental and reflexive methodological framework. According to Mickel, Hodder's Çatalhöyük Research Project (ÇRP) established itself as a site for progressive methodologies in terms of adaptable and democratized recording, integration of computerized technologies, sampling strategies, and community involvement.
New excavations are being directed by Ali Umut Türkcan from Anadolu University.
Çatalhöyük was composed entirely of domestic buildings, with no obvious public buildings. While some of the larger ones have rather ornate murals, the purpose of some rooms remains unclear.
The population of the eastern mound has been estimated to be around 10,000 people, but the population likely varied over the community's history. An average population of between 5,000 and 7,000 is a reasonable estimate. The sites were set up as large numbers of buildings clustered together. Households looked to their neighbors for help, trade, and possible marriage for their children. The inhabitants lived in mudbrick houses that were crammed together in an aggregate structure. No footpaths or streets were used between the dwellings, which were clustered in a honeycomb-like maze. Most were accessed by holes in the ceiling and doors on the side of the houses, with doors reached by ladders and stairs. The rooftops were effectively streets. The ceiling openings also served as the only source of ventilation, allowing smoke from the houses' open hearths and ovens to escape. Houses had plaster interiors characterized by squared-off timber ladders or steep stairs. These were usually on the south wall of the room, as were cooking hearths and ovens. The main rooms contained raised platforms that may have been used for a range of domestic activities. Typical houses contained two rooms for everyday activity, such as cooking and crafting. All interior walls and platforms were plastered to a smooth finish. Ancillary rooms were used as storage, and were accessed through low openings from main rooms.
All rooms were kept scrupulously clean. Archaeologists identified very little rubbish in the buildings, finding middens outside the ruins, with sewage and food waste, as well as significant amounts of ash from burning wood, reeds and animal dung. In good weather, many daily activities may also have taken place on the rooftops, which may have formed a plaza. In later periods, large communal ovens appear to have been built on these rooftops. Over time, houses were renewed by partial demolition and rebuilding on a foundation of rubble, which was how the mound was gradually built up. As many as eighteen levels of settlement have been uncovered.
As a part of ritual life, the people of Çatalhöyük buried their dead within the village. Human remains have been found in pits beneath the floors and, especially, beneath hearths, the platforms within the main rooms, and under beds. Bodies were tightly flexed before burial and were often placed in baskets or wound and wrapped in reed mats. Disarticulated bones in some graves suggest that bodies may have been exposed in the open air for a time before the bones were gathered and buried. In some cases, graves were disturbed, and the individual's head removed from the skeleton. These heads may have been used in rituals, as some were found in other areas of the community. In a woman's grave spinning whorls were recovered and in a man's grave, stone axes. Some skulls were plastered and painted with ochre to recreate faces, a custom more characteristic of Neolithic sites in Syria and at Neolithic Jericho than at sites closer by.
Vivid murals and figurines are found throughout the settlement, on interior and exterior walls. Distinctive clay figurines of women, notably the Seated Woman of Çatalhöyük, have been found in the upper levels of the site. Although no identifiable temples have been found, the graves, murals, and figurines suggest that the people of Çatalhöyük had a religion rich in symbols. Rooms with concentrations of these items may have been shrines or public meeting areas. Predominant images include men with erect phalluses, hunting scenes, red images of the now extinct aurochs (wild cattle) and stags, and vultures swooping down on headless figures. Relief figures are carved on walls, such as of lionesses facing one another.
Heads of animals, especially of cattle, were mounted on walls. A painting of the village, with the twin mountain peaks of Hasan Dağ in the background, is frequently cited as the world's oldest map, and the first landscape painting. However, some archaeologists question this interpretation. Stephanie Meece, for example, argues that it is more likely a painting of a leopard skin instead of a volcano, and a decorative geometric design instead of a map.
A feature of Çatalhöyük is its female figurines. Mellaart, the original excavator, argued that these carefully made figurines, carved and molded from marble, blue and brown limestone, schist, calcite, basalt, alabaster, and clay, represented a female deity. Although a male deity existed as well, "statues of a female deity far outnumber those of the male deity, who moreover, does not appear to be represented at all after Level VI". To date, eighteen levels have been identified. These figurines were found primarily in areas Mellaart believed to be shrines. The stately goddess seated on a throne flanked by two lionesses was found in a grain bin, which Mellaart suggests might have been a means of ensuring the harvest or protecting the food supply.
Whereas Mellaart excavated nearly two hundred buildings in four seasons, the current excavator, Ian Hodder, spent an entire season excavating one building alone. Hodder and his team, in 2004 and 2005, began to believe that the patterns suggested by Mellaart were false. They found one similar figurine, but the vast majority did not imitate the Mother Goddess style that Mellaart suggested. Instead of a Mother Goddess culture, Hodder points out that the site gives little indication of a matriarchy or patriarchy.
"There are full breasts on which the hands rest, and the stomach is extended in the central part. There is a hole in the top for the head which is missing. As one turns the figurine around one notices that the arms are very thin, and then on the back of the figurine one sees a depiction of either a skeleton or the bones of a very thin and depleted human. The ribs and vertebrae are clear, as are the scapulae and the main pelvic bones. The figurine can be interpreted in a number of ways – as a woman turning into an ancestor, as a woman associated with death, or as death and life conjoined. It is possible that the lines around the body represent wrapping rather than ribs. Whatever the specific interpretation, this is a unique piece that may force us to change our views of the nature of Çatalhöyük society and imagery. Perhaps the importance of female imagery was related to some special role of the female in relation to death as much as to the roles of mother and nurturer."
In an article in the Turkish Daily News, Hodder is reported as denying that Çatalhöyük was a matriarchal society and quoted as saying "When we look at what they eat and drink and at their social statues, we see that men and women had the same social status. There was a balance of power. Another example is the skulls found. If one's social status was of high importance in Çatalhöyük, the body and head were separated after death. The number of female and male skulls found during the excavations is almost equal." In another article in the Hurriyet Daily News Hodder is reported to say "We have learned that men and women were equally approached".
In a report in September 2009 on the discovery of around 2000 figurines Hodder is quoted as saying:
Çatalhöyük was excavated in the 1960s in a methodical way, but not using the full range of natural science techniques that are available to us today. Sir James Mellaart who excavated the site in the 1960s came up with all sorts of ideas about the way the site was organized and how it was lived in and so on ... We’ve now started working there since the mid 1990s and come up with very different ideas about the site. One of the most obvious examples of that is that Çatalhöyük is perhaps best known for the idea of the mother goddess. But our work more recently has tended to show that in fact there is very little evidence of a mother goddess and very little evidence of some sort of female-based matriarchy. That's just one of the many myths that the modern scientific work is undermining.
Professor Lynn Meskell explained that while the original excavations had found only 200 figures, the new excavations had uncovered 2,000 figures, most of which depicted animals, and fewer than 5% of the figurines depicted women.
Estonian folklorist Uku Masing suggested, as early as 1976, that the religion of Çatalhöyük was probably a hunting and gathering religion and that the Mother Goddess figurine did not represent a female deity. He implied that perhaps a longer period of time was needed to develop symbols for agricultural rites. His theory was developed in the paper "Some remarks on the mythology of the people of Catal Hüyük".
Çatalhöyük has strong evidence of an egalitarian society, as no houses with distinctive features (belonging to royalty or religious hierarchy, for example) have been found so far. The most recent investigations also reveal little social distinction based on gender, with men and women receiving equivalent nutrition and seeming to have equal social status, as typically found in Paleolithic cultures. Children observed domestic areas. They learned how to perform rituals and how to build or repair houses by watching the adults make statues, beads and other objects. Çatalhöyük's spatial layout may be due to the close kin relations exhibited amongst the people. It can be seen, in the layout, that the people were "divided into two groups who lived on opposite sides of the town, separated by a gully." Furthermore, because no nearby towns were found from which marriage partners could be drawn, "this spatial separation must have marked two intermarrying kinship groups." This would help explain how a settlement so early on would become so large.
In the upper levels of the site, it becomes apparent that the people of Çatalhöyük were honing skills in agriculture and the domestication of animals. Female figurines have been found within bins used for storage of cereals, such as wheat and barley, and the figurines are presumed to be of a deity protecting the grain. Peas were also grown, and almonds, pistachios and fruit were harvested from trees in the surrounding hills. Sheep were domesticated and evidence suggests the beginning of cattle domestication as well. However, hunting continued to be a major source of food for the community. Pottery and obsidian tools appear to have been major industries; obsidian tools were probably both used and also traded for items such as Mediterranean sea shells and flint from Syria. Noting the lack of hierarchy and economic inequality, historian and anti-capitalist author Murray Bookchin has argued that Çatalhöyük was an early example of anarcho-communism.
Conversely, a 2014 paper argues that the picture of Çatalhöyük is more complex and that while there seemed to have been an egalitarian distribution of cooking tools and some stone tools, unbroken quern-stones and storage units were more unevenly distributed. Private property existed but shared tools also existed. It was also suggested that Çatalhöyük was becoming less egalitarian, with greater inter-generational wealth transmission. | [
{
"paragraph_id": 0,
"text": "Çatalhöyük (Turkish pronunciation: [tʃaˈtaɫhœjyc]; also Çatal Höyük and Çatal Hüyük; from Turkish çatal \"fork\" + höyük \"tumulus\") is a tell (a mounded accretion due to long-term human settlement) of a very large Neolithic and Chalcolithic proto-city settlement in southern Anatolia, which existed from approximately 7500 BC to 6400 BC, and flourished around 7000 BC. In July 2012, it was inscribed as a UNESCO World Heritage Site.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Çatalhöyük is located overlooking the Konya Plain, southeast of the present-day city of Konya (ancient Iconium) in Turkey, approximately 140 km (87 mi) from the twin-coned volcano of Mount Hasan. The eastern settlement forms a mound that would have risen about 20 m (66 ft) above the plain at the time of the latest Neolithic occupation. There is also a smaller settlement mound to the west and a Byzantine settlement a few hundred meters to the east. The prehistoric mound settlements were abandoned before the Bronze Age. A channel of the Çarşamba River once flowed between the two mounds, and the settlement was built on alluvial clay which may have been favorable for early agriculture. Currently the closest river to it is the Euphrates.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The site was first excavated by James Mellaart in 1958. He later led a team which further excavated there for four seasons between 1961 and 1965. These excavations revealed this section of Anatolia as a centre of advanced culture in the Neolithic period. Excavation revealed 18 successive layers of buildings signifying various stages of the settlement and eras of history. The bottom layer of buildings can be dated as early as 7100 BC while the top layer is from 5600 BC.",
"title": "Archaeology"
},
{
"paragraph_id": 3,
"text": "Mellaart was banned from Turkey for his involvement in the Dorak affair in which he published drawings of supposedly important Bronze Age artifacts that later went missing. After this scandal, the site lay idle until 1993, when investigations began under the leadership of Ian Hodder, then at the University of Cambridge. The Hodder led excavations ended in 2018. Hodder, a former student of Mellaart, chose the site as the first \"real world\" test of his then-controversial theory of post-processual archaeology. The site has always had a strong research emphasis upon engagement with digital methodologies, driven by the project's experimental and reflexive methodological framework. According to Mickel, Hodder's Çatalhöyük Research Project (ÇRP) established itself as a site for progressive methodologies - in terms of adaptable and democratized recording, integration of computerized technologies, sampling strategies, and community involvement.\"",
"title": "Archaeology"
},
{
"paragraph_id": 4,
"text": "New excavations are being directed by Ali Umut Türkcan from Anadolu University.",
"title": "Archaeology"
},
{
"paragraph_id": 5,
"text": "Çatalhöyük was composed entirely of domestic buildings, with no obvious public buildings. While some of the larger ones have rather ornate murals, the purpose of some rooms remains unclear.",
"title": "Culture"
},
{
"paragraph_id": 6,
"text": "The population of the eastern mound has been estimated to be around 10,000 people, but the population likely varied over the community's history. An average population of between 5,000 and 7,000 is a reasonable estimate. The sites were set up as large numbers of buildings clustered together. Households looked to their neighbors for help, trade, and possible marriage for their children. The inhabitants lived in mudbrick houses that were crammed together in an aggregate structure. No footpaths or streets were used between the dwellings, which were clustered in a honeycomb-like maze. Most were accessed by holes in the ceiling and doors on the side of the houses, with doors reached by ladders and stairs. The rooftops were effectively streets. The ceiling openings also served as the only source of ventilation, allowing smoke from the houses' open hearths and ovens to escape. Houses had plaster interiors characterized by squared-off timber ladders or steep stairs. These were usually on the south wall of the room, as were cooking hearths and ovens. The main rooms contained raised platforms that may have been used for a range of domestic activities. Typical houses contained two rooms for everyday activity, such as cooking and crafting. All interior walls and platforms were plastered to a smooth finish. Ancillary rooms were used as storage, and were accessed through low openings from main rooms.",
"title": "Culture"
},
{
"paragraph_id": 7,
"text": "All rooms were kept scrupulously clean. Archaeologists identified very little rubbish in the buildings, finding middens outside the ruins, with sewage and food waste, as well as significant amounts of ash from burning wood, reeds and animal dung. In good weather, many daily activities may also have taken place on the rooftops, which may have formed a plaza. In later periods, large communal ovens appear to have been built on these rooftops. Over time, houses were renewed by partial demolition and rebuilding on a foundation of rubble, which was how the mound was gradually built up. As many as eighteen levels of settlement have been uncovered.",
"title": "Culture"
},
{
"paragraph_id": 8,
"text": "As a part of ritual life, the people of Çatalhöyük buried their dead within the village. Human remains have been found in pits beneath the floors and, especially, beneath hearths, the platforms within the main rooms, and under beds. Bodies were tightly flexed before burial and were often placed in baskets or wound and wrapped in reed mats. Disarticulated bones in some graves suggest that bodies may have been exposed in the open air for a time before the bones were gathered and buried. In some cases, graves were disturbed, and the individual's head removed from the skeleton. These heads may have been used in rituals, as some were found in other areas of the community. In a woman's grave spinning whorls were recovered and in a man's grave, stone axes. Some skulls were plastered and painted with ochre to recreate faces, a custom more characteristic of Neolithic sites in Syria and at Neolithic Jericho than at sites closer by.",
"title": "Culture"
},
{
"paragraph_id": 9,
"text": "Vivid murals and figurines are found throughout the settlement, on interior and exterior walls. Distinctive clay figurines of women, notably the Seated Woman of Çatalhöyük, have been found in the upper levels of the site. Although no identifiable temples have been found, the graves, murals, and figurines suggest that the people of Çatalhöyük had a religion rich in symbols. Rooms with concentrations of these items may have been shrines or public meeting areas. Predominant images include men with erect phalluses, hunting scenes, red images of the now extinct aurochs (wild cattle) and stags, and vultures swooping down on headless figures. Relief figures are carved on walls, such as of lionesses facing one another.",
"title": "Culture"
},
{
"paragraph_id": 10,
"text": "Heads of animals, especially of cattle, were mounted on walls. A painting of the village, with the twin mountain peaks of Hasan Dağ in the background, is frequently cited as the world's oldest map, and the first landscape painting. However, some archaeologists question this interpretation. Stephanie Meece, for example, argues that it is more likely a painting of a leopard skin instead of a volcano, and a decorative geometric design instead of a map.",
"title": "Culture"
},
{
"paragraph_id": 11,
"text": "A feature of Çatalhöyük are its female figurines. Mellaart, the original excavator, argued that these carefully made figurines, carved and molded from marble, blue and brown limestone, schist, calcite, basalt, alabaster, and clay, represented a female deity. Although a male deity existed as well, \"statues of a female deity far outnumber those of the male deity, who moreover, does not appear to be represented at all after Level VI\". To date, eighteen levels have been identified. These figurines were found primarily in areas Mellaart believed to be shrines. The stately goddess seated on a throne flanked by two lionesses was found in a grain bin, which Mellaart suggests might have been a means of ensuring the harvest or protecting the food supply.",
"title": "Religion"
},
{
"paragraph_id": 12,
"text": "Whereas Mellaart excavated nearly two hundred buildings in four seasons, the current excavator, Ian Hodder, spent an entire season excavating one building alone. Hodder and his team, in 2004 and 2005, began to believe that the patterns suggested by Mellaart were false. They found one similar figurine, but the vast majority did not imitate the Mother Goddess style that Mellaart suggested. Instead of a Mother Goddess culture, Hodder points out that the site gives little indication of a matriarchy or patriarchy.",
"title": "Religion"
},
{
"paragraph_id": 13,
"text": "\"There are full breasts on which the hands rest, and the stomach is extended in the central part. There is a hole in the top for the head which is missing. As one turns the figurine around one notices that the arms are very thin, and then on the back of the figurine one sees a depiction of either a skeleton or the bones of a very thin and depleted human. The ribs and vertebrae are clear, as are the scapulae and the main pelvic bones. The figurine can be interpreted in a number of ways – as a woman turning into an ancestor, as a woman associated with death, or as death and life conjoined. It is possible that the lines around the body represent wrapping rather than ribs. Whatever the specific interpretation, this is a unique piece that may force us to change our views of the nature of Çatalhöyük society and imagery. Perhaps the importance of female imagery was related to some special role of the female in relation to death as much as to the roles of mother and nurturer.\"",
"title": "Religion"
},
{
"paragraph_id": 14,
"text": "In an article in the Turkish Daily News, Hodder is reported as denying that Çatalhöyük was a matriarchal society and quoted as saying \"When we look at what they eat and drink and at their social statues, we see that men and women had the same social status. There was a balance of power. Another example is the skulls found. If one's social status was of high importance in Çatalhöyük, the body and head were separated after death. The number of female and male skulls found during the excavations is almost equal.\" In another article in the Hurriyet Daily News Hodder is reported to say \"We have learned that men and women were equally approached\".",
"title": "Religion"
},
{
"paragraph_id": 15,
"text": "In a report in September 2009 on the discovery of around 2000 figurines Hodder is quoted as saying:",
"title": "Religion"
},
{
"paragraph_id": 16,
"text": "Çatalhöyük was excavated in the 1960s in a methodical way, but not using the full range of natural science techniques that are available to us today. Sir James Mellaart who excavated the site in the 1960s came up with all sorts of ideas about the way the site was organized and how it was lived in and so on ... We’ve now started working there since the mid 1990s and come up with very different ideas about the site. One of the most obvious examples of that is that Çatalhöyük is perhaps best known for the idea of the mother goddess. But our work more recently has tended to show that in fact there is very little evidence of a mother goddess and very little evidence of some sort of female-based matriarchy. That's just one of the many myths that the modern scientific work is undermining.",
"title": "Religion"
},
{
"paragraph_id": 17,
"text": "Professor Lynn Meskell explained that while the original excavations had found only 200 figures, the new excavations had uncovered 2,000 figures, most of which depicted animals, and fewer than 5% of the figurines depicted women.",
"title": "Religion"
},
{
"paragraph_id": 18,
"text": "Estonian folklorist Uku Masing has suggested as early as in 1976, that Çatalhöyük was probably a hunting and gathering religion and the Mother Goddess figurine did not represent a female deity. He implied that perhaps a longer period of time was needed to develop symbols for agricultural rites. His theory was developed in the paper \"Some remarks on the mythology of the people of Catal Hüyük\".",
"title": "Religion"
},
{
"paragraph_id": 19,
"text": "Çatalhöyük has strong evidence of an egalitarian society, as no houses with distinctive features (belonging to royalty or religious hierarchy, for example) have been found so far. The most recent investigations also reveal little social distinction based on gender, with men and women receiving equivalent nutrition and seeming to have equal social status, as typically found in Paleolithic cultures. Children observed domestic areas. They learned how to perform rituals and how to build or repair houses by watching the adults make statues, beads and other objects. Çatalhöyük's spatial layout may be due to the close kin relations exhibited amongst the people. It can be seen, in the layout, that the people were \"divided into two groups who lived on opposite sides of the town, separated by a gully.\" Furthermore, because no nearby towns were found from which marriage partners could be drawn, \"this spatial separation must have marked two intermarrying kinship groups.\" This would help explain how a settlement so early on would become so large.",
"title": "Economy"
},
{
"paragraph_id": 20,
"text": "In the upper levels of the site, it becomes apparent that the people of Çatalhöyük were honing skills in agriculture and the domestication of animals. Female figurines have been found within bins used for storage of cereals, such as wheat and barley, and the figurines are presumed to be of a deity protecting the grain. Peas were also grown, and almonds, pistachios and fruit were harvested from trees in the surrounding hills. Sheep were domesticated and evidence suggests the beginning of cattle domestication as well. However, hunting continued to be a major source of food for the community. Pottery and obsidian tools appear to have been major industries; obsidian tools were probably both used and also traded for items such as Mediterranean sea shells and flint from Syria. Noting the lack of hierarchy and economic inequality, historian and anti-capitalist author Murray Bookchin has argued that Çatalhöyük was an early example of anarcho-communism.",
"title": "Economy"
},
{
"paragraph_id": 21,
"text": "Conversely, a 2014 paper argues that the picture of Çatalhöyük is more complex and that while there seemed to have been an egalitarian distribution of cooking tools and some stone tools, unbroken quern-stones and storage units were more unevenly distributed. Private property existed but shared tools also existed. It was also suggested that Çatalhöyük was becoming less egalitarian, with greater inter-generational wealth transmission.",
"title": "Economy"
}
] | Çatalhöyük is a tell of a very large Neolithic and Chalcolithic proto-city settlement in southern Anatolia, which existed from approximately 7500 BC to 6400 BC, and flourished around 7000 BC. In July 2012, it was inscribed as a UNESCO World Heritage Site. Çatalhöyük is located overlooking the Konya Plain, southeast of the present-day city of Konya in Turkey, approximately 140 km (87 mi) from the twin-coned volcano of Mount Hasan. The eastern settlement forms a mound that would have risen about 20 m (66 ft) above the plain at the time of the latest Neolithic occupation. There is also a smaller settlement mound to the west and a Byzantine settlement a few hundred meters to the east. The prehistoric mound settlements were abandoned before the Bronze Age. A channel of the Çarşamba River once flowed between the two mounds, and the settlement was built on alluvial clay which may have been favorable for early agriculture. Currently the closest river to it is the Çarşamba. | 2001-06-19T21:02:45Z | 2023-12-14T16:44:16Z | [
"Template:Former settlements in Turkey",
"Template:Use dmy dates",
"Template:Cite web",
"Template:ISBN",
"Template:Reflist",
"Template:World Heritage Sites in Turkey",
"Template:Authority control",
"Template:Infobox ancient site",
"Template:IPA-tr",
"Template:See also",
"Template:Citation",
"Template:Wikivoyage",
"Template:Archaeological museums in Turkey",
"Template:Blockquote",
"Template:Cite book",
"Template:Cite news",
"Template:Open access",
"Template:Commons",
"Template:Prehistoric technology",
"Template:Short description",
"Template:Redirect-distinguish",
"Template:Cite journal"
] | https://en.wikipedia.org/wiki/%C3%87atalh%C3%B6y%C3%BCk |
5,766 | Clement Attlee | Clement Richard Attlee, 1st Earl Attlee, KG, OM, CH, PC, FRS (3 January 1883 – 8 October 1967) was a British statesman and Labour Party politician who served as Prime Minister of the United Kingdom from 1945 to 1951 and Leader of the Labour Party from 1935 to 1955. He was Deputy Prime Minister during the wartime coalition government under Winston Churchill, and served twice as Leader of the Opposition from 1935 to 1940 and from 1951 to 1955. Attlee remains the longest serving Labour leader and is widely considered by historians and members of the public through various polls to be one of the greatest Prime Ministers of the United Kingdom.
Attlee was born into an upper-middle-class family, the son of a wealthy London solicitor. After attending Haileybury College and the University of Oxford, he practised as a barrister. The volunteer work he carried out in London's East End exposed him to poverty, and his political views shifted leftwards thereafter. He joined the Independent Labour Party, gave up his legal career, and began lecturing at the London School of Economics, though his work was briefly interrupted by service in the First World War. In 1919, he became mayor of Stepney and in 1922 was elected as the Member of Parliament for Limehouse. Attlee served in the first Labour minority government led by Ramsay MacDonald in 1924, and then joined the Cabinet during MacDonald's second minority government (1929–1931). After retaining his seat in Labour's landslide defeat of 1931, he became the party's Deputy Leader. Elected Leader of the Labour Party in 1935, and at first advocating pacificism and opposing re-armament, he became a critic of Neville Chamberlain's policy of appeasement in the lead-up to the Second World War. Attlee took Labour into the wartime coalition government in 1940 and served under Winston Churchill, initially as Lord Privy Seal and then as Deputy Prime Minister from 1942.
As the Second World War in Europe reached its conclusion, the war cabinet headed by Churchill was dissolved and elections were scheduled to be held. The Labour Party, led by Attlee, won a landslide victory in the 1945 general election, on their post-war recovery platform. Following the election, Attlee formed the first Labour majority government. His government's Keynesian approach to economic management aimed to maintain full employment, a mixed economy and a greatly enlarged system of social services provided by the state. To this end, it undertook the nationalisation of public utilities and major industries, and implemented wide-ranging social reforms, including the passing of the National Insurance Act 1946 and National Assistance Act 1948, the formation of the National Health Service (NHS) in 1948, and the enlargement of public subsidies for council house building. His government also reformed trade union legislation, working practices and children's services; it created the National Parks system, passed the New Towns Act 1946 and established the town and country planning system. The Attlee government proved itself to be a radical, reforming government. From 1945 to 1948, over 200 public Acts of Parliament were passed, with eight major pieces of legislation placed on the statute book in 1946 alone.
Attlee's foreign policy focused on decolonization efforts which he delegated to Ernest Bevin, but Attlee personally oversaw the partition of India (1947), the independence of Burma and Ceylon, and the dissolution of the British mandates of Palestine and Transjordan. Attlee and Bevin encouraged the United States to take a vigorous role in the Cold War; unable to afford military intervention in Greece during its civil war, he called on Washington to counter the communists there. The strategy of containment was formalized between the two nations through the Truman Doctrine. He supported the Marshall Plan to rebuild Western Europe with American money and, in 1949, promoted the NATO military alliance against the Soviet bloc. After leading Labour to a narrow victory at the 1950 general election, he sent British troops to fight alongside South Korea in the Korean War.
Attlee had inherited a country close to bankruptcy following the Second World War and beset by food, housing and resource shortages; despite his social reforms and economic programme, these problems persisted throughout his premiership, alongside recurrent currency crises and dependence on US aid. His party was narrowly defeated by the Conservatives in the 1951 general election, despite winning the most votes. He continued as Labour leader but retired after losing the 1955 election and was elevated to the House of Lords, where he served until his death in 1967. In public, he was modest and unassuming, but behind the scenes his depth of knowledge, quiet demeanour, objectivity and pragmatism proved decisive. He is often ranked as one of the greatest British prime ministers. In 2004, he was voted the most successful British Prime Minister of the 20th century by a poll of 139 academics. The majority of those responses singled out the Attlee government's welfare state reforms and the creation of the NHS as the key 20th century domestic policy achievements. He is also commended for continuing the 'Special Relationship' with the US and active involvement in NATO.
Attlee was born on 3 January 1883 in Putney, Surrey (now part of London), into an upper middle-class family, the seventh of eight children. His father was Henry Attlee (1841–1908), a solicitor, and his mother was Ellen Bravery Watson (1847–1920), daughter of Thomas Simons Watson, secretary for the Art Union of London. His parents were "committed Anglicans" who read prayers and psalms each morning at breakfast.
Attlee grew up in a two-storey villa with a large garden and tennis court, staffed by three servants and a gardener. His father, a political Liberal, had inherited family interests in milling and brewing, and became a senior partner in the law firm of Druces, also serving a term as president of the Law Society of England and Wales. In 1898 he purchased a 200-acre (81 ha) estate, Comarques, in Thorpe-le-Soken, Essex. At the age of nine, Attlee was sent to board at Northaw Place, a boys' preparatory school in Hertfordshire. In 1896 he followed his brothers to Haileybury College, where he was a middling student. He was influenced by the Darwinist views of his housemaster Frederick Webb Headley, and in 1899 he published an attack on striking London cab-drivers in the school magazine, predicting they would soon have to "beg for their fares".
In 1901, Attlee went up to University College, Oxford, reading modern history. He and his brother Tom "were given a generous stipend by their father and embraced the university lifestyle—rowing, reading and socializing". He was later described by a tutor as "level-headed, industrious, dependable man with no brilliance of style ... but with excellent sound judgement". At university he had little interest in politics or economics, later describing his views at this time as "good old fashioned imperialist conservative". He graduated Bachelor of Arts in 1904 with second-class honours.
Attlee then trained as a barrister at the Inner Temple and was called to the bar in March 1906. He worked for a time at his father's law firm Druces and Attlee but did not enjoy the work, and had no particular ambition to succeed in the legal profession. He also played football for non-League club Fleet. Attlee's father died in 1908, leaving an estate valued for probate at £75,394 (equivalent to £8,374,628 in 2021).
In 1906, he became a volunteer at Haileybury House, a charitable club for working-class boys in Stepney in the East End of London run by his old school, and from 1907 to 1909 he served as the club's manager. Until then, his political views had been more conservative. However, after his shock at the poverty and deprivation he saw while working with the slum children, he came to the view that private charity would never be sufficient to alleviate poverty and that only direct action and income redistribution by the state would have any serious effect. This sparked a process that caused him to convert to socialism. He subsequently joined the Independent Labour Party (ILP) in 1908 and became active in local politics. In 1909, he stood unsuccessfully at his first election, as an ILP candidate for Stepney Borough Council.
He also worked briefly as a secretary for Beatrice Webb in 1909, before becoming a secretary for Toynbee Hall. He worked for Webb's campaign to popularise the Minority Report, as he was very active in Fabian Society circles, in which he would go round visiting many political societies—Liberal, Conservative and socialist—to explain and popularise the ideas, as well as recruiting lecturers deemed suitable to work on the campaign. In 1911, he was employed by the Government as an "official explainer"—touring the country to explain Chancellor of the Exchequer David Lloyd George's National Insurance Act. He spent the summer of that year touring Essex and Somerset on a bicycle, explaining the Act at public meetings. A year later, he became a lecturer at the London School of Economics, teaching social science and public administration.
Following the outbreak of the First World War in August 1914, Attlee applied to join the British Army. Initially his application was turned down, as at 31 he was considered too old; however, he was eventually commissioned as a temporary lieutenant in the 6th (Service) Battalion, South Lancashire Regiment, on 30 September 1914. On 9 February 1915 he was promoted to captain, and on 14 March was appointed battalion adjutant. The 6th South Lancashires were part of the 38th Brigade of the 13th (Western) Division, which served in the Gallipoli campaign in Turkey. Attlee's decision to fight caused a rift between him and his older brother Tom, who, as a conscientious objector, spent much of the war in prison.
After a period spent fighting in Gallipoli, Attlee collapsed after falling ill with dysentery and was put on a ship bound for England to recover. Keen to get back to action as soon as possible, he asked to be let off the ship in Malta, where he stayed in hospital to recover. His hospitalisation coincided with the Battle of Sari Bair, which saw a large number of his comrades killed. Upon returning to action, he was informed that his company had been chosen to hold the final lines during the evacuation of Suvla. As such, he was the penultimate man to be evacuated from Suvla Bay, the last being General Stanley Maude.
The Gallipoli Campaign had been engineered by the First Lord of the Admiralty, Winston Churchill. Although it was unsuccessful, Attlee believed that it was a bold strategy which could have been successful if it had been better implemented on the ground. This led to an admiration for Churchill as a military strategist, something which would make their working relationship in later years productive.
He later served in the Mesopotamian campaign in what is now Iraq, where in April 1916 he was badly wounded, being hit in the leg by shrapnel from friendly fire while storming an enemy trench during the Battle of Hanna. The battle was an unsuccessful attempt to relieve the Siege of Kut, and many of Attlee's fellow soldiers were also wounded or killed. He was sent firstly to India, and then back to the UK to recover. On 18 December 1916 he was transferred to the Heavy Section of the Machine Gun Corps, and on 1 March 1917 he was promoted to the temporary rank of major, leading him to be known as "Major Attlee" for much of the inter-war period. He spent most of 1917 training soldiers at various locations in England. From 2 to 9 July 1917, he was the temporary commanding officer (CO) of the newly formed L (later 10th) Battalion, the Tank Corps at Bovington Camp, Dorset. From 9 July, he assumed command of the 30th Company of the same battalion; however, he did not deploy to France with it in December 1917, as he was transferred back to the South Lancashire Regiment on 28 November.
After fully recovering from his injuries, he was sent to France in June 1918 to serve on the Western Front for the final months of the war. After being discharged from the Army in January 1919, he returned to Stepney and resumed his old job lecturing part-time at the London School of Economics.
Attlee returned to local politics in the immediate post-war period, becoming mayor of the Metropolitan Borough of Stepney, one of London's most deprived inner-city boroughs, in 1919. During his time as mayor, the council undertook action to tackle slum landlords who charged high rents but refused to spend money on keeping their property in habitable condition. The council served and enforced legal orders on homeowners to repair their property. It also appointed health visitors and sanitary inspectors, reducing the infant mortality rate, and took action to find work for returning unemployed ex-servicemen.
In 1920, while mayor, he wrote his first book, The Social Worker, which set out many of the principles that informed his political philosophy and that were to underpin the actions of his government in later years. The book attacked the idea that looking after the poor could be left to voluntary action. He wrote that:
In a civilised community, although it may be composed of self-reliant individuals, there will be some persons who will be unable at some period of their lives to look after themselves, and the question of what is to happen to them may be solved in three ways – they may be neglected, they may be cared for by the organised community as of right, or they may be left to the goodwill of individuals in the community. [...] Charity is only possible without loss of dignity between equals. A right established by law, such as that to an old age pension, is less galling than an allowance made by a rich man to a poor one, dependent on his view of the recipient's character, and terminable at his caprice.
In 1921, George Lansbury, the Labour mayor of the neighbouring borough of Poplar and a future Labour Party leader, launched the Poplar Rates Rebellion, a campaign of disobedience seeking to equalise the poor relief burden across all the London boroughs. Attlee, who was a personal friend of Lansbury, strongly supported this. However, Herbert Morrison, the Labour mayor of nearby Hackney and one of the main figures in the London Labour Party, strongly denounced Lansbury and the rebellion. During this period, Attlee developed a lifelong dislike of Morrison.
At the 1922 general election, Attlee became the Member of Parliament (MP) for the constituency of Limehouse in Stepney. At the time, he admired Ramsay MacDonald and helped him get elected as Labour Party leader at the 1922 leadership election. He served as MacDonald's Parliamentary Private Secretary for the brief 1922 parliament. His first taste of ministerial office came in 1924, when he served as Under-Secretary of State for War in the short-lived first Labour government, led by MacDonald.
Attlee opposed the 1926 General Strike, believing that strike action should not be used as a political weapon. However, when it happened, he did not attempt to undermine it. At the time of the strike, he was chairman of the Stepney Borough Electricity Committee. He negotiated a deal with the Electrical Trade Union so that they would continue to supply power to hospitals, but would end supplies to factories. One firm, Scammell and Nephew Ltd, took a civil action against Attlee and the other Labour members of the committee (although not against the Conservative members who had also supported this). The court found against Attlee and his fellow councillors and they were ordered to pay £300 damages. The decision was later reversed on appeal, but the financial problems caused by the episode almost forced Attlee out of politics.
In 1927, he was appointed a member of the multi-party Simon Commission, a royal commission set up to examine the possibility of granting self-rule to India. Due to the time he needed to devote to the commission, and contrary to a promise MacDonald made to Attlee to induce him to serve on the commission, he was not initially offered a ministerial post in the Second Labour Government, which entered office after the 1929 general election. Attlee's service on the Commission equipped him with a thorough exposure to India and many of its political leaders. By 1933 he argued that British rule was alien to India and was unable to make the social and economic reforms necessary for India's progress. He became the British leader most sympathetic to Indian independence (as a dominion), preparing him for his role in deciding on independence in 1947.
In May 1930, Labour MP Oswald Mosley left the party after its rejection of his proposals for solving the unemployment problem, and Attlee was given Mosley's post of Chancellor of the Duchy of Lancaster. In March 1931, he became Postmaster General, a post he held for five months until August, when the Labour government fell after failing to agree on how to tackle the financial crisis of the Great Depression. That month MacDonald and a few of his allies formed a National Government with the Conservatives and Liberals, leading to their expulsion from the Labour Party. MacDonald offered Attlee a job in the National Government, but he turned down the offer and opted to stay loyal to the main Labour Party.
After Ramsay MacDonald formed the National Government, Labour was deeply divided. Attlee had long been close to MacDonald and now felt betrayed—as did most Labour politicians. During the course of the second Labour government, Attlee had become increasingly disillusioned with MacDonald, whom he came to regard as vain and incompetent, and of whom he later wrote scathingly in his autobiography. He would write:
In the old days I had looked up to MacDonald as a great leader. He had a fine presence and great oratorical power. The unpopular line which he took during the First World War seemed to mark him as a man of character. Despite his mishandling of the Red Letter episode, I had not appreciated his defects until he took office a second time. I then realised his reluctance to take positive action and noted with dismay his increasing vanity and snobbery, while his habit of telling me, a junior Minister, the poor opinion he had of all his Cabinet colleagues made an unpleasant impression. I had not, however, expected that he would perpetrate the greatest betrayal in the political history of this country ... The shock to the Party was very great, especially to the loyal workers of the rank-and-file who had made great sacrifices for these men.
The 1931 general election held later that year was a disaster for the Labour Party, which lost over 200 seats, returning only 52 MPs to Parliament. The vast majority of the party's senior figures, including the Leader Arthur Henderson, lost their seats. Attlee, however, narrowly retained his Limehouse seat, with his majority being slashed from 7,288 to just 551. He was one of only three Labour MPs who had experience of government to retain their seats, along with George Lansbury and Stafford Cripps. Accordingly, Lansbury was elected Leader unopposed with Attlee as his deputy.
Most of the remaining Labour MPs after 1931 were elderly trade union officials who could not contribute much to debates; Lansbury was in his 70s, and Stafford Cripps, another main figure of the Labour front bench who had entered Parliament in 1931, was inexperienced. As one of the most capable and experienced of the remaining Labour MPs, Attlee therefore shouldered much of the burden of providing an opposition to the National Government in the years 1931–35. During this time he had to extend his knowledge of subjects he had not previously studied in any depth, such as finance and foreign affairs, in order to provide an effective opposition to the government.
Attlee effectively served as acting leader for nine months from December 1933, after Lansbury fractured his thigh in an accident, which raised Attlee's public profile considerably. It was during this period, however, that personal financial problems almost forced Attlee to quit politics altogether. His wife had become ill, and at that time there was no separate salary for the Leader of the Opposition. On the verge of resigning from Parliament, he was persuaded to stay by Stafford Cripps, a wealthy socialist, who agreed to make a donation to party funds to pay him an additional salary until Lansbury could take over again.
During 1932–33 Attlee flirted with, and then drew back from, radicalism, influenced by Stafford Cripps, who was then on the radical wing of the party. He was briefly a member of the Socialist League, which had been formed by former Independent Labour Party (ILP) members who opposed the ILP's disaffiliation from the main Labour Party in 1932. At one point he agreed with the proposition put forward by Cripps that gradual reform was inadequate and that a socialist government would have to pass an emergency powers act, allowing it to rule by decree to overcome any opposition by vested interests until it was safe to restore democracy. He admired Oliver Cromwell's strong-armed rule and use of major-generals to control England. After looking more closely at Hitler, Mussolini, Stalin, and even his former colleague Oswald Mosley, leader of the new blackshirt fascist movement in Britain, Attlee retreated from his radicalism, distanced himself from the League, and argued instead that the Labour Party must adhere to constitutional methods and stand forthright for democracy and against totalitarianism of either the left or the right. He always supported the crown, and as Prime Minister was close to King George VI.
George Lansbury, a committed pacifist, resigned as the Leader of the Labour Party at the 1935 Party Conference on 8 October, after delegates voted in favour of sanctions against Italy for its aggression against Abyssinia. Lansbury had strongly opposed the policy, and felt unable to continue leading the party. Taking advantage of the disarray in the Labour Party, the Prime Minister Stanley Baldwin announced on 19 October that a general election would be held on 14 November. With no time for a leadership contest, the party agreed that Attlee should serve as interim leader, on the understanding that a leadership election would be held after the general election. Attlee therefore led Labour through the 1935 election, which saw the party stage a partial comeback from its disastrous 1931 performance, winning 38 per cent of the vote, the highest share Labour had won up to that point, and gaining over one hundred seats.
Attlee stood in the subsequent leadership election, held soon afterwards, in which he was opposed by Herbert Morrison, who had just re-entered Parliament in the recent election, and by Arthur Greenwood. Morrison was seen as the favourite, but was distrusted by many sections of the party, especially the left wing. Greenwood, meanwhile, was a popular figure in the party; however, his leadership bid was severely hampered by his alcohol problem. Attlee was able to come across as a competent and unifying figure, particularly having already led the party through a general election. He came first in both the first and second ballots, and was formally elected Leader of the Labour Party on 3 December 1935.
Throughout the 1920s and most of the 1930s, the Labour Party's official policy had been to oppose rearmament, instead supporting internationalism and collective security under the League of Nations. At the 1934 Labour Party Conference, Attlee declared that, "We have absolutely abandoned any idea of nationalist loyalty. We are deliberately putting a world order before our loyalty to our own country. We say we want to see put on the statute book something which will make our people citizens of the world before they are citizens of this country". During a debate on defence in the Commons a year later, Attlee said "We are told (in the White Paper) that there is danger against which we have to guard ourselves. We do not think you can do it by national defence. We think you can only do it by moving forward to a new world. A world of law, the abolition of national armaments with a world force and a world economic system. I shall be told that that is quite impossible". Shortly after those comments, Adolf Hitler proclaimed that German rearmament offered no threat to world peace. Attlee responded the next day, noting that Hitler's speech, although containing unfavourable references to the Soviet Union, created "A chance to call a halt in the armaments race ... We do not think that our answer to Herr Hitler should be just rearmament. We are in an age of rearmaments, but we on this side cannot accept that position".
Attlee played little part in the events that led up to the abdication of Edward VIII; despite Baldwin's threat to step down if Edward attempted to remain on the throne after marrying Wallis Simpson, Labour was widely accepted not to be a viable alternative government, owing to the National Government's overwhelming majority in the Commons. Attlee, along with the Liberal leader Archibald Sinclair, was eventually consulted by Baldwin on 24 November 1936, and agreed with both Baldwin and Sinclair that Edward could not remain on the throne, firmly eliminating any prospect of an alternative government forming were Baldwin to resign.
In April 1936, the Chancellor of the Exchequer, Neville Chamberlain, introduced a Budget which increased the amount spent on the armed forces. Attlee made a radio broadcast in opposition to it, saying:
[The budget] was the natural expression of the character of the present Government. There was hardly any increase allowed for the services which went to build up the life of the people, education and health. Everything was devoted to piling up the instruments of death. The Chancellor expressed great regret that he should have to spend so much on armaments, but said that it was absolutely necessary and was due only to the actions of other nations. One would think to listen to him that the Government had no responsibility for the state of world affairs. [...] The Government has now resolved to enter upon an arms race, and the people will have to pay for their mistake in believing that it could be trusted to carry out a policy of peace. [...] This is a War Budget. We can look in the future for no advance in Social Legislation. All available resources are to be devoted to armaments.
In June 1936, the Conservative MP Duff Cooper called for an Anglo-French alliance against possible German aggression and called for all parties to support one. Attlee condemned this: "We say that any suggestion of an alliance of this kind—an alliance in which one country is bound to another, right or wrong, by some overwhelming necessity—is contrary to the spirit of the League of Nations, is contrary to the Covenant, is contrary to Locarno, is contrary to the obligations which this country has undertaken, and is contrary to the professed policy of this Government". At the Labour Party conference at Edinburgh in October, Attlee reiterated that "There can be no question of our supporting the Government in its rearmament policy".
However, with the rising threat from Nazi Germany, and the ineffectiveness of the League of Nations, this policy eventually lost credibility. By 1937, Labour had jettisoned its pacifist position and came to support rearmament and oppose Neville Chamberlain's policy of appeasement.
At the end of 1937, Attlee and a party of three Labour MPs travelled to Spain, where they visited the British Battalion of the International Brigades fighting in the Spanish Civil War. One of its companies was named the "Major Attlee Company" in his honour. Attlee was supportive of the Republican government, and at the 1937 Labour conference moved the wider Labour Party towards opposing what he considered the "farce" of the Non-Intervention Committee organised by the British and French governments. In the House of Commons, Attlee stated, "I cannot understand the delusion that if Franco wins with Italian and German aid, he will immediately become independent. I think it is a ridiculous proposition." Dalton, the Labour Party's spokesman on foreign policy, also thought that Franco would ally with Germany and Italy. However, Franco's subsequent behaviour proved it was not such a ridiculous proposition. As Dalton later acknowledged, Franco skilfully maintained Spanish neutrality, whereas Hitler would likely have occupied Spain if Franco had lost the Civil War.
In 1938, Attlee opposed the Munich Agreement, in which Chamberlain negotiated with Hitler to give Germany the German-speaking parts of Czechoslovakia, the Sudetenland:
We all feel relief that war has not come this time... we cannot, however, feel that peace has been established, but that we have nothing but an armistice in a state of war. We have been unable to go in for care-free rejoicing. We have felt that we are in the midst of a tragedy... [and] humiliation. This has not been a victory for reason and humanity. It has been a victory for brute force. At every stage of the proceedings there have been time limits laid down... [the] terms laid down as ultimata. We have seen to-day a gallant, civilised and democratic people betrayed and handed over to a ruthless despotism... The events of these last few days constitute one of the greatest diplomatic defeats that this country and France have ever sustained. There can be no doubt that it is a tremendous victory for Herr Hitler. Without firing a shot, by the mere display of military force, he has achieved a dominating position in Europe which Germany failed to win after four years of war. He has overturned the balance of power in Europe... [and] destroyed the last fortress of democracy in Eastern Europe which stood in the way of his ambition. He has opened his way to the food, the oil and the resources which he requires in order to consolidate his military power, and he has successfully defeated and reduced to impotence the forces that might have stood against the rule of violence. [...] The cause [of the crisis which we have undergone] was not the existence of minorities in Czechoslovakia; it was not that the position of the Sudeten Germans had become intolerable. It was not the wonderful principle of self-determination. It was because Herr Hitler had decided that the time was ripe for another step forward in his design to dominate Europe... The minorities question is no new one. [...] [And] short of a drastic and entire reshuffling of these populations there is no possible solution to the problem of minorities in Europe except toleration.
However, the new Czechoslovakian state did not provide equal rights to the Slovaks and Sudeten Germans, with the historian Arnold J. Toynbee already having noted that "for the Germans, Magyars and Poles, who account between them for more than one quarter of the whole population, the present regime in Czechoslovakia is not essentially different from the regimes in the surrounding countries". Anthony Eden in the Munich debate acknowledged that there had been "discrimination, even severe discrimination" against the Sudeten Germans.
In 1937, Attlee wrote a book entitled The Labour Party in Perspective, which sold fairly well and in which he set out some of his views. He argued that there was no point in Labour compromising on its socialist principles in the belief that this would achieve electoral success. He wrote: "I find that the proposition often reduces itself to this – that if the Labour Party would drop its socialism and adopt a Liberal platform, many Liberals would be pleased to support it. I have heard it said more than once that if Labour would only drop its policy of nationalisation everyone would be pleased, and it would soon obtain a majority. I am convinced it would be fatal for the Labour Party." He also wrote that there was no point in "watering down Labour's socialist creed in order to attract new adherents who cannot accept the full socialist faith. On the contrary, I believe that it is only a clear and bold policy that will attract this support".
In the late 1930s, Attlee sponsored a Jewish mother and her two children, enabling them to leave Germany in 1939 and move to the UK. On arriving in Britain, Attlee invited one of the children into his home in Stanmore, north-west London, where he stayed for several months.
Attlee remained as Leader of the Opposition when the Second World War broke out in September 1939. The ensuing disastrous Norwegian campaign would result in a motion of no confidence in Neville Chamberlain. Although Chamberlain survived this, the reputation of his administration was so badly and publicly damaged that it became clear a coalition government would be necessary. Even if Attlee had personally been prepared to serve under Chamberlain in an emergency coalition government, he would never have been able to carry Labour with him. Consequently, Chamberlain tendered his resignation, and Labour and the Conservatives entered a coalition government led by Winston Churchill on 10 May 1940, with Attlee joining the Cabinet as Lord Privy Seal on 12 May.
Attlee and Churchill quickly agreed that the War Cabinet would consist of three Conservatives (initially Churchill, Chamberlain and Lord Halifax) and two Labour members (initially himself and Arthur Greenwood) and that Labour should have slightly more than one third of the posts in the coalition government. Attlee and Greenwood played a vital role in supporting Churchill during a series of War Cabinet debates over whether or not to negotiate peace terms with Hitler following the Fall of France in May 1940; both supported Churchill and gave him the majority he needed in the War Cabinet to continue Britain's resistance.
Only Attlee and Churchill remained in the War Cabinet from the formation of the Government of National Unity in May 1940 through to the election in May 1945. Attlee was initially the Lord Privy Seal, before becoming Britain's first ever Deputy Prime Minister in 1942, as well as becoming the Dominions Secretary and Lord President of the Council on 28 September 1943.
Attlee played a generally low-key but vital role in the wartime government, working behind the scenes and in committees to ensure the smooth operation of government. In the coalition government, three inter-connected committees effectively ran the country. Churchill chaired the first two, the War Cabinet and the Defence Committee, with Attlee deputising for him in these and answering for the government in Parliament when Churchill was absent. Attlee instituted, and later chaired, the third body, the Lord President's Committee, which was responsible for overseeing domestic affairs. As Churchill was most concerned with overseeing the war effort, this arrangement suited both men. Attlee had largely been responsible for creating these arrangements with Churchill's backing, streamlining the machinery of government and abolishing many committees. He also acted as a conciliator in the government, smoothing over tensions which frequently arose between Labour and Conservative ministers.
Many Labour activists were baffled by the top leadership role for a man they regarded as having little charisma; Beatrice Webb recorded similar doubts about him in her diary in early 1940.
Following the defeat of Nazi Germany and the end of the War in Europe in May 1945, Attlee and Churchill favoured the coalition government remaining in place until Japan had been defeated. However, Herbert Morrison made it clear that the Labour Party would not be willing to accept this, and Churchill was forced to tender his resignation as Prime Minister and call an immediate election.
The war had set in motion profound social changes within Britain and had ultimately led to a widespread popular desire for social reform. This mood was epitomised in the Beveridge Report of 1942, by the Liberal economist William Beveridge. The Report assumed that the maintenance of full employment would be the aim of post-war governments, and that this would provide the basis for the welfare state. Immediately upon its release, it sold hundreds of thousands of copies. All major parties committed themselves to fulfilling this aim, but most historians say that Attlee's Labour Party was seen by the electorate as the party most likely to follow it through.
Labour campaigned on the theme of "Let Us Face the Future", positioning themselves as the party best placed to rebuild Britain following the war, and were widely viewed as having run a strong and positive campaign, while the Conservative campaign centred entirely on Churchill. Although opinion polls indicated a strong Labour lead, polls were then viewed as a novelty that had not proven its worth, and most commentators expected that Churchill's prestige and status as a "war hero" would ensure a comfortable Conservative victory. Before polling day, The Manchester Guardian surmised that "the chances of Labour sweeping the country and obtaining a clear majority ... are pretty remote". The News of the World predicted a working Conservative majority, while in Glasgow a pundit forecast the result as Conservatives 360, Labour 220, Others 60. Churchill, however, made some costly errors during the campaign. In particular, his suggestion during one radio broadcast that a future Labour government would require "some form of a gestapo" to implement its policies was widely regarded as being in very bad taste and massively backfired.
When the results of the election were announced on 26 July, they came as a surprise to most, including Attlee himself. Labour had won power by a huge landslide, winning 47.7 per cent of the vote to the Conservatives' 36 per cent. This gave them 393 seats in the House of Commons, a working majority of 146. This was the first time in history that the Labour Party had won a majority in Parliament. When Attlee went to see King George VI at Buckingham Palace to be appointed Prime Minister, the notoriously laconic Attlee and the famously tongue-tied King stood in silence; Attlee finally volunteered the remark, "I've won the election". The King replied "I know. I heard it on the Six O'Clock News".
Francis (1995) argues there was consensus, both in Labour's National Executive Committee and at party conferences, on a definition of socialism that stressed moral as well as material improvement. The Attlee government was committed to rebuilding British society as an ethical commonwealth, using public ownership and controls to abolish extremes of wealth and poverty. Labour's ideology contrasted sharply with the contemporary Conservative Party's defence of individualism, inherited privileges, and income inequality. On 5 July 1948, Attlee replied to a letter dated 22 June from James Murray and ten other MPs who raised concerns about West Indians who had arrived on board the HMT Empire Windrush. As for the prime minister himself, he was not much focused on economic policy, letting others handle the issues.
Attlee's government also carried out their manifesto commitment for nationalisation of basic industries and public utilities. The Bank of England and civil aviation were nationalised in 1946. Coal mining, the railways, road haulage, canals and Cable and Wireless were nationalised in 1947, and electricity and gas followed in 1948. The steel industry was nationalised in 1951. By 1951 about 20 per cent of the British economy had been taken into public ownership.
Nationalisation failed to provide workers with a greater say in the running of the industries in which they worked. It did, however, bring about significant material gains for workers in the form of higher wages, reduced working hours, and improvements in working conditions, especially with regard to safety. As the historian Eric Shaw noted of the years following nationalisation, the electricity and gas supply companies became "impressive models of public enterprise" in terms of efficiency, and the National Coal Board was not only profitable, but working conditions for miners had significantly improved as well.
Within a few years of nationalisation, a number of progressive measures had been carried out which did much to improve conditions in the mines, including better pay, a five-day working week, a national safety scheme (with proper standards at all the collieries), a ban on boys under the age of 16 going underground, the introduction of training for newcomers before going down to the coalface, and the making of pithead baths into a standard facility.
The newly established National Coal Board offered sick pay and holiday pay to miners. As noted by Martin Francis:
Union leaders saw nationalisation as a means to pursue a more advantageous position within a framework of continued conflict, rather than as an opportunity to replace the old adversarial form of industrial relations. Moreover, most workers in nationalised industries exhibited an essentially instrumentalist attitude, favouring public ownership because it secured job security and improved wages rather than because it promised the creation of a new set of socialist relationships in the workplace.
The Attlee government placed strong emphasis on improving the quality of life in rural areas, benefiting both farmers and other consumers. Security of tenure for farmers was introduced, while consumers were protected by food subsidies and the redistributive effects of deficiency payments. Between 1945 and 1951, the quality of rural life was improved by improvements in gas, electricity, and water services, as well as in leisure and public amenities. In addition, the 1947 Transport Act improved provision of rural bus services, while the Agriculture Act 1947 established a more generous subsidy system for farmers. Legislation was also passed in 1947 and 1948 which established a permanent Agricultural Wages Board to fix minimum wages for agricultural workers.
Attlee's government made it possible for farm workers to borrow up to 90 per cent of the cost of building their own houses and to receive a subsidy of £15 a year for 40 years towards that cost. Grants were also made to meet up to half the cost of supplying water to farm buildings and fields, the government met half the cost of bracken eradication and lime spreading, and grants were paid for bringing into use hill farming land that had previously been considered unfit for farming.
In 1946, the National Agricultural Advisory Service was set up to supply agricultural advice and information. The Hill Farming Act 1946 introduced for upland areas a system of grants for buildings, land improvement, and infrastructural improvements such as roads and electrification. The Act also continued a system of headage payments for hill sheep and cattle that had been introduced during the war. The Agricultural Holdings Act 1948 enabled (in effect) tenant farmers to have lifelong tenancies and made provision for compensation in the event of cessations of tenancies. In addition, the Livestock Rearing Act 1951 extended the provisions of the Hill Farming Act 1946 to the upland store cattle and sheep sector.
At a time of world food shortages, it was vital that farmers produced the maximum possible quantities. The government encouraged farmers via subsidies for modernisation, while the National Agricultural Advisory Service provided expertise and price guarantees. As a result of the Attlee government's initiatives in agriculture, there was a 20 per cent increase in output between 1947 and 1952, and Britain came to have one of the most mechanised and efficient farming industries in the world.
The Attlee government ensured provisions of the Education Act 1944 were fully implemented, with free secondary education becoming a right for the first time. Fees in state grammar schools were eliminated, while new, modern secondary schools were constructed.
The school leaving age was raised to 15 in 1947, an accomplishment brought to fruition with the help of initiatives such as the HORSA ("Huts Operation for Raising the School-leaving Age") scheme and the S.F.O.R.S.A. (furniture) scheme. University scholarships were introduced to ensure that no one who was qualified "should be deprived of a university education for financial reasons", while a large school building programme was organised. A rapid increase in the number of trained teachers took place, and the number of new school places was increased.
Increased Treasury funds were made available for education, particularly for upgrading school buildings suffering from years of neglect and war damage. Prefabricated classrooms were built, and 928 new primary schools were constructed between 1945 and 1950. The provision of free school meals was expanded, and opportunities for university entrants were increased. State scholarships to universities were increased, and the government adopted a policy of supplementing university scholarships awards to a level sufficient to cover fees plus maintenance.
Many thousands of ex-servicemen who could never have contemplated a college education before the war were assisted to go through college. Free milk was also made available to all schoolchildren for the first time. In addition, spending on technical education rose, and the number of nursery schools was increased. Salaries for teachers were also improved, and funds were allocated towards improving existing schools.
In 1947 the Arts Council of Great Britain was set up to encourage the arts.
The Ministry of Education was established under the 1944 Act, and free County Colleges were set up for the compulsory part-time instruction of teenagers between the ages of 15 and 18 who were not in full-time education. An Emergency Training Scheme was also introduced which turned out an extra 25,000 teachers in 1945–1951. In 1947, Regional Advisory Councils were set up to bring together industry and education to find out the needs of young workers "and advise on the provision required, and to secure reasonable economy of provision". That same year, thirteen Area Training Organisations were set up in England and one in Wales to coordinate teacher training.
Attlee's government, however, failed to introduce the comprehensive education for which many socialists had hoped. This reform was eventually carried out by Harold Wilson's government. During its time in office, the Attlee government increased spending on education by over 50 per cent, from £6.5 billion to £10 billion.
The most significant problem facing Attlee and his ministers remained the economy, as the war effort had left Britain nearly bankrupt. Overseas investments had been used up to pay for the war. The transition to a peacetime economy, and the maintenance of strategic military commitments abroad, led to continuous and severe problems with the balance of trade. This resulted in strict rationing of food and other essential goods continuing in the post-war period, to force a reduction in consumption in an effort to limit imports, boost exports, and stabilise the pound sterling so that Britain could trade its way out of its financial state.
The abrupt end of the American Lend-Lease programme in August 1945 almost caused a crisis. Some relief was provided by the Anglo-American loan, negotiated in December 1945. The conditions attached to the loan included making the pound fully convertible to the US dollar. When this was introduced in July 1947, it led to a currency crisis and convertibility had to be suspended after just five weeks. The UK benefited from the American Marshall Aid program in 1948, and the economic situation improved significantly. Another balance of payments crisis in 1949 forced Chancellor of the Exchequer, Stafford Cripps, into devaluation of the pound.
Despite these problems, one of the main achievements of Attlee's government was the maintenance of near full employment. The government maintained most of the wartime controls over the economy, including control over the allocation of materials and manpower, and unemployment rarely rose above 500,000, around 3 per cent of the total workforce, with no hard core of long-term unemployed. Labour shortages proved a more frequent problem. The inflation rate was also kept low during his term. Both production and productivity rose as a result of new equipment, while the average working week was shortened.
The government was less successful in housing, which was the responsibility of Aneurin Bevan. The government had a target to build 400,000 new houses a year to replace those which had been destroyed in the war, but shortages of materials and manpower meant that less than half this number were built. Nevertheless, millions of people were rehoused as a result of the Attlee government's housing policies. Between August 1945 and December 1951, 1,016,349 new homes were completed in England, Scotland, and Wales.
When the Attlee government was voted out of office in 1951, the economy had improved compared to 1945. The period from 1946 to 1951 saw continuous full employment and steadily rising living standards, which increased by about 10 per cent each year. During that same period, the economy grew by 3 per cent a year, and by 1951 the UK had "the best economic performance in Europe, while output per person was increasing faster than in the United States". Careful planning after 1945 also ensured that demobilisation was carried out without having a negative impact upon economic recovery, and that unemployment stayed at very low levels. In addition, the number of motor cars on the roads rose from 3 million to 5 million between 1945 and 1951, and seaside holidays were taken by far more people than ever before. The Monopolies and Restrictive Practices (Inquiry and Control) Act was passed in 1948, allowing for investigations of restrictive practices and monopolies. However, some economic historians have argued that the UK failed to develop economically after the war, with a failure to support industry leaving the economy recovering less effectively than Germany's.
1947 proved a particularly difficult year for the government: an exceptionally cold winter caused coal mines to freeze and cease production, creating widespread power cuts and food shortages. The Minister of Fuel and Power, Emanuel Shinwell, was widely blamed for failing to ensure adequate coal stocks, and soon resigned from his post. The Conservatives capitalised on the crisis with the slogan "Starve with Strachey and shiver with Shinwell" (referring to the Minister of Food, John Strachey).
The crisis led to an unsuccessful plot by Hugh Dalton to replace Attlee as Prime Minister with Ernest Bevin. Later that year Stafford Cripps tried to persuade Attlee to stand aside for Bevin. These plots petered out after Bevin refused to cooperate. Later that year, Dalton resigned as Chancellor after inadvertently leaking details of the budget to a journalist. He was replaced by Cripps.
In foreign affairs, the Attlee government was concerned with four main issues: post-war Europe, the onset of the Cold War, the establishment of the United Nations, and decolonisation. The first two were closely related, and Attlee was assisted by Foreign Secretary Ernest Bevin. Attlee also attended the later stages of the Potsdam Conference, where he negotiated with President Harry S. Truman and Joseph Stalin.
In the immediate aftermath of the war, the government faced the challenge of managing relations with Britain's former wartime ally, the Soviet Union under Stalin. Ernest Bevin was a passionate anti-communist, based largely on his experience of fighting communist influence in the trade union movement. Bevin's initial approach to the USSR as Foreign Secretary was "wary and suspicious, but not automatically hostile". Attlee himself sought warm relations with Stalin. He put his trust in the United Nations, rejected notions that the Soviet Union was bent on world conquest, and warned that treating Moscow as an enemy would turn it into one. This put Attlee at sword's point with his Foreign Secretary, the Foreign Office, and the military, who all saw the Soviets as a growing threat to Britain's role in the Middle East. Suddenly, in January 1947, Attlee reversed his position and agreed with Bevin on a hardline anti-Soviet policy.
In an early "good-will" gesture that was later heavily criticised, the Attlee government allowed the Soviets to purchase, under the terms of a 1946 UK–USSR trade agreement, a total of 25 Rolls-Royce Nene jet engines in September 1947 and March 1948. The deal included a promise not to use them for military purposes. The price was fixed under a commercial contract; a total of 55 jet engines were sold to the USSR in 1947. However, the Cold War intensified during this period and the Soviets, who at the time were well behind the West in jet technology, reverse-engineered the Nene and installed their own version in the MiG-15 interceptor. This was used to good effect against US–UK forces in the subsequent Korean War, as well as in several later MiG models.
After Stalin took political control of most of Eastern Europe, and began to subvert other governments in the Balkans, Attlee's and Bevin's worst fears of Soviet intentions were realised. The Attlee government then became instrumental in the creation of the successful NATO defence alliance to protect Western Europe against any Soviet expansion. In a crucial contribution to the economic stability of post-war Europe, Attlee's Cabinet was instrumental in promoting the American Marshall Plan for the economic recovery of Europe. He called it one of the "most bold, enlightened and good-natured acts in the history of nations".
A group of Labour MPs, organised under the banner of "Keep Left", urged the government to steer a middle way between the two emerging superpowers, and advocated the creation of a "third force" of European powers to stand between the US and USSR. However, deteriorating relations between Britain and the USSR, as well as Britain's economic reliance on America following the Marshall Plan, steered policy towards supporting the US. In January 1947, fear of both Soviet and American nuclear intentions led to a secret meeting of the Cabinet, where the decision was made to press ahead with the development of Britain's independent nuclear deterrent, an issue which later caused a split in the Labour Party. Britain's first successful nuclear test, however, did not occur until 1952, one year after Attlee had left office.
The London dock strike of July 1949, led by Communists, was suppressed when the Attlee government sent in 13,000 Army troops and passed special legislation to promptly end the strike. His response reflected Attlee's growing concern that Soviet expansionism, supported by the British Communist Party, was a genuine threat to national security, and that the docks were highly vulnerable to sabotage ordered by Moscow. He noted that the strike was caused not by local grievances but by a desire to help communist unions who were on strike in Canada. Attlee agreed with MI5 that he faced "a very present menace".
Decolonisation was never a major election issue, but Attlee gave the matter a great deal of attention and was the chief leader in beginning the process of decolonisation of the British Empire.
In August 1948, the Chinese Communists' victories caused Attlee to begin preparing for a Communist takeover of China. His government kept open consulates in Communist-controlled areas and rejected the Chinese Nationalists' requests that British citizens assist in the defence of Shanghai. By December, the government concluded that although British property in China would likely be nationalised, British traders would benefit in the long run from a stable, industrialising Communist China. Retaining Hong Kong was especially important to him; although the Chinese Communists promised not to interfere with its rule, Britain reinforced the Hong Kong garrison during 1949. When the victorious Chinese Communist government declared on 1 October 1949 that it would exchange diplomats with any country that ended relations with the Chinese Nationalists, Britain became the first western country to formally recognise the People's Republic of China, in January 1950. In 1954, a Labour Party delegation including Attlee visited China at the invitation of the then Foreign Minister Zhou Enlai. Attlee became the first high-ranking western politician to meet Mao Zedong.
Attlee orchestrated the granting of independence to India and Pakistan in 1947. In 1928–1934 he had been a member of the Indian Statutory Commission (otherwise known as the Simon Commission). He became the Labour Party's expert on India and by 1934 was committed to granting India the same independent dominion status that Canada, Australia, New Zealand and South Africa had recently been given. He faced strong resistance from the die-hard Conservative imperialists, led by Churchill, who opposed both independence and the efforts led by Prime Minister Stanley Baldwin to set up a system of limited local control by Indians themselves. Attlee and the Labour leadership were sympathetic both to the Congress, led by Jawaharlal Nehru, and to the Pakistan movement, led by Muhammad Ali Jinnah. During the Second World War, Attlee was in charge of Indian affairs. He set up the Cripps Mission in 1942, which tried and failed to bring the factions together. When Congress called for passive resistance in the Quit India movement of 1942–1945, the British regime ordered the widespread arrest and internment, for the duration of the war, of tens of thousands of Congress leaders as part of its efforts to crush the revolt.
Labour's 1945 election manifesto called for "the advancement of India to responsible self-government". In 1942 the British Raj had tried to enlist all major political parties in support of the war effort. Congress, led by Nehru and Gandhi, demanded immediate independence and full control by Congress of all of India. That demand was rejected by the British, and Congress opposed the war effort with its "Quit India" campaign. The Raj responded by imprisoning the major national, regional and local Congress leaders for the duration. Attlee did not object. By contrast, the Muslim League, led by Muhammad Ali Jinnah, strongly supported the war effort. It greatly enlarged its membership and won favour from London for its decision. Attlee retained a fondness for Congress and, until 1946, accepted its thesis that it was a non-religious party that accepted Hindus, Muslims, Sikhs, and everyone else. This difference in attitude between the Congress and the Muslim League towards the British war effort nevertheless encouraged Attlee and his government to consider further negotiations with the Muslim League.
The Muslim League insisted that it was the only true representative of all of the Muslims of India. With violence escalating in India after the war, but with British financial power at a low ebb, large-scale military involvement was impossible. Viceroy Wavell said he needed a further seven army divisions to prevent communal violence if independence negotiations failed. No divisions were available; independence was the only option. Given the increasing demands of the Muslim League, independence implied a partition that split off heavily Muslim Pakistan from the main portion of India. After becoming Prime Minister in 1945, Attlee originally planned to give India Dominion status in 1948.
Attlee suggested in his memoirs that "traditional" colonial rule in Asia was no longer viable. He said that he expected it to meet renewed opposition after the war both from local national movements and from the United States. The prime minister's biographer John Bew says that Attlee hoped for a transition to a multilateral world order and a Commonwealth, and that the old British empire "should not be supported beyond its natural lifespan" and instead be ended "on the right note." His Chancellor of the Exchequer, Hugh Dalton, meanwhile feared that post-war Britain could no longer afford to garrison its empire.
Ultimately the Labour government gave full independence to India and Pakistan in 1947 through the Indian Independence Act. This involved creating a demarcation between the two regions which became known as the Radcliffe Line. The boundary between the newly created states of Pakistan and India involved the widespread resettlement of millions of Hindus, Sikhs and Muslims. Almost immediately, extreme anti-Hindu and anti-Sikh violence ensued in Lahore, Multan and Dacca when the Punjab and Bengal provinces were split in the Partition of India. This was followed by a rapid increase in widespread anti-Muslim violence in several areas, including Amritsar, Rajkot, Jaipur, Calcutta and Delhi. The historian Yasmin Khan estimates that over a million people were killed, many of them women and children. Gandhi himself was assassinated in January 1948. Attlee described Gandhi as the "greatest citizen" of India and added, "this one man has been the major factor in every consideration of the Indian problem. He had become the expression of the aspirations of the Indian people for independence".
The historian Andrew Roberts says the independence of India was a "national humiliation" but was necessitated by urgent financial, administrative, strategic and political needs. Churchill in 1940–1945 had tightened the hold on India and imprisoned the Congress leadership, with Attlee's approval. Labour had looked forward to making it a fully independent dominion like Canada or Australia. Many of the Congress leaders in India had studied in England, and were highly regarded as fellow idealistic socialists by Labour leaders. Attlee was the Labour expert on India and took special charge of decolonisation. Attlee found that Churchill's viceroy, Field Marshal Wavell, was too imperialistic, too keen on military solutions, and too neglectful of Indian political alignments. The new viceroy was Lord Mountbatten, the dashing war hero and a cousin of the King.
Attlee also sponsored the peaceful transition to independence in 1948 of Burma (Myanmar) and Ceylon (Sri Lanka).
One of the most urgent problems facing Attlee concerned the future of the British mandate in Palestine, which had become too troublesome and expensive to handle. British policies in Palestine were perceived by the Zionist movement and the Truman administration to be pro-Arab and anti-Jewish, and Britain soon found itself unable to maintain public order in the face of a Jewish insurgency and a civil war.
During this period, 70,000 Holocaust survivors attempted to reach Palestine as part of the Aliyah Bet refugee movement. Attlee's government tried several tactics to prevent the migration. Five ships were bombed by the Secret Intelligence Service (though with no casualties), with a fake Palestinian group created to take responsibility. The navy apprehended over 50,000 refugees en route, interning them in detention camps in Cyprus. Conditions in the camps were harsh and attracted global criticism. Later, the refugee ship Exodus 1947 was sent back to mainland Europe instead of being taken to Cyprus.
In response to the increasingly unpopular mandate, Attlee ordered the evacuation of all British military personnel and handed over the issue to the United Nations, a decision which was widely supported by the general public in Britain. With the establishment of the state of Israel in 1948, the camps in Cyprus were eventually closed, with their former occupants finally completing their journey to the new country.
The government's policies with regard to the other colonies, particularly those in Africa, focused on keeping them as strategic Cold War assets while modernising their economies. The Labour Party had long attracted aspiring leaders from Africa and had developed elaborate plans before the war. Implementing them overnight with an empty treasury proved too challenging. A major military base was built in Kenya, and the African colonies came under an unprecedented degree of direct control from London. Development schemes were implemented to help solve Britain's post-war balance of payments crisis and raise African living standards. This "new colonialism" worked slowly, and had failures such as the Tanganyika groundnut scheme.
The 1950 election gave Labour a massively reduced majority of five seats, compared with the triple-digit majority of 1945. Although Labour was re-elected, Attlee regarded the result as very disappointing, and it was widely attributed to the effects of post-war austerity denting Labour's appeal to middle-class voters. With such a small majority leaving him dependent on a small number of MPs to govern, Attlee's second term was much tamer than his first. Some major reforms were nevertheless passed, particularly regarding industry in urban areas and regulations to limit air and water pollution.
By 1951, the Attlee government was exhausted, with several of its most senior ministers ailing or ageing, and with a lack of new ideas. Attlee's record of settling internal differences in the Labour Party ended in April 1951, when there was a damaging split over an austerity Budget brought in by the Chancellor, Hugh Gaitskell, to pay for the cost of Britain's participation in the Korean War. Aneurin Bevan resigned to protest against the new charges for "teeth and spectacles" in the National Health Service introduced by that Budget, and was joined in this action by several senior ministers, including the future Prime Minister Harold Wilson, then the President of the Board of Trade. Thus escalated a battle between the left and right wings of the Party that continues today. Finding it increasingly difficult to govern, Attlee called a snap election in October 1951, hoping to achieve a more workable majority and to regain authority. The gamble failed: Labour narrowly lost to the Conservative Party, despite winning considerably more votes (achieving the largest Labour vote in electoral history). Attlee tendered his resignation as Prime Minister the following day, after six years and three months in office.
Following the defeat in 1951, Attlee continued to lead the party as Leader of the Opposition. His last four years as leader were, however, widely seen as one of the Labour Party's weaker periods.
The period was dominated by infighting between the Labour Party's right wing, led by Hugh Gaitskell, and its left, led by Aneurin Bevan. Many Labour MPs felt that Attlee should have retired following the 1951 election and allowed a younger man to lead the party; Bevan openly called for him to stand down in the summer of 1954. One of Attlee's main reasons for staying on as leader was to frustrate the leadership ambitions of Herbert Morrison, whom he disliked for both political and personal reasons. At one time, Attlee had favoured Aneurin Bevan to succeed him as leader, but this became problematic after Bevan almost irrevocably split the party.
Attlee, now aged 72, contested the 1955 general election against Anthony Eden, which saw Labour lose 18 seats, and the Conservatives increase their majority.
In an interview with the News Chronicle columnist Percy Cudlipp in mid-September 1955, Attlee made clear his own thinking together with his preference for the leadership succession, stating:
Labour has nothing to gain by dwelling in the past. Nor do I think we can impress the nation by adopting a futile left-wingism. I regard myself as Left of Centre which is where a Party Leader ought to be. It is no use asking, 'What would Keir Hardie have done?' We must have at the top men brought up in the present age, not, as I was, in the Victorian Age.
He retired as Leader of the Labour Party on 7 December 1955, having led the party for twenty years, and on 14 December Hugh Gaitskell was elected as his successor.
He was one of the signatories of an agreement to convene a convention for drafting a world constitution, which led to the convening of a World Constituent Assembly to draft and adopt a Constitution for the Federation of Earth.
He subsequently retired from the House of Commons and was elevated to the peerage as Earl Attlee and Viscount Prestwood on 16 December 1955, taking his seat in the House of Lords on 25 January 1956. He believed that Eden had been forced into taking a strong stand on the Suez Crisis by his backbenchers. In 1958, Attlee, along with numerous notables, established the Homosexual Law Reform Society, which campaigned for the decriminalisation of homosexual acts in private between consenting adults, a reform voted through Parliament nine years later. In May 1961, he travelled to Washington, D.C., to meet President Kennedy.
In 1962, he spoke twice in the House of Lords against the British government's application to join the European Communities ("Common Market"). In his second speech, delivered in November, Attlee claimed that Britain had a separate parliamentary tradition from the Continental European countries that comprised the EC. He also claimed that if Britain became a member, EC rules would prevent the British government from planning the economy, and that Britain's traditional policy had been outward-looking rather than Continental.
He attended Winston Churchill's funeral in January 1965. He was frail by that time, and had to remain seated in the freezing cold as the coffin was carried, having tired himself out by standing at the rehearsal the previous day. He lived to see the Labour Party return to power under Harold Wilson in 1964, and also to see his old constituency of Walthamstow West fall to the Conservatives in a by-election in September 1967.
Attlee died peacefully in his sleep, of pneumonia, at Westminster Hospital on 8 October 1967, aged 84. Two thousand people attended his funeral in November, including the then-Prime Minister Harold Wilson and the Duke of Kent, representing the Queen. He was cremated and his ashes were buried at Westminster Abbey.
Upon his death, the earldom passed to his son Martin Richard Attlee, 2nd Earl Attlee (1927–1991), who defected from Labour to the SDP in 1981. It is now held by Clement Attlee's grandson John Richard Attlee, 3rd Earl Attlee. The third earl (a member of the Conservative Party) retained his seat in the Lords as one of the hereditary peers elected to remain under an amendment to Labour's House of Lords Act 1999.
Attlee's estate was sworn for probate purposes at a value of £7,295 (equivalent to £140,865 in 2021), a relatively modest sum for so prominent a figure, and only a fraction of the £75,394 left by his father when he died in 1908.
The quotation about Attlee, "A modest man, but then he has so much to be modest about", is commonly ascribed to Churchill, though Churchill denied saying it and respected Attlee's service in the War Cabinet. Attlee's modesty and quiet manner hid a great deal that has only come to light with historical reappraisal. Attlee himself is said to have responded to critics with a limerick: "There were few who thought him a starter, / Many who thought themselves smarter; / But he ended PM, / CH and OM, / An Earl and a Knight of the Garter."
The journalist and broadcaster Anthony Howard called him "the greatest Prime Minister of the 20th century".
His leadership style of consensual government, acting as a chairman rather than a president, won him much praise from historians and politicians alike. Christopher Soames, the British Ambassador to France during the Conservative government of Edward Heath and cabinet minister under Margaret Thatcher, remarked that "Mrs Thatcher was not really running a team. Every time you have a Prime Minister who wants to make all the decisions, it mainly leads to bad results. Attlee didn't. That's why he was so damn good".
Thatcher herself wrote in her 1995 memoirs, which charted her life from her beginnings in Grantham to her victory at the 1979 general election, that she admired Attlee, writing: "Of Clement Attlee, however, I was an admirer. He was a serious man and a patriot. Quite contrary to the general tendency of politicians in the 1990s, he was all substance and no show".
Attlee's government presided over the successful transition from a wartime economy to peacetime, tackling problems of demobilisation, shortages of foreign currency, and deficits in the balance of trade and in government finances. Further domestic policies that he brought about included the creation of the National Health Service and the post-war welfare state, which became key to the reconstruction of post-war Britain. Attlee and his ministers did much to transform the UK into a more prosperous and egalitarian society during their time in office, with reductions in poverty and a rise in the general economic security of the population.
In foreign affairs, he did much to assist with the post-war economic recovery of Europe, and he proved a loyal ally of the US at the onset of the Cold War. In keeping with his style of leadership, it was not Attlee himself but Ernest Bevin who masterminded foreign policy. It was Attlee's government that decided Britain should have an independent nuclear weapons programme, and work on it began in 1947.
Bevin, Attlee's Foreign Secretary, famously stated that "We've got to have it [nuclear weapons] and it's got to have a bloody Union Jack on it". The first operational British nuclear bomb was not detonated until October 1952, about one year after Attlee had left office. Independent British atomic research was prompted partly by the US McMahon Act, which nullified wartime expectations of postwar US–UK collaboration in nuclear research, and prohibited Americans from communicating nuclear technology even to allied countries. British atomic bomb research was kept secret even from some members of Attlee's own cabinet, whose loyalty or discretion seemed uncertain.
Although a socialist, Attlee still believed in the British Empire of his youth. He thought of it as an institution that was a power for good in the world. Nevertheless, he saw that a large part of it needed to be self-governing. Using the Dominions of Canada, Australia, and New Zealand as a model, he continued the transformation of the empire into the modern-day British Commonwealth.
Perhaps his greatest achievement, surpassing even these reforms, was the establishment of a political and economic consensus about the governance of Britain that all three major parties subscribed to for three decades, fixing the arena of political discourse until the late 1970s. In 2004, he was voted the most successful British Prime Minister of the 20th century in a poll of 139 academics organised by Ipsos MORI.
A blue plaque unveiled in 1979 commemorates Attlee at 17 Monkhams Avenue, Woodford Green, in the London Borough of Redbridge.
Attlee was elected a Fellow of the Royal Society in 1947, and was awarded an Honorary Fellowship of Queen Mary College on 15 December 1948.
In the 1960s a new suburb near Curepipe in British Mauritius was given the name Cité Atlee [sic] in his honour.
On 30 November 1988, a bronze statue of Clement Attlee was unveiled by Harold Wilson (the next Labour Prime Minister after Attlee) outside Limehouse Library in Attlee's former constituency. By then Wilson was the last surviving member of Attlee's cabinet, and the unveiling was one of his last public appearances; he was by that point in the early stages of Alzheimer's disease, and he died at the age of 79 in May 1995.
Limehouse Library was closed in 2003, after which the statue was vandalised. The council surrounded it with protective hoarding for four years, before eventually removing it for repair and recasting in 2009. The restored statue was unveiled by Peter Mandelson in April 2011, in its new position less than a mile away at the Queen Mary University of London's Mile End campus.
There is also a statue of Clement Attlee in the Houses of Parliament, erected by parliamentary vote in 1979 in place of the customary bust. The sculptor was Ivor Roberts-Jones.
Attlee met Violet Millar while on a long trip with friends to Italy in 1921. They fell in love and were soon engaged, marrying at Christ Church, Hampstead, on 10 January 1922. It was a devoted marriage, with Attlee providing protection and Violet providing a home that was an escape for him from political turmoil. She died in 1964. They had four children.
Although his parents were devout Anglicans, with one of his brothers becoming a clergyman and one of his sisters a missionary, Attlee himself is usually regarded as an agnostic. In an interview he described himself as "incapable of religious feeling", saying that he believed in "the ethics of Christianity" but not "the mumbo-jumbo". When asked whether he was an agnostic, Attlee replied "I don't know".
| [
{
"paragraph_id": 0,
"text": "Clement Richard Attlee, 1st Earl Attlee, KG, OM, CH, PC, FRS (3 January 1883 – 8 October 1967) was a British statesman and Labour Party politician who served as Prime Minister of the United Kingdom from 1945 to 1951 and Leader of the Labour Party from 1935 to 1955. He was Deputy Prime Minister during the wartime coalition government under Winston Churchill, and served twice as Leader of the Opposition from 1935 to 1940 and from 1951 to 1955. Attlee remains the longest serving Labour leader and is widely considered by historians and members of the public through various polls to be one of the greatest Prime Ministers of the United Kingdom.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Attlee was born into an upper-middle-class family, the son of a wealthy London solicitor. After attending Haileybury College and the University of Oxford, he practised as a barrister. The volunteer work he carried out in London's East End exposed him to poverty, and his political views shifted leftwards thereafter. He joined the Independent Labour Party, gave up his legal career, and began lecturing at the London School of Economics; with his work briefly interrupted by service in the First World War. In 1919, he became mayor of Stepney and in 1922 was elected as the Member for Limehouse. Attlee served in the first Labour minority government led by Ramsay MacDonald in 1924, and then joined the Cabinet during MacDonald's second minority (1929–1931). After retaining his seat in Labour's landslide defeat of 1931, he became the party's Deputy Leader. Elected Leader of the Labour Party in 1935, and at first advocating pacificism and opposing re-armament, he became a critic of Neville Chamberlain's policy of appeasement in the lead-up to the Second World War. Attlee took Labour into the wartime coalition government in 1940 and served under Winston Churchill, initially as Lord Privy Seal and then as Deputy Prime Minister from 1942.",
"title": ""
},
{
"paragraph_id": 2,
"text": "As the European front of WWII reached its conclusion, the war cabinet headed by Churchill was dissolved and elections were scheduled to be held. The Labour Party, led by Attlee, won a landslide victory in the 1945 general election, on their post-war recovery platform. Following the election, Attlee led the construction of the first Labour majority government. His government's Keynesian approach to economic management aimed to maintain full employment, a mixed economy and a greatly enlarged system of social services provided by the state. To this end, it undertook the nationalisation of public utilities and major industries, and implemented wide-ranging social reforms, including the passing of the National Insurance Act 1946 and National Assistance Act 1948, the formation of the National Health Service (NHS) in 1948, and the enlargement of public subsidies for council house building. His government also reformed trade union legislation, working practices and children's services; it created the National Parks system, passed the New Towns Act 1946 and established the town and country planning system. The Attlee government proved itself to be a radical, reforming government. From 1945 to 1948, over 200 public Acts of Parliament were passed, with eight major pieces of legislation placed on the statute book in 1946 alone.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Attlee's foreign policy focused on decolonization efforts which he delegated to Ernest Bevin, but Attlee personally oversaw the partition of India (1947), the independence of Burma and Ceylon, and the dissolution of the British mandates of Palestine and Transjordan. Attlee and Bevin encouraged the United States to take a vigorous role in the Cold War; unable to afford military intervention in Greece during its civil war, he called on Washington to counter the communists there. The strategy of containment was formalized between the two nations through the Truman Doctrine. He supported the Marshall Plan to rebuild Western Europe with American money and, in 1949, promoted the NATO military alliance against the Soviet bloc. After leading Labour to a narrow victory at the 1950 general election, he sent British troops to fight alongside South Korea in the Korean War.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Attlee had inherited a country close to bankruptcy following the Second World War and beset by food, housing and resource shortages; despite his social reforms and economic programme, these problems persisted throughout his premiership, alongside recurrent currency crises and dependence on US aid. His party was narrowly defeated by the Conservatives in the 1951 general election, despite winning the most votes. He continued as Labour leader but retired after losing the 1955 election and was elevated to the House of Lords, where he served until his death in 1967. In public, he was modest and unassuming, but behind the scenes his depth of knowledge, quiet demeanour, objectivity and pragmatism proved decisive. He is often ranked as one of the greatest British prime ministers. In 2004, he was voted the most successful British Prime Minister of the 20th century by a poll of 139 academics. The majority of those responses singled out the Attlee government's welfare state reforms and the creation of the NHS as the key 20th century domestic policy achievements. He is also commended for continuing the 'Special Relationship' with the US and active involvement in NATO.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Attlee was born on 3 January 1883 in Putney, Surrey (now part of London), into an upper middle-class family, the seventh of eight children. His father was Henry Attlee (1841–1908), a solicitor, and his mother was Ellen Bravery Watson (1847–1920), daughter of Thomas Simons Watson, secretary for the Art Union of London. His parents were \"committed Anglicans\" who read prayers and psalms each morning at breakfast.",
"title": "Early life"
},
{
"paragraph_id": 6,
"text": "Attlee grew up in a two-storey villa with a large garden and tennis court, staffed by three servants and a gardener. His father, a political Liberal, had inherited family interests in milling and brewing, and became a senior partner in the law firm of Druces, also serving a term as president of the Law Society of England and Wales. In 1898 he purchased a 200-acre (81 ha) estate, Comaques in Thorpe-le-Soken, Essex. At the age of nine, Attlee was sent to board at Northaw Place, a boys' preparatory school in Hertfordshire. In 1896 he followed his brothers to Haileybury College, where he was a middling student. He was influenced by the Darwinist views of his housemaster Frederick Webb Headley, and in 1899 he published an attack on striking London cab-drivers in the school magazine, predicting they would soon have to \"beg for their fares\".",
"title": "Early life"
},
{
"paragraph_id": 7,
"text": "In 1901, Attlee went up to University College, Oxford, reading modern history. He and his brother Tom \"were given a generous stipend by their father and embraced the university lifestyle—rowing, reading and socializing\". He was later described by a tutor as \"level-headed, industrious, dependable man with no brilliance of style ... but with excellent sound judgement\". At university he had little interest in politics or economics, later describing his views at this time as \"good old fashioned imperialist conservative\". He graduated Bachelor of Arts in 1904 with second-class honours.",
"title": "Early life"
},
{
"paragraph_id": 8,
"text": "Attlee then trained as a barrister at the Inner Temple and was called to the bar in March 1906. He worked for a time at his father's law firm Druces and Attlee but did not enjoy the work, and had no particular ambition to succeed in the legal profession. He also played football for non-League club Fleet. Attlee's father died in 1908, leaving an estate valued for probate at £75,394 (equivalent to £8,374,628 in 2021).",
"title": "Early life"
},
{
"paragraph_id": 9,
"text": "In 1906, he became a volunteer at Haileybury House, a charitable club for working-class boys in Stepney in the East End of London run by his old school, and from 1907 to 1909 he served as the club's manager. Until then, his political views had been more conservative. However, after his shock at the poverty and deprivation he saw while working with the slum children, he came to the view that private charity would never be sufficient to alleviate poverty and that only direct action and income redistribution by the state would have any serious effect. This sparked a process that caused him to convert to socialism. He subsequently joined the Independent Labour Party (ILP) in 1908 and became active in local politics. In 1909, he stood unsuccessfully at his first election, as an ILP candidate for Stepney Borough Council.",
"title": "Early life"
},
{
"paragraph_id": 10,
"text": "He also worked briefly as a secretary for Beatrice Webb in 1909, before becoming a secretary for Toynbee Hall. He worked for Webb's campaign of popularisation of the Minority Report as he was very active in Fabian Society circles, in which he would go round visiting many political societies—Liberal, Conservative and socialist—to explain and popularise the ideas, as well as recruiting lecturers deemed suitable to work on the campaign. In 1911, he was employed by the Government as an \"official explainer\"—touring the country to explain Chancellor of the Exchequer David Lloyd George's National Insurance Act. He spent the summer of that year touring Essex and Somerset on a bicycle, explaining the Act at public meetings. A year later, he became a lecturer at the London School of Economics, teaching Social science and Public administration.",
"title": "Early life"
},
{
"paragraph_id": 11,
"text": "Following the outbreak of the First World War in August 1914, Attlee applied to join the British Army. Initially his application was turned down, as his age of 31 was seen as being too old; however, he was eventually commissioned as a temporary lieutenant in the 6th (Service) Battalion, South Lancashire Regiment, on 30 September 1914. On 9 February 1915 he was promoted to captain, and on 14 March was appointed battalion adjutant. The 6th South Lancashires were part of the 38th Brigade of the 13th (Western) Division, which served in the Gallipoli campaign in Turkey. Attlee's decision to fight caused a rift between him and his older brother Tom, who, as a conscientious objector, spent much of the war in prison.",
"title": "Early life"
},
{
"paragraph_id": 12,
"text": "After a period spent fighting in Gallipoli, Attlee collapsed after falling ill with dysentery and was put on a ship bound for England to recover. When he woke up he wanted to get back to action as soon as possible, and asked to be let off the ship in Malta, where he stayed in hospital in order to recover. His hospitalisation coincided with the Battle of Sari Bair, which saw a large number of his comrades killed. Upon returning to action, he was informed that his company had been chosen to hold the final lines during the evacuation of Suvla. As such, he was the penultimate man to be evacuated from Suvla Bay, the last being General Stanley Maude.",
"title": "Early life"
},
{
"paragraph_id": 13,
"text": "The Gallipoli Campaign had been engineered by the First Lord of the Admiralty, Winston Churchill. Although it was unsuccessful, Attlee believed that it was a bold strategy which could have been successful if it had been better implemented on the ground. This led to an admiration for Churchill as a military strategist, something which would make their working relationship in later years productive.",
"title": "Early life"
},
{
"paragraph_id": 14,
"text": "He later served in the Mesopotamian campaign in what is now Iraq, where in April 1916 he was badly wounded, being hit in the leg by shrapnel from friendly fire while storming an enemy trench during the Battle of Hanna. The battle was an unsuccessful attempt to relieve the Siege of Kut, and many of Attlee's fellow soldiers were also wounded or killed. He was sent firstly to India, and then back to the UK to recover. On 18 December 1916 he was transferred to the Heavy Section of the Machine Gun Corps, and 1 March 1917 he was promoted to the temporary rank of major, leading him to be known as \"Major Attlee\" for much of the inter-war period. He would spend most of 1917 training soldiers at various locations in England. From 2 to 9 July 1917, he was the temporary commanding officer (CO) of the newly formed L (later 10th) Battalion, the Tank Corps at Bovington Camp, Dorset. From 9 July, he assumed command of the 30th Company of the same battalion; however, he did not deploy to France with it in December 1917, as he was transferred back to the South Lancashire Regiment on 28 November.",
"title": "Early life"
},
{
"paragraph_id": 15,
"text": "After fully recovering from his injuries, he was sent to France in June 1918 to serve on the Western Front for the final months of the war. After being discharged from the Army in January 1919, he returned to Stepney, and returned to his old job lecturing part-time at the London School of Economics.",
"title": "Early life"
},
{
"paragraph_id": 16,
"text": "Attlee returned to local politics in the immediate post-war period, becoming mayor of the Metropolitan Borough of Stepney, one of London's most deprived inner-city boroughs, in 1919. During his time as mayor, the council undertook action to tackle slum landlords who charged high rents but refused to spend money on keeping their property in habitable condition. The council served and enforced legal orders on homeowners to repair their property. It also appointed health visitors and sanitary inspectors, reducing the infant mortality rate, and took action to find work for returning unemployed ex-servicemen.",
"title": "Early political career"
},
{
"paragraph_id": 17,
"text": "In 1920, while mayor, he wrote his first book, The Social Worker, which set out many of the principles that informed his political philosophy and that were to underpin the actions of his government in later years. The book attacked the idea that looking after the poor could be left to voluntary action. He wrote that:",
"title": "Early political career"
},
{
"paragraph_id": 18,
"text": "In a civilised community, although it may be composed of self-reliant individuals, there will be some persons who will be unable at some period of their lives to look after themselves, and the question of what is to happen to them may be solved in three ways – they may be neglected, they may be cared for by the organised community as of right, or they may be left to the goodwill of individuals in the community. [...] Charity is only possible without loss of dignity between equals. A right established by law, such as that to an old age pension, is less galling than an allowance made by a rich man to a poor one, dependent on his view of the recipient's character, and terminable at his caprice.",
"title": "Early political career"
},
{
"paragraph_id": 19,
"text": "In 1921, George Lansbury, the Labour mayor of the neighbouring borough of Poplar, and future Labour Party leader, launched the Poplar Rates Rebellion; a campaign of disobedience seeking to equalise the poor relief burden across all the London boroughs. Attlee, who was a personal friend of Lansbury, strongly supported this. However, Herbert Morrison, the Labour mayor of nearby Hackney, and one of the main figures in the London Labour Party, strongly denounced Lansbury and the rebellion. During this period, Attlee developed a lifelong dislike of Morrison.",
"title": "Early political career"
},
{
"paragraph_id": 20,
"text": "At the 1922 general election, Attlee became the Member of Parliament (MP) for the constituency of Limehouse in Stepney. At the time, he admired Ramsay MacDonald and helped him get elected as Labour Party leader at the 1922 leadership election. He served as MacDonald's Parliamentary Private Secretary for the brief 1922 parliament. His first taste of ministerial office came in 1924, when he served as Under-Secretary of State for War in the short-lived first Labour government, led by MacDonald.",
"title": "Early political career"
},
{
"paragraph_id": 21,
"text": "Attlee opposed the 1926 General Strike, believing that strike action should not be used as a political weapon. However, when it happened, he did not attempt to undermine it. At the time of the strike, he was chairman of the Stepney Borough Electricity Committee. He negotiated a deal with the Electrical Trade Union so that they would continue to supply power to hospitals, but would end supplies to factories. One firm, Scammell and Nephew Ltd, took a civil action against Attlee and the other Labour members of the committee (although not against the Conservative members who had also supported this). The court found against Attlee and his fellow councillors and they were ordered to pay £300 damages. The decision was later reversed on appeal, but the financial problems caused by the episode almost forced Attlee out of politics.",
"title": "Early political career"
},
{
"paragraph_id": 22,
"text": "In 1927, he was appointed a member of the multi-party Simon Commission, a royal commission set up to examine the possibility of granting self-rule to India. Due to the time he needed to devote to the commission, and contrary to a promise MacDonald made to Attlee to induce him to serve on the commission, he was not initially offered a ministerial post in the Second Labour Government, which entered office after the 1929 general election. Attlee's service on the Commission equipped him with a thorough exposure to India and many of its political leaders. By 1933 he argued that British rule was alien to India and was unable to make the social and economic reforms necessary for India's progress. He became the British leader most sympathetic to Indian independence (as a dominion), preparing him for his role in deciding on independence in 1947.",
"title": "Early political career"
},
{
"paragraph_id": 23,
"text": "In May 1930, Labour MP Oswald Mosley left the party after its rejection of his proposals for solving the unemployment problem, and Attlee was given Mosley's post of Chancellor of the Duchy of Lancaster. In March 1931, he became Postmaster General, a post he held for five months until August, when the Labour government fell, after failing to agree on how to tackle the financial crisis of the Great Depression. That month MacDonald and a few of his allies formed a National Government with the Conservatives and Liberals, leading them to be expelled from Labour. MacDonald offered Attlee a job in the National Government, but he turned down the offer and opted to stay loyal to the main Labour party.",
"title": "Early political career"
},
{
"paragraph_id": 24,
"text": "After Ramsay MacDonald formed the National Government, Labour was deeply divided. Attlee had long been close to MacDonald and now felt betrayed—as did most Labour politicians. During the course of the second Labour government, Attlee had become increasingly disillusioned with MacDonald, whom he came to regard as vain and incompetent, and of whom he later wrote scathingly in his autobiography. He would write:",
"title": "Early political career"
},
{
"paragraph_id": 25,
"text": "In the old days I had looked up to MacDonald as a great leader. He had a fine presence and great oratorical power. The unpopular line which he took during the First World War seemed to mark him as a man of character. Despite his mishandling of the Red Letter episode, I had not appreciated his defects until he took office a second time. I then realised his reluctance to take positive action and noted with dismay his increasing vanity and snobbery, while his habit of telling me, a junior Minister, the poor opinion he had of all his Cabinet colleagues made an unpleasant impression. I had not, however, expected that he would perpetrate the greatest betrayal in the political history of this country ... The shock to the Party was very great, especially to the loyal workers of the rank-and-file who had made great sacrifices for these men.",
"title": "Early political career"
},
{
"paragraph_id": 26,
"text": "The 1931 general election held later that year was a disaster for the Labour Party, which lost over 200 seats, returning only 52 MPs to Parliament. The vast majority of the party's senior figures, including the Leader Arthur Henderson, lost their seats. Attlee, however, narrowly retained his Limehouse seat, with his majority being slashed from 7,288 to just 551. He was one of only three Labour MPs who had experience of government to retain their seats, along with George Lansbury and Stafford Cripps. Accordingly, Lansbury was elected Leader unopposed with Attlee as his deputy.",
"title": "Early political career"
},
{
"paragraph_id": 27,
"text": "Most of the remaining Labour MPs after 1931 were elderly trade union officials who could not contribute much to debates, Lansbury was in his 70s, and Stafford Cripps another main figure of the Labour front bench who had entered Parliament in 1931, was inexperienced. As one of the most capable and experienced of the remaining Labour MPs, Attlee therefore shouldered a lot of the burden of providing an opposition to the National Government in the years 1931–35, during this time he had to extend his knowledge of subjects which he had not studied in any depth before, such as finance and foreign affairs in order to provide an effective opposition to the government.",
"title": "Early political career"
},
{
"paragraph_id": 28,
"text": "Attlee effectively served as acting leader for nine months from December 1933, after Lansbury fractured his thigh in an accident, which raised Attlee's public profile considerably. It was during this period, however, that personal financial problems almost forced Attlee to quit politics altogether. His wife had become ill, and at that time there was no separate salary for the Leader of the Opposition. On the verge of resigning from Parliament, he was persuaded to stay by Stafford Cripps, a wealthy socialist, who agreed to make a donation to party funds to pay him an additional salary until Lansbury could take over again.",
"title": "Early political career"
},
{
"paragraph_id": 29,
"text": "During 1932–33 Attlee flirted with, and then drew back from radicalism, influenced by Stafford Cripps who was then on the radical wing of the party. He was briefly a member of the Socialist League, which had been formed by former Independent Labour Party (ILP) members, who opposed the ILP's disaffiliation from the main Labour Party in 1932. At one point he agreed with the proposition put forward by Cripps that gradual reform was inadequate and that a socialist government would have to pass an emergency powers act, allowing it to rule by decree to overcome any opposition by vested interests until it was safe to restore democracy. He admired Oliver Cromwell's strong-armed rule and use of major generals to control England. After looking more closely at Hitler, Mussolini, Stalin, and even his former colleague Oswald Mosley, leader of the new blackshirt fascist movement in Britain, Attlee retreated from his radicalism, and distanced himself from the League, and argued instead that the Labour Party must adhere to constitutional methods and stand forthright for democracy and against totalitarianism of either the left or right. He always supported the crown, and as Prime Minister was close to King George VI.",
"title": "Early political career"
},
{
"paragraph_id": 30,
"text": "George Lansbury, a committed pacifist, resigned as the Leader of the Labour Party at the 1935 Party Conference on 8 October, after delegates voted in favour of sanctions against Italy for its aggression against Abyssinia. Lansbury had strongly opposed the policy, and felt unable to continue leading the party. Taking advantage of the disarray in the Labour Party, the Prime Minister Stanley Baldwin announced on 19 October that a general election would be held on 14 November. With no time for a leadership contest, the party agreed that Attlee should serve as interim leader, on the understanding that a leadership election would be held after the general election. Attlee therefore led Labour through the 1935 election, which saw the party stage a partial comeback from its disastrous 1931 performance, winning 38 per cent of the vote, the highest share Labour had won up to that point, and gaining over one hundred seats.",
"title": "Early political career"
},
{
"paragraph_id": 31,
"text": "Attlee stood in the subsequent leadership election, held soon afterward, where he was opposed by Herbert Morrison, who had just re-entered parliament in the recent election, and Arthur Greenwood: Morrison was seen as the favourite, but was distrusted by many sections of the party, especially the left wing. Arthur Greenwood meanwhile was a popular figure in the party; however, his leadership bid was severely hampered by his alcohol problem. Attlee was able to come across as a competent and unifying figure, particularly having already led the party through a general election. He went on to come first in both the first and second ballots, formally being elected Leader of the Labour Party on 3 December 1935.",
"title": "Early political career"
},
{
"paragraph_id": 32,
"text": "Throughout the 1920s and most of the 1930s, the Labour Party's official policy had been to oppose rearmament, instead supporting internationalism and collective security under the League of Nations. At the 1934 Labour Party Conference, Attlee declared that, \"We have absolutely abandoned any idea of nationalist loyalty. We are deliberately putting a world order before our loyalty to our own country. We say we want to see put on the statute book something which will make our people citizens of the world before they are citizens of this country\". During a debate on defence in Commons a year later, Attlee said \"We are told (in the White Paper) that there is danger against which we have to guard ourselves. We do not think you can do it by national defence. We think you can only do it by moving forward to a new world. A world of law, the abolition of national armaments with a world force and a world economic system. I shall be told that that is quite impossible\". Shortly after those comments, Adolf Hitler proclaimed that German rearmament offered no threat to world peace. Attlee responded the next day noting that Hitler's speech, although containing unfavourable references to the Soviet Union, created \"A chance to call a halt in the armaments race ... We do not think that our answer to Herr Hitler should be just rearmament. We are in an age of rearmaments, but we on this side cannot accept that position\".",
"title": "Early political career"
},
{
"paragraph_id": 33,
"text": "Attlee played little part in the events that would lead up to the abdication of Edward VIII, for despite Baldwin's threat to step down if Edward attempted to remain on the throne after marrying Wallis Simpson, Labour was widely accepted not to be a viable alternative government, owing to the National Government's overwhelming majority in the Commons. Attlee, along with Liberal leader Archibald Sinclair, was eventually consulted with by Baldwin on 24 November 1936, and Attlee agreed with both Baldwin and Sinclair that Edward could not remain on the throne, firmly eliminating any prospect of any alternative government forming were Baldwin to resign.",
"title": "Early political career"
},
{
"paragraph_id": 34,
"text": "In April 1936, the Chancellor of the Exchequer, Neville Chamberlain, introduced a Budget which increased the amount spent on the armed forces. Attlee made a radio broadcast in opposition to it, saying:",
"title": "Early political career"
},
{
"paragraph_id": 35,
"text": "[The budget] was the natural expression of the character of the present Government. There was hardly any increase allowed for the services which went to build up the life of the people, education and health. Everything was devoted to piling up the instruments of death. The Chancellor expressed great regret that he should have to spend so much on armaments, but said that it was absolutely necessary and was due only to the actions of other nations. One would think to listen to him that the Government had no responsibility for the state of world affairs. [...] The Government has now resolved to enter upon an arms race, and the people will have to pay for their mistake in believing that it could be trusted to carry out a policy of peace. [...] This is a War Budget. We can look in the future for no advance in Social Legislation. All available resources are to be devoted to armaments.",
"title": "Early political career"
},
{
"paragraph_id": 36,
"text": "In June 1936, the Conservative MP Duff Cooper called for an Anglo-French alliance against possible German aggression and called for all parties to support one. Attlee condemned this: \"We say that any suggestion of an alliance of this kind—an alliance in which one country is bound to another, right or wrong, by some overwhelming necessity—is contrary to the spirit of the League of Nations, is contrary to the Covenant, is contrary to Locarno is contrary to the obligations which this country has undertaken, and is contrary to the professed policy of this Government\". At the Labour Party conference at Edinburgh in October Attlee reiterated that \"There can be no question of our supporting the Government in its rearmament policy\".",
"title": "Early political career"
},
{
"paragraph_id": 37,
"text": "However, with the rising threat from Nazi Germany, and the ineffectiveness of the League of Nations, this policy eventually lost credibility. By 1937, Labour had jettisoned its pacifist position and came to support rearmament and oppose Neville Chamberlain's policy of appeasement.",
"title": "Early political career"
},
{
"paragraph_id": 38,
"text": "At the end of 1937, Attlee and a party of three Labour MPs visited Spain and visited the British Battalion of the International Brigades fighting in the Spanish Civil War. One of the companies was named the \"Major Attlee Company\" in his honour. Attlee was supportive of the Republican government, and at the 1937 Labour conference moved the wider Labour Party towards opposing what he considered the \"farce\" of the Non-Intervention Committee organised by the British and French governments. In the House of Commons, Attlee stated, \"I cannot understand the delusion that if Franco wins with Italian and German aid, he will immediately become independent. I think it is a ridiculous proposition.\" Dalton, the Labour Party's spokesman on foreign policy, also thought that Franco would ally with Germany and Italy. However, Franco's subsequent behaviour proved it was not such a ridiculous proposition. As Dalton later acknowledged, Franco skilfully maintained Spanish neutrality, whereas Hitler would likely have occupied Spain if Franco had lost the Civil War.",
"title": "Early political career"
},
{
"paragraph_id": 39,
"text": "In 1938, Attlee opposed the Munich Agreement, in which Chamberlain negotiated with Hitler to give Germany the German-speaking parts of Czechoslovakia, the Sudetenland:",
"title": "Early political career"
},
{
"paragraph_id": 40,
"text": "We all feel relief that war has not come this time... we cannot, however, feel that peace has been established, but that we have nothing but an armistice in a state of war. We have been unable to go in for care-free rejoicing. We have felt that we are in the midst of a tragedy... [and] humiliation. This has not been a victory for reason and humanity. It has been a victory for brute force. At every stage of the proceedings there have been time limits laid down... [the] terms laid down as ultimata. We have seen to-day a gallant, civilised and democratic people betrayed and handed over to a ruthless despotism... The events of these last few days constitute one of the greatest diplomatic defeats that this country and France have ever sustained. There can be no doubt that it is a tremendous victory for Herr Hitler. Without firing a shot, by the mere display of military force, he has achieved a dominating position in Europe which Germany failed to win after four years of war. He has overturned the balance of power in Europe... [and] destroyed the last fortress of democracy in Eastern Europe which stood in the way of his ambition. He has opened his way to the food, the oil and the resources which he requires in order to consolidate his military power, and he has successfully defeated and reduced to impotence the forces that might have stood against the rule of violence. [...] The cause [of the crisis which we have undergone] was not the existence of minorities in Czechoslovakia; it was not that the position of the Sudeten Germans had become intolerable. It was not the wonderful principle of self-determination. It was because Herr Hitler had decided that the time was ripe for another step forward in his design to dominate Europe... The minorities question is no new one. [...] [And] short of a drastic and entire reshuffling of these populations there is no possible solution to the problem of minorities in Europe except toleration.",
"title": "Early political career"
},
{
"paragraph_id": 41,
"text": "However, the new Czechoslovakian state did not provide equal rights to the Slovaks and Sudeten Germans, with the historian Arnold J. Toynbee already having noted that \"for the Germans, Magyars and Poles, who account between them for more than one quarter of the whole population, the present regime in Czechoslovakia is not essentially different from the regimes in the surrounding countries\". Anthony Eden in the Munich debate acknowledged that there had been \"discrimination, even severe discrimination\" against the Sudeten Germans.",
"title": "Early political career"
},
{
"paragraph_id": 42,
"text": "In 1937, Attlee wrote a book entitled The Labour Party in Perspective that sold fairly well in which he set out some of his views. He argued that there was no point in Labour compromising on its socialist principles in the belief that this would achieve electoral success. He wrote: \"I find that the proposition often reduces itself to this – that if the Labour Party would drop its socialism and adopt a Liberal platform, many Liberals would be pleased to support it. I have heard it said more than once that if Labour would only drop its policy of nationalisation everyone would be pleased, and it would soon obtain a majority. I am convinced it would be fatal for the Labour Party.\" He also wrote that there was no point in \"watering down Labour's socialist creed in order to attract new adherents who cannot accept the full socialist faith. On the contrary, I believe that it is only a clear and bold policy that will attract this support\".",
"title": "Early political career"
},
{
"paragraph_id": 43,
"text": "In the late 1930s, Attlee sponsored a Jewish mother and her two children, enabling them to leave Germany in 1939 and move to the UK. On arriving in Britain, Attlee invited one of the children into his home in Stanmore, north-west London, where he stayed for several months.",
"title": "Early political career"
},
{
"paragraph_id": 44,
"text": "Attlee remained as Leader of the Opposition when the Second World War broke out in September 1939. The ensuing disastrous Norwegian campaign would result in a motion of no confidence in Neville Chamberlain. Although Chamberlain survived this, the reputation of his administration was so badly and publicly damaged that it became clear a coalition government would be necessary. Even if Attlee had personally been prepared to serve under Chamberlain in an emergency coalition government, he would never have been able to carry Labour with him. Consequently, Chamberlain tendered his resignation, and Labour and the Conservatives entered a coalition government led by Winston Churchill on 10 May 1940, with Attlee joining the Cabinet as Lord Privy Seal on 12 May.",
"title": "Deputy Prime Minister"
},
{
"paragraph_id": 45,
"text": "Attlee and Churchill quickly agreed that the War Cabinet would consist of three Conservatives (initially Churchill, Chamberlain and Lord Halifax) and two Labour members (initially himself and Arthur Greenwood) and that Labour should have slightly more than one third of the posts in the coalition government. Attlee and Greenwood played a vital role in supporting Churchill during a series of War Cabinet debates over whether or not to negotiate peace terms with Hitler following the Fall of France in May 1940; both supported Churchill and gave him the majority he needed in the War Cabinet to continue Britain's resistance.",
"title": "Deputy Prime Minister"
},
{
"paragraph_id": 46,
"text": "Only Attlee and Churchill remained in the War Cabinet from the formation of the Government of National Unity in May 1940 through to the election in May 1945. Attlee was initially the Lord Privy Seal, before becoming Britain's first ever Deputy Prime Minister in 1942, as well as becoming the Dominions Secretary and Lord President of the Council on 28 September 1943.",
"title": "Deputy Prime Minister"
},
{
"paragraph_id": 47,
"text": "Attlee himself played a generally low key but vital role in the wartime government, working behind the scenes and in committees to ensure the smooth operation of government. In the coalition government, three inter-connected committees effectively ran the country. Churchill chaired the first two, the War cabinet and the Defence Committee, with Attlee deputising for him in these, and answering for the government in Parliament when Churchill was absent. Attlee himself instituted, and later chaired the third body, the Lord President's Committee, which was responsible for overseeing domestic affairs. As Churchill was most concerned with overseeing the war effort, this arrangement suited both men. Attlee himself had largely been responsible for creating these arrangements with Churchill's backing, streamlining the machinery of government and abolishing many committees. He also acted as a conciliator in the government, smoothing over tensions which frequently arose between Labour and Conservative Ministers.",
"title": "Deputy Prime Minister"
},
{
"paragraph_id": 48,
"text": "Many Labour activists were baffled by the top leadership role for a man they regarded as having little charisma; Beatrice Webb wrote in her diary in early 1940:",
"title": "Deputy Prime Minister"
},
{
"paragraph_id": 49,
"text": "Following the defeat of Nazi Germany and the end of the War in Europe in May 1945, Attlee and Churchill favoured the coalition government remaining in place until Japan had been defeated. However, Herbert Morrison made it clear that the Labour Party would not be willing to accept this, and Churchill was forced to tender his resignation as Prime Minister and call an immediate election.",
"title": "Deputy Prime Minister"
},
{
"paragraph_id": 50,
"text": "The war had set in motion profound social changes within Britain and had ultimately led to a widespread popular desire for social reform. This mood was epitomised in the Beveridge Report of 1942, by the Liberal economist William Beveridge. The Report assumed that the maintenance of full employment would be the aim of post-war governments, and that this would provide the basis for the welfare state. Immediately upon its release, it sold hundreds of thousands of copies. All major parties committed themselves to fulfilling this aim, but most historians say that Attlee's Labour Party was seen by the electorate as the party most likely to follow it through.",
"title": "Deputy Prime Minister"
},
{
"paragraph_id": 51,
"text": "Labour campaigned on the theme of \"Let Us Face the Future\", positioning themselves as the party best placed to rebuild Britain following the war, and were widely viewed as having run a strong and positive campaign, while the Conservative campaign centred entirely on Churchill. Despite opinion polls indicating a strong Labour lead, opinion polls were then viewed as a novelty which had not proven their worth, and most commentators expected that Churchill's prestige and status as a \"war hero\" would ensure a comfortable Conservative victory. Before polling day, The Manchester Guardian surmised that \"the chances of Labour sweeping the country and obtaining a clear majority ... are pretty remote\". The News of the World predicted a working Conservative majority, while in Glasgow a pundit forecast the result as Conservatives 360, Labour 220, Others 60. Churchill, however, made some costly errors during the campaign. In particular, his suggestion during one radio broadcast that a future Labour Government would require \"some form of a gestapo\" to implement their policies was widely regarded as being in very bad taste and massively backfired.",
"title": "Deputy Prime Minister"
},
{
"paragraph_id": 52,
"text": "When the results of the election were announced on 26 July, they came as a surprise to most, including Attlee himself. Labour had won power by a huge landslide, winning 47.7 per cent of the vote to the Conservatives' 36 per cent. This gave them 393 seats in the House of Commons, a working majority of 146. This was the first time in history that the Labour Party had won a majority in Parliament. When Attlee went to see King George VI at Buckingham Palace to be appointed Prime Minister, the notoriously laconic Attlee and the famously tongue-tied King stood in silence; Attlee finally volunteered the remark, \"I've won the election\". The King replied \"I know. I heard it on the Six O'Clock News\".",
"title": "Deputy Prime Minister"
},
{
"paragraph_id": 53,
"text": "Francis (1995) argues there was consensus both in the Labour's national executive committee and at party conferences on a definition of socialism that stressed moral improvement as well as material improvement. The Attlee government was committed to rebuilding British society as an ethical commonwealth, using public ownership and controls to abolish extremes of wealth and poverty. Labour's ideology contrasted sharply with the contemporary Conservative Party's defence of individualism, inherited privileges, and income inequality. On 5 July 1948, Clement Attlee replied to a letter dated 22 June from James Murray and ten other MPs who raised concerns about West Indians who arrived on board the HMT Empire Windrush. As for the prime minister himself, he was not much focused on economic policy, letting others handle the issues.",
"title": "Prime Minister"
},
{
"paragraph_id": 54,
"text": "Attlee's government also carried out their manifesto commitment for nationalisation of basic industries and public utilities. The Bank of England and civil aviation were nationalised in 1946. Coal mining, the railways, road haulage, canals and Cable and Wireless were nationalised in 1947, and electricity and gas followed in 1948. The steel industry was nationalised in 1951. By 1951 about 20 per cent of the British economy had been taken into public ownership.",
"title": "Prime Minister"
},
{
"paragraph_id": 55,
"text": "Nationalisation failed to provide workers with a greater say in the running of the industries in which they worked. It did, however, bring about significant material gains for workers in the form of higher wages, reduced working hours, and improvements in working conditions, especially in regards to safety. As historian Eric Shaw noted of the years following nationalisation, the electricity and gas supply companies became \"impressive models of public enterprise\" in terms of efficiency, and the National Coal Board was not only profitable, but working conditions for miners had significantly improved as well.",
"title": "Prime Minister"
},
{
"paragraph_id": 56,
"text": "Within a few years of nationalisation, a number of progressive measures had been carried out which did much to improve conditions in the mines, including better pay, a five-day working week, a national safety scheme (with proper standards at all the collieries), a ban on boys under the age of 16 going underground, the introduction of training for newcomers before going down to the coalface, and the making of pithead baths into a standard facility.",
"title": "Prime Minister"
},
{
"paragraph_id": 57,
"text": "The newly established National Coal Board offered sick pay and holiday pay to miners. As noted by Martin Francis:",
"title": "Prime Minister"
},
{
"paragraph_id": 58,
"text": "Union leaders saw nationalisation as a means to pursue a more advantageous position within a framework of continued conflict, rather than as an opportunity to replace the old adversarial form of industrial relations. Moreover, most workers in nationalised industries exhibited an essentially instrumentalist attitude, favouring public ownership because it secured job security and improved wages rather than because it promised the creation of a new set of socialist relationships in the workplace.",
"title": "Prime Minister"
},
{
"paragraph_id": 59,
"text": "The Attlee government placed strong emphasis on improving the quality of life in rural areas, benefiting both farmers and other consumers. Security of tenure for farmers was introduced, while consumers were protected by food subsidies and the redistributive effects of deficiency payments. Between 1945 and 1951, the quality of rural life was improved by improvements in gas, electricity, and water services, as well as in leisure and public amenities. In addition, the 1947 Transport Act improved provision of rural bus services, while the Agriculture Act 1947 established a more generous subsidy system for farmers. Legislation was also passed in 1947 and 1948 which established a permanent Agricultural Wages Board to fix minimum wages for agricultural workers.",
"title": "Prime Minister"
},
{
"paragraph_id": 60,
"text": "Attlee's government made it possible for farm workers to borrow up to 90 per cent of the cost of building their own houses, and received a subsidy of £15 a year for 40 years towards that cost. Grants were also made to meet up to half the cost of supplying water to farm buildings and fields, the government met half the cost of bracken eradication and lime spreading, and grants were paid for bringing hill farming land into use that had previously been considered unfit for farming purposes.",
"title": "Prime Minister"
},
{
"paragraph_id": 61,
"text": "In 1946, the National Agricultural Advisory Service was set up to supply agricultural advice and information. The Hill Farming Act 1946 introduced for upland areas a system of grants for buildings, land improvement, and infrastructural improvements such as roads and electrification. The Act also continued a system of headage payments for hill sheep and cattle that had been introduced during the war. The Agricultural Holdings Act 1948 enabled (in effect) tenant farmers to have lifelong tenancies and made provision for compensation in the event of cessations of tenancies. In addition, the Livestock Rearing Act 1951 extended the provisions of the Hill Farming Act 1946 to the upland store cattle and sheep sector.",
"title": "Prime Minister"
},
{
"paragraph_id": 62,
"text": "At a time of world food shortages, it was vital that farmers produced the maximum possible quantities. The government encouraged farmers via subsidies for modernisation, while the National Agricultural Advisory Service provided expertise and price guarantees. As a result of the Attlee government's initiatives in agriculture, there was a 20 per cent increase in output between 1947 and 1952, while Britain adopted one of the most mechanised and efficient farming industries in the world.",
"title": "Prime Minister"
},
{
"paragraph_id": 63,
"text": "The Attlee government ensured provisions of the Education Act 1944 were fully implemented, with free secondary education becoming a right for the first time. Fees in state grammar schools were eliminated, while new, modern secondary schools were constructed.",
"title": "Prime Minister"
},
{
"paragraph_id": 64,
"text": "The school leaving age was raised to 15 in 1947, an accomplishment helped brought into fruition by initiatives such as the HORSA (\"Huts Operation for Raising the School-leaving Age\") scheme and the S.F.O.R.S.A. (furniture) scheme. University scholarships were introduced to ensure that no one who was qualified \"should be deprived of a university education for financial reasons\", while a large school building programme was organised. A rapid increase in the number of trained teachers took place, and the number of new school places was increased.",
"title": "Prime Minister"
},
{
"paragraph_id": 65,
"text": "Increased Treasury funds were made available for education, particularly for upgrading school buildings suffering from years of neglect and war damage. Prefabricated classrooms were built, and 928 new primary schools were constructed between 1945 and 1950. The provision of free school meals was expanded, and opportunities for university entrants were increased. State scholarships to universities were increased, and the government adopted a policy of supplementing university scholarships awards to a level sufficient to cover fees plus maintenance.",
"title": "Prime Minister"
},
{
"paragraph_id": 66,
"text": "Many thousands of ex-servicemen were assisted to go through college who could never have contemplated it before the war. Free milk was also made available to all schoolchildren for the first time. In addition, spending on technical education rose, and the number of nursery schools was increased. Salaries for teachers were also improved, and funds were allocated towards improving existing schools.",
"title": "Prime Minister"
},
{
"paragraph_id": 67,
"text": "In 1947 the Arts Council of Great Britain was set up to encourage the arts.",
"title": "Prime Minister"
},
{
"paragraph_id": 68,
"text": "The Ministry of Education was established under the 1944 Act, and free County Colleges were set up for the compulsory part-time instruction of teenagers between the ages of 15 and 18 who were not in full-time education. An Emergency Training Scheme was also introduced which turned out an extra 25,000 teachers in 1945–1951. In 1947, Regional Advisory Councils were set up to bring together industry and education to find out the needs of young workers \"and advise on the provision required, and to secure reasonable economy of provision\". That same year, thirteen Area Training Organisations were set up in England and one in Wales to coordinate teacher training.",
"title": "Prime Minister"
},
{
"paragraph_id": 69,
"text": "Attlee's government, however, failed to introduce the comprehensive education for which many socialists had hoped. This reform was eventually carried out by Harold Wilson's government. During its time in office, the Attlee government increased spending on education by over 50 per cent, from £6.5 billion to £10 billion.",
"title": "Prime Minister"
},
{
"paragraph_id": 70,
"text": "The most significant problem facing Attlee and his ministers remained the economy, as the war effort had left Britain nearly bankrupt. Overseas investments had been used up to pay for the war. The transition to a peacetime economy, and the maintaining of strategic military commitments abroad led to continuous and severe problems with the balance of trade. This resulted in strict rationing of food and other essential goods continuing in the post war period to force a reduction in consumption in an effort to limit imports, boost exports, and stabilise the Pound Sterling so that Britain could trade its way out of its financial state.",
"title": "Prime Minister"
},
{
"paragraph_id": 71,
"text": "The abrupt end of the American Lend-Lease programme in August 1945 almost caused a crisis. Some relief was provided by the Anglo-American loan, negotiated in December 1945. The conditions attached to the loan included making the pound fully convertible to the US dollar. When this was introduced in July 1947, it led to a currency crisis and convertibility had to be suspended after just five weeks. The UK benefited from the American Marshall Aid program in 1948, and the economic situation improved significantly. Another balance of payments crisis in 1949 forced Chancellor of the Exchequer, Stafford Cripps, into devaluation of the pound.",
"title": "Prime Minister"
},
{
"paragraph_id": 72,
"text": "Despite these problems, one of the main achievements of Attlee's government was the maintenance of near full employment. The government maintained most of the wartime controls over the economy, including control over the allocation of materials and manpower, and unemployment rarely rose above 500,000, or 3 per cent of the total workforce. Labour shortages proved a more frequent problem. The inflation rate was also kept low during his term. The rate of unemployment rarely rose above 2 per cent during Attlee's time in office, whilst there was no hard-core of long-term unemployed. Both production and productivity rose as a result of new equipment, while the average working week was shortened.",
"title": "Prime Minister"
},
{
"paragraph_id": 73,
"text": "The government was less successful in housing, which was the responsibility of Aneurin Bevan. The government had a target to build 400,000 new houses a year to replace those which had been destroyed in the war, but shortages of materials and manpower meant that less than half this number were built. Nevertheless, millions of people were rehoused as a result of the Attlee government's housing policies. Between August 1945 and December 1951, 1,016,349 new homes were completed in England, Scotland, and Wales.",
"title": "Prime Minister"
},
{
"paragraph_id": 74,
"text": "When the Attlee government was voted out of office in 1951, the economy had been improved compared to 1945. The period from 1946 to 1951 saw continuous full employment and steadily rising living standards, which increased by about 10 per cent each year. During that same period, the economy grew by 3 per cent a year, and by 1951 the UK had \"the best economic performance in Europe, while output per person was increasing faster than in the United States\". Careful planning after 1945 also ensured that demobilisation was carried out without having a negative impact upon economic recovery, and that unemployment stayed at very low levels. In addition, the number of motor cars on the roads rose from 3 million to 5 million from 1945 to 1951, and seaside holidays were taken by far more people than ever before. A Monopolies and Restrictive Practices (Inquiry and Control) Act was passed in 1948, which allowed for investigations of restrictive practices and monopolies. However, some economic historians have argued that the UK failed to develop economically after the war, with failure to support industry leaving the economy not recovering as effectively as Germany.",
"title": "Prime Minister"
},
{
"paragraph_id": 75,
"text": "1947 proved a particularly difficult year for the government; an exceptionally cold winter that year caused coal mines to freeze and cease production, creating widespread power cuts and food shortages. The Minister of Fuel and Power, Emanuel Shinwell was widely blamed for failing to ensure adequate coal stocks, and soon resigned from his post. The Conservatives capitalised on the crisis with the slogan 'Starve with Strachey and shiver with Shinwell' (referring to the Minister of Food John Strachey).",
"title": "Prime Minister"
},
{
"paragraph_id": 76,
"text": "The crisis led to an unsuccessful plot by Hugh Dalton to replace Attlee as Prime Minister with Ernest Bevin. Later that year Stafford Cripps tried to persuade Attlee to stand aside for Bevin. These plots petered out after Bevin refused to cooperate. Later that year, Dalton resigned as Chancellor after inadvertently leaking details of the budget to a journalist. He was replaced by Cripps.",
"title": "Prime Minister"
},
{
"paragraph_id": 77,
"text": "In foreign affairs, the Attlee government was concerned with four main issues: post-war Europe, the onset of the Cold War, the establishment of the United Nations, and decolonisation. The first two were closely related, and Attlee was assisted by Foreign Secretary Ernest Bevin. Attlee also attended the later stages of the Potsdam Conference, where he negotiated with President Harry S. Truman and Joseph Stalin.",
"title": "Prime Minister"
},
{
"paragraph_id": 78,
"text": "In the immediate aftermath of the war, the Government faced the challenge of managing relations with Britain's former war-time ally, Stalin and the Soviet Union. Ernest Bevin was a passionate anti-communist, based largely on his experience of fighting communist influence in the trade union movement. Bevin's initial approach to the USSR as Foreign Secretary was \"wary and suspicious, but not automatically hostile\". Attlee himself sought warm relations with Stalin. He put his trust in the United Nations, rejected notions that the Soviet Union was bent on world conquest, and warned that treating Moscow as an enemy would turn it into one. This put Attlee at sword's point with his foreign minister, the Foreign Office, and the military who all saw the Soviets as a growing threat to Britain's role in the Middle East. Suddenly in January 1947, Attlee reversed his position and agreed with Bevin on a hardline anti-Soviet policy.",
"title": "Prime Minister"
},
{
"paragraph_id": 79,
"text": "In an early \"good-will\" gesture that was later heavily criticised, the Attlee government allowed the Soviets to purchase, under the terms of a 1946 UK-USSR Trade agreement, a total of 25 Rolls-Royce Nene jet engines in September 1947 and March 1948. The agreement included an agreement not to use them for military purposes. The price was fixed under a commercial contract; a total of 55 jet engines were sold to the USSR in 1947. However, the Cold War intensified during this period and the Soviets, who at the time were well behind the West in jet technology, reverse-engineered the Nene and installed their own version in the MiG-15 interceptor. This was used to good effect against US-UK forces in the subsequent Korean War, as well as in several later MiG models.",
"title": "Prime Minister"
},
{
"paragraph_id": 80,
"text": "After Stalin took political control of most of Eastern Europe, and began to subvert other governments in the Balkans, Attlee's and Bevin's worst fears of Soviet intentions were realised. The Attlee government then became instrumental in the creation of the successful NATO defence alliance to protect Western Europe against any Soviet expansion. In a crucial contribution to the economic stability of post-war Europe, Attlee's Cabinet was instrumental in promoting the American Marshall Plan for the economic recovery of Europe. He called it one of the \"most bold, enlightened and good-natured acts in the history of nations\".",
"title": "Prime Minister"
},
{
"paragraph_id": 81,
"text": "A group of Labour MPs, organised under the banner of \"Keep Left\", urged the government to steer a middle way between the two emerging superpowers, and advocated the creation of a \"third force\" of European powers to stand between the US and USSR. However, deteriorating relations between Britain and the USSR, as well as Britain's economic reliance on America following the Marshall Plan, steered policy towards supporting the US. In January 1947, fear of both Soviet and American nuclear intentions led to a secret meeting of the Cabinet, where the decision was made to press ahead with the development of Britain's independent nuclear deterrent, an issue which later caused a split in the Labour Party. Britain's first successful nuclear test, however, did not occur until 1952, one year after Attlee had left office.",
"title": "Prime Minister"
},
{
"paragraph_id": 82,
"text": "The London dock strike of July 1949, led by Communists, was suppressed when the Attlee government sent in 13,000 Army troops and passed special legislation to promptly end the strike. His response reveals Attlee's growing concern that Soviet expansionism, supported by the British Communist Party, was a genuine threat to national security, and that the docks were highly vulnerable to sabotage ordered by Moscow. He noted that the strike was caused not by local grievances, but to help communist unions who were on strike in Canada. Attlee agreed with MI5 that he faced \"a very present menace\".",
"title": "Prime Minister"
},
{
"paragraph_id": 83,
"text": "Decolonisation was never a major election issue, but Attlee gave the matter a great deal of attention and was the chief leader in beginning the process of decolonisation of the British Empire.",
"title": "Prime Minister"
},
{
"paragraph_id": 84,
"text": "In August 1948, the Chinese Communists' victories caused Attlee to begin preparing for a Communist takeover of China. It kept open consulates in Communist-controlled areas and rejected the Chinese Nationalists' requests that British citizens assist in the defence of Shanghai. By December, the government concluded that although British property in China would likely be nationalised, British traders would benefit in the long run from a stable, industrialising Communist China. Retaining Hong Kong was especially important to him; although the Chinese Communists promised to not interfere with its rule, Britain reinforced the Hong Kong Garrison during 1949. When the victorious Chinese Communists government declared on 1 October 1949 that it would exchange diplomats with any country that ended relations with the Chinese Nationalists, Britain became the first western country to formally recognise the People's Republic of China in January 1950. In 1954, a Labour Party delegation including Attlee visited China at the invitation of then Foreign Minister Zhou Enlai. Attlee became the first high-ranking western politician to meet Mao Zedong.",
"title": "Prime Minister"
},
{
"paragraph_id": 85,
"text": "Attlee orchestrated the granting of independence to India and Pakistan in 1947. Attlee in 1928–1934 had been a member of the Indian Statutory Commission (otherwise known as the Simon Commission). He became the Labour Party expert on India and by 1934 was committed to granting India the same independent dominion status that Canada, Australia, New Zealand and South Africa had recently been given. He faced strong resistance from the die-hard Conservative imperialists, led by Churchill, who opposed both independence and efforts led by Prime Minister Stanley Baldwin to set up a system of limited local control by Indians themselves. Attlee and the Labour leadership were sympathetic to both the Congress led by Jawaharlal Nehru and the Pakistan movement led by Muhammad Ali Jinnah. During the Second World War, Attlee was in charge of Indian affairs. He set up the Cripps Mission in 1942, which tried and failed to bring the factions together. When Congress called for passive resistance in the Quit India movement of 1942–1945, it was the British regime ordered the widespread arrest and internment for the duration of tens of thousands of Congress leaders as part of its efforts to crush the revolt.",
"title": "Prime Minister"
},
{
"paragraph_id": 86,
"text": "Labour's election Manifesto in 1945 called for \"the advancement of India to responsible self-government\". In 1942 the British Raj tried to enlist all major political parties in support of the war effort. Congress, led by Nehru and Gandhi, demanded immediate independence and full control by Congress of all of India. That demand was rejected by the British, and Congress opposed the war effort with its \"Quit India campaign\". The Raj immediately responded in 1942 by imprisoning the major national, regional and local Congress leaders for the duration. Attlee did not object. By contrast, the Muslim League, led by Muhammad Ali Jinnah, strongly supported the war effort. They greatly enlarged their membership and won favour from London for their decision. Attlee retained a fondness for Congress and until 1946, accepted their thesis that they were a non-religious party that accepted Hindus, Muslims, Sikhs, and everyone else. Nevertheless, this difference in opinion between the Congress and the Muslim League towards the British war effort encouraged Attlee and his government to consider further negotiations with the Muslim League.",
"title": "Prime Minister"
},
{
"paragraph_id": 87,
"text": "The Muslim League insisted that it was the only true representative of all of the Muslims of India. With violence escalating in India after the war, but with British financial power at a low ebb, large-scale military involvement was impossible. Viceroy Wavell said he needed a further seven army divisions to prevent communal violence if independence negotiations failed. No divisions were available; independence was the only option. Given the increasing demands of the Muslim League, independence implied a partition that set off heavily Muslim Pakistan from the main portion of India. After becoming Prime Minister in 1945 Attlee originally planned to give India Dominion status in 1948.",
"title": "Prime Minister"
},
{
"paragraph_id": 88,
"text": "Attlee suggested in his memoirs that \"traditional\" colonial rule in Asia was no longer viable. He said that he expected it to meet renewed opposition after the war both by local national movements as well as by the United States. The prime minister's biographer John Bew says that Attlee hoped for a transition to a multilateral world order and a Commonwealth, and that the old British empire \"should not be supported beyond its natural lifespan\" and instead be ended \"on the right note.\" His exchequer Hugh Dalton meanwhile feared that post-war Britain could no longer afford to garrison its empire.",
"title": "Prime Minister"
},
{
"paragraph_id": 89,
"text": "Ultimately the Labour government gave full independence to India and Pakistan in 1947 through the Indian Independence Act. This involved creating a demarcation between the two regions which was known as the Radcliffe Line. The boundary between the newly created states of Pakistan and India involved the widespread resettlement of millions of Hindus, Sikhs and Muslims. Almost immediately, extreme anti-Hindu and anti-Sikh violence ensued in Lahore, Multan and Dacca when the Punjab province and the Bengal province were split in the Partition of India. This was followed by a rapid increase in widespread anti-Muslim violence in several areas including Amritsar, Rajkot, Jaipur, Calcutta and Delhi. Historian Yasmin Khan estimates that over a million people were killed of which several were women and children. Gandhi himself was assassinated in January 1948. Attlee remarked Gandhi as the \"greatest citizen\" of India and added, \"this one man has been the major factor in every consideration of the Indian problem. He had become the expression of the aspirations of the Indian people for independence\".",
"title": "Prime Minister"
},
{
"paragraph_id": 90,
"text": "Historian Andrew Roberts says the independence of India was a \"national humiliation\" but it was necessitated by urgent financial, administrative, strategic and political needs. Churchill in 1940–1945 had tightened the hold on India and imprisoned the Congress leadership, with Attlee's approval. Labour had looked forward to making it a fully independent dominion like Canada or Australia. Many of the Congress leaders in the India had studied in England, and were highly regarded as fellow idealistic socialists by Labour leaders. Attlee was the Labour expert on India and took special charge of decolonisation. Attlee found that Churchill's viceroy, Field Marshal Wavell, was too imperialistic, too keen on military solutions, and too neglectful of Indian political alignments. The new Viceroy was Lord Mountbatten, the dashing war hero and a cousin of the King.",
"title": "Prime Minister"
},
{
"paragraph_id": 91,
"text": "Attlee also sponsored the peaceful transition to independence in 1948 of Burma (Myanmar) and Ceylon (Sri Lanka).",
"title": "Prime Minister"
},
{
"paragraph_id": 92,
"text": "One of the most urgent problems facing Attlee concerned the future of the British mandate in Palestine, which had become too troublesome and expensive to handle. British policies in Palestine were perceived by the Zionist movement and the Truman administration to be pro-Arab and anti-Jewish, and Britain soon found itself unable to maintain public order in the face of a Jewish insurgency and a civil war.",
"title": "Prime Minister"
},
{
"paragraph_id": 93,
"text": "During this period, 70,000 Holocaust survivors attempted to reach Palestine as part of the Aliyah Bet refugee movement. Attlee's government tried several tactics to prevent the migration. Five ships were bombed by the Secret Intelligence Service (though with no casualties) with a fake Palestinian group created to take responsibility. The navy apprehended over 50,000 refugees en route, interning them in detention camps in Cyprus. Conditions in the camps were harsh and faced global criticism. Later, the refugee ship Exodus 1947 would be sent back to mainland Europe, instead of being taken to Cyprus.",
"title": "Prime Minister"
},
{
"paragraph_id": 94,
"text": "In response to the increasingly unpopular mandate, Attlee ordered the evacuation of all British military personnel and handed over the issue to the United Nations, a decision which was widely supported by the general public in Britain. With the establishment of the state of Israel in 1948, the camps in Cyprus were eventually closed, with their former occupants finally completing their journey to the new country.",
"title": "Prime Minister"
},
{
"paragraph_id": 95,
"text": "The government's policies with regard to the other colonies, particularly those in Africa, focused on keeping them as strategic Cold War assets while modernising their economies. The Labour Party had long attracted aspiring leaders from Africa and had developed elaborate plans before the war. Implementing them overnight with an empty treasury proved too challenging. A major military base was built in Kenya, and the African colonies came under an unprecedented degree of direct control from London. Development schemes were implemented to help solve Britain's post-war balance of payments crisis and raise African living standards. This \"new colonialism\" worked slowly, and had failures such as the Tanganyika groundnut scheme.",
"title": "Prime Minister"
},
{
"paragraph_id": 96,
"text": "The 1950 election gave Labour a massively reduced majority of five seats compared to the triple-digit majority of 1945. Although re-elected, the result was seen by Attlee as very disappointing, and was widely attributed to the effects of post-war austerity denting Labour's appeal to middle-class voters. With such a small majority leaving him dependent on a small number of MPs to govern, Attlee's second term was much tamer than his first. Some major reforms were nevertheless passed, particularly regarding industry in urban areas and regulations to limit air and water pollution.",
"title": "Prime Minister"
},
{
"paragraph_id": 97,
"text": "By 1951, the Attlee government was exhausted, with several of its most senior ministers ailing or ageing, and with a lack of new ideas. Attlee's record for settling internal differences in the Labour Party fell in April 1951, when there was a damaging split over an austerity Budget brought in by the Chancellor, Hugh Gaitskell, to pay for the cost of Britain's participation in the Korean War. Aneurin Bevan resigned to protest against the new charges for \"teeth and spectacles\" in the National Health Service introduced by that Budget, and was joined in this action by several senior ministers, including the future Prime Minister Harold Wilson, then the President of the Board of Trade. Thus escalated a battle between the left and right wings of the Party that continues today. Finding it increasingly impossible to govern, Attlee's only chance was to call a snap election in October 1951, in the hope of achieving a more workable majority and to regain authority. The gamble failed: Labour narrowly lost to the Conservative Party, despite winning considerably more votes (achieving the largest Labour vote in electoral history). Attlee tendered his resignation as Prime Minister the following day, after six years and three months in office.",
"title": "Prime Minister"
},
{
"paragraph_id": 98,
"text": "Following the defeat in 1951, Attlee continued to lead the party as Leader of the Opposition. His last four years as leader were, however, widely seen as one of the Labour Party's weaker periods.",
"title": "Return to opposition"
},
{
"paragraph_id": 99,
"text": "The period was dominated by infighting between the Labour Party's right wing, led by Hugh Gaitskell, and its left, led by Aneurin Bevan. Many Labour MPs felt that Attlee should have retired following 1951 election and allowed a younger man to lead the party. Bevan openly called for him to stand down in the summer of 1954. One of his main reasons for staying on as leader was to frustrate the leadership ambitions of Herbert Morrison, whom Attlee disliked for both political and personal reasons. At one time, Attlee had favoured Aneurin Bevan to succeed him as leader, but this became problematic after Bevan almost irrevocably split the party.",
"title": "Return to opposition"
},
{
"paragraph_id": 100,
"text": "Attlee, now aged 72, contested the 1955 general election against Anthony Eden, which saw Labour lose 18 seats, and the Conservatives increase their majority.",
"title": "Return to opposition"
},
{
"paragraph_id": 101,
"text": "In an interview with the News Chronicle columnist Percy Cudlipp in mid-September 1955, Attlee made clear his own thinking together with his preference for the leadership succession, stating:",
"title": "Return to opposition"
},
{
"paragraph_id": 102,
"text": "Labour has nothing to gain by dwelling in the past. Nor do I think we can impress the nation by adopting a futile left-wingism. I regard myself as Left of Centre which is where a Party Leader ought to be. It is no use asking, 'What would Keir Hardie have done?' We must have at the top men brought up in the present age, not, as I was, in the Victorian Age.",
"title": "Return to opposition"
},
{
"paragraph_id": 103,
"text": "He retired as Leader of the Labour Party on 7 December 1955, having led the party for twenty years, and on 14 December Hugh Gaitskell was elected as his successor.",
"title": "Return to opposition"
},
{
"paragraph_id": 104,
"text": "He was one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, for the first time in human history, a World Constituent Assembly convened to draft and adopt a Constitution for the Federation of Earth.",
"title": "Global policy"
},
{
"paragraph_id": 105,
"text": "He subsequently retired from the House of Commons and was elevated to the peerage as Earl Attlee and Viscount Prestwood on 16 December 1955, taking his seat in the House of Lords on 25 January. He believed Eden had been forced into taking a strong stand on the Suez Crisis by his backbenchers. In 1958, Attlee, along with numerous notables, established the Homosexual Law Reform Society: this campaigned for the decriminalisation of homosexual acts in private by consenting adults, a reform that was voted through Parliament nine years later. In May 1961, he travelled to Washington, D.C., to meet with President Kennedy.",
"title": "Retirement"
},
{
"paragraph_id": 106,
"text": "In 1962, he spoke twice in the House of Lords against the British government's application for the UK to join the European Communities (\"Common Market\"). In his second speech delivered in November, Attlee claimed that Britain had a separate parliamentary tradition from the Continental European countries that comprised the EC. He also claimed that if Britain became a member, EC rules would prevent the British government from planning the economy and that Britain's traditional policy had been outward-looking rather than Continental.",
"title": "Retirement"
},
{
"paragraph_id": 107,
"text": "He attended Winston Churchill's funeral in January 1965. He was frail by that time, and had to remain seated in the freezing cold as the coffin was carried, having tired himself out by standing at the rehearsal the previous day. He lived to see the Labour Party return to power under Harold Wilson in 1964, and also to see his old constituency of Walthamstow West fall to the Conservatives in a by-election in September 1967.",
"title": "Retirement"
},
{
"paragraph_id": 108,
"text": "Attlee died peacefully in his sleep of pneumonia, at the age of 84 at Westminster Hospital on 8 October 1967. Two thousand people attended his funeral in November, including the then-Prime Minister Harold Wilson and the Duke of Kent, representing the Queen. He was cremated and his ashes were buried at Westminster Abbey.",
"title": "Death"
},
{
"paragraph_id": 109,
"text": "Upon his death, the title passed to his son Martin Richard Attlee, 2nd Earl Attlee (1927–1991), who defected from Labour to the SDP in 1981. It is now held by Clement Attlee's grandson John Richard Attlee, 3rd Earl Attlee. The third earl (a member of the Conservative Party) retained his seat in the Lords as one of the hereditary peers to remain under an amendment to Labour's House of Lords Act 1999.",
"title": "Death"
},
{
"paragraph_id": 110,
"text": "Attlee's estate was sworn for probate purposes at a value of £7,295, (equivalent to £140,865 in 2021) a relatively modest sum for so prominent a figure, and only a fraction of the £75,394 in his father's estate when he died in 1908.",
"title": "Death"
},
{
"paragraph_id": 111,
"text": "The quotation about Attlee, \"A modest man, but then he has so much to be modest about\", is commonly ascribed to Churchill—though Churchill denied saying it, and respected Attlee's service in the War cabinet. Attlee's modesty and quiet manner hid a great deal that has only come to light with historical reappraisal. Attlee himself is said to have responded to critics with a limerick: \"There were few who thought him a starter, Many who thought themselves smarter. But he ended PM, CH and OM, an Earl and a Knight of the Garter\".",
"title": "Legacy"
},
{
"paragraph_id": 112,
"text": "The journalist and broadcaster Anthony Howard called him \"the greatest Prime Minister of the 20th century\".",
"title": "Legacy"
},
{
"paragraph_id": 113,
"text": "His leadership style of consensual government, acting as a chairman rather than a president, won him much praise from historians and politicians alike. Christopher Soames, the British Ambassador to France during the Conservative government of Edward Heath and cabinet minister under Margaret Thatcher, remarked that \"Mrs Thatcher was not really running a team. Every time you have a Prime Minister who wants to make all the decisions, it mainly leads to bad results. Attlee didn't. That's why he was so damn good\".",
"title": "Legacy"
},
{
"paragraph_id": 114,
"text": "Thatcher herself wrote in her 1995 memoirs, which charted her life from her beginnings in Grantham to her victory at the 1979 general election, that she admired Attlee, writing: \"Of Clement Attlee, however, I was an admirer. He was a serious man and a patriot. Quite contrary to the general tendency of politicians in the 1990s, he was all substance and no show\".",
"title": "Legacy"
},
{
"paragraph_id": 115,
"text": "Attlee's government presided over the successful transition from a wartime economy to peacetime, tackling problems of demobilisation, shortages of foreign currency, and adverse deficits in trade balances and government expenditure. Further domestic policies that he brought about included the creation of the National Health Service and the post-war Welfare state, which became key to the reconstruction of post-war Britain. Attlee and his ministers did much to transform the UK into a more prosperous and egalitarian society during their time in office with reductions in poverty and a rise in the general economic security of the population.",
"title": "Legacy"
},
{
"paragraph_id": 116,
"text": "In foreign affairs, he did much to assist with the post-war economic recovery of Europe. He proved a loyal ally of the US at the onset of the Cold War. Due to his style of leadership, it was not he, but Ernest Bevin who masterminded foreign policy. It was Attlee's government that decided Britain should have an independent nuclear weapons programme, and work on it began in 1947.",
"title": "Legacy"
},
{
"paragraph_id": 117,
"text": "Bevin, Attlee's Foreign Secretary, famously stated that \"We've got to have it [nuclear weapons] and it's got to have a bloody Union Jack on it\". The first operational British nuclear bomb was not detonated until October 1952, about one year after Attlee had left office. Independent British atomic research was prompted partly by the US McMahon Act, which nullified wartime expectations of postwar US–UK collaboration in nuclear research, and prohibited Americans from communicating nuclear technology even to allied countries. British atomic bomb research was kept secret even from some members of Attlee's own cabinet, whose loyalty or discretion seemed uncertain.",
"title": "Legacy"
},
{
"paragraph_id": 118,
"text": "Although a socialist, Attlee still believed in the British Empire of his youth. He thought of it as an institution that was a power for good in the world. Nevertheless, he saw that a large part of it needed to be self-governing. Using the Dominions of Canada, Australia, and New Zealand as a model, he continued the transformation of the empire into the modern-day British Commonwealth.",
"title": "Legacy"
},
{
"paragraph_id": 119,
"text": "His greatest achievement, surpassing many of these, was perhaps the establishment of a political and economic consensus about the governance of Britain that all three major parties subscribed to for three decades, fixing the arena of political discourse until the late-1970s. In 2004, he was voted the most successful British Prime Minister of the 20th century by a poll of 139 academics organised by Ipsos MORI.",
"title": "Legacy"
},
{
"paragraph_id": 120,
"text": "A blue plaque unveiled in 1979 commemorates Attlee at 17 Monkhams Avenue, in Woodford Green in the London borough of Redbridge.",
"title": "Legacy"
},
{
"paragraph_id": 121,
"text": "Attlee was elected a Fellow of the Royal Society in 1947. Attlee was awarded an Honorary Fellowship of Queen Mary College on 15 December 1948.",
"title": "Legacy"
},
{
"paragraph_id": 122,
"text": "In the 1960s a new suburb near Curepipe in British Mauritius was given the name Cité Atlee [sic] in his honour.",
"title": "Legacy"
},
{
"paragraph_id": 123,
"text": "On 30 November 1988, a bronze statue of Clement Attlee was unveiled by Harold Wilson (the next Labour Prime Minister after Attlee) outside Limehouse Library in Attlee's former constituency. By then Wilson was the last surviving member of Attlee's cabinet, and the unveiling of the statue would be one of the last public appearances by Wilson, who was by that point in the early stages of Alzheimer's disease; he died at the age of 79 in May 1995.",
"title": "Legacy"
},
{
"paragraph_id": 124,
"text": "Limehouse Library was closed in 2003, after which the statue was vandalised. The council surrounded it with protective hoarding for four years, before eventually removing it for repair and recasting in 2009. The restored statue was unveiled by Peter Mandelson in April 2011, in its new position less than a mile away at the Queen Mary University of London's Mile End campus.",
"title": "Legacy"
},
{
"paragraph_id": 125,
"text": "There is also a statue of Clement Attlee in the Houses of Parliament that was erected, instead of a bust, by parliamentary vote in 1979. The sculptor was Ivor Roberts-Jones.",
"title": "Legacy"
},
{
"paragraph_id": 126,
"text": "Attlee met Violet Millar while on a long trip with friends to Italy in 1921. They fell in love and were soon engaged, marrying at Christ Church, Hampstead, on 10 January 1922. It would come to be a devoted marriage, with Attlee providing protection and Violet providing a home that was an escape for Attlee from political turmoil. She died in 1964. They had four children:",
"title": "Personal life"
},
{
"paragraph_id": 127,
"text": "Although his parents were devout Anglicans, with one of his brothers becoming a clergyman and one of his sisters a missionary, Attlee himself is usually regarded as an agnostic. In an interview he described himself as \"incapable of religious feeling\", saying that he believed in \"the ethics of Christianity\" but not \"the mumbo-jumbo\". When asked whether he was an agnostic, Attlee replied \"I don't know\".",
"title": "Personal life"
},
{
"paragraph_id": 128,
"text": "Biographical",
"title": "Further reading"
},
{
"paragraph_id": 129,
"text": "Biographies of his cabinet and associates",
"title": "Further reading"
},
{
"paragraph_id": 130,
"text": "Scholarly studies",
"title": "Further reading"
}
] | Clement Richard Attlee, 1st Earl Attlee, was a British statesman and Labour Party politician who served as Prime Minister of the United Kingdom from 1945 to 1951 and Leader of the Labour Party from 1935 to 1955. He was Deputy Prime Minister during the wartime coalition government under Winston Churchill, and served twice as Leader of the Opposition from 1935 to 1940 and from 1951 to 1955. Attlee remains the longest serving Labour leader and is widely considered by historians and members of the public through various polls to be one of the greatest Prime Ministers of the United Kingdom. Attlee was born into an upper-middle-class family, the son of a wealthy London solicitor. After attending Haileybury College and the University of Oxford, he practised as a barrister. The volunteer work he carried out in London's East End exposed him to poverty, and his political views shifted leftwards thereafter. He joined the Independent Labour Party, gave up his legal career, and began lecturing at the London School of Economics; with his work briefly interrupted by service in the First World War. In 1919, he became mayor of Stepney and in 1922 was elected as the Member for Limehouse. Attlee served in the first Labour minority government led by Ramsay MacDonald in 1924, and then joined the Cabinet during MacDonald's second minority (1929–1931). After retaining his seat in Labour's landslide defeat of 1931, he became the party's Deputy Leader. Elected Leader of the Labour Party in 1935, and at first advocating pacificism and opposing re-armament, he became a critic of Neville Chamberlain's policy of appeasement in the lead-up to the Second World War. Attlee took Labour into the wartime coalition government in 1940 and served under Winston Churchill, initially as Lord Privy Seal and then as Deputy Prime Minister from 1942. As the European front of WWII reached its conclusion, the war cabinet headed by Churchill was dissolved and elections were scheduled to be held. The Labour Party, led by Attlee, won a landslide victory in the 1945 general election, on their post-war recovery platform. Following the election, Attlee led the construction of the first Labour majority government. His government's Keynesian approach to economic management aimed to maintain full employment, a mixed economy and a greatly enlarged system of social services provided by the state. To this end, it undertook the nationalisation of public utilities and major industries, and implemented wide-ranging social reforms, including the passing of the National Insurance Act 1946 and National Assistance Act 1948, the formation of the National Health Service (NHS) in 1948, and the enlargement of public subsidies for council house building. His government also reformed trade union legislation, working practices and children's services; it created the National Parks system, passed the New Towns Act 1946 and established the town and country planning system. The Attlee government proved itself to be a radical, reforming government. From 1945 to 1948, over 200 public Acts of Parliament were passed, with eight major pieces of legislation placed on the statute book in 1946 alone. Attlee's foreign policy focused on decolonization efforts which he delegated to Ernest Bevin, but Attlee personally oversaw the partition of India (1947), the independence of Burma and Ceylon, and the dissolution of the British mandates of Palestine and Transjordan. 
Attlee and Bevin encouraged the United States to take a vigorous role in the Cold War; unable to afford military intervention in Greece during its civil war, he called on Washington to counter the communists there. The strategy of containment was formalized between the two nations through the Truman Doctrine. He supported the Marshall Plan to rebuild Western Europe with American money and, in 1949, promoted the NATO military alliance against the Soviet bloc. After leading Labour to a narrow victory at the 1950 general election, he sent British troops to fight alongside South Korea in the Korean War. Attlee had inherited a country close to bankruptcy following the Second World War and beset by food, housing and resource shortages; despite his social reforms and economic programme, these problems persisted throughout his premiership, alongside recurrent currency crises and dependence on US aid. His party was narrowly defeated by the Conservatives in the 1951 general election, despite winning the most votes. He continued as Labour leader but retired after losing the 1955 election and was elevated to the House of Lords, where he served until his death in 1967. In public, he was modest and unassuming, but behind the scenes his depth of knowledge, quiet demeanour, objectivity and pragmatism proved decisive. He is often ranked as one of the greatest British prime ministers. In 2004, he was voted the most successful British Prime Minister of the 20th century by a poll of 139 academics. The majority of those responses singled out the Attlee government's welfare state reforms and the creation of the NHS as the key 20th century domestic policy achievements. He is also commended for continuing the 'Special Relationship' with the US and active involvement in NATO. | 2001-06-20T09:49:28Z | 2023-12-27T14:09:02Z | [
"Template:Snd",
"Template:See also",
"Template:Cite journal",
"Template:Infobox officeholder",
"Template:Further",
"Template:Cbignore",
"Template:Use British English",
"Template:NPG name",
"Template:Refn",
"Template:Infobox emblem wide",
"Template:Webarchive",
"Template:Refend",
"Template:S-ttl",
"Template:Short description",
"Template:Redirect",
"Template:PM20",
"Template:Cite magazine",
"Template:FadedPage",
"Template:S-reg",
"Template:Page needed",
"Template:S-end",
"Template:HMT",
"Template:Sfnm",
"Template:Failed verification",
"Template:Commons category",
"Template:Internet Archive author",
"Template:S-start",
"Template:Portal bar",
"Template:Cite news",
"Template:London Gazette",
"Template:Hansard",
"Template:ISBN",
"Template:Refbegin",
"Template:S-bef",
"Template:Clement Attlee sidebar",
"Template:Sfn",
"Template:Pn",
"Template:Dead link",
"Template:Hansard-contribs",
"Template:S-non",
"Template:S-ppo",
"Template:Authority control",
"Template:Blockquote",
"Template:UK National Archives ID",
"Template:S-aft",
"Template:S-civ",
"Template:Navboxes",
"Template:Use dmy dates",
"Template:Inflation-fn",
"Template:Infobox administration",
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:S-off",
"Template:S-break",
"Template:Inflation-year",
"Template:Sic",
"Template:S-new",
"Template:Succession box",
"Template:Convert",
"Template:Main",
"Template:Wikiquote",
"Template:Cite book",
"Template:Post-nominals",
"Template:Cite web",
"Template:Librivox author",
"Template:S-par"
] | https://en.wikipedia.org/wiki/Clement_Attlee |
5,768 | Catullus | Gaius Valerius Catullus (Classical Latin: [ˈɡaːiʊs waˈɫɛriʊs kaˈtʊllʊs]; c. 84 – c. 54 BCE), often referred to simply as Catullus (kə-TUL-əs), was a Latin poet of the late Roman Republic who wrote chiefly in the neoteric style of poetry, focusing on personal life rather than classical heroes. His surviving works are still read widely and continue to influence poetry and other forms of art.
Catullus's poems were widely appreciated by contemporary poets, significantly influencing Ovid and Virgil, among others. After his rediscovery in the Late Middle Ages, Catullus again found admirers such as Petrarch. The explicit sexual imagery which he uses in some of his poems has shocked many readers. Yet Catullus is considered a resource for teachers of Latin at many levels of instruction.
Catullus's style is highly personal, humorous, and emotional; he frequently uses hyperbole, anaphora, alliteration, and diminutives. In 25 of his poems, he mentions his devotion to a woman he refers to as "Lesbia", who is widely believed to have been the Roman aristocrat Clodia Metelli. One of the most famous of his poems is his 5th, which is often recognized for its passionate language and opening line: "Vivamus, mea Lesbia, atque amemus" ("Let us live, my Lesbia, and let us love").
Gāius Valerius Catullus was born to a leading equestrian family of Verona, in Cisalpine Gaul. The social prominence of the Catullus family allowed the father of Gaius Valerius to entertain Julius Caesar when he was the Promagistrate (proconsul) of both Gallic provinces. In a poem, Catullus describes his happy homecoming to the family villa at Sirmio, on Lake Garda, near Verona; he also owned a villa near the resort of Tibur (modern Tivoli).
Catullus appears to have spent most of his young adult years in Rome. His friends there included the poets Licinius Calvus and Helvius Cinna, Quintus Hortensius (son of the orator and rival of Cicero) and the biographer Cornelius Nepos, to whom Catullus dedicated a libellus of poems, the relation of which to the extant collection remains a matter of debate. He appears to have been acquainted with the poet Marcus Furius Bibaculus. A number of prominent contemporaries appear in his poetry, including Cicero, Caesar and Pompey. According to an anecdote preserved by Suetonius, Caesar did not deny that Catullus's lampoons left an indelible stain on his reputation, but when Catullus apologized, he invited the poet for dinner the very same day.
It was probably in Rome that Catullus fell deeply in love with the "Lesbia" of his poems, who is usually identified with Clodia Metelli, a sophisticated woman from the aristocratic house of the patrician Claudii Pulchri, sister of the infamous Publius Clodius Pulcher, and wife to proconsul Quintus Caecilius Metellus Celer. In his poems Catullus describes several stages of their relationship: initial euphoria, doubts, separation, and his wrenching feelings of loss. Clodia had several other partners; "From the poems one can adduce no fewer than five lovers in addition to Catullus: Egnatius (poem 37), Gellius (poem 91), Quintius (poem 82), Rufus (poem 77), and Lesbius (poem 79)." There is also some question surrounding her husband's mysterious death in 59 BCE, with some critics believing he was domestically poisoned. However, a sensitive and passionate Catullus could not relinquish his flame for Clodia, regardless of her obvious indifference to his desire for a deep and permanent relationship. In his poems, Catullus wavers between devout, sweltering love and bitter, scornful insults that he directs at her blatant infidelity (as demonstrated in poems 11 and 58). His passion for her is unrelenting—yet it is unclear when exactly the couple split up for good. Catullus's poems about the relationship display striking depth and psychological insight.
He spent the provincial command year from summer 57 to summer 56 BCE in Bithynia on the staff of the commander Gaius Memmius. While in the East, he traveled to the Troad to perform rites at his brother's tomb, an event recorded in a moving poem.
No ancient biography of Catullus has survived. His life has to be pieced together from scattered references to him in other ancient authors and from his poems. Thus it is uncertain when he was born and when he died. Jerome stated that he was born in 87 BCE and died in Rome in his 30th year. However, Catullus's poems include references to events of 55 and 54 BCE. Since the Roman consular fasti make it somewhat easy to confuse 87–57 BCE with 84–54 BCE, many scholars accept the dates 84–54 BCE, supposing that his latest poems and the publication of his libellus coincided with the year of his death. Other authors suggest 52 or 51 BCE as the year of the poet's death. Though upon his elder brother's death Catullus lamented that their "whole house was buried along" with the deceased, the existence (and prominence) of Valerii Catulli is attested in the following centuries. T.P. Wiseman argues that after the brother's death Catullus could have married, and that, in this case, the later Valerii Catulli may have been his descendants.
Catullus's poems have been preserved in an anthology of 116 carmina (the actual number of poems may vary slightly between editions), which can be divided into three parts according to their form: sixty short poems in varying meters, called polymetra, eight longer poems, and forty-eight epigrams.
There is no scholarly consensus on whether Catullus himself arranged the order of the poems. The longer poems differ from the polymetra and the epigrams not only in length but also in their subjects: There are seven hymns and one mini-epic, or epyllion, the most highly prized form for the "new poets".
The polymetra and the epigrams can be divided into four major thematic groups (ignoring a rather large number of poems that elude such categorization):
All these poems describe the lifestyle of Catullus and his friends, who, despite Catullus's temporary political post in Bithynia, lived their lives withdrawn from politics. They were interested mainly in poetry and love. Above all other qualities, Catullus seems to have valued venustas, or charm, in his acquaintances, a theme which he explores in a number of his poems. The ancient Roman concept of virtus (i.e., of virtue that had to be proved by a political or military career), which Cicero suggested as the solution to the societal problems of the late Republic, meant little to them.
However, Catullus does not reject traditional notions, but rather their particular application to the vita activa of politics and war. Indeed, he tries to reinvent these notions from a personal point of view and to introduce them into human relationships. For example, he applies the word fides, which traditionally meant faithfulness towards one's political allies, to his relationship with Lesbia and reinterprets it as unconditional faithfulness in love. So, despite the seeming frivolity of his lifestyle, Catullus measured himself and his friends by quite ambitious standards.
Catullus's poetry was influenced by the innovative poetry of the Hellenistic Age, and especially by Callimachus and the Alexandrian school, which had propagated a new style of poetry that deliberately turned away from the classical epic poetry in the tradition of Homer. Cicero called these local innovators neoteroi (νεώτεροι) or "moderns" (in Latin poetae novi or 'new poets'), in that they cast off the heroic model handed down from Ennius in order to strike new ground and ring a contemporary note. Catullus and Callimachus did not describe the feats of ancient heroes and gods (except perhaps in re-evaluating and predominantly artistic circumstances, e.g. poems 63 and 64), focusing instead on small-scale personal themes. Although these poems sometimes seem quite superficial and their subjects often are mere everyday concerns, they are accomplished works of art. Catullus described his work as expolitum, or polished, to show that the language he used was very carefully and artistically composed.
Catullus was also an admirer of Sappho, a female poet of the seventh century BCE. Catullus 51 partly translates, partly imitates, and transforms Sappho 31. Some hypothesize that 61 and 62 were perhaps inspired by lost works of Sappho, but this is purely speculative. Both of the latter are epithalamia, a form of laudatory or erotic wedding-poetry that Sappho was famous for. Catullus twice used a meter that Sappho was known for, called the Sapphic stanza, in poems 11 and 51, perhaps prompting his successor Horace's interest in the form.
Catullus, as was common to his era, was greatly influenced by stories from Greek and Roman myth. His longer poems—such as 63, 64, 65, 66, and 68—allude to mythology in various ways. Some stories he refers to are the wedding of Peleus and Thetis, the departure of the Argonauts, Theseus and the Minotaur, Ariadne's abandonment, Tereus and Procne, as well as Protesilaus and Laodamia.
Catullus wrote in many different meters including hendecasyllabic verse and elegiac couplets (common in love poetry). A great part of his poetry shows strong and occasionally wild emotions, especially in regard to Lesbia (e.g., poems 5 and 7). His love poems are very emotional and ardent, and remain relatable to this day. Catullus describes his Lesbia as having multiple suitors and often showing little affection towards him. He also demonstrates a great sense of humour, as in Catullus 13.
The Hungarian-born British composer Matyas Seiber set poem 31, Sirmio, for unaccompanied mixed chorus in 1957. The American composer Ned Rorem set Catullus 101 to music for voice and piano; the song, "Catullus: On the Burial of His Brother", was originally published in 1969.
Catullus Dreams (2011) is a song cycle by David Glaser set to texts of Catullus, scored for soprano and seven instruments; it was premiered at Symphony Space in New York by soprano Linda Larson and the Sequitur Ensemble. "Carmina Catulli" is a song cycle arranged from 17 of Catullus's poems by American composer Michael Linton. The cycle was recorded in December 2013 and premiered at Carnegie Hall's Weill Recital Hall in March 2014 by French baritone Edwin Crossley-Mercer and pianist Jason Paul Peterson.
Thomas Campion also wrote a lute-song using his own translation of the first six lines of Catullus 5 followed by two verses of his own; the translation by Richard Crashaw was set to music in a four-part glee by Samuel Webbe Jr. It was also set to music in a three-part glee by John Stafford Smith.
Catullus 5, the love poem "Vivamus mea Lesbia atque amemus", in the translation by Ben Jonson, was set to music (lute-accompanied song) by Alfonso Ferrabosco the younger. Dutch composer Bertha Tideman-Wijers used Catullus's text for her composition Variations on Valerius "Where that one already turns or turns." The Icelandic composer Jóhann Jóhannsson set Catullus 85 to music; entitled "Odi Et Amo", the song is found on Jóhannsson's album Englabörn and is sung through a vocoder, accompanied by string quartet and piano. Catulli Carmina is a cantata by Carl Orff set to the texts of Catullus. Finnish jazz singer Reine Rimón has recorded poems of Catullus set to standard jazz tunes.
C. S. Forester

Cecil Louis Troughton Smith (27 August 1899 – 2 April 1966), known by his pen name Cecil Scott "C. S." Forester, was an English novelist known for writing tales of naval warfare, such as the 12-book Horatio Hornblower series depicting a Royal Navy officer during the Napoleonic Wars.
The Hornblower novels A Ship of the Line and Flying Colours were jointly awarded the James Tait Black Memorial Prize for fiction in 1938. His other works include The African Queen (1935; turned into a 1951 film by John Huston) and The Good Shepherd (1955; turned into a 2020 film, Greyhound, adapted by and starring Tom Hanks). During the Second World War he moved to Washington, D.C., where he worked for the British Ministry of Information, writing propaganda for the Allied cause.
Forester was born in Cairo on 27 August 1899 to English parents George Foster Smith and Sarah Medhurst Troughton. His father, George Smith, was a teacher at a school in Cairo set up by the British protectorate to give upper-class Egyptian boys a taste of English schooling. After the family broke up while he was still young, his mother took him with her to London, where he was educated at Alleyn's School and Dulwich College. He began to study medicine at Guy's Hospital, but left without completing his degree. He was of good height and somewhat athletic, but wore glasses and had a slender physique, so he failed his Army physical and was told that there was no chance that he would be accepted. He began writing seriously, using his pen name, in around 1921.
During the Second World War Forester moved to the United States, where he worked for the British Ministry of Information and wrote propaganda to encourage the U.S. to join the Allies. He eventually settled in Berkeley, California.
In 1942, while he was living in Washington, D.C., he met the young British diplomat Roald Dahl and encouraged him to write about his experiences in the Royal Air Force. According to Dahl's autobiography, Lucky Break, Forester asked him about his experiences as a fighter pilot, and this prompted Dahl to write his first story, "A Piece of Cake".
Forester wrote many novels, but he is best known for the 12-book Horatio Hornblower series about an officer in the Royal Navy during the Napoleonic Wars. He began the series with Hornblower fairly high in rank in the first novel, which was published in 1937, but demand for more stories led him to fill in Hornblower's life story, and he wrote novels detailing his rise from the rank of midshipman. The last completed novel was published in 1962. Hornblower's fictional adventures were based on real events, but Forester wrote the body of the works carefully to avoid entanglements with real world history, so that Hornblower is always off on another mission when a great naval battle occurs during the Napoleonic Wars.
Forester's other novels include The African Queen (1935) and The General (1936); two novels about the Peninsular War, Death to the French (published in the United States as Rifleman Dodd) and The Gun (filmed as The Pride and the Passion in 1957); and seafaring stories that do not involve Hornblower, such as Brown on Resolution (1929), The Captain from Connecticut (1941), The Ship (1943), and Hunting the Bismarck (1959), which was used as the basis of the screenplay for the film Sink the Bismarck! (1960). Several of his novels have been filmed, including The African Queen (1951), directed by John Huston. Forester is also credited as story writer on several films not based on his published novels, including Commandos Strike at Dawn (1942).
Forester also wrote several volumes of short stories set during the Second World War. Those in The Nightmare (1954) were based on events in Nazi Germany, ending at the Nuremberg trials. The linked stories in The Man in the Yellow Raft (1969) follow the career of the destroyer USS Boon, while many of the stories in Gold from Crete (1971) follow the destroyer HMS Apache. The last of the stories in Gold from Crete is If Hitler Had Invaded England, which offers an imagined sequence of events starting with Hitler's attempt to implement Operation Sea Lion and culminating in the early military defeat of Nazi Germany in the summer of 1941.
His non-fiction works about seafaring include The Age of Fighting Sail (1956), an account of the sea battles between Great Britain and the United States in the War of 1812.
Forester also published the crime novels Payment Deferred (1926) and Plain Murder (1930), as well as two children's books. Poo-Poo and the Dragons (1942) was created as a series of stories told to his son George to encourage him to finish his meals. George had mild food allergies and needed encouragement to eat. The Barbary Pirates (1953) is a children's history of early 19th-century pirates.
Forester appeared as a contestant on the television quiz programme You Bet Your Life, hosted by Groucho Marx, in an episode broadcast on 1 November 1956.
A previously unknown novel of Forester's, The Pursued, was discovered in 2003 and published by Penguin Classics on 3 November 2011.
Forester married Kathleen Belcher in 1926. They had two sons, John, born in 1929, and George, born in 1933. The couple divorced in 1945. In 1947 he married Dorothy Foster. Kathleen Belcher's great-uncle was Capt. Edward Belcher, RN, who achieved renown as a hydrographer and explorer. After his retirement, Belcher devoted much of his time to writing. After penning biographical material, he turned his hand to naval fiction, inventing a character called Horatio Howard Brenton and attributing great feats and adventures to him. It is possible that Forester found some inspiration in these stories for his own Horatio Hornblower.
Forester died in Fullerton, California, on 2 April 1966.
John Forester wrote a two-volume biography of his father, including many elements of Forester's life which became clear to his son only after his father's death.
In addition to providing the source material for numerous adaptations (not all of which are listed below), Forester was also credited as "adapted for the screen by" for Captain Horatio Hornblower.
List of country calling codes

Country calling codes, country dial-in codes, international subscriber dialing (ISD) codes, or most commonly, telephone country codes are telephone number prefixes for reaching telephone subscribers in foreign countries or areas via international telecommunication networks. Country codes are defined by the International Telecommunication Union (ITU) in ITU-T standards E.123 and E.164. The prefixes enable international direct dialing (IDD).
Country codes constitute the international telephone numbering plan. They are used only when dialing a telephone number in a country or world region other than the caller's. Country codes are dialed before the national telephone number, but require at least one additional prefix, the international call prefix which is an exit code from the national numbering plan to the international one. In most countries, this prefix is 00, an ITU recommendation; it is 011 in the countries of the North American Numbering Plan while a minority of countries use other prefixes.
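To make the dialing sequence concrete, here is a minimal Python sketch (not from the source) that composes an international dial string from an exit prefix, a country code and a national number. The function name and the example digits are illustrative assumptions, not part of any standard.

```python
# Minimal sketch of the dialing sequence: exit prefix + country code +
# national number. The two exit prefixes are those mentioned above; the
# helper name and example number are illustrative assumptions.

EXIT_PREFIXES = {
    "itu": "00",    # ITU-recommended prefix used by most countries
    "nanp": "011",  # North American Numbering Plan countries
}

def international_dial_string(exit_prefix: str, country_code: str,
                              national_number: str) -> str:
    """Concatenate the three components of an international call."""
    return f"{exit_prefix}{country_code}{national_number}"

# A hypothetical UK number (country code 44) dialed from a NANP country,
# then from a country using the ITU-recommended prefix:
print(international_dial_string(EXIT_PREFIXES["nanp"], "44", "2079460000"))
# -> 011442079460000
print(international_dial_string(EXIT_PREFIXES["itu"], "44", "2079460000"))
# -> 00442079460000
```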
In the accompanying table, the first column lists the initial digit or digits of the country codes shared by the countries in each row, and the remaining columns are arranged by the code's last digit. When three-digit codes share a common leading pair of digits, the corresponding two-digit code is unassigned, being ambiguous (denoted by "ambig."). Unassigned codes are denoted by a dash (—). Countries are identified by ISO 3166-1 alpha-2 country codes; codes for non-geographic services are denoted by two asterisks (**).
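Because assigned codes are one to three digits long and no assigned code is the prefix of a longer one (hence the "ambig." rule above), the country code of a number in full international form can be recovered by testing successively longer leading digit strings. A small sketch follows, assuming a hand-picked sample of codes rather than the full ITU table.

```python
# Longest-prefix resolution of a country code from E.164 digits.
# SAMPLE_CODES is a hand-picked illustrative subset, not the full
# ITU assignment table.

SAMPLE_CODES = {
    "1": "North American Numbering Plan",
    "7": "Russia/Kazakhstan",
    "20": "Egypt",
    "44": "United Kingdom",
    "355": "Albania",
    "358": "Finland",
}

def resolve_country_code(e164_digits: str):
    """Return (code, assignee) for the leading country code, or None.

    Country codes form a prefix code, so at most one of the 1-, 2- and
    3-digit leading strings can be an assigned code.
    """
    for length in (1, 2, 3):
        prefix = e164_digits[:length]
        if prefix in SAMPLE_CODES:
            return prefix, SAMPLE_CODES[prefix]
    return None

print(resolve_country_code("442079460000"))  # ('44', 'United Kingdom')
print(resolve_country_code("3581234567"))    # ('358', 'Finland')
```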
World zones are organized principally, but only approximately, by geographic location. Exceptions exist for political and historical alignments.
NANP members are assigned three-digit numbering plan area (NPA) codes under the common country prefix 1, shown in the format 1 (NPA).
Zone 2 covers mostly Africa (but also Aruba, the Faroe Islands, Greenland and the British Indian Ocean Territory).
Some of the larger countries were assigned two-digit codes to compensate for their usually longer domestic numbers. Small countries were assigned three-digit codes, which has also been the practice since the 1980s.
Country code 7 was assigned to the Soviet Union until its dissolution in 1991.
In Antarctica, telecommunication services are provided by the parent country of each base:
Other places have no country code in use, although a code may be reserved for some of them.
Christopher Marlowe

Christopher Marlowe, also known as Kit Marlowe (/ˈmɑːrloʊ/; baptised 26 February 1564 – 30 May 1593), was an English playwright, poet and translator of the Elizabethan era. Marlowe is among the most famous of the Elizabethan playwrights. Based upon the "many imitations" of his play Tamburlaine, modern scholars consider him to have been the foremost dramatist in London in the years just before his mysterious early death. Some scholars also believe that he greatly influenced William Shakespeare, who was baptised in the same year as Marlowe and later succeeded him as the pre-eminent Elizabethan playwright. Marlowe was the first to achieve critical reputation for his use of blank verse, which became the standard for the era. His plays are distinguished by their overreaching protagonists. Themes found within Marlowe's literary works have been noted as humanistic with realistic emotions, which some scholars find difficult to reconcile with Marlowe's "anti-intellectualism" and his catering to the prurient tastes of his Elizabethan audiences for generous displays of extreme physical violence, cruelty, and bloodshed.
Events in Marlowe's life were sometimes as extreme as those found in his plays. Differing sensational reports of Marlowe's death in 1593 abounded after the event and are contested by scholars today owing to a lack of good documentation. There have been many conjectures as to the nature and reason for his death, including a vicious bar-room fight, blasphemous libel against the church, homosexual intrigue, betrayal by another playwright, and espionage from the highest level: the Privy Council of Elizabeth I. An official coroner's account of Marlowe's death was discovered only in 1925, and it did little to persuade all scholars that it told the whole story, nor did it eliminate the uncertainties present in his biography.
Christopher Marlowe, the second of nine children, and oldest child after the death of his sister Mary in 1568, was born to Canterbury shoemaker John Marlowe and his wife Katherine, daughter of William Arthur of Dover. He was baptised at St George's Church, Canterbury, on 26 February 1564 (1563 in the old style dates in use at the time, which placed the new year on 25 March). Marlowe's birth was likely to have been a few days before, making him about two months older than William Shakespeare, who was baptised on 26 April 1564 in Stratford-upon-Avon.
By age 14, Marlowe was a pupil at The King's School, Canterbury, on a scholarship, and two years later a student at Corpus Christi College, Cambridge, where he also studied on a scholarship, with the expectation that he would become an Anglican clergyman. Instead, he received his Bachelor of Arts degree in 1584. Marlowe mastered Latin during his schooling, reading and translating the works of Ovid. In 1587, the university hesitated to award his Master of Arts degree because of a rumour that he intended to go to the English seminary at Rheims in northern France, presumably to prepare for ordination as a Roman Catholic priest. If true, such an action on his part would have been a direct violation of a royal edict issued by Queen Elizabeth I in 1585 criminalising any attempt by an English citizen to be ordained in the Roman Catholic Church.
Large-scale violence between Protestants and Catholics on the European continent has been cited by scholars as the impetus for the Protestant English Queen's defensive anti-Catholic laws issued from 1581 until her death in 1603. Despite the dire implications for Marlowe, his degree was awarded on schedule when the Privy Council intervened on his behalf, commending him for his "faithful dealing" and "good service" to the Queen. The nature of Marlowe's service was not specified by the council, but its letter to the Cambridge authorities has provoked much speculation by modern scholars, notably the theory that Marlowe was operating as a secret agent for Privy Council member Sir Francis Walsingham. The only surviving evidence of the Privy Council's correspondence is found in their minutes, the letter being lost. There is no mention of espionage in the minutes, but its summation of the lost Privy Council letter is vague in meaning, stating that "it was not Her Majesties pleasure" that persons employed as Marlowe had been "in matters touching the benefit of his country should be defamed by those who are ignorant in th'affaires he went about." Scholars agree the vague wording was typically used to protect government agents, but they continue to debate what the "matters touching the benefit of his country" actually were in Marlowe's case and how they affected the 23-year-old writer as he launched his literary career in 1587.
Little is known about Marlowe's adult life. All available evidence, other than what can be deduced from his literary works, is found in legal records and other official documents. Writers of fiction and non-fiction have speculated about his professional activities, private life, and character. Marlowe has been described as a spy, a brawler, and a heretic, as well as a "magician", "duellist", "tobacco-user", "counterfeiter" and "rakehell". While J. A. Downie and Constance Kuriyama have argued against the more lurid speculations, it is the usually circumspect J. B. Steane who remarked, "it seems absurd to dismiss all of these Elizabethan rumours and accusations as 'the Marlowe myth'". Much has been written on his brief adult life, including speculation of: his involvement in royally-sanctioned espionage; his vocal declaration as an atheist; his (possibly same-sex) sexual interests; and the puzzling circumstances surrounding his death.
Marlowe is alleged to have been a government spy. Park Honan and Charles Nicholl speculate that this was the case and suggest that Marlowe's recruitment took place when he was at Cambridge. In 1587, when the Privy Council ordered the University of Cambridge to award Marlowe his degree as Master of Arts, it denied rumours that he intended to go to the English Catholic college in Rheims, saying instead that he had been engaged in unspecified "affaires" on "matters touching the benefit of his country". Surviving college records from the period also indicate that, in the academic year 1584–1585, Marlowe had had a series of unusually lengthy absences from the university which violated university regulations. Surviving college buttery accounts, which record student purchases for personal provisions, show that Marlowe began spending lavishly on food and drink during the periods he was in attendance; the amount was more than he could have afforded on his known scholarship income.
It has been speculated that Marlowe was the "Morley" who was tutor to Arbella Stuart in 1589. This possibility was first raised in a Times Literary Supplement letter by E. St John Brooks in 1937; in a letter to Notes and Queries, John Baker has added that only Marlowe could have been Arbella's tutor owing to the absence of any other known "Morley" from the period with an MA and not otherwise occupied. If Marlowe was Arbella's tutor, it might indicate that he was there as a spy, since Arbella, niece of Mary, Queen of Scots, and cousin of James VI of Scotland, later James I of England, was at the time a strong candidate for the succession to Elizabeth's throne. Frederick S. Boas dismisses the possibility of this identification, based on surviving legal records which document Marlowe's "residence in London between September and December 1589". Marlowe had been party to a fatal quarrel involving his neighbours and the poet Thomas Watson in Norton Folgate and was held in Newgate Prison for a fortnight. In fact, the quarrel and his arrest occurred on 18 September, he was released on bail on 1 October and he had to attend court, where he was acquitted on 3 December, but there is no record of where he was for the intervening two months.
In 1592 Marlowe was arrested in the English garrison town of Flushing (Vlissingen) in the Netherlands, for alleged involvement in the counterfeiting of coins, presumably related to the activities of seditious Catholics. He was sent to the Lord Treasurer (Burghley), but no charge or imprisonment resulted. This arrest may have disrupted another of Marlowe's spying missions, perhaps by giving the resulting coinage to the Catholic cause. He was to infiltrate the followers of the active Catholic plotter William Stanley and report back to Burghley.
Marlowe was reputed to be an atheist, which held the dangerous implication of being an enemy of God and the state, by association. With the rise of public fears concerning The School of Night, or "School of Atheism" in the late 16th century, accusations of atheism were closely associated with disloyalty to the Protestant monarchy of England.
Some modern historians consider that Marlowe's professed atheism, as with his supposed Catholicism, may have been no more than a sham to further his work as a government spy. Contemporary evidence comes from Marlowe's accuser in Flushing, an informer called Richard Baines. The governor of Flushing had reported that each of the men had "of malice" accused the other of instigating the counterfeiting and of intending to go over to the Catholic "enemy"; such an action was considered atheistic by the Church of England. Following Marlowe's arrest in 1593, Baines submitted to the authorities a "note containing the opinion of one Christopher Marly concerning his damnable judgment of religion, and scorn of God's word". Baines attributes to Marlowe a total of eighteen items which "scoff at the pretensions of the Old and New Testament" such as, "Christ was a bastard and his mother dishonest [unchaste]", "the woman of Samaria and her sister were whores and that Christ knew them dishonestly", "St John the Evangelist was bedfellow to Christ and leaned always in his bosom" (cf. John 13:23–25) and "that he used him as the sinners of Sodom". He also implied that Marlowe had Catholic sympathies. Other passages are merely sceptical in tone: "he persuades men to atheism, willing them not to be afraid of bugbears and hobgoblins". The final paragraph of Baines's document reads:
These thinges, with many other shall by good & honest witnes be approved to be his opinions and Comon Speeches, and that this Marlowe doth not only hould them himself, but almost into every Company he Cometh he persuades men to Atheism willing them not to be afeard of bugbeares and hobgoblins, and vtterly scorning both god and his ministers as I Richard Baines will Justify & approue both by mine oth and the testimony of many honest men, and almost al men with whome he hath Conversed any time will testify the same, and as I think all men in Cristianity ought to indevor that the mouth of so dangerous a member may be stopped, he saith likewise that he hath quoted a number of Contrarieties oute of the Scripture which he hath giuen to some great men who in Convenient time shalbe named. When these thinges shalbe Called in question the witnes shalbe produced.
Similar examples of Marlowe's statements were given by Thomas Kyd after his imprisonment and possible torture (see above); Kyd and Baines connect Marlowe with mathematician Thomas Harriot's and Sir Walter Raleigh's circle. Another document claimed about that time that "one Marlowe is able to show more sound reasons for Atheism than any divine in England is able to give to prove divinity, and that ... he hath read the Atheist lecture to Sir Walter Raleigh and others".
Some critics believe that Marlowe sought to disseminate these views in his work and that he identified with his rebellious and iconoclastic protagonists. Plays had to be approved by the Master of the Revels before they could be performed and the censorship of publications was under the control of the Archbishop of Canterbury. Presumably these authorities did not consider any of Marlowe's works to be unacceptable other than the Amores.
It has been claimed that Marlowe was homosexual. Some scholars argue that the identification of an Elizabethan as gay or homosexual in the modern sense is "anachronistic," claiming that for the Elizabethans the terms were more likely to have been applied to homoerotic affections or sexual acts rather than to what we currently understand as a settled sexual orientation or personal role identity. Other scholars argue that the evidence is inconclusive and that the reports of Marlowe's homosexuality may be rumours produced after his death. Richard Baines reported Marlowe as saying: "all they that love not Tobacco & Boies were fools". David Bevington and Eric C. Rasmussen describe Baines's evidence as "unreliable testimony" and "[t]hese and other testimonials need to be discounted for their exaggeration and for their having been produced under legal circumstances we would now regard as a witch-hunt".
J. B. Steane considered there to be "no evidence for Marlowe's homosexuality at all". Other scholars point to the frequency with which Marlowe explores homosexual themes in his writing: in Hero and Leander, Marlowe writes of the male youth Leander: "in his looks were all that men desire..." Edward the Second contains the following passage enumerating homosexual relationships:
The mightiest kings have had their minions;
Great Alexander loved Hephaestion,
The conquering Hercules for Hylas wept;
And for Patroclus, stern Achilles drooped.
And not kings only, but the wisest men:
The Roman Tully loved Octavius,
Grave Socrates, wild Alcibiades.
Marlowe wrote the only play about the life of Edward II up to his time, taking the humanist literary discussion of male sexuality much further than his contemporaries. The play was extremely bold, dealing with a star-crossed love story between Edward II and Piers Gaveston. Though it was a common practice at the time to reveal characters as homosexual to give audiences reason to suspect them as culprits in a crime, Christopher Marlowe's Edward II is portrayed as a sympathetic character. The decision to start the play Dido, Queen of Carthage with a homoerotic scene between Jupiter and Ganymede that bears no connection to the subsequent plot has long puzzled scholars.
In early May 1593, several bills were posted about London threatening the Protestant refugees from France and the Netherlands who had settled in the city. One of these, the "Dutch church libel", written in rhymed iambic pentameter, contained allusions to several of Marlowe's plays and was signed, "Tamburlaine". On 11 May the Privy Council ordered the arrest of those responsible for the libels. The next day, Marlowe's colleague Thomas Kyd was arrested, his lodgings were searched and a three-page fragment of a heretical tract was found. In a letter to Sir John Puckering, Kyd asserted that it had belonged to Marlowe, with whom he had been writing "in one chamber" some two years earlier. In a second letter, Kyd described Marlowe as blasphemous, disorderly, holding treasonous opinions, being an irreligious reprobate and "intemperate & of a cruel hart". They had both been working for an aristocratic patron, probably Ferdinando Stanley, Lord Strange. A warrant for Marlowe's arrest was issued on 18 May, when the Privy Council apparently knew that he might be found staying with Thomas Walsingham, whose father was a first cousin of the late Sir Francis Walsingham, Elizabeth's principal secretary in the 1580s and a man more deeply involved in state espionage than any other member of the Privy Council. Marlowe duly presented himself on 20 May but, there apparently being no Privy Council meeting on that day, was instructed to "give his daily attendance on their Lordships, until he shall be licensed to the contrary". On Wednesday, 30 May, Marlowe was killed.
Various accounts of Marlowe's death were current over the next few years. In his Palladis Tamia, published in 1598, Francis Meres says Marlowe was "stabbed to death by a bawdy serving-man, a rival of his in his lewd love" as punishment for his "epicurism and atheism". In 1917, in the Dictionary of National Biography, Sir Sidney Lee wrote, on slender evidence, that Marlowe was killed in a drunken fight. His claim was not much at variance with the official account, which came to light only in 1925, when the scholar Leslie Hotson discovered the coroner's report of the inquest on Marlowe's death, held two days later on Friday 1 June 1593, by the Coroner of the Queen's Household, William Danby. Marlowe had spent all day in a house in Deptford, owned by the widow Eleanor Bull, with three men: Ingram Frizer, Nicholas Skeres and Robert Poley. All three had been employed by one or other of the Walsinghams. Skeres and Poley had helped snare the conspirators in the Babington plot and Frizer was a servant to Thomas Walsingham probably in the role of a financial or business agent, as he was for Walsingham's wife Audrey a few years later. These witnesses testified that Frizer and Marlowe had argued over payment of the bill (now famously known as the 'Reckoning') exchanging "divers malicious words" while Frizer was sitting at a table between the other two and Marlowe was lying behind him on a couch. Marlowe snatched Frizer's dagger and wounded him on the head. In the ensuing struggle, according to the coroner's report, Marlowe was stabbed above the right eye, killing him instantly. The jury concluded that Frizer acted in self-defence and within a month he was pardoned. Marlowe was buried in an unmarked grave in the churchyard of St. Nicholas, Deptford, immediately after the inquest, on 1 June 1593.
The complete text of the inquest report was published by Leslie Hotson in his book, The Death of Christopher Marlowe, in the introduction to which Prof. George Kittredge said, "The mystery of Marlowe's death, heretofore involved in a cloud of contradictory gossip and irresponsible guess-work, is now cleared up for good and all on the authority of public records of complete authenticity and gratifying fullness", but this confidence proved fairly short-lived. Hotson had considered the possibility that the witnesses had "concocted a lying account of Marlowe's behaviour, to which they swore at the inquest, and with which they deceived the jury" but came down against that scenario. Others began to suspect that this scenario was indeed the case. Writing to the Times Literary Supplement shortly after the book's publication, Eugénie de Kalb disputed that the struggle and outcome as described were even possible and Samuel A. Tannenbaum insisted the following year that such a wound could not have possibly resulted in instant death, as had been claimed. Even Marlowe's biographer John Bakeless acknowledged that "some scholars have been inclined to question the truthfulness of the coroner's report. There is something queer about the whole episode" and said that Hotson's discovery "raises almost as many questions as it answers". It has also been discovered more recently that the apparent absence of a local county coroner to accompany the Coroner of the Queen's Household would, if noticed, have made the inquest null and void.
One of the main reasons for doubting the truth of the inquest concerns the reliability of Marlowe's companions as witnesses. As an agent provocateur for the late Sir Francis Walsingham, Robert Poley was a consummate liar, the "very genius of the Elizabethan underworld", and is on record as saying "I will swear and forswear myself, rather than I will accuse myself to do me any harm". The other witness, Nicholas Skeres, had for many years acted as a confidence trickster, drawing young men into the clutches of people in the money-lending racket, including Marlowe's apparent killer, Ingram Frizer, with whom he was engaged in such a swindle. Despite their being referred to as generosi (gentlemen) in the inquest report, the witnesses were professional liars. Some biographers, such as Kuriyama and Downie, take the inquest to be a true account of what occurred, but in trying to explain what really happened if the account was not true, others have proposed a variety of murder theories.
Since there are only written documents on which to base any conclusions and since it is probable that the most crucial information about his death was never committed to paper, it is unlikely that the full circumstances of Marlowe's death will ever be known.
For his contemporaries in the literary world, Marlowe was above all an admired and influential artist. Within weeks of his death, George Peele remembered him as "Marley, the Muses' darling"; Michael Drayton noted that he "Had in him those brave translunary things / That the first poets had" and Ben Jonson even wrote of "Marlowe's mighty line". Thomas Nashe wrote warmly of his friend, "poor deceased Kit Marlowe," as did the publisher Edward Blount in his dedication of Hero and Leander to Sir Thomas Walsingham. Among the few contemporary dramatists to say anything negative about Marlowe was the anonymous author of the Cambridge University play The Return from Parnassus (1598) who wrote, "Pity it is that wit so ill should dwell, / Wit lent from heaven, but vices sent from hell".
The most famous tribute to Marlowe was paid by Shakespeare in As You Like It, where he not only quotes a line from Hero and Leander ("Dead Shepherd, now I find thy saw of might, 'Who ever lov'd that lov'd not at first sight?'") but also gives to the clown Touchstone the words "When a man's verses cannot be understood, nor a man's good wit seconded with the forward child, understanding, it strikes a man more dead than a great reckoning in a little room." This appears to be a reference to Marlowe's murder which involved a fight over the "reckoning," the bill, as well as to a line in Marlowe's Jew of Malta, "Infinite riches in a little room."
Shakespeare was much influenced by Marlowe in his work, as can be seen in the use of Marlovian themes in Antony and Cleopatra, The Merchant of Venice, Richard II and Macbeth (Dido, Jew of Malta, Edward II and Doctor Faustus, respectively). In Hamlet, after meeting with the travelling actors, Hamlet requests the Player perform a speech about the Trojan War, which at 2.2.429–432 has an echo of Marlowe's Dido, Queen of Carthage. In Love's Labour's Lost Shakespeare brings on a character "Marcade" (three syllables) in conscious acknowledgement of Marlowe's character "Mercury", also attending the King of Navarre, in Massacre at Paris. The significance, to those of Shakespeare's audience who were familiar with Hero and Leander, was Marlowe's identification of himself with the god Mercury.
It has been argued that Marlowe faked his death and then continued to write under the assumed name of William Shakespeare. Academic consensus, however, rejects alternative candidates for the authorship of Shakespeare's plays and sonnets, including Marlowe.
Six dramas have been attributed to Christopher Marlowe, either alone or in collaboration with other writers, with varying degrees of evidence. The chronology of these plays is mostly unknown; they are presented here with whatever dates and evidence are known. From the little information available, Dido is believed to be the first Marlowe play performed, while Tamburlaine was the first to be performed on a regular commercial stage in London, in 1587. Believed by many scholars to be Marlowe's greatest success, Tamburlaine was the first English play written in blank verse and, with Thomas Kyd's The Spanish Tragedy, is generally considered the beginning of the mature phase of the Elizabethan theatre.
The play Lust's Dominion was attributed to Marlowe upon its initial publication in 1657, though scholars and critics have almost unanimously rejected the attribution. He may also have written or co-written Arden of Faversham.
Publication of, and responses to, the poetry and translations credited to Marlowe occurred primarily posthumously, including:
Modern scholars still look for evidence of collaborations between Marlowe and other writers. In 2016, one publisher was the first to endorse the scholarly claim of a collaboration between Marlowe and the playwright William Shakespeare:
Marlowe's plays were enormously successful, possibly because of the imposing stage presence of his lead actor, Edward Alleyn. Alleyn was unusually tall for the time and the haughty roles of Tamburlaine, Faustus and Barabas were probably written for him. Marlowe's plays were the foundation of the repertoire of Alleyn's company, the Admiral's Men, throughout the 1590s. One of Marlowe's poetry translations did not fare as well. In 1599, Marlowe's translation of Ovid was banned and copies were publicly burned as part of Archbishop Whitgift's crackdown on offensive material.
(Patrick Cheney's 2004 Cambridge Companion to Christopher Marlowe presents an alternative timeline based upon printing dates.)
Dido, Queen of Carthage
First official record 1594
First published 1594; posthumously
First recorded performance between 1587 and 1593 by the Children of the Chapel, a company of boy actors in London.
Significance This play is believed by many scholars to be the first play by Christopher Marlowe to be performed.
Attribution The title page attributes the play to Marlowe and Thomas Nashe, yet some scholars question how much of a contribution Nashe made to the play.
Evidence No manuscripts by Marlowe exist for this play.
Tamburlaine the Great
First official record 1587, Part I
First published 1590, Parts I and II in one octavo, London. No author named.
First recorded performance 1587, Part I, by the Admiral's Men, London.
Significance Tamburlaine is the first example of blank verse used in the dramatic literature of the Early Modern English theatre.
Attribution Author name is missing from the first printing in 1590. Attribution of this work to Marlowe by scholars is based upon comparison to his other verified works. Passages and character development in Tamburlaine are similar to many other Marlowe works.
Evidence No manuscripts by Marlowe exist for this play. Parts I and II were entered into the Stationers' Register on 14 August 1590. The two parts were published together by the London printer Richard Jones in 1590; a second edition followed in 1592, and a third in 1597. The two parts of the 1597 edition were later published separately in quarto by Edward White: part I in 1605, and part II in 1606.
The Jew of Malta
First official record 1592
First published 1592; earliest extant edition, 1633
First recorded performance 26 February 1592, by Lord Strange's acting company.
Significance The performances of the play were a success and it remained popular for the next fifty years. This play helps to establish the strong theme of "anti-authoritarianism" that is found throughout Marlowe's works.
Evidence No manuscripts by Marlowe exist for this play. The play was entered in the Stationers' Register on 17 May 1594 but the earliest surviving printed edition is from 1633.
Doctor Faustus
First official record 1594–1597
First published 1601, no extant copy; first extant copy, 1604 (A text) quarto; 1616 (B text) quarto.
First recorded performance 1594–1597; 24 revival performances occurred between these years by the Lord Admiral's Company, Rose Theatre, London; earlier performances probably occurred around 1589 by the same company.
Significance This is the first dramatised version of the Faust legend of a scholar's dealing with the devil. Marlowe deviates from earlier versions of "The Devil's Pact" significantly: Marlowe's protagonist is unable to "burn his books" or repent to a merciful God to have his contract annulled at the end of the play; he is carried off by demons; and, in the 1616 quarto, his mangled corpse is found by the scholar characters.
Attribution The 'B text' was highly edited and censored, owing in part to shifting theatre laws regarding religious words onstage during the seventeenth century. Because it contains several additional scenes believed to be the additions of other playwrights, particularly Samuel Rowley and William Bird (alias Borne), a recent edition attributes the authorship of both versions to "Christopher Marlowe and his collaborator and revisers." This edition argues that the 'A text' was assembled from the work of Marlowe and another writer, with the 'B text' as a later revision.
Evidence No manuscripts by Marlowe exist for this play. The two earliest-printed extant versions of the play, A and B, form a textual problem for scholars. Both were published after Marlowe's death and scholars disagree which text is more representative of Marlowe's original. Some editions are based on a combination of the two texts. Late-twentieth-century scholarly consensus identifies 'A text' as more representative because it contains irregular character names and idiosyncratic spelling, which are believed to reflect the author's handwritten manuscript or "foul papers". In comparison, 'B text' is highly edited with several additional scenes possibly written by other playwrights.
Edward II
First official record 1593
First published 1590; earliest extant edition 1594 octavo
First recorded performance 1592, performed by the Earl of Pembroke's Men.
Significance Considered by recent scholars to be Marlowe's "most modern play" because of its probing treatment of the private life of a king and its unflattering depiction of the power politics of the time. The 1594 editions of Edward II and of Dido are the first published plays with Marlowe's name appearing as the author.
Attribution The earliest extant edition, of 1594, names Marlowe as the author.
Evidence The play was entered into the Stationers' Register on 6 July 1593, five weeks after Marlowe's death.
The Massacre at Paris
First official record c. 1593, alleged foul sheet by Marlowe of "Scene 19"; although authorship by Marlowe is contested by recent scholars, the manuscript is believed to have been written around the time of the play's first performances, for an unknown purpose.
First published undated, c. 1594 or later, octavo, London; while this is the most complete surviving text, it is nearly half the length of Marlowe's other works and possibly a reconstruction. The printer and publisher credit, "E.A. for Edward White," also appears on the 1605/06 printing of Marlowe's Tamburlaine.
First recorded performance 26 January 1593, by Lord Strange's Men, at Henslowe's Rose Theatre, London, under the title The Tragedy of the Guise; 1594, in the repertory of the Admiral's Men.
Significance The Massacre at Paris is considered Marlowe's most dangerous play, as agitators in London seized on its theme to advocate the murder of refugees from the Low Countries of the Spanish Netherlands, and it warns Elizabeth I of this possibility in its last scene. It features the silent "English Agent", whom tradition has identified with Marlowe and his connexions to the secret service. It was the highest-grossing play for Lord Strange's Men in 1593.
Attribution A 1593 loose manuscript sheet of the play, called a foul sheet, is alleged to be by Marlowe and has been claimed by some scholars as the only extant play manuscript by the author. It could also provide an approximate date of composition for the play. When it is compared with the extant printed text and his other work, however, other scholars reject the attribution to Marlowe. The only surviving printed text of this play is possibly a reconstruction from memory of Marlowe's original performance text. Current scholarship notes that there are only 1,147 lines in the play, half the length of a typical play of the 1590s. Other evidence that the extant published text may not be Marlowe's original is the uneven style throughout, with two-dimensional characterisations, deteriorating verbal quality and repetitions of content.
Evidence The play never appeared in the Stationers' Register.
The Muse of Poetry, a bronze sculpture by Edward Onslow Ford, references Marlowe and his work. It was erected in the Buttermarket, Canterbury, in 1891 and now stands outside the Marlowe Theatre in the city.
In July 2002, a memorial window to Marlowe was unveiled by the Marlowe Society at Poets' Corner in Westminster Abbey. Controversially, a question mark was added to his generally accepted date of death. On 25 October 2011 a letter from Paul Edmondson and Stanley Wells was published by The Times newspaper, in which they called on the Dean and Chapter to remove the question mark on the grounds that it "flew in the face of a mass of unimpugnable evidence". In 2012, they renewed this call in their e-book Shakespeare Bites Back, adding that it "denies history" and again the following year in their book Shakespeare Beyond Doubt.
The Marlowe Theatre in Canterbury, Kent, UK, was named for Marlowe in 1949.
Marlowe has been used as a character in books, theatre, film, television, games and radio.
Modern scholarly collected works of Marlowe include:
Royal Shakespeare Company
Royal National Theatre
Shakespeare's Globe
The Marlowe Sessions
Sources: https://en.wikipedia.org/wiki/Christopher_Marlowe
5,772 | Cricket (disambiguation) | Cricket is a bat-and-ball sport contested by two teams.
Cricket also commonly refers to: Cricket (insect)
Cricket(s) or The Cricket(s) may also refer to:
Film and television: Christine Blair or Cricket, a character in The Young and the Restless
https://en.wikipedia.org/wiki/Cricket_(disambiguation)
5,776 | Caving | Caving, also known as spelunking (United States and Canada) and potholing (United Kingdom and Ireland), is the recreational pastime of exploring wild cave systems (as distinguished from show caves). In contrast, speleology is the scientific study of caves and the cave environment.
The challenges involved in caving vary according to the cave being visited; in addition to the total absence of light beyond the entrance, negotiating pitches, squeezes, and water hazards can be difficult. Cave diving is a distinct, and more hazardous, sub-speciality undertaken by a small minority of technically proficient cavers. In an area of overlap between recreational pursuit and scientific study, the most devoted and serious-minded cavers become accomplished at the surveying and mapping of caves and the formal publication of their efforts. These surveys are usually published freely and publicly, especially in the UK and other European countries, although in the US they are generally kept private.
Sometimes categorized as an "extreme sport", it is not commonly considered as such by longtime enthusiasts, who may dislike the term for its connotation of disregard for safety.
Many caving skills overlap with those involved in canyoning and mine and urban exploration.
Caving is often undertaken for the enjoyment of the outdoor activity or for physical exercise, as well as original exploration, similar to mountaineering or diving. Physical or biological science is also an important goal for some cavers, while others are engaged in cave photography. Virgin cave systems comprise some of the last unexplored regions on Earth and much effort is put into trying to locate, enter and survey them. In well-explored regions (such as most developed nations), the most accessible caves have already been explored, and gaining access to new caves often requires cave digging or cave diving.
One old technique used by hill people in the United States to find caves worth exploring was to yell into a hole, of whatever size, and listen for an echo. If there was none, the hole was just a hole. If there was an echo, the size of the cave could be judged from the length and strength of the echoes. This method is simple, cheap, and effective, and the explorer could then enlarge the hole to make an entrance. Meriwether Lewis, of the Lewis and Clark Expedition, used the yelling technique to find caves in Kentucky when he was a boy. Since caves were dark and flashlights had not yet been invented, Lewis and other explorers made torches out of knots of pine tree branches; such torches burned a long time and cast a bright light.
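The folk technique relies on simple acoustics: a shout travels into the cavity, reflects off the far wall and returns, so the delay of the echo gives a rough measure of a chamber's depth. The short Python sketch below illustrates that relationship; the 343 m/s speed of sound, the function name and the half-second example delay are illustrative assumptions, not details from the historical account.

# Rough one-way distance to an echoing cave wall from the round-trip delay.
# Assumes sound travels at ~343 m/s (dry air, 20 °C); cave air is cooler and
# more humid, so treat any result as an order-of-magnitude estimate only.

SPEED_OF_SOUND_M_PER_S = 343.0

def echo_distance_m(round_trip_delay_s: float) -> float:
    """The sound crosses the gap twice, so halve the round-trip distance."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_delay_s / 2.0

# Example: an echo heard half a second after the shout implies a reflecting
# surface roughly 86 m away, a chamber worth enlarging the hole for.
print(f"{echo_distance_m(0.5):.0f} m")  # prints "86 m"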
Caving, in certain areas, has also been utilized as a form of eco- and adventure tourism, for example in New Zealand. Tour companies have established an industry around leading and guiding tours into and through caves. Depending on the type of cave and the type of tour, the experience can be adventure-based or ecologically based. Guiding services lead tours through lava tubes, for example at Lava River Cave and on the oceanic islands of Tenerife, Iceland and Hawaii.
Caving has also been described as an "individualist's team sport" by some, as cavers can often make a trip without direct physical assistance from others but will generally go in a group for companionship or to provide emergency help if needed. Some, however, consider the assistance cavers give each other to be a typical team sport activity.
The term potholing refers to the act of exploring potholes, "pothole" being a word originating in the north of England for predominantly vertical caves.
Clay Perry, an American caver of the 1940s, wrote about a group of men and boys who explored and studied caves throughout New England. This group referred to themselves as spelunkers, a term derived from the Latin spēlunca ("cave, cavern, den"), itself from the Greek σπῆλυγξ spēlynks ("cave"). This is regarded as the first use of the word in the Americas. Throughout the 1950s, spelunking was the general term used for exploring caves in US English. It was used freely, without any positive or negative connotations, although only rarely outside the US.
In the 1960s, the terms spelunking and spelunker began to be considered déclassé among experienced enthusiasts. In 1985, Steve Knutson – editor of the National Speleological Society (NSS) publication American Caving Accidents – made the following distinction:
…Note that (in this case) the term 'spelunker' denotes someone untrained and unknowledgeable in current exploration techniques, and 'caver' is for those who are.
This sentiment is exemplified by bumper stickers and T-shirts displayed by some cavers: "Cavers rescue spelunkers". Nevertheless, outside the caving community, "spelunking" and "spelunkers" predominantly remain neutral terms referring to the practice and practitioners, irrespective of skill level.
In the mid-nineteenth century, John Birkbeck explored potholes in England, notably Gaping Gill in 1842 and Alum Pot in 1847–8, returning there in the 1870s. In the mid-1880s, Herbert E. Balch began exploring Wookey Hole Caves and in the 1890s, Balch was introduced to the caves of the Mendip Hills. One of the oldest established caving clubs, Yorkshire Ramblers' Club, was founded in 1892.
Caving as a specialized pursuit was pioneered by Édouard-Alfred Martel (1859–1938), who achieved the first descent and exploration of the Gouffre de Padirac, in France, as early as 1889 and the first complete descent of a 110-metre wet vertical shaft at Gaping Gill in 1895. He developed his own techniques based on ropes and metallic ladders. Martel visited Kentucky and notably Mammoth Cave National Park in October 1912. In the 1920s, famous US caver Floyd Collins made important explorations in the area, and in the 1930s, as caving became increasingly popular, small exploration teams both in the Alps and in the karstic high plateaus of southwest France (Causses and Pyrenees) transformed cave exploration into both a scientific and recreational activity. Robert de Joly, Guy de Lavaur and Norbert Casteret were prominent figures of that time, surveying mostly caves in Southwest France. During World War II, an alpine team composed of Pierre Chevalier, Fernand Petzl, Charles Petit-Didier and others explored the Dent de Crolles cave system near Grenoble, which became the deepest explored system in the world (−658 m) at that time. The lack of available equipment during the war forced Pierre Chevalier and the rest of the team to develop their own equipment, leading to technical innovation. The scaling-pole (1940), nylon ropes (1942), the use of explosives in caves (1947) and mechanical rope-ascenders (Henri Brenot's "monkeys", first used by Chevalier and Brenot in a cave in 1934) can be directly associated with the exploration of the Dent de Crolles cave system.
In 1941, American cavers organized themselves into the National Speleological Society (NSS) to advance the exploration, conservation, study and understanding of caves in the United States. American caver Bill Cuddington, known as "Vertical Bill", further developed the single-rope technique (SRT) in the late 1950s. In 1958, two Swiss alpinists, Juesi and Marti, teamed up to create the first rope ascender, known as the Jumar. In 1968 Bruno Dressler asked Fernand Petzl, who worked as a metals machinist, to build a rope-ascending tool, today known as the Petzl Croll, that Dressler had developed by adapting the Jumar to vertical caving. Pursuing these developments, Petzl started a caving equipment manufacturing company, also named Petzl, in the 1970s. The development of the rappel rack and the evolution of mechanical ascension systems extended the practice and safety of vertical exploration to a wider range of cavers.
Hard hats are worn to protect the head from bumps and falling rocks. The caver's primary light source is usually mounted on the helmet in order to keep the hands free. Electric LED lights are most common. Many cavers carry two or more sources of light – one as primary and the others as backup in case the first fails. More often than not, a second light will be mounted to the helmet for quick transition if the primary fails. Carbide lamp systems are an older form of illumination, inspired by miner's equipment, and are still used by some cavers, particularly on remote expeditions where electric charging facilities are not available.
The type of clothes worn underground varies according to the environment of the cave being explored, and the local culture. In cold caves, the caver may wear a warm base layer that retains its insulating properties when wet, such as a fleece ("furry") suit or polypropylene underwear, and an oversuit of hard-wearing (e.g., cordura) or waterproof (e.g., PVC) material. Lighter clothing may be worn in warm caves, particularly if the cave is dry, and in tropical caves thin polypropylene clothing is used, to provide some abrasion protection while remaining as cool as possible. Wetsuits may be worn if the cave is particularly wet or involves stream passages. On the feet boots are worn – hiking-style boots in drier caves, or rubber boots (such as wellies) often with neoprene socks ("wetsocks") in wetter caves. Knee-pads (and sometimes elbow-pads) are popular for protecting joints during crawls. Depending on the nature of the cave, gloves are sometimes worn to protect the hands against abrasion or cold. In pristine areas and for restoration, clean oversuits and powder-free, non-latex surgical gloves are used to protect the cave itself from contaminants. Ropes are used for descending or ascending pitches (single rope technique or SRT) or for protection. Knots commonly used in caving are the figure-of-eight- (or figure-of-nine-) loop, bowline, alpine butterfly, and Italian hitch. Ropes are usually rigged using bolts, slings, and carabiners. In some cases cavers may choose to bring and use a flexible metal ladder.
In addition to the equipment already described, cavers frequently carry packs containing first-aid kits, emergency equipment, and food. Containers for securely transporting urine are also commonly carried. On longer trips, containers for securely transporting feces out of the cave are carried.
During very long trips, it may be necessary to camp in the cave – some cavers have stayed underground for many days, or in particularly extreme cases, for weeks at a time. This is particularly the case when exploring or mapping extensive cave systems, where it would be impractical to retrace the route back to the surface regularly. Such long trips necessitate the cavers carrying provisions, sleeping, and cooking equipment.
Caves can be dangerous places; hypothermia, falling, flooding, falling rocks and physical exhaustion are the main risks. Rescuing people from underground is difficult and time-consuming, and requires special skills, training, and equipment. Full-scale cave rescues often involve the efforts of dozens of rescue workers (often other long-time cavers who have participated in specialized courses, as normal rescue staff are not sufficiently experienced in cave environments), who may themselves be put in jeopardy in effecting the rescue. This said, caving is not necessarily a high-risk sport (especially if it does not involve difficult climbs or diving). As in all physical sports, knowing one's limitations is key.
Caving in warmer climates carries the risk of contracting histoplasmosis, a fungal infection that is contracted from bird or bat droppings. It can cause pneumonia and can disseminate in the body to cause continued infections.
In many parts of the world, leptospirosis ("a type of bacterial infection spread by animals", including rats) is a distinct threat due to the presence of rat urine in rainwater or other precipitation that enters the cave's water system. Complications are uncommon, but can be serious. Safety risks while caving can be minimized by using a number of techniques:
Many cave environments are very fragile. Many speleothems can be damaged by even the slightest touch, and some by impacts as slight as a breath. Research suggests that increased carbon dioxide levels can lead to "a higher equilibrium concentration of calcium within the drip waters feeding the speleothems, and hence causes dissolution of existing features." In 2008, researchers found evidence that respiration from cave visitors may generate elevated carbon dioxide concentrations in caves, leading to temperature increases of up to 3 °C and dissolution of existing features.
Pollution is also of concern. Since water that flows through a cave eventually comes out in streams and rivers, any pollution may ultimately end up in someone's drinking water, and can even seriously affect the surface environment, as well. Even minor pollution such as dropping organic material can have a dramatic effect on the cave biota.
Cave-dwelling species are also very fragile, and often a particular species found in a cave may live within that cave alone and be found nowhere else in the world, such as the Alabama cave shrimp. Cave-dwelling species are accustomed to a near-constant climate of temperature and humidity, and any disturbance can be disruptive to their life cycles. Though cave wildlife may not always be immediately visible, it is typically nonetheless present in most caves.
Bats are one such group of fragile cave-dwelling animals. Bats which hibernate are most vulnerable during the winter season, when no food supply exists on the surface to replenish a bat's store of energy should it be awakened from hibernation. Bats which migrate are most sensitive during the summer months, when they are raising their young. For these reasons, visiting caves inhabited by hibernating bats is discouraged during cold months, and visiting caves inhabited by migratory bats is discouraged during the warmer months when they are most sensitive and vulnerable. Due to an affliction affecting bats in the northeastern US known as white nose syndrome (WNS), the US Fish & Wildlife Service has called for a moratorium, effective March 26, 2009, on caving activity in states known to have hibernacula (MD, NY, VT, NH, MA, CT, NJ, PA, VA, and WV) affected by WNS, as well as adjoining states.
Some cave passages may be marked with flagging tape or other indicators to show biologically, aesthetically, or archaeologically sensitive areas. Marked paths may show ways around notably fragile areas such as a pristine floor of sand or silt which may be thousands of years old, dating from the last time water flowed through the cave. Such deposits may easily be spoiled forever by a single misplaced step. Active formations such as flowstone can be similarly marred with a muddy footprint or handprint, and ancient human artifacts, such as fiber products, may even crumble to dust under all but the most gentle touch.
In 1988, concerned that cave resources were becoming increasingly damaged through unregulated use, Congress enacted the Federal Cave Resources Protection Act, giving land management agencies in the United States expanded authority to manage cave conservation on public land.
Cavers in many countries have created organizations for the administration and oversight of caving activities within their nations. The oldest of these is the French Federation of Speleology (originally Société de spéléologie) founded by Édouard-Alfred Martel in 1895, which produced the first periodical journal in speleology, Spelunca. The first university-based speleological institute in the world was founded in 1920 in Cluj-Napoca, Romania, by Emil Racovita, a Romanian biologist, zoologist, speleologist and explorer of Antarctica.
The British Speleological Association was established in 1935 and the National Speleological Society in the US was founded in 1941 (originally formed as the Speleological Society of the District of Columbia on May 6, 1939).
An international speleological congress was proposed at a meeting in Valence-sur-Rhone, France in 1949 and first held in 1953 in Paris. The International Union of Speleology (UIS) was founded in 1965. | [
{
"paragraph_id": 0,
"text": "Caving, also known as spelunking (United States and Canada) and potholing (United Kingdom and Ireland), is the recreational pastime of exploring wild cave systems (as distinguished from show caves). In contrast, speleology is the scientific study of caves and the cave environment.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The challenges involved in caving vary according to the cave being visited; in addition to the total absence of light beyond the entrance, negotiating pitches, squeezes, and water hazards can be difficult. Cave diving is a distinct, and more hazardous, sub-speciality undertaken by a small minority of technically proficient cavers. In an area of overlap between recreational pursuit and scientific study, the most devoted and serious-minded cavers become accomplished at the surveying and mapping of caves and the formal publication of their efforts. These are usually published freely and publicly, especially in the UK and other European countries, although in the US, these are generally private.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Sometimes categorized as an \"extreme sport\", it is not commonly considered as such by longtime enthusiasts, who may dislike the term for its connotation of disregard for safety.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Many caving skills overlap with those involved in canyoning and mine and urban exploration.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Caving is often undertaken for the enjoyment of the outdoor activity or for physical exercise, as well as original exploration, similar to mountaineering or diving. Physical or biological science is also an important goal for some cavers, while others are engaged in cave photography. Virgin cave systems comprise some of the last unexplored regions on Earth and much effort is put into trying to locate, enter and survey them. In well-explored regions (such as most developed nations), the most accessible caves have already been explored, and gaining access to new caves often requires cave digging or cave diving.",
"title": "Motivation"
},
{
"paragraph_id": 5,
"text": "One old technique used by hill people in the United States to find caves worth exploring was to yell into a hole and listen for an echo. On finding a hole, the size of which did not matter, the would-be cave explorer would yell into the opening and listen for an echo. If there was none, the hole was just a hole. If there was an echo, the size of the cave could be determined by the length and strength of the echoes. This method is simple, cheap, and effective. The explorer could then enlarge the hole to make an entrance. Meriwether Lewis, of the Lewis and Clark Expedition, used the yelling technique to find caves in Kentucky when he was a boy. Since caves were dark, and flashlights had not been invented, Lewis, and other explorers, made torches out of knots of pine tree branches. Such torches burned a long time and cast a bright light.",
"title": "Motivation"
},
{
"paragraph_id": 6,
"text": "Caving, in certain areas, has also been utilized as a form of eco and adventure tourism, for example in New Zealand. Tour companies have established an industry leading and guiding tours into and through caves. Depending on the type of cave and the type of tour, the experience could be adventure-based or ecological-based. There are tours led through lava tubes by a guiding service (e.g. Lava River Cave, the oceanic islands of Tenerife, Iceland and Hawaii).",
"title": "Motivation"
},
{
"paragraph_id": 7,
"text": "Caving has also been described as an \"individualist's team sport\" by some, as cavers can often make a trip without direct physical assistance from others but will generally go in a group for companionship or to provide emergency help if needed. Some however consider the assistance cavers give each other as a typical team sport activity.",
"title": "Motivation"
},
{
"paragraph_id": 8,
"text": "The term Potholing refers to the act of exploring potholes, a word originating in the north of England for predominantly vertical caves.",
"title": "Etymology"
},
{
"paragraph_id": 9,
"text": "Clay Perry, an American caver of the 1940s, wrote about a group of men and boys who explored and studied caves throughout New England. This group referred to themselves as spelunkers, a term derived from the Latin spēlunca (\"cave, cavern, den\"), itself from the Greek σπῆλυγξ spēlynks (\"cave\"). This is regarded as the first use of the word in the Americas. Throughout the 1950s, spelunking was the general term used for exploring caves in US English. It was used freely, without any positive or negative connotations, although only rarely outside the US.",
"title": "Etymology"
},
{
"paragraph_id": 10,
"text": "In the 1960s, the terms spelunking and spelunker began to be considered déclassé among experienced enthusiasts. In 1985, Steve Knutson – editor of the National Speleological Society (NSS) publication American Caving Accidents – made the following distinction:",
"title": "Etymology"
},
{
"paragraph_id": 11,
"text": "…Note that (in this case) the term 'spelunker' denotes someone untrained and unknowledgeable in current exploration techniques, and 'caver' is for those who are.",
"title": "Etymology"
},
{
"paragraph_id": 12,
"text": "This sentiment is exemplified by bumper stickers and T-shirts displayed by some cavers: \"Cavers rescue spelunkers\". Nevertheless, outside the caving community, \"spelunking\" and \"spelunkers\" predominately remain neutral terms referring to the practice and practitioners, without any respect to skill level.",
"title": "Etymology"
},
{
"paragraph_id": 13,
"text": "In the mid-nineteenth century, John Birkbeck explored potholes in England, notably Gaping Gill in 1842 and Alum Pot in 1847–8, returning there in the 1870s. In the mid-1880s, Herbert E. Balch began exploring Wookey Hole Caves and in the 1890s, Balch was introduced to the caves of the Mendip Hills. One of the oldest established caving clubs, Yorkshire Ramblers' Club, was founded in 1892.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Caving as a specialized pursuit was pioneered by Édouard-Alfred Martel (1859–1938), who first achieved the descent and exploration of the Gouffre de Padirac, in France, as early as 1889 and the first complete descent of a 110-metre wet vertical shaft at Gaping Gill in 1895. He developed his own techniques based on ropes and metallic ladders. Martel visited Kentucky and notably Mammoth Cave National Park in October 1912. In the 1920s famous US caver Floyd Collins made important explorations in the area and in the 1930s, as caving became increasingly popular, small exploration teams both in the Alps and in the karstic high plateaus of southwest France (Causses and Pyrenees) transformed cave exploration into both a scientific and recreational activity. Robert de Joly, Guy de Lavaur and Norbert Casteret were prominent figures of that time, surveying mostly caves in Southwest France. During World War II, an alpine team composed of Pierre Chevalier, Fernand Petzl, Charles Petit-Didier and others explored the Dent de Crolles cave system near Grenoble, which became the deepest explored system in the world (-658m) at that time. The lack of available equipment during the war forced Pierre Chevalier and the rest of the team to develop their own equipment, leading to technical innovation. The scaling-pole (1940), nylon ropes (1942), use of explosives in caves (1947) and mechanical rope-ascenders (Henri Brenot's \"monkeys\", first used by Chevalier and Brenot in a cave in 1934) can be directly associated to the exploration of the Dent de Crolles cave system.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In 1941, American cavers organized themselves into the National Speleological Society (NSS) to advance the exploration, conservation, study and understanding of caves in the United States. American caver Bill Cuddington, known as \"Vertical Bill\", further developed the single-rope technique (SRT) in the late 1950s. In 1958, two Swiss alpinists, Juesi and Marti teamed together, creating the first rope ascender known as the Jumar. In 1968 Bruno Dressler asked Fernand Petzl, who worked as a metals machinist, to build a rope-ascending tool, today known as the Petzl Croll, that he had developed by adapting the Jumar to vertical caving. Pursuing these developments, Petzl started in the 1970s a caving equipment manufacturing company named Petzl. The development of the rappel rack and the evolution of mechanical ascension systems extended the practice and safety of vertical exploration to a wider range of cavers.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Hard hats are worn to protect the head from bumps and falling rocks. The caver's primary light source is usually mounted on the helmet in order to keep the hands free. Electric LED lights are most common. Many cavers carry two or more sources of light – one as primary and the others as backup in case the first fails. More often than not, a second light will be mounted to the helmet for quick transition if the primary fails. Carbide lamp systems are an older form of illumination, inspired by miner's equipment, and are still used by some cavers, particularly on remote expeditions where electric charging facilities are not available.",
"title": "Practice and equipment"
},
{
"paragraph_id": 17,
"text": "The type of clothes worn underground varies according to the environment of the cave being explored, and the local culture. In cold caves, the caver may wear a warm base layer that retains its insulating properties when wet, such as a fleece (\"furry\") suit or polypropylene underwear, and an oversuit of hard-wearing (e.g., cordura) or waterproof (e.g., PVC) material. Lighter clothing may be worn in warm caves, particularly if the cave is dry, and in tropical caves thin polypropylene clothing is used, to provide some abrasion protection while remaining as cool as possible. Wetsuits may be worn if the cave is particularly wet or involves stream passages. On the feet boots are worn – hiking-style boots in drier caves, or rubber boots (such as wellies) often with neoprene socks (\"wetsocks\") in wetter caves. Knee-pads (and sometimes elbow-pads) are popular for protecting joints during crawls. Depending on the nature of the cave, gloves are sometimes worn to protect the hands against abrasion or cold. In pristine areas and for restoration, clean oversuits and powder-free, non-latex surgical gloves are used to protect the cave itself from contaminants. Ropes are used for descending or ascending pitches (single rope technique or SRT) or for protection. Knots commonly used in caving are the figure-of-eight- (or figure-of-nine-) loop, bowline, alpine butterfly, and Italian hitch. Ropes are usually rigged using bolts, slings, and carabiners. In some cases cavers may choose to bring and use a flexible metal ladder.",
"title": "Practice and equipment"
},
{
"paragraph_id": 18,
"text": "In addition to the equipment already described, cavers frequently carry packs containing first-aid kits, emergency equipment, and food. Containers for securely transporting urine are also commonly carried. On longer trips, containers for securely transporting feces out of the cave are carried.",
"title": "Practice and equipment"
},
{
"paragraph_id": 19,
"text": "During very long trips, it may be necessary to camp in the cave – some cavers have stayed underground for many days, or in particularly extreme cases, for weeks at a time. This is particularly the case when exploring or mapping extensive cave systems, where it would be impractical to retrace the route back to the surface regularly. Such long trips necessitate the cavers carrying provisions, sleeping, and cooking equipment.",
"title": "Practice and equipment"
},
{
"paragraph_id": 20,
"text": "Caves can be dangerous places; hypothermia, falling, flooding, falling rocks and physical exhaustion are the main risks. Rescuing people from underground is difficult and time-consuming, and requires special skills, training, and equipment. Full-scale cave rescues often involve the efforts of dozens of rescue workers (often other long-time cavers who have participated in specialized courses, as normal rescue staff are not sufficiently experienced in cave environments), who may themselves be put in jeopardy in effecting the rescue. This said, caving is not necessarily a high-risk sport (especially if it does not involve difficult climbs or diving). As in all physical sports, knowing one's limitations is key.",
"title": "Safety"
},
{
"paragraph_id": 21,
"text": "Caving in warmer climates carries the risk of contracting histoplasmosis, a fungal infection that is contracted from bird or bat droppings. It can cause pneumonia and can disseminate in the body to cause continued infections.",
"title": "Safety"
},
{
"paragraph_id": 22,
"text": "In many parts of the world, leptospirosis (\"a type of bacterial infection spread by animals\" including rats) is a distinct threat due to the presence of rat urine in rainwater or precipitation that enters the caves water system. Complications are uncommon, but can be serious. Safety risks while caving can be minimized by using a number of techniques:",
"title": "Safety"
},
{
"paragraph_id": 23,
"text": "Many cave environments are very fragile. Many speleothems can be damaged by even the slightest touch and some by impacts as slight as a breath. Research suggests that increased carbon dioxide levels can lead to \"a higher equilibrium concentration of calcium within the drip waters feeding the speleothems, and hence causes dissolution of existing features.\" In 2008, researchers found evidence that respiration from cave visitors may generate elevated carbon dioxide concentrations in caves, leading to increased temperatures of up to 3 °C and a dissolution of existing features.",
"title": "Cave conservation"
},
{
"paragraph_id": 24,
"text": "Pollution is also of concern. Since water that flows through a cave eventually comes out in streams and rivers, any pollution may ultimately end up in someone's drinking water, and can even seriously affect the surface environment, as well. Even minor pollution such as dropping organic material can have a dramatic effect on the cave biota.",
"title": "Cave conservation"
},
{
"paragraph_id": 25,
"text": "Cave-dwelling species are also very fragile, and often, a particular species found in a cave may live within that cave alone, and be found nowhere else in the world, such as Alabama cave shrimp. Cave-dwelling species are accustomed to a near-constant climate of temperature and humidity, and any disturbance can be disruptive to the species' life cycles. Though cave wildlife may not always be immediately visible, it is typically nonetheless present in most caves.",
"title": "Cave conservation"
},
{
"paragraph_id": 26,
"text": "Bats are one such fragile species of cave-dwelling animal. Bats which hibernate are most vulnerable during the winter season, when no food supply exists on the surface to replenish the bat's store of energy should it be awakened from hibernation. Bats which migrate are most sensitive during the summer months when they are raising their young. For these reasons, visiting caves inhabited by hibernating bats is discouraged during cold months; and visiting caves inhabited by migratory bats is discouraged during the warmer months when they are most sensitive and vulnerable. Due to an affliction affecting bats in the northeastern US known as white nose syndrome (WNS), the US Fish & Wildlife Service has called for a moratorium effective March 26, 2009, on caving activity in states known to have hibernacula (MD, NY, VT, NH, MA, CT, NJ, PA, VA, and WV) affected by WNS, as well as adjoining states.",
"title": "Cave conservation"
},
{
"paragraph_id": 27,
"text": "Some cave passages may be marked with flagging tape or other indicators to show biologically, aesthetically, or archaeologically sensitive areas. Marked paths may show ways around notably fragile areas such as a pristine floor of sand or silt which may be thousands of years old, dating from the last time water flowed through the cave. Such deposits may easily be spoiled forever by a single misplaced step. Active formations such as flowstone can be similarly marred with a muddy footprint or handprint, and ancient human artifacts, such as fiber products, may even crumble to dust under all but the most gentle touch.",
"title": "Cave conservation"
},
{
"paragraph_id": 28,
"text": "In 1988, concerned that cave resources were becoming increasingly damaged through unregulated use, Congress enacted the Federal Cave Resources Protection Act, giving land management agencies in the United States expanded authority to manage cave conservation on public land.",
"title": "Cave conservation"
},
{
"paragraph_id": 29,
"text": "Cavers in many countries have created organizations for the administration and oversight of caving activities within their nations. The oldest of these is the French Federation of Speleology (originally Société de spéléologie) founded by Édouard-Alfred Martel in 1895, which produced the first periodical journal in speleology, Spelunca. The first University-based speleological institute in the world was founded in 1920 in Cluj-Napoca, Romania, by Emil Racovita, a Romanian biologist, zoologist, speleologist and explorer of Antarctica.",
"title": "Caving organizations"
},
{
"paragraph_id": 30,
"text": "The British Speleological Association was established in 1935 and the National Speleological Society in the US was founded in 1941 (originally formed as the Speleological Society of the District of Columbia on May 6, 1939).",
"title": "Caving organizations"
},
{
"paragraph_id": 31,
"text": "An international speleological congress was proposed at a meeting in Valence-sur-Rhone, France in 1949 and first held in 1953 in Paris. The International Union of Speleology (UIS) was founded in 1965.",
"title": "Caving organizations"
}
] | Caving, also known as spelunking and potholing, is the recreational pastime of exploring wild cave systems. In contrast, speleology is the scientific study of caves and the cave environment. The challenges involved in caving vary according to the cave being visited; in addition to the total absence of light beyond the entrance, negotiating pitches, squeezes, and water hazards can be difficult. Cave diving is a distinct, and more hazardous, sub-speciality undertaken by a small minority of technically proficient cavers. In an area of overlap between recreational pursuit and scientific study, the most devoted and serious-minded cavers become accomplished at the surveying and mapping of caves and the formal publication of their efforts. These are usually published freely and publicly, especially in the UK and other European countries, although in the US, these are generally private. Sometimes categorized as an "extreme sport", it is not commonly considered as such by longtime enthusiasts, who may dislike the term for its connotation of disregard for safety. Many caving skills overlap with those involved in canyoning and mine and urban exploration. | 2001-06-23T04:52:28Z | 2023-11-21T03:52:19Z | [
"Template:Redirect-acronym",
"Template:Redirect",
"Template:Annotated link",
"Template:Cite news",
"Template:LSJ",
"Template:Caves",
"Template:Authority control",
"Template:See also",
"Template:Main",
"Template:Reflist",
"Template:Cite web",
"Template:Commons",
"Template:Wiktionary",
"Template:Blockquote",
"Template:Lang",
"Template:Cn",
"Template:Citation needed",
"Template:OEtymD",
"Template:L&S",
"Template:Cite journal",
"Template:Cite book",
"Template:Short description",
"Template:Subterranea"
] | https://en.wikipedia.org/wiki/Caving |
5,778 | Cave | A cave or cavern is a natural void in the ground, specifically a space large enough for a human to enter. Caves often form by the weathering of rock and often extend deep underground. The word cave can refer to smaller openings such as sea caves, rock shelters, and grottos, which extend a relatively short distance into the rock; these are called exogene caves. Caves which extend further underground than the opening is wide are called endogene caves.
Speleology is the science of exploration and study of all aspects of caves and the cave environment. Visiting or exploring caves for recreation may be called caving, potholing, or spelunking.
The formation and development of caves is known as speleogenesis; it can occur over the course of millions of years. Caves can range widely in size, and are formed by various geological processes. These may involve a combination of chemical processes, erosion by water, tectonic forces, microorganisms, pressure, and atmospheric influences. Isotopic dating techniques can be applied to cave sediments, to determine the timescale of the geological events which formed and shaped present-day caves.
It is estimated that a cave cannot be more than 3,000 metres (9,800 ft) vertically beneath the surface due to the pressure of overlying rocks. This does not, however, impose a maximum depth for a cave, which is measured from its highest entrance to its lowest point, as the amount of rock above the lowest point depends on the topography of the landscape above it. For karst caves the maximum depth is determined by the lower limit of karst-forming processes, coinciding with the base of the soluble carbonate rocks. Most caves are formed in limestone by dissolution.
Caves can be classified in various other ways as well, including a contrast between active and relict: active caves have water flowing through them; relict caves do not, though water may be retained in them. Types of active caves include inflow caves ("into which a stream sinks"), outflow caves ("from which a stream emerges"), and through caves ("traversed by a stream").
Solutional caves or karst caves are the most frequently occurring caves. Such caves form in rock that is soluble; most occur in limestone, but they can also form in other rocks including chalk, dolomite, marble, salt, and gypsum. Except for salt caves, solutional caves result when rock is dissolved by natural acid in groundwater that seeps through bedding planes, faults, joints, and comparable features. Over time cracks enlarge to become caves and cave systems.
The largest and most abundant solutional caves are located in limestone. Limestone dissolves under the action of rainwater and groundwater charged with H2CO3 (carbonic acid) and naturally occurring organic acids. The dissolution process produces a distinctive landform known as karst, characterized by sinkholes and underground drainage. Limestone caves are often adorned with calcium carbonate formations produced through slow precipitation. These include flowstones, stalactites, stalagmites, helictites, soda straws and columns. These secondary mineral deposits in caves are called speleothems.
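As a brief illustration of the chemistry just described (these are the standard karst reactions, added here for clarity rather than drawn from this article's sources), dissolved carbon dioxide forms carbonic acid, which converts insoluble calcium carbonate into soluble calcium bicarbonate that groundwater carries away:

CO_2 + H_2O \rightleftharpoons H_2CO_3
CaCO_3 + H_2CO_3 \rightarrow Ca(HCO_3)_2

Speleothems record the reverse process: where drip water degasses CO_2, calcite precipitates back out of solution.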
The portions of a solutional cave that are below the water table or the local level of the groundwater will be flooded.
Lechuguilla Cave in New Mexico and nearby Carlsbad Cavern are now believed to be examples of another type of solutional cave. They were formed by H2S (hydrogen sulfide) gas rising from below, where reservoirs of oil give off sulfurous fumes. This gas mixes with groundwater and forms H2SO4 (sulfuric acid). The acid then dissolves the limestone from below, rather than from above, by acidic water percolating from the surface.
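The sulfuric acid speleogenesis described above can be sketched schematically (a simplified summary; in nature the oxidation step is typically microbially mediated):

H_2S + 2\,O_2 \rightarrow H_2SO_4
H_2SO_4 + CaCO_3 \rightarrow CaSO_4 + H_2O + CO_2

The calcium sulfate by-product hydrates to gypsum, which is consistent with the massive gypsum deposits found in caves formed this way.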
Caves formed at the same time as the surrounding rock are called primary caves.
Lava tubes are formed through volcanic activity and are the most common primary caves. As lava flows downhill, its surface cools and solidifies. Hot liquid lava continues to flow under that crust, and if most of it flows out, a hollow tube remains. Such caves can be found in the Canary Islands, Jeju-do, the basaltic plains of Eastern Idaho, and in other places. Kazumura Cave near Hilo, Hawaii is a remarkably long and deep lava tube; it is 65.6 km long (40.8 mi).
Lava caves include but are not limited to lava tubes. Other caves formed through volcanic activity include rifts, lava molds, open vertical conduits, inflationary caves, and blisters, among others.
Sea caves are found along coasts around the world. A special case is littoral caves, which are formed by wave action in zones of weakness in sea cliffs. Often these weaknesses are faults, but they may also be dykes or bedding-plane contacts. Some wave-cut caves are now above sea level because of later uplift. Elsewhere, in places such as Thailand's Phang Nga Bay, solutional caves have been flooded by the sea and are now subject to littoral erosion. Sea caves are generally around 5 to 50 metres (16 to 164 ft) in length, but may exceed 300 metres (980 ft).
Corrasional or erosional caves are those that form entirely by erosion by flowing streams carrying rocks and other sediments. These can form in any type of rock, including hard rocks such as granite. Generally there must be some zone of weakness to guide the water, such as a fault or joint. A subtype of the erosional cave is the wind or aeolian cave, carved by wind-borne sediments. Caves formed initially by solutional processes often undergo a subsequent phase of erosional or vadose enlargement where active streams or rivers pass through them.
Glacier caves are formed by melting ice and flowing water within and under glaciers. The cavities are influenced by the very slow flow of the ice, which tends to collapse the caves again. Glacier caves are sometimes misidentified as "ice caves", though this latter term is properly reserved for bedrock caves that contain year-round ice formations.
Fracture caves are formed when layers of more soluble minerals, such as gypsum, dissolve out from between layers of less soluble rock. These rocks fracture and collapse in blocks of stone.
Talus caves are formed by the openings among large boulders that have fallen down into a random heap, often at the bases of cliffs. These unstable deposits are called talus or scree, and may be subject to frequent rockfalls and landslides.
Anchialine caves are caves, usually coastal, containing a mixture of freshwater and saline water (usually sea water). They occur in many parts of the world, and often contain highly specialized and endemic fauna.
Caves are found throughout the world, although the distribution of documented cave systems is heavily skewed towards those countries where caving has been popular for many years (such as France, Italy, Australia, the UK, and the United States). As a result, explored caves are found widely in Europe, Asia, North America and Oceania, but are sparse in South America, Africa, and Antarctica.
This is a rough generalization, as large expanses of North America and Asia contain no documented caves, whereas areas such as the Madagascar dry deciduous forests and parts of Brazil contain many documented caves. As the world's expanses of soluble bedrock are researched by cavers, the distribution of documented caves is likely to shift. For example, China, despite containing around half the world's exposed limestone—more than 1,000,000 square kilometres (390,000 sq mi)—has relatively few documented caves.
Cave-inhabiting animals are often categorized as troglobites (cave-limited species), troglophiles (species that can live their entire lives in caves, but also occur in other environments), trogloxenes (species that use caves, but cannot complete their life cycle fully in caves) and accidentals (animals not in one of the previous categories). Some authors use separate terminology for aquatic forms (for example, stygobites, stygophiles, and stygoxenes).
Of these animals, the troglobites are perhaps the most unusual organisms. Troglobitic species often show a number of characteristics, termed troglomorphic, associated with their adaptation to subterranean life. These characteristics may include a loss of pigment (often resulting in a pale or white coloration), a loss of eyes (or at least of optical functionality), an elongation of appendages, and an enhancement of other senses (such as the ability to sense vibrations in water). Aquatic troglobites (or stygobites), such as the endangered Alabama cave shrimp, live in bodies of water found in caves and get nutrients from detritus washed into their caves and from the feces of bats and other cave inhabitants. Other aquatic troglobites include cave fish, and cave salamanders such as the olm and the Texas blind salamander.
Cave insects such as Oligaphorura (formerly Archaphorura) schoetti are troglophiles, reaching 1.7 millimetres (0.067 in) in length. The species has an extensive distribution and has been studied fairly widely. Most specimens are female, but a male specimen was collected from St Cuthberts Swallet in 1969.
Bats, such as the gray bat and Mexican free-tailed bat, are trogloxenes and are often found in caves; they forage outside of the caves. Some species of cave crickets are classified as trogloxenes, because they roost in caves by day and forage above ground at night.
Because of the fragility of cave ecosystems, and the fact that cave regions tend to be isolated from one another, caves harbor a number of endangered species, such as the Tooth Cave spider, Liphistius trapdoor spiders, and the gray bat.
Caves are visited by many surface-living animals, including humans. These are usually relatively short-lived incursions, due to the lack of light and sustenance.
Cave entrances often have typical florae. For instance, in the eastern temperate United States, cave entrances are most frequently (and often densely) populated by the bulblet fern, Cystopteris bulbifera.
Throughout history, primitive peoples have made use of caves. The earliest human fossils found in caves come from a series of caves near Krugersdorp and Mokopane in South Africa. The cave sites of Sterkfontein, Swartkrans, Kromdraai B, Drimolen, Malapa, Cooper's D, Gladysvale, Gondolin and Makapansgat have yielded a range of early human species dating back to between three and one million years ago, including Australopithecus africanus, Australopithecus sediba and Paranthropus robustus. However, it is not generally thought that these early humans were living in the caves, but that they were brought into the caves by carnivores that had killed them.
The first early hominid ever found in Africa, the Taung Child in 1924, was also thought for many years to come from a cave, where it had been deposited after being preyed upon by an eagle. However, this is now debated (Hopley et al., 2013; Am. J. Phys. Anthrop.). Caves do form in the dolomite of the Ghaap Plateau, including the Early, Middle and Later Stone Age site of Wonderwerk Cave; however, the caves that form along the escarpment's edge, like that hypothesised for the Taung Child, are formed within a secondary limestone deposit called tufa. There is abundant evidence of other early human species inhabiting caves from at least one million years ago in different parts of the world, including Homo erectus in China at Zhoukoudian, Homo rhodesiensis in South Africa at the Cave of Hearths (Makapansgat), Homo neanderthalensis and Homo heidelbergensis in Europe at the Archaeological Site of Atapuerca, Homo floresiensis in Indonesia, and the Denisovans in southern Siberia.
In southern Africa, early modern humans regularly used sea caves as shelter starting about 180,000 years ago when they learned to exploit the sea for the first time. The oldest known site is PP13B at Pinnacle Point. This may have allowed rapid expansion of humans out of Africa and colonization of areas of the world such as Australia by 60–50,000 years ago. Throughout southern Africa, Australia, and Europe, early modern humans used caves and rock shelters as sites for rock art, such as those at Giant's Castle. Caves such as the yaodong in China were used for shelter; other caves were used for burials (such as rock-cut tombs), or as religious sites (such as Buddhist caves). Among the known sacred caves are China's Cave of a Thousand Buddhas and the sacred caves of Crete.
The importance of sound in caves predates a modern understanding of acoustics. Archaeologists have uncovered relationships between paintings of dots and lines and specific areas of resonance within the caves of Spain and France, as well as instruments depicting paleolithic motifs, indicators of musical events and rituals. Clusters of paintings were often found in areas with notable acoustics, sometimes even replicating the sounds of the animals depicted on the walls. The human voice is also theorized to have been used as an echolocation device to navigate darker areas of the caves where torches were less useful. Dots of red ochre are often found in spaces with the highest resonance, where the production of paintings was too difficult.
Caves continue to be used by modern-day explorers of acoustics. Today Cumberland Caverns provides one of the best examples of modern musical usages of caves. Caves are utilized not only for their reverberations, but also for the dampening qualities of their irregular faces. The irregularities in the walls of the Cumberland Caverns diffuse sounds bouncing off the walls and give the space an almost recording-studio-like quality. During the 20th century, musicians began to explore the possibility of using caves as locations for clubs and concert halls, including the likes of Dinah Shore, Roy Acuff, and Benny Goodman. Unlike today, these early performances were typically held in the mouths of the caves, as the lack of technology made the depths of the interior inaccessible to musical equipment. In Luray Caverns, Virginia, a functioning organ has been developed that generates sound by mallets striking stalactites, each with a different pitch. | [
{
"paragraph_id": 0,
"text": "A cave or cavern is a natural void in the ground, specifically a space large enough for a human to enter. Caves often form by the weathering of rock and often extend deep underground. The word cave can refer to smaller openings such as sea caves, rock shelters, and grottos, that extend a relatively short distance into the rock and they are called exogene caves. Caves which extend further underground than the opening is wide are called endogene caves.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Speleology is the science of exploration and study of all aspects of caves and the cave environment. Visiting or exploring caves for recreation may be called caving, potholing, or spelunking.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The formation and development of caves is known as speleogenesis; it can occur over the course of millions of years. Caves can range widely in size, and are formed by various geological processes. These may involve a combination of chemical processes, erosion by water, tectonic forces, microorganisms, pressure, and atmospheric influences. Isotopic dating techniques can be applied to cave sediments, to determine the timescale of the geological events which formed and shaped present-day caves.",
"title": "Formation types"
},
{
"paragraph_id": 3,
"text": "It is estimated that a cave cannot be more than 3,000 metres (9,800 ft) vertically beneath the surface due to the pressure of overlying rocks. This does not, however, impose a maximum depth for a cave which is measured from its highest entrance to its lowest point, as the amount of rock above the lowest point is dependent on the topography of the landscape above it. For karst caves the maximum depth is determined on the basis of the lower limit of karst forming processes, coinciding with the base of the soluble carbonate rocks. Most caves are formed in limestone by dissolution.",
"title": "Formation types"
},
{
"paragraph_id": 4,
"text": "Caves can be classified in various other ways as well, including a contrast between active and relict: active caves have water flowing through them; relict caves do not, though water may be retained in them. Types of active caves include inflow caves (\"into which a stream sinks\"), outflow caves (\"from which a stream emerges\"), and through caves (\"traversed by a stream\").",
"title": "Formation types"
},
{
"paragraph_id": 5,
"text": "Solutional caves or karst caves are the most frequently occurring caves. Such caves form in rock that is soluble; most occur in limestone, but they can also form in other rocks including chalk, dolomite, marble, salt, and gypsum. Except for salt caves, solutional caves result when rock is dissolved by natural acid in groundwater that seeps through bedding planes, faults, joints, and comparable features. Over time cracks enlarge to become caves and cave systems.",
"title": "Formation types"
},
{
"paragraph_id": 6,
"text": "The largest and most abundant solutional caves are located in limestone. Limestone dissolves under the action of rainwater and groundwater charged with H2CO3 (carbonic acid) and naturally occurring organic acids. The dissolution process produces a distinctive landform known as karst, characterized by sinkholes and underground drainage. Limestone caves are often adorned with calcium carbonate formations produced through slow precipitation. These include flowstones, stalactites, stalagmites, helictites, soda straws and columns. These secondary mineral deposits in caves are called speleothems.",
"title": "Formation types"
},
{
"paragraph_id": 7,
"text": "The portions of a solutional cave that are below the water table or the local level of the groundwater will be flooded.",
"title": "Formation types"
},
{
"paragraph_id": 8,
"text": "Lechuguilla Cave in New Mexico and nearby Carlsbad Cavern are now believed to be examples of another type of solutional cave. They were formed by H2S (hydrogen sulfide) gas rising from below, where reservoirs of oil give off sulfurous fumes. This gas mixes with groundwater and forms H2SO4 (sulfuric acid). The acid then dissolves the limestone from below, rather than from above, by acidic water percolating from the surface.",
"title": "Formation types"
},
{
"paragraph_id": 9,
"text": "Caves formed at the same time as the surrounding rock are called primary caves.",
"title": "Formation types"
},
{
"paragraph_id": 10,
"text": "Lava tubes are formed through volcanic activity and are the most common primary caves. As lava flows downhill, its surface cools and solidifies. Hot liquid lava continues to flow under that crust, and if most of it flows out, a hollow tube remains. Such caves can be found in the Canary Islands, Jeju-do, the basaltic plains of Eastern Idaho, and in other places. Kazumura Cave near Hilo, Hawaii is a remarkably long and deep lava tube; it is 65.6 km long (40.8 mi).",
"title": "Formation types"
},
{
"paragraph_id": 11,
"text": "Lava caves include but are not limited to lava tubes. Other caves formed through volcanic activity include rifts, lava molds, open vertical conduits, inflationary, blisters, among others.",
"title": "Formation types"
},
{
"paragraph_id": 12,
"text": "Sea caves are found along coasts around the world. A special case is littoral caves, which are formed by wave action in zones of weakness in sea cliffs. Often these weaknesses are faults, but they may also be dykes or bedding-plane contacts. Some wave-cut caves are now above sea level because of later uplift. Elsewhere, in places such as Thailand's Phang Nga Bay, solutional caves have been flooded by the sea and are now subject to littoral erosion. Sea caves are generally around 5 to 50 metres (16 to 164 ft) in length, but may exceed 300 metres (980 ft).",
"title": "Formation types"
},
{
"paragraph_id": 13,
"text": "Corrasional or erosional caves are those that form entirely by erosion by flowing streams carrying rocks and other sediments. These can form in any type of rock, including hard rocks such as granite. Generally there must be some zone of weakness to guide the water, such as a fault or joint. A subtype of the erosional cave is the wind or aeolian cave, carved by wind-born sediments. Many caves formed initially by solutional processes often undergo a subsequent phase of erosional or vadose enlargement where active streams or rivers pass through them.",
"title": "Formation types"
},
{
"paragraph_id": 14,
"text": "Glacier caves are formed by melting ice and flowing water within and under glaciers. The cavities are influenced by the very slow flow of the ice, which tends to collapse the caves again. Glacier caves are sometimes misidentified as \"ice caves\", though this latter term is properly reserved for bedrock caves that contain year-round ice formations.",
"title": "Formation types"
},
{
"paragraph_id": 15,
"text": "Fracture caves are formed when layers of more soluble minerals, such as gypsum, dissolve out from between layers of less soluble rock. These rocks fracture and collapse in blocks of stone.",
"title": "Formation types"
},
{
"paragraph_id": 16,
"text": "Talus caves are formed by the openings among large boulders that have fallen down into a random heap, often at the bases of cliffs. These unstable deposits are called talus or scree, and may be subject to frequent rockfalls and landslides.",
"title": "Formation types"
},
{
"paragraph_id": 17,
"text": "Anchialine caves are caves, usually coastal, containing a mixture of freshwater and saline water (usually sea water). They occur in many parts of the world, and often contain highly specialized and endemic fauna.",
"title": "Formation types"
},
{
"paragraph_id": 18,
"text": "Caves are found throughout the world, although the distribution of documented cave system is heavily skewed towards those countries where caving has been popular for many years (such as France, Italy, Australia, the UK, the United States, etc.). As a result, explored caves are found widely in Europe, Asia, North America and Oceania, but are sparse in South America, Africa, and Antarctica.",
"title": "Geographic distribution"
},
{
"paragraph_id": 19,
"text": "This is a rough generalization, as large expanses of North America and Asia contain no documented caves, whereas areas such as the Madagascar dry deciduous forests and parts of Brazil contain many documented caves. As the world's expanses of soluble bedrock are researched by cavers, the distribution of documented caves is likely to shift. For example, China, despite containing around half the world's exposed limestone—more than 1,000,000 square kilometres (390,000 sq mi)—has relatively few documented caves.",
"title": "Geographic distribution"
},
{
"paragraph_id": 20,
"text": "Cave-inhabiting animals are often categorized as troglobites (cave-limited species), troglophiles (species that can live their entire lives in caves, but also occur in other environments), trogloxenes (species that use caves, but cannot complete their life cycle fully in caves) and accidentals (animals not in one of the previous categories). Some authors use separate terminology for aquatic forms (for example, stygobites, stygophiles, and stygoxenes).",
"title": "Ecology"
},
{
"paragraph_id": 21,
"text": "Of these animals, the troglobites are perhaps the most unusual organisms. Troglobitic species often show a number of characteristics, termed troglomorphic, associated with their adaptation to subterranean life. These characteristics may include a loss of pigment (often resulting in a pale or white coloration), a loss of eyes (or at least of optical functionality), an elongation of appendages, and an enhancement of other senses (such as the ability to sense vibrations in water). Aquatic troglobites (or stygobites), such as the endangered Alabama cave shrimp, live in bodies of water found in caves and get nutrients from detritus washed into their caves and from the feces of bats and other cave inhabitants. Other aquatic troglobites include cave fish, and cave salamanders such as the olm and the Texas blind salamander.",
"title": "Ecology"
},
{
"paragraph_id": 22,
"text": "Cave insects such as Oligaphorura (formerly Archaphorura) schoetti are troglophiles, reaching 1.7 millimetres (0.067 in) in length. They have extensive distribution and have been studied fairly widely. Most specimens are female, but a male specimen was collected from St Cuthberts Swallet in 1969.",
"title": "Ecology"
},
{
"paragraph_id": 23,
"text": "Bats, such as the gray bat and Mexican free-tailed bat, are trogloxenes and are often found in caves; they forage outside of the caves. Some species of cave crickets are classified as trogloxenes, because they roost in caves by day and forage above ground at night.",
"title": "Ecology"
},
{
"paragraph_id": 24,
"text": "Because of the fragility of cave ecosystems, and the fact that cave regions tend to be isolated from one another, caves harbor a number of endangered species, such as the Tooth cave spider, liphistius trapdoor spider, and the gray bat.",
"title": "Ecology"
},
{
"paragraph_id": 25,
"text": "Caves are visited by many surface-living animals, including humans. These are usually relatively short-lived incursions, due to the lack of light and sustenance.",
"title": "Ecology"
},
{
"paragraph_id": 26,
"text": "Cave entrances often have typical florae. For instance, in the eastern temperate United States, cave entrances are most frequently (and often densely) populated by the bulblet fern, Cystopteris bulbifera.",
"title": "Ecology"
},
{
"paragraph_id": 27,
"text": "Throughout history, primitive peoples have made use of caves. The earliest human fossils found in caves come from a series of caves near Krugersdorp and Mokopane in South Africa. The cave sites of Sterkfontein, Swartkrans, Kromdraai B, Drimolen, Malapa, Cooper's D, Gladysvale, Gondolin and Makapansgat have yielded a range of early human species dating back to between three and one million years ago, including Australopithecus africanus, Australopithecus sediba and Paranthropus robustus. However, it is not generally thought that these early humans were living in the caves, but that they were brought into the caves by carnivores that had killed them.",
"title": "Archaeological and cultural importance"
},
{
"paragraph_id": 28,
"text": "The first early hominid ever found in Africa, the Taung Child in 1924, was also thought for many years to come from a cave, where it had been deposited after being predated on by an eagle. However, this is now debated (Hopley et al., 2013; Am. J. Phys. Anthrop.). Caves do form in the dolomite of the Ghaap Plateau, including the Early, Middle and Later Stone Age site of Wonderwerk Cave; however, the caves that form along the escarpment's edge, like that hypothesised for the Taung Child, are formed within a secondary limestone deposit called tufa. There is numerous evidence for other early human species inhabiting caves from at least one million years ago in different parts of the world, including Homo erectus in China at Zhoukoudian, Homo rhodesiensis in South Africa at the Cave of Hearths (Makapansgat), Homo neanderthalensis and Homo heidelbergensis in Europe at Archaeological Site of Atapuerca, Homo floresiensis in Indonesia, and the Denisovans in southern Siberia.",
"title": "Archaeological and cultural importance"
},
{
"paragraph_id": 29,
"text": "In southern Africa, early modern humans regularly used sea caves as shelter starting about 180,000 years ago when they learned to exploit the sea for the first time. The oldest known site is PP13B at Pinnacle Point. This may have allowed rapid expansion of humans out of Africa and colonization of areas of the world such as Australia by 60–50,000 years ago. Throughout southern Africa, Australia, and Europe, early modern humans used caves and rock shelters as sites for rock art, such as those at Giant's Castle. Caves such as the yaodong in China were used for shelter; other caves were used for burials (such as rock-cut tombs), or as religious sites (such as Buddhist caves). Among the known sacred caves are China's Cave of a Thousand Buddhas and the sacred caves of Crete.",
"title": "Archaeological and cultural importance"
},
{
"paragraph_id": 30,
"text": "The importance of sound in caves predates a modern understanding of acoustics. Archaeologists have uncovered relationships between paintings of dots and lines, in specific areas of resonance, within the caves of Spain and France, as well as instruments depicting paleolithic motifs, indicators of musical events and rituals. Clusters of paintings were often found in areas with notable acoustics, sometimes even replicating the sounds of the animals depicted on the walls. The human voice was also theorized to be used as an echolocation device to navigate darker areas of the caves where torches were less useful. Dots of red ochre are often found in spaces with the highest resonance, where the production of paintings was too difficult.",
"title": "Caves and acoustics"
},
{
"paragraph_id": 31,
"text": "Caves continue to provide usage for modern-day explorers of acoustics. Today Cumberland Caverns provides one of the best examples for modern musical usages of caves. Not only are caves utilized for the reverberations, but for the dampening qualities of their abnormal faces as well. The irregularities in the walls of the Cumberland Caverns diffuse sounds bouncing off the walls and give the space and almost recording studio-like quality. During the 20th century musicians began to explore the possibility of using caves as locations as clubs and concert halls, including the likes of Dinah Shore, Roy Acuff, and Benny Goodman. Unlike today, these early performances were typically held in the mouths of the caves, as the lack of technology made depths of the interior inaccessible with musical equipment. In Luray Caverns, Virginia, a functioning organ has been developed that generates sound by mallets striking stalactites, each with a different pitch.",
"title": "Caves and acoustics"
}
] | A cave or cavern is a natural void in the ground, specifically a space large enough for a human to enter. Caves often form by the weathering of rock and often extend deep underground. The word cave can refer to smaller openings such as sea caves, rock shelters, and grottos that extend a relatively short distance into the rock; these are called exogene caves. Caves which extend further underground than the opening is wide are called endogene caves. Speleology is the science of exploration and study of all aspects of caves and the cave environment. Visiting or exploring caves for recreation may be called caving, potholing, or spelunking. | 2001-06-23T04:58:19Z | 2023-12-10T15:02:00Z | https://en.wikipedia.org/wiki/Cave
"Template:Commons category",
"Template:Caves",
"Template:Main",
"Template:Cite book",
"Template:Webarchive",
"Template:Div col",
"Template:Div col end",
"Template:Cite journal",
"Template:Subterranea",
"Template:Authority control",
"Template:Short description",
"Template:Convert",
"Template:For",
"Template:Citation needed",
"Template:Cite web",
"Template:Wikivoyage",
"Template:Americana Poster",
"Template:Other uses",
"Template:Redirect",
"Template:Circa",
"Template:Cbignore",
"Template:Annotated link",
"Template:Reflist",
"Template:Cite news"
] | https://en.wikipedia.org/wiki/Cave |
5,781 | Chinese numerals | Chinese numerals are words and characters used to denote numbers in written Chinese.
Today, speakers of Chinese languages use three written numeral systems: the system of Arabic numerals used worldwide, and two indigenous systems. The more familiar indigenous system is based on Chinese characters that correspond to numerals in the spoken language. These may be shared with other languages of the Chinese cultural sphere such as Korean, Japanese, and Vietnamese. Most people and institutions in China primarily use the Arabic or mixed Arabic-Chinese systems for convenience, with traditional Chinese numerals used in finance, mainly for writing amounts on cheques, banknotes, some ceremonial occasions, some boxes, and on commercials.
The other indigenous system consists of the Suzhou numerals, or huama, a positional system, the only surviving form of the rod numerals. These were once used by Chinese mathematicians, and later by merchants in Chinese markets, such as those in Hong Kong until the 1990s, but were gradually supplanted by Arabic numerals.
The Chinese character numeral system consists of the Chinese characters used by the Chinese written language to write spoken numerals. Similar to spelling-out numbers in English (e.g., "one thousand nine hundred forty-five"), it is not an independent system per se. Since it reflects spoken language, it does not use the positional system as in Arabic numerals, in the same way that spelling out numbers in English does not.
There are characters representing the numbers zero through nine, and other characters representing larger numbers such as tens, hundreds, thousands, ten thousands and hundred millions. There are two sets of characters for Chinese numerals: one for everyday writing, known as xiǎoxiě (小寫; 小写; 'small writing'), and one for use in commercial, accounting or financial contexts, known as dàxiě (大寫; 大写; 'big writing'). The latter arose because the characters used for writing numerals are geometrically simple, so simply using those numerals cannot prevent forgeries in the same way spelling numbers out in English would. A forger could easily change the everyday characters 三十 (30) to 五千 (5000) just by adding a few strokes. That would not be possible when writing using the financial characters 參拾 (30) and 伍仟 (5000). They are also referred to as "banker's numerals", "anti-fraud numerals", or "banker's anti-fraud numerals". For the same reason, rod numerals were never used in commercial records.
For numbers larger than 10,000, similarly to the long and short scales in the West, there have been four systems in ancient and modern usage. The original one, with unique names for all powers of ten up to the 14th, is ascribed to the Yellow Emperor in the 6th century book by Zhen Luan, Wujing suanshu; 'Arithmetic in Five Classics'. In modern Chinese, only the second system is used, in which the same ancient names are used, but each represents a myriad, 萬 wàn times the previous:
In practice, this situation does not lead to ambiguity, with the exception of 兆; zhào, which means 10¹² according to the system in common usage throughout the Chinese communities as well as in Japan and Korea, but has also been used for 10⁶ in recent years (especially in mainland China for megabyte). To avoid problems arising from the ambiguity, the PRC government never uses this character in official documents, but uses 万亿 (wànyì) or 太; tài; 'tera-' instead. Partly due to this, combinations of 万 and 亿 are often used instead of the larger units of the traditional system as well, for example 亿亿; yìyì instead of 京. The ROC government in Taiwan uses 兆; zhào to mean 10¹² in official documents.
Numerals beyond 載 zǎi come from Buddhist texts in Sanskrit, but are mostly found in ancient texts. Some of the following words are still being used today, but may have transferred meanings.
The following are characters used historically in Chinese to denote small orders of magnitude. With the introduction of SI units, some of them have been incorporated as SI prefixes, while the rest have fallen into disuse.
In the People's Republic of China, the early translations for the SI prefixes in 1981 differed from those used today. The larger (兆, 京, 垓, 秭, 穰) and smaller Chinese numerals (微, 纖, 沙, 塵, 渺) were defined as translations of the SI prefixes mega, giga, tera, peta, exa, micro, nano, pico, femto, and atto, resulting in the creation of yet more values for each numeral.
The Republic of China (Taiwan) defined 百萬 as the translation for mega and 兆 as the translation for tera. This translation is widely used in official documents, academic communities, and the information industry. However, civil broadcasting industries sometimes use 兆赫 to represent "megahertz".
Today, the governments of both China and Taiwan use phonetic transliterations for the SI prefixes. However, the governments have each chosen different Chinese characters for certain prefixes. The following table lists the two different standards together with the early translation.
Multiple-digit numbers are constructed using a multiplicative principle; first the digit itself (from 1 to 9), then the place (such as 10 or 100); then the next digit.
In Mandarin, the multiplier 兩 (liǎng) is often used rather than 二 (èr) for all numbers 200 and greater with the "2" numeral (although, as noted earlier, this varies from dialect to dialect and person to person). Either 兩 (liǎng) or 二 (èr) is acceptable for the number 200. When writing in the Cantonese dialect, 二 (yi) is used to represent the "2" numeral for all numbers. In the southern Min dialect of Chaozhou (Teochew), 兩 (no) is used to represent the "2" numeral in all numbers from 200 onwards. Thus:
For the numbers 11 through 19, the leading "one" (一; yī) is usually omitted. In some dialects, like Shanghainese, when there are only two significant digits in the number, the leading "one" and the trailing zeroes are omitted. Sometimes, the one before "ten" in the middle of a number, such as 213, is omitted. Thus:
Notes:
In certain older texts like the Protestant Bible or in poetic usage, numbers such as 114 may be written as [100] [10] [4] (百十四).
Outside of Taiwan, digits are sometimes grouped by myriads instead of thousands. Hence it is more convenient to think of numbers here in groups of four; thus 1,234,567,890 is regrouped here as 12,3456,7890. Above a myriad, each named number is therefore four zeroes longer than the one before it, thus 10000 × wàn (萬) = yì (億). If one of the numbers is between 10 and 19, the leading "one" is omitted as per the above point. Hence (numbers in parentheses indicate that the number has been written as one number rather than expanded):
In Taiwan, pure Arabic numerals are officially always and only grouped by thousands. Unofficially, they are often not grouped, particularly for numbers below 100,000. Mixed Arabic-Chinese numerals are often used in order to denote myriads. This is used both officially and unofficially, and comes in a variety of styles:
Interior zeroes before the unit position (as in 1002) must be spelt explicitly. The reason for this is that trailing zeroes (as in 1200) are often omitted as shorthand, so ambiguity occurs. One zero is sufficient to resolve the ambiguity. Where the zero is before a digit other than the units digit, the explicit zero is not ambiguous and is therefore optional, but preferred. Thus:
To construct a fraction, the denominator is written first, followed by 分; fēn; 'parts', then the literary possessive particle 之; zhī; 'of this', and lastly the numerator. This is the opposite of how fractions are read in English, which is numerator first. Each half of the fraction is written the same as a whole number. For example, to express "two thirds", the structure "three parts of-this two" is used. Mixed numbers are written with the whole-number part first, followed by 又; yòu; 'and', then the fractional part.
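A one-function sketch of this denominator-first ordering (assumed code, not from the original):

#include <iostream>
#include <string>

// Denominator first, then 分之 ("parts of this"), then the numerator.
std::string fraction(const std::string& numerator, const std::string& denominator) {
    return denominator + "分之" + numerator;
}

int main() {
    std::cout << fraction("二", "三") << "\n"; // prints 三分之二, "two thirds"
    return 0;
}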
Percentages are constructed similarly, using 百; bǎi; '100' as the denominator. (The number 100 is typically expressed as 一百; yībǎi; 'one hundred', like the English "one hundred". However, for percentages, 百 is used on its own.)
Because percentages and other fractions are formulated the same, Chinese speakers are more likely than not to express 10%, 20%, etc. as "parts of 10" (1/10, 2/10, etc., i.e. 十分之一; shí fēnzhī yī, 十分之二; shí fēnzhī èr, etc.) rather than "parts of 100" (10/100, 20/100, etc., i.e. 百分之十; bǎi fēnzhī shí, 百分之二十; bǎi fēnzhī èrshí, etc.).
In Taiwan, the most common formation of percentages in the spoken language is the number per hundred followed by the word 趴; pā, a contraction of the Japanese パーセント; pāsento, itself taken from the English "percent". Thus 25% is 二十五趴; èrshíwǔ pā.
Decimal numbers are constructed by first writing the whole number part, then inserting a point (simplified Chinese: 点; traditional Chinese: 點; pinyin: diǎn), and finally the fractional part. The fractional part is expressed using only the numbers for 0 to 9, similarly to English.
半; bàn; 'half' functions as a number and therefore requires a measure word. For example: 半杯水; bàn bēi shuǐ; 'half a glass of water'.
Ordinal numbers are formed by adding 第; dì ("sequence") before the number.
The Heavenly Stems are a traditional Chinese ordinal system.
Negative numbers are formed by adding fù (负; 負) before the number.
Chinese grammar requires the use of classifiers (measure words) when a numeral is used together with a noun to express a quantity. For example, "three people" is expressed as 三个人; 三個人; sān ge rén, "three (ge particle) person", where 个/個 ge is a classifier. There exist many different classifiers, for use with different sets of nouns, although 个/個 is the most common, and may be used informally in place of other classifiers.
Chinese uses cardinal numbers in certain situations in which English would use ordinals. For example, 三楼/三樓; sān lóu (literally "three story/storey") means "third floor" ("second floor" in British usage). Likewise, 二十一世纪/二十一世紀; èrshí yī shìjì (literally "twenty-one century") is used for "21st century".
Numbers of years are commonly spoken as a sequence of digits, as in 二零零一; èr líng líng yī ("two zero zero one") for the year 2001. Names of months and days (in the Western system) are also expressed using numbers: 一月; yīyuè ("one month") for January, etc.; and 星期一; xīngqīyī ("week one") for Monday, etc. There is only one exception: Sunday is 星期日; xīngqīrì, or informally 星期天; xīngqītiān, both literally "week day". When meaning "week", "星期" xīngqī and "禮拜; 礼拜" lǐbài are interchangeable. "禮拜天" lǐbàitiān or "禮拜日" lǐbàirì means "day of worship". Chinese Catholics call Sunday "主日" zhǔrì, "Lord's day".
Full dates are usually written in the format 2001年1月20日 for January 20, 2001 (using 年; nián "year", 月; yuè "month", and 日; rì "day") – all the numbers are read as cardinals, not ordinals, with no leading zeroes, and the year is read as a sequence of digits. For brevity the nián, yuè and rì may be dropped to give a date composed of just numbers. For example, "6-4" in Chinese is "six-four", short for "month six, day four", i.e. June Fourth, a common Chinese shorthand for the 1989 Tiananmen Square protests (because of the violence that occurred on June 4). For another example, "67" in Chinese is "six-seven", short for "year nineteen sixty-seven", a common Chinese shorthand for the Hong Kong 1967 leftist riots.
In the same way that Roman numerals were standard in ancient and medieval Europe for mathematics and commerce, the Chinese formerly used the rod numerals, which is a positional system. The Suzhou numerals (simplified Chinese: 苏州花码; traditional Chinese: 蘇州花碼; pinyin: Sūzhōu huāmǎ) system is a variation of the Southern Song rod numerals. Nowadays, the huāmǎ system is only used for displaying prices in Chinese markets or on traditional handwritten invoices.
There is a common method of using one hand to signify the numbers one to ten. While the five digits on one hand can easily express the numbers one to five, six to ten have special signs that can be used in commerce or day-to-day communication.
Most Chinese numerals of later periods were descendants of the Shang dynasty oracle numerals of the 14th century BC. The oracle bone script numerals were found on tortoise shells and animal bones. In early civilizations, the Shang were able to express any number, however large, with only nine symbols and a counting board, though the system was still not positional.
Some of the bronze script numerals such as 1, 2, 3, 4, 10, 11, 12, and 13 became part of the system of rod numerals.
In this system, horizontal rod numbers are used for the tens, thousands, hundred thousands etc. It is written in Sunzi Suanjing that "one is vertical, ten is horizontal".
The counting rod numerals system has place value and decimal numerals for computation, and was used widely by Chinese merchants, mathematicians and astronomers from the Han dynasty to the 16th century.
In 690 AD, Empress Wǔ promulgated the Zetian characters, one of which was "〇". The character is now used as a symbol for the number zero.
Alexander Wylie, Christian missionary to China, in 1853 already refuted the notion that "the Chinese numbers were written in words at length", and stated that in ancient China, calculation was carried out by means of counting rods, and "the written character is evidently a rude presentation of these". After being introduced to the rod numerals, he said "Having thus obtained a simple but effective system of figures, we find the Chinese in actual use of a method of notation depending on the theory of local value [i.e. place-value], several centuries before such theory was understood in Europe, and while yet the science of numbers had scarcely dawned among the Arabs."
During the Ming and Qing dynasties (after Arabic numerals were introduced into China), some Chinese mathematicians used Chinese numeral characters as positional system digits. After the Qing period, both the Chinese numeral characters and the Suzhou numerals were replaced by Arabic numerals in mathematical writings.
Traditional Chinese numeric characters are also used in Japan and Korea and were used in Vietnam before the 20th century. In vertical text (that is, read top to bottom), using characters for numbers is the norm, while in horizontal text, Arabic numerals are most common. Chinese numeric characters are also used in much the same formal or decorative fashion that Roman numerals are in Western cultures. Chinese numerals may appear together with Arabic numbers on the same sign or document. | 2001-10-23T01:43:41Z | 2023-12-12T08:49:55Z | https://en.wikipedia.org/wiki/Chinese_numerals
5,783 | Computer program | A computer program is a sequence or set of instructions in a programming language for a computer to execute. It is one component of software, which also includes documentation and other intangible components.
A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using the language's compiler. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within the language's interpreter.
If the executable is requested for execution, then the operating system loads it into memory and starts a process. The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction.
If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement. Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer.
The "Hello, World!" program is used to illustrate a language's basic syntax. The syntax of the language BASIC (1964) was intentionally limited to make the language easy to learn. For example, variables are not declared before being used. Also, variables are automatically initialized to zero. Here is an example computer program, in Basic, to average a list of numbers:
Once the mechanics of basic computer programming are learned, more sophisticated and powerful languages are available to build large computer systems.
Improvements in software development are the result of improvements in computer hardware. At each stage in hardware's history, the task of computer programming changed dramatically.
In 1837, Jacquard's loom inspired Charles Babbage to attempt to build the Analytical Engine. The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled. The device had a "store" which consisted of memory to hold 1,000 numbers of 50 decimal digits each. Numbers from the "store" were transferred to the "mill" for processing. It was programmed using two sets of perforated cards. One set directed the operation and the other set inputted the variables. However, the thousands of cogged wheels and gears never fully worked together, even after Babbage spent more than £17,000 of government money.
Ada Lovelace worked for Charles Babbage to create a description of the Analytical Engine (1843). The description contained Note G which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first computer program.
In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation. It is a finite-state machine that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. All present-day computers are Turing complete.
The Electronic Numerical Integrator And Computer (ENIAC) was built between July 1943 and Fall 1945. It was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together. Its 40 units weighed 30 tons, occupied 1,800 square feet (167 m²), and consumed $650 per hour (in 1940s currency) in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables were on wheels and needed to be rolled to fixed function panels. Function tables were connected to function panels by plugging heavy black cables into plugboards. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week. It ran from 1947 until 1955 at Aberdeen Proving Ground, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.
Instead of plugging in cords and turning switches, a stored-program computer loads its instructions into memory just like it loads its data into memory. As a result, the computer could be programmed quickly and perform calculations at very fast speeds. Presper Eckert and John Mauchly built the ENIAC. The two engineers introduced the stored-program concept in a three-page memo dated February 1944. Later, in September 1944, Dr. John von Neumann began working on the ENIAC project. On June 30, 1945, von Neumann published the First Draft of a Report on the EDVAC, which equated the structures of the computer with the structures of the human brain. The design became known as the von Neumann architecture. The architecture was simultaneously deployed in the constructions of the EDVAC and EDSAC computers in 1949.
The IBM System/360 (1964) was a family of computers, each having the same instruction set architecture. The Model 20 was the smallest and least expensive. Customers could upgrade and retain the same application software. The Model 195 was the most premium. Each System/360 model featured multiprogramming—having multiple processes in memory at once. When one process was waiting for input/output, another could compute.
IBM planned for each model to be programmed using PL/1. A committee was formed that included COBOL, Fortran and ALGOL programmers. The purpose was to develop a language that was comprehensive, easy to use, extendible, and would replace Cobol and Fortran. The result was a large and complex language that took a long time to compile.
Computers manufactured until the 1970s had front-panel switches for manual programming. The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs also were automatically inputted via paper tape, punched cards or magnetic-tape. After the medium was loaded, the starting address was set via switches, and the execute button was pressed.
A major milestone in software development was the invention of the Very Large Scale Integration (VLSI) circuit (1964). Following World War II, tube-based technology was replaced with point-contact transistors (1947) and bipolar junction transistors (late 1950s) mounted on a circuit board. During the 1960s, the aerospace industry replaced the circuit board with an integrated circuit chip.
Robert Noyce, co-founder of Fairchild Semiconductor (1957) and Intel (1968), achieved a technological improvement to refine the production of field-effect transistors (1963). The goal is to alter the electrical resistivity and conductivity of a semiconductor junction. First, naturally occurring silicate minerals are converted into polysilicon rods using the Siemens process. The Czochralski process then converts the rods into a monocrystalline silicon, boule crystal. The crystal is then thinly sliced to form a wafer substrate. The planar process of photolithography then integrates unipolar transistors, capacitors, diodes, and resistors onto the wafer to build a matrix of metal–oxide–semiconductor (MOS) transistors. The MOS transistor is the primary component in integrated circuit chips.
Originally, integrated circuit chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a matrix of read-only memory (ROM). The matrix resembled a two-dimensional array of fuses. The process to embed instructions onto the matrix was to burn out the unneeded connections. There were so many connections that firmware programmers wrote a computer program on another chip to oversee the burning. The technology became known as Programmable ROM. In 1971, Intel installed the computer program onto the chip and named it the Intel 4004 microprocessor.
The terms microprocessor and central processing unit (CPU) are now used interchangeably. However, CPUs predate microprocessors. For example, the IBM System/360 (1964) had a CPU made from circuit boards containing discrete components on ceramic substrates.
The Intel 4004 (1971) was a 4-bit microprocessor designed to run the Busicom calculator. Five months after its release, Intel released the Intel 8008, an 8-bit microprocessor. Bill Pentz led a team at Sacramento State to build the first microcomputer using the Intel 8008: the Sac State 8008 (1972). Its purpose was to store patient medical records. The computer supported a disk operating system to run a Memorex 3-megabyte hard disk drive. It had a color display and keyboard that was packaged in a single console. The disk operating system was programmed using IBM's Basic Assembly Language (BAL). The medical records application was programmed using a BASIC interpreter. However, the computer was an evolutionary dead end because it was extremely expensive. Also, it was built at a public university lab for a specific purpose. Nonetheless, the project contributed to the development of the Intel 8080 (1974) instruction set.
In 1978, the modern software development environment began when Intel upgraded the Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the cheaper Intel 8088. IBM embraced the Intel 8088 when they entered the personal computer market (1981). As consumer demand for personal computers increased, so did Intel's microprocessor development. The succession of development is known as the x86 series. The x86 assembly language is a family of backward-compatible machine instructions. Machine instructions created in earlier microprocessors were retained throughout microprocessor upgrades. This enabled consumers to purchase new computers without having to purchase new application software. The major categories of instructions are:
VLSI circuits enabled the programming environment to advance from a computer terminal (until the 1990s) to a graphical user interface (GUI) computer. Computer terminals limited programmers to a single shell running in a command-line environment. During the 1970s, full-screen source code editing became possible through a text-based user interface. Regardless of the technology available, the goal is to program in a programming language.
Programming language features exist to provide building blocks to be combined to express programming ideals. Ideally, a programming language should:
The programming style of a programming language to provide these building blocks may be categorized into programming paradigms. For example, different paradigms may differentiate:
Each of these programming styles has contributed to the synthesis of different programming languages.
A programming language is a set of keywords, symbols, identifiers, and rules by which programmers can communicate instructions to the computer. Programs written in the language must follow a set of rules called its syntax.
Programming languages get their basis from formal languages. The purpose of defining a solution in terms of its formal language is to generate an algorithm to solve the underlying problem. An algorithm is a sequence of simple instructions that solve a problem.
The evolution of programming language began when the EDSAC (1949) used the first stored computer program in its von Neumann architecture. Programming the EDSAC was in the first generation of programming language.
Imperative languages specify a sequential algorithm using declarations, expressions, and statements:
FORTRAN (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system." It was designed for scientific calculations, without string handling facilities. Along with declarations, expressions, and statements, it supported:
It succeeded because:
However, non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler. The American National Standards Institute (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard until 1991. Fortran 90 supports:
COBOL (1959) stands for "COmmon Business Oriented Language." Fortran manipulated symbols. It was soon realized that symbols did not need to be numbers, so strings were introduced. The US Department of Defense influenced COBOL's development, with Grace Hopper being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal.
COBOL's development was tightly controlled, so dialects requiring new ANSI standards did not emerge. As a consequence, the language was not changed for 15 years, until 1974. The 1990s version did make consequential changes, like adding object-oriented programming.
ALGOL (1960) stands for "ALGOrithmic Language." It had a profound influence on programming language design. Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable, structured design. Algol was first to define its syntax using the Backus–Naur form. This led to syntax-directed compilers. It added features like:
Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On another branch the descendants include C, C++ and Java.
BASIC (1964) stands for "Beginner's All-Purpose Symbolic Instruction Code." It was developed at Dartmouth College for all of their students to learn. If a student did not go on to a more powerful language, the student would still remember Basic. A Basic interpreter was installed in the microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the language.
Basic pioneered the interactive session. It offered operating system commands within its environment:
However, the Basic syntax was too simple for large programs. Recent dialects added structure and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a graphical user interface.
C programming language (1973) got its name because the language BCPL was replaced with B, and AT&T Bell Labs called the next version "C." Its purpose was to write the UNIX operating system. C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s. Its growth also was because it has the facilities of assembly language, but uses a high-level syntax. It added advanced features like:
C allows the programmer to control which region of memory data is to be stored. Global variables and static variables require the fewest clock cycles to store. The stack is automatically used for the standard variable declarations. Heap memory is returned to a pointer variable from the malloc() function.
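A small C sketch (mine, not from the original article) that touches each storage region named above:

#include <stdio.h>
#include <stdlib.h>

int global_total = 0; /* global variable: costs the fewest cycles to store */

void tally(int amount) {
    static int calls = 0; /* static variable: also in static storage */
    int doubled = amount * 2; /* standard declaration: stored on the stack */
    calls = calls + 1;
    global_total = global_total + doubled;
    printf("call %d, total %d\n", calls, global_total);
}

int main(void) {
    int *heap_value = malloc(sizeof *heap_value); /* heap memory via malloc() */
    if (heap_value == NULL) return 1;
    *heap_value = 21;
    tally(*heap_value);
    free(heap_value);
    return 0;
}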
In the 1970s, software engineers needed language support to break large projects down into modules. One obvious feature was to decompose large projects physically into separate files. A less obvious feature was to decompose large projects logically into abstract data types. At the time, languages supported concrete (scalar) datatypes like integer numbers, floating-point numbers, and strings of characters. Abstract datatypes are structures of concrete datatypes, with a new name assigned. For example, a list of integers could be called integer_list.
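As a hypothetical illustration (not from the original), a structure of concrete datatypes given a new name:

#include <stddef.h>

/* An abstract datatype: concrete datatypes (an int array plus a count)
   assembled into a structure named integer_list. */
struct integer_list {
    int items[100];
    size_t count;
};

int main(void) {
    struct integer_list primes = { {2, 3, 5}, 3 };
    return (int)primes.count - 3; /* 0: the list holds three items */
}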
In object-oriented jargon, abstract datatypes are called classes. However, a class is only a definition; no memory is allocated. When memory is allocated to a class and bound to an identifier, it is called an object.
Object-oriented imperative languages developed by combining the need for classes and the need for safe functional programming. A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a method, member function, or operation. Object-oriented programming is executing operations on objects.
Object-oriented languages support a syntax to model subset/superset relationships. In set theory, an element of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other people do not have. Object-oriented languages model subset/superset relationships using inheritance. Object-oriented programming became the dominant language paradigm by the late 1990s.
C++ (1985) was originally called "C with Classes." It was designed to expand C's capabilities by adding the object-oriented facilities of the language Simula.
An object-oriented module is composed of two files. The definitions file is called the header file. A C++ header file for the GRADE class in a simple school application might look like this minimal sketch (member names are illustrative):
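// grade.h (hypothetical reconstruction)
#ifndef GRADE_H
#define GRADE_H

class GRADE {
public:
    GRADE(char letter);   // constructor: same name as the class
    int grade_point();    // numeric value of the letter grade
private:
    char letter;
};

#endif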
A constructor operation is a function with the same name as the class name. It is executed when the calling operation executes the new statement.
A module's other file is the source file. A matching C++ source file for the GRADE class might be:
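// grade.cpp (hypothetical reconstruction)
#include "grade.h"

GRADE::GRADE(char letter) {
    this->letter = letter;
}

int GRADE::grade_point() {
    switch (letter) {
        case 'A': return 4;
        case 'B': return 3;
        case 'C': return 2;
        case 'D': return 1;
        default:  return 0;
    }
}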
A C++ header file for the PERSON class might be sketched as:
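// person.h (hypothetical reconstruction)
#ifndef PERSON_H
#define PERSON_H
#include <string>

class PERSON {
public:
    PERSON(std::string name);
    std::string name;
};

#endif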
A matching C++ source file for the PERSON class might be:
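// person.cpp (hypothetical reconstruction)
#include "person.h"

PERSON::PERSON(std::string name) {
    this->name = name;
}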
A C++ header file for the STUDENT class, which inherits from PERSON, might be sketched as:
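// student.h (hypothetical reconstruction)
#ifndef STUDENT_H
#define STUDENT_H
#include <string>
#include <vector>
#include "person.h"
#include "grade.h"

class STUDENT : public PERSON {   // inheritance: a STUDENT is a PERSON
public:
    STUDENT(std::string name);
    float grade_point_average();
    std::vector<GRADE> grades;
};

#endif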
A matching C++ source file for the STUDENT class might be:
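// student.cpp (hypothetical reconstruction)
#include "student.h"

STUDENT::STUDENT(std::string name) : PERSON(name) {}

float STUDENT::grade_point_average() {
    float total = 0;
    for (GRADE grade : grades)
        total += grade.grade_point();
    return grades.empty() ? 0 : total / grades.size();
}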
A driver program for demonstration might be:
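// driver.cpp (hypothetical demonstration)
#include <iostream>
#include "student.h"

int main() {
    STUDENT student("Jane");
    student.grades.push_back(GRADE('A'));
    student.grades.push_back(GRADE('B'));
    std::cout << student.name << "'s grade point average: "
              << student.grade_point_average() << std::endl;
    return 0;
}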
A makefile to compile everything might be:
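# makefile (hypothetical; the command line must begin with a tab character)
driver: driver.cpp grade.cpp person.cpp student.cpp grade.h person.h student.h
	g++ -o driver driver.cpp grade.cpp person.cpp student.cpp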
Imperative languages have one major criticism: assigning an expression to a non-local variable may produce an unintended side effect. Declarative languages generally omit the assignment statement and the control flow. They describe what computation should be performed and not how to compute it. Two broad categories of declarative languages are functional languages and logical languages.
The principle behind a functional language is to use lambda calculus as a guide for a well-defined semantic. In mathematics, a function is a rule that maps elements from a domain to a range of values. Consider the function:
times_10(x) = 10 * x
The expression 10 * x is mapped by the function times_10() to a range of values. One value happens to be 20. This occurs when x is 2. So, the application of the function is mathematically written as:
times_10(2) = 20
A functional language compiler will not store this value in a variable. Instead, it will push the value onto the computer's stack before setting the program counter back to the calling function. The calling function will then pop the value from the stack.
Imperative languages do support functions. Therefore, functional programming can be achieved in an imperative language, if the programmer uses discipline. However, a functional language will force this discipline onto the programmer through its syntax. Functional languages have a syntax tailored to emphasize the what.
A functional program is developed with a set of primitive functions followed by a single driver function. Consider the snippet:
function max(a,b){/* code omitted */}
function min(a,b){/* code omitted */}
function difference_between_largest_and_smallest(a,b,c) {
    /* one possible body: the largest of the three minus the smallest */
    return max(a, max(b, c)) - min(a, min(b, c));
}
The primitives are max() and min(). The driver function is difference_between_largest_and_smallest(). Executing:
put(difference_between_largest_and_smallest(10,4,7)); will output 6, because the largest of the three arguments is 10 and the smallest is 4.
Functional languages are used in computer science research to explore new language features. Moreover, their lack of side effects has made them popular in parallel programming and concurrent programming. However, application developers prefer the object-oriented features of imperative languages.
Lisp (1958) stands for "LISt Processor." It is tailored to process lists. A full structure of the data is formed by building lists of lists. In memory, a tree data structure is built. Internally, the tree structure lends itself nicely to recursive functions. The syntax to build a tree is to enclose the space-separated elements within parentheses. The following is a list of three elements. The first two elements are themselves lists of two elements:
((A B) (HELLO WORLD) 94)
Lisp has functions to extract and reconstruct elements. The function head() returns a list containing the first element in the list. The function tail() returns a list containing everything but the first element. The function cons() returns a list that is the concatenation of other lists. Therefore, the following expression will return the list x:
cons(head(x), tail(x))
One drawback of Lisp is that when many functions are nested, the parentheses may look confusing. Modern Lisp environments help ensure parentheses match. As an aside, Lisp does support the imperative language operations of the assignment statement and goto loops. Also, Lisp is not concerned with the datatype of the elements at compile time. Instead, it assigns (and may reassign) the datatypes at runtime. Assigning the datatype at runtime is called dynamic binding. Whereas dynamic binding increases the language's flexibility, programming errors may linger until late in the software development process.
Writing large, reliable, and readable Lisp programs requires forethought. If properly planned, the program may be much shorter than an equivalent imperative language program. Lisp is widely used in artificial intelligence. However, its usage has been accepted only because it has imperative language operations, making unintended side-effects possible.
ML (1973) stands for "Meta Language." ML checks to make sure only data of the same type are compared with one another. For example, a times_10() function with one input parameter (an integer) and an integer result might be written in Standard ML as:
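fun times_10(n : int) : int = 10 * n;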
ML is not parenthesis-eccentric like Lisp. The following is an application of times_10():
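times_10 2;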
It returns "20 : int". (Both the results and the datatype are returned.)
Like Lisp, ML is tailored to process lists. Unlike Lisp, each element is the same datatype. Moreover, ML assigns the datatype of an element at compile-time. Assigning the datatype at compile-time is called static binding. Static binding increases reliability because the compiler checks the context of variables before they are used.
Prolog (1972) stands for "PROgramming in LOGic." It is a logic programming language, based on formal logic. The language was developed by Alain Colmerauer and Philippe Roussel in Marseille, France. It is an implementation of Selective Linear Definite clause resolution, pioneered by Robert Kowalski and others at the University of Edinburgh.
The building blocks of a Prolog program are facts and rules. A simple example, using illustrative names, might be:
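cat(tom).             % fact: Tom is a cat
animal(X) :- cat(X).  % rule: X is an animal if X is a cat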
After all the facts and rules are entered, a question can be asked:
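?- animal(X).
X = tom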
The following sketch shows how Prolog might convert a letter grade to its numeric value (the predicate name is illustrative):
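numeric_value(a, 4).
numeric_value(b, 3).
numeric_value(c, 2).
numeric_value(d, 1).
numeric_value(f, 0).

?- numeric_value(b, X).
X = 3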
Here is a comprehensive example; one possible Prolog encoding follows the five statements below:
1) All dragons billow fire, or equivalently, a thing billows fire if the thing is a dragon:
2) A creature billows fire if one of its parents billows fire:
3) A thing X is a parent of a thing Y if X is the mother of Y or X is the father of Y:
4) A thing is a creature if the thing is a dragon:
5) Norberta is a dragon, and Puff is a creature. Norberta is the mother of Puff.
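Using illustrative predicate names, the five statements might be encoded as:

billows_fire(X) :- is_a_dragon(X).                                          % 1
billows_fire(X) :- is_a_creature(X), is_a_parent_of(Y, X), billows_fire(Y). % 2
is_a_parent_of(X, Y) :- is_the_mother_of(X, Y).                             % 3
is_a_parent_of(X, Y) :- is_the_father_of(X, Y).                             % 3
is_a_creature(X) :- is_a_dragon(X).                                         % 4
is_a_dragon(norberta).                                                      % 5
is_a_creature(puff).                                                        % 5
is_the_mother_of(norberta, puff).                                           % 5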
Rule (2) is a recursive (inductive) definition. It can be understood declaratively, without the need to understand how it is executed.
Rule (3) shows how functions are represented by using relations. Here, the mother and father functions ensure that every individual has only one mother and only one father.
Prolog is an untyped language. Nonetheless, inheritance can be represented by using predicates. Rule (4) asserts that a creature is a superclass of a dragon.
Questions are answered using backward reasoning. Given the question:
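?- billows_fire(X).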
Prolog generates two answers:
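X = norberta
X = puff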
Practical applications for Prolog are knowledge representation and problem solving in artificial intelligence.
Object-oriented programming is a programming method to execute operations (functions) on objects. The basic idea is to group the characteristics of a phenomenon into an object container and give the container a name. The operations on the phenomenon are also grouped into the container. Object-oriented programming developed by combining the need for containers and the need for safe functional programming. This programming method need not be confined to an object-oriented language. In an object-oriented language, an object container is called a class. In a non-object-oriented language, a data structure (which is also known as a record) may become an object container. To turn a data structure into an object container, operations need to be written specifically for the structure. The resulting structure is called an abstract datatype. However, inheritance will be missing. Nonetheless, this shortcoming can be overcome.
A C header file for the GRADE abstract datatype in a simple school application might be sketched as (names are illustrative):
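/* grade.h (hypothetical reconstruction) */
#ifndef GRADE_H
#define GRADE_H

typedef struct grade {
    char letter;
} GRADE;

GRADE *grade_new(char letter);       /* plays the role of the C++ constructor */
int grade_grade_point(GRADE *self);

#endif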
The grade_new() function performs the same algorithm as the C++ constructor operation.
A matching C source file for the GRADE abstract datatype might be:
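/* grade.c (hypothetical reconstruction) */
#include <stdlib.h>
#include "grade.h"

GRADE *grade_new(char letter) {
    GRADE *self = calloc(1, sizeof(GRADE));  /* calloc() zeroes each memory cell */
    if (self != NULL)
        self->letter = letter;
    return self;
}

int grade_grade_point(GRADE *self) {
    switch (self->letter) {
        case 'A': return 4;
        case 'B': return 3;
        case 'C': return 2;
        case 'D': return 1;
        default:  return 0;
    }
}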
In the constructor, the function calloc() is used instead of malloc() because each memory cell will be set to zero.
Here is a C programming language header file for the PERSON abstract datatype in a simple school application:
Here is a C programming language source file for the PERSON abstract datatype in a simple school application:
Here is a C programming language header file for the STUDENT abstract datatype in a simple school application:
Here is a C programming language source file for the STUDENT abstract datatype in a simple school application:
Here is a driver program for demonstration:
Here is a makefile to compile everything:
The formal strategy to build object-oriented objects is to:
For example:
The syntax of a programming language is a list of production rules which govern its form. A programming language's form is the correct placement of its declarations, expressions, and statements. Complementing the syntax of a language are its semantics. The semantics describe the meanings attached to various syntactic constructs. A syntactic construct may need a semantic description because a form may have an invalid interpretation. Also, different languages might have the same syntax; however, their behaviors may be different.
The syntax of a language is formally described by listing the production rules. Whereas the syntax of a natural language is extremely complicated, a subset of the English language can have a production rule listing like this small illustrative grammar:
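sentence = noun 'eats' noun
noun = 'dog'
noun = 'bone'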
The unquoted words are known as "non-terminals". The words in 'single quotes' are known as "terminals".
From this production rule listing, complete sentences may be formed using a series of replacements. The process is to replace non-terminals with either a valid non-terminal or a valid terminal. The replacement process repeats until only terminals remain. One valid sentence is:
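dog eats bone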
However, another combination results in an invalid sentence:
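bone eats dog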
Therefore, a semantic is necessary to correctly describe the meaning of an eat activity.
One production rule listing method is called the Backus–Naur form (BNF). BNF describes the syntax of a language and itself has a syntax. This recursive definition is an example of a meta-language. The syntax of BNF includes:
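The symbol ::= means "is defined as". The symbol | means "or". Angle brackets < > surround a non-terminal's name, and terminals are written literally (shown here in 'single quotes').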
Using BNF, the same subset of the English language can have this production rule listing:
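<sentence> ::= <noun> 'eats' <noun>
<noun> ::= 'dog' | 'bone'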
Using BNF, a signed-integer can have a production rule listing such as:
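<signed-integer> ::= <sign> <integer>
<sign> ::= '+' | '-'
<integer> ::= <digit> | <digit> <integer>
<digit> ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'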
Notice the recursive production rule:
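<integer> ::= <digit> <integer>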
This allows for an infinite number of possibilities. Therefore, a semantic is necessary to describe a limit on the number of digits.
Notice the leading zero possibility in the production rules:
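<integer> ::= <digit> | <digit> <integer>
<digit> ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'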
Therefore, a semantic is necessary to describe that leading zeros need to be ignored.
Two formal methods are available to describe semantics. They are denotational semantics and axiomatic semantics.
Software engineering encompasses a variety of techniques to produce quality software. Computer programming is the process of writing or editing source code. In a formal environment, a systems analyst will gather information from managers about all the organization's processes to automate. This professional then prepares a detailed plan for the new or modified system. The plan is analogous to an architect's blueprint.
The systems analyst has the objective to deliver the right information to the right person at the right time. The critical factors to achieve this objective are:
Achieving performance objectives should be balanced with all of the costs, including:
Applying a systems development process mitigates the risk expressed by the axiom: the later in the process an error is detected, the more expensive it is to correct.
The waterfall model is an implementation of a systems development process. As the waterfall label implies, the basic phases overlap each other:
A computer programmer is a specialist responsible for writing or modifying the source code to implement the detailed plan. A programming team is likely to be needed because most systems are too large to be completed by a single programmer. However, adding programmers to a project may not shorten the completion time. Instead, it may lower the quality of the system. To be effective, program modules need to be defined and distributed to team members. Also, team members must interact with one another in a meaningful and effective way.
Computer programmers may be programming in the small: programming within a single module. Chances are a module will execute modules located in other source code files. Therefore, computer programmers may be programming in the large: programming modules so they will effectively couple with each other. Programming-in-the-large includes contributing to the application programming interface (API).
Modular programming is a technique to refine imperative language programs. Refined programs may reduce the software size, separate responsibilities, and thereby mitigate software aging. A program module is a sequence of statements that are bounded within a block and together identified by a name. Modules have a function, context, and logic:
The module's name should be derived first by its function, then by its context. Its logic should not be part of the name. For example, function compute_square_root( x ) or function compute_square_root_integer( i : integer ) are appropriate module names. However, function compute_square_root_by_division( x ) is not.
The degree of interaction within a module is its level of cohesion. Cohesion is a judgment of the relationship between a module's name and its function. The degree of interaction between modules is the level of coupling. Coupling is a judgment of the relationship between a module's context and the elements being performed upon.
The levels of cohesion from worst to best are:
The levels of coupling from worst to best are:
Data flow analysis is a design method used to achieve modules of functional cohesion and data coupling. The input to the method is a data-flow diagram. A data-flow diagram is a set of ovals representing modules. Each module's name is displayed inside its oval. Modules may be at the executable level or the function level.
The diagram also has arrows connecting modules to each other. Arrows pointing into modules represent a set of inputs. Each module should have only one arrow pointing out from it to represent its single output object. (Optionally, an additional exception arrow points out.) A daisy chain of ovals will convey an entire algorithm. The input modules should start the diagram. The input modules should connect to the transform modules. The transform modules should connect to the output modules.
Computer programs may be categorized along functional lines. The main functional categories are application software and system software. System software includes the operating system, which couples computer hardware with application software. The purpose of the operating system is to provide an environment where application software executes in a convenient and efficient manner. Both application software and system software execute utility programs. At the hardware level, a microcode program controls the circuits throughout the central processing unit.
Application software is the key to unlocking the potential of the computer system. Enterprise application software bundles accounting, personnel, customer, and vendor applications. Examples include enterprise resource planning, customer relationship management, and supply chain management software.
Enterprise applications may be developed in-house as a one-of-a-kind proprietary software. Alternatively, they may be purchased as off-the-shelf software. Purchased software may be modified to provide custom software. If the application is customized, then either the company's resources are used or the resources are outsourced. Outsourced software development may be from the original software vendor or a third-party developer.
The potential advantages of in-house software are that features and reports may be developed exactly to specification. Management may also be involved in the development process and retain a level of control. Management may decide to counteract a competitor's new initiative or implement a customer or vendor requirement. A merger or acquisition may necessitate enterprise software changes. The potential disadvantages of in-house software are that time and resource costs may be extensive. Furthermore, risks concerning features and performance may loom.
The potential advantages of off-the-shelf software are that upfront costs are identifiable, the basic needs should be fulfilled, and its performance and reliability have a track record. The potential disadvantages of off-the-shelf software are that it may have unnecessary features that confuse end users, it may lack features the enterprise needs, and the data flow may not match the enterprise's work processes.
One approach to economically obtaining a customized enterprise application is through an application service provider. Specialty companies provide hardware, custom software, and end-user support. They may speed the development of new applications because they possess skilled information system staff. The biggest advantage is that it frees in-house resources from staffing and managing complex computer projects. Many application service providers target small, fast-growing companies with limited information system resources. On the other hand, larger companies with major systems will likely have their technical infrastructure in place. One risk is having to trust an external organization with sensitive information. Another risk is having to trust the provider's infrastructure reliability.
An operating system is the low-level software that supports a computer's basic functions, such as scheduling processes and controlling peripherals.
In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing. More often than not the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If less fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an operating system was kept in the computer at all times.
The term operating system may refer to two levels of software. The operating system may refer to the kernel program that manages the processes, memory, and devices. More broadly, the operating system may refer to the entire package of the central software. The package includes a kernel program, command-line interpreter, graphical user interface, utility programs, and editor.
The kernel's main purpose is to manage the limited resources of a computer:
Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher-level languages like C, Objective-C, and Swift.
A utility program is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers. A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, a trigger alert is generated.
Utility programs include compression programs so data files are stored on less disk space. Compressed files also save time when they are transmitted over the network. Utility programs can sort and merge data sets. Utility programs detect computer viruses.
A microcode program is the bottom-level interpreter that controls the data path of software-driven computers. (Advances in hardware have migrated these operations to hardware execution circuits.) Microcode instructions allow the programmer to more easily implement the digital logic level—the computer's real hardware. The digital logic level is the boundary between computer science and computer engineering.
A logic gate is a tiny device, built from transistors, that can return one of two signals: on or off. The five basic gates (NOT, AND, OR, NAND, and NOR) form the building blocks of binary algebra—the digital logic functions of the computer.
Microcode instructions are mnemonics programmers may use to execute digital logic functions instead of forming them in binary algebra. They are stored in a central processing unit's (CPU) control store. These hardware-level instructions move data throughout the data path.
The micro-instruction cycle begins when the microsequencer uses its microprogram counter to fetch the next machine instruction from random-access memory. The next step is to decode the machine instruction by selecting the proper output line to the hardware module. The final step is to execute the instruction using the hardware module's set of gates.
Instructions to perform arithmetic are passed through an arithmetic logic unit (ALU). The ALU has circuits to perform elementary operations to add, shift, and compare integers. By combining and looping the elementary operations through the ALU, the CPU performs its complex arithmetic.
Microcode instructions move data between the CPU and the memory controller. Memory controller microcode instructions manipulate two registers. The memory address register is used to access each memory cell's address. The memory data register is used to set and read each cell's contents.
Microcode instructions move data between the CPU and the many computer buses. The disk controller bus writes to and reads from hard disk drives. Data is also moved between the CPU and other functional units via the peripheral component interconnect express bus.
{
"paragraph_id": 0,
"text": "A computer program is a sequence or set of instructions in a programming language for a computer to execute. It is one component of software, which also includes documentation and other intangible components.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using the language's compiler. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within the language's interpreter.",
"title": ""
},
{
"paragraph_id": 2,
"text": "If the executable is requested for execution, then the operating system loads it into memory and starts a process. The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction.",
"title": ""
},
{
"paragraph_id": 3,
"text": "If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement. Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The \"Hello, World!\" program is used to illustrate a language's basic syntax. The syntax of the language BASIC (1964) was intentionally limited to make the language easy to learn. For example, variables are not declared before being used. Also, variables are automatically initialized to zero. Here is an example computer program, in Basic, to average a list of numbers:",
"title": "Example computer program"
},
{
"paragraph_id": 5,
"text": "Once the mechanics of basic computer programming are learned, more sophisticated and powerful languages are available to build large computer systems.",
"title": "Example computer program"
},
{
"paragraph_id": 6,
"text": "Improvements in software development are the result of improvements in computer hardware. At each stage in hardware's history, the task of computer programming changed dramatically.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In 1837, Jacquard's loom inspired Charles Babbage to attempt to build the Analytical Engine. The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled. The device had a \"store\" which consisted of memory to hold 1,000 numbers of 50 decimal digits each. Numbers from the \"store\" were transferred to the \"mill\" for processing. It was programmed using two sets of perforated cards. One set directed the operation and the other set inputted the variables. However, the thousands of cogged wheels and gears never fully worked together, even after Babbage spent more than £17,000 of government money.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Ada Lovelace worked for Charles Babbage to create a description of the Analytical Engine (1843). The description contained Note G which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first computer program.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation. It is a finite-state machine that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. All present-day computers are Turing complete.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The Electronic Numerical Integrator And Computer (ENIAC) was built between July 1943 and Fall 1945. It was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together. Its 40 units weighed 30 tons, occupied 1,800 square feet (167 m), and consumed $650 per hour (in 1940s currency) in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables were on wheels and needed to be rolled to fixed function panels. Function tables were connected to function panels by plugging heavy black cables into plugboards. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week. It ran from 1947 until 1955 at Aberdeen Proving Ground, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Instead of plugging in cords and turning switches, a stored-program computer loads its instructions into memory just like it loads its data into memory. As a result, the computer could be programmed quickly and perform calculations at very fast speeds. Presper Eckert and John Mauchly built the ENIAC. The two engineers introduced the stored-program concept in a three-page memo dated February 1944. Later, in September 1944, Dr. John von Neumann began working on the ENIAC project. On June 30, 1945, von Neumann published the First Draft of a Report on the EDVAC, which equated the structures of the computer with the structures of the human brain. The design became known as the von Neumann architecture. The architecture was simultaneously deployed in the constructions of the EDVAC and EDSAC computers in 1949.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The IBM System/360 (1964) was a family of computers, each having the same instruction set architecture. The Model 20 was the smallest and least expensive. Customers could upgrade and retain the same application software. The Model 195 was the most premium. Each System/360 model featured multiprogramming—having multiple processes in memory at once. When one process was waiting for input/output, another could compute.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "IBM planned for each model to be programmed using PL/1. A committee was formed that included COBOL, Fortran and ALGOL programmers. The purpose was to develop a language that was comprehensive, easy to use, extendible, and would replace Cobol and Fortran. The result was a large and complex language that took a long time to compile.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Computers manufactured until the 1970s had front-panel switches for manual programming. The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs also were automatically inputted via paper tape, punched cards or magnetic-tape. After the medium was loaded, the starting address was set via switches, and the execute button was pressed.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "A major milestone in software development was the invention of the Very Large Scale Integration (VLSI) circuit (1964). Following World War II, tube-based technology was replaced with point-contact transistors (1947) and bipolar junction transistors (late 1950s) mounted on a circuit board. During the 1960s, the aerospace industry replaced the circuit board with an integrated circuit chip.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Robert Noyce, co-founder of Fairchild Semiconductor (1957) and Intel (1968), achieved a technological improvement to refine the production of field-effect transistors (1963). The goal is to alter the electrical resistivity and conductivity of a semiconductor junction. First, naturally occurring silicate minerals are converted into polysilicon rods using the Siemens process. The Czochralski process then converts the rods into a monocrystalline silicon, boule crystal. The crystal is then thinly sliced to form a wafer substrate. The planar process of photolithography then integrates unipolar transistors, capacitors, diodes, and resistors onto the wafer to build a matrix of metal–oxide–semiconductor (MOS) transistors. The MOS transistor is the primary component in integrated circuit chips.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Originally, integrated circuit chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a matrix of read-only memory (ROM). The matrix resembled a two-dimensional array of fuses. The process to embed instructions onto the matrix was to burn out the unneeded connections. There were so many connections, firmware programmers wrote a computer program on another chip to oversee the burning. The technology became known as Programmable ROM. In 1971, Intel installed the computer program onto the chip and named it the Intel 4004 microprocessor.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The terms microprocessor and central processing unit (CPU) are now used interchangeably. However, CPUs predate microprocessors. For example, the IBM System/360 (1964) had a CPU made from circuit boards containing discrete components on ceramic substrates.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The Intel 4004 (1971) was a 4-bit microprocessor designed to run the Busicom calculator. Five months after its release, Intel released the Intel 8008, an 8-bit microprocessor. Bill Pentz led a team at Sacramento State to build the first microcomputer using the Intel 8008: the Sac State 8008 (1972). Its purpose was to store patient medical records. The computer supported a disk operating system to run a Memorex, 3-megabyte, hard disk drive. It had a color display and keyboard that was packaged in a single console. The disk operating system was programmed using IBM's Basic Assembly Language (BAL). The medical records application was programmed using a BASIC interpreter. However, the computer was an evolutionary dead-end because it was extremely expensive. Also, it was built at a public university lab for a specific purpose. Nonetheless, the project contributed to the development of the Intel 8080 (1974) instruction set.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In 1978, the modern software development environment began when Intel upgraded the Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the cheaper Intel 8088. IBM embraced the Intel 8088 when they entered the personal computer market (1981). As consumer demand for personal computers increased, so did Intel's microprocessor development. The succession of development is known as the x86 series. The x86 assembly language is a family of backward-compatible machine instructions. Machine instructions created in earlier microprocessors were retained throughout microprocessor upgrades. This enabled consumers to purchase new computers without having to purchase new application software. The major categories of instructions are:",
"title": "History"
},
{
"paragraph_id": 21,
"text": "VLSI circuits enabled the programming environment to advance from a computer terminal (until the 1990s) to a graphical user interface (GUI) computer. Computer terminals limited programmers to a single shell running in a command-line environment. During the 1970s, full-screen source code editing became possible through a text-based user interface. Regardless of the technology available, the goal is to program in a programming language.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Programming language features exist to provide building blocks to be combined to express programming ideals. Ideally, a programming language should:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 23,
"text": "The programming style of a programming language to provide these building blocks may be categorized into programming paradigms. For example, different paradigms may differentiate:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 24,
"text": "Each of these programming styles has contributed to the synthesis of different programming languages.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 25,
"text": "A programming language is a set of keywords, symbols, identifiers, and rules by which programmers can communicate instructions to the computer. They follow a set of rules called a syntax.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 26,
"text": "Programming languages get their basis from formal languages. The purpose of defining a solution in terms of its formal language is to generate an algorithm to solve the underlining problem. An algorithm is a sequence of simple instructions that solve a problem.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 27,
"text": "The evolution of programming language began when the EDSAC (1949) used the first stored computer program in its von Neumann architecture. Programming the EDSAC was in the first generation of programming language.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 28,
"text": "Imperative languages specify a sequential algorithm using declarations, expressions, and statements:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 29,
"text": "FORTRAN (1958) was unveiled as \"The IBM Mathematical FORmula TRANslating system.\" It was designed for scientific calculations, without string handling facilities. Along with declarations, expressions, and statements, it supported:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 30,
"text": "It succeeded because:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 31,
"text": "However, non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler. The American National Standards Institute (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard until 1991. Fortran 90 supports:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 32,
"text": "COBOL (1959) stands for \"COmmon Business Oriented Language.\" Fortran manipulated symbols. It was soon realized that symbols did not need to be numbers, so strings were introduced. The US Department of Defense influenced COBOL's development, with Grace Hopper being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 33,
"text": "COBOL's development was tightly controlled, so dialects did not emerge to require ANSI standards. As a consequence, it was not changed for 15 years until 1974. The 1990s version did make consequential changes, like object-oriented programming.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 34,
"text": "ALGOL (1960) stands for \"ALGOrithmic Language.\" It had a profound influence on programming language design. Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable, structured design. Algol was first to define its syntax using the Backus–Naur form. This led to syntax-directed compilers. It added features like:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 35,
"text": "Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On another branch the descendants include C, C++ and Java.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 36,
"text": "BASIC (1964) stands for \"Beginner's All-Purpose Symbolic Instruction Code.\" It was developed at Dartmouth College for all of their students to learn. If a student did not go on to a more powerful language, the student would still remember Basic. A Basic interpreter was installed in the microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the language.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 37,
"text": "Basic pioneered the interactive session. It offered operating system commands within its environment:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 38,
"text": "However, the Basic syntax was too simple for large programs. Recent dialects added structure and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a graphical user interface.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 39,
"text": "C programming language (1973) got its name because the language BCPL was replaced with B, and AT&T Bell Labs called the next version \"C.\" Its purpose was to write the UNIX operating system. C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s. Its growth also was because it has the facilities of assembly language, but uses a high-level syntax. It added advanced features like:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 40,
"text": "C allows the programmer to control which region of memory data is to be stored. Global variables and static variables require the fewest clock cycles to store. The stack is automatically used for the standard variable declarations. Heap memory is returned to a pointer variable from the malloc() function.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 41,
"text": "In the 1970s, software engineers needed language support to break large projects down into modules. One obvious feature was to decompose large projects physically into separate files. A less obvious feature was to decompose large projects logically into abstract data types. At the time, languages supported concrete (scalar) datatypes like integer numbers, floating-point numbers, and strings of characters. Abstract datatypes are structures of concrete datatypes, with a new name assigned. For example, a list of integers could be called integer_list.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 42,
"text": "In object-oriented jargon, abstract datatypes are called classes. However, a class is only a definition; no memory is allocated. When memory is allocated to a class and bound to an identifier, it's called an object.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 43,
"text": "Object-oriented imperative languages developed by combining the need for classes and the need for safe functional programming. A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a method, member function, or operation. Object-oriented programming is executing operations on objects.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 44,
"text": "Object-oriented languages support a syntax to model subset/superset relationships. In set theory, an element of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other people do not have. Object-oriented languages model subset/superset relationships using inheritance. Object-oriented programming became the dominant language paradigm by the late 1990s.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 45,
"text": "C++ (1985) was originally called \"C with Classes.\" It was designed to expand C's capabilities by adding the object-oriented facilities of the language Simula.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 46,
"text": "An object-oriented module is composed of two files. The definitions file is called the header file. Here is a C++ header file for the GRADE class in a simple school application:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 47,
"text": "A constructor operation is a function with the same name as the class name. It is executed when the calling operation executes the new statement.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 48,
"text": "A module's other file is the source file. Here is a C++ source file for the GRADE class in a simple school application:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 49,
"text": "Here is a C++ header file for the PERSON class in a simple school application:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 50,
"text": "Here is a C++ source file for the PERSON class in a simple school application:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 51,
"text": "Here is a C++ header file for the STUDENT class in a simple school application:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 52,
"text": "Here is a C++ source file for the STUDENT class in a simple school application:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 53,
"text": "Here is a driver program for demonstration:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 54,
"text": "Here is a makefile to compile everything:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 55,
"text": "Imperative languages have one major criticism: assigning an expression to a non-local variable may produce an unintended side effect. Declarative languages generally omit the assignment statement and the control flow. They describe what computation should be performed and not how to compute it. Two broad categories of declarative languages are functional languages and logical languages.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 56,
"text": "The principle behind a functional language is to use lambda calculus as a guide for a well defined semantic. In mathematics, a function is a rule that maps elements from an expression to a range of values. Consider the function:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 57,
"text": "times_10(x) = 10 * x",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 58,
"text": "The expression 10 * x is mapped by the function times_10() to a range of values. One value happens to be 20. This occurs when x is 2. So, the application of the function is mathematically written as:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 59,
"text": "times_10(2) = 20",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 60,
"text": "A functional language compiler will not store this value in a variable. Instead, it will push the value onto the computer's stack before setting the program counter back to the calling function. The calling function will then pop the value from the stack.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 61,
"text": "Imperative languages do support functions. Therefore, functional programming can be achieved in an imperative language, if the programmer uses discipline. However, a functional language will force this discipline onto the programmer through its syntax. Functional languages have a syntax tailored to emphasize the what.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 62,
"text": "A functional program is developed with a set of primitive functions followed by a single driver function. Consider the snippet:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 63,
"text": "function max(a,b){/* code omitted */}",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 64,
"text": "function min(a,b){/* code omitted */}",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 65,
"text": "function difference_between_largest_and_smallest(a,b,c) {",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 66,
"text": "}",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 67,
"text": "The primitives are max() and min(). The driver function is difference_between_largest_and_smallest(). Executing:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 68,
"text": "put(difference_between_largest_and_smallest(10,4,7)); will output 6.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 69,
"text": "Functional languages are used in computer science research to explore new language features. Moreover, their lack of side-effects have made them popular in parallel programming and concurrent programming. However, application developers prefer the object-oriented features of imperative languages.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 70,
"text": "Lisp (1958) stands for \"LISt Processor.\" It is tailored to process lists. A full structure of the data is formed by building lists of lists. In memory, a tree data structure is built. Internally, the tree structure lends nicely for recursive functions. The syntax to build a tree is to enclose the space-separated elements within parenthesis. The following is a list of three elements. The first two elements are themselves lists of two elements:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 71,
"text": "((A B) (HELLO WORLD) 94)",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 72,
"text": "Lisp has functions to extract and reconstruct elements. The function head() returns a list containing the first element in the list. The function tail() returns a list containing everything but the first element. The function cons() returns a list that is the concatenation of other lists. Therefore, the following expression will return the list x:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 73,
"text": "cons(head(x), tail(x))",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 74,
"text": "One drawback of Lisp is when many functions are nested, the parentheses may look confusing. Modern Lisp environments help ensure parenthesis match. As an aside, Lisp does support the imperative language operations of the assignment statement and goto loops. Also, Lisp is not concerned with the datatype of the elements at compile time. Instead, it assigns (and may reassign) the datatypes at runtime. Assigning the datatype at runtime is called dynamic binding. Whereas dynamic binding increases the language's flexibility, programming errors may linger until late in the software development process.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 75,
"text": "Writing large, reliable, and readable Lisp programs requires forethought. If properly planned, the program may be much shorter than an equivalent imperative language program. Lisp is widely used in artificial intelligence. However, its usage has been accepted only because it has imperative language operations, making unintended side-effects possible.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 76,
"text": "ML (1973) stands for \"Meta Language.\" ML checks to make sure only data of the same type are compared with one another. For example, this function has one input parameter (an integer) and returns an integer:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 77,
"text": "ML is not parenthesis-eccentric like Lisp. The following is an application of times_10():",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 78,
"text": "It returns \"20 : int\". (Both the results and the datatype are returned.)",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 79,
"text": "Like Lisp, ML is tailored to process lists. Unlike Lisp, each element is the same datatype. Moreover, ML assigns the datatype of an element at compile-time. Assigning the datatype at compile-time is called static binding. Static binding increases reliability because the compiler checks the context of variables before they are used.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 80,
"text": "Prolog (1972) stands for \"PROgramming in LOGic.\" It is a logic programming language, based on formal logic. The language was developed by Alain Colmerauer and Philippe Roussel in Marseille, France. It is an implementation of Selective Linear Definite clause resolution, pioneered by Robert Kowalski and others at the University of Edinburgh.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 81,
"text": "The building blocks of a Prolog program are facts and rules. Here is a simple example:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 82,
"text": "After all the facts and rules are entered, then a question can be asked:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 83,
"text": "The following example shows how Prolog will convert a letter grade to its numeric value:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 84,
"text": "Here is a comprehensive example:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 85,
"text": "1) All dragons billow fire, or equivalently, a thing billows fire if the thing is a dragon:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 86,
"text": "2) A creature billows fire if one of its parents billows fire:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 87,
"text": "3) A thing X is a parent of a thing Y if X is the mother of Y or X is the father of Y:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 88,
"text": "4) A thing is a creature if the thing is a dragon:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 89,
"text": "5) Norberta is a dragon, and Puff is a creature. Norberta is the mother of Puff.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 90,
"text": "Rule (2) is a recursive (inductive) definition. It can be understood declaratively, without the need to understand how it is executed.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 91,
"text": "Rule (3) shows how functions are represented by using relations. Here, the mother and father functions ensure that every individual has only one mother and only one father.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 92,
"text": "Prolog is an untyped language. Nonetheless, inheritance can be represented by using predicates. Rule (4) asserts that a creature is a superclass of a dragon.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 93,
"text": "Questions are answered using backward reasoning. Given the question:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 94,
"text": "Prolog generates two answers :",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 95,
"text": "Practical applications for Prolog are knowledge representation and problem solving in artificial intelligence.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 96,
"text": "Object-oriented programming is a programming method to execute operations (functions) on objects. The basic idea is to group the characteristics of a phenomenon into an object container and give the container a name. The operations on the phenomenon are also grouped into the container. Object-oriented programming developed by combining the need for containers and the need for safe functional programming. This programming method need not be confined to an object-oriented language. In an object-oriented language, an object container is called a class. In a non-object-oriented language, a data structure (which is also known as a record) may become an object container. To turn a data structure into an object container, operations need to be written specifically for the structure. The resulting structure is called an abstract datatype. However, inheritance will be missing. Nonetheless, this shortcoming can be overcome.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 97,
"text": "Here is a C programming language header file for the GRADE abstract datatype in a simple school application:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 98,
"text": "The grade_new() function performs the same algorithm as the C++ constructor operation.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 99,
"text": "Here is a C programming language source file for the GRADE abstract datatype in a simple school application:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 100,
"text": "In the constructor, the function calloc() is used instead of malloc() because each memory cell will be set to zero.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 101,
"text": "Here is a C programming language header file for the PERSON abstract datatype in a simple school application:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 102,
"text": "Here is a C programming language source file for the PERSON abstract datatype in a simple school application:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 103,
"text": "Here is a C programming language header file for the STUDENT abstract datatype in a simple school application:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 104,
"text": "Here is a C programming language source file for the STUDENT abstract datatype in a simple school application:",
"title": "Programming paradigms and languages"
},
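The PERSON and STUDENT listings are also missing from this dump. One common way to supply the inheritance that paragraph 96 says an abstract datatype lacks is composition: embed the "base" type as the first member. Every field name below is an assumption:

```c
/* person.h / student.h: hypothetical sketch of inheritance by composition. */
#include "grade.h"

typedef struct {
    char name[50];     /* assumed field */
} PERSON;

typedef struct {
    PERSON person;     /* "base class" placed first, so a STUDENT*
                          can be safely viewed as a PERSON* */
    GRADE *grade;      /* assumed association with the GRADE type */
} STUDENT;
```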
{
"paragraph_id": 105,
"text": "Here is a driver program for demonstration:",
"title": "Programming paradigms and languages"
},
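The driver listing is missing as well; a minimal stand-in that exercises the hypothetical GRADE type sketched above:

```c
/* driver.c: hypothetical demonstration program. */
#include <stdio.h>
#include <stdlib.h>
#include "grade.h"

int main(void)
{
    GRADE *grade = grade_new('A');
    if (grade == NULL)
        return EXIT_FAILURE;
    printf("grade: %c\n", grade->letter);
    free(grade);       /* release the calloc()'d object */
    return EXIT_SUCCESS;
}
```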
{
"paragraph_id": 106,
"text": "Here is a makefile to compile everything:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 107,
"text": "The formal strategy to build object-oriented objects is to:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 108,
"text": "For example:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 109,
"text": "The syntax of a programming language is a list of production rules which govern its form. A programming language's form is the correct placement of its declarations, expressions, and statements. Complementing the syntax of a language are its semantics. The semantics describe the meanings attached to various syntactic constructs. A syntactic construct may need a semantic description because a form may have an invalid interpretation. Also, different languages might have the same syntax; however, their behaviors may be different.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 110,
"text": "The syntax of a language is formally described by listing the production rules. Whereas the syntax of a natural language is extremely complicated, a subset of the English language can have this production rule listing:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 111,
"text": "The words in bold-face are known as \"non-terminals\". The words in 'single quotes' are known as \"terminals\".",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 112,
"text": "From this production rule listing, complete sentences may be formed using a series of replacements. The process is to replace non-terminals with either a valid non-terminal or a valid terminal. The replacement process repeats until only terminals remain. One valid sentence is:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 113,
"text": "However, another combination results in an invalid sentence:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 114,
"text": "Therefore, a semantic is necessary to correctly describe the meaning of an eat activity.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 115,
"text": "One production rule listing method is called the Backus–Naur form (BNF). BNF describes the syntax of a language and itself has a syntax. This recursive definition is an example of a meta-language. The syntax of BNF includes:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 116,
"text": "Using BNF, a subset of the English language can have this production rule listing:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 117,
"text": "Using BNF, a signed-integer has the production rule listing:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 118,
"text": "Notice the recursive production rule:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 119,
"text": "This allows for an infinite number of possibilities. Therefore, a semantic is necessary to describe a limitation of the number of digits.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 120,
"text": "Notice the leading zero possibility in the production rules:",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 121,
"text": "Therefore, a semantic is necessary to describe that leading zeros need to be ignored.",
"title": "Programming paradigms and languages"
},
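The production rules themselves are not preserved in this dump, but the signed-integer grammar the text describes can be recognised with a short C routine. The overflow test enforces the digit-count semantic, and the arithmetic makes leading zeros harmless, matching the two semantic rules just described. This is an illustrative sketch, not code from the source:

```c
/* Recogniser for a signed-integer grammar, roughly:
     signed-integer ::= [ '+' | '-' ] digit { digit }        */
#include <ctype.h>
#include <limits.h>
#include <stdbool.h>

bool parse_signed_integer(const char *s, long *out)
{
    long sign = 1, value = 0;

    if (*s == '+' || *s == '-') {            /* optional sign */
        if (*s == '-')
            sign = -1;
        s++;
    }
    if (!isdigit((unsigned char)*s))         /* at least one digit */
        return false;
    while (isdigit((unsigned char)*s)) {     /* the recursive rule, iterated */
        int d = *s - '0';
        if (value > (LONG_MAX - d) / 10)     /* semantic limit: overflow */
            return false;
        value = value * 10 + d;              /* leading zeros vanish here */
        s++;
    }
    *out = sign * value;
    return *s == '\0';                       /* reject trailing characters */
}
```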
{
"paragraph_id": 122,
"text": "Two formal methods are available to describe semantics. They are denotational semantics and axiomatic semantics.",
"title": "Programming paradigms and languages"
},
{
"paragraph_id": 123,
"text": "Software engineering is a variety of techniques to produce quality software. Computer programming is the process of writing or editing source code. In a formal environment, a systems analyst will gather information from managers about all the organization's processes to automate. This professional then prepares a detailed plan for the new or modified system. The plan is analogous to an architect's blueprint.",
"title": "Software engineering and computer programming"
},
{
"paragraph_id": 124,
"text": "The systems analyst has the objective to deliver the right information to the right person at the right time. The critical factors to achieve this objective are:",
"title": "Software engineering and computer programming"
},
{
"paragraph_id": 125,
"text": "Achieving performance objectives should be balanced with all of the costs, including:",
"title": "Software engineering and computer programming"
},
{
"paragraph_id": 126,
"text": "Applying a systems development process will mitigate the axiom: the later in the process an error is detected, the more expensive it is to correct.",
"title": "Software engineering and computer programming"
},
{
"paragraph_id": 127,
"text": "The waterfall model is an implementation of a systems development process. As the waterfall label implies, the basic phases overlap each other:",
"title": "Software engineering and computer programming"
},
{
"paragraph_id": 128,
"text": "A computer programmer is a specialist responsible for writing or modifying the source code to implement the detailed plan. A programming team is likely to be needed because most systems are too large to be completed by a single programmer. However, adding programmers to a project may not shorten the completion time. Instead, it may lower the quality of the system. To be effective, program modules need to be defined and distributed to team members. Also, team members must interact with one another in a meaningful and effective way.",
"title": "Software engineering and computer programming"
},
{
"paragraph_id": 129,
"text": "Computer programmers may be programming in the small: programming within a single module. Chances are a module will execute modules located in other source code files. Therefore, computer programmers may be programming in the large: programming modules so they will effectively couple with each other. Programming-in-the-large includes contributing to the application programming interface (API).",
"title": "Software engineering and computer programming"
},
{
"paragraph_id": 130,
"text": "Modular programming is a technique to refine imperative language programs. Refined programs may reduce the software size, separate responsibilities, and thereby mitigate software aging. A program module is a sequence of statements that are bounded within a block and together identified by a name. Modules have a function, context, and logic:",
"title": "Software engineering and computer programming"
},
{
"paragraph_id": 131,
"text": "The module's name should be derived first by its function, then by its context. Its logic should not be part of the name. For example, function compute_square_root( x ) or function compute_square_root_integer( i : integer ) are appropriate module names. However, function compute_square_root_by_division( x ) is not.",
"title": "Software engineering and computer programming"
},
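A sketch of this naming guideline, using the module name from the text; the Newton iteration inside is an arbitrary choice of logic, which is exactly why it stays out of the name:

```c
/* The name records the function (compute a square root), not the
   logic; Newton's method below could be replaced without renaming. */
double compute_square_root(double x)
{
    if (x <= 0.0)
        return 0.0;                       /* domain guard for the sketch */
    double guess = x;
    for (int i = 0; i < 50; i++)          /* Newton iteration */
        guess = 0.5 * (guess + x / guess);
    return guess;
}
```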
{
"paragraph_id": 132,
"text": "The degree of interaction within a module is its level of cohesion. Cohesion is a judgment of the relationship between a module's name and its function. The degree of interaction between modules is the level of coupling. Coupling is a judgement of the relationship between a module's context and the elements being performed upon.",
"title": "Software engineering and computer programming"
},
{
"paragraph_id": 133,
"text": "The levels of cohesion from worst to best are:",
"title": "Software engineering and computer programming"
},
{
"paragraph_id": 134,
"text": "The levels of coupling from worst to best are:",
"title": "Software engineering and computer programming"
},
{
"paragraph_id": 135,
"text": "Data flow analysis is a design method used to achieve modules of functional cohesion and data coupling. The input to the method is a data-flow diagram. A data-flow diagram is a set of ovals representing modules. Each module's name is displayed inside its oval. Modules may be at the executable level or the function level.",
"title": "Software engineering and computer programming"
},
{
"paragraph_id": 136,
"text": "The diagram also has arrows connecting modules to each other. Arrows pointing into modules represent a set of inputs. Each module should have only one arrow pointing out from it to represent its single output object. (Optionally, an additional exception arrow points out.) A daisy chain of ovals will convey an entire algorithm. The input modules should start the diagram. The input modules should connect to the transform modules. The transform modules should connect to the output modules.",
"title": "Software engineering and computer programming"
},
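A sketch of the input, transform, and output daisy chain described above; each module has a single output object, so the modules couple only through data. All names and values are illustrative:

```c
#include <stdio.h>

static double read_measurement(void)            /* input module */
{
    return 98.6;                                 /* stand-in for real input */
}

static double fahrenheit_to_celsius(double f)   /* transform module */
{
    return (f - 32.0) * 5.0 / 9.0;
}

static void print_result(double c)              /* output module */
{
    printf("%.1f C\n", c);
}

int main(void)
{
    /* the daisy chain: input -> transform -> output */
    print_result(fahrenheit_to_celsius(read_measurement()));
    return 0;
}
```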
{
"paragraph_id": 137,
"text": "Computer programs may be categorized along functional lines. The main functional categories are application software and system software. System software includes the operating system, which couples computer hardware with application software. The purpose of the operating system is to provide an environment where application software executes in a convenient and efficient manner. Both application software and system software execute utility programs. At the hardware level, a microcode program controls the circuits throughout the central processing unit.",
"title": "Functional categories"
},
{
"paragraph_id": 138,
"text": "Application software is the key to unlocking the potential of the computer system. Enterprise application software bundles accounting, personnel, customer, and vendor applications. Examples include enterprise resource planning, customer relationship management, and supply chain management software.",
"title": "Functional categories"
},
{
"paragraph_id": 139,
"text": "Enterprise applications may be developed in-house as a one-of-a-kind proprietary software. Alternatively, they may be purchased as off-the-shelf software. Purchased software may be modified to provide custom software. If the application is customized, then either the company's resources are used or the resources are outsourced. Outsourced software development may be from the original software vendor or a third-party developer.",
"title": "Functional categories"
},
{
"paragraph_id": 140,
"text": "The potential advantages of in-house software are features and reports may be developed exactly to specification. Management may also be involved in the development process and offer a level of control. Management may decide to counteract a competitor's new initiative or implement a customer or vendor requirement. A merger or acquisition may necessitate enterprise software changes. The potential disadvantages of in-house software are time and resource costs may be extensive. Furthermore, risks concerning features and performance may be looming.",
"title": "Functional categories"
},
{
"paragraph_id": 141,
"text": "The potential advantages of off-the-shelf software are upfront costs are identifiable, the basic needs should be fulfilled, and its performance and reliability have a track record. The potential disadvantages of off-the-shelf software are it may have unnecessary features that confuse end users, it may lack features the enterprise needs, and the data flow may not match the enterprise's work processes.",
"title": "Functional categories"
},
{
"paragraph_id": 142,
"text": "One approach to economically obtaining a customized enterprise application is through an application service provider. Specialty companies provide hardware, custom software, and end-user support. They may speed the development of new applications because they possess skilled information system staff. The biggest advantage is it frees in-house resources from staffing and managing complex computer projects. Many application service providers target small, fast-growing companies with limited information system resources. On the other hand, larger companies with major systems will likely have their technical infrastructure in place. One risk is having to trust an external organization with sensitive information. Another risk is having to trust the provider's infrastructure reliability.",
"title": "Functional categories"
},
{
"paragraph_id": 143,
"text": "An operating system is the low-level software that supports a computer's basic functions, such as scheduling processes and controlling peripherals.",
"title": "Functional categories"
},
{
"paragraph_id": 144,
"text": "In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing. More often than not the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If less fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an operating system was kept in the computer at all times.",
"title": "Functional categories"
},
{
"paragraph_id": 145,
"text": "The term operating system may refer to two levels of software. The operating system may refer to the kernel program that manages the processes, memory, and devices. More broadly, the operating system may refer to the entire package of the central software. The package includes a kernel program, command-line interpreter, graphical user interface, utility programs, and editor.",
"title": "Functional categories"
},
{
"paragraph_id": 146,
"text": "The kernel's main purpose is to manage the limited resources of a computer:",
"title": "Functional categories"
},
{
"paragraph_id": 147,
"text": "Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher-level languages like C, Objective-C, and Swift.",
"title": "Functional categories"
},
{
"paragraph_id": 148,
"text": "A utility program is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers. A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, a trigger alert is generated.",
"title": "Functional categories"
},
{
"paragraph_id": 149,
"text": "Utility programs include compression programs so data files are stored on less disk space. Compressed programs also save time when data files are transmitted over the network. Utility programs can sort and merge data sets. Utility programs detect computer viruses.",
"title": "Functional categories"
},
{
"paragraph_id": 150,
"text": "A microcode program is the bottom-level interpreter that controls the data path of software-driven computers. (Advances in hardware have migrated these operations to hardware execution circuits.) Microcode instructions allow the programmer to more easily implement the digital logic level—the computer's real hardware. The digital logic level is the boundary between computer science and computer engineering.",
"title": "Functional categories"
},
{
"paragraph_id": 151,
"text": "A logic gate is a tiny transistor that can return one of two signals: on or off.",
"title": "Functional categories"
},
{
"paragraph_id": 152,
"text": "These five gates form the building blocks of binary algebra—the digital logic functions of the computer.",
"title": "Functional categories"
},
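The list of gates itself did not survive in this dump; assuming the customary five (NOT, AND, OR, NAND, NOR), their digital logic functions map directly onto C's bitwise operators on one-bit values:

```c
/* One-bit logic functions; inputs are assumed to be 0 or 1. */
static inline unsigned bit_not (unsigned a)             { return a ^ 1u; }
static inline unsigned bit_and (unsigned a, unsigned b) { return a & b; }
static inline unsigned bit_or  (unsigned a, unsigned b) { return a | b; }
static inline unsigned bit_nand(unsigned a, unsigned b) { return bit_not(bit_and(a, b)); }
static inline unsigned bit_nor (unsigned a, unsigned b) { return bit_not(bit_or(a, b)); }
```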
{
"paragraph_id": 153,
"text": "Microcode instructions are mnemonics programmers may use to execute digital logic functions instead of forming them in binary algebra. They are stored in a central processing unit's (CPU) control store. These hardware-level instructions move data throughout the data path.",
"title": "Functional categories"
},
{
"paragraph_id": 154,
"text": "The micro-instruction cycle begins when the microsequencer uses its microprogram counter to fetch the next machine instruction from random-access memory. The next step is to decode the machine instruction by selecting the proper output line to the hardware module. The final step is to execute the instruction using the hardware module's set of gates.",
"title": "Functional categories"
},
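A toy model of the fetch, decode, and execute steps described above; the opcodes, program, and printed output are invented for illustration, and real microcode drives hardware modules rather than a C switch:

```c
#include <stdio.h>

enum { OP_HALT, OP_TICK };

int main(void)
{
    const int program[] = { OP_TICK, OP_TICK, OP_HALT };
    int pc = 0;                               /* program counter */
    for (;;) {
        int instruction = program[pc++];      /* fetch  */
        switch (instruction) {                /* decode */
        case OP_TICK:                         /* execute */
            printf("tick\n");
            break;
        case OP_HALT:
            return 0;
        }
    }
}
```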
{
"paragraph_id": 155,
"text": "Instructions to perform arithmetic are passed through an arithmetic logic unit (ALU). The ALU has circuits to perform elementary operations to add, shift, and compare integers. By combining and looping the elementary operations through the ALU, the CPU performs its complex arithmetic.",
"title": "Functional categories"
},
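One concrete instance of combining and looping the elementary operations: multiplication assembled from the ALU-level add, shift, and compare operations the text lists. A sketch, not from the source:

```c
unsigned long multiply_shift_add(unsigned long a, unsigned long b)
{
    unsigned long product = 0;
    while (b != 0) {          /* compare against zero   */
        if (b & 1ul)          /* inspect the low bit    */
            product += a;     /* add                    */
        a <<= 1;              /* shift multiplicand up  */
        b >>= 1;              /* shift multiplier down  */
    }
    return product;
}
```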
{
"paragraph_id": 156,
"text": "Microcode instructions move data between the CPU and the memory controller. Memory controller microcode instructions manipulate two registers. The memory address register is used to access each memory cell's address. The memory data register is used to set and read each cell's contents.",
"title": "Functional categories"
},
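A toy model of the two registers named in the text: the memory address register (MAR) selects a cell, and the memory data register (MDR) carries the value being written or read. The 256-byte memory is an invented simplification:

```c
#include <stdint.h>

static uint8_t memory[256];   /* simulated RAM           */
static uint8_t mar;           /* memory address register */
static uint8_t mdr;           /* memory data register    */

static void mem_write(void) { memory[mar] = mdr; }   /* set a cell  */
static void mem_read (void) { mdr = memory[mar]; }   /* read a cell */
```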
{
"paragraph_id": 157,
"text": "Microcode instructions move data between the CPU and the many computer buses. The disk controller bus writes to and reads from hard disk drives. Data is also moved between the CPU and other functional units via the peripheral component interconnect express bus.",
"title": "Functional categories"
}
] | A computer program is a sequence or set of instructions in a programming language for a computer to execute. It is one component of software, which also includes documentation and other intangible components. A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using the language's compiler. The resulting file is called an executable. Alternatively, source code may execute within the language's interpreter. If the executable is requested for execution, then the operating system loads it into memory and starts a process. The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction. If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement. Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer. | 2001-06-27T18:39:23Z | 2023-12-30T10:33:00Z | [
"Template:For",
"Template:See also",
"Template:Sxhl",
"Template:Notelist",
"Template:Cite journal",
"Template:Short description",
"Template:Efn",
"Template:Convert",
"Template:Main",
"Template:Reflist",
"Template:Cite web",
"Template:Cite book",
"Template:Citation"
] | https://en.wikipedia.org/wiki/Computer_program |
5,785 | Crime | In ordinary language, a crime is an unlawful act punishable by a state or other authority. The term crime does not, in modern criminal law, have any simple and universally accepted definition, though statutory definitions have been provided for certain purposes. The most popular view is that crime is a category created by law; in other words, something is a crime if declared as such by the relevant and applicable law. One proposed definition is that a crime or offence (or criminal offence) is an act harmful not only to some individual but also to a community, society, or the state ("a public wrong"). Such acts are forbidden and punishable by law.
The notion that acts such as murder, rape, and theft are to be prohibited exists worldwide. What precisely is a criminal offence is defined by the criminal law of each relevant jurisdiction. While many have a catalogue of crimes called the criminal code, in some common law nations no such comprehensive statute exists.
The state (government) has the power to severely restrict one's liberty for committing a crime. In modern societies, there are procedures to which investigations and trials must adhere. If found guilty, an offender may be sentenced to a form of reparation such as a community sentence, or, depending on the nature of their offence, to undergo imprisonment, life imprisonment or, in some jurisdictions, death.
Usually, to be classified as a crime, the "act of doing something criminal" (actus reus) must – with certain exceptions – be accompanied by the "intention to do something criminal" (mens rea).
While every crime violates the law, not every violation of the law counts as a crime. Breaches of private law (torts and breaches of contract) are not automatically punished by the state, but can be enforced through civil procedure.
The exact definition of crime is a philosophical issue without an agreed upon answer. Fields such as law, politics, sociology, and psychology define crime in different ways. Crimes may be variously considered as wrongs against individuals, against the community, or against the state. The criminality of an action is dependent on its context; acts of violence will be seen as crimes in many circumstances but as permissible or desirable in others. Crime was historically seen as a manifestation of evil, but this has been superseded by modern criminal theories.
Legal and political definitions of crime consider actions that are banned by authorities or punishable by law. Crime is defined by the criminal law of a given jurisdiction, including all actions that are subject to criminal procedure. There is no limit to what can be considered a crime in a legal system, so there may not be a unifying principle used to determine whether an action should be designated as a crime. From a legal perspective, crimes are generally wrong actions that are severe enough to warrant punishment that infringes on the perpetrator's liberties.
English criminal law and the related common law of Commonwealth countries can define offences that the courts alone have developed over the years, without any actual legislation: common law offences. The courts used the concept of malum in se to develop various common law offences.
As a sociological concept, crime is associated with actions that cause harm and violate social norms. Under this definition, crime is a type of social construct, and societal attitudes determine what is considered criminal.
In legal systems based on legal moralism, the predominant moral beliefs of society determine the legal definition as well as the social definition of crime. This system is less prominent in liberal democratic societies that prioritize individualism and multiculturalism over other moral beliefs.
Paternalism defines crime not only as harm to others or to society, but also as harm to the self.
Psychological definitions consider the state of mind of perpetrators and their relationship with their environment.
The study of crime is called criminology. Criminology is a subfield of sociology that addresses issues of social norms, social order, deviance, and violence. It includes the motivations and consequences of crime and its perpetrators, as well as preventative measures, either studying criminal acts on an individual level or the relationship of crime and the community. Due to the wide range of concepts associated with crime and the disagreement on a precise definition, the focus of criminology can vary considerably. Various theories within criminology provide different descriptions and explanations for crime, including social control theory, subcultural theory, strain theory, differential association, and labeling theory.
Subfields of criminology and related fields of study include crime prevention, criminal law, crime statistics, anthropological criminology, criminal psychology, criminal sociology, criminal psychiatry, victimology, penology, and forensic science. Besides sociology, criminology is often associated with law and psychology.
Information and statistics about crime in a given jurisdiction are collected as crime estimates, typically produced by national or international agencies. Methods to collect crime statistics may vary, even between jurisdictions within the same nation. Under-reporting of crime is common, particularly in developing nations. Victim studies may be used to determine the frequency of crime in a given population.
Justifying the state's use of force to coerce compliance with its laws has proven a consistent theoretical problem. One of the earliest justifications involved the theory of natural law. This posits that the nature of the world or of human beings underlies the standards of morality or constructs them. Thomas Aquinas wrote in the 13th century: "the rule and measure of human acts is the reason, which is the first principle of human acts". He regarded people as by nature rational beings, concluding that it becomes morally appropriate that they should behave in a way that conforms to their rational nature. Thus, to be valid, any law must conform to natural law and coercing people to conform to that law is morally acceptable. In the 1760s, William Blackstone described the thesis:
But John Austin (1790–1859), an early positivist, applied utilitarianism in accepting the calculating nature of human beings and the existence of an objective morality. He denied that the legal validity of a norm depends on whether its content conforms to morality. Thus in Austinian terms, a moral code can objectively determine what people ought to do, the law can embody whatever norms the legislature decrees to achieve social utility, but every individual remains free to choose what to do. Similarly, H.L.A. Hart saw the law as an aspect of sovereignty, with lawmakers able to adopt any law as a means to a moral end.
Thus the necessary and sufficient conditions for the truth of a proposition of law involved internal logic and consistency, and that the state's agents used state power with responsibility. Ronald Dworkin rejects Hart's theory and proposes that all individuals should expect the equal respect and concern of those who govern them as a fundamental political right. He offers a theory of compliance overlaid by a theory of deference (the citizen's duty to obey the law) and a theory of enforcement, which identifies the legitimate goals of enforcement and punishment. Legislation must conform to a theory of legitimacy, which describes the circumstances under which a particular person or group is entitled to make law, and a theory of legislative justice, which describes the law they are entitled or obliged to make.
There are natural-law theorists who have accepted the idea of enforcing the prevailing morality as a primary function of the law. This view entails the problem that it makes any moral criticism of the law impossible: if conformity with natural law forms a necessary condition for legal validity, all valid law must, by definition, count as morally just. Thus, on this line of reasoning, the legal validity of a norm necessarily entails its moral justice.
Restrictions on behavior existed in all prehistoric societies. Crime in early human society was seen as a personal transgression and was addressed by the community as a whole rather than through a formal legal system, often through the use of custom, religion, or the rule of a tribal leader. Some of the oldest extant writings are ancient criminal codes. The earliest known criminal code was the Code of Ur-Nammu (c. 2100 – c. 2050 BC), and the known first criminal code that incorporated retaliatory justice was the Code of Hammurabi. The latter influenced the conception of crime across several civilizations over the following millennia.
The Romans systematized law and applied their system across the Roman Empire. The initial rules of Roman law regarded assaults as a matter of private compensation. The most significant Roman law concept involved dominion. Most acts recognized as crimes in ancient societies, such as violence and theft, have persisted to the modern era. The criminal justice system of Imperial China existed unbroken for over 2,000 years.
Many of the earliest conceptions of crime are associated with sin and corresponded to acts that were believed to invoke the anger of a deity. This idea was further popularized with the development of the Abrahamic religions. The understanding of crime and sin were closely associated with one another for much of history, and conceptions of crime took on many of the ideas associated with sin. Islamic law developed its own system of criminal justice as Islam spread in the seventh and eighth centuries.
In post-classical Europe and East Asia, central government was limited and crime was defined locally. Towns established their own criminal justice systems, while crime in the countryside was defined by the social hierarchies of feudalism. In some places, such as the Russian Empire and the Kingdom of Italy, feudal justice survived into the 19th century.
Common law first developed in England under the rule of Henry II in the 12th century. He established a system of traveling judges that tried accused criminals in each region of England by applying precedent from previous rulings. Legal developments in 12th century England also resulted in the earliest known recording of official crime data.
In the modern era, crime came to be seen as an issue affecting society rather than conflicts between individuals. Writers such as Thomas Hobbes saw crime as a societal issue as early as the 17th century. Imprisonment developed as a long-term penalty for crime in the 18th century. Increasing urbanization and industrialization in the 19th century caused crime to become an immediate issue that affected society, prompting government intervention in crime and the establishment of criminology as its own field.
Anthropological criminology was popularized by Cesare Lombroso in the late-19th century. This was a biological determinist school of thought based in social darwinism, arguing that certain people are naturally born as criminals. The eugenics movement of the early-20th century similarly held that crime was caused primarily by genetic factors.
The concept of crime underwent a period of change as modernism was widely accepted in the years following World War II. Crime increasingly came to be seen as a societal issue, and criminal law was seen as a means to protect the public from antisocial behavior. This idea was associated with a larger trend in the western world toward social democracy and centre-left politics.
Through most of history, reporting of crime was generally local. The advent of mass media through radio and television in the mid-20th century allowed for the sensationalism of crime. This created well-known stories of criminals such as Jeffrey Dahmer, and it allowed for dramatization that perpetuates misconceptions about crime. Forensic science was popularized in the 1980s, establishing DNA profiling as a new method to prevent and analyze crime.
Violent crime is crime that involves an act of violent aggression against another person. Common examples of violent crime include homicide, assault, sexual assault, and robbery. Some violent crimes, such as assault, may be committed with the intention of causing harm. Other violent crimes, such as robbery, may use violence to further another goal. Violent crime is distinct from noncriminal types of violence, such as self-defense, use of force, and acts of war. Acts of violence are most often perceived as deviant when they are committed as an overreaction or a disproportionate response to provocation.
Common examples of property crime include burglary, theft, and vandalism.
Examples of financial crimes include counterfeiting, smuggling, tax evasion, and bribery. The scope of financial crimes has expanded significantly since the beginning of modern economics in the 17th century. In occupational crime, the complexity and anonymity of computer systems may help criminal employees camouflage their operations. The victims of the most costly scams include banks, brokerage houses, insurance companies, and other large financial institutions.
Public order crime is crime that violates a society's norms about what constitutes socially acceptable behavior. Examples of public order crimes include gambling, drug-related crime, public intoxication, prostitution, loitering, breach of the peace, panhandling, vagrancy, street harassment, excessive noise, and littering. Public order crime is associated with the broken windows theory, which posits that public order crimes increase the likelihood of other types of crime. Some public order crimes are considered victimless crimes in which no specific victim can be identified. Most nations in the Western world have moved toward decriminalization of victimless crimes in the modern era.
Adultery, fornication, blasphemy, apostasy, and invoking the name of God are commonly recognized as crimes in theocratic societies or those heavily influenced by religion.
Political crime is crime that directly challenges or threatens the state. Examples of political crimes include subversion, rebellion, treason, mutiny, espionage, sedition, terrorism, riot, and unlawful assembly. Political crimes are associated with the political agenda of a given state, and they are necessarily applied against political dissidents. Due to their unique relation to the state, political crimes are often encouraged by one nation against another, and it is political alignment rather than the act itself that determines criminality. State crime that is carried out by the state to repress law-abiding citizens may also be considered political crime.
Inchoate crime is crime that is carried out in anticipation of other illegal actions but does not cause direct harm. Examples of inchoate crimes include attempt and conspiracy. Inchoate crimes are defined by substantial action to facilitate a crime with the intention of the crime's occurrence. This is distinct from simple preparation for or consideration of criminal activity. They are unique in that renunciation of criminal intention is generally enough to absolve the perpetrator of criminal liability, as their actions are no longer facilitating a potential future crime.
A criminal is an individual who commits a crime. What constitutes a criminal can vary depending on the context and the law, and it often carries a pejorative connotation. Criminals are often seen as embodying certain stereotypes or traits and are seen as a distinct type of person from law-abiding citizens. Despite this, no mental or physical trend is identifiable that differentiates criminals from non-criminals. Public response to criminals may be indignant or sympathetic. Indignant responses involve resentment and a desire for vengeance, wishing to see criminals removed from society or made to suffer for harm that they cause. Sympathetic responses involve compassion and understanding, seeking to rehabilitate or forgive criminals and absolve them of blame.
A victim is an individual who has been treated unjustly or made to suffer. In the context of crime, the victim is the individual that is harmed by a violation of criminal law. Victimization is associated with post-traumatic stress and a long-term decrease in quality of life. Victimology is the study of victims, including their role in crime and how they are affected.
Several factors affect an individual's likelihood of becoming a victim. Some factors may cause victims of crime to experience short-term or long-term "repeat victimization". Common long-term victims are those that have close relationships with the criminal, manifesting in crimes such as domestic violence, embezzlement, child abuse, and bullying. Repeat victimization may also occur when a potential victim appears to be a viable target, such as when indicating wealth in a less affluent region. Many of the traits that indicate criminality also indicate victimality; victims of crime are more likely to engage in unlawful behavior and respond to provocation. Overall demographic trends of victims and criminals are often similar, and victims are more likely to have engaged in criminal activities themselves.
The victims may only want compensation for the injuries suffered, while remaining indifferent to a possible desire for deterrence. Victims, on their own, may lack the economies of scale that could allow them to administer a penal system, let alone to collect any fines levied by a court. Historically, from ancient times until the 19th century, many societies believed that non-human animals were capable of committing crimes, and prosecuted and punished them accordingly. Prosecutions of animals gradually dwindled during the 19th century, although a few were recorded as late as the 1910s and 1920s.
Virtually all countries in the 21st century have criminal law grounded in civil law, common law, Islamic law, or socialist law. Historically, criminal codes have often divided criminals by class or caste, prescribing different penalties depending on status. In some tribal societies, an entire clan is recognized as liable for a crime. In many cases, disputes over a crime in this system lead to a feud that lasts over several generations.
The state determines what actions are considered criminal in the scope of the law. Criminalization has significant human rights considerations, as it can infringe on rights of autonomy and subject individuals to unjust punishment.
The enforcement of criminal law seeks to prevent crime and sanction crimes that do occur. This enforcement is carried out by the state through law enforcement agencies, such as police, which are empowered to arrest suspected perpetrators of crimes. Law enforcement may focus on policing individual crimes, or it may focus on bringing down overall crime rates. One common variant, community policing, seeks to prevent crime by integrating police into the community and public life.
When the perpetrator of a crime is found guilty of the crime, the state delivers a sentence to determine the penalty for the crime.
Authorities may respond to crime through corrections, carrying out punishment as a means to censure the criminal act. Punishment is generally reserved for serious offenses. Individuals regularly engage in activities that could be scrutinized under criminal law but that are deemed inconsequential. Retributive justice seeks to create a system of accountability and punish criminals in a way that knowingly causes suffering. This may arise out of a feeling that criminals deserve to suffer and that punishment should exist for its own sake. The existence of punishment also creates an effect of deterrence that discourages criminal action for fear of punishment.
Rehabilitation seeks to understand and mitigate the causes of a criminal's unlawful action to prevent recidivism. Different criminological theories propose different methods of rehabilitation, including strengthening social networks, reducing poverty, influencing values, and providing therapy for physical and mental ailments. Rehabilitative programs may include counseling or vocational education.
Developed nations are less likely to use physical punishments. Instead, they will impose financial penalties or imprisonment. In places with widespread corruption or limited rule of law, crime may be punished extralegally through mob rule and lynching.
Whether a crime can be resolved through financial compensation varies depending on the culture and the specific context of the crime. Historically, many societies have absolved acts of homicide through compensation to the victim's relatives.
If a crime is committed, the individual responsible is considered to be liable for the crime. For liability to exist, the individual must be capable of understanding the criminal process and the relevant authority must have legitimate power to establish what constitutes a crime.
International criminal law typically addresses serious offenses, such as genocide, crimes against humanity, and war crimes. As with all international law, these laws are created through treaties and international custom, and they are defined through the consensus of the involved states. International crimes are not prosecuted through a standard legal system, though international organizations may establish tribunals to investigate and rule on egregious offenses such as genocide.
Basic analysis of criminal behavior is determined by a cost–benefit analysis. A person that commits a criminal act typically believes that its benefits will outweigh the risk of being caught and punished. Negative economic factors (such as unemployment and income inequality) significantly increase the incentive to commit crime, while severe punishments decrease the incentive in some cases.
Social factors similarly affect the likelihood of criminal activity. Crime corresponds heavily with social integration; groups that are less integrated with society or that are forcibly integrated with society are more likely to engage in crime. Involvement in the community, such as through a church, decreases the likelihood of crime, while associating with criminals increases the likelihood of becoming a criminal as well.
There is no known genetic cause of crime. Some genes have been found to affect traits that may incline individuals toward criminal activity, but no biological or physiological trait has been found to directly cause or compel criminal actions. One biological factor is the disparity between men and women, as men are significantly more likely to commit crimes than women in virtually all cultures. Crimes committed by men also tend to be more severe than those committed by women.
Crime is often a high priority political issue in developed countries, regardless of the country's crime rates. People that are not regularly exposed to crime most often experience it through media, including news reporting and crime fiction. Exposure of crime through news stories is associated with alarmism and inaccurate perceptions of crime trends. Selection bias in news stories about criminals significantly over-represents the prevalence of violent crime, and news reporting will often overemphasize a specific type of crime for a period of time, creating a "crime wave" effect.
As public opinion of morality changes over time, actions that were once condemned as crimes may be considered justifiable. | [
{
"paragraph_id": 0,
"text": "In ordinary language, a crime is an unlawful act punishable by a state or other authority. The term crime does not, in modern criminal law, have any simple and universally accepted definition, though statutory definitions have been provided for certain purposes. The most popular view is that crime is a category created by law; in other words, something is a crime if declared as such by the relevant and applicable law. One proposed definition is that a crime or offence (or criminal offence) is an act harmful not only to some individual but also to a community, society, or the state (\"a public wrong\"). Such acts are forbidden and punishable by law.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The notion that acts such as murder, rape, and theft are to be prohibited exists worldwide. What precisely is a criminal offence is defined by the criminal law of each relevant jurisdiction. While many have a catalogue of crimes called the criminal code, in some common law nations no such comprehensive statute exists.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The state (government) has the power to severely restrict one's liberty for committing a crime. In modern societies, there are procedures to which investigations and trials must adhere. If found guilty, an offender may be sentenced to a form of reparation such as a community sentence, or, depending on the nature of their offence, to undergo imprisonment, life imprisonment or, in some jurisdictions, death.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Usually, to be classified as a crime, the \"act of doing something criminal\" (actus reus) must – with certain exceptions – be accompanied by the \"intention to do something criminal\" (mens rea).",
"title": ""
},
{
"paragraph_id": 4,
"text": "While every crime violates the law, not every violation of the law counts as a crime. Breaches of private law (torts and breaches of contract) are not automatically punished by the state, but can be enforced through civil procedure.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The exact definition of crime is a philosophical issue without an agreed upon answer. Fields such as law, politics, sociology, and psychology define crime in different ways. Crimes may be variously considered as wrongs against individuals, against the community, or against the state. The criminality of an action is dependent on its context; acts of violence will be seen as crimes in many circumstances but as permissible or desirable in others. Crime was historically seen as a manifestation of evil, but this has been superseded by modern criminal theories.",
"title": "Definition"
},
{
"paragraph_id": 6,
"text": "Legal and political definitions of crime consider actions that are banned by authorities or punishable by law. Crime is defined by the criminal law of a given jurisdiction, including all actions that are subject to criminal procedure. There is no limit to what can be considered a crime in a legal system, so there may not be a unifying principle used to determine whether an action should be designated as a crime. From a legal perspective, crimes are generally wrong actions that are severe enough to warrant punishment that infringes on the perpetrator's liberties.",
"title": "Definition"
},
{
"paragraph_id": 7,
"text": "English criminal law and the related common law of Commonwealth countries can define offences that the courts alone have developed over the years, without any actual legislation: common law offences. The courts used the concept of malum in se to develop various common law offences.",
"title": "Definition"
},
{
"paragraph_id": 8,
"text": "As a sociological concept, crime is associated with actions that cause harm and violate social norms. Under this definition, crime is a type of social construct, and societal attitudes determine what is considered criminal.",
"title": "Definition"
},
{
"paragraph_id": 9,
"text": "In legal systems based on legal moralism, the predominant moral beliefs of society determine the legal definition as well as the social definition of crime. This system is less prominent in liberal democratic societies that prioritize individualism and multiculturalism over other moral beliefs.",
"title": "Definition"
},
{
"paragraph_id": 10,
"text": "Paternalism defines crime not only as harm to others or to society, but also as harm to the self.",
"title": "Definition"
},
{
"paragraph_id": 11,
"text": "Psychological definitions consider the state of mind of perpetrators and their relationship with their environment.",
"title": "Definition"
},
{
"paragraph_id": 12,
"text": "The study of crime is called criminology. Criminology is a subfield of sociology that addresses issues of social norms, social order, deviance, and violence. It includes the motivations and consequences of crime and its perpetrators, as well as preventative measures, either studying criminal acts on an individual level or the relationship of crime and the community. Due to the wide range of concepts associated with crime and the disagreement on a precise definition, the focus of criminology can vary considerably. Various theories within criminology provide different descriptions and explanations for crime, including social control theory, subcultural theory, strain theory, differential association, and labeling theory.",
"title": "Study"
},
{
"paragraph_id": 13,
"text": "Subfields of criminology and related fields of study include crime prevention, criminal law, crime statistics, anthropological criminology, criminal psychology, criminal sociology, criminal psychiatry, victimology, penology, and forensic science. Besides sociology, criminology is often associated with law and psychology.",
"title": "Study"
},
{
"paragraph_id": 14,
"text": "Information and statistics about crime in a given jurisdiction are collected as crime estimates, typically produced by national or international agencies. Methods to collect crime statistics may vary, even between jurisdictions within the same nation. Under-reporting of crime is common, particularly in developing nations. Victim studies may be used to determine the frequency of crime in a given population.",
"title": "Study"
},
{
"paragraph_id": 15,
"text": "Justifying the state's use of force to coerce compliance with its laws has proven a consistent theoretical problem. One of the earliest justifications involved the theory of natural law. This posits that the nature of the world or of human beings underlies the standards of morality or constructs them. Thomas Aquinas wrote in the 13th century: \"the rule and measure of human acts is the reason, which is the first principle of human acts\". He regarded people as by nature rational beings, concluding that it becomes morally appropriate that they should behave in a way that conforms to their rational nature. Thus, to be valid, any law must conform to natural law and coercing people to conform to that law is morally acceptable. In the 1760s, William Blackstone described the thesis:",
"title": "Foundational systems"
},
{
"paragraph_id": 16,
"text": "But John Austin (1790–1859), an early positivist, applied utilitarianism in accepting the calculating nature of human beings and the existence of an objective morality. He denied that the legal validity of a norm depends on whether its content conforms to morality. Thus in Austinian terms, a moral code can objectively determine what people ought to do, the law can embody whatever norms the legislature decrees to achieve social utility, but every individual remains free to choose what to do. Similarly, H.L.A. Hart saw the law as an aspect of sovereignty, with lawmakers able to adopt any law as a means to a moral end.",
"title": "Foundational systems"
},
{
"paragraph_id": 17,
"text": "Thus the necessary and sufficient conditions for the truth of a proposition of law involved internal logic and consistency, and that the state's agents used state power with responsibility. Ronald Dworkin rejects Hart's theory and proposes that all individuals should expect the equal respect and concern of those who govern them as a fundamental political right. He offers a theory of compliance overlaid by a theory of deference (the citizen's duty to obey the law) and a theory of enforcement, which identifies the legitimate goals of enforcement and punishment. Legislation must conform to a theory of legitimacy, which describes the circumstances under which a particular person or group is entitled to make law, and a theory of legislative justice, which describes the law they are entitled or obliged to make.",
"title": "Foundational systems"
},
{
"paragraph_id": 18,
"text": "There are natural-law theorists who have accepted the idea of enforcing the prevailing morality as a primary function of the law. This view entails the problem that it makes any moral criticism of the law impossible: if conformity with natural law forms a necessary condition for legal validity, all valid law must, by definition, count as morally just. Thus, on this line of reasoning, the legal validity of a norm necessarily entails its moral justice.",
"title": "Foundational systems"
},
{
"paragraph_id": 19,
"text": "Restrictions on behavior existed in all prehistoric societies. Crime in early human society was seen as a personal transgression and was addressed by the community as a whole rather than through a formal legal system, often through the use of custom, religion, or the rule of a tribal leader. Some of the oldest extant writings are ancient criminal codes. The earliest known criminal code was the Code of Ur-Nammu (c. 2100 – c. 2050 BC), and the known first criminal code that incorporated retaliatory justice was the Code of Hammurabi. The latter influenced the conception of crime across several civilizations over the following millennia.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The Romans systematized law and applied their system across the Roman Empire. The initial rules of Roman law regarded assaults as a matter of private compensation. The most significant Roman law concept involved dominion. Most acts recognized as crimes in ancient societies, such as violence and theft, have persisted to the modern era. The criminal justice system of Imperial China existed unbroken for over 2,000 years.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Many of the earliest conceptions of crime are associated with sin and corresponded to acts that were believed to invoke the anger of a deity. This idea was further popularized with the development of the Abrahamic religions. The understanding of crime and sin were closely associated with one another for much of history, and conceptions of crime took on many of the ideas associated with sin. Islamic law developed its own system of criminal justice as Islam spread in the seventh and eighth centuries.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "In post-classical Europe and East Asia, central government was limited and crime was defined locally. Towns established their own criminal justice systems, while crime in the countryside was defined by the social hierarchies of feudalism. In some places, such as the Russian Empire and the Kingdom of Italy, feudal justice survived into the 19th century.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Common law first developed in England under the rule of Henry II in the 12th century. He established a system of traveling judges that tried accused criminals in each region of England by applying precedent from previous rulings. Legal developments in 12th century England also resulted in the earliest known recording of official crime data.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In the modern era, crime came to be seen as an issue affecting society rather than conflicts between individuals. Writers such as Thomas Hobbes saw crime as a societal issue as early as the 17th century. Imprisonment developed as a long-term penalty for crime in the 18th century. Increasing urbanization and industrialization in the 19th century caused crime to become an immediate issue that affected society, prompting government intervention in crime and the establishment of criminology as its own field.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Anthropological criminology was popularized by Cesare Lombroso in the late-19th century. This was a biological determinist school of thought based in social darwinism, arguing that certain people are naturally born as criminals. The eugenics movement of the early-20th century similarly held that crime was caused primarily by genetic factors.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "The concept of crime underwent a period of change as modernism was widely accepted in the years following World War II. Crime increasingly came to be seen as a societal issue, and criminal law was seen as a means to protect the public from antisocial behavior. This idea was associated with a larger trend in the western world toward social democracy and centre-left politics.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Through most of history, reporting of crime was generally local. The advent of mass media through radio and television in the mid-20th century allowed for the sensationalism of crime. This created well-known stories of criminals such as Jeffrey Dahmer, and it allowed for dramatization that perpetuates misconceptions about crime. Forensic science was popularized in the 1980s, establishing DNA profiling as a new method to prevent and analyze crime.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Violent crime is crime that involves an act of violent aggression against another person. Common examples of violent crime include homicide, assault, sexual assault, and robbery. Some violent crimes, such as assault, may be committed with the intention of causing harm. Other violent crimes, such as robbery, may use violence to further another goal. Violent crime is distinct from noncriminal types of violence, such as self-defense, use of force, and acts of war. Acts of violence are most often perceived as deviant when they are committed as an overreaction or a disproportionate response to provocation.",
"title": "Types"
},
{
"paragraph_id": 29,
"text": "Common examples of property crime include burglary, theft, and vandalism.",
"title": "Types"
},
{
"paragraph_id": 30,
"text": "Examples of financial crimes include counterfeiting, smuggling, tax evasion, and bribery. The scope of financial crimes has expanded significantly since the beginning of modern economics in the 17th century. In occupational crime, the complexity and anonymity of computer systems may help criminal employees camouflage their operations. The victims of the most costly scams include banks, brokerage houses, insurance companies, and other large financial institutions.",
"title": "Types"
},
{
"paragraph_id": 31,
"text": "Public order crime is crime that violates a society's norms about what constitutes socially acceptable behavior. Examples of public order crimes include gambling, drug-related crime, public intoxication, prostitution, loitering, breach of the peace, panhandling, vagrancy, street harassment, excessive noise, and littering. Public order crime is associated with the broken windows theory, which posits that public order crimes increase the likelihood of other types of crime. Some public order crimes are considered victimless crimes in which no specific victim an be identified. Most nations in the Western world have moved toward decriminalization of victimless crimes in the modern era.",
"title": "Types"
},
{
"paragraph_id": 32,
"text": "Adultery, fornication, blasphemy, apostasy, and invoking the name of God are commonly recognized as crimes in theocratic societies or those heavily influenced by religion.",
"title": "Types"
},
{
"paragraph_id": 33,
"text": "Political crime is crime that directly challenges or threatens the state. Examples of political crimes include subversion, rebellion, treason, mutiny, espionage, sedition, terrorism, riot, and unlawful assembly. Political crimes are associated with the political agenda of a given state, and they are necessarily applied against political dissidents. Due to their unique relation to the state, political crimes are often encouraged by one nation against another, and it is political alignment rather than the act itself that determines criminality. State crime that is carried out by the state to repress law-abiding citizens may also be considered political crime.",
"title": "Types"
},
{
"paragraph_id": 34,
"text": "Inchoate crime is crime that is carried out in anticipation of other illegal actions but does not cause direct harm. Examples of inchoate crimes include attempt and conspiracy. Inchoate crimes are defined by substantial action to facilitate a crime with the intention of the crime's occurrence. This is distinct from simple preparation for or consideration of criminal activity. They are unique in that renunciation of criminal intention is generally enough to absolve the perpetrator of criminal liability, as their actions are no longer facilitating a potential future crime.",
"title": "Types"
},
{
"paragraph_id": 35,
"text": "A criminal is an individual who commits a crime. What constitutes a criminal can vary depending on the context and the law, and it often carries a pejorative connotation. Criminals are often seen as embodying certain stereotypes or traits and are seen as a distinct type of person from law-abiding citizens. Despite this, no mental or physical trend is identifiable that differentiates criminals from non-criminals. Public response to criminals may be indignant or sympathetic. Indignant responses involve resentment and a desire for vengeance, wishing to see criminals removed from society or made to suffer for harm that they cause. Sympathetic responses involve compassion and understanding, seeking to rehabilitate or forgive criminals and absolve them of blame.",
"title": "Participants"
},
{
"paragraph_id": 36,
"text": "A victim is an individual who has been treated unjustly or made to suffer. In the context of crime, the victim is the individual that is harmed by a violation of criminal law. Victimization is associated with post-traumatic stress and a long-term decrease in quality of life. Victimology is the study of victims, including their role in crime and how they are affected.",
"title": "Participants"
},
{
"paragraph_id": 37,
"text": "Several factors affect an individual's likelihood of becoming a victim. Some factors may cause victims of crime to experience short-term or long-term \"repeat victimization\". Common long-term victims are those that have close relationships with the criminal, manifesting in crimes such as domestic violence, embezzlement, child abuse, and bullying. Repeat victimization may also occur when a potential victim appears to be a viable target, such as when indicating wealth in a less affluent region. Many of the traits that indicate criminality also indicate victimality; victims of crime are more likely to engage in unlawful behavior and respond to provocation. Overall demographic trends of victims and criminals are often similar, and victims are more likely to have engaged in criminal activities themselves.",
"title": "Participants"
},
{
"paragraph_id": 38,
"text": "The victims may only want compensation for the injuries suffered, while remaining indifferent to a possible desire for deterrence. Victims, on their own, may lack the economies of scale that could allow them to administer a penal system, let alone to collect any fines levied by a court. Historically, from ancient times until the 19th century, many societies believed that non-human animals were capable of committing crimes, and prosecuted and punished them accordingly. Prosecutions of animals gradually dwindled during the 19th century, although a few were recorded as late as the 1910s and 1920s.",
"title": "Participants"
},
{
"paragraph_id": 39,
"text": "Virtually all countries in the 21st century have criminal law grounded in civil law, common law, Islamic law, or socialist law. Historically, criminal codes have often divided criminals by class or caste, prescribing different penalties depending on status. In some tribal societies, an entire clan is recognized as liable for a crime. In many cases, disputes over a crime in this system lead to a feud that lasts over several generations.",
"title": "Criminal law"
},
{
"paragraph_id": 40,
"text": "The state determines what actions are considered criminal in the scope of the law. Criminalization has significant human rights considerations, as it can infringe on rights of autonomy and subject individuals to unjust punishment.",
"title": "Criminal law"
},
{
"paragraph_id": 41,
"text": "The enforcement of criminal law seeks to prevent crime and sanction crimes that do occur. This enforcement is carried out by the state through law enforcement agencies, such as police, which are empowered to arrest suspected perpetrators of crimes. Law enforcement may focus on policing individual crimes, or it may focus on bringing down overall crime rates. One common variant, community policing, seeks to prevent crime by integrating police into the community and public life.",
"title": "Criminal law"
},
{
"paragraph_id": 42,
"text": "When the perpetrator of a crime is found guilty of the crime, the state delivers a sentence to determine the penalty for the crime.",
"title": "Criminal law"
},
{
"paragraph_id": 43,
"text": "Authorities may respond to crime through corrections, carrying out punishment as a means to censure the criminal act. Punishment is generally reserved for serious offenses. Individuals regularly engage in activity that could be scrutinized under criminal law but are deemed inconsequential. Retributive justice seeks to create a system of accountability and punish criminals in a way that knowingly causes suffering. This may arise out of a feeling that criminals deserve to suffer and that punishment should exist for its own sake. The existence of punishment also creates an effect of deterrence that discourages criminal action for fear of punishment.",
"title": "Criminal law"
},
{
"paragraph_id": 44,
"text": "Rehabilitation seeks to understand and mitigate the causes of a criminal's unlawful action to prevent recidivism. Different criminological theories propose different methods of rehabilitation, including strengthening social networks, reducing poverty, influencing values, and providing therapy for physical and mental ailments. Rehabilitative programs may include counseling or vocational education.",
"title": "Criminal law"
},
{
"paragraph_id": 45,
"text": "Developed nations are less likely to use physical punishments. Instead, they will impose financial penalties or imprisonment. In places with widespread corruption or limited rule of law, crime may be punished extralegally through mob rule and lynching.",
"title": "Criminal law"
},
{
"paragraph_id": 46,
"text": "Whether a crime can be resolved through financial compensation varies depending on the culture and the specific context of the crime. Historically, many societies have absolved acts of homicide through compensation to the victim's relatives.",
"title": "Criminal law"
},
{
"paragraph_id": 47,
"text": "If a crime is committed, the individual responsible is considered to be liable for the crime. For liability to exist, the individual must be capable of understanding the criminal process and the relevant authority must have legitimate power to establish what constitutes a crime.",
"title": "Criminal law"
},
{
"paragraph_id": 48,
"text": "International criminal law typically addresses serious offenses, such as genocide, crimes against humanity, and war crimes. As with all international law, these laws are created through treaties and international custom, and they are defined through the consensus of the involved states. International crimes are not prosecuted through a standard legal system, though international organizations may establish tribunals to investigate and rule on egregious offenses such as genocide.",
"title": "Criminal law"
},
{
"paragraph_id": 49,
"text": "Basic analysis of criminal behavior is determined by a cost–benefit analysis. A person that commits a criminal act typically believes that its benefits will outweigh the risk of being caught and punished. Negative economic factors (such as unemployment and income inequality) significantly increase the incentive to commit crime, while severe punishments decrease the incentive in some cases.",
"title": "Causes and correlates"
},
{
"paragraph_id": 50,
"text": "Social factors similarly affect the likelihood of criminal activity. Crime corresponds heavily with social integration; groups that are less integrated with society or that are forcibly integrated with society are more likely to engage in crime. Involvement in the community, such as through a church, decreases the likelihood of crime, while associating with criminals increases the likelihood of becoming a criminal as well.",
"title": "Causes and correlates"
},
{
"paragraph_id": 51,
"text": "There is no known genetic cause of crime. Some genes have been found to affect traits that may incline individuals toward criminal activity, but no biological or physiological trait has been found to directly cause or compel criminal actions. One biological factor is the disparity between men and women, as men are significantly more likely to commit crimes than women in virtually all cultures. Crimes committed by men also tend to be more severe than those committed by women.",
"title": "Causes and correlates"
},
{
"paragraph_id": 52,
"text": "Crime is often a high priority political issue in developed countries, regardless of the country's crime rates. People that are not regularly exposed to crime most often experience it through media, including news reporting and crime fiction. Exposure of crime through news stories is associated with alarmism and inaccurate perceptions of crime trends. Selection bias in new stories about criminals significantly over-represent the prevalence of violent crime, and news reporting will often overemphasize a specific type of crime for a period of time, creating a \"crime wave\" effect.",
"title": "Public perception"
},
{
"paragraph_id": 53,
"text": "As public opinion of morality changes over time, actions that were once condemned as crimes may be considered justifiable.",
"title": "Public perception"
}
] | In ordinary language, a crime is an unlawful act punishable by a state or other authority. The term crime does not, in modern criminal law, have any simple and universally accepted definition, though statutory definitions have been provided for certain purposes. The most popular view is that crime is a category created by law; in other words, something is a crime if declared as such by the relevant and applicable law. One proposed definition is that a crime or offence is an act harmful not only to some individual but also to a community, society, or the state. Such acts are forbidden and punishable by law. The notion that acts such as murder, rape, and theft are to be prohibited exists worldwide. What precisely is a criminal offence is defined by the criminal law of each relevant jurisdiction. While many have a catalogue of crimes called the criminal code, in some common law nations no such comprehensive statute exists. The state (government) has the power to severely restrict one's liberty for committing a crime. In modern societies, there are procedures to which investigations and trials must adhere. If found guilty, an offender may be sentenced to a form of reparation such as a community sentence, or, depending on the nature of their offence, to undergo imprisonment, life imprisonment or, in some jurisdictions, death. Usually, to be classified as a crime, the "act of doing something criminal" must – with certain exceptions – be accompanied by the "intention to do something criminal". While every crime violates the law, not every violation of the law counts as a crime. Breaches of private law are not automatically punished by the state, but can be enforced through civil procedure. | 2001-09-20T17:18:12Z | 2023-12-11T00:16:39Z | [
"Template:Cn",
"Template:Authority control",
"Template:Pp-semi",
"Template:Snd",
"Template:Portal",
"Template:Cite book",
"Template:Cite journal",
"Template:Harvc",
"Template:Subject bar",
"Template:Criminology and penology",
"Template:Sfn",
"Template:C.",
"Template:ISBN",
"Template:Cite news",
"Template:Navboxes",
"Template:Redirect",
"Template:Main",
"Template:Criminal law",
"Template:Reflist",
"Template:Webarchive",
"Template:Cite web",
"Template:Cite report",
"Template:Curlie",
"Template:Short description",
"Template:Pp-move-indef"
] | https://en.wikipedia.org/wiki/Crime |
5,786 | California Institute of Technology | The California Institute of Technology (branded as Caltech) is a private research university in Pasadena, California. The university is responsible for many modern scientific advancements and is among a small group of institutes of technology in the United States which are strongly devoted to the instruction of pure and applied sciences. Due to its history of technological innovation, Caltech has been considered to be one of the world's most prestigious universities.
The institution was founded as a preparatory and vocational school by Amos G. Throop in 1891 and began attracting influential scientists such as George Ellery Hale, Arthur Amos Noyes, and Robert Andrews Millikan in the early 20th century. The vocational and preparatory schools were disbanded and spun off in 1910 and the college assumed its present name in 1920. In 1934, Caltech was elected to the Association of American Universities, and the antecedents of NASA's Jet Propulsion Laboratory, which Caltech continues to manage and operate, were established between 1936 and 1943 under Theodore von Kármán.
Caltech has six academic divisions with strong emphasis on science and engineering, managing $332 million in sponsored research in 2011. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. First-year students are required to live on campus, and 95% of undergraduates remain in the on-campus House System at Caltech. Although Caltech has a strong tradition of practical jokes and pranks, student life is governed by an honor code which allows faculty to assign take-home examinations. The Caltech Beavers compete in 13 intercollegiate sports in the NCAA Division III's Southern California Intercollegiate Athletic Conference (SCIAC).
Scientists and engineers at or from the university have played an essential role in many modern scientific breakthroughs and innovations, including advances in sustainability science, quantum physics, earthquake monitoring, protein engineering, and soft robotics. As of October 2022, there are 79 Nobel laureates who have been affiliated with Caltech, making it the institution with the highest number of Nobelists per capita in America. This includes 46 alumni and faculty members (47 prizes, with chemist Linus Pauling being the only individual in history to win two unshared prizes). In addition, four Fields Medalists and six Turing Award winners have been affiliated with Caltech.
Caltech started as a vocational school founded in present-day Old Pasadena on Fair Oaks Avenue and Chestnut Street on September 23, 1891, by local businessman and politician Amos G. Throop. The school was known successively as Throop University, Throop Polytechnic Institute (and Manual Training School) and Throop College of Technology before acquiring its current name in 1920. The vocational school was disbanded and the preparatory program was split off to form the independent Polytechnic School in 1907.
At a time when scientific research in the United States was still in its infancy, George Ellery Hale, a solar astronomer from the University of Chicago, founded the Mount Wilson Observatory in 1904. He joined Throop's board of trustees in 1907, and soon began developing it and the whole of Pasadena into a major scientific and cultural destination. He engineered the appointment of James A. B. Scherer, a literary scholar untutored in science but a capable administrator and fund-raiser, to Throop's presidency in 1908. Scherer persuaded retired businessman and trustee Charles W. Gates to donate $25,000 in seed money to build Gates Laboratory, the first science building on campus.
In 1910, Throop moved to its current site. Arthur Fleming donated the land for the permanent campus site. Theodore Roosevelt delivered an address at Throop Institute on March 21, 1911, and he declared:
I want to see institutions like Throop turn out perhaps ninety-nine of every hundred students as men who are to do given pieces of industrial work better than any one else can do them; I want to see those men do the kind of work that is now being done on the Panama Canal and on the great irrigation projects in the interior of this country—and the one-hundredth man I want to see with the kind of cultural scientific training that will make him and his fellows the matrix out of which you can occasionally develop a man like your great astronomer, George Ellery Hale.
In the same year, a bill was introduced in the California Legislature calling for the establishment of a publicly funded "California Institute of Technology", with an initial budget of a million dollars, ten times the budget of Throop at the time. The board of trustees offered to turn Throop over to the state, but the presidents of Stanford University and the University of California successfully lobbied to defeat the bill, which allowed Throop to develop as the only scientific research-oriented education institute in southern California, public or private, until the onset of World War II necessitated the broader development of research-based science education. The promise of Throop attracted physical chemist Arthur Amos Noyes from MIT to develop the institution and assist in establishing it as a center for science and technology.
With the onset of World War I, Hale organized the National Research Council to coordinate and support scientific work on military problems. While he supported the idea of federal appropriations for science, he took exception to a federal bill that would have funded engineering research at land-grant colleges, and instead sought to raise a $1 million national research fund entirely from private sources. To that end, as Hale wrote in The New York Times:
Throop College of Technology, in Pasadena California has recently afforded a striking illustration of one way in which the Research Council can secure co-operation and advance scientific investigation. This institution, with its able investigators and excellent research laboratories, could be of great service in any broad scheme of cooperation. President Scherer, hearing of the formation of the council, immediately offered to take part in its work, and with this object, he secured within three days an additional research endowment of one hundred thousand dollars.
Through the National Research Council, Hale simultaneously lobbied for science to play a larger role in national affairs, and for Throop to play a national role in science. The new funds were designated for physics research, and ultimately led to the establishment of the Norman Bridge Laboratory, which attracted experimental physicist Robert Andrews Millikan from the University of Chicago in 1917. During the course of the war, Hale, Noyes and Millikan worked together in Washington on the NRC. Subsequently, they continued their partnership in developing Caltech.
Under the leadership of Hale, Noyes, and Millikan (aided by the booming economy of Southern California), Caltech grew to national prominence in the 1920s and concentrated on the development of Roosevelt's "Hundredth Man". On November 29, 1921, the trustees declared it to be the express policy of the institute to pursue scientific research of the greatest importance and at the same time "to continue to conduct thorough courses in engineering and pure science, basing the work of these courses on exceptionally strong instruction in the fundamental sciences of mathematics, physics, and chemistry; broadening and enriching the curriculum by a liberal amount of instruction in such subjects as English, history, and economics; and vitalizing all the work of the Institute by the infusion in generous measure of the spirit of research". In 1923, Millikan was awarded the Nobel Prize in Physics. In 1925, the school established a department of geology and hired William Bennett Munro, then chairman of the division of History, Government, and Economics at Harvard University, to create a division of humanities and social sciences at Caltech. In 1928, a division of biology was established under the leadership of Thomas Hunt Morgan, the most distinguished biologist in the United States at the time, and discoverer of the role of genes and the chromosome in heredity. In 1930, Kerckhoff Marine Laboratory was established in Corona del Mar under the care of Professor George MacGinitie. In 1926, a graduate school of aeronautics was created, which eventually attracted Theodore von Kármán. Kármán later helped create the Jet Propulsion Laboratory, and played an integral part in establishing Caltech as one of the world's centers for rocket science. In 1928, construction of the Palomar Observatory began.
Millikan served as "Chairman of the Executive Council" (effectively Caltech's president) from 1921 to 1945, and his influence was such that the institute was occasionally referred to as "Millikan's School". Millikan initiated a visiting-scholars program soon after joining Caltech. Notable scientists who accepted his invitation include Paul Dirac, Erwin Schrödinger, Werner Heisenberg, Hendrik Lorentz and Niels Bohr. Albert Einstein arrived on the Caltech campus for the first time in 1931 to polish up his Theory of General Relativity, and he returned to Caltech subsequently as a visiting professor in 1932 and 1933.
During World War II, Caltech was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program which offered students a path to a Navy commission. The United States Navy also maintained a naval training school for aeronautical engineering, resident inspectors of ordnance and naval material, and a liaison officer to the National Defense Research Committee on campus.
From April to December 1951, Caltech was the host of a federal classified study, Project Vista. The selection of Caltech as host for the project was based on the university's expertise in rocketry and nuclear physics. In response to the war in Korea and pressure from the Soviet Union, the project was Caltech's way of assisting the federal government in its effort to increase national security. The project was created to study new ways of improving the relationship between tactical air support and ground troops. The Army, Air Force, and Navy sponsored the project; however, it was under contract with the Army. The study was named after the Vista del Arroyo Hotel, which housed it. The study operated under a committee with the supervision of President Lee A. DuBridge. William A. Fowler, a professor at Caltech, was selected as research director. More than a fourth of Caltech's faculty and a group of outside scientists staffed the project; the number was higher still counting visiting scientists, military liaisons, and secretarial and security staff. In compensation for its participation, the university received about $750,000.
From the 1950s to 1980s, Caltech was the home of Murray Gell-Mann and Richard Feynman, whose work was central to the establishment of the Standard Model of particle physics. Feynman was also widely known outside the physics community as an exceptional teacher and a colorful, unconventional character.
During Lee A. DuBridge's tenure as Caltech's president (1946–1969), Caltech's faculty doubled and the campus tripled in size. DuBridge, unlike his predecessors, welcomed federal funding of science. New research fields flourished, including chemical biology, planetary science, nuclear astrophysics, and geochemistry. A 200-inch telescope was dedicated on nearby Palomar Mountain in 1948 and remained the world's most powerful optical telescope for over forty years.
Caltech opened its doors to female undergraduates during the presidency of Harold Brown in 1970, and they made up 14% of the entering class. The portion of female undergraduates has been increasing since then.
Protests by Caltech students are rare. The earliest was a 1968 protest outside the NBC Burbank studios, in response to rumors that NBC was to cancel Star Trek. In 1973, the students from Dabney House protested a presidential visit with a sign on the library bearing the simple phrase "Impeach Nixon". The following week, Ross McCollum, president of the National Oil Company, wrote an open letter to Dabney House stating that in light of their actions he had decided not to donate one million dollars to Caltech. The Dabney family, being Republicans, disowned Dabney House after hearing of the protest.
Since 2000, the Einstein Papers Project has been located at Caltech. The project was established in 1986 to assemble, preserve, translate, and publish papers selected from the literary estate of Albert Einstein and from other collections.
In fall 2008, the freshman class was 42% female, a record for Caltech's undergraduate enrollment. In the same year, the Institute concluded a six-year-long fund-raising campaign. The campaign raised more than $1.4 billion from about 16,000 donors. Nearly half of the funds went into the support of Caltech programs and projects.
In 2010, Caltech, in partnership with Lawrence Berkeley National Laboratory and headed by Professor Nathan Lewis, established a DOE Energy Innovation Hub aimed at developing revolutionary methods to generate fuels directly from sunlight. This hub, the Joint Center for Artificial Photosynthesis, was slated to receive up to $122 million in federal funding over five years.
Since 2012, Caltech has offered classes through massive open online courses (MOOCs) under Coursera and, from 2013, edX, as well as bootcamps.
Jean-Lou Chameau, the eighth president, announced on February 19, 2013, that he would be stepping down to accept the presidency at King Abdullah University of Science and Technology. Thomas F. Rosenbaum was announced to be the ninth president of Caltech on October 24, 2013, and his term began on July 1, 2014.
In 2019, Caltech received a gift of $750 million for sustainability research from the Resnick family of The Wonderful Company. The gift is the largest ever for environmental sustainability research and the second-largest private donation to a US academic institution (after Bloomberg's gift of $1.8 billion to Johns Hopkins University in 2018).
On account of President Robert A. Millikan's affiliation with the Human Betterment Foundation, in January 2021, the Caltech Board of Trustees authorized the removal of Millikan's name (and the names of five other historical figures affiliated with the Foundation), from campus buildings.
Caltech's 124-acre (50 ha) primary campus is located in Pasadena, California, approximately 11 miles (18 km) northeast of downtown Los Angeles. It is within walking distance of Old Town Pasadena and the Pasadena Playhouse District, making the two locations frequent getaways for Caltech students.
In 1917 Hale hired architect Bertram Goodhue to produce a master plan for the 22-acre (8.9 ha) campus. Goodhue conceived the overall layout of the campus and designed the physics building, Dabney Hall, and several other structures, in which he sought to be consistent with the local climate, the character of the school, and Hale's educational philosophy. Goodhue's designs for Caltech were also influenced by the traditional Spanish mission architecture of Southern California.
During the 1960s, Caltech underwent considerable expansion, in part due to the philanthropy of alumnus Arnold O. Beckman. In 1953, Beckman was asked to join the Caltech Board of Trustees. In 1964, he became its chairman. Over the next few years, as Caltech's president emeritus David Baltimore describes it, Arnold Beckman and his wife Mabel "shaped the destiny of Caltech".
In 1971 a magnitude-6.6 earthquake in San Fernando caused some damage to the Caltech campus. Engineers who evaluated the damage found that two historic buildings dating from the early days of the Institute—Throop Hall and the Goodhue-designed Culbertson Auditorium—had cracked.
New additions to the campus include the Cahill Center for Astronomy and Astrophysics and the Walter and Leonore Annenberg Center for Information Science and Technology, which opened in 2009, and the Warren and Katherine Schlinger Laboratory for Chemistry and Chemical Engineering, which followed in March 2010. The institute also completed an upgrade of the South Houses in 2006. In late 2010, Caltech completed a 1.3 MW solar array projected to produce approximately 1.6 GWh in 2011.
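As a rough check on these numbers (an inference from the figures quoted above, not a claim made in the source), the projected 2011 output implies a capacity factor of

$$\frac{1.6\ \text{GWh}}{1.3\ \text{MW}\times 8760\ \text{h}}\approx 0.14,$$

that is, the array was expected to deliver about 14% of its nameplate capacity averaged over the year, within the usual range for fixed rooftop photovoltaics.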
Caltech is incorporated as a non-profit corporation and is governed by a privately appointed 46-member board of trustees who serve five-year terms of office and retire at the age of 72. The trustees elect a president to serve as the chief executive officer of the institute and administer the affairs of the institute on behalf of the board, a provost who serves as the chief academic officer of the institute below the president, and ten other vice presidential and senior positions. Thomas F. Rosenbaum became the ninth president of Caltech in 2014. Caltech's endowment is governed by a permanent trustee committee and administered by an investment office.
The institute is organized into six primary academic divisions: Biology and Biological Engineering; Chemistry and Chemical Engineering; Engineering and Applied Science; Geological and Planetary Sciences; Humanities and Social Sciences; and Physics, Mathematics and Astronomy. The voting faculty of Caltech include all professors, instructors, research associates and fellows, and the University Librarian. Faculty are responsible for establishing admission requirements, academic standards, and curricula. The Faculty Board is the faculty's representative body and consists of 18 elected faculty representatives as well as other senior administration officials. Full-time professors are expected to teach classes, conduct research, advise students, and perform administrative work such as serving on committees.
Founded in the 1930s, the Jet Propulsion Laboratory (JPL) is a federally funded research and development center (FFRDC) owned by NASA and operated as a division of Caltech through a contract between NASA and Caltech. In 2008, JPL spent over $1.6 billion on research and development and employed over 5,000 project-related and support employees. The JPL Director also serves as a Caltech Vice President and is responsible to the President of the Institute for the management of the laboratory.
Caltech is a small four-year, highly residential research university with slightly more students in graduate programs than undergraduate. The institute has been accredited by the Western Association of Schools and Colleges since 1949. Caltech is on the quarter system: the fall term starts in late September and ends before Christmas, the second term starts after New Year's Day and ends in mid-March, and the third term starts in late March or early April and ends in early June.
Caltech is consistently ranked within the top ten universities in the world, and within the top four in the United States, by major global ranking systems. In 2021, Caltech ranked 6th globally based on aggregate world university rankings of THE, QS, and ARWU. For 2022, U.S. News & World Report ranked Caltech as tied for 9th in the United States among national universities overall, 11th for most innovative, and 15th for best value. U.S. News & World Report also ranked the graduate programs in chemistry and earth sciences first among national universities.
Caltech was ranked 1st internationally between 2011 and 2016 by the Times Higher Education World University Rankings. Caltech was ranked as the best university in the world in two categories: Engineering & Technology and Physical Sciences. It was also found to have the highest faculty citation rate in the world.
Admission to Caltech is extremely selective. Prior to going test blind, Caltech students had some of the highest test scores in the nation. In 2022, Caltech was ranked by CBS News as the 3rd hardest college in America to gain acceptance to. For the freshmen who enrolled in 2019 (Class of 2023), the middle 50% SAT ranges were 740–780 for evidence-based reading and writing, 790–800 for math, and 1530–1570 total. The middle 50% range ACT Composite score was 35–36. The SAT Math Level 2 middle 50% range was 800–800. The middle 50% range for the SAT Physics Subject Test was 760–800; the SAT Chemistry Subject Test was 760–800; the SAT Biology Subject Test was 760–800. In June 2020, Caltech announced a test-blind policy under which it would neither require nor consider test scores for the next two years; in July 2021, the moratorium was extended by another year and then extended further. The institute is need-blind for domestic applicants.
For the Class of 2026 (enrolled Fall 2022), Caltech received 16,662 applications and accepted 448 applicants for a 2.7% admit rate; 224 enrolled. The class included 48% women and 52% men. For the Class of 2025, 32% were of underrepresented ancestry (which includes students who self-identify as American Indian/Alaska Native, Hispanic/Latino, Black/African American, and/or Native Hawaiian/Pacific Islander), and 6% were foreign students. For the Class of 2027 (enrolled Fall 2023), over 270 of 412 admitted students committed to Caltech, a yield rate of 66–67%.
Undergraduate tuition for the 2021–2022 school year was $56,394 and total annual costs were estimated to be $79,947 excluding the Caltech Student Health Insurance Plan. In 2012–2013, Caltech awarded $17.1 million in need-based aid, $438,000 in non-need-based aid, and $2.51 million in self-help support to enrolled undergraduate students. The average financial aid package of all students eligible for aid was $38,756 and students graduated with an average debt of $15,090.
The full-time, four-year undergraduate program emphasizes instruction in the arts and sciences and has high graduate coexistence. Caltech offers 28 majors (called "options") and 12 minors across all six academic divisions. Caltech also offers interdisciplinary programs in Applied Physics, Biochemistry, Bioengineering, Computation and Neural Systems, Control and Dynamical Systems, Environmental Science and Engineering, Geobiology and Astrobiology, Geochemistry, and Planetary Astronomy. The most popular options are Chemical Engineering, Computer Science, Electrical Engineering, Mechanical Engineering and Physics. The most popular majors of the class of 2023 were Computer Science, Mechanical Engineering, Physics, and Electrical Engineering.
Prior to the entering class of 2013, Caltech required students to take a core curriculum of five terms of mathematics, five terms of physics, two terms of chemistry, one term of biology, two terms of lab courses, one term of scientific communication, three terms of physical education, and 12 terms of humanities and social science. Since 2013, only three terms each of mathematics and physics have been required by the institute, with the remaining two terms each required by certain options.
A typical class is worth 9 academic units and given the extensive core curriculum requirements in addition to individual options' degree requirements, students need to take an average of 40.5 units per term (more than four classes) to graduate in four years. 36 units is the minimum full-time load, 48 units is considered a heavy load, and registrations above 51 units require an overload petition. Approximately 20 percent of students double-major. This is achievable since the humanities and social sciences majors have been designed to be done in conjunction with a science major. Although choosing two options in the same division is discouraged, it is still possible.
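The 40.5-unit figure can be reconstructed from the quarter system described earlier (an arithmetic gloss, not an explicit statement in the source): four years of three terms each make 12 terms, so graduation requires roughly

$$40.5\ \frac{\text{units}}{\text{term}}\times 12\ \text{terms}=486\ \text{units},$$

which at 9 units per class corresponds to 4.5 classes per term, matching the "more than four classes" noted above.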
First-year students are enrolled in first-term classes based upon results of placement exams in math, physics, chemistry, and writing and take all classes in their first two terms on a Pass/Fail basis. There is little competition; collaboration on homework is encouraged and the honor system encourages take-home tests and flexible homework schedules. Caltech offers co-operative programs with other schools, such as the Pasadena Art Center College of Design and Occidental College.
According to a 2018 PayScale study, Caltech graduates earn a median early career salary of $83,400 and $143,100 mid-career, placing them in the top 5 among graduates of US colleges and universities. The average net return on investment over a period of 20 years is $887,000, the tenth-highest among US colleges.
Caltech offers Army and Air Force ROTC in cooperation with the University of Southern California.
The graduate instructional programs emphasize doctoral studies and are dominated by science, technology, engineering, and mathematics fields. The institute offers graduate degree programs for the Master of Science, Engineer's Degree, Doctor of Philosophy, BS/MS and MD/PhD, with the majority of students in the PhD program. The most popular options are Chemistry, Physics, Biology, Electrical Engineering and Chemical Engineering. Applicants for graduate studies are required to take the GRE. GRE Subject scores are either required or strongly recommended by several options. A joint program among Caltech, the Keck School of Medicine of the University of Southern California, and the UCLA David Geffen School of Medicine grants MD/PhD degrees. Students in this program do their preclinical and clinical work at USC or UCLA, and their PhD work with any member of the Caltech faculty, including the Biology, Chemistry, and Engineering and Applied Sciences Divisions. The MD degree would be from USC or UCLA and the PhD would be awarded from Caltech.
The research facilities at Caltech are available to graduate students, but students also have opportunities to work in the facilities of other universities and research centers, as well as in private industry. The graduate student to faculty ratio is 4:1.
Approximately 99 percent of doctoral students have full financial support. Financial support for graduate students comes in the form of fellowships, research assistantships, teaching assistantships or a combination of fellowship and assistantship support.
Graduate students are bound by the honor code, as are the undergraduates, and the Graduate Honor Council oversees any violations of the code.
Caltech is classified among "R1: Doctoral Universities – Very High Research Activity". Caltech was elected to the Association of American Universities in 1934 and remains a research university with "very high" research activity, primarily in STEM fields. Caltech manages research expenditures of $270 million annually, 66th among all universities in the U.S. and 17th among private institutions without medical schools for 2008. The largest federal agencies contributing to research are NASA, National Science Foundation, Department of Health and Human Services, Department of Defense, and Department of Energy. Caltech received $144 million in federal funding for the physical sciences, $40.8 million for the life sciences, $33.5 million for engineering, $14.4 million for environmental sciences, $7.16 million for computer sciences, and $1.97 million for mathematical sciences in 2008.
The institute was awarded an all-time high funding of $357 million in 2009. Active funding from the National Science Foundation Directorate of Mathematical and Physical Science (MPS) for Caltech stands at $343 million as of 2011, the highest for any educational institution in the nation, and higher than the total funds allocated to any state except California and New York.
In 2005, Caltech had 739,000 square feet (68,700 m²) dedicated to research: 330,000 square feet (30,700 m²) to physical sciences, 163,000 square feet (15,100 m²) to engineering, and 160,000 square feet (14,900 m²) to biological sciences.
In addition to managing JPL, Caltech also operates the Palomar Observatory in San Diego County, the Owens Valley Radio Observatory in Bishop, California, the Caltech Submillimeter Observatory and W. M. Keck Observatory at the Mauna Kea Observatories, the Laser Interferometer Gravitational-Wave Observatory at Livingston, Louisiana and Richland, Washington, and Kerckhoff Marine Laboratory in Corona del Mar, California. The Institute launched the Kavli Nanoscience Institute at Caltech in 2006, the Keck Institute for Space Studies in 2008, and is also the current home for the Einstein Papers Project. The Spitzer Science Center (SSC), part of the Infrared Processing and Analysis Center located on the Caltech campus, is the data analysis and community support center for NASA's Spitzer Space Telescope.
Caltech partnered with UCLA to establish a Joint Center for Translational Medicine (UCLA-Caltech JCTM), which conducts experimental research into clinical applications, including the diagnosis and treatment of diseases such as cancer.
Caltech operates several TCCON stations as part of an international collaborative effort of measuring greenhouse gases globally. One station is on campus.
Undergraduates at Caltech are also encouraged to participate in research. About 80% of the class of 2010 did research through the annual Summer Undergraduate Research Fellowships (SURF) program at least once during their stay, and many continued during the school year. Students write and submit SURF proposals for research projects in collaboration with professors, and about 70 percent of applicants are awarded SURFs. The program is open to both Caltech and non-Caltech undergraduate students. It serves as preparation for graduate school and helps to explain why Caltech has the highest percentage of alumni who go on to receive a PhD of all the major universities.
The licensing and transferring of technology to the commercial sector is managed by the Office of Technology Transfer (OTT). OTT protects and manages the intellectual property developed by faculty members, students, other researchers, and JPL technologists. Caltech receives more invention disclosures per faculty member than any other university in the nation. As of 2008, 1,891 patents had been granted to Caltech researchers since 1969.
During the early 20th century, a Caltech committee visited several universities and decided to transform the undergraduate housing system from fraternities to a house system. Four South Houses (or Hovses, as styled in the stone engravings) were built: Blacker House, Dabney House, Fleming House and Ricketts House. In the 1960s, three North Houses were built: Lloyd House, Page House, and Ruddock House, and during the 1990s, Avery House. The four South Houses closed for renovation in 2005 and reopened in 2006. The latest addition to residential life at Caltech is Bechtel Residence, which opened in 2018. It is not affiliated with the house system. All first- and second-year students live on campus in the house system or in the Bechtel Residence.
On account of Albert B. Ruddock's affiliation with the Human Betterment Foundation, in January 2021, the Caltech Board of Trustees authorized the removal of Ruddock's name from campus buildings. Ruddock House was renamed as the Grant D. Venerable House.
Caltech has athletic teams in baseball, men's and women's basketball, cross country, men's and women's soccer, swimming and diving, men's and women's tennis, track and field, women's volleyball, and men's and women's water polo. Caltech's mascot is the Beaver, a homage to nature's engineer. Its teams are members of the NCAA Division III and compete in the Southern California Intercollegiate Athletic Conference (SCIAC), which Caltech co-founded in 1915.
On January 6, 2007, the Beavers' men's basketball team snapped a 207-game losing streak to Division III schools, beating Bard College 81–52. It was their first Division III victory since 1996. Until their win over Occidental College on February 22, 2011, the team had not won a game in SCIAC play since 1985; Ryan Elmquist's free throw with 3.3 seconds left in regulation gave the Beavers the victory. The documentary film Quantum Hoops concerns the events of the Beavers' 2005–06 season.
On January 13, 2007, the Caltech women's basketball team snapped a 50-game losing streak, defeating the Pomona-Pitzer Sagehens 55–53. The women's program, which entered the SCIAC in 2002, garnered its first conference win. On the bench as honorary coach for the evening was Robert Grubbs, 2005 Nobel laureate in Chemistry. The team went on to beat Whittier College on February 10, for its second SCIAC win, and placed its first member on the All Conference team.
In 2007, 2008, and 2009, the women's table tennis team (a club team) competed in nationals. The women's Ultimate club team, known as "Snatch", has also been very successful in recent years, ranking 44th of over 200 college teams in the Ultimate Players Association.
On February 2, 2013, the Caltech baseball team ended a 228-game losing streak with the team's first win in nearly 10 years.
The track and field team's home venue is at the South Athletic Field in Tournament Park, the site of the first Rose Bowl Game.
The school also sponsored an intercollegiate football team from 1973 through 1977, and played part of its home schedule at the Rose Bowl.
The Caltech/Occidental College Orchestra is a full seventy-piece orchestra composed of students, faculty, and staff at Caltech and nearby Occidental College. The orchestra gives three pairs of concerts annually, at both Caltech and Occidental College. There are also two Caltech Jazz Bands and a Concert Band, as well as an active chamber music program. For vocal music, Caltech has a mixed-voice Glee Club and the smaller Chamber Singers. The theater program at Caltech is known as TACIT, or Theater Arts at the California Institute of Technology. There are two to three plays organized by TACIT per year, and they were involved in the production of the PHD Movie, released in 2011.
Every Halloween, Dabney House conducts the infamous "Millikan pumpkin-drop experiment" from the top of Millikan Library, the highest point on campus. According to tradition, a claim was once made that the shattering of a pumpkin frozen in liquid nitrogen and dropped from a sufficient height would produce a triboluminescent spark. This yearly event involves a crowd of observers, who try to spot the elusive spark. The title of the event is an oblique reference to the famous Millikan oil-drop experiment, which measured e, the elementary unit of electric charge.
On Ditch Day, the seniors ditch school, leaving behind elaborately designed tasks and traps at the doors of their rooms to prevent underclassmen from entering. Over the years this has evolved to the point where many seniors spend months designing mechanical, electrical, and software obstacles to confound the underclassmen. Each group of seniors designs a "stack" to be solved by a handful of underclassmen. The faculty have been drawn into the event as well, and cancel all classes on Ditch Day so the underclassmen can participate in what has become a highlight of the academic year.
Another long-standing tradition is the playing of Wagner's "Ride of the Valkyries" at 7:00 each morning during finals week with the largest, loudest speakers available. The playing of that piece is not allowed at any other time (except if one happens to be listening to the entire 14 hours and 5 minutes of The Ring Cycle), and any offender is dragged into the showers to be drenched in cold water fully dressed.
Caltech students have been known for their many pranks (also known as "RFs").
The two most famous in recent history are the changing of the Hollywood Sign to read "Caltech", by judiciously covering up certain parts of the letters, and the changing of the scoreboard to read Caltech 38, MIT 9 during the 1984 Rose Bowl Game. But the most famous of all occurred during the 1961 Rose Bowl Game, where Caltech students altered the flip-cards that were raised by the stadium attendees to display "Caltech", and several other "unintended" messages. This event is now referred to as the Great Rose Bowl Hoax.
In recent years, pranking has been officially encouraged by Tom Mannion, Caltech's Assistant VP for Student Affairs and Campus Life. "The grand old days of pranking have gone away at Caltech, and that's what we are trying to bring back," reported the Boston Globe.
In December 2011, Caltech students went to New York and pulled a prank in Manhattan's Greenwich Village. The prank involved making The Cube sculpture look like the Aperture Science Weighted Companion Cube from the video game Portal.
Caltech pranks have been documented in three Legends of Caltech books, the most recent of which was edited by alumni Autumn Looijen '99 and Mason Porter '98 and published in May 2007.
In 2005, a group of Caltech students pulled a string of pranks during MIT's Campus Preview Weekend for admitted students. These included covering up the word Massachusetts in the "Massachusetts Institute of Technology" engraving on the main building façade with a banner so that it read "That Other Institute of Technology". A group of MIT hackers responded by altering the banner so that the inscription read "The Only Institute of Technology." Caltech students also passed out T-shirts to MIT's incoming freshman class that had MIT written on the front and "... because not everyone can go to Caltech" along with an image of a palm tree on the back.
MIT retaliated in April 2006, when students posing as the Howe & Ser (Howitzer) Moving Company stole the 130-year-old, 1.7-ton Fleming House cannon and moved it over 3,000 miles to their campus in Cambridge, Massachusetts for their 2006 Campus Preview Weekend, repeating a similar prank performed by nearby Harvey Mudd College in 1986. Thirty members of Fleming House traveled to MIT and reclaimed their cannon on April 10, 2006.
On April 13, 2007 (Friday the 13th), a group of students from The California Tech, Caltech's campus newspaper, arrived and distributed fake copies of The Tech, MIT's campus newspaper, while prospective students were visiting for their Campus Preview Weekend. Articles included "MIT Invents the Interweb", "Architects Deem Campus 'Unfortunate'", and "Infinite Corridor Not Actually Infinite".
In December 2009, some Caltech students declared that MIT had been sold and had become the Caltech East campus. A "sold" banner was hung on the front of the MIT dome building and a "Welcome to Caltech East: School of the Humanities" banner over the Massachusetts Avenue entrance. Newspapers and T-shirts were distributed, and door labels and fliers in the Infinite Corridor were put up in accordance with the "curriculum change."
In September 2010, MIT students attempted to put a TARDIS, the time machine from the BBC's Doctor Who, onto a roof. Caught in mid-act, the prank was aborted. In January 2011, Caltech students in conjunction with MIT students helped put the TARDIS on top of Baxter. Caltech students then moved the TARDIS to UC Berkeley and Stanford.
In April 2014, during MIT's Campus Preview Weekend, a group of Caltech students handed out mugs emblazoned with the MIT logo on the front and the words "The Institute of Technology" on the back. When heated, the mugs turn orange, display a palm tree, and read "Caltech The Hotter Institute of Technology." Identical mugs continue to be sold at the Caltech campus store.
Life in the Caltech community is governed by the honor code, which simply states: "No member of the Caltech community shall take unfair advantage of any other member of the Caltech community." This is enforced by a Board of Control, which consists of undergraduate students, and by a similar body at the graduate level, called the Graduate Honor Council.
The honor code aims to promote an atmosphere of respect and trust that allows Caltech students to enjoy privileges that make for a more relaxed campus life. For example, the honor code allows professors to make the majority of exams take-home, allowing students to take them on their own schedule and in their preferred environment.
Through the late 1990s, the only exception to the honor code, implemented earlier in the decade in response to changes in federal regulations, concerned the sexual harassment policy. Today, there are myriad exceptions to the honor code in the form of new Institute policies such as the fire policy and alcohol policy. Although both policies are presented in the Honor System Handbook given to new members of the Caltech community, some undergraduates regard them as a slight against the honor code and the implicit trust and respect it represents within the community. In recent years, the Student Affairs Office has also taken up pursuing investigations independently of the Board of Control and Conduct Review Committee, an implicit violation of both the honor code and written disciplinary policy that has contributed to further erosion of trust between some parts of the undergraduate community and the administration.
As of October 2022, Caltech has 46 Nobel laureates to its name awarded to 30 alumni (26 graduates and 4 postdocs), including 5 Caltech professors who are also alumni (Carl D. Anderson, Linus Pauling, William A. Fowler, Edward B. Lewis, and Kip Thorne), and 16 non-alumni professors (14 at the time of the award, not including David Baltimore and Renato Dulbecco). The total number of Nobel Prizes is 47 because Pauling received prizes in both Chemistry and Peace. Eight faculty and alumni have received a Crafoord Prize from the Royal Swedish Academy of Sciences, while 58 have been awarded the U.S. National Medal of Science, and 11 have received the National Medal of Technology. One alumnus, Stanislav Smirnov, won the Fields Medal in 2010. Other distinguished researchers have been affiliated with Caltech as postdoctoral scholars (for example, Barbara McClintock, James D. Watson, Sheldon Glashow and John Gurdon) or visiting professors (for example, Albert Einstein, Stephen Hawking and Edward Witten).
Caltech enrolled 987 undergraduate students and 1,410 graduate students for the 2021–2022 school year. Women made up 45% of the undergraduate and 33% of the graduate student body. The racial demographics of the school substantially differ from those of the nation as a whole.
The four-year graduation rate is 79% and the six-year rate is 92%, which is low compared to most leading U.S. universities, but substantially higher than it was in the 1960s and 1970s. Students majoring in STEM fields traditionally have graduation rates below 70%.
There are 22,930 total living alumni in the U.S. and around the world. As of October 2022, 30 alumni and 16 non-alumni faculty have won the Nobel Prize. The Turing Award, the "Nobel Prize of Computer Science", has been awarded to six alumni, and one has won the Fields Medal.
Many alumni have participated in scientific research. Some have concentrated their studies on the very small universe of atoms and molecules. Nobel laureate Carl D. Anderson (BS 1927, PhD 1930) proved the existence of positrons and muons, Nobel laureate Edwin McMillan (BS 1928, MS 1929) synthesized the first transuranium element, Nobel laureate Leo James Rainwater (BS 1939) investigated the non-spherical shapes of atomic nuclei, and Nobel laureate Douglas D. Osheroff (BS 1967) studied the superfluid nature of helium-3. Donald Knuth (PhD 1963), the "father" of the analysis of algorithms, wrote The Art of Computer Programming and created the TeX computer typesetting system, which is commonly used in the scientific community. Bruce Reznick (BS 1973) is a mathematician noted for his contributions to number theory and the combinatorial-algebraic-analytic investigations of polynomials. Narendra Karmarkar (MS 1979) is known for Karmarkar's algorithm, a polynomial-time interior-point method for linear programming.
Other alumni have turned their gaze to the universe. C. Gordon Fullerton (BS 1957, MS 1958) piloted the third Space Shuttle mission. Astronaut (and later United States Senator) Harrison Schmitt (BS 1957) was the only geologist to have walked on the surface of the Moon. Astronomer Eugene Merle Shoemaker (BS 1947, MS 1948) co-discovered Comet Shoemaker-Levy 9 (a comet that crashed into Jupiter in 1994) and became the first person buried on the Moon when a spacecraft carrying his ashes was deliberately crashed into the lunar surface. Astronomer George O. Abell (BS 1951, MS 1952, PhD 1957), while a graduate student at Caltech, participated in the National Geographic Society-Palomar Sky Survey, which ultimately resulted in the publication of the Abell Catalogue of Clusters of Galaxies, the definitive work in the field.
Undergraduate alumni founded or co-founded companies such as LCD manufacturer Varitronix, Hotmail, Compaq, MathWorks (which created MATLAB), and database provider Imply, while graduate students founded or co-founded companies such as Intel and TRW, as well as the non-profit educational organization the Exploratorium.
Arnold Beckman (PhD 1928) invented the pH meter and commercialized it with the founding of Beckman Instruments. His success with that company enabled him to provide seed funding for William Shockley (BS 1932), who had co-invented semiconductor transistors and wanted to commercialize them. Shockley became the founding Director of the Shockley Semiconductor Laboratory division of Beckman Instruments. Shockley had previously worked at Bell Labs, whose first president was another alumnus, Frank Jewett (BS 1898). Because his aging mother lived in Palo Alto, California, Shockley established his laboratory near her in Mountain View, California. Shockley was a co-recipient of the Nobel Prize in Physics in 1956, but his aggressive management style and odd personality at the Shockley Lab became unbearable. In late 1957, eight of his researchers resigned and, with support from Sherman Fairchild, formed Fairchild Semiconductor. Among the "traitorous eight" was Gordon E. Moore (PhD 1954), who later left Fairchild to co-found Intel. Other spin-offs of Fairchild Semiconductor include National Semiconductor and Advanced Micro Devices, which in turn spawned more technology companies in the area. Shockley's decision to use silicon instead of germanium as the semiconductor material, coupled with the abundance of silicon semiconductor-related companies in the area, gave rise to the term "Silicon Valley" to describe the region surrounding Palo Alto.
Caltech alumni have also held public office: Mustafa A.G. Abushagur (PhD 1984) served as Deputy Prime Minister of Libya and Prime Minister-Elect of Libya, James Fletcher (PhD 1948) as the 4th and 7th Administrator of NASA, Steven Koonin (PhD 1972) as the Undersecretary of Energy for Science, and Regina Dugan (PhD 1993) as the 19th director of DARPA. The 20th director of DARPA, Arati Prabhakar, is also a Caltech alumna (PhD 1984), as is Charles Elachi (PhD 1971), former director of the Jet Propulsion Laboratory. Arvind Virmani is a former Chief Economic Adviser to the Government of India. In 2013, President Obama announced the nomination of France Cordova (PhD 1979) as director of the National Science Foundation and Ellen Williams (PhD 1982) as director of ARPA-E.
Richard Feynman was among the best-known physicists associated with Caltech, having published the Feynman Lectures on Physics, an undergraduate physics text, and popular science texts such as Six Easy Pieces for the general audience. His popularization of physics made him a public figure of science, although his Nobel-winning work in quantum electrodynamics was already well established in the scientific community. Murray Gell-Mann, a Nobel-winning physicist, introduced a classification of hadrons and went on to postulate the existence of quarks, which are now accepted as part of the Standard Model. Long-time Caltech President Robert Andrews Millikan was the first to accurately measure the charge of the electron with his well-known oil-drop experiment, while Richard Chace Tolman is remembered for his contributions to cosmology and statistical mechanics. 2004 Nobel Prize in Physics winner H. David Politzer is a current professor at Caltech, as are astrophysicist and author Kip Thorne and eminent mathematician Barry Simon. Linus Pauling pioneered quantum chemistry and molecular biology, publishing his landmark work The Nature of the Chemical Bond in 1939. Seismologist Charles Richter, also an alumnus, developed the Richter magnitude scale for measuring the strength of earthquakes. One of the founders of the geochemistry department, Clair Patterson was the first to accurately determine the age of the Earth, via uranium–lead dating of meteorites. In engineering, Theodore von Kármán made many key advances in aerodynamics, notably his work on supersonic and hypersonic airflow characterization; a repeating pattern of swirling vortices, the von Kármán vortex street, is named after him. Participants in von Kármán's GALCIT project included Frank Malina, who helped develop the WAC Corporal, the first U.S. rocket to reach the edge of space; Jack Parsons, a pioneer in the development of liquid and solid rocket fuels who designed the first castable composite-based rocket motor; and Qian Xuesen, who was dubbed the "Father of Chinese Rocketry". More recently, Michael Brown, a professor of planetary astronomy, discovered many trans-Neptunian objects, most notably the dwarf planet Eris, which prompted the International Astronomical Union to redefine the term "planet".
David Baltimore, the Robert A. Millikan Professor of Biology, and Alice Huang, Senior Faculty Associate in Biology, served as the presidents of AAAS from 2007 to 2008 and 2010 to 2011, respectively.
33% of the faculty are members of the National Academy of Sciences or Engineering and/or fellows of the American Academy of Arts and Sciences. This is the highest percentage of any faculty in the country with the exception of the graduate institution Rockefeller University.
The average salary for assistant professors at Caltech is $111,300, associate professors $121,300, and full professors $172,800. Caltech faculty are active in applied physics, astronomy and astrophysics, biology, biochemistry, biological engineering, chemical engineering, computer science, geology, mechanical engineering, and physics.
Over the years Caltech has actively promoted the commercialization of technologies developed within its walls. Through its Office of Technology Transfer & Corporate Partnerships, scientific breakthroughs have led to the transfer of numerous technologies in a wide variety of science-related fields such as photovoltaics, radio-frequency identification (RFID), semiconductors, hyperspectral imaging, electronic devices, protein design, and solid-state amplifiers, among others. Companies such as Quora, Contour Energy Systems, Impinj, Fulcrum Microsystems, Nanosys, Inc., Photon etc., Xencor, and Wavestream Wireless have emerged from Caltech.
Caltech has appeared in many works of popular culture, both as itself and in disguised form. On television, it played a prominent role and was the workplace of all four male lead characters and one female lead character in the sitcom The Big Bang Theory. Caltech is also the inspiration, and frequent film location, for the California Institute of Science in Numb3rs. On film, the Pacific Tech of The War of the Worlds and Real Genius is based on Caltech. In nonfiction, two 2007 documentaries examine aspects of Caltech: Curious, its researchers, and Quantum Hoops, its men's basketball team.
Caltech is also prominently featured in many comics and television series by Marvel Entertainment. In Marvel Comics, the university serves as the alma mater of Hulk, Mister Fantastic, Bill Foster (Black Goliath), and Madman. In the Marvel Cinematic Universe, Bruno Carrelli (Kamala Khan's best friend and love interest) attends Caltech in the miniseries Ms. Marvel.
Given its Los Angeles-area location, the grounds of the Institute are often host to short scenes in movies and television. The Athenaeum dining club appears in the Beverly Hills Cop series, The X-Files, True Romance, and The West Wing. | [
{
"paragraph_id": 0,
"text": "The California Institute of Technology (branded as Caltech) is a private research university in Pasadena, California. The university is responsible for many modern scientific advancements and is among a small group of institutes of technology in the United States which are strongly devoted to the instruction of pure and applied sciences. Due to its history of technological innovation, Caltech has been considered to be one of the world's most prestigious universities.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The institution was founded as a preparatory and vocational school by Amos G. Throop in 1891 and began attracting influential scientists such as George Ellery Hale, Arthur Amos Noyes, and Robert Andrews Millikan in the early 20th century. The vocational and preparatory schools were disbanded and spun off in 1910 and the college assumed its present name in 1920. In 1934, Caltech was elected to the Association of American Universities, and the antecedents of NASA's Jet Propulsion Laboratory, which Caltech continues to manage and operate, were established between 1936 and 1943 under Theodore von Kármán.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Caltech has six academic divisions with strong emphasis on science and engineering, managing $332 million in 2011 in sponsored research. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. First-year students are required to live on campus, and 95% of undergraduates remain in the on-campus House System at Caltech. Although Caltech has a strong tradition of practical jokes and pranks, student life is governed by an honor code which allows faculty to assign take-home examinations. The Caltech Beavers compete in 13 intercollegiate sports in the NCAA Division III's Southern California Intercollegiate Athletic Conference (SCIAC).",
"title": ""
},
{
"paragraph_id": 3,
"text": "Scientists and engineers at or from the university have played an essential role in many modern scientific breakthroughs and innovations, including advances in sustainability science, quantum physics, earthquake monitoring, protein engineering, and soft robotics. As of October 2022, there are 79 Nobel laureates who have been affiliated with Caltech, making it the institution with the highest number of Nobelists per capita in America. This includes 46 alumni and faculty members (47 prizes, with chemist Linus Pauling being the only individual in history to win two unshared prizes). In addition, four Fields Medalists and six Turing Award winners have been affiliated with Caltech.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Caltech started as a vocational school founded in present-day Old Pasadena on Fair Oaks Avenue and Chestnut Street on September 23, 1891, by local businessman and politician Amos G. Throop. The school was known successively as Throop University, Throop Polytechnic Institute (and Manual Training School) and Throop College of Technology before acquiring its current name in 1920. The vocational school was disbanded and the preparatory program was split off to form the independent Polytechnic School in 1907.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "At a time when scientific research in the United States was still in its infancy, George Ellery Hale, a solar astronomer from the University of Chicago, founded the Mount Wilson Observatory in 1904. He joined Throop's board of trustees in 1907, and soon began developing it and the whole of Pasadena into a major scientific and cultural destination. He engineered the appointment of James A. B. Scherer, a literary scholar untutored in science but a capable administrator and fund-raiser, to Throop's presidency in 1908. Scherer persuaded retired businessman and trustee Charles W. Gates to donate $25,000 in seed money to build Gates Laboratory, the first science building on campus.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In 1910, Throop moved to its current site. Arthur Fleming donated the land for the permanent campus site. Theodore Roosevelt delivered an address at Throop Institute on March 21, 1911, and he declared:",
"title": "History"
},
{
"paragraph_id": 7,
"text": "I want to see institutions like Throop turn out perhaps ninety-nine of every hundred students as men who are to do given pieces of industrial work better than any one else can do them; I want to see those men do the kind of work that is now being done on the Panama Canal and on the great irrigation projects in the interior of this country—and the one-hundredth man I want to see with the kind of cultural scientific training that will make him and his fellows the matrix out of which you can occasionally develop a man like your great astronomer, George Ellery Hale.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In the same year, a bill was introduced in the California Legislature calling for the establishment of a publicly funded \"California Institute of Technology\", with an initial budget of a million dollars, ten times the budget of Throop at the time. The board of trustees offered to turn Throop over to the state, but the presidents of Stanford University and the University of California successfully lobbied to defeat the bill, which allowed Throop to develop as the only scientific research-oriented education institute in southern California, public or private, until the onset of the World War II necessitated the broader development of research-based science education. The promise of Throop attracted physical chemist Arthur Amos Noyes from MIT to develop the institution and assist in establishing it as a center for science and technology.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "With the onset of World War I, Hale organized the National Research Council to coordinate and support scientific work on military problems. While he supported the idea of federal appropriations for science, he took exception to a federal bill that would have funded engineering research at land-grant colleges, and instead sought to raise a $1 million national research fund entirely from private sources. To that end, as Hale wrote in The New York Times:",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Throop College of Technology, in Pasadena California has recently afforded a striking illustration of one way in which the Research Council can secure co-operation and advance scientific investigation. This institution, with its able investigators and excellent research laboratories, could be of great service in any broad scheme of cooperation. President Scherer, hearing of the formation of the council, immediately offered to take part in its work, and with this object, he secured within three days an additional research endowment of one hundred thousand dollars.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Through the National Research Council, Hale simultaneously lobbied for science to play a larger role in national affairs, and for Throop to play a national role in science. The new funds were designated for physics research, and ultimately led to the establishment of the Norman Bridge Laboratory, which attracted experimental physicist Robert Andrews Millikan from the University of Chicago in 1917. During the course of the war, Hale, Noyes and Millikan worked together in Washington on the NRC. Subsequently, they continued their partnership in developing Caltech.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Under the leadership of Hale, Noyes, and Millikan (aided by the booming economy of Southern California), Caltech grew to national prominence in the 1920s and concentrated on the development of Roosevelt's \"Hundredth Man\". On November 29, 1921, the trustees declared it to be the express policy of the institute to pursue scientific research of the greatest importance and at the same time \"to continue to conduct thorough courses in engineering and pure science, basing the work of these courses on exceptionally strong instruction in the fundamental sciences of mathematics, physics, and chemistry; broadening and enriching the curriculum by a liberal amount of instruction in such subjects as English, history, and economics; and vitalizing all the work of the Institute by the infusion in generous measure of the spirit of research\". In 1923, Millikan was awarded the Nobel Prize in Physics. In 1925, the school established a department of geology and hired William Bennett Munro, then chairman of the division of History, Government, and Economics at Harvard University, to create a division of humanities and social sciences at Caltech. In 1928, a division of biology was established under the leadership of Thomas Hunt Morgan, the most distinguished biologist in the United States at the time, and discoverer of the role of genes and the chromosome in heredity. In 1930, Kerckhoff Marine Laboratory was established in Corona del Mar under the care of Professor George MacGinitie. In 1926, a graduate school of aeronautics was created, which eventually attracted Theodore von Kármán. Kármán later helped create the Jet Propulsion Laboratory, and played an integral part in establishing Caltech as one of the world's centers for rocket science. In 1928, construction of the Palomar Observatory began.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Millikan served as \"Chairman of the Executive Council\" (effectively Caltech's president) from 1921 to 1945, and his influence was such that the institute was occasionally referred to as \"Millikan's School\". Millikan initiated a visiting-scholars program soon after joining Caltech. Notable scientists who accepted his invitation include Paul Dirac, Erwin Schrödinger, Werner Heisenberg, Hendrik Lorentz and Niels Bohr. Albert Einstein arrived on the Caltech campus for the first time in 1931 to polish up his Theory of General Relativity, and he returned to Caltech subsequently as a visiting professor in 1932 and 1933.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "During World War II, Caltech was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program which offered students a path to a Navy commission. The United States Navy also maintained a naval training school for aeronautical engineering, resident inspectors of ordinance and naval material, and a liaison officer to the National Defense Research Committee on campus.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "From April to December 1951, Caltech was the host of a federal classified study, Project Vista. The selection of Caltech as host for the project was based on the university's expertise in rocketry and nuclear physics. In response to the war in Korea and the pressure from the Soviet Union, the project was Caltech's way of assisting the federal government in its effort to increase national security. The project was created to study new ways of improving the relationship between tactical air support and ground troops. The Army, Air Force, and Navy sponsored the project; however, it was under contract with the Army. The study was named after the hotel, Vista del Arroyo Hotel, which housed the study. The study operated under a committee with the supervision of President Lee A. DuBridge. William A. Fowler, a professor at Caltech, was selected as research director. More than a fourth of Caltech's faculty and a group of outside scientists staffed the project. Moreover, the number increases if one takes into account visiting scientists, military liaisons, secretarial, and security staff. In compensation for its participation, the university received about $750,000.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "From the 1950s to 1980s, Caltech was the home of Murray Gell-Mann and Richard Feynman, whose work was central to the establishment of the Standard Model of particle physics. Feynman was also widely known outside the physics community as an exceptional teacher and a colorful, unconventional character.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "During Lee A. DuBridge's tenure as Caltech's president (1946–1969), Caltech's faculty doubled and the campus tripled in size. DuBridge, unlike his predecessors, welcomed federal funding of science. New research fields flourished, including chemical biology, planetary science, nuclear astrophysics, and geochemistry. A 200-inch telescope was dedicated on nearby Palomar Mountain in 1948 and remained the world's most powerful optical telescope for over forty years.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Caltech opened its doors to female undergraduates during the presidency of Harold Brown in 1970, and they made up 14% of the entering class. The portion of female undergraduates has been increasing since then.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Protests by Caltech students are rare. The earliest was a 1968 protest outside the NBC Burbank studios, in response to rumors that NBC was to cancel Star Trek. In 1973, the students from Dabney House protested a presidential visit with a sign on the library bearing the simple phrase \"Impeach Nixon\". The following week, Ross McCollum, president of the National Oil Company, wrote an open letter to Dabney House stating that in light of their actions he had decided not to donate one million dollars to Caltech. The Dabney family, being Republicans, disowned Dabney House after hearing of the protest.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Since 2000, the Einstein Papers Project has been located at Caltech. The project was established in 1986 to assemble, preserve, translate, and publish papers selected from the literary estate of Albert Einstein and from other collections.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In fall 2008, the freshman class was 42% female, a record for Caltech's undergraduate enrollment. In the same year, the Institute concluded a six-year-long fund-raising campaign. The campaign raised more than $1.4 billion from about 16,000 donors. Nearly half of the funds went into the support of Caltech programs and projects.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "In 2010, Caltech, in partnership with Lawrence Berkeley National Laboratory and headed by Professor Nathan Lewis, established a DOE Energy Innovation Hub aimed at developing revolutionary methods to generate fuels directly from sunlight. This hub, the Joint Center for Artificial Photosynthesis, will receive up to $122 million in federal funding over five years.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Since 2012, Caltech began to offer classes through massive open online courses (MOOCs) under Coursera, from 2013, edX, and bootcamps.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Jean-Lou Chameau, the eighth president, announced on February 19, 2013, that he would be stepping down to accept the presidency at King Abdullah University of Science and Technology. Thomas F. Rosenbaum was announced to be the ninth president of Caltech on October 24, 2013, and his term began on July 1, 2014.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "In 2019, Caltech received a gift of $750 million for sustainability research from the Resnick family of The Wonderful Company. The gift is the largest ever for environmental sustainability research and the second-largest private donation to a US academic institution (after Bloomberg's gift of $1.8 billion to Johns Hopkins University in 2018).",
"title": "History"
},
{
"paragraph_id": 26,
"text": "On account of President Robert A. Millikan's affiliation with the Human Betterment Foundation, in January 2021, the Caltech Board of Trustees authorized the removal of Millikan's name (and the names of five other historical figures affiliated with the Foundation), from campus buildings.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Caltech's 124-acre (50 ha) primary campus is located in Pasadena, California, approximately 11 miles (18 km) northeast of downtown Los Angeles. It is within walking distance of Old Town Pasadena and the Pasadena Playhouse District and therefore the two locations are frequent getaways for Caltech students.",
"title": "Campus"
},
{
"paragraph_id": 28,
"text": "In 1917 Hale hired architect Bertram Goodhue to produce a master plan for the 22 acres (8.9 ha) campus. Goodhue conceived the overall layout of the campus and designed the physics building, Dabney Hall, and several other structures, in which he sought to be consistent with the local climate, the character of the school, and Hale's educational philosophy. Goodhue's designs for Caltech were also influenced by the traditional Spanish mission architecture of Southern California.",
"title": "Campus"
},
{
"paragraph_id": 29,
"text": "During the 1960s, Caltech underwent considerable expansion, in part due to the philanthropy of alumnus Arnold O. Beckman. In 1953, Beckman was asked to join the Caltech Board of Trustees. In 1964, he became its chairman. Over the next few years, as Caltech's president emeritus David Baltimore describes it, Arnold Beckman and his wife Mabel \"shaped the destiny of Caltech\".",
"title": "Campus"
},
{
"paragraph_id": 30,
"text": "In 1971 a magnitude-6.6 earthquake in San Fernando caused some damage to the Caltech campus. Engineers who evaluated the damage found that two historic buildings dating from the early days of the Institute—Throop Hall and the Goodhue-designed Culbertson Auditorium—had cracked.",
"title": "Campus"
},
{
"paragraph_id": 31,
"text": "New additions to the campus include the Cahill Center for Astronomy and Astrophysics and the Walter and Leonore Annenberg Center for Information Science and Technology, which opened in 2009, and the Warren and Katherine Schlinger Laboratory for Chemistry and Chemical Engineering followed in March 2010. The institute also concluded an upgrading of the South Houses in 2006. In late 2010, Caltech completed a 1.3 MW solar array projected to produce approximately 1.6 GWh in 2011.",
"title": "Campus"
},
{
"paragraph_id": 32,
"text": "Caltech is incorporated as a non-profit corporation and is governed by a privately appointed 46-member board of trustees who serve five-year terms of office and retire at the age of 72. The trustees elect a president to serve as the chief executive officer of the institute and administer the affairs on the institute on behalf of the board, a provost who serves as the chief academic officer of the institute below the president, and ten other vice presidential and other senior positions. Thomas F. Rosenbaum became the ninth president of Caltech in 2014. Caltech's endowment is governed by a permanent trustee committee and administered by an investment office.",
"title": "Organization and administration"
},
{
"paragraph_id": 33,
"text": "The institute is organized into six primary academic divisions: Biology and Biological Engineering, Chemistry and Chemical Engineering, Engineering and Applied Science, Geological and Planetary Sciences, Humanities and Social Sciences, Physics, Mathematics, and Astronomy. The voting faculty of Caltech include all professors, instructors, research associates and fellows, and the University Librarian. Faculty are responsible for establishing admission requirements, academic standards, and curricula. The Faculty Board is the faculty's representative body and consists of 18 elected faculty representatives as well as other senior administration officials. Full-time professors are expected to teach classes, conduct research, advise students, and perform administrative work such as serving on committees.",
"title": "Organization and administration"
},
{
"paragraph_id": 34,
"text": "Founded in 1930s, the Jet Propulsion Laboratory (JPL) is a federally funded research and development center (FFRDC) owned by NASA and operated as a division of Caltech through a contract between NASA and Caltech. In 2008, JPL spent over $1.6 billion on research and development and employed over 5,000 project-related and support employees. The JPL Director also serves as a Caltech Vice President and is responsible to the President of the Institute for the management of the laboratory.",
"title": "Organization and administration"
},
{
"paragraph_id": 35,
"text": "Caltech is a small four-year, highly residential research university with slightly more students in graduate programs than undergraduate. The institute has been accredited by the Western Association of Schools and Colleges since 1949. Caltech is on the quarter system: the fall term starts in late September and ends before Christmas, the second term starts after New Year's Day and ends in mid-March, and the third term starts in late March or early April and ends in early June.",
"title": "Academics"
},
{
"paragraph_id": 36,
"text": "Caltech is consistently ranked within the top ten universities in the world, and within the top four in the United States, by major global ranking systems. In 2021, Caltech ranked 6th globally based on aggregate world university rankings of THE, QS, and ARWU. For 2022, U.S. News & World Report ranked Caltech as tied for 9th in the United States among national universities overall, 11th for most innovative, and 15th for best value. U.S. News & World Report also ranked the graduate programs in chemistry and earth sciences first among national universities.",
"title": "Academics"
},
{
"paragraph_id": 37,
"text": "Caltech was ranked 1st internationally between 2011 and 2016 by the Times Higher Education World University Rankings. Caltech was ranked as the best university in the world in two categories: Engineering & Technology and Physical Sciences. It was also found to have the highest faculty citation rate in the world.",
"title": "Academics"
},
{
"paragraph_id": 38,
"text": "Admission to Caltech is extremely rigorous. Prior to going test blind, Caltech students had some of the highest test scores in the nation. In 2022, Caltech was ranked by CBS News as the 3rd hardest college in America to gain acceptance to. For the freshmen who enrolled in 2019 (Class of 2023) the middle 50% range of SAT were 740–780 for evidence-based reading and writing and 790–800 for math, and 1530–1570 total. The middle 50% range ACT Composite score was 35–36. The SAT Math Level 2 middle 50% range was 800–800. The middle 50% range for the SAT Physics Subject Test was 760–800; SAT Chemistry Subject Test was 760–800; SAT Biology Subject Tests was 760–800. In June 2020, Caltech announced a test-blind policy where they would not require nor consider test scores for the next two years; in July 2021, the moratorium was extended by another year and then extended further. The institute is need-blind for domestic applicants.",
"title": "Academics"
},
{
"paragraph_id": 39,
"text": "For the Class of 2026 (enrolled Fall 2022), Caltech received 16662 applications and accepted 448 applicants for a 2.7% admit rate; 224 enrolled. The class included 48% women and 52% men. For the Class of 2025, 32% were of underrepresented ancestry (which includes students who self-identify as American Indian/Alaska Native, Hispanic/Latino, Black/African American, and/or Native Hawaiian/Pacific Islander), and 6% were foreign students. For the Class of 2027 (enrolled Fall 2023), Caltech had over 270 commits of 412 admits, at a yield rate of 66–67%.",
"title": "Academics"
},
{
"paragraph_id": 40,
"text": "Undergraduate tuition for the 2021–2022 school year was $56,394 and total annual costs were estimated to be $79,947 excluding the Caltech Student Health Insurance Plan. In 2012–2013, Caltech awarded $17.1 million in need-based aid, $438k in non-need-based aid, and $2.51 million in self-help support to enrolled undergraduate students. The average financial aid package of all students eligible for aid was $38,756 and students graduated with an average debt of $15,090.",
"title": "Academics"
},
{
"paragraph_id": 41,
"text": "The full-time, four-year undergraduate program emphasizes instruction in the arts and sciences and has high graduate coexistence. Caltech offers 28 majors (called \"options\") and 12 minors across all six academic divisions. Caltech also offers interdisciplinary programs in Applied Physics, Biochemistry, Bioengineering, Computation and Neural Systems, Control and Dynamical Systems, Environmental Science and Engineering, Geobiology and Astrobiology, Geochemistry, and Planetary Astronomy. The most popular options are Chemical Engineering, Computer Science, Electrical Engineering, Mechanical Engineering and Physics. The most popular majors of the class of 2023 were Computer Science, Mechanical Engineering, Physics, and Electrical Engineering.",
"title": "Academics"
},
{
"paragraph_id": 42,
"text": "Prior to the entering class of 2013, Caltech required students to take a core curriculum of five terms of mathematics, five terms of physics, two terms of chemistry, one term of biology, two terms of lab courses, one term of scientific communication, three terms of physical education, and 12 terms of humanities and social science. Since 2013, only three terms each of mathematics and physics have been required by the institute, with the remaining two terms each required by certain options.",
"title": "Academics"
},
{
"paragraph_id": 43,
"text": "A typical class is worth 9 academic units and given the extensive core curriculum requirements in addition to individual options' degree requirements, students need to take an average of 40.5 units per term (more than four classes) to graduate in four years. 36 units is the minimum full-time load, 48 units is considered a heavy load, and registrations above 51 units require an overload petition. Approximately 20 percent of students double-major. This is achievable since the humanities and social sciences majors have been designed to be done in conjunction with a science major. Although choosing two options in the same division is discouraged, it is still possible.",
"title": "Academics"
},
{
"paragraph_id": 44,
"text": "First-year students are enrolled in first-term classes based upon results of placement exams in math, physics, chemistry, and writing and take all classes in their first two terms on a Pass/Fail basis. There is little competition; collaboration on homework is encouraged and the honor system encourages take-home tests and flexible homework schedules. Caltech offers co-operative programs with other schools, such as the Pasadena Art Center College of Design and Occidental College.",
"title": "Academics"
},
{
"paragraph_id": 45,
"text": "According to a 2018 PayScale study, Caltech graduates earn a median early career salary of $83,400 and $143,100 mid-career, placing them in the top 5 among graduates of US colleges and universities. The average net return on investment over a period of 20 years is $887,000, the tenth-highest among US colleges.",
"title": "Academics"
},
{
"paragraph_id": 46,
"text": "Caltech offers Army and Air Force ROTC in cooperation with the University of Southern California.",
"title": "Academics"
},
{
"paragraph_id": 47,
"text": "The graduate instructional programs emphasize doctoral studies and are dominated by science, technology, engineering, and mathematics fields. The institute offers graduate degree programs for the Master of Science, Engineer's Degree, Doctor of Philosophy, BS/MS and MD/PhD, with the majority of students in the PhD program. The most popular options are Chemistry, Physics, Biology, Electrical Engineering and Chemical Engineering. Applicants for graduate studies are required to take the GRE. GRE Subject scores are either required or strongly recommended by several options. A joint program between Caltech and the Keck School of Medicine of the University of Southern California, and the UCLA David Geffen School of Medicine grants MD/PhD degrees. Students in this program do their preclinical and clinical work at USC or UCLA, and their PhD work with any member of the Caltech faculty, including the Biology, Chemistry, and Engineering and Applied Sciences Divisions. The MD degree would be from USC or UCLA and the PhD would be awarded from Caltech.",
"title": "Academics"
},
{
"paragraph_id": 48,
"text": "The research facilities at Caltech are available to graduate students, but there are opportunities for students to work in facilities of other universities, research centers as well as private industries. The graduate student to faculty ratio is 4:1.",
"title": "Academics"
},
{
"paragraph_id": 49,
"text": "Approximately 99 percent of doctoral students have full financial support. Financial support for graduate students comes in the form of fellowships, research assistantships, teaching assistantships or a combination of fellowship and assistantship support.",
"title": "Academics"
},
{
"paragraph_id": 50,
"text": "Graduate students are bound by the honor code, as are the undergraduates, and the Graduate Honor Council oversees any violations of the code.",
"title": "Academics"
},
{
"paragraph_id": 51,
"text": "Caltech is classified among \"R1: Doctoral Universities – Very High Research Activity\". Caltech was elected to the Association of American Universities in 1934 and remains a research university with \"very high\" research activity, primarily in STEM fields. Caltech manages research expenditures of $270 million annually, 66th among all universities in the U.S. and 17th among private institutions without medical schools for 2008. The largest federal agencies contributing to research are NASA, National Science Foundation, Department of Health and Human Services, Department of Defense, and Department of Energy. Caltech received $144 million in federal funding for the physical sciences, $40.8 million for the life sciences, $33.5 million for engineering, $14.4 million for environmental sciences, $7.16 million for computer sciences, and $1.97 million for mathematical sciences in 2008.",
"title": "Research"
},
{
"paragraph_id": 52,
"text": "The institute was awarded an all-time high funding of $357 million in 2009. Active funding from the National Science Foundation Directorate of Mathematical and Physical Science (MPS) for Caltech stands at $343 million as of 2011, the highest for any educational institution in the nation, and higher than the total funds allocated to any state except California and New York.",
"title": "Research"
},
{
"paragraph_id": 53,
"text": "In 2005, Caltech had 739,000 square feet (68,700 m) dedicated to research: 330,000 square feet (30,700 m) to physical sciences, 163,000 square feet (15,100 m) to engineering, and 160,000 square feet (14,900 m) to biological sciences.",
"title": "Research"
},
{
"paragraph_id": 54,
"text": "In addition to managing JPL, Caltech also operates the Palomar Observatory in San Diego County, the Owens Valley Radio Observatory in Bishop, California, the Submillimeter Observatory and W. M. Keck Observatory at the Mauna Kea Observatory, the Laser Interferometer Gravitational-Wave Observatory at Livingston, Louisiana and Richland, Washington, and Kerckhoff Marine Laboratory in Corona del Mar, California. The Institute launched the Kavli Nanoscience Institute at Caltech in 2006, the Keck Institute for Space Studies in 2008, and is also the current home for the Einstein Papers Project. The Spitzer Science Center (SSC), part of the Infrared Processing and Analysis Center located on the Caltech campus, is the data analysis and community support center for NASA's Spitzer Space Telescope.",
"title": "Research"
},
{
"paragraph_id": 55,
"text": "Caltech partnered with UCLA to establish a Joint Center for Translational Medicine (UCLA-Caltech JCTM), which conducts experimental research into clinical applications, including the diagnosis and treatment of diseases such as cancer.",
"title": "Research"
},
{
"paragraph_id": 56,
"text": "Caltech operates several TCCON stations as part of an international collaborative effort of measuring greenhouse gases globally. One station is on campus.",
"title": "Research"
},
{
"paragraph_id": 57,
"text": "Undergraduates at Caltech are also encouraged to participate in research. About 80% of the class of 2010 did research through the annual Summer Undergraduate Research Fellowships (SURF) program at least once during their stay, and many continued during the school year. Students write and submit SURF proposals for research projects in collaboration with professors, and about 70 percent of applicants are awarded SURFs. The program is open to both Caltech and non-Caltech undergraduate students. It serves as preparation for graduate school and helps to explain why Caltech has the highest percentage of alumni who go on to receive a PhD of all the major universities.",
"title": "Research"
},
{
"paragraph_id": 58,
"text": "The licensing and transferring of technology to the commercial sector is managed by the Office of Technology Transfer (OTT). OTT protects and manages the intellectual property developed by faculty members, students, other researchers, and JPL technologists. Caltech receives more invention disclosures per faculty member than any other university in the nation. As of 2008, 1891 patents were granted to Caltech researchers since 1969.",
"title": "Research"
},
{
"paragraph_id": 59,
"text": "During the early 20th century, a Caltech committee visited several universities and decided to transform the undergraduate housing system from fraternities to a house system. Four South Houses (or Hovses, as styled in the stone engravings) were built: Blacker House, Dabney House, Fleming House and Ricketts House. In the 1960s, three North Houses were built: Lloyd House, Page House, and Ruddock House, and during the 1990s, Avery House. The four South Houses closed for renovation in 2005 and reopened in 2006. The latest addition to residential life at Caltech is Bechtel Residence, which opened in 2018. It is not affiliated with the house system. All first- and second-year students live on campus in the house system or in the Bechtel Residence.",
"title": "Student life"
},
{
"paragraph_id": 60,
"text": "On account of Albert B. Ruddock's affiliation with the Human Betterment Foundation, in January 2021, the Caltech Board of Trustees authorized the removal of Ruddock's name from campus buildings. Ruddock House was renamed as the Grant D. Venerable House.",
"title": "Student life"
},
{
"paragraph_id": 61,
"text": "Caltech has athletic teams in baseball, men's and women's basketball, cross country, men's and women's soccer, swimming and diving, men's and women's tennis, track and field, women's volleyball, and men's and women's water polo. Caltech's mascot is the Beaver, a homage to nature's engineer. Its teams are members of the NCAA Division III and compete in the Southern California Intercollegiate Athletic Conference (SCIAC), which Caltech co-founded in 1915.",
"title": "Student life"
},
{
"paragraph_id": 62,
"text": "On January 6, 2007, the Beavers' men's basketball team snapped a 207-game losing streak to Division III schools, beating Bard College 81–52. It was their first Division III victory since 1996. Until their win over Occidental College on February 22, 2011 the team had not won a game in SCIAC play since 1985. Ryan Elmquist's free throw with 3.3 seconds in regulation gave the Beavers the victory. The documentary film Quantum Hoops concerns the events of the Beavers' 2005–06 season.",
"title": "Student life"
},
{
"paragraph_id": 63,
"text": "On January 13, 2007, the Caltech women's basketball team snapped a 50-game losing streak, defeating the Pomona-Pitzer Sagehens 55–53. The women's program, which entered the SCIAC in 2002, garnered their first conference win. On the bench as honorary coach for the evening was Robert Grubbs, 2005 Nobel laureate in Chemistry. The team went on to beat Whittier College on February 10, for its second SCIAC win, and placed its first member on the All Conference team.",
"title": "Student life"
},
{
"paragraph_id": 64,
"text": "In 2007, 2008, and 2009, the women's table tennis team (a club team) competed in nationals. The women's Ultimate club team, known as \"Snatch\", has also been very successful in recent years, ranking 44 of over 200 college teams in the Ultimate Player's Association.",
"title": "Student life"
},
{
"paragraph_id": 65,
"text": "On February 2, 2013, the Caltech baseball team ended a 228-game losing streak, the team's first win in nearly 10 years.",
"title": "Student life"
},
{
"paragraph_id": 66,
"text": "The track and field team's home venue is at the South Athletic Field in Tournament Park, the site of the first Rose Bowl Game.",
"title": "Student life"
},
{
"paragraph_id": 67,
"text": "The school also sponsored an intercollegiate football team from 1973 through 1977, and played part of its home schedule at the Rose Bowl.",
"title": "Student life"
},
{
"paragraph_id": 68,
"text": "The Caltech/Occidental College Orchestra is a full seventy-piece orchestra composed of students, faculty, and staff at Caltech and nearby Occidental College. The orchestra gives three pairs of concerts annually, at both Caltech and Occidental College. There are also two Caltech Jazz Bands and a Concert Band, as well as an active chamber music program. For vocal music, Caltech has a mixed-voice Glee Club and the smaller Chamber Singers. The theater program at Caltech is known as TACIT, or Theater Arts at the California Institute of Technology. There are two to three plays organized by TACIT per year, and they were involved in the production of the PHD Movie, released in 2011.",
"title": "Student life"
},
{
"paragraph_id": 69,
"text": "Every Halloween, Dabney House conducts the infamous \"Millikan pumpkin-drop experiment\" from the top of Millikan Library, the highest point on campus. According to tradition, a claim was once made that the shattering of a pumpkin frozen in liquid nitrogen and dropped from a sufficient height would produce a triboluminescent spark. This yearly event involves a crowd of observers, who try to spot the elusive spark. The title of the event is an oblique reference to the famous Millikan oil-drop experiment which measured e, the elemental unit of electrical charge.",
"title": "Student life"
},
{
"paragraph_id": 70,
"text": "On Ditch Day, the seniors ditch school, leaving behind elaborately designed tasks and traps at the doors of their rooms to prevent underclassmen from entering. Over the years this has evolved to the point where many seniors spend months designing mechanical, electrical, and software obstacles to confound the underclassmen. Each group of seniors designs a \"stack\" to be solved by a handful of underclassmen. The faculty have been drawn into the event as well, and cancel all classes on Ditch Day so the underclassmen can participate in what has become a highlight of the academic year.",
"title": "Student life"
},
{
"paragraph_id": 71,
"text": "Another long-standing tradition is the playing of Wagner's \"Ride of the Valkyries\" at 7:00 each morning during finals week with the largest, loudest speakers available. The playing of that piece is not allowed at any other time (except if one happens to be listening to the entire 14 hours and 5 minutes of The Ring Cycle), and any offender is dragged into the showers to be drenched in cold water fully dressed.",
"title": "Student life"
},
{
"paragraph_id": 72,
"text": "Caltech students have been known for their many pranks (also known as \"RFs\").",
"title": "Student life"
},
{
"paragraph_id": 73,
"text": "The two most famous in recent history are the changing of the Hollywood Sign to read \"Caltech\", by judiciously covering up certain parts of the letters, and the changing of the scoreboard to read Caltech 38, MIT 9 during the 1984 Rose Bowl Game. But the most famous of all occurred during the 1961 Rose Bowl Game, where Caltech students altered the flip-cards that were raised by the stadium attendees to display \"Caltech\", and several other \"unintended\" messages. This event is now referred to as the Great Rose Bowl Hoax.",
"title": "Student life"
},
{
"paragraph_id": 74,
"text": "In recent years, pranking has been officially encouraged by Tom Mannion, Caltech's Assistant VP for Student Affairs and Campus Life. \"The grand old days of pranking have gone away at Caltech, and that's what we are trying to bring back,\" reported the Boston Globe.",
"title": "Student life"
},
{
"paragraph_id": 75,
"text": "In December 2011, Caltech students went to New York and pulled a prank in Manhattan's Greenwich Village. The prank involved making The Cube sculpture look like the Aperture Science Weighted Companion Cube from the video game Portal.",
"title": "Student life"
},
{
"paragraph_id": 76,
"text": "Caltech pranks have been documented in three Legends of Caltech books, the most recent of which was edited by alumni Autumn Looijen '99 and Mason Porter '98 and published in May 2007.",
"title": "Student life"
},
{
"paragraph_id": 77,
"text": "In 2005, a group of Caltech students pulled a string of pranks during MIT's Campus Preview Weekend for admitted students. These include covering up the word Massachusetts in the \"Massachusetts Institute of Technology\" engraving on the main building façade with a banner so that it read \"That Other Institute of Technology\". A group of MIT hackers responded by altering the banner so that the inscription read \"The Only Institute of Technology.\" Caltech students also passed out T-shirts to MIT's incoming freshman class that had MIT written on the front and \"... because not everyone can go to Caltech\" along with an image of a palm tree on the back.",
"title": "Student life"
},
{
"paragraph_id": 78,
"text": "MIT retaliated in April 2006, when students posing as the Howe & Ser (Howitzer) Moving Company stole the 130-year-old, 1.7-ton Fleming House cannon and moved it over 3,000 miles to their campus in Cambridge, Massachusetts for their 2006 Campus Preview Weekend, repeating a similar prank performed by nearby Harvey Mudd College in 1986. Thirty members of Fleming House traveled to MIT and reclaimed their cannon on April 10, 2006.",
"title": "Student life"
},
{
"paragraph_id": 79,
"text": "On April 13, 2007 (Friday the 13th), a group of students from The California Tech, Caltech's campus newspaper, arrived and distributed fake copies of The Tech, MIT's campus newspaper, while prospective students were visiting for their Campus Preview Weekend. Articles included \"MIT Invents the Interweb\", \"Architects Deem Campus 'Unfortunate'\", and \"Infinite Corridor Not Actually Infinite\".",
"title": "Student life"
},
{
"paragraph_id": 80,
"text": "In December 2009, some Caltech students declared that MIT had been sold and had become the Caltech East campus. A \"sold\" banner was hung on front of the MIT dome building and a \"Welcome to Caltech East: School of the Humanities\" banner over the Massachusetts Avenue Entrance. Newspapers and T-shirts were distributed, and door labels and fliers in the infinite corridor were put up in accordance with the \"curriculum change.\"",
"title": "Student life"
},
{
"paragraph_id": 81,
"text": "In September 2010, MIT students attempted to put a TARDIS, the time machine from the BBC's Doctor Who, onto a roof. Caught in mid-act, the prank was aborted. In January 2011, Caltech students in conjunction with MIT students helped put the TARDIS on top of Baxter. Caltech students then moved the TARDIS to UC Berkeley and Stanford.",
"title": "Student life"
},
{
"paragraph_id": 82,
"text": "In April 2014, during MIT's Campus Preview Weekend, a group of Caltech students handed out mugs emblazoned with the MIT logo on the front and the words \"The Institute of Technology\" on the back. When heated, the mugs turn orange, display a palm tree, and read \"Caltech The Hotter Institute of Technology.\" Identical mugs continue to be sold at the Caltech campus store.",
"title": "Student life"
},
{
"paragraph_id": 83,
"text": "Life in the Caltech community is governed by the honor code, which simply states: \"No member of the Caltech community shall take unfair advantage of any other member of the Caltech community.\" This is enforced by a Board of Control, which consists of undergraduate students, and by a similar body at the graduate level, called the Graduate Honor Council.",
"title": "Student life"
},
{
"paragraph_id": 84,
"text": "The honor code aims at promoting an atmosphere of respect and trust that allows Caltech students to enjoy privileges that make for a more relaxed atmosphere. For example, the honor code allows professors to make the majority of exams as take-home, allowing students to take them on their own schedule and in their preferred environment.",
"title": "Student life"
},
{
"paragraph_id": 85,
"text": "Through the late 1990s, the only exception to the honor code, implemented earlier in the decade in response to changes in federal regulations, concerned the sexual harassment policy. Today, there are myriad exceptions to the honor code in the form of new Institute policies such as the fire policy and alcohol policy. Although both policies are presented in the Honor System Handbook given to new members of the Caltech community, some undergraduates regard them as a slight against the honor code and the implicit trust and respect it represents within the community. In recent years, the Student Affairs Office has also taken up pursuing investigations independently of the Board of Control and Conduct Review Committee, an implicit violation of both the honor code and written disciplinary policy that has contributed to further erosion of trust between some parts of the undergraduate community and the administration.",
"title": "Student life"
},
{
"paragraph_id": 86,
"text": "As of October 2022, Caltech has 46 Nobel laureates to its name awarded to 30 alumni (26 graduates and 4 postdocs), including 5 Caltech professors who are also alumni (Carl D. Anderson, Linus Pauling, William A. Fowler, Edward B. Lewis, and Kip Thorne), and 16 non-alumni professors (14 at the time of the award, not including David Baltimore and Renato Dulbecco). The total number of Nobel Prizes is 47 because Pauling received prizes in both Chemistry and Peace. Eight faculty and alumni have received a Crafoord Prize from the Royal Swedish Academy of Sciences, while 58 have been awarded the U.S. National Medal of Science, and 11 have received the National Medal of Technology. One alumnus, Stanislav Smirnov, won the Fields Medal in 2010. Other distinguished researchers have been affiliated with Caltech as postdoctoral scholars (for example, Barbara McClintock, James D. Watson, Sheldon Glashow and John Gurdon) or visiting professors (for example, Albert Einstein, Stephen Hawking and Edward Witten).",
"title": "Notable people"
},
{
"paragraph_id": 87,
"text": "Caltech enrolled 987 undergraduate students and 1,410 graduate students for the 2021–2022 school year. Women made up 45% of the undergraduate and 33% of the graduate student body. The racial demographics of the school substantially differ from those of the nation as a whole.",
"title": "Notable people"
},
{
"paragraph_id": 88,
"text": "The four-year graduation rate is 79% and the six-year rate is 92%, which is low compared to most leading U.S. universities, but substantially higher than it was in the 1960s and 1970s. Students majoring in STEM fields traditionally have graduation rates below 70%.",
"title": "Notable people"
},
{
"paragraph_id": 89,
"text": "There are 22,930 total living alumni in the U.S. and around the world. As of October 2022, 30 alumni and 16 non-alumni faculty have won the Nobel Prize. The Turing Award, the \"Nobel Prize of Computer Science\", has been awarded to six alumni, and one has won the Fields Medal.",
"title": "Notable people"
},
{
"paragraph_id": 90,
"text": "Many alumni have participated in scientific research. Some have concentrated their studies on the very small universe of atoms and molecules. Nobel laureate Carl D. Anderson (BS 1927, PhD 1930) proved the existence of positrons and muons, Nobel laureate Edwin McMillan (BS 1928, MS 1929) synthesized the first transuranium element, Nobel laureate Leo James Rainwater (BS 1939) investigated the non-spherical shapes of atomic nuclei, and Nobel laureate Douglas D. Osheroff (BS 1967) studied the superfluid nature of helium-3. Donald Knuth (PhD 1963), the \"father\" of the analysis of algorithms, wrote The Art of Computer Programming and created the TeX computer typesetting system, which is commonly used in the scientific community. Bruce Reznick (BS 1973) is a mathematician noted for his contributions to number theory and the combinatorial-algebraic-analytic investigations of polynomials. Narendra Karmarkar (MS 1979) is known for the interior point method, a polynomial algorithm for linear programming known as Karmarkar's algorithm.",
"title": "Notable people"
},
{
"paragraph_id": 91,
"text": "Other alumni have turned their gaze to the universe. C. Gordon Fullerton (BS 1957, MS 1958) piloted the third Space Shuttle mission. Astronaut (and later, United States Senator) Harrison Schmitt (BS 1957) was the only geologist to have walked on the surface of the Moon. Astronomer Eugene Merle Shoemaker (BS 1947, MS 1948) co-discovered Comet Shoemaker-Levy 9 (a comet which crashed into the planet Jupiter) and was the first person buried on the Moon (by having his ashes crashed into the Moon). Astronomer George O. Abell (BS 1951, MS 1952, PhD 1957) while a grad student at Caltech participated in the National Geographic Society-Palomar Sky Survey. This ultimately resulted in the publication of the Abell Catalogue of Clusters of Galaxies, the definitive work in the field.",
"title": "Notable people"
},
{
"paragraph_id": 92,
"text": "Undergraduate alumni founded, or co-founded, companies such as LCD manufacturer Varitronix, Hotmail, Compaq, MathWorks (which created Matlab), and database provider Imply, while graduate students founded, or co-founded, companies such as Intel, TRW, and the non-profit educational organization, the Exploratorium.",
"title": "Notable people"
},
{
"paragraph_id": 93,
"text": "Arnold Beckman (PhD 1928) invented the pH meter and commercialized it with the founding of Beckman Instruments. His success with that company enabled him to provide seed funding for William Shockley (BS 1932), who had co-invented semiconductor transistors and wanted to commercialize them. Shockley became the founding Director of the Shockley Semiconductor Laboratory division of Beckman Instruments. Shockley had previously worked at Bell Labs, whose first president was another alumnus, Frank Jewett (BS 1898). Because his aging mother lived in Palo Alto, California, Shockley established his laboratory near her in Mountain View, California. Shockley was a co-recipient of the Nobel Prize in physics in 1956, but his aggressive management style and odd personality at the Shockley Lab became unbearable. In late 1957, eight of his researchers resigned and with support from Sherman Fairchild formed Fairchild Semiconductor. Among the \"traitorous eight\" was Gordon E. Moore (PhD 1954), who later left Fairchild to co-found Intel. Other offspring companies of Fairchild Semiconductor include National Semiconductor and Advanced Micro Devices, which in turn spawned more technology companies in the area. Shockley's decision to use silicon instead of germanium as the semiconductor material, coupled with the abundance of silicon semiconductor related companies in the area, gave rise to the term \"Silicon Valley\" to describe that geographic region surrounding Palo Alto.",
"title": "Notable people"
},
{
"paragraph_id": 94,
"text": "Caltech alumni also held public offices, with Mustafa A.G. Abushagur (PhD 1984) the Deputy Prime Minister of Libya and Prime Minister-Elect of Libya, James Fletcher (PhD 1948) the 4th and 7th Administrator of NASA, Steven Koonin (PhD 1972) the Undersecretary of Energy for Science, and Regina Dugan (PhD 1993) the 19th director of DARPA. The 20th director for DARPA, Arati Prabhakar, is also a Caltech alumna (PhD 1984) as well as Charles Elachi (Phd 1971), former director of the Jet Propulsion Lab. Arvind Virmani is a former Chief Economic Adviser to the Government of India. In 2013, President Obama announced the nomination of France Cordova (PhD 1979) as the director of the National Science Foundation and Ellen Williams (PhD 1982) as the director for ARPA-E.",
"title": "Notable people"
},
{
"paragraph_id": 95,
"text": "Richard Feynman was among the most well-known physicists associated with Caltech, having published the Feynman Lectures on Physics, an undergraduate physics text, and popular science texts such as Six Easy Pieces for the general audience. The promotion of physics made him a public figure of science, although his Nobel-winning work in quantum electrodynamics was already very established in the scientific community. Murray Gell-Mann, a Nobel-winning physicist, introduced a classification of hadrons and went on to postulate the existence of quarks, which is currently accepted as part of the Standard Model. Long-time Caltech President Robert Andrews Millikan was the first to calculate the charge of the electron with his well-known oil-drop experiment, while Richard Chace Tolman is remembered for his contributions to cosmology and statistical mechanics. 2004 Nobel Prize in Physics winner H. David Politzer is a current professor at Caltech, as is astrophysicist and author Kip Thorne and eminent mathematician Barry Simon. Linus Pauling pioneered quantum chemistry and molecular biology, and went on to discover the nature of the chemical bond in 1939. Seismologist Charles Richter, also an alumnus, developed the magnitude scale that bears his name, the Richter magnitude scale for measuring the power of earthquakes. One of the founders of the geochemistry department, Clair Patterson was the first to accurately determine the age of the Earth via lead:uranium ratio in meteorites. In engineering, Theodore von Kármán made many key advances in aerodynamics, notably his work on supersonic and hypersonic airflow characterization. A repeating pattern of swirling vortices is named after him, the von Kármán vortex street. Participants in von Kármán's GALCIT project included Frank Malina, who helped develop the WAC Corporal, which was the first U.S. rocket to reach the edge of space, Jack Parsons, a pioneer in the development of liquid and solid rocket fuels who designed the first castable composite-based rocket motor, and Qian Xuesen, who was dubbed the \"Father of Chinese Rocketry\". More recently, Michael Brown, a professor of planetary astronomy, discovered many trans-Neptunian objects, most notably the dwarf planet Eris, which prompted the International Astronomical Union to redefine the term \"planet\".",
"title": "Notable people"
},
{
"paragraph_id": 96,
"text": "David Baltimore, the Robert A. Millikan Professor of Biology, and Alice Huang, Senior Faculty Associate in Biology, served as the presidents of AAAS from 2007 to 2008 and 2010 to 2011, respectively.",
"title": "Notable people"
},
{
"paragraph_id": 97,
"text": "33% of the faculty are members of the National Academy of Sciences or Engineering and/or fellows of the American Academy of Arts and Sciences. This is the highest percentage of any faculty in the country with the exception of the graduate institution Rockefeller University.",
"title": "Notable people"
},
{
"paragraph_id": 98,
"text": "The average salary for assistant professors at Caltech is $111,300, associate professors $121,300, and full professors $172,800. Caltech faculty are active in applied physics, astronomy and astrophysics, biology, biochemistry, biological engineering, chemical engineering, computer science, geology, mechanical engineering, and physics.",
"title": "Notable people"
},
{
"paragraph_id": 99,
"text": "Over the years Caltech has actively promoted the commercialization of technologies developed within its walls. Through its Office of Technology Transfer & Corporate Partnerships, scientific breakthroughs have led to the transfer of numerous technologies in a wide variety of scientific-related fields such as photovoltaic, radio-frequency identification (RFID), semiconductors, hyperspectral imaging, electronic devices, protein design, solid state amplifiers and many more. Companies such as Quora, Contour Energy Systems, Impinj, Fulcrum Microsystems, Nanosys, Inc., Photon etc., Xencor, and Wavestream Wireless have emerged from Caltech.",
"title": "Caltech startups"
},
{
"paragraph_id": 100,
"text": "Caltech has appeared in many works of popular culture, both as itself and in disguised form. On television, it played a prominent role and was the workplace of all four male lead characters and one female lead character in the sitcom The Big Bang Theory. Caltech is also the inspiration, and frequent film location, for the California Institute of Science in Numb3rs. On film, the Pacific Tech of The War of the Worlds and Real Genius is based on Caltech. In nonfiction, two 2007 documentaries examine aspects of Caltech: Curious, its researchers, and Quantum Hoops, its men's basketball team.",
"title": "In media and popular culture"
},
{
"paragraph_id": 101,
"text": "Caltech is also prominently featured in many comics and television series by Marvel Entertainment. In Marvel Comics, the university serves as the alma mater of Hulk, Mister Fantastic, Bill Foster (Black Goliath), and Madman. In the Marvel Cinematic Universe, Bruno Carrelli (Kamala Khan's best friend and love interest) attends Caltech in the miniseries Ms. Marvel.",
"title": "In media and popular culture"
},
{
"paragraph_id": 102,
"text": "Given its Los Angeles-area location, the grounds of the Institute are often host to short scenes in movies and television. The Athenaeum dining club appears in the Beverly Hills Cop series, The X-Files, True Romance, and The West Wing.",
"title": "In media and popular culture"
}
] | The California Institute of Technology (branded as Caltech) is a private research university in Pasadena, California. The university is responsible for many modern scientific advancements and is among a small group of institutes of technology in the United States which are strongly devoted to the instruction of pure and applied sciences. Due to its history of technological innovation, Caltech has been considered to be one of the world's most prestigious universities. The institution was founded as a preparatory and vocational school by Amos G. Throop in 1891 and began attracting influential scientists such as George Ellery Hale, Arthur Amos Noyes, and Robert Andrews Millikan in the early 20th century. The vocational and preparatory schools were disbanded and spun off in 1910 and the college assumed its present name in 1920. In 1934, Caltech was elected to the Association of American Universities, and the antecedents of NASA's Jet Propulsion Laboratory, which Caltech continues to manage and operate, were established between 1936 and 1943 under Theodore von Kármán. Caltech has six academic divisions with strong emphasis on science and engineering, managing $332 million in 2011 in sponsored research. Its 124-acre (50 ha) primary campus is located approximately 11 mi (18 km) northeast of downtown Los Angeles. First-year students are required to live on campus, and 95% of undergraduates remain in the on-campus House System at Caltech. Although Caltech has a strong tradition of practical jokes and pranks, student life is governed by an honor code which allows faculty to assign take-home examinations. The Caltech Beavers compete in 13 intercollegiate sports in the NCAA Division III's Southern California Intercollegiate Athletic Conference (SCIAC). Scientists and engineers at or from the university have played an essential role in many modern scientific breakthroughs and innovations, including advances in sustainability science, quantum physics, earthquake monitoring, protein engineering, and soft robotics. As of October 2022, there are 79 Nobel laureates who have been affiliated with Caltech, making it the institution with the highest number of Nobelists per capita in America. This includes 46 alumni and faculty members. In addition, four Fields Medalists and six Turing Award winners have been affiliated with Caltech. | 2001-06-28T17:05:21Z | 2023-12-26T02:34:30Z | [
"Template:Official website",
"Template:Main",
"Template:See also",
"Template:Infobox US university ranking",
"Template:Bartable",
"Template:Use mdy dates",
"Template:Col-begin",
"Template:Cite news",
"Template:California Institute of Technology",
"Template:Webarchive",
"Template:Infobox university",
"Template:Wide image",
"Template:'\"",
"Template:Portal",
"Template:Convert",
"Template:Cite episode",
"Template:Wikiquote",
"Template:Short description",
"Template:Cite web",
"Template:Cite magazine",
"Template:Nbsp",
"Template:Reflist",
"Template:Cbignore",
"Template:Commons category",
"Template:Clear",
"Template:Rp",
"Template:Col-end",
"Template:Infobox U.S. college admissions",
"Template:Efn",
"Template:As of",
"Template:Cite book",
"Template:Authority control",
"Template:Navboxes",
"Template:Col-break",
"Template:Main list",
"Template:Notelist",
"Template:Cite journal"
] | https://en.wikipedia.org/wiki/California_Institute_of_Technology |
5,790 | Carlo Goldoni | Carlo Osvaldo Goldoni (/ɡɒlˈdoʊni/, also US: /ɡɔːlˈ-, ɡoʊlˈ-/, Italian: [ˈkarlo oˈzvaldo ɡolˈdoːni]; 25 February 1707 – 6 February 1793) was an Italian playwright and librettist from the Republic of Venice. His works include some of Italy's most famous and best-loved plays. Audiences have admired the plays of Goldoni for their ingenious mix of wit and honesty. His plays offered his contemporaries images of themselves, often dramatizing the lives, values, and conflicts of the emerging middle classes. Though he wrote in French and Italian, his plays make rich use of the Venetian language, regional vernacular, and colloquialisms. Goldoni also wrote under the pen name and title Polisseno Fegeio, Pastor Arcade, which he claimed in his memoirs the "Arcadians of Rome" bestowed on him.
There is an abundance of autobiographical information on Goldoni, most of which comes from the introductions to his plays and from his Memoirs. However, these memoirs are known to contain many errors of fact, especially about his earlier years.
In these memoirs, he paints himself as a born comedian, careless, light-hearted and with a happy temperament, proof against all strokes of fate, yet thoroughly respectable and honorable.
Goldoni was born in Venice in 1707, the son of Margherita Salvioni (or Saioni) and Giulio Goldoni. In his memoirs, Goldoni describes his father as a physician, and claims that he was introduced to theatre by his grandfather Carlo Alessandro Goldoni. In reality, it seems that Giulio was an apothecary; as for the grandfather, he had died four years before Carlo's birth. In any case, Goldoni was deeply interested in theatre from his earliest years, and all attempts to direct his activity into other channels were of no avail; his toys were puppets, and his books, plays.
His father placed him under the care of the philosopher Caldini at Rimini, but the youth soon ran away with a company of strolling players and returned to Venice. In 1723 his father matriculated him into the stern Collegio Ghislieri in Pavia, which imposed the tonsure and monastic habits on its students. However, he relates in his Memoirs that a considerable part of his time was spent in reading Greek and Latin comedies. He had already begun writing at this time and, in his third year, he composed a libellous poem (Il colosso) in which he ridiculed the daughters of certain Pavian families. As a result of that incident (and/or of a visit paid with some schoolmates to a local brothel) he was expelled from the school and had to leave the city (1725). He studied law at Udine and eventually took his degree at the University of Modena. He was employed as a law clerk at Chioggia and Feltre, after which he returned to his native city and began practicing.
Educated as a lawyer, and holding lucrative positions as secretary and counsellor, he seemed, indeed, at one time to have settled down to the practice of law, but following an unexpected summons to Venice, after an absence of several years, he changed his career, and thenceforth he devoted himself to writing plays and managing theatres. His father died in 1731. In 1732, to avoid an unwanted marriage, he left the town for Milan and then for Verona where the theatre manager Giuseppe Imer helped him on his way to becoming a comical poet as well as introducing him to his future wife, Nicoletta Conio. Goldoni returned with her to Venice, where he stayed until 1743.
Goldoni entered the Italian theatre scene with a tragedy, Amalasunta, which he took to Milan. The play was a critical and financial failure.
Submitting it to Count Prata, director of the opera, he was told that his piece "was composed with due regard for the rules of Aristotle and Horace, but not according to those laid down for the Italian drama." "In France", continued the count, "you can try to please the public, but here in Italy it is the actors and actresses whom you must consult, as well as the composer of the music and the stage decorators. Everything must be done according to a certain form which I will explain to you."
Goldoni thanked his critic, went back to his inn and ordered a fire, into which he threw the manuscript of his Amalasunta.
His next play, Belisario, written in 1734, was more successful, though of its success he afterward professed himself ashamed.
During this period he also wrote librettos for opera seria and served for a time as literary director of the San Giovanni Grisostomo, Venice's most distinguished opera house.
He wrote other tragedies for a time, but he was not long in discovering that his bent was for comedy. He had come to realize that the Italian stage needed reforming; adopting Molière as his model, he went to work in earnest and in 1738 produced his first real comedy, L'uomo di mondo ("The Man of the World"). During his many wanderings and adventures in Italy, he was constantly at work and when, at Livorno, he became acquainted with the manager Medebac, he determined to pursue the profession of playwriting in order to make a living. He was employed by Medebac to write plays for his theater in Venice. He worked for other managers and produced during his stay in that city some of his most characteristic works. He also wrote Momolo Cortesan in 1738. By 1743, he had perfected his hybrid style of playwriting (combining the model of Molière with the strengths of Commedia dell'arte and his own wit and sincerity). This style was typified in La Donna di garbo, the first Italian comedy of its kind.
After 1748, Goldoni collaborated with the composer Baldassare Galuppi, making significant contributions to the new form of 'opera buffa'. Galuppi composed the score for more than twenty of Goldoni's librettos. As with his comedies, Goldoni's opera buffa integrate elements of the Commedia dell'arte with recognisable local and middle-class realities. His operatic works include two of the most successful musical comedies of the eighteenth century, Il filosofo di campagna (The Country Philosopher), set by Galuppi (1752) and La buona figliuola (The Good Girl), set by Niccolò Piccinni (1760).
In 1753, following his return from Bologna, he defected to the Teatro San Luca of the Vendramin family, where most of his plays were staged until 1762.
In 1757, he engaged in a bitter dispute with playwright Carlo Gozzi, which left him utterly disgusted with the tastes of his countrymen; so much so that in 1761 he moved to Paris, where he received a position at court and was put in charge of the Théâtre-Italien. He spent the rest of his life in France, composing most of his plays in French and writing his memoirs in that language.
Among the plays he wrote in French, the most successful was Le bourru bienfaisant, dedicated to Marie Adélaïde, a daughter of Louis XV and aunt to the dauphin, the future Louis XVI of France. It premiered on 4 February 1771, almost nine months after the dauphin's marriage to Marie Antoinette. Goldoni enjoyed considerable popularity in France; in 1769, when he retired to Versailles, the King gave him a pension. He lost this pension after the French Revolution. The Convention eventually voted to restore his pension the day after his death. It was restored to his widow, at the pleading of the poet André Chénier; "She is old", he urged, "she is seventy-six, and her husband has left her no heritage save his illustrious name, his virtues and his poverty."
In his Memoirs Goldoni amply discusses the state of Italian comedy when he began writing. At that time, Italian comedy revolved around the conventionality of the Commedia dell'arte, or improvised comedy. Goldoni took to himself the task of superseding the comedy of masks and the comedy of intrigue by representations of actual life and manners through the characters and their behaviors. He maintained that Italian life and manners were susceptible of artistic treatment such as had not been given them before.
His works are a lasting monument to the changes that he initiated: a dramatic revolution that had been attempted but not achieved before. Goldoni's importance lay in providing good examples rather than precepts. Goldoni says that he took for his models the plays of Molière and that whenever a piece of his own succeeded he whispered to himself: "Good, but not yet Molière." Goldoni's plays are gentler and more optimistic in tone than Molière's.
It was this very success that was the object of harsh critiques by Carlo Gozzi, who accused Goldoni of having deprived the Italian theatre of the charms of poetry and imagination. The great success of Gozzi's fairy dramas so irritated Goldoni that it led to his self-exile to France.
Goldoni gave to his country a classical form, which, though it has since been cultivated, has yet to be cultivated by a master.
Goldoni's plays that were written while he was still in Italy ignore religious and ecclesiastical subjects. This may be surprising, considering his staunch Catholic upbringing. No thoughts are expressed about death or repentance in his memoirs or in his comedies. After his move to France, his position became clearer, as his plays took on a clear anti-clerical tone and often satirized the hypocrisy of monks and of the Church.
Goldoni was inspired by his love of humanity and the admiration he had for his fellow men. He wrote about, and was obsessed with, the relationships that humans establish with one another, their cities and homes, the Humanist movement, and the study of philosophy. The moral and civil values that Goldoni promotes in his plays are those of rationality, civility, humanism, the importance of the rising middle class, a progressive stance on state affairs, honor and honesty. Goldoni had a dislike for arrogance, intolerance and the abuse of power.
Goldoni's main characters are neither abstract examples of human virtue nor monstrous examples of human vice; they occupy the middle ground of human temperament. Goldoni maintains an acute sensibility for the differences in social class between his characters, as well as for environmental and generational changes. He pokes fun at both the arrogant nobility and the pauper who lacks dignity.
As in other theatrical works of the time and place, the characters in Goldoni's Italian comedies spoke originally either the literary Tuscan variety (which became modern Italian) or the Venetian dialect, depending on their station in life. However, in some printed editions of his plays he often turned the Venetian texts into Tuscan, too.
One of his best known works is the comic play Servant of Two Masters, which has been translated and adapted internationally numerous times. In 1966 it was adapted into an opera buffa by the American composer Vittorio Giannini. In 2011, Richard Bean adapted the play for the National Theatre of Great Britain as One Man, Two Guvnors. Its popularity led to a transfer to the West End and in 2012 to Broadway.
The film Carlo Goldoni – Venice, Grand Theatre of the World, directed by Alessandro Bettero, was released in 2007 and is available in English, Italian, French, and Japanese.
The following is a small sampling of Goldoni's enormous output. | [
{
"paragraph_id": 0,
"text": "Carlo Osvaldo Goldoni (/ɡɒlˈdoʊni/, also US: /ɡɔːlˈ-, ɡoʊlˈ-/, Italian: [ˈkarlo oˈzvaldo ɡolˈdoːni]; 25 February 1707 – 6 February 1793) was an Italian playwright and librettist from the Republic of Venice. His works include some of Italy's most famous and best-loved plays. Audiences have admired the plays of Goldoni for their ingenious mix of wit and honesty. His plays offered his contemporaries images of themselves, often dramatizing the lives, values, and conflicts of the emerging middle classes. Though he wrote in French and Italian, his plays make rich use of the Venetian language, regional vernacular, and colloquialisms. Goldoni also wrote under the pen name and title Polisseno Fegeio, Pastor Arcade, which he claimed in his memoirs the \"Arcadians of Rome\" bestowed on him.",
"title": ""
},
{
"paragraph_id": 1,
"text": "There is an abundance of autobiographical information on Goldoni, most of which comes from the introductions to his plays and from his Memoirs. However, these memoirs are known to contain many errors of fact, especially about his earlier years.",
"title": "Biography"
},
{
"paragraph_id": 2,
"text": "In these memoirs, he paints himself as a born comedian, careless, light-hearted and with a happy temperament, proof against all strokes of fate, yet thoroughly respectable and honorable.",
"title": "Biography"
},
{
"paragraph_id": 3,
"text": "Goldoni was born in Venice in 1707, the son of Margherita Salvioni (or Saioni) and Giulio Goldoni. In his memoirs, Goldoni describes his father as a physician, and claims that he was introduced to theatre by his grandfather Carlo Alessandro Goldoni. In reality, it seems that Giulio was an apothecary; as for the grandfather, he had died four years before Carlo's birth. In any case, Goldoni was deeply interested in theatre from his earliest years, and all attempts to direct his activity into other channels were of no avail; his toys were puppets, and his books, plays.",
"title": "Biography"
},
{
"paragraph_id": 4,
"text": "His father placed him under the care of the philosopher Caldini at Rimini but the youth soon ran away with a company of strolling players and returned to Venice. In 1723 his father matriculated him into the stern Collegio Ghislieri in Pavia, which imposed the tonsure and monastic habits on its students. However, he relates in his Memoirs that a considerable part of his time was spent in reading Greek and Latin comedies. He had already begun writing at this time and, in his third year, he composed a libellous poem (Il colosso) in which he ridiculed the daughters of certain Pavian families. As a result of that incident (and/or of a visit paid with some schoolmates to a local brothel) he was expelled from the school and had to leave the city (1725). He studied law at Udine, and eventually took his degree at University of Modena. He was employed as a law clerk at Chioggia and Feltre, after which he returned to his native city and began practicing.",
"title": "Biography"
},
{
"paragraph_id": 5,
"text": "Educated as a lawyer, and holding lucrative positions as secretary and counsellor, he seemed, indeed, at one time to have settled down to the practice of law, but following an unexpected summons to Venice, after an absence of several years, he changed his career, and thenceforth he devoted himself to writing plays and managing theatres. His father died in 1731. In 1732, to avoid an unwanted marriage, he left the town for Milan and then for Verona where the theatre manager Giuseppe Imer helped him on his way to becoming a comical poet as well as introducing him to his future wife, Nicoletta Conio. Goldoni returned with her to Venice, where he stayed until 1743.",
"title": "Biography"
},
{
"paragraph_id": 6,
"text": "Goldoni entered the Italian theatre scene with a tragedy, Amalasunta, produced in Milan. The play was a critical and financial failure.",
"title": "Biography"
},
{
"paragraph_id": 7,
"text": "Submitting it to Count Prata, director of the opera, he was told that his piece \"was composed with due regard for the rules of Aristotle and Horace, but not according to those laid down for the Italian drama.\" \"In France\", continued the count, \"you can try to please the public, but here in Italy it is the actors and actresses whom you must consult, as well as the composer of the music and the stage decorators. Everything must be done according to a certain form which I will explain to you.\"",
"title": "Biography"
},
{
"paragraph_id": 8,
"text": "Goldoni thanked his critic, went back to his inn and ordered a fire, into which he threw the manuscript of his Amalasunta.",
"title": "Biography"
},
{
"paragraph_id": 9,
"text": "His next play, Belisario, written in 1734, was more successful, though of its success he afterward professed himself ashamed.",
"title": "Biography"
},
{
"paragraph_id": 10,
"text": "During this period he also wrote librettos for opera seria and served for a time as literary director of the San Giovanni Grisostomo, Venice's most distinguished opera house.",
"title": "Biography"
},
{
"paragraph_id": 11,
"text": "He wrote other tragedies for a time, but he was not long in discovering that his bent was for comedy. He had come to realize that the Italian stage needed reforming; adopting Molière as his model, he went to work in earnest and in 1738 produced his first real comedy, L'uomo di mondo (\"The Man of the World\"). During his many wanderings and adventures in Italy, he was constantly at work and when, at Livorno, he became acquainted with the manager Medebac, he determined to pursue the profession of playwriting in order to make a living. He was employed by Medebac to write plays for his theater in Venice. He worked for other managers and produced during his stay in that city some of his most characteristic works. He also wrote Momolo Cortesan in 1738. By 1743, he had perfected his hybrid style of playwriting (combining the model of Molière with the strengths of Commedia dell'arte and his own wit and sincerity). This style was typified in La Donna di garbo, the first Italian comedy of its kind.",
"title": "Biography"
},
{
"paragraph_id": 12,
"text": "After 1748, Goldoni collaborated with the composer Baldassare Galuppi, making significant contributions to the new form of 'opera buffa'. Galuppi composed the score for more than twenty of Goldoni's librettos. As with his comedies, Goldoni's opera buffa integrate elements of the Commedia dell'arte with recognisable local and middle-class realities. His operatic works include two of the most successful musical comedies of the eighteenth century, Il filosofo di campagna (The Country Philosopher), set by Galuppi (1752) and La buona figliuola (The Good Girl), set by Niccolò Piccinni (1760).",
"title": "Biography"
},
{
"paragraph_id": 13,
"text": "In 1753, following his return from Bologna, he defected to the Teatro San Luca of the Vendramin family, where he performed most of his plays to 1762.",
"title": "Biography"
},
{
"paragraph_id": 14,
"text": "In 1757, he engaged in a bitter dispute with playwright Carlo Gozzi, which left him utterly disgusted with the tastes of his countrymen; so much so that in 1761 he moved to Paris, where he received a position at court and was put in charge of the Théâtre-Italien. He spent the rest of his life in France, composing most of his plays in French and writing his memoirs in that language.",
"title": "Biography"
},
{
"paragraph_id": 15,
"text": "Among the plays which he wrote in French, the most successful was Le bourru bienfaisant, dedicated to the Marie Adélaïde, a daughter of Louis XV and aunt to the dauphin, the future Louis XVI of France. It premiered on 4 February 1771, almost nine months after the dauphin's marriage to Marie Antoinette. Goldoni enjoyed considerable popularity in France; in 1769, when he retired to Versailles, the King gave him a pension. He lost this pension after the French Revolution. The Convention eventually voted to restore his pension the day after his death. It was restored to his widow, at the pleading of the poet André Chénier; \"She is old\", he urged, \"she is seventy-six, and her husband has left her no heritage save his illustrious name, his virtues and his poverty.\"",
"title": "Biography"
},
{
"paragraph_id": 16,
"text": "In his Memoirs Goldoni amply discusses the state of Italian comedy when he began writing. At that time, Italian comedy revolved around the conventionality of the Commedia dell'arte, or improvised comedy. Goldoni took to himself the task of superseding the comedy of masks and the comedy of intrigue by representations of actual life and manners through the characters and their behaviors. He maintained that Italian life and manners were susceptible of artistic treatment such as had not been given them before.",
"title": "Goldoni's impact on Italian theatre"
},
{
"paragraph_id": 17,
"text": "His works are a lasting monument to the changes that he initiated: a dramatic revolution that had been attempted but not achieved before. Goldoni's importance lay in providing good examples rather than precepts. Goldoni says that he took for his models the plays of Molière and that whenever a piece of his own succeeded he whispered to himself: \"Good, but not yet Molière.\" Goldoni's plays are gentler and more optimistic in tone than Molière's.",
"title": "Goldoni's impact on Italian theatre"
},
{
"paragraph_id": 18,
"text": "It was this very success that was the object of harsh critiques by Carlo Gozzi, who accused Goldoni of having deprived the Italian theatre of the charms of poetry and imagination. The great success of Gozzi's fairy dramas so irritated Goldoni that it led to his self-exile to France.",
"title": "Goldoni's impact on Italian theatre"
},
{
"paragraph_id": 19,
"text": "Goldoni gave to his country a classical form, which, though it has since been cultivated, has yet to be cultivated by a master.",
"title": "Goldoni's impact on Italian theatre"
},
{
"paragraph_id": 20,
"text": "Goldoni's plays that were written while he was still in Italy ignore religious and ecclesiastical subjects. This may be surprising, considering his staunch Catholic upbringing. No thoughts are expressed about death or repentance in his memoirs or in his comedies. After his move to France, his position became clearer, as his plays took on a clear anti-clerical tone and often satirized the hypocrisy of monks and of the Church.",
"title": "Themes"
},
{
"paragraph_id": 21,
"text": "Goldoni was inspired by his love of humanity and the admiration he had for his fellow men. He wrote, and was obsessed with, the relationships that humans establish with one another, their cities and homes, the Humanist movement, and the study of philosophy. The moral and civil values that Goldoni promotes in his plays are those of rationality, civility, humanism, the importance of the rising middle-class, a progressive stance to state affairs, honor and honesty. Goldoni had a dislike for arrogance, intolerance and the abuse of power.",
"title": "Themes"
},
{
"paragraph_id": 22,
"text": "Goldoni's main characters are no abstract examples of human virtue, nor monstrous examples of human vice. They occupy the middle ground of human temperament. Goldoni maintains an acute sensibility for the differences in social classes between his characters as well as environmental and generational changes. Goldoni pokes fun at the arrogant nobility and the pauper who lacks dignity.",
"title": "Themes"
},
{
"paragraph_id": 23,
"text": "As in other theatrical works of the time and place, the characters in Goldoni's Italian comedies spoke originally either the literary Tuscan variety (which became modern Italian) or the Venetian dialect, depending on their station in life. However, in some printed editions of his plays he often turned the Venetian texts into Tuscan, too.",
"title": "Venetian and Tuscan"
},
{
"paragraph_id": 24,
"text": "One of his best known works is the comic play Servant of Two Masters, which has been translated and adapted internationally numerous times. In 1966 it was adapted into an opera buffa by the American composer Vittorio Giannini. In 2011, Richard Bean adapted the play for the National Theatre of Great Britain as One Man, Two Guvnors. Its popularity led to a transfer to the West End and in 2012 to Broadway.",
"title": "Goldoni in popular culture"
},
{
"paragraph_id": 25,
"text": "The film Carlo Goldoni – Venice, Grand Theatre of the World, directed by Alessandro Bettero, was released in 2007 and is available in English, Italian, French, and Japanese.",
"title": "Goldoni in popular culture"
},
{
"paragraph_id": 26,
"text": "The following is a small sampling of Goldoni's enormous output.",
"title": "Selected works"
}
] | Carlo Osvaldo Goldoni was an Italian playwright and librettist from the Republic of Venice. His works include some of Italy's most famous and best-loved plays. Audiences have admired the plays of Goldoni for their ingenious mix of wit and honesty. His plays offered his contemporaries images of themselves, often dramatizing the lives, values, and conflicts of the emerging middle classes. Though he wrote in French and Italian, his plays make rich use of the Venetian language, regional vernacular, and colloquialisms. Goldoni also wrote under the pen name and title Polisseno Fegeio, Pastor Arcade, which he claimed in his memoirs the "Arcadians of Rome" bestowed on him. | 2001-06-29T09:11:13Z | 2023-11-16T05:34:00Z | [
"Template:Main",
"Template:Cite American Heritage Dictionary",
"Template:Cite web",
"Template:Pp.",
"Template:Wikiquote",
"Template:Use dmy dates",
"Template:Infobox writer",
"Template:IPAc-en",
"Template:Internet Archive author",
"Template:Carlo Goldoni",
"Template:More footnotes",
"Template:ISBN",
"Template:Wikisource",
"Template:Librivox author",
"Template:In lang",
"Template:Authority control",
"Template:Short description",
"Template:Reflist",
"Template:Gutenberg author",
"Template:Cite book",
"Template:Webarchive",
"Template:Commons category",
"Template:FadedPage",
"Template:Redirect",
"Template:IPA-it",
"Template:Cite Merriam-Webster"
] | https://en.wikipedia.org/wiki/Carlo_Goldoni |