the upper Yangtze against bands of warlord soldiers and outlaws. The warship engaged in continuous patrol operations between Ichang and Chungking throughout 1923, supplying armed guards to merchant ships and protecting Americans at Chungking while that city was under siege by a warlord army. The Royal Navy maintained a series of Insect-class gunboats that patrolled between Chungking and Shanghai; cruisers, destroyers, and Fly-class gunboats also patrolled. The most infamous incident came in 1937, when Panay was dive-bombed by Japanese aircraft during the notorious Nanking massacre. Westerners were forced to leave the areas neighboring the Yangtze with the Japanese takeover in 1941, and the former steamers were either sabotaged or pressed into Japanese or Chinese service. Probably the most curious incident involved HMS Amethyst in 1949, during the Chinese Civil War between Kuomintang and People's Liberation Army forces; it led to the award of the Dickin Medal to the ship's cat, Simon.

Contemporary events
In August 2019, Welsh adventurer Ash Dykes became the first person to complete the 4,000-mile (6,437 km) trek along the course of the river, walking for 352 days from its source to its mouth.

Hydrology

Periodic floods
Tens of millions of people live in the floodplain of the Yangtze valley, an area that naturally floods every summer and is habitable only because it is protected by river dikes. Floods large enough to overflow the dikes have caused great distress to those who live and farm there. Floods of note include those of 1931, 1954, and 1998.

The 1931 Central China floods were a series of floods that occurred in the Republic of China. They are generally considered among the deadliest natural disasters ever recorded, and almost certainly the deadliest of the 20th century when pandemics and famines are discounted. Estimates of the total death toll range from 145,000 to as high as 3.7 to 4 million. The Yangtze again flooded in 1935, causing great loss of life.

From June to September 1954, the Yangtze River floods were a series of catastrophic floodings that occurred mostly in Hubei Province. Owing to an unusually high volume of precipitation and an extraordinarily long rainy season in the middle stretch of the river late in the spring of 1954, the river began to rise above its usual level in late June. Despite efforts to relieve the rising water by opening three important flood gates and diverting the flow, the flood level continued to rise until it hit a historic high of 44.67 m at Jingzhou, Hubei, and 29.73 m at Wuhan. The death toll from this flood was estimated at around 33,000, including those who died of plague in the aftermath of the disaster.

The 1998 Yangtze River floods were a series of major floods that lasted from the middle of June to the beginning of September 1998. The flooding left 3,704 dead and 15 million homeless and caused $26 billion in economic losses; other sources report a total of 4,150 deaths and 180 million people affected. Vast numbers of people were evacuated, and 13.3 million houses were damaged or destroyed. The 2016 China floods caused US$22 billion in damage. In 2020, the Yangtze saw its heaviest rainfall since 1961, with a 79% increase in June and July compared with the average for that period over the previous 41 years.
A new theory suggested that the abrupt reduction in emissions of greenhouse gases and aerosols caused by shutdowns during the Covid-19 pandemic was a key cause of the intense downpours. Over previous decades rainfall had decreased as aerosols in the atmosphere increased, and the lower emissions in 2020 produced the opposite effect: a major increase in rain. Such a sudden reduction of aerosols caused a dramatic change in various components of the climate system, one very different from the changes expected in response to continuous but gradual policy-driven emissions reductions.

Degradation of the river
Beginning in the 1950s, dams and dikes were built for flood control, land reclamation, irrigation, and the control of disease vectors such as the blood flukes that cause schistosomiasis. More than a hundred lakes were thus cut off from the main river. Gates between the lakes and the river could be opened during floods, but farmers and settlements encroached on the land next to the lakes even though settling there was forbidden. When floods came, opening the gates proved impossible because it would have caused substantial destruction, and the lakes partially or completely dried up. Baidang Lake, for example, shrank dramatically between the 1950s and 2005, and Zhangdu Lake dwindled to a quarter of its original size. Natural fisheries output in the two lakes declined sharply. Only a few large lakes, such as Poyang Lake and Dongting Lake, remained connected to the Yangtze. Cutting off the other lakes, which had served as natural buffers for floods, increased the damage done by floods further downstream. Furthermore, the natural movement of migratory fish was obstructed and biodiversity across the whole basin decreased dramatically. Intensive pond farming of fish spread, using a type of carp that thrives in eutrophic water and feeds on algae, causing widespread pollution. The pollution was exacerbated by the discharge of waste from pig farms and of untreated industrial and municipal sewage. In September 2012, the Yangtze near Chongqing turned red from pollution.

The erection of the Three Gorges Dam has created an impassable "iron barrier" that has led to a great reduction in the biodiversity of the river. Yangtze sturgeon use seasonal changes in the flow of the river to signal when it is time to migrate, but these seasonal changes will be greatly reduced by dams and diversions. Other animals facing an immediate threat of extinction are the baiji dolphin, the narrow-ridged finless porpoise, and the Yangtze alligator. Their numbers went into freefall from the combined effects of accidental catches during fishing, river traffic, habitat loss, and pollution. In 2006 the baiji dolphin was declared functionally extinct; the world lost an entire genus. In 2020 the Chinese government passed a sweeping law to protect the ecology of the river. The new rules include stronger ecological protection requirements for hydropower projects along the river, a ban on chemical plants within 1 kilometer of the river, the relocation of polluting industries, severe restrictions on sand mining, and a complete fishing ban on all the natural waterways of the river, including all its major tributaries and lakes.

Contribution to ocean pollution
The Yangtze produces more ocean plastic pollution than any other river, according to The Ocean Cleanup, a Dutch environmental research foundation that focuses on ocean pollution.
Together with nine other rivers, the Yangtze transports 90% of all the plastic that reaches the oceans.

Reconnecting lakes
In 2002 a pilot program was initiated to reconnect lakes to the Yangtze with the objective of increasing biodiversity and alleviating flooding. The first lakes to be reconnected, in 2004, were Zhangdu Lake, Honghu Lake, and Tian'e-Zhou in Hubei, on the middle Yangtze. In 2005 Baidang Lake in Anhui was also reconnected. Reconnecting the lakes improved water quality, and fish were able to migrate from the river into the lakes, replenishing their numbers and genetic stock. The trial also showed that reconnection reduced flooding. The new approach benefitted farmers economically as well: pond farmers switched to natural fish feed, which helped them breed better-quality fish that could be sold for more, increasing their income by 30%. Based on the successful pilot project, other provincial governments emulated the experience and reestablished connections to lakes that had previously been cut off from the river. In 2005 a Yangtze Forum was established, bringing together 13 riparian provincial governments to manage the river from source to sea. In 2006 China's Ministry of Agriculture made it national policy to reconnect the Yangtze with its lakes. As of 2010, provincial governments in five provinces and Shanghai had set up a network of 40 effective protected areas. As a result, populations of 47 threatened species increased, including the critically endangered Yangtze alligator. In the Shanghai area, reestablished wetlands now protect drinking water sources for the city. It is envisaged that the network will eventually be extended throughout the entire Yangtze to cover 102 areas. The mayor of Wuhan announced that six huge, stagnating urban lakes, including East Lake, would be reconnected at a cost of US$2.3 billion, creating China's largest urban wetland landscape.

Major cities along the river
Yushu, Panzhihua, Yibin, Luzhou, Hejiang, Chongqing, Fuling, Fengdu, Wanzhou, Yichang, Yidu, Jingzhou, Shashi, Shishou, Yueyang, Xianning, Wuhan, Ezhou, Huangshi, Huanggang, Chaohu, Chizhou, Jiujiang, Anqing, Tongling, Wuhu, Chuzhou, Ma'anshan, Taizhou, Yangzhou, Zhenjiang, Nanjing, Changzhou, Nantong, Shanghai

Crossings
Until 1957 there were no bridges across the Yangtze between Yibin and Shanghai. For millennia, travelers crossed the river by ferry. The crossing could on occasion be dangerous, as evidenced by the Zhong'anlun disaster (October 15, 1945). The river stood as a major geographic barrier dividing northern and southern China. In the first half of the 20th century, rail passengers from Beijing to Guangzhou and Shanghai had to disembark at Hanyang and Pukou, respectively, and cross the river by steam ferry before resuming their journeys by train from Wuchang or Nanjing West. After the founding of the People's Republic in 1949, Soviet engineers assisted in the design and construction of the Wuhan Yangtze River Bridge, a dual-use road-rail bridge built from 1955 to 1957 and the first bridge across the Yangtze. The second bridge built across the river was a single-track railway bridge upstream at Chongqing, completed in 1959. The Nanjing Yangtze River Bridge, also a road-rail bridge, was the first bridge to cross the lower reaches of the Yangtze; built after the Sino-Soviet split, it received no foreign assistance. Road-rail bridges were then built at Zhicheng (1971) and Chongqing (1980).
Bridge-building slowed in the 1980s before resuming in the 1990s and accelerating in the first decade of the 21st century. The Jiujiang Yangtze River Bridge was built in 1992 as part of the Beijing-Jiujiang Railway, and a second bridge in Wuhan was completed in 1995. By 2005 there were a total of 56 bridges and one tunnel across the Yangtze between Yibin and Shanghai, including some of the longest suspension and cable-stayed bridges in the world on the Yangtze Delta: the Jiangyin Suspension Bridge (1,385 m, opened 1999), the Runyang Bridge (1,490 m, opened 2005), and the Sutong Bridge (1,088 m, opened 2008). The rapid pace of bridge construction has continued; the city of Wuhan alone now has six bridges and one tunnel across the Yangtze. A number of power line crossings have also been built across the river.

Dams
As of 2007, two dams had been built on the Yangtze itself: the Three Gorges Dam and the Gezhouba Dam. The Three Gorges Dam is the largest power station in the world by installed capacity, at 22.5 GW. Several more dams are operating or under construction on the upper portion of the river, the Jinsha River. Among them, the Xiluodu Dam is the third-largest power station in the world, and the Baihetan Dam, planned to be commissioned in 2021, will be the second largest after the Three Gorges Dam.

Tributaries
The Yangtze has over 700 tributaries. The major tributaries, listed from upstream to downstream with the locations where they join the Yangtze, are: the Yalong River (Panzhihua, Sichuan), Min River (Yibin, Sichuan), Tuo River (Luzhou, Sichuan), Chishui River (Hejiang, Sichuan), Jialing River (Chongqing), Wu River (Fuling, Chongqing), Qing River (Yidu, Hubei), Yuan River (via Dongting Lake), Lishui River (via Dongting Lake), Zi River (via Dongting Lake), Xiang River (Yueyang, Hunan), Han River (Wuhan, Hubei), Gan River (near Jiujiang, Jiangxi), Shuiyang River (Dangtu, Anhui), Qingyi River (Wuhu, Anhui), Chao Lake water system (Chaohu, Anhui), and Lake Tai water system (Shanghai). The Huai River flowed into the Yellow Sea until the 20th century, but now primarily discharges into the Yangtze.

Protected areas
Sanjiangyuan ("Three Rivers' Sources") National Nature Reserve in Qinghai and the Three Parallel Rivers of Yunnan.

Wildlife
The Yangtze has high species richness, including many endemics, a high percentage of which are seriously threatened by human activities.

Fish
A total of 416 fish species are known from the Yangtze basin, including 362 that are strictly freshwater species; the remainder are also known from salt or brackish waters, such as the river's estuary or the East China Sea. This makes it one of the most species-rich rivers in Asia and by far the most species-rich in China (in comparison, the Pearl River has almost 300 fish species and the Yellow River 160). 178 fish species are endemic to the Yangtze basin. Many are found only in certain sections of the basin; the upper reach (above Yichang but below the headwaters on the Qinghai-Tibet Plateau) is especially rich, with 279 species, including 147 Yangtze endemics and 97 strict endemics found only in this part of the basin. In contrast, the high-altitude headwaters are home to only 14 highly specialized species, 8 of which are endemic to the river.
The largest orders in the Yangtze are Cypriniformes (280 species, including 150 endemics), Siluriformes (40 species, including 20 endemics), Perciformes (50 species, including 4 endemics), Tetraodontiformes (12 species, including 1 endemic) and Osmeriformes (8 species, including 1 endemic). No
flow entirely within one country. It rises at Jari Hill in the Tanggula Mountains on the Tibetan Plateau and flows in a generally easterly direction to the East China Sea. It is the seventh-largest river in the world by discharge volume. Its drainage basin comprises one-fifth of the land area of China and is home to nearly one-third of the country's population.

The Yangtze has played a major role in the history, culture and economy of China. For thousands of years, the river has been used for water, irrigation, sanitation, transportation, industry, boundary-marking and war. The prosperous Yangtze Delta generates as much as 20% of China's GDP, and the Three Gorges Dam on the Yangtze is the largest hydro-electric power station in the world. In mid-2014, the Chinese government announced it was building a multi-tier transport network of railways, roads and airports to create a new economic belt alongside the river.

The Yangtze flows through a wide array of ecosystems and is habitat to several endemic and threatened species, including the Chinese alligator, the narrow-ridged finless porpoise and the Yangtze sturgeon; it was also home to the now-extinct Yangtze river dolphin (or baiji) and Chinese paddlefish. In recent years the river has suffered from industrial pollution, plastic pollution, agricultural runoff, siltation, and the loss of wetlands and lakes, which exacerbates seasonal flooding. Some sections of the river are now protected as nature reserves. A stretch of the upstream Yangtze flowing through deep gorges in western Yunnan is part of the Three Parallel Rivers of Yunnan Protected Areas, a UNESCO World Heritage Site.

Etymology
Chang Jiang is the modern Chinese name for the Yangtze. However, because the source of the Yangtze was not ascertained until modern times, the Chinese have given different names to the upstream sections of the river up to its confluence with the Min River at Yibin, Sichuan. Jinsha River ("Gold Sands River") refers to the 2,308 km (1,434 mi) of the Yangtze from Yibin upstream to the confluence with the Batang River near Yushu in Qinghai, while the Tongtian River ("River that Leads to Heaven") describes the 813 km (505 mi) section from Yushu up to the confluence of the Tuotuo and Dangqu rivers. Chang Jiang literally means the "Long River."

In Old Chinese, the Yangtze was simply called Jiang/Kiang, a character of phono-semantic compound origin combining the water radical with a homophone pronounced *kˤoŋ in Old Chinese. Krong was probably a word in the Austroasiatic language of local peoples such as the Yue. Similar to *krong in Proto-Vietnamese and krung in Mon, both meaning "river", it is related to modern Vietnamese sông (river) and Khmer krung (city on riverside), whence Thai krung (กรุง, capital city), but not kôngkea (water), which is from the Sanskrit root gáṅgā. By the Han dynasty, the character had come to mean any river in Chinese, and this river was distinguished as the "Great River." The epithet chang, meaning "long", was first formally applied to the river during the Six Dynasties period.

Various sections of the Yangtze have local names. From Yibin to Yichang, the river through Sichuan and Chongqing Municipality is also known as the "Sichuan River." In Hubei, the river is also called the "Jing River" after Jingzhou, one of the Nine Provinces of ancient China. In Anhui, the river takes on a local name derived from the shorthand name for Anhui, 皖.
Yangzi Jiang, or the "Yangzi River", from which the English name Yangtze is derived, is the local name for the Lower Yangtze in the region of Yangzhou. The name likely comes from an ancient ferry crossing. Europeans who arrived in the Yangtze River Delta region applied this local name to the whole river. The dividing point between the upstream and midstream sections is considered to be at Yichang, and that between midstream and downstream at Hukou (Jiujiang).

English
The river was called Quian and Quianshui by Marco Polo and appeared on the earliest English maps as Kian or Kiam, forms deriving from Cantonese and other dialects that preserved the Middle Chinese pronunciation Kæwng. By the mid-19th century these romanizations had standardized as Kiang; Dajiang, for example, was rendered as "Ta-Kiang." "Keeang-Koo," "Kyang Kew," "Kian-ku," and related names derived from mistaking the Chinese term for the mouth of the Yangtze (Jiāngkǒu) for the name of the river itself. The name Blue River began to be applied in the 18th century, apparently owing to a former name of the Dam Chu or Min and by analogy with the Yellow River, but it was frequently explained in early English references as a "translation" of Jiang, Jiangkou, or Yangzijiang. Very common in 18th- and 19th-century sources, the name fell out of favor owing to growing awareness of its lack of any connection to the river's Chinese names and to the irony of its application to such a muddy waterway.

Matteo Ricci's 1615 Latin account included descriptions of the "Ianſu" and "Ianſuchian." The posthumous account's translation of the name as Fils de la Mar ("Son of the Ocean") shows that Ricci, who by the end of his life was fluent in literary Chinese, was introduced to it under a homophonic name rather than the "proper" one. Further, although railroads and the Shanghai concessions subsequently turned it into a backwater, Yangzhou was the lower river's principal port for much of the Qing dynasty, directing Liangjiang's important salt monopoly and connecting the Yangtze with the Grand Canal to Beijing. (That connection also made it one of the Yellow River's principal ports between the floods of 1344 and the 1850s, during which time the Yellow River ran well south of Shandong and discharged into the ocean a mere few hundred kilometers from the mouth of the Yangtze.) By 1800, English cartographers such as Aaron Arrowsmith had adopted the French style of the name as Yang-tse or Yang-tse Kiang. The British diplomat Thomas Wade emended this to Yang-tzu Chiang as part of his formerly popular romanization of Chinese, based on the Beijing dialect instead of Nanjing's and first published in 1867. The spellings Yangtze and Yangtze Kiang were a compromise between the two methods, adopted at the 1906 Imperial Postal Conference in Shanghai, which established postal romanization. Hanyu Pinyin was adopted by the PRC's First Congress in 1958, but it was not widely employed in English outside mainland China prior to the normalization of diplomatic relations between the United States and the PRC in 1979; since that time, the spelling Yangzi has also been used.

Tibetan
The source and upper reaches of the Yangtze are located in ethnic Tibetan areas of Qinghai. In Tibetan, the Tuotuo headwaters are the Machu, literally "Red River" (or perhaps "Wound-[like Red] River"). The Tongtian is the Drichu ('Bri Chu), literally "River of the Female Yak".
Geography
The river originates from several tributaries in the eastern part of the Tibetan Plateau, two of which are commonly referred to as the "source." Traditionally, the Chinese government has recognized the source as the Tuotuo tributary at the base of a glacier on the west side of Geladandong Mountain in the Tanggula Mountains. While not the furthest source of the Yangtze, it is the highest. The true source of the Yangtze, hydrologically the longest river distance from the sea, is at Jari Hill at the head of the Dam Qu tributary, southeast of Geladandong. This source was only discovered in the late 20th century and lies in wetlands just southeast of Chadan Township in Zadoi County, Yushu Prefecture, Qinghai. As the historical and spiritual source of the Yangtze, the Geladandong source has continued to be commonly referred to as the source of the river even since the discovery of the Jari Hill source.

These tributaries join, and the river then runs eastward through Qinghai (Tsinghai), turning southward down a deep valley at the border of Sichuan (Szechwan) and Tibet to reach Yunnan. In the course of this valley, the river's elevation drops sharply. In its descent from the high-altitude headwaters toward sea level, the river falls to the head of navigation for riverboats at Yibin, Sichuan, and continues to drop past Chongqing (Chungking). Between Chongqing and Yichang (I-ch'ang) it passes through the spectacular Yangtze Gorges, which are noted for their natural beauty but are dangerous to shipping.

The river enters the Sichuan basin at Yibin. While in the Sichuan basin, it receives several mighty tributaries, increasing its water volume significantly. It then cuts through Mount Wushan, on the border of Chongqing and Hubei, to create the famous Three Gorges. Eastward of the Three Gorges, Yichang is the first city on the Yangtze Plain. After entering Hubei province, the Yangtze receives water from a number of lakes. The largest of these is Dongting Lake, on the border of Hunan and Hubei provinces, which is the outlet for most of the rivers in Hunan. At Wuhan, the Yangtze receives its biggest tributary, the Han River, bringing water from its northern basin as far as Shaanxi. At the northern tip of Jiangxi province, Lake Poyang, the biggest freshwater lake in China, merges into the river. The river then runs through Anhui and Jiangsu, receiving more water from innumerable smaller lakes and rivers, and finally reaches the East China Sea at Shanghai. Four of China's five main freshwater lakes contribute their waters to the Yangtze.

Traditionally, the upstream part of the Yangtze refers to the section from Yibin to Yichang; the middle part to the section from Yichang to Hukou County, where Lake Poyang meets the river; and the downstream part to the section from Hukou to Shanghai. The origin of the Yangtze has been dated by some geologists to about 45 million years ago in the Eocene, but this dating has been disputed.

Characteristics
The Yangtze flows into the East China Sea and was navigable by ocean-going vessels far upstream from its mouth even before the Three Gorges Dam was built. The Yangtze is flanked by metallurgical, power, chemical, automotive, building-materials and machinery industrial belts and high-tech development zones.
It is playing an increasingly crucial role in the river valley's economic growth and has become a vital link for international shipping to the inland provinces. The river is a major transportation artery for China, connecting the interior with the coast. The river is one of the world's busiest waterways. Traffic includes commercial traffic transporting bulk goods such as coal as well as manufactured goods and passengers. Cargo transportation reached 795 million tons in 2005. River cruises several days long, especially through the beautiful and scenic Three Gorges area, are becoming popular as the tourism industry grows in China. Flooding along the river has been a major problem. The rainy season in China is May and June in areas south of Yangtze River, and July and August in areas north of it. The huge river system receives water from both southern and northern flanks, which causes its flood season to extend from May to August. Meanwhile, the relatively dense population and rich cities along the river make the floods more deadly and costly. The most recent major floods were the 1998 Yangtze River Floods, but more disastrous were the 1954 Yangtze River Floods, which killed around 30,000 people. History Geologic history Although the mouth of the Yellow River has fluctuated widely north and south of the Shandong peninsula within the historical record, the Yangtze has remained largely static. Based on studies of sedimentation rates, however, it is unlikely that the present discharge site predates the late Miocene ( Ma). Prior to this, its headwaters drained south into the Gulf of Tonkin along or near the course of the present Red River. Early history The Yangtze River is important to the cultural origins of southern China and Japan. Human activity has been verified in the Three Gorges area as far back as 27,000 years ago, and by the 5th millennium BC, the lower Yangtze was a major population center occupied by the Hemudu and Majiabang cultures, both among the earliest cultivators of rice. By the 3rd millennium BC, the successor Liangzhu culture showed evidence of influence from the Longshan peoples of the North China Plain. What is now thought of as Chinese culture developed along the more fertile Yellow River basin; the "Yue" people of the lower Yangtze possessed very different traditions blackening their teeth, cutting their hair short, tattooing their bodies, and living in small settlements among bamboo groves and were considered barbarous by the northerners. The Central Yangtze valley was home to sophisticated Neolithic cultures. Later it became the earliest part of the Yangtze valley to be integrated into the North Chinese cultural sphere. (Northern Chinese were active there since the Bronze Age). In the lower Yangtze, two Yue tribes, the Gouwu in southern Jiangsu and the Yuyue in northern Zhejiang, display increasing Zhou (i.e., North Chinese) influence starting in the 9th century BC. Traditional accounts credit these changes to northern refugees (Taibo and Zhongyong in Wu and Wuyi in Yue) who assumed power over the local tribes, though these are generally assumed to be myths invented to legitimate them to other Zhou rulers. As the kingdoms of Wu and Yue, they were famed as fishers, shipwrights, and sword-smiths. Adopting Chinese characters, political institutions, and military technology, they were among the most powerful states during the later Zhou. 
In the middle Yangtze, the state of Jing seems to have begun in the upper Han River valley a minor Zhou polity, but it adapted to native culture as it expanded south and east into the Yangtze valley. In the process, it changed its name to Chu. Whether native or nativizing, the Yangtze states held their own against the northern Chinese homeland: some lists credit them with three of the Spring and Autumn period's Five Hegemons and one of the Warring States' Four Lords. They fell in against themselves, however. Chu's growing power led its rival Jin to support Wu as a counter. Wu successfully sacked Chu's capital Ying in 506 BC, but Chu subsequently supported Yue in its attacks against Wu's southern flank. In 473 BC, King Goujian of Yue fully annexed Wu and moved his court to its eponymous capital at modern Suzhou. In 333 BC, Chu finally united the lower Yangtze by annexing Yue, whose royal family was said to have fled south and established the Minyue kingdom in Fujian. Qin was able to unite China by first subduing Ba and Shu on the upper Yangtze in modern Sichuan, giving them a strong base to attack Chu's settlements along the river. The state of Qin conquered the central Yangtze region, previous heartland of Chu, in 278 BC, and incorporated the region into its expanding empire. Qin then used its connections along the Yangtze River the Xiang River to expand China into Hunan, Jiangxi and Guangdong, setting up military commanderies along the main lines of communication. At the collapse of the Qin Dynasty, these southern commanderies became the independent Nanyue Empire under Zhao Tuo while Chu and Han vied with each other for control of the north. Since the Han dynasty, the region of the Yangtze River grew ever more important to China's economy. The establishment of irrigation systems (the most famous one is Dujiangyan, northwest of Chengdu, built during the Warring States period) made agriculture very stable and productive, eventually exceeding even the Yellow River region. The Qin and Han empires were actively engaged in the agricultural colonization of the Yangtze lowlands, maintaining a system of dikes to protect farmland from seasonal floods. By the Song dynasty, the area along the Yangtze had become among the wealthiest and most developed parts of the country, especially in the lower reaches of the river. Early in the Qing dynasty, the region called Jiangnan (that includes the southern part of Jiangsu, the northern part of Zhejiang, and the southeastern part of Anhui) provided – of the nation's revenues. The Yangtze has long been the backbone of China's inland water transportation system, which remained particularly important for almost two thousand years, until the construction of the national railway network during the 20th century. The Grand Canal connects the lower Yangtze with the major cities of the Jiangnan region south of the river (Wuxi, Suzhou, Hangzhou) and with northern China (all the way from Yangzhou to Beijing). The less well known ancient Lingqu Canal, connecting the upper Xiang River with the headwaters of the Guijiang, allowed a direct water connection from the Yangtze Basin to the Pearl River Delta. Historically, the Yangtze became the political boundary between north China and south China several times (see History of China) because of the difficulty of crossing the river. This occurred notably during the Southern and Northern Dynasties, and the Southern Song. 
Many battles took place along the river, the most famous being the Battle of Red Cliffs in 208 AD during the Three Kingdoms period. The Yangtze was the site of naval battles between the Song dynasty and Jurchen Jin during the Jin–Song wars. In the Battle of Caishi of 1161, the ships of the Jin emperor Wanyan Liang clashed with the Song fleet on the Yangtze. Song soldiers fired bombs of lime and sulfur using trebuchets at the Jurchen warships. The battle was a Song victory that halted the invasion by the Jin. The Battle of Tangdao was another Yangtze naval battle in the same year. Politically, Nanjing was the capital of China several times, although most of the time its territory only covered the southeastern part of China, such as the Wu kingdom in the Three Kingdoms period, the Eastern Jin Dynasty, and during the Southern and Northern Dynasties and Five Dynasties and Ten Kingdoms periods. Only the Ming occupied most parts of China from their capital at Nanjing, though it later moved the capital to Beijing. The ROC capital was located in Nanjing in the periods 1911–12, 1927–37, and 1945–49. Age of steam The first merchant steamer in China, the Jardine, was built to order for the firm of Jardine, Matheson & Co. in 1835. She was a small vessel intended for use as a mail and passenger carrier between Lintin Island, Macao, and Whampoa. However, the
new teleportation device, the Telepod. When Marle volunteers to be teleported, her pendant interferes with the device, creating a portal that draws her in. Crono and Lucca recreate the portal and find themselves in 600 AD. They find Marle only to watch her vanish before their eyes. Lucca realizes that this time period's kingdom has mistaken Marle (who is actually Princess Nadia of Guardia) for Queen Leene, an ancestor of hers who had been kidnapped, thus putting off the recovery effort for her ancestor and creating a grandfather paradox. Crono and Lucca, with the help of Frog, restore history to normal by rescuing Leene. After Crono, Marle, and Lucca return to the present, Crono is arrested on charges of kidnapping and sentenced to death by Guardia's chancellor. Lucca and Marle help Crono escape prison, using another time portal to evade their pursuers. This portal leads to 2300 AD, where the trio learn that civilization has been wiped out by a giant creature known as Lavos that appeared in 1999 AD. The three vow to find a way to prevent the future destruction of their world. After meeting and repairing Robo and discovering another time Gate, Crono and his friends arrive at the End of Time, where they meet a mysterious old man who helps them acquire magical powers and travel through time by way of several pillars of light. The party discover that a powerful mage named Magus summoned Lavos into the world in 600 AD. They enlist Frog to help them stop Magus, but Frog requires the legendary sword, Masamune, to defeat him. During the subsequent battle with Magus, it is revealed that Magus did not create Lavos, but only woke him up. His disrupted spell to summon Lavos creates a temporal distortion that throws Crono and his friends to prehistory. The party recruit Ayla and do battle with the Reptites, enemies of prehistoric humans, and witness the true origin of Lavos as the creature arrives from deep space and crashes into the planet before burrowing to its core. Entering a Gate created by Lavos's impact, the party arrive in the ice age of 12,000 BC. There, the floating Kingdom of Zeal seeks to draw upon Lavos's power via a machine housed on the ocean floor. Before they can destroy the machine, the party are discovered by the Queen of Zeal thanks to a tip from a mysterious Prophet, and are banished from the time period via a magical lock on the Gate. Seeking a way to return to Zeal, the party discover a time machine in 2300 AD called the Wings of Time (or Epoch), which can access any time period at will. The party return to 12000 BC, where Zeal awakens Lavos, leading the Prophet to reveal himself as Magus, who tries and fails to kill the creature. Lavos defeats Magus and kills Crono, before the remaining party are transported to the safety of the surface by the Queen's daughter, Schala. Lavos annihilates the Kingdom of Zeal, and the debris of the fallen continent causes devastating floods that submerge most of the world below. Magus confesses to the party that he used to be Prince Janus of Zeal, and that in the original timeline, he and the Gurus of Zeal were scattered across time by Lavos's awakening in 12000 BC. Stranded as a child in 600 AD, Janus took the title of Magus and gained a cult of followers while plotting to summon and kill Lavos in revenge for the death of his sister, Schala. After the Gate in his castle returned him to Zeal, he disguised himself as a Prophet, and, using his knowledge of the future, bided his time for another chance to kill Lavos. 
At this point, Magus is either killed in a duel with Frog, or spared and convinced to join the party. Either way, he instructs the party to seek out Gaspar, the Guru of Time, to help them resurrect Crono. As they start to leave 12,000 BC, the ruined Ocean Palace rises into the air as the Black Omen, Queen Zeal's floating fortress. The party returns to the End of Time, where the old man reveals himself as Gaspar and gives them the "Chrono Trigger", an egg-shaped device that allows the group to revisit the moment of Crono's death with a Doppel Doll. The party then gather power by helping people across time with Gaspar's instructions. Their journeys involve defeating the remnants of the Mystics, stopping Robo's maniacal AI creator from snuffing out the last of humanity, giving Frog closure for Cyrus's death, locating and charging up the mythical Sun Stone, retrieving the legendary Rainbow Shell, unmasking Guardia's Chancellor as a monster, restoring a forest destroyed by a desert monster, and preventing an accident that disabled Lucca's mother. The party then enter the Black Omen and defeat Queen Zeal, after which they battle Lavos. They discover that Lavos is self-directing his evolution via absorbing DNA and energy from every living creature before razing the planet's surface in 1999 AD, so that it could spawn a new generation to destroy other worlds and continue the evolutionary cycle. The party slay Lavos, and celebrate at the final night of the Millennial Fair before returning to their own times. If Magus joined the party, he departs to search for Schala. If Crono was resurrected before defeating Lavos, his sentence for kidnapping Marle is revoked by her father, King Guardia XXXIII, thanks to testimonies from Marle's ancestors and descendants, whom Crono had helped during his journey. Crono's mother accidentally enters the time gate at the Millennial Fair before it closes, prompting Crono, Marle, and Lucca to set out in the Epoch to find her while fireworks light up the night sky. If Crono was not resurrected, Frog, Robo, and Ayla (along with Magus if he was recruited) chase Gaspar to the Millennial Fair and back again, revealing that Gaspar knows how to resurrect Crono; Marle and Lucca then use the Epoch to travel through time to accomplish this. Alternatively, if the party used the Epoch to break Lavos's outer shell, Marle will help her father hang Nadia's bell at the festival and accidentally get carried away by several balloons. If resurrected, Crono jumps on to help her, but cannot bring them down to earth. Hanging on in each other's arms, the pair travel through the cloudy, moonlit sky. Chrono Trigger DS added two new scenarios to the game. In the first, Crono and his friends can help a "lost sanctum" of Reptites, who reward powerful items and armor. The second scenario adds ties to Trigger's sequel, Chrono Cross. In a New Game Plus, the group can explore several temporal distortions to combat shadow versions of Crono, Marle, and Lucca, and to fight Dalton, who promises in defeat to raise an army in the town of Porre to destroy the Kingdom of Guardia. The group can then fight the Dream Devourer, a prototypical form of the Time Devourer—a fusion of Schala and Lavos seen in Chrono Cross. A version of Magus pleads with Schala to resist; though she recognizes him as her brother, she refuses to be helped and sends him away. Schala subsequently erases his memories and Magus awakens in a forest, determined to find what he had lost. 
Development Chrono Trigger was conceived in 1992 by Hironobu Sakaguchi, producer and creator of the Final Fantasy series; Yuji Horii, writer, game designer and creator of the Dragon Quest series; and Akira Toriyama, character designer of Dragon Quest and creator of the Dragon Ball manga series. Traveling to the United States to research computer graphics, the three decided to create something that "no one had done before". After spending over a year considering the difficulties of developing a new game, they received a call from Kazuhiko Aoki, who offered to produce. The four met and spent four days brainstorming ideas for the game. Square convened 50–60 developers, including scenario writer Masato Kato, whom Square designated story planner; about half of the staff had worked on Final Fantasy VI, with the other half being newcomers. Development started in early 1993. An uncredited Square employee suggested that the team develop a time travel-themed game, which Kato initially opposed, fearing repetitive, dull gameplay. Kato and Horii then met several hours per day during the first year of development to write the game's plot. Square intended to license the work under the Seiken Densetsu franchise and gave it the working title Maru Island; Hiromichi Tanaka (the future producer of Chrono Cross) monitored Toriyama's early designs. The team hoped to release it on Nintendo's planned Super Famicom Disk Drive; when Nintendo canceled the project, Square reoriented the game for release on a Super Famicom cartridge and rebranded it as Chrono Trigger. Tanaka credited the ROM cartridge platform for enabling seamless transition to battles on the field map. Aoki ultimately produced Chrono Trigger, while director credits were attributed to Akihiko Matsui, Yoshinori Kitase and Takashi Tokita. Toriyama designed the game's aesthetic, including characters, monsters, vehicles, and the look of each era. Masato Kato also contributed character ideas and designs. Kato planned to feature Gaspar as a playable character and Toriyama sketched him, but he was cut early in development. The development staff studied the drawings of Toriyama to approximate his style. Sakaguchi and Horii supervised; Sakaguchi was responsible for the game's overall system and contributed several monster ideas. Other notable designers include Tetsuya Takahashi, the graphic director, and Yasuyuki Honne, Tetsuya Nomura, and Yusuke Naora, who worked as field graphic artists. Yasuhiko Kamata programmed graphics, and cited Ridley Scott's visual work in the film Alien as an inspiration for the game's lighting. Kamata made the game's luminosity and color choice lay between that of Secret of Mana and the Final Fantasy series. Features originally intended to be used in Secret of Mana or Final Fantasy IV, also under development at the same time, were appropriated by the Chrono Trigger team. According to Tanaka, Secret of Mana (which itself was originally intended to be Final Fantasy IV) was codenamed "Chrono Trigger" during development before being called Seiken Densetsu 2 (Secret of Mana), and then the name Chrono Trigger was adopted for a new project. Yuji Horii, a fan of time travel fiction (such as the TV series The Time Tunnel), fostered a theme of time travel in his general story outline of Chrono Trigger with input from Akira Toriyama. Horii liked the scenario of the grandfather paradox surrounding Marle. 
Concerning story planning, Horii commented, "If there's a fairground, I just write that there's a fairground; I don't write down any of the details. Then the staff brainstorm and come up with a variety of attractions to put in." Sakaguchi contributed some minor elements, including the character Gato; he liked Marle's drama and reconciliation with her father. Masato Kato subsequently edited and completed the outline by writing the majority of the game's story, including all the events of the 12,000 BC era. He took pains to avoid what he described as "a long string of errands ... [such as] 'do this', 'take this', 'defeat these monsters', or 'plant this flag'." Kato and other developers held a series of meetings to ensure continuity, usually attended by around 30 personnel. Kato and Horii initially proposed Crono's death, though they intended he stay dead; the party would have retrieved an earlier, living version of him to complete the quest. Square deemed the scenario too depressing and asked that Crono be brought back to life later in the story. Kato also devised the system of multiple endings because he could not branch the story out to different paths. Yoshinori Kitase and Takashi Tokita then wrote various subplots. They also devised an "Active Time Event Logic" system, "where you can move your character around during scenes, even when an NPC is talking to you", and with players "talking to different people and steering the conversation in different directions", allowing each scene to "have many permutations." Kato became friends with composer Yasunori Mitsuda during development, and they would collaborate on several future projects. Katsuhisa Higuchi programmed the battle system, which hosted combat on the map without transition to a special battleground as most previous Square games had done. Higuchi noted extreme difficulty in loading battles properly without slow-downs or a brief, black loading screen. The game's use of animated monster sprites consumed much more memory than previous Final Fantasy games, which used static enemy graphics. Hironobu Sakaguchi likened the development of Chrono Trigger to "play[ing] around with Toriyama's universe," citing the inclusion of humorous sequences in the game that would have been "impossible with something like Final Fantasy." When Square Co. suggested a non-human player character, developers created Frog by adapting one of Toriyama's sketches. The team created the End of Time to help players with hints, worrying that they might become stuck and need to consult a walkthrough. The game's testers had previously complained that Chrono Trigger was too difficult; as Horii explained, "It's because we know too much. The developers think the game's just right; that they're being too soft. They're thinking from their own experience. The puzzles were the same. Lots of players didn't figure out things we thought they'd get easily." Sakaguchi later cited the unusual desire of beta testers to play the game a second time or "travel through time again" as an affirmation of the New Game Plus feature: "Wherever we could, we tried to make it so that a slight change in your behavior caused subtle differences in people's reactions, even down to the smallest details ... I think the second playthrough will hold a whole new interest." The game's reuse of locations due to time traveling made bug-fixing difficult, as corrections would cause unintended consequences in other eras. 
Music Chrono Trigger was scored primarily by Yasunori Mitsuda, with contributions from veteran Final Fantasy composer Nobuo Uematsu, and one track composed by Noriko Matsueda. A sound programmer at the time, Mitsuda was unhappy with his pay and threatened to leave Square if he could not compose music. Hironobu Sakaguchi suggested he score Chrono Trigger, remarking, "maybe your salary will go up." Mitsuda composed new music and drew on a personal collection of pieces composed over the previous two years. He reflected, "I wanted to create music that wouldn't fit into any established genre ... music of an imaginary world. The game's director, Masato Kato, was my close friend, and so I'd always talk with him about the setting and the scene before going into writing." Mitsuda slept in his studio several nights, and attributed certain pieces—such as the game's ending theme, To Far Away Times—to inspiring dreams. He later attributed this song to an idea he was developing before Chrono Trigger, reflecting that the
tune was made in dedication to "a certain person with whom [he] wanted to share a generation". He also tried to use leitmotifs of the Chrono Trigger main theme to create a sense of consistency in the soundtrack. Mitsuda wrote each tune to be around two minutes long before repeating, unusual for Square's games at the time. Mitsuda suffered a hard drive crash that lost around forty in-progress tracks. After Mitsuda contracted stomach ulcers, Uematsu joined the project to compose ten pieces and finish the score. Mitsuda returned to watch the ending with the staff before the game's release, crying upon seeing the finished scene. At the time of the game's release, the number of tracks and sound effects was unprecedented: the soundtrack spanned three discs in its 1995 commercial pressing. Square also released a one-disc acid jazz arrangement called "The Brink of Time" by Guido that year. The Brink of Time came about because Mitsuda wanted to do something that no one else was doing, and he noted that acid jazz and its related genres were uncommon in the Japanese market. Mitsuda considers Chrono Trigger a landmark game which helped mature his talent. While Mitsuda later held that the title piece was "rough around the edges", he maintains that it had "significant influence on [his] life as a composer". In 1999, Square produced another one-disc soundtrack to complement the PlayStation release of Trigger, featuring orchestral tracks used in cut scenes. Tsuyoshi Sekito composed four new pieces for the game's bonus features which weren't included on the soundtrack. Some fans were displeased by Mitsuda's absence in creating the port, whose instruments sometimes aurally differed from the original game's. Mitsuda arranged versions of music from the Chrono series for Play! video game music concerts, presenting the main theme, Frog's Theme, and To Far Away Times.
He worked with Square Enix to ensure that the music for the Nintendo DS would sound closer to the Super NES version. Mitsuda encouraged feedback about the game's soundtrack from contemporary children (who he thought would expect "full symphonic scores blaring out of the speakers"). Fans who preordered Chrono Trigger DS received a special music disc containing two orchestral arrangements of Chrono Trigger music directed by Natsumi Kameoka; Square Enix also held a random prize drawing for two signed copies of Chrono Trigger sheet music. Mitsuda expressed difficulty in selecting the tune for the orchestral medley, eventually picking a tune from each era and certain character themes. Mitsuda later wrote: Music from the game was performed live by the Tokyo Symphony Orchestra in 1996 at the Orchestral Game Concert in Tokyo, Japan. A suite of music including Chrono Trigger is a part of the symphonic world-tour with video game music Play! A Video Game Symphony, where Mitsuda was in attendance for the concert's world-premiere in Chicago on May 27, 2006. His suite of Chrono music, comprising "Reminiscence", "Chrono Trigger", "Chrono Cross~Time's Scar", "Frog's Theme", and "To Far Away Times" was performed. Mitsuda has also appeared with the Eminence Symphony Orchestra as a special guest. Video Games Live has also featured medleys from Chrono Trigger and Chrono Cross. A medley of Music from Chrono Trigger made of one of the four suites of the Symphonic Fantasies concerts in September 2009 which was produced by the creators of the Symphonic Game Music Concert series, conducted by Arnie Roth. Square Enix re-released the game's soundtrack, along with a video interview with Mitsuda in July 2009. Release The team planned to release Chrono Trigger in late 1994, but release was pushed back to the following year. Early alpha versions of Chrono Trigger were demonstrated at the 1994 and 1995 V Jump festivals in Japan. A few months prior to the game's release, Square shipped a beta version to magazine reviewers and game stores for review. An unfinished build of the game dated November 17, 1994, it contains unused music tracks, locations, and other features changed or removed from the final release—such as a dungeon named "Singing Mountain" and its eponymous tune. Some names also differed; the character Soysaw (Slash in the US version) was known as Wiener, while Mayonnay (Flea in the US version) was named Ketchappa. The ROM image for this early version was eventually uploaded to the internet, prompting fans to explore and document the game's differences, including two unused world map NPC character sprites and presumed additional sprites for certain non-player characters. Around the game's release, Yuji Horii commented that Chrono Trigger "went beyond [the development team's] expectations", and Hironobu Sakaguchi congratulated the game's graphic artists and field designers. Sakaguchi intended to perfect the "sense of dancing you get from exploring Toriyama's worlds" in the event that they would make a sequel. Chrono Trigger used a 32-megabit ROM cartridge with battery-backed RAM for saved games, lacking special on-cartridge coprocessors. The Japanese release of Chrono Trigger included art for the game's ending and running counts of items in the player's status menu. Developers created the North American version before adding these features to the original build, inadvertently leaving in vestiges of Chrono Trigger's early development (such as the piece "Singing Mountain"). 
Hironobu Sakaguchi asked translator Ted Woolsey to localize Chrono Trigger for English audiences and gave him roughly thirty days to work. Lacking the help of a modern translation team, he memorized scenarios and looked at drafts of commercial player's guides to put dialogue in context. Woolsey later reflected that he would have preferred two-and-a-half months, and blames his rushed schedule on the prevailing attitude in Japan that games were children's toys rather than serious works. Some of his work was cut due to space constraints, though he still considered Trigger "one of the most satisfying games [he] ever worked on or played". Nintendo of America censored certain dialogue, including references to breastfeeding, consumption of alcohol, and religion. The original SNES edition of Chrono Trigger was released on the Wii download service Virtual Console in Japan on April 26, 2011, in the US on May 16, 2011, and in Europe on May 20, 2011. Previously in April 2008, a Nintendo Power reader poll had identified Chrono Trigger as the third-most wanted game for the Virtual Console. The game has also been ported to i-mode, the Virtual Console, the PlayStation Network, iOS, Android, and Microsoft Windows. PlayStation Square released an enhanced port of Chrono Trigger developed by Tose in Japan for the Sony PlayStation in 1999. Square timed its release before that of Chrono Cross, the 1999 sequel to Chrono Trigger, to familiarize new players with the story leading up to it. This version included anime cutscenes created by original character designer Akira Toriyama's Bird Studio and animated at Toei Animation, as well as several bonus features, accessible after achieving various endings in the game. Scenarist Masato Kato attended planning meetings at Bird Studio to discuss how the ending cutscenes would illustrate subtle ties to Chrono Cross. The port was released in North America in 2001—along with a newly translated version of Final Fantasy IV—as Final Fantasy Chronicles. Reviewers criticized Chronicles for its lengthy load times and an absence of new in-game features. This same iteration was also re-released as a downloadable game on the PlayStation Network on October 4, 2011, for the PlayStation 3, PlayStation Vita, and PlayStation Portable. Nintendo DS On July 2, 2008, Square Enix announced that they were planning to bring Chrono Trigger to the Nintendo DS handheld platform. Composer Yasunori Mitsuda was pleased with the project, exclaiming "finally!" after receiving the news from Square Enix and maintaining, "it's still a very deep, very high-quality game even when you play it today. I'm very interested in seeing what kids today think about it when they play it." Square retained Masato Kato to oversee the port, and Tose to program it. Kato explained, "I wanted it to be based on the original Super NES release rather than the PlayStation version. I thought we should look at the additional elements from the Playstation version, re-examine and re-work them to make it a complete edition. That's how it struck me and I told the staff so later on." Square Enix touted the game by displaying Akira Toriyama's original art at the 2008 Tokyo Game Show. The DS re-release contains all of the bonus material from the PlayStation port, as well as other enhancements. The added features include a more accurate and revised translation by Tom Slattery, a dual-screen mode which clears the top screen of all menus, a self-completing map screen, and a default "run" option. 
It also features the option to choose between two control schemes: one mirroring the original SNES controls, and the other making use of the DS's touch screen. Masato Kato participated in development, overseeing the addition of the monster-battling Arena, two new areas, the Lost Sanctum and the Dimensional Vortex, and a new ending that further foreshadows the events of Chrono Cross. One of the areas within the Vortex uses the "Singing Mountain" song that was featured on the original Chrono Trigger soundtrack. These new dungeons met with mixed reviews; GameSpot called them "frustrating" and "repetitive", while IGN noted that "the extra quests in the game connect extremely well." It was a nominee for "Best RPG for the Nintendo DS" in IGN's 2008 video game awards. The Nintendo DS version of Chrono Trigger was the 22nd best-selling game of 2008 in Japan. Mobile A cellphone version was released in Japan on i-mode distribution service on August 25, 2011. An iOS version was released on December 8, 2011. This version is based on the Nintendo DS version, with graphics optimized for iOS. The game was later released for Android on October 29, 2012. An update incorporating most of the features of the Windows version—including the reintroduction of the animated cutscenes, which had been absent from the initial mobile release—was released on February 27, 2018 for both iOS and Android. Windows Square Enix released Chrono Trigger without an announcement for Microsoft Windows via Steam on February 27, 2018. This version includes all content from the Nintendo DS port, the higher resolution graphics from the mobile device releases, support for mouse and keyboard controls, and autosave features,
Armstrong Wood 2. Baker's Pit 3. Beales Meadows 4. Bissoe Valley 5. Bosvenning Common 6. Cabilla and Redrice Woods 7. Caer Brân 8. Carn Moor 9. Chûn Downs 10. Churchtown Farm, near Saltash 11. Chyverton 12. Devichoys Wood, near Penryn 13. Downhill Meadow 14. River Fal—River Ruan Estuary 15. Five Acres, at the Cornwall Wildlife Trust Headquarters, Allet, near Truro 16. Fox Corner, south of Truro 17. Greena Moor 18. Halbullock Moor, south of Truro 19. Hawkes Wood 20. Helman Tor (including Breney Common and Red Moor), near Lostwithiel 21. Kemyel Crease 22. Kennall Vale, at Ponsanooth, between Falmouth & Redruth 23. Lanvean Bottoms 24. Loggan's Moor, near Hayle 25. Loveny/Colliford Reservoir 26. Lower Lewdon 27. Luckett/Greenscombe Wood 28. Maer Lake 29. Nansmellyn Marsh 30. North Predannack Downs 31. Park Hoskyn - The Hayman Reserve 32. Pendarves Wood, near Camborne 33. Penlee Battery, near Kingsand 34. Phillips's Point 35. Priddacombe Downs 36. Prideaux Wood 37. Quoit Heathland 38. Redlake Cottage Meadows 39. Ropehaven Cliffs 40. Rosenannon Downs 41. St Erth Pits, at St. Erth 42. St George's Island (or Looe Island), near Looe 43. Swanvale, Falmouth 44. Sylvia's Meadow, near Callington 45. Tamar Estuary, near Saltash 46. Tincombe, near Saltash 47. Trebarwith, near Tintagel
where plants are cultivated, including medicinal ones and including attached residential solariums; Music school, or a school devoted to other arts such as dance; Sunroom, a smaller glass enclosure or garden shed attached to a house, also
common technique in topology. Alexandroff one-point compactification For any noncompact topological space X the (Alexandroff) one-point compactification αX of X is obtained by adding one extra point ∞ (often called a point at infinity) and defining the open sets of the new space to be the open sets of X together with the sets of the form G ∪ {∞}, where G is an open subset of X such that X \ G is closed and compact. The one-point compactification of X is Hausdorff if and only if X is Hausdorff, noncompact and locally compact. Stone–Čech compactification Of particular interest are Hausdorff compactifications, i.e., compactifications in which the compact space is Hausdorff. A topological space has a Hausdorff compactification if and only if it is Tychonoff. In this case, there is a unique (up to homeomorphism) "most general" Hausdorff compactification, the Stone–Čech compactification of X, denoted by βX; formally, this exhibits the category of compact Hausdorff spaces and continuous maps as a reflective subcategory of the category of Tychonoff spaces and continuous maps. "Most general" or formally "reflective" means that the space βX is characterized by the universal property that any continuous function from X to a compact Hausdorff space K can be extended to a continuous function from βX to K in a unique way. More explicitly, βX is a compact Hausdorff space containing X such that the induced topology on X by βX is the same as the given topology on X, and for any continuous map f: X → K, where K is a compact Hausdorff space, there is a unique continuous map g: βX → K for which g restricted to X is identically f. The Stone–Čech compactification can be constructed explicitly as follows: let C be the set of continuous functions from X to the closed interval [0,1]. Then each point in X can be identified with an evaluation function on C. Thus X can be identified with a subset of [0,1]^C, the space of all functions from C to [0,1]. Since the latter is compact by Tychonoff's theorem, the closure of X as a subset of that space will also be compact. This is the Stone–Čech compactification. Spacetime compactification Walter Benz and Isaak Yaglom have shown how stereographic projection onto a single-sheet hyperboloid can be used to provide a compactification for split complex numbers. In fact, the hyperboloid is part of a quadric in real projective four-space. The method is similar to that used to provide a base manifold for group action of the conformal group of spacetime. Projective space Real projective space RP^n is a compactification of Euclidean space R^n. For each possible "direction" in which points in R^n can "escape", one new point at infinity is added (but each direction is identified with its opposite). The Alexandroff one-point compactification of R we constructed in the example above is in fact homeomorphic to RP^1. Note however that the projective plane RP^2 is not the one-point compactification of the plane R^2 since more than one point is added. Complex projective space CP^n is also a compactification of C^n; the Alexandroff one-point compactification of the plane C is (homeomorphic to) the complex projective line CP^1, which in turn can be identified with a sphere, the Riemann sphere. Passing to projective space is a common tool in algebraic geometry because the added points at infinity lead
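In symbols, the one-point and Stone–Čech compactifications described above amount to the following; this is only a restatement of the definitions already given, with τ_X denoting the topology of X (a notation introduced here for convenience):
\[
\alpha X = X \cup \{\infty\}, \qquad
\tau_{\alpha X} = \tau_X \cup \bigl\{\, G \cup \{\infty\} : G \in \tau_X,\ X \setminus G \text{ closed and compact} \,\bigr\},
\]
\[
\beta X : \quad \text{for every continuous } f\colon X \to K \text{ with } K \text{ compact Hausdorff, there is a unique continuous } g\colon \beta X \to K \text{ with } g\big|_{X} = f .
\]
For example, with the first definition the one-point compactification of R^n is homeomorphic to the n-sphere S^n; the Riemann sphere mentioned above is the case n = 2, viewing C as R^2.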
every open cover of the space contains a finite subcover. The methods of compactification are various, but each is a way of controlling points from "going off to infinity" by in some way adding "points at infinity" or preventing such an "escape". An example Consider the real line with its ordinary topology. This space is not compact; in a sense, points can go off to infinity to the left or to the right. It is possible to turn the real line into a compact space by adding a single "point at infinity" which we will denote by ∞. The resulting compactification can be thought of as a circle (which is compact as a closed and bounded subset of the Euclidean plane). Every sequence that ran off to infinity in the real line will then converge to ∞ in this compactification. Intuitively, the process can be pictured as follows: first shrink the real line to the open interval (-π,π) on the x-axis; then bend the ends of this interval upwards (in positive y-direction) and move them towards each other, until you get a circle with one point (the topmost one) missing. This point is our new point ∞ "at infinity"; adding it in completes the compact circle. A bit more formally: we represent a point on the unit circle by its angle, in radians, going from -π to π for simplicity. Identify each such point θ on the circle with the corresponding point on the real line tan(θ/2). This function is undefined at the point π, since tan(π/2) is undefined; we will identify this point with our point ∞. Since tangents and inverse tangents are both continuous, our identification function is a homeomorphism between the real line and the unit circle without ∞. What we have constructed is called the Alexandroff one-point compactification of the real line, discussed in more generality below. It is also possible to compactify the real line by adding two points, +∞ and -∞; this results in the extended real line. Definition An embedding of a topological space X as a dense subset of a compact space is called a compactification of X. It is often useful to embed topological spaces in compact spaces, because of the special properties compact spaces have. Embeddings into compact Hausdorff spaces may
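The identification used in the real-line example above can be written out explicitly; the letters below are chosen only for this illustration and are not fixed anywhere else in the text:
\[
\varphi\colon (-\pi,\pi) \to \mathbb{R}, \qquad \varphi(\theta) = \tan\!\left(\tfrac{\theta}{2}\right), \qquad \varphi^{-1}(x) = 2\arctan(x).
\]
Both φ and its inverse are continuous, so φ is a homeomorphism; sending the missing point θ = π of the circle to ∞ then extends it to a homeomorphism between the circle and R ∪ {∞}, and any sequence x_n with |x_n| → ∞ converges to ∞ in this topology.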
The differential of a function Let M be a smooth manifold and let f : M → R be a smooth function. The differential of f at a point x is the map df_x(X_x) = X_x(f), where X_x is a tangent vector at x, thought of as a derivation. That is, df_x(X_x) is the Lie derivative of f in the direction X_x, and one has df(X) = X(f) = L_X f. Equivalently, we can think of tangent vectors as tangents to curves, and write df_x(γ′(0)) = (f ∘ γ)′(0). In either case, df_x is a linear map on T_xM and hence it is a tangent covector at x. We can then define the differential map d : C^∞(M) → T_x^*M at a point x as the map which sends f to df_x. Properties of the differential map include: d is a linear map, d(af + bg) = a df + b dg for constants a and b; d obeys the Leibniz rule, d(fg)_x = f(x) dg_x + g(x) df_x. The differential map provides the link between the two alternate definitions of the cotangent space given above. Given a function f ∈ I_x (a smooth function vanishing at x) we can form the linear functional df_x as above. Since the map d restricts to 0 on I_x² (the reader should verify this), d descends to a map from I_x / I_x² to the dual of the tangent space, (T_xM)*. One can show that this map is an isomorphism, establishing the equivalence of the two definitions. The pullback of a smooth map Just as every differentiable map f : M → N between manifolds induces a linear map (called the pushforward or derivative) f_* : T_xM → T_{f(x)}N between the tangent spaces, every such map induces a linear map (called the pullback) f^* : T_{f(x)}^*N → T_x^*M between the cotangent spaces, only this time in the reverse direction. The pullback is naturally defined as the dual (or transpose) of the pushforward. Unraveling the definition, this means the following: (f^*θ)(X_x) = θ(f_* X_x), where θ ∈ T_{f(x)}^*N and X_x ∈ T_xM. Note carefully where everything lives. If we define tangent covectors in terms of equivalence classes of smooth maps vanishing at a point then the definition of the pullback is even more straightforward. Let g be a smooth function on N vanishing at f(x). Then the pullback of the covector determined by g (denoted dg) is given by f^*(dg) = d(g ∘ f). That is, it is the equivalence class of functions on M vanishing at x determined by g ∘ f. Exterior powers The k-th exterior power of the cotangent space, denoted Λ^k(T_x^*M), is another important object in differential geometry. Vectors in the k-th exterior power, or more precisely sections of the k-th exterior power of the cotangent bundle, are called differential k-forms. They can be thought of as alternating, multilinear maps on k tangent vectors. For
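In a coordinate chart the differential takes its familiar form; the chart (x^1, …, x^n) below is assumed only for illustration and is not part of the discussion above:
\[
df_x = \sum_{i=1}^{n} \frac{\partial f}{\partial x^i}(x)\, dx^i\big|_x ,
\qquad\text{so that}\qquad
df_x\!\left(\frac{\partial}{\partial x^j}\bigg|_x\right) = \frac{\partial f}{\partial x^j}(x).
\]
The linearity and Leibniz properties listed above follow immediately from the corresponding rules for partial derivatives.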
canonical tangent vector. Formal definitions Definition as linear functionals Let M be a smooth manifold and let x be a point in M. Let T_xM be the tangent space at x. Then the cotangent space at x is defined as the dual space of T_xM, written T_x^*M = (T_xM)*. Concretely, elements of the cotangent space are linear functionals on T_xM. That is, every element α ∈ T_x^*M is a linear map α : T_xM → F, where F is the underlying field of the vector space being considered, for example, the field of real numbers. The elements of T_x^*M are called cotangent vectors. Alternative definition In some cases, one might like to have a direct definition of the cotangent space without reference to the tangent space. Such a definition can be formulated in terms of equivalence classes of smooth functions on M. Informally, we will say that two smooth functions f and g are equivalent at a point x if they have the same first-order behavior near x, analogous to their linear Taylor polynomials; two functions f and g have the same first-order behavior near x if and only if the derivative of the function f − g vanishes at x. The cotangent space will then consist of all the possible first-order behaviors of a function near x. Let M be a smooth manifold and let x be a point in M. Let I_x be the ideal of all functions in C^∞(M) vanishing at x, and let I_x² be the set of functions of the form Σ_i f_i g_i, where f_i, g_i ∈ I_x. Then I_x and I_x² are both real vector spaces and the cotangent space can be defined as the quotient space T_x^*M = I_x / I_x², by showing that the two spaces are isomorphic to each other. This formulation is analogous to the construction of the cotangent space to define the Zariski tangent space in algebraic geometry. The construction also generalizes to locally ringed spaces.
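A concrete check of the dual-space definition, again in an illustrative coordinate chart (x^1, …, x^n) around x, which is assumed here and not introduced in the text:
\[
T_x^{*}M = (T_xM)^{*}, \qquad
dx^i\big|_x\!\left(\frac{\partial}{\partial x^j}\bigg|_x\right) = \delta^i_j ,
\]
so {dx^1|_x, …, dx^n|_x} is the basis of T_x^*M dual to the coordinate basis of T_xM; under the alternative definition, the class of f ∈ I_x in I_x/I_x² corresponds to the functional df_x.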
various means: creeping like snails, crawling like inchworms, or somersaulting. A few can swim clumsily by waggling their bases. Nervous system and senses Cnidarians are generally thought to have no brains or even central nervous systems. However, they do have integrative areas of neural tissue that could be considered some form of centralization. Most of their bodies are innervated by decentralized nerve nets that control their swimming musculature and connect with sensory structures, though each clade has slightly different structures. These sensory structures, usually called rhopalia, can generate signals in response to various types of stimuli, such as light, pressure, and other cues. Medusae usually have several of them around the margin of the bell, and these work together to control the motor nerve net, which directly innervates the swimming muscles. Most cnidarians also have a parallel system. In scyphozoans, this takes the form of a diffuse nerve net, which has modulatory effects on the nervous system. As well as forming the "signal cables" between sensory neurons and motoneurons, intermediate neurons in the nerve net can also form ganglia that act as local coordination centers. Communication between nerve cells can occur by chemical synapses or gap junctions in hydrozoans, though gap junctions are not present in all groups. Cnidarians have many of the same neurotransmitters as other animals, including chemicals such as glutamate, GABA, and acetylcholine. This structure ensures that the musculature is excited rapidly and simultaneously, can be directly stimulated from any point on the body, and is better able to recover after injury. Medusae and complex swimming colonies such as siphonophores and chondrophores sense tilt and acceleration by means of statocysts, chambers lined with hairs which detect the movements of internal mineral grains called statoliths. If the body tilts in the wrong direction, the animal rights itself by increasing the strength of the swimming movements on the side that is too low. Most species have ocelli ("simple eyes"), which can detect sources of light. However, the agile box jellyfish are unique among medusae because they possess four kinds of true eyes that have retinas, corneas and lenses. Although the eyes probably do not form images, Cubozoa can clearly distinguish the direction from which light is coming as well as negotiate around solid-colored objects. Feeding and excretion Cnidarians feed in several ways: predation, absorbing dissolved organic chemicals, filtering food particles out of the water, obtaining nutrients from symbiotic algae within their cells, and parasitism. Most obtain the majority of their food from predation but some, including the corals Heteroxenia and Leptogorgia, depend almost completely on their endosymbionts and on absorbing dissolved nutrients. Cnidaria give their symbiotic algae carbon dioxide, some nutrients, a place in the sun and protection against predators. Predatory species use their cnidocytes to poison or entangle prey, and those with venomous nematocysts may start digestion by injecting digestive enzymes. The "smell" of fluids from wounded prey makes the tentacles fold inwards and wipe the prey off into the mouth. In medusae the tentacles round the edge of the bell are often short and most of the prey capture is done by "oral arms", which are extensions of the edge of the mouth and are often frilled and sometimes branched to increase their surface area. 
Medusae often trap prey or suspended food particles by swimming upwards, spreading their tentacles and oral arms and then sinking. In species for which suspended food particles are important, the tentacles and oral arms often have rows of cilia whose beating creates currents that flow towards the mouth, and some produce nets of mucus to trap particles. Their digestion is both intra and extracellular. Once the food is in the digestive cavity, gland cells in the gastroderm release enzymes that reduce the prey to slurry, usually within a few hours. This circulates through the digestive cavity and, in colonial cnidarians, through the connecting tunnels, so that gastroderm cells can absorb the nutrients. Absorption may take a few hours, and digestion within the cells may take a few days. The circulation of nutrients is driven by water currents produced by cilia in the gastroderm or by muscular movements or both, so that nutrients reach all parts of the digestive cavity. Nutrients reach the outer cell layer by diffusion or, for animals or zooids such as medusae which have thick mesogleas, are transported by mobile cells in the mesoglea. Indigestible remains of prey are expelled through the mouth. The main waste product of cells' internal processes is ammonia, which is removed by the external and internal water currents. Respiration There are no respiratory organs, and both cell layers absorb oxygen from and expel carbon dioxide into the surrounding water. When the water in the digestive cavity becomes stale it must be replaced, and nutrients that have not been absorbed will be expelled with it. Some Anthozoa have ciliated grooves on their tentacles, allowing them to pump water out of and into the digestive cavity without opening the mouth. This improves respiration after feeding and allows these animals, which use the cavity as a hydrostatic skeleton, to control the water pressure in the cavity without expelling undigested food. Cnidaria that carry photosynthetic symbionts may have the opposite problem, an excess of oxygen, which may prove toxic. The animals produce large quantities of antioxidants to neutralize the excess oxygen. Regeneration All cnidarians can regenerate, allowing them to recover from injury and to reproduce asexually. Medusae have limited ability to regenerate, but polyps can do so from small pieces or even collections of separated cells. This enables corals to recover even after apparently being destroyed by predators. Reproduction Sexual Cnidarian sexual reproduction often involves a complex life cycle with both polyp and medusa stages. For example, in Scyphozoa (jellyfish) and Cubozoa (box jellies) a larva swims until it finds a good site, and then becomes a polyp. This grows normally but then absorbs its tentacles and splits horizontally into a series of disks that become juvenile medusae, a process called strobilation. The juveniles swim off and slowly grow to maturity, while the polyp re-grows and may continue strobilating periodically. The adults have gonads in the gastroderm, and these release ova and sperm into the water in the breeding season. This phenomenon of succession of differently organized generations (one asexually reproducing, sessile polyp, followed by a free-swimming medusa or a sessile polyp that reproduces sexually) is sometimes called "alternation of asexual and sexual phases" or "metagenesis", but should not be confused with the alternation of generations as found in plants. 
Shortened forms of this life cycle are common, for example some oceanic scyphozoans omit the polyp stage completely, and cubozoan polyps produce only one medusa. Hydrozoa have a variety of life cycles. Some have no polyp stages and some (e.g. hydra) have no medusae. In some species, the medusae remain attached to the polyp and are responsible for sexual reproduction; in extreme cases these reproductive zooids may not look much like medusae. Meanwhile, life cycle reversal, in which polyps are formed directly from medusae without the involvement of sexual reproduction process, was observed in both Hydrozoa (Turritopsis dohrnii and Laodicea undulata) and Scyphozoa (Aurelia sp.1). Anthozoa have no medusa stage at all and the polyps are responsible for sexual reproduction. Spawning is generally driven by environmental factors such as changes in the water temperature, and their release is triggered by lighting conditions such as sunrise, sunset or the phase of the moon. Many species of Cnidaria may spawn simultaneously in the same location, so that there are too many ova and sperm for predators to eat more than a tiny percentage — one famous example is the Great Barrier Reef, where at least 110 corals and a few non-cnidarian invertebrates produce enough gametes to turn the water cloudy. These mass spawnings may produce hybrids, some of which can settle and form polyps, but it is not known how long these can survive. In some species the ova release chemicals that attract sperm of the same species. The fertilized eggs develop into larvae by dividing until there are enough cells to form a hollow sphere (blastula) and then a depression forms at one end (gastrulation) and eventually becomes the digestive cavity. However, in cnidarians the depression forms at the end further from the yolk (at the animal pole), while in bilaterians it forms at the other end (vegetal pole). The larvae, called planulae, swim or crawl by means of cilia. They are cigar-shaped but slightly broader at the "front" end, which is the aboral, vegetal-pole end and eventually attaches to a substrate if the species has a polyp stage. Anthozoan larvae either have large yolks or are capable of feeding on plankton, and some already have endosymbiotic algae that help to feed them. Since the parents are immobile, these feeding capabilities extend the larvae's range and avoid overcrowding of sites. Scyphozoan and hydrozoan larvae have little yolk and most lack endosymbiotic algae, and therefore have to settle quickly and metamorphose into polyps. Instead, these species rely on their medusae to extend their ranges. Asexual All known cnidaria can reproduce asexually by various means, in addition to regenerating after being fragmented. Hydrozoan polyps only bud, while the medusae of some hydrozoans can divide down the middle. Scyphozoan polyps can both bud and split down the middle. In addition to both of these methods, Anthozoa can split horizontally just above the base. Asexual reproduction makes the daughter cnidarian a clone of the adult. Classification Cnidarians were for a long time grouped with Ctenophores in the phylum Coelenterata, but increasing awareness of their differences caused them to be placed in separate phyla. 
Modern cnidarians are generally classified into four main classes: sessile Anthozoa (sea anemones, corals, sea pens); swimming Scyphozoa (jellyfish) and Cubozoa (box jellies); and Hydrozoa, a diverse group that includes all the freshwater cnidarians as well as many marine forms, and has both sessile members such as Hydra and colonial swimmers such as the Portuguese Man o' War. Staurozoa have recently been recognised as a class in their own right rather than a sub-group of Scyphozoa, and the parasitic Myxozoa and Polypodiozoa are now recognized as highly derived cnidarians rather than more closely related to the bilaterians. Stauromedusae, small sessile cnidarians with stalks and no medusa stage, have traditionally been classified as members of the Scyphozoa, but recent research suggests they should be regarded as a separate class, Staurozoa. The Myxozoa, microscopic parasites, were first classified as protozoans. Research then found that Polypodium hydriforme, a non-Myxozoan parasite within the egg cells of sturgeon, is closely related to the Myxozoa and suggested that both Polypodium and the Myxozoa were intermediate between cnidarians and bilaterian animals. More recent research demonstrates that the previous identification of bilaterian genes reflected contamination of the Myxozoan samples by material from their host organism, and they are now firmly identified as heavily derived cnidarians, more closely related to Hydrozoa and Scyphozoa than to Anthozoa. Some researchers classify the extinct conulariids as cnidarians, while others propose that they form a completely separate phylum. Current classification according to the World Register of Marine Species: class Anthozoa Ehrenberg, 1834, with subclass Ceriantharia Perrier, 1893 — tube-dwelling anemones; subclass Hexacorallia Haeckel, 1896 — stony corals; subclass Octocorallia Haeckel, 1866 — soft corals and sea fans; class Cubozoa Werner, 1973 — box jellies; class Hydrozoa Owen, 1843 — hydrozoans (fire corals, hydroids, hydroid jellyfishes, siphonophores...); class Myxozoa — obligate parasites; class Polypodiozoa Raikova, 1994 (uncertain status); class Scyphozoa Goette, 1887 — "true" jellyfishes; class Staurozoa Marques & Collins, 2004 — stalked jellyfishes. Ecology Many cnidarians are limited to shallow waters because they depend on endosymbiotic algae for much of their nutrients. The life cycles of most have polyp stages, which are limited to locations that offer stable substrates. Nevertheless, major cnidarian groups contain species that have escaped these limitations. Hydrozoans have a worldwide range: some, such as Hydra, live in freshwater; Obelia appears in the coastal waters of all the oceans; and Liriope can form large shoals near the surface in mid-ocean. Among anthozoans, a few scleractinian corals, sea pens and sea fans live in deep, cold waters, and some sea anemones inhabit polar seabeds while others live near hydrothermal vents over below sea-level. Reef-building corals are limited to tropical seas between 30°N and 30°S with a maximum depth of , temperatures between , high salinity, and low carbon dioxide levels. Stauromedusae, although usually classified as jellyfish, are stalked, sessile animals that live in cool to Arctic waters. Cnidarians range in size from a mere handful of cells for the parasitic myxozoans through Hydra's length of , to the Lion's mane jellyfish, which may exceed in diameter and in length. Prey of cnidarians ranges from plankton to animals several times larger than themselves. 
Some cnidarians are parasites, mainly on jellyfish but a few are major pests of fish. Others obtain most of their nourishment from endosymbiotic algae or dissolved nutrients. Predators of cnidarians include: sea slugs, which can incorporate nematocysts into their own bodies for self-defense; starfish, notably the crown of thorns starfish, which can devastate corals; butterfly fish and parrot fish, which eat corals; and marine turtles, which eat jellyfish. Some sea anemones and jellyfish have a symbiotic relationship with some fish; for example clown fish live among the tentacles of sea anemones, and each partner protects the other against predators. Coral reefs form some of the world's most productive ecosystems. Common coral reef cnidarians include both Anthozoans (hard corals, octocorals, anemones) and Hydrozoans (fire corals, lace corals). The endosymbiotic algae of many cnidarian species are very effective primary producers, in other words converters of inorganic chemicals into organic ones that other organisms can use, and their coral hosts use these organic chemicals very efficiently. In addition, reefs provide complex and varied habitats that support a wide range of other organisms. Fringing reefs just below low-tide level also have a mutually beneficial relationship with mangrove forests at high-tide level and seagrass meadows in between: the reefs protect the mangroves and seagrass from strong currents and waves that would damage them or erode the sediments in which they are rooted, while the mangroves and seagrass protect the coral from large influxes of silt, fresh water and pollutants. This additional level of variety in the environment is beneficial to many types of coral reef animals, which for example may feed in the sea grass and use the reefs for protection or breeding. Evolutionary history Fossil record The earliest widely accepted animal fossils are rather modern-looking cnidarians, possibly from around , although fossils from the Doushantuo Formation can only be dated approximately. The identification of some of these as embryos of animals has been contested, but other fossils from these rocks strongly resemble tubes and other mineralized structures made by corals. Their presence implies that the cnidarian and bilaterian lineages had already diverged. Although the Ediacaran fossil Charnia used to be classified as a jellyfish or sea pen, more recent study of growth patterns in Charnia and modern cnidarians has cast doubt on this hypothesis, leaving only the Canadian polyp, Haootia, as the only bona-fide cnidarian body fossil from the Ediacaran. Few fossils of cnidarians without mineralized skeletons are known from more recent rocks, except in lagerstätten that preserved soft-bodied animals. A few mineralized fossils that resemble corals have been found in rocks from the Cambrian period, and corals diversified in the Early Ordovician. These corals, which were wiped out in the Permian–Triassic extinction event about , did not dominate reef construction since sponges and algae also played a major part. During the Mesozoic era rudist bivalves were the main reef-builders, but they were wiped out in the Cretaceous–Paleogene extinction event , and since then the main reef-builders have been scleractinian corals. 
Family tree It is difficult to reconstruct the early stages in the evolutionary "family tree" of animals using only morphology (their shapes and structures), because the large differences between Porifera (sponges), Cnidaria plus Ctenophora (comb jellies), Placozoa and Bilateria (all the more complex animals) make comparisons difficult. Hence reconstructions now rely largely or entirely on molecular phylogenetics, which groups organisms according to similarities and differences in their biochemistry, usually in their DNA or RNA. It is now generally thought that the Calcarea (sponges with calcium carbonate spicules) are more closely related to Cnidaria, Ctenophora (comb jellies) and Bilateria (all the more complex animals) than they are to the other groups of sponges. In 1866 it was proposed that Cnidaria and Ctenophora were more closely related to each other than to Bilateria and formed a group called Coelenterata ("hollow guts"), because Cnidaria and Ctenophora both rely on the flow of water in and out of a single cavity for feeding, excretion and respiration. In 1881, it was proposed that Ctenophora and Bilateria were more closely related to each other, since they shared features that Cnidaria lack, for example muscles in the middle layer (mesoglea in Ctenophora, mesoderm in Bilateria). However more recent analyses indicate that these similarities are rather vague, and the current view, based on molecular phylogenetics, is that Cnidaria and Bilateria are more closely related to each other than either is to Ctenophora. This grouping of Cnidaria and Bilateria has been labelled "Planulozoa" because it suggests that the earliest Bilateria were similar to the planula larvae of Cnidaria. Within the Cnidaria, the Anthozoa (sea anemones and corals) are regarded as the sister-group of the rest, which suggests that the earliest cnidarians were sessile polyps with no medusa stage. However, it is unclear how the other groups acquired the medusa stage, since Hydrozoa form medusae by budding from the side of the polyp while the other Medusozoa do so by splitting them off from the tip of the polyp. The traditional grouping of Scyphozoa included the Staurozoa, but morphology and molecular phylogenetics indicate that Staurozoa are more closely related to Cubozoa (box jellies) than to other "Scyphozoa". Similarities in the double body walls of Staurozoa and the extinct Conulariida suggest that they are closely related. 
However, in 2005 Katja Seipel and Volker Schmid suggested that cnidarians and ctenophores are simplified descendants of triploblastic animals, since ctenophores and the medusa stage of some cnidarians have striated muscle, which in bilaterians arises from the mesoderm. They did not commit themselves on whether bilaterians evolved from early cnidarians or from the hypothesized triploblastic ancestors of cnidarians. In molecular phylogenetics analyses from 2005 onwards, important groups of developmental genes show the same variety in cnidarians as in chordates. In fact cnidarians, and especially anthozoans (sea anemones and corals), retain some genes that are present in bacteria, protists, plants and fungi but not in bilaterians. The mitochondrial genome in the medusozoan cnidarians, unlike those in other animals, is linear with fragmented genes. The reason for this difference is unknown. Interaction with humans Jellyfish stings killed about 1,500 people in the 20th century, and cubozoans are particularly dangerous. On the other hand, some large jellyfish are considered a delicacy in East and Southeast Asia. Coral reefs have long been economically important as providers of fishing grounds, protectors of shore buildings against currents and tides, and more recently as centers of tourism. However, they are vulnerable to over-fishing, mining for construction materials, pollution, and damage caused by tourism. Beaches protected from tides and storms by coral reefs are often the best places for housing in tropical countries. Reefs are an important food source for low-technology fishing, both on the reefs themselves and in the adjacent seas. However, despite their great productivity, reefs are vulnerable to over-fishing, because much of the organic carbon they produce is exhaled as carbon dioxide by organisms at the middle levels of the food chain and never reaches the larger species that are of interest to fishermen. Tourism centered on reefs provides much of the income of some tropical islands, attracting photographers, divers and sports fishermen. However, human activities damage reefs in several ways: mining for construction materials; pollution, including large influxes of fresh water from storm drains; commercial fishing, including the use of dynamite to stun fish and the capture of young fish for aquariums; and tourist damage caused by boat anchors and the cumulative effect of walking on the reefs. Coral, mainly from the Pacific Ocean has long been used in jewellery, and demand rose sharply in the 1980s. Some large jellyfish species of the Rhizostomae order are commonly consumed in Japan, Korea and Southeast Asia. In parts of the range, fishing industry is restricted to daylight hours and calm conditions in two short seasons, from March to May and August to November. The commercial value of
and be permitted. The validity of this argument was heavily disputed within the movement. In 1952, members of the priestly caste were allowed to marry divorcees, conditioned on forfeiture of their privileges, as termination of marriage became widespread and women who underwent it could not be suspected of unsavory acts. In 1967, the ban on priests marrying converts was also lifted. In 1954, the issue of agunot (women refused divorce by their husbands) was largely settled by adding a clause to the prenuptial contract under which men had to pay alimony as long as they did not concede. In 1968, this mechanism was replaced by a retroactive expropriation of the bride price, rendering the marriage void. In 1955, as more girls were celebrating Bat Mitzvah and demanding to be allowed ascents to the Torah, the CJLS agreed that the ordinance under which women were banned from this due to respect for the congregation (Kvod ha'Tzibur) was no longer relevant. In 1972 it was decreed that rennet, even if derived from unclean animals, was so transformed that it constituted a wholly new item (Panim Chadashot ba'u l'Khan) and therefore all hard cheese could be considered kosher. The 1970s and 1980s saw the emergence of women's rights on the main agenda. Growing pressure led the CJLS to adopt a motion that females may be counted as part of a quorum, based on the argument that only the Shulchan Aruch explicitly stated that it consist of men. While accepted, this was very controversial in the Committee and heavily disputed. A more complete solution was offered in 1983 by Rabbi Joel Roth; it was also enacted to allow women's rabbinic ordination. Roth noted that some decisors of old acknowledged that women may bless when performing positive time-bound commandments (from which they are exempted, and therefore unable to fulfill the obligation for others), especially citing the manner in which they assumed upon themselves the Counting of the Omer. He suggested that women voluntarily commit to pray thrice a day et cetera, and his responsum was adopted. Since then, female rabbis have been ordained at the JTS and other seminaries. In 1994, the movement accepted Judith Hauptman's principally egalitarian argument, according to which equal prayer obligations for women were never banned explicitly and it was only their inferior status that hindered participation. In 2006, it was decided that openly gay rabbinic candidates would also be admitted into the JTS. In 2012, a commitment ceremony for same-sex couples was devised, though not defined as kiddushin. In 2016, the rabbis passed a resolution supporting transgender rights. Conservative Judaism in the United States held a relatively strict policy regarding intermarriage. Propositions for acknowledging Jews by patrilineal descent, as in the Reform movement, were overwhelmingly dismissed. Unconverted spouses were largely barred from community membership and participation in rituals; clergy are banned from any involvement in interfaith marriage on pain of dismissal. However, as the rate of such unions rose dramatically, Conservative congregations began describing gentile family members as K'rov Yisrael (Kin of Israel) and becoming more open toward them. The Leadership Council of Conservative Judaism stated in 1995: "we want to encourage the Jewish partner to maintain his/her Jewish identity, and raise their children as Jews." Despite the centralization of legal deliberation on matters of Jewish law in the CJLS, individual synagogues and communities must, in the end, depend on their local decision-makers. 
The rabbi in his, her, or their community is regarded as the Mara D'atra, or the local halakhic decisor. Rabbis trained in the reading practices of Conservative Jewish approaches, in the historical evaluation of Jewish law, and in the interpretation of Biblical and Rabbinic texts may align directly with CJLS decisions or may themselves opine on matters based on precedents or readings of text that shed light on congregants' questions. So, for instance, a rabbi may or may not choose to permit video streaming on Shabbat despite a majority ruling that allows for the use of electronics. A local mara d'atra may rely on the reasoning found in the majority or minority opinions of the CJLS or have other textual and halakhic grounds, i.e., prioritizing Jewish values or legal concepts, to rule one way or another on matters of ritual, family life or sacred pursuits. This balance between a centralization of halakhic authority and maintaining the authority of local rabbis reflects the commitment to pluralism at the heart of the Movement. Organization and demographics The term Conservative Judaism was already in use, still generically and not yet as a specific label, in the 1887 dedication speech of the Jewish Theological Seminary of America by Rabbi Alexander Kohut. By 1901, the JTS alumni formed the Rabbinical Assembly, of which all ordained Conservative clergy in the world are members. As of 2010, there were 1,648 rabbis in the RA. In 1913, the United Synagogue of America, renamed the United Synagogue of Conservative Judaism in 1991, was founded as a congregational arm of the RA. The movement established the World Council of Conservative Synagogues in 1957. Offshoots outside North America mostly adopted the Hebrew name "Masorti" ("traditional"), as did the Israeli Masorti Movement, founded in 1979, and the British Assembly of Masorti Synagogues, formed in 1985. The World Council eventually changed its name to "Masorti Olami" (Masorti International). Besides the RA, the international Cantors Assembly supplies prayer leaders for congregations worldwide. The United Synagogue of Conservative Judaism, covering the United States, Canada and Mexico, is by far the largest constituent of Masorti Olami. While most congregations defining themselves as "Conservative" are affiliated with the USCJ, some are independent. While accurate information on Canada is scant, it is estimated that about a third of religiously affiliated Canadian Jews are Conservative. In 2008, the more traditional Canadian Council of Conservative Synagogues seceded from the parent organization. It numbered seven communities as of 2014. According to the Pew Research Center survey in 2013, 18 per cent of Jews in the United States identified with the movement, making it the second largest in the country. Steven M. Cohen calculated that as of 2013, 962,000 U.S. Jewish adults considered themselves Conservative: 570,000 were registered congregants and a further 392,000 were not synagogue members but identified with the movement. In addition, Cohen assumed in 2006 that 57,000 unconverted non-Jewish spouses were also registered (12 per cent of member households had one at the time); 40 per cent of members intermarry. Conservatives are also the oldest group: among those aged under 30 only 11 per cent identified as such, and there are three people over 55 for every one aged between 35 and 44. As of November 2015, the USCJ had 580 member congregations (a sharp decline from 630 two years prior), 19 in Canada and the remainder in the United States. 
In 2011 the USCJ initiated a plan to reinvigorate the movement. Beyond North America, the movement has little presence—in 2011, Rela Mintz Geffen estimated there were only 100,000 members outside the U.S. (with the earlier figure including Canada). "Masorti AmLat", the Masorti Olami branch in Latin America, is the largest, with 35 communities in Argentina, 7 in Brazil, 6 in Chile and a further 11 in other countries. The British Assembly of Masorti Synagogues has 13 communities and estimates its membership at over 4,000. More than 20 communities are spread across Europe, and there are 3 in Australia and 2 in Africa. The Masorti Movement in Israel incorporates some 70 communities and prayer groups with several thousand full members. In addition, while Hungarian Neolog Judaism, with a few thousand adherents and forty partially active synagogues, is not officially affiliated with Masorti Olami, Conservative Judaism regards it as a fraternal, "non-Orthodox but halakhic" movement. In New York, the JTS serves as the movement's original seminary and legacy institution, along with the Ziegler School of Rabbinic Studies at the American Jewish University in Los Angeles; the Marshall T. Meyer Latin American Rabbinical Seminary (Spanish: Seminario Rabínico Latinoamericano Marshall T. Meyer), in Buenos Aires, Argentina; and the Schechter Institute of Jewish Studies in Jerusalem. A Conservative institution that does not grant rabbinic ordination but which runs along the lines of a traditional yeshiva is the Conservative Yeshiva, located in Jerusalem. The Neolog Budapest University of Jewish Studies also maintains connections with Conservative Judaism. The current chancellor of the JTS is Shuly Rubin Schwartz, in office since 2020. She is the first woman elected to this position in the history of the JTS. The current dean of the Ziegler School of Rabbinic Studies is Bradley Shavit Artson. The Committee on Jewish Law and Standards is chaired by Rabbi Elliot N. Dorff, serving since 2007. The Rabbinical Assembly is headed by President Rabbi Debra Newman Kamin, as of 2019, and managed by Chief Executive Officer Rabbi Jacob Blumenthal, who also holds the joint position of CEO of the United Synagogue of Conservative Judaism. The current USCJ President is Ned Gladstein. In South America, Rabbi Ariel Stofenmacher serves as chancellor of the Seminary and Rabbi Marcelo Rittner as president of Masorti AmLat. In Britain, the Masorti Assembly is chaired by Senior Rabbi Jonathan Wittenberg. In Israel, the Masorti movement's executive director is Yizhar Hess and its chair is Sophie Fellman Rafalovitz. The global youth movement is known as NOAM, an acronym for No'ar Masorti; its North American organization is called United Synagogue Youth. Marom Israel is the Masorti movement's organization for students and young adults, providing activities based on religious pluralism and Jewish content. The Women's League for Conservative Judaism is also active in North America. The USCJ maintains the Solomon Schechter Day Schools, comprising 76 day schools in 17 American states and 2 Canadian provinces serving Jewish children. Many other "community day schools" that are not affiliated with Schechter take a generally Conservative approach, but unlike these, generally have "no barriers to enrollment based on the faith of the parents or on religious practices in the home". During the first decade of the 21st century, a number of schools that were part of the Schechter network transformed themselves into non-affiliated community day schools.
The USCJ also maintains the Camp Ramah system, where children and adolescents spend summers in an observant environment. History Positive-Historical School The rise of modern, centralized states in Europe by the early 19th century heralded the end of Jewish judicial autonomy and social seclusion. Their communal corporate rights were abolished, and the process of emancipation and acculturation that followed quickly transformed the values and norms of the public. Estrangement and apathy toward Judaism were rampant. The process of communal, educational and civil reform could not be kept from affecting the core tenets of the faith. The new academic, critical study of Judaism (Wissenschaft des Judentums) soon became a source of controversy. Rabbis and scholars argued to what degree, if at all, its findings could be used to determine present conduct. The modernized Orthodox in Germany, like rabbis Isaac Bernays and Azriel Hildesheimer, were content to cautiously study it while stringently adhering to the sanctity of holy texts and refusing to grant Wissenschaft any say in religious matters. At the other extreme were Rabbi Abraham Geiger, who would emerge as the founding father of Reform Judaism, and his supporters. They opposed any limit on critical research or its practical application, laying more weight on the need for change than on continuity. The Prague-born Rabbi Zecharias Frankel, appointed chief rabbi of the Kingdom of Saxony in 1836, gradually rose to become the leader of those who stood in the middle. Besides working for the civic betterment of local Jews and educational reform, he displayed keen interest in Wissenschaft. But Frankel was always cautious and deeply reverent towards tradition, privately writing in 1836 that "the means must be applied with such care and discretion... that forward progress will be reached unnoticed, and seem inconsequential to the average spectator." He soon found himself embroiled in the great disputes of the 1840s. In 1842, during the second Hamburg Temple controversy, he opposed the new Reform prayerbook, arguing that the elimination of petitions for a future Return to Zion led by the Messiah was a violation of an ancient tenet. But he also opposed the ban placed on the tome by Rabbi Bernays, calling it primitive behaviour. In the same year, he and the moderate conservative S.L. Rapoport were the only ones of nineteen respondents who negatively answered the Breslau community's enquiry on whether the deeply unorthodox Geiger could serve there. In 1843, Frankel clashed with the radical Reform rabbi Samuel Holdheim, who argued that the act of marriage in Judaism was a civil (memonot) rather than a sanctified matter and could be subject to the Law of the Land. In December 1843 Frankel launched the magazine Zeitschrift für die Religiösen Interessen des Judenthums. In the preamble, he attempted to present his approach to the present plight: "the further development of Judaism cannot be done through Reform that would lead to total dissipation... But must be involved in its study... pursued via scientific research, on a positive, historical basis." The term Positive-Historical became associated with him and his middle way. The Zeitschrift was, in keeping with the convictions of its publisher, neither dogmatically orthodox nor overly polemical, wholly opposing Biblical criticism and arguing for the antiquity of custom and practice.
In 1844, Geiger and like-minded allies arranged a conference in Braunschweig that was to have enough authority (since 1826, Rabbi Aaron Chorin had called for the convocation of a new Sanhedrin) to debate and enact thoroughgoing revisions. Frankel was willing to agree only to a meeting without any practical results, and refused the invitation. When the protocols, which contained many radical statements, were published, he denounced the assembly for "applying the scalpel of criticism" and favouring the spirit of the age over tradition. However, he later agreed to attend the second conference, held in Frankfurt am Main on 15 July 1845—in spite of warnings from Rapoport, who cautioned that compromise with Geiger was impossible and that he would only damage his reputation among the traditionalists. On the 16th, the issue of Hebrew in the liturgy arose. Most present were inclined to retain it, but with more German segments. A small majority adopted a resolution stating there were subjective, but no objective, imperatives to keep it as the language of service. Frankel then astounded his peers by vehemently protesting, stating it was a breach with the past and that Hebrew was of dire importance and great sentimental value. The others immediately began quoting all the passages in rabbinic literature allowing prayer in the vernacular. Frankel could not dispute the halakhic validity of their decision, but he perceived it as a sign of profound differences between them. On the 17th he formally withdrew, publishing a lambasting critique of the proceedings. "Opponents of the conference, who feared he went to the other side," noted historian Michael A. Meyer, "now felt reassured of his loyalty". The rabbi of Saxony had many sympathizers, who supported a similarly moderate approach and change only on the basis of the authority of the Talmud. When Geiger began preparing a third conference in Breslau, Hirsch Bär Fassel convinced Frankel to organize one of his own in protest. Frankel invited colleagues to an assembly in Dresden, which was to be held on 21 October 1846. He announced that one measure he was willing to countenance was the possible abolition of the second day of festivals, though only if a broad consensus were reached and not before thorough deliberation. Attendees were to include Rapoport, Fassel, Adolf Jellinek, Leopold Löw, Michael Sachs, Abraham Kohn and others. However, the Dresden assembly soon drew heated Orthodox resistance, especially from Rabbi Jacob Ettlinger, and was postponed indefinitely. In 1854, Frankel was appointed chancellor of the new Jewish Theological Seminary of Breslau, the first modern rabbinical seminary in Germany. His opponents on both flanks were incensed. Geiger and the Reform camp had long accused him of theological ambiguity, hypocrisy and attachment to stagnant remnants, and now protested the "medieval" atmosphere in the seminary, which was mainly concerned with teaching Jewish Law. The hardline Orthodox Samson Raphael Hirsch, who fiercely opposed Wissenschaft and emphasized the divine origin of the entire halakhic system in the Theophany at Sinai, was deeply suspicious of Frankel's beliefs, use of science and constant assertions that Jewish Law was flexible and evolving. The final schism between Frankel and the Orthodox occurred after the 1859 publication of his Darke ha-Mishna (Ways of the Mishna). He heaped praise on the Sages, presenting them as bold innovators, but never once affirmed the divinity of the Oral Torah.
On the ordinances classified as Law given to Moses at Sinai, he quoted Asher ben Jehiel, who stated that several of those were only apocryphally dubbed as such; he applied the latter's conclusion to all, noting they were "so evident as if given at Sinai". Hirsch branded Frankel a heretic, demanding he announce whether he believed that both the Oral and Written Torah were of celestial origin. Rabbis Benjamin Hirsch Auerbach, Solomon Klein and others published more complaisant tracts, but also requested an explanation. Rapoport rallied to Frankel's aid, assuring that his words merely reiterated ben Jehiel's and that he would soon release a statement that would belie Hirsch's accusations. But then the Chancellor of Breslau issued an ambiguous defence, writing that his book was not concerned with theology and avoiding giving any clear answer. Now even Rapoport joined his critics. Hirsch succeeded in severely tarnishing Frankel's reputation among most of those concerned. Along with fellow Orthodox Rabbi Azriel Hildesheimer, Hirsch launched a protracted public campaign through the 1860s. They ceaselessly stressed the chasm between an Orthodox understanding of Halakha as derived and revealed, applied differently to different circumstances and subject to human judgement and possibly error, yet unchanging and divine in principle—as opposed to an evolutionary, historicist and non-dogmatic approach in which past authorities were not just elaborating but consciously innovating, as taught by Frankel. Hildesheimer often repeated that this issue utterly overshadowed any specific technical argument with the Breslau School (the students of which were often more lenient on matters of headcovering for women, Chalav Yisrael and other issues). Hildesheimer was concerned that Jewish public opinion perceived no practical difference between them; though he cared to distinguish the observant acolytes of Frankel from the Reform camp, he noted in his diary: "how meager is the principal difference between the Breslau School, who don silk gloves at their work, and Geiger who wields a sledgehammer." In 1863, when Breslau faculty member Heinrich Graetz published an article in which he appeared to doubt the Messianic belief, Hildesheimer immediately seized upon the occasion to prove once more the dogmatic, rather than practical, divide. He denounced Graetz as a heretic. The Positive-Historical School was influential, but never institutionalized itself as thoroughly as its opponents. Apart from the many graduates of Breslau, Isaac Noah Mannheimer, Adolf Jellinek and Rabbi Moritz Güdemann led the central congregation in Vienna along a similar path. In Jellinek's local seminary, Meir Friedmann and Isaac Hirsch Weiss followed Frankel's moderate approach to critical research. The rabbinate of the liberal Neolog public in Hungary, which formally separated from the Orthodox, was also permeated with the "Breslau spirit".
Many of its members studied there, and its Jewish Theological Seminary of Budapest was modeled after it, though the assimilationist congregants cared little for rabbinic opinion. In Germany itself, Breslau alumni founded in 1868 a short-lived society, the Jüdisch-Theologische Verein. It was dissolved within a year, boycotted by both Reform and Orthodox. Michael Sachs led the Berlin congregation in a very conservative style, eventually resigning when an organ was introduced in services. Manuel Joël, another of the Frankelist party, succeeded Geiger in Breslau. He maintained his predecessor's truncated German translation of the liturgy for the sake of compromise, but restored the full Hebrew text. The Breslau Seminary and the Reform Hochschule für die Wissenschaft des Judentums maintained very different approaches; but on the communal level, the failure of the former's alumni to organize or articulate a coherent agenda, coupled with the declining prestige of Breslau and the conservatism of the Hochschule's alumni—a necessity in heterogeneous communities which remained unified, especially after the Orthodox gained the right to secede in 1876—imposed a rather uniform and mild character on what was known in Germany as "Liberal Judaism". In 1909, 63 rabbis associated with the Breslau approach founded the Freie jüdische Vereinigung, another brief attempt at institutionalization, but it too soon failed. Only in 1925 did the Religiöse Mittelpartei für Frieden und Einheit succeed in advancing the same agenda. It won several seats in communal elections, but was small and of little influence. Jewish Theological Seminary Jewish immigration to the United States bred an amalgam of loose communities, lacking strong tradition or stable structures. In this free-spirited environment, a multitude of forces was at work. As early as 1866, Rabbi Jonas Bondi of New York wrote that a Judaism of the "golden middleway, which was termed Orthodox by the left and heterodox or reformer by the right" had developed in the new country. The rapid ascendancy of Reform Judaism by the 1880s left few who opposed it: merely a handful of congregations and ministers remained outside the Union of American Hebrew Congregations. These included Sabato Morais and Rabbi Henry Pereira Mendes of the elitist Sephardi congregations, along with rabbis Bernard Drachman (ordained at Breslau, though he regarded himself as Orthodox) and Henry Schneeberger. While spearheaded by radical and principled Reformers like Rabbi Kaufmann Kohler, the UAHC was also home to more conservative elements. President Isaac Meyer Wise, a pragmatist intent on compromise, hoped to forge a broad consensus that would make a moderate version of Reform dominant in America. He kept the dietary laws at home and attempted to assuage traditionalists. On 11 July 1883, apparently due to negligence by the Jewish caterer, non-kosher dishes were served to UAHC rabbis in Wise's presence. Known to posterity as the "trefa banquet", it purportedly made some guests leave the hall in disgust, but little is factually known about the incident. In 1885, the traditionalist forces were bolstered upon the arrival of Rabbi Alexander Kohut, an adherent of Frankel. He publicly excoriated Reform for disdaining ritual and received forms, triggering a heated polemic with Kohler.
The debate was one of the main factors which motivated the latter to compose the Pittsburgh Platform, which unambiguously declared the principles of Reform Judaism: "to-day we accept as binding only the moral laws, and maintain only such ceremonies as elevate and sanctify our lives." The explicit wording alienated a handful of conservative UAHC ministers: Henry Hochheimer, Frederick de Sola Mendes, Aaron Wise, Marcus Jastrow, and Benjamin Szold. They joined Kohut, Morais and the others in seeking to establish a traditional rabbinic seminary that would serve as a counterweight to Hebrew Union College. In 1886, they founded the Jewish Theological Seminary of America in New York City. Kohut, a professor of Talmud who held to the Positive-Historical ideal, was the main educational influence in the early years, prominent among founders who encompassed the entire spectrum from the progressive Orthodox to the brink of Reform; to describe what the seminary intended to espouse, he used the term "Conservative Judaism", which had no independent meaning at the time and was used only in relation to Reform. In 1898, Pereira Mendes, Schneeberger and Drachman also founded the Orthodox Union, which maintained close ties with the seminary. The JTS was a small, fledgling institution with financial difficulties, ordaining barely one rabbi per year. But soon after Chancellor Morais' death in 1897, its fortunes turned. Since 1881, a wave of Jewish immigration from Eastern Europe had been inundating the country—by 1920, 2.5 million had arrived, increasing American Jewry tenfold. They came from regions where civil equality or emancipation were never granted, and where acculturation and modernization had made little headway. Whether devout or irreligious, they mostly retained strong traditional sentiments in matters of faith, accustomed to old-style rabbinate; the hardline Agudas HaRabbanim, founded by emigrant clergy, opposed secular education and vernacular sermons, and its members spoke almost only Yiddish. The Eastern Europeans were alienated by the local Jews, who were all assimilated in comparison, and were especially aghast at the mores of Reform. The need to find a religious framework that would both accommodate and Americanize them motivated Jacob Schiff and other rich philanthropists, all Reform and of German descent, to donate $500,000 to the JTS. The contribution was solicited by Professor Cyrus Adler. It was conditioned on the appointment of Solomon Schechter as Chancellor. In 1901, the Rabbinical Assembly was established as the fraternity of JTS alumni. Schechter arrived in 1902, and at once reorganized the faculty, dismissing both Pereira Mendes and Drachman for lack of academic merit. Under his aegis, the institute began to draw famous scholars, becoming a center of learning on par with HUC. Schechter was both traditional in sentiment and quite unorthodox in conviction. He maintained that theology was of little importance and that it was practice that must be preserved. He aspired to foster unity in American Judaism, denouncing sectarianism and not perceiving himself as leading a new denomination: "not to create a new party, but to consolidate an old one". The need to raise funds convinced him that a congregational arm for the Rabbinical Assembly and the JTS was required. On 23 February 1913, he founded the United Synagogue of America (since 1991: United Synagogue of Conservative Judaism), which then consisted of 22 communities.
He and Mendes soon came to a major disagreement; Schechter insisted that any alumnus could be appointed to the USoA's managerial board, not just those serving as communal rabbis, including several whom the latter did not consider sufficiently devout, or who tolerated mixed seating in their synagogues (though some of those he still regarded as Orthodox). Mendes, president of the Orthodox Union, therefore refused to join. He began to distinguish between the "Modern Orthodoxy" of himself and his peers in the OU, and "Conservatives" who tolerated what was beyond the pale for him. However, this first sign of institutionalization and separation was far from conclusive. Mendes himself could not clearly differentiate between the two groups, and many he viewed as Orthodox were members of the USoA. The epithets "Conservative" and "Orthodox" remained interchangeable for decades to come. JTS graduates served in OU congregations; many students of the Orthodox Rabbi Isaac Elchanan Theological Seminary and members of the OU's Rabbinical Council of America, or RCA, attended it. In 1926, RIETS and the JTS even negotiated a possible merger, though it never materialized. Upon Schechter's death in 1915, the first generation of his disciples kept his non-sectarian legacy of striving for a united, traditional American Judaism. He was replaced by Cyrus Adler. The USoA grew rapidly as the Eastern European immigrant population slowly integrated. In 1923 it already had 150 affiliated communities, and 229 before 1930. Synagogues offered a more modernized ritual: English sermons, choir singing, late Friday evening services which tacitly acknowledged that most congregants had to work until after the Sabbath began, and often mixed-gender seating: men and women sat separately but with no partition, and some houses of prayer had already introduced family pews. Motivated by popular pressure and frowned upon by both RA and seminary faculty—in its own synagogue, the institute maintained a partition until 1983—this was becoming common among the OU as well. As both social conditions and apathy turned American Jews away from tradition (barely 20 per cent were attending prayers weekly), a young professor named Mordecai Kaplan promoted the idea of transforming the synagogue into a community center, a "Shul with a Pool", a policy which indeed stymied the tide somewhat. In 1927, the RA also established its own Committee of Jewish Law, entrusted with determining halakhic issues. Consisting of seven members, it was chaired by the traditionalist Rabbi Louis Ginzberg, who had already distinguished himself in 1922 by drafting a responsum that allowed the use of grape juice rather than fermented wine for Kiddush against the background of Prohibition. Kaplan himself, who rose to become an influential and popular figure within the JTS, concluded that his fellow rabbis' ambiguity in matters of belief and the contradiction between full observance and critical study were untenable and hypocritical. He formulated his own approach of Judaism as a Civilization, rejecting the concept of Revelation and any supernatural belief in favour of a cultural-ethnic perception. While valuing received mores, he eventually suggested giving the past "a vote, not a veto". Though popular among students, Kaplan's nascent Reconstructionism was opposed by the new traditionalist Chancellor Louis Finkelstein, appointed in 1940, and a large majority among the faculty. Tensions within the JTS and RA grew.
The Committee of Jewish Law consisted mainly of scholars who had little field experience, drawn almost solely from the seminary's Talmudic department. They were greatly concerned with halakhic licitness and indifferent to the pressures exerted on the pulpit rabbis, who had to contend with an Americanized public that cared little for such considerations or for tradition in general. In 1935, the RA almost adopted a groundbreaking motion: Rabbi Louis Epstein offered a solution to the agunah predicament, a clause that would have had husbands appoint their wives as proxies to issue divorce. It was repealed under pressure from the Orthodox Union. As late as 1947, CJL Chair Rabbi Boaz Cohen, himself a historicist who argued that the Law had evolved much through time, rebuked pulpit clergy who requested lenient or radical rulings, stating that he and his peers were content to "progress in inches... Free setting up of new premises and the introduction of novel categories of ritual upon the basis of pure reason and thinking would be perilous, if not fatal, to the principles and continuity of Jewish Law." A third movement The boundaries between Orthodox and Conservative Judaism in America were institutionalized only in the aftermath of World War II. The 1940s saw the younger generation of JTS graduates grow less patient with the prudence of the CJL and Talmud faculty in the face of popular demand. Kaplan's Reconstructionism,
and OpenVMS; Collaborative Development Environment, a software development methodology; Comissões Democráticas Eleitorais, part of the former Portuguese Democratic Movement; Comitetul Democrat Evreiesc, or Jewish Democratic Committee; Concept development and experimentation, a technique for developing new ideas for military capabilities; Cde., an abbreviation of comrade; CDE, NYSE stock symbol for Coeur Mining; Cardholder Data Environment, part of the Payment Card Industry Data Security Standard for credit card
HP contributed the primary environment for CDE, which was based on HP's Visual User Environment (VUE). HP VUE was itself derived from the Motif Window Manager. IBM contributed its Common User Access model from OS/2's Workplace Shell. Sun contributed its ToolTalk application interaction framework and a port of its DeskSet productivity tools, including mail and calendar clients, from its OpenWindows environment. USL provided desktop manager components and scalable systems technologies from UNIX System V. After its release, HP endorsed CDE as the new standard desktop for Unix, and provided documentation and software for migrating HP VUE customizations to CDE. In March 1994 CDE became the responsibility of the "new OSF", a merger of the Open Software Foundation and Unix International; in September 1995, the merger of Motif and CDE into a single project, CDE/Motif, was announced. OSF became part of the newly formed Open Group in 1996. In February 1997, the Open Group released their last major version of CDE, version 2.1. Red Hat Linux was the only Linux distribution to which proprietary CDE was ported. In 1997, Red Hat began offering a version of CDE licensed from TriTeal Corporation. In 1998, Xi Graphics, a company specializing in the X Window System, offered a version of CDE bundled with Red Hat Linux, called Xi Graphics maXimum cde/OS. These were phased out, and Red Hat moved to the GNOME desktop. Until about 2000, users of Unix desktops regarded CDE as the de facto standard, but at that time other desktop environments such as GNOME and K Desktop Environment 2 were quickly maturing and becoming widespread on Linux systems. In 2001, Sun Microsystems announced that it would phase out CDE as the standard desktop environment in Solaris in favor of GNOME. Solaris 10, released in early 2005, includes both CDE and the GNOME-based Java Desktop System. The OpenSolaris project, begun around the same time, did not include CDE, and had no intent to make Solaris CDE available as open source. The original release of Solaris 11 in November 2011 contained only GNOME as the standard desktop, though some CDE libraries, such as Motif and ToolTalk, remained for binary compatibility. Oracle Solaris 11.4, released in August 2018, removed support for the CDE runtime environment and background services. Systems that provided proprietary CDE: IBM AIX; Digital UNIX; HP-UX (from version 10.10, released in 1996); IRIX (for a short time CDE was an alternative to IRIX Interactive Desktop); OpenVMS (available in OpenVMS Alpha V7.1 and onwards, referred to as the "DECWindows Motif New Desktop"); Solaris (available starting with 2.3, standard in 2.6 to 10); Tru64 UNIX; UnixWare; UXP/DS; Red Hat Linux (two versions, ported by TriTeal and Xi Graphics). License history From its launch until 2012, CDE was proprietary software. Motif, the toolkit on which CDE is built,
was released by The Open Group in 2000 as "Open Motif," under a "revenue sharing" license. That license did not meet either the open source or free software definitions. The Open Group had wished to make Motif open source, but did not succeed in doing so at that time. Release under the GNU LGPL In 2006, a petition was created asking The Open Group to release the source code for CDE and Motif under a free license. On August 6, 2012, CDE was released under the LGPL-2.0-or-later license. The CDE source code was then released to SourceForge. The free software project OpenCDE had been started in 2010 to reproduce the look and feel, organization, and feature set of CDE. In August 2012, when CDE was released as free software, OpenCDE was officially deprecated in favor of CDE. On October 23, 2012, the Motif widget toolkit was also released under the LGPL-2.1-or-later license. This allowed CDE to become a completely free and open source desktop environment. Shortly after CDE was released as free software, a Linux live CD based on Debian 6 with CDE 2.2.0c pre-installed, called CDEbian, was created. The live CD has since been discontinued. The Debian-based Linux distribution SparkyLinux offers binary packages of CDE that can be installed with APT. Development under CDE project In March 2014, the first stable release of CDE since it became free software, version 2.2.1, was made. Beginning with version 2.2.2, released in July 2014, CDE is able to compile under FreeBSD 10 with the default Clang compiler. Since version 2.3.0, released in July 2018, CDE uses TIRPC on Linux, so that the
was originally serialized in Analog Science Fiction and Fact in 1976, and was the last Dune novel to be serialized before book publication. At the end of Dune Messiah, Paul Atreides walks into the desert, a blind man, leaving his twin children Leto and Ghanima in the care of the Fremen, while his sister Alia rules the universe as regent. Awakened in the womb by the spice, the children are the heirs to Paul's prescient vision of the fate of the universe, a role that Alia desperately craves. House Corrino schemes to return to the throne, while the Bene Gesserit make common cause with the Tleilaxu and Spacing Guild to gain control of the spice and the children of Paul Atreides. Initially selling over 75,000 copies, it became the first hardcover best-seller ever in the science fiction field. The novel was critically well-received for its plot, action, and atmosphere, and was nominated for the Hugo Award for Best Novel in 1977. Dune Messiah (1969) and Children of Dune were collectively adapted by the Sci-Fi Channel in 2003 into a miniseries titled Frank Herbert's Children of Dune. Plot Nine years after Emperor Paul Muad'Dib walked into the desert, blind, the ecological transformation of Dune has reached the point where some Fremen are living without stillsuits in the less arid climate and have started to move out of the sietches and into the villages and cities. As the old ways erode, more and more pilgrims arrive to experience the planet of Muad'Dib. The Imperial high council has lost its political might and is powerless to control the Jihad. Paul's young twin children, Leto II and Ghanima, have concluded that their guardian Alia has succumbed to Abomination—possession by her grandfather Baron Vladimir Harkonnen—and fear that a similar fate awaits them. They (and Alia) also realize that the terraforming of Dune will kill all the sandworms, thus destroying the source of the spice, but Harkonnen desires this outcome. Leto also fears that, like his father, he will become trapped by his prescience. Meanwhile, a new
begins to mentor Farad'n. He seizes power from his regent mother Wensicia and allies with the Bene Gesserit, who promise to marry him to Ghanima and support his bid to become Emperor. A band of Fremen outlaws capture Leto and force him to undergo the spice trance at the suggestion of Gurney Halleck, who has infiltrated the group on Jessica's orders. Leto's spice-induced visions show him myriad possible futures where humanity becomes extinct and only one where it survives. He names this future "The Golden Path" and resolves to bring it to fruition—something that his father, who had already glimpsed this future, refused to do. He escapes his captors and sacrifices his humanity in pursuit of the Golden Path by physically fusing with a school of sandtrout, gaining superhuman strength and near-invulnerability. He travels across the desert and confronts the Preacher, who is indeed Paul. Duncan returns to Arrakis and provokes Stilgar into killing him so that Stilgar is forced to take Ghanima and go into hiding. Alia recaptures Ghanima and arranges her marriage to Farad'n, planning to exploit the expected chaos when Ghanima kills him to avenge her brother's murder. The Preacher and Leto return to the capital to confront Alia. Upon their arrival, Paul is murdered, to Alia's horror. Leto reveals himself in a display of superhuman strength and triggers the return of Ghanima's genuine memories. He confronts Alia and offers to help her overcome her possession, but Harkonnen resists. Alia manages to take her own life by throwing herself off a high balcony. Leto declares himself Emperor and asserts control over the Fremen. Farad'n enlists in his service and delivers control of the Corrino armies. Leto marries his sister Ghanima to further his goals, though Farad'n will be her true consort so that the Atreides line can continue. Publication history Parts of Dune Messiah and Children of Dune were written before Dune was completed. Children of Dune was originally serialized in Analog Science Fiction and Fact in 1976, and was the last Dune novel to be serialized before book publication. Dune Messiah and Children of Dune were published in one volume by the Science Fiction Book Club in 2002. Analysis Herbert likened the initial trilogy of novels (Dune, Dune Messiah, and Children of Dune) to a fugue, and while Dune was a heroic melody, Dune Messiah was its inversion. Paul rises to power in Dune by seizing control of the single critical resource in the universe, melange. His enemies are dead or overthrown, and he is
garden", in lieu of the Leibnizian mantra of Pangloss, "all is for the best" in the "best of all possible worlds". Candide is characterized by its tone as well as by its erratic, fantastical, and fast-moving plot. A picaresque novel with a story similar to that of a more serious coming-of-age narrative (Bildungsroman), it parodies many adventure and romance clichés, the struggles of which are caricatured in a tone that is bitter and matter-of-fact. Still, the events discussed are often based on historical happenings, such as the Seven Years' War and the 1755 Lisbon earthquake. As philosophers of Voltaire's day contended with the problem of evil, so does Candide in this short theological novel, albeit more directly and humorously. Voltaire ridicules religion, theologians, governments, armies, philosophies, and philosophers. Through Candide, he assaults Leibniz and his optimism. Candide has enjoyed both great success and great scandal. Immediately after its secretive publication, the book was widely banned to the public because it contained religious blasphemy, political sedition, and intellectual hostility hidden under a thin veil of naïveté. However, with its sharp wit and insightful portrayal of the human condition, the novel has since inspired many later authors and artists to mimic and adapt it. Today, Candide is considered as Voltaire's magnum opus and is often listed as part of the Western canon. It is among the most frequently taught works of French literature. The British poet and literary critic Martin Seymour-Smith listed Candide as one of the 100 most influential books ever written. Historical and literary background A number of historical events inspired Voltaire to write Candide, most notably the publication of Leibniz's "Monadology" (a short metaphysical treatise), the Seven Years' War, and the 1755 Lisbon earthquake. Both of the latter catastrophes are frequently referred to in Candide and are cited by scholars as reasons for its composition. The 1755 Lisbon earthquake, tsunami, and resulting fires of All Saints' Day, had a strong influence on theologians of the day and on Voltaire, who was himself disillusioned by them. The earthquake had an especially large effect on the contemporary doctrine of optimism, a philosophical system founded on the theodicy of Gottfried Wilhelm Leibniz, which insisted on God's benevolence in spite of such events. This concept is often put into the form, "all is for the best in the best of all possible worlds" (). Philosophers had trouble fitting the horrors of this earthquake into their optimistic world view. Voltaire actively rejected Leibnizian optimism after the natural disaster, convinced that if this were the best possible world, it should surely be better than it is. In both Candide and ("Poem on the Lisbon Disaster"), Voltaire attacks this optimist belief. He makes use of the Lisbon earthquake in both Candide and his to argue this point, sarcastically describing the catastrophe as one of the most horrible disasters "in the best of all possible worlds". Immediately after the earthquake, unreliable rumours circulated around Europe, sometimes overestimating the severity of the event. Ira Wade, a noted expert on Voltaire and Candide, has analyzed which sources Voltaire might have referenced in learning of the event. Wade speculates that Voltaire's primary source for information on the Lisbon earthquake was the 1755 work by Ange Goudar. 
Apart from such events, contemporaneous stereotypes of the German personality may have been a source of inspiration for the text, as they were for Simplicius Simplicissimus, a 1669 satirical picaresque novel written by Hans Jakob Christoffel von Grimmelshausen and inspired by the Thirty Years' War. The protagonist of this novel, who was supposed to embody stereotypically German characteristics, is quite similar to the protagonist of Candide. These stereotypes, according to Voltaire biographer Alfred Owen Aldridge, include "extreme credulousness or sentimental simplicity", two of Candide's and Simplicius's defining qualities. Aldridge writes, "Since Voltaire admitted familiarity with fifteenth-century German authors who used a bold and buffoonish style, it is quite possible that he knew Simplicissimus as well." A satirical and parodic precursor of Candide, Jonathan Swift's Gulliver's Travels (1726) is one of Candide's closest literary relatives. This satire tells the story of "a gullible ingenue", Gulliver, who (like Candide) travels to several "remote nations" and is hardened by the many misfortunes which befall him. As evidenced by similarities between the two books, Voltaire probably drew upon Gulliver's Travels for inspiration while writing Candide. Other probable sources of inspiration for Candide are a 1699 work by François Fénelon and a 1753 work by Louis-Charles Fougeret de Monbron. Candide's parody of the Bildungsroman is probably based on Fénelon's work, which includes the prototypical parody of the tutor on whom Pangloss may have been partly based. Likewise, Monbron's protagonist undergoes a disillusioning series of travels similar to those of Candide. Creation Born François-Marie Arouet, Voltaire (1694–1778), by the time of the Lisbon earthquake, was already a well-established author, known for his satirical wit. He had been made a member of the Académie Française in 1746. He was a deist, a strong proponent of religious freedom, and a critic of tyrannical governments. Candide became part of his large, diverse body of philosophical, political and artistic works expressing these views. More specifically, it was a model for the eighteenth- and early nineteenth-century novels called the contes philosophiques. This genre, of which Voltaire was one of the founders, included previous works of his such as Zadig and Micromegas. It is unknown exactly when Voltaire wrote Candide, but scholars estimate that it was primarily composed in late 1758 and begun as early as 1757. Voltaire is believed to have written a portion of it while living at Les Délices near Geneva and also while visiting Charles Théodore, the Elector Palatine, at Schwetzingen for three weeks in the summer of 1758. Despite solid evidence for these claims, a popular legend persists that Voltaire wrote Candide in three days. This idea is probably based on a misreading of an 1885 work by Lucien Perey (real name: Clara Adèle Luce Herpin) and Gaston Maugras. The evidence indicates strongly that Voltaire did not rush or improvise Candide, but worked on it over a significant period of time, possibly even a whole year. Candide is mature and carefully developed, not impromptu, as the intentionally choppy plot and the aforementioned myth might suggest. There is only one extant manuscript of Candide that was written before the work's 1759 publication; it was discovered in 1956 by Wade and since named the La Vallière Manuscript. It is believed to have been sent, chapter by chapter, by Voltaire to the Duke and Duchess La Vallière in the autumn of 1758.
The manuscript was sold to the Bibliothèque de l'Arsenal in the late eighteenth century, where it remained undiscovered for almost two hundred years. The La Vallière Manuscript, the most original and authentic of all surviving copies of Candide, was probably dictated by Voltaire to his secretary, Jean-Louis Wagnière, then edited directly. In addition to this manuscript, there is believed to have been another, one copied by Wagnière for the Elector Charles-Théodore, who hosted Voltaire during the summer of 1758. The existence of this copy was first postulated by Norman L. Torrey in 1929. If it exists, it remains undiscovered. Voltaire published Candide simultaneously in five countries no later than 15 January 1759, although the exact date is uncertain. Seventeen versions of Candide from 1759, in the original French, are known today, and there has been great controversy over which is the earliest. More versions were published in other languages: Candide was translated once into Italian and thrice into English that same year. The complicated science of calculating the relative publication dates of all of the versions of Candide is described at length in Wade's article "The First Edition of Candide: A Problem of Identification". The publication process was extremely secretive, probably the "most clandestine work of the century", because of the book's obviously illicit and irreverent content. The greatest number of copies of Candide were published concurrently in Geneva by Cramer, in Amsterdam by Marc-Michel Rey, in London by Jean Nourse, and in Paris by Lambert. Candide underwent one major revision after its initial publication, in addition to some minor ones. In 1761, a version of Candide was published that included, along with several minor changes, a major addition by Voltaire to the twenty-second chapter, a section that had been thought weak by the Duke of La Vallière. The English title of this edition was Candide, or Optimism, Translated from the German of Dr. Ralph. With the additions found in the Doctor's pocket when he died at Minden, in the Year of Grace 1759. The last edition of Candide authorised by Voltaire was the one included in Cramer's 1775 edition of his complete works, known as l'édition encadrée, in reference to the border or frame around each page. Voltaire strongly opposed the inclusion of illustrations in his works, as he stated in a 1778 letter to the writer and publisher Charles Joseph Panckoucke. Despite this protest, two sets of illustrations for Candide were produced by the French artist Jean-Michel Moreau le Jeune. The first version was done, at Moreau's own expense, in 1787 and included in Kehl's publication of that year, Oeuvres Complètes de Voltaire. Four images were drawn by Moreau for this edition and were engraved by Pierre-Charles Baquoy. The second version, in 1803, consisted of seven drawings by Moreau which were transposed by multiple engravers. The twentieth-century modern artist Paul Klee stated that it was while reading Candide that he discovered his own artistic style. Klee illustrated the work, and his drawings were published in a 1920 version edited by Kurt Wolff. List of characters Main characters Candide: The title character. The illegitimate son of the sister of the Baron of Thunder-ten-Tronckh. In love with Cunégonde. Cunégonde: The daughter of the Baron of Thunder-ten-Tronckh. In love with Candide. Professor Pangloss: The royal educator of the court of the baron. Described as "the greatest philosopher of the Holy Roman Empire".
The Old Woman: Cunégonde's maid while she is the mistress of Don Issachar and the Grand Inquisitor of Portugal. Flees with Candide and Cunégonde to the New World. Illegitimate daughter of Pope Urban X. Cacambo: From a Spanish father and a Peruvian mother. Lived half his life in Spain and half in Latin America. Candide's valet while in America. Martin: Dutch amateur philosopher and Manichaean. Meets Candide in Suriname, travels with him afterwards. The Baron of Thunder-ten-Tronckh: Brother of Cunégonde. Is seemingly killed by the Bulgarians, but becomes a Jesuit in Paraguay. Disapproves of Candide and Cunégonde's marriage. Secondary characters The baron and baroness of Thunder-ten-Tronckh: Father and mother of Cunégonde and the second baron. Both slain by the Bulgarians. The king of the Bulgarians. Jacques the Anabaptist: Saves Candide from a lynching in the Netherlands. Drowns in the port of Lisbon after saving another sailor's life. Don Issachar: Jewish landlord in Portugal. Cunégonde becomes his mistress, shared with the Grand Inquisitor of Portugal. Killed by Candide. The Grand Inquisitor of Portugal: Sentences Candide and Pangloss at the auto-da-fé. Cunégonde is his mistress jointly with Don Issachar. Killed by Candide. Don Fernando d'Ibarra y Figueroa y Mascarenes y Lampourdos y Souza: Spanish governor of Buenos Aires. Wants Cunégonde as a mistress. The king of El Dorado, who helps Candide and Cacambo out of El Dorado, lets them pick gold from the ground, and makes them rich. Mynheer Vanderdendur: Dutch ship captain. Offers to take Candide from America to France for 30,000 gold coins, but then departs without him, stealing all his riches. The abbot of Périgord: Befriends Candide and Martin, leads the police to arrest them; he and the police officer accept three diamonds each and release them. The marchioness of Parolignac: Parisian wench who takes an elaborate title. The scholar: One of the guests of the "marchioness". Argues with Candide about art. Paquette: A chambermaid from Thunder-ten-Tronckh who gave Pangloss syphilis. After the slaying by the Bulgarians, works as a prostitute and becomes the property of Friar Giroflée. Friar Giroflée: Theatine friar. In love with the prostitute Paquette. Signor Pococurante: A Venetian noble. Candide and Martin visit his estate, where he discusses his disdain of most of the canon of great art. In an inn in Venice, Candide and Martin dine with six men who turn out to be deposed monarchs: Ahmed III, Ivan VI of Russia, Charles Edward Stuart, Augustus III of Poland, Stanisław Leszczyński, and Theodore of Corsica. Synopsis Candide contains thirty episodic chapters, which may be grouped into two main schemes: one consists of two divisions, separated by the protagonist's hiatus in El Dorado; the other consists of three parts, each defined by its geographical setting. By the former scheme, the first half of Candide constitutes the rising action and the last part the resolution. This view is supported by the strong theme of travel and quest, reminiscent of adventure and picaresque novels, which tend to employ such a dramatic structure. By the latter scheme, the thirty chapters may be grouped into three parts, each comprising ten chapters and defined by locale: I–X are set in Europe, XI–XX are set in the Americas, and XXI–XXX are set in Europe and the Ottoman Empire. The plot summary that follows uses this second format and includes Voltaire's additions of 1761.
Chapters I–X The tale of Candide begins in the castle of the Baron Thunder-ten-Tronckh in Westphalia, home to the Baron's daughter, Lady Cunégonde; his bastard nephew, Candide; a tutor, Pangloss; a chambermaid, Paquette; and the rest of the Baron's family. The protagonist, Candide, is romantically attracted to Cunégonde. He is a young man of "the most unaffected simplicity", whose face is "the true index of his mind". Dr. Pangloss, professor of "metaphysico-theologo-cosmolonigology" and self-proclaimed optimist, teaches his pupils that they live in the "best of all possible worlds" and that "all is for the best". All is well in the castle until Cunégonde sees Pangloss sexually engaged with Paquette in some bushes. Encouraged by this show of affection, Cunégonde drops her handkerchief next to Candide, enticing him to kiss her. For this infraction, Candide is evicted from the castle, at which point he is captured by Bulgar (Prussian) recruiters and coerced into military service, where he is flogged, nearly executed, and forced to participate in a major battle between the Bulgars and the Avars (an allegory representing the Prussians and the French). Candide eventually escapes the army and makes his way to Holland where he is given aid by Jacques, an Anabaptist, who strengthens Candide's optimism. Soon after, Candide finds his master Pangloss, now a beggar with syphilis. Pangloss reveals he was infected with this disease by Paquette and shocks Candide by relating how Castle Thunder-ten-Tronckh was destroyed by Bulgars, that Cunégonde and her whole family were killed, and that Cunégonde was raped before her death. Pangloss is cured of his illness by Jacques, losing one eye and one ear in the process, and the three set sail to Lisbon. In Lisbon's harbor, they are overtaken by a vicious storm which destroys the boat. Jacques attempts to save a sailor, and in the process is thrown overboard. The sailor makes no move to help the drowning Jacques, and Candide is in a state of despair until Pangloss explains to him that Lisbon harbor was created in order for Jacques to drown. Only Pangloss, Candide, and the "brutish sailor" who let Jacques drown survive the wreck and reach Lisbon, which is promptly hit by an earthquake, tsunami and fire that kill tens of thousands. The sailor leaves in order to loot the rubble while Candide, injured and begging for help, is lectured on the optimistic view of the situation by Pangloss. The next day, Pangloss discusses his optimistic philosophy with a member of the Portuguese Inquisition, and he and Candide are arrested for heresy, set to be tortured and killed in an auto-da-fé set up to appease God and prevent another disaster. Candide is flogged and sees Pangloss hanged, but another earthquake intervenes and he escapes. He is approached by an old woman, who leads him to a house where Lady Cunégonde waits, alive. Candide is surprised: Pangloss had told him that Cunégonde had been raped and disemboweled. She had been, but Cunégonde points out that people survive such things. However, her rescuer sold her to a Jewish merchant, Don Issachar, who was then threatened by a corrupt Grand Inquisitor into sharing her (Don Issachar gets Cunégonde on Mondays, Wednesdays, and the sabbath day). Her owners arrive, find her with another man, and Candide kills them both. Candide and the two women flee the city, heading to the Americas. Along the way, Cunégonde falls into self-pity, complaining of all the misfortunes that have befallen her.
Chapters XI–XX The old woman reciprocates by revealing her own tragic life: born the daughter of Pope Urban X and the Princess of Palestrina, she was kidnapped and enslaved by Barbary pirates, witnessed violent civil wars in Morocco under the bloodthirsty King Moulay Ismaïl (during which her mother was drawn and quartered), suffered constant hunger, nearly died from a plague in Algiers, and had a buttock cut off to feed starving Janissaries during the Russian capture of Azov. After traversing all the Russian Empire, she eventually became a servant of Don Issachar and met Cunégonde. The trio arrives in Buenos Aires, where Governor Don Fernando d'Ibarra y Figueroa y Mascarenes y Lampourdos y Souza asks to marry Cunégonde. Just then, an alcalde (a Spanish magistrate) arrives, pursuing Candide for killing the Grand Inquisitor. Leaving the women behind, Candide flees to Paraguay with his practical and heretofore unmentioned manservant, Cacambo. At a border post on the
drawings were published in a 1920 version edited by Kurt Wolff. List of characters Main characters Candide: The title character. The illegitimate son of the sister of the Baron of Thunder-ten-Tronckh. In love with Cunégonde. Cunégonde: The daughter of the Baron of Thunder-ten-Tronckh. In love with Candide. Professor Pangloss: The royal educator of the court of the baron. Described as "the greatest philosopher of the Holy Roman Empire". The Old Woman: Cunégonde's maid while she is the mistress of Don Issachar and the Grand Inquisitor of Portugal. Flees with Candide and Cunégonde to the New World. Illegitimate daughter of Pope Urban X. Cacambo: From a Spanish father and a Peruvian mother. Lived half his life in Spain and half in Latin America. Candide's valet while in America. Martin: Dutch amateur philosopher and Manichaean. Meets Candide in Suriname, travels with him afterwards. The Baron of Thunder-ten-Tronckh: Brother of Cunégonde. Is seemingly killed by the Bulgarians, but becomes a Jesuit in Paraguay. Disapproves of Candide and Cunegonde's marriage. Secondary characters The baron and baroness of Thunder-ten-Tronckh: Father and mother of Cunégonde and the second baron. Both slain by the Bulgarians. The king of the Bulgarians. Jacques the Anabaptist: Saves Candide from a lynching in the Netherlands. Drowns in the port of Lisbon after saving another sailor's life. Don Issachar: Jewish landlord in Portugal. Cunégonde becomes his mistress, shared with the Grand Inquisitor of Portugal. Killed by Candide. The Grand Inquisitor of Portugal: Sentences Candide and Pangloss at the auto-da-fé. Cunégonde is his mistress jointly with Don Issachar. Killed by Candide. Don Fernando d'Ibarra y Figueroa y Mascarenes y Lampourdos y Souza: Spanish governor of Buenos Aires. Wants Cunégonde as a mistress. The king of El Dorado, who helps Candide and Cacambo out of El Dorado, lets them pick gold from the grounds, and makes them rich. Mynheer Vanderdendur: Dutch ship captain. Offers to take Candide from America to France for 30,000 gold coins, but then departs without him, stealing all his riches. The abbot of Périgord: Befriends Candide and Martin, leads the police to arrest them; he and the police officer accept three diamonds each and release them. The marchioness of Parolignac: Parisian wench who takes an elaborate title. The scholar: One of the guests of the "marchioness". Argues with Candide about art. Paquette: A chambermaid from Thunder-ten-Tronckh who gave Pangloss syphilis. After the slaying by the Bulgarians, works as a prostitute and becomes the property of Friar Giroflée. Friar Giroflée: Theatine friar. In love with the prostitute Paquette. Signor Pococurante: A Venetian noble. Candide and Martin visit his estate, where he discusses his disdain of most of the canon of great art. In an inn in Venice, Candide and Martin dine with six men who turn out to be deposed monarchs: Ahmed III Ivan VI of Russia Charles Edward Stuart Augustus III of Poland Stanisław Leszczyński Theodore of Corsica Synopsis Candide contains thirty episodic chapters, which may be grouped into two main schemes: one consists of two divisions, separated by the protagonist's hiatus in El Dorado; the other consists of three parts, each defined by its geographical setting. By the former scheme, the first half of Candide constitutes the rising action and the last part the resolution. 
This view is supported by the strong theme of travel and quest, reminiscent of adventure and picaresque novels, which tend to employ such a dramatic structure. By the latter scheme, the thirty chapters may be grouped into three parts each comprising ten chapters and defined by locale: I–X are set in Europe, XI–XX are set in the Americas, and XXI–XXX are set in Europe and the Ottoman Empire. The plot summary that follows uses this second format and includes Voltaire's additions of 1761. Chapters I–X The tale of Candide begins in the castle of the Baron Thunder-ten-Tronckh in Westphalia, home to the Baron's daughter, Lady Cunégonde; his bastard nephew, Candide; a tutor, Pangloss; a chambermaid, Paquette; and the rest of the Baron's family. The protagonist, Candide, is romantically attracted to Cunégonde. He is a young man of "the most unaffected simplicity", whose face is "the true index of his mind". Dr. Pangloss, professor of "métaphysico-théologo-cosmolonigologie" (English: "metaphysico-theologo-cosmolonigology") and self-proclaimed optimist, teaches his pupils that they live in the "best of all possible worlds" and that "all is for the best". All is well in the castle until Cunégonde sees Pangloss sexually engaged with Paquette in some bushes. Encouraged by this show of affection, Cunégonde drops her handkerchief next to Candide, enticing him to kiss her. For this infraction, Candide is evicted from the castle, at which point he is captured by Bulgar (Prussian) recruiters and coerced into military service, where he is flogged, nearly executed, and forced to participate in a major battle between the Bulgars and the Avars (an allegory representing the Prussians and the French). Candide eventually escapes the army and makes his way to Holland where he is given aid by Jacques, an Anabaptist, who strengthens Candide's optimism. Soon after, Candide finds his master Pangloss, now a beggar with syphilis. Pangloss reveals he was infected with this disease by Paquette and shocks Candide by relating how Castle Thunder-ten-Tronckh was destroyed by Bulgars, that Cunégonde and her whole family were killed, and that Cunégonde was raped before her death. Pangloss is cured of his illness by Jacques, losing one eye and one ear in the process, and the three set sail to Lisbon. In Lisbon's harbor, they are overtaken by a vicious storm which destroys the boat. Jacques attempts to save a sailor, and in the process is thrown overboard. The sailor makes no move to help the drowning Jacques, and Candide is in a state of despair until Pangloss explains to him that Lisbon harbor was created in order for Jacques to drown. Only Pangloss, Candide, and the "brutish sailor" who let Jacques drown survive the wreck and reach Lisbon, which is promptly hit by an earthquake, tsunami and fire that kill tens of thousands. The sailor leaves in order to loot the rubble while Candide, injured and begging for help, is lectured on the optimistic view of the situation by Pangloss. The next day, Pangloss discusses his optimistic philosophy with a member of the Portuguese Inquisition, and he and Candide are arrested for heresy, set to be tortured and killed in an "auto-da-fé" set up to appease God and prevent another disaster. Candide is flogged and sees Pangloss hanged, but another earthquake intervenes and he escapes. He is approached by an old woman, who leads him to a house where Lady Cunégonde waits, alive. Candide is surprised: Pangloss had told him that Cunégonde had been raped and disemboweled. She had been, but Cunégonde points out that people survive such things. 
However, her rescuer sold her to a Jewish merchant, Don Issachar, who was then threatened by a corrupt Grand Inquisitor into sharing her (Don Issachar gets Cunégonde on Mondays, Wednesdays, and the sabbath day). Her owners arrive, find her with another man, and Candide kills them both. Candide and the two women flee the city, heading to the Americas. Along the way, Cunégonde falls into self-pity, complaining of all the misfortunes that have befallen her. Chapters XI–XX The old woman reciprocates by revealing her own tragic life: born the daughter of Pope Urban X and the Princess of Palestrina, she was kidnapped and enslaved by Barbary pirates, witnessed violent civil wars in Morocco under the bloodthirsty King Moulay Ismaïl (during which her mother was drawn and quartered), suffered constant hunger, nearly died from a plague in Algiers, and had a buttock cut off to feed starving Janissaries during the Russian capture of Azov. After traversing all the Russian Empire, she eventually became a servant of Don Issachar and met Cunégonde. The trio arrives in Buenos Aires, where Governor Don Fernando d'Ibarra y Figueroa y Mascarenes y Lampourdos y Souza asks to marry Cunégonde. Just then, an alcalde (a Spanish magistrate) arrives, pursuing Candide for killing the Grand Inquisitor. Leaving the women behind, Candide flees to Paraguay with his practical and heretofore unmentioned manservant, Cacambo. At a border post on the way to Paraguay, Cacambo and Candide speak to the commandant, who turns out to be Cunégonde's unnamed brother. He explains that after his family was slaughtered, the Jesuits' preparation for his burial revived him, and he has since joined the order. When Candide proclaims he intends to marry Cunégonde, her brother attacks him, and Candide runs him through with his rapier. After lamenting all the people (mainly priests) he has killed, he and Cacambo flee. In their flight, Candide and Cacambo come across two naked women being chased and bitten by a pair of monkeys. Candide, seeking to protect the women, shoots and kills the monkeys, but is informed by Cacambo that the monkeys and women were probably lovers. Cacambo and Candide are captured by Oreillons, or Orejones; members of the Inca nobility who widened the lobes of their ears, and are depicted here as the fictional inhabitants of the area. Mistaking Candide for a Jesuit by his robes, the Oreillons prepare to cook Candide and Cacambo; however, Cacambo convinces the Oreillons that Candide killed a Jesuit to procure the robe. Cacambo and Candide are released and travel for a month on foot and then down a river by canoe, living on fruits and berries. After a few more adventures, Candide and Cacambo wander into El Dorado, a geographically isolated utopia where the streets are covered with precious stones, there exist no priests, and all of the king's jokes are funny. Candide and Cacambo stay a month in El Dorado, but Candide is still in pain without Cunégonde, and expresses to the king his wish to leave. The king points out that this is a foolish idea, but generously helps them do so. The pair continue their journey, now accompanied by one hundred red pack sheep carrying provisions and incredible sums of money, which they slowly lose or have stolen over the next few adventures. Candide and Cacambo eventually reach Suriname where they split up: Cacambo travels to Buenos Aires to retrieve Lady Cunégonde, while Candide prepares to travel to Europe to await the two. 
Candide's remaining sheep are stolen, and Candide is fined heavily by a Dutch magistrate for petulance over the theft. Before leaving Suriname, Candide feels in need of companionship, so he interviews a number of local men who have been through various ill-fortunes and settles on a man named Martin. Chapters XXI–XXX This companion, Martin, is a Manichaean scholar based on the real-life pessimist Pierre Bayle, who was a chief opponent of Leibniz. For the remainder of the voyage, Martin and Candide argue about philosophy, Martin painting the entire world as occupied by fools. Candide, however, remains an optimist at heart, since it is all he knows. After a detour to Bordeaux and Paris, they arrive in England and see an admiral (based on Admiral Byng) being shot for not killing enough of the enemy. Martin explains that Britain finds it necessary to shoot an admiral from time to time "pour encourager les autres" (to encourage the others). Candide, horrified, arranges for them to leave Britain immediately. Upon their arrival in Venice, Candide and Martin meet Paquette, the chambermaid who infected Pangloss with his syphilis. She is now a prostitute, and is spending her time with a Theatine monk, Brother Giroflée. Although both appear happy on the surface, they reveal their despair: Paquette has led a miserable existence as a sexual object, and the monk detests the religious order in which he was indoctrinated. Candide gives two thousand piastres to Paquette and one thousand to Brother Giroflée. Candide and Martin visit the Lord Pococurante, a noble Venetian. That evening, Cacambo—now a slave—arrives and informs Candide that Cunégonde is in Constantinople. Prior to their departure, Candide and Martin dine with six strangers who had come for the Carnival of Venice. These strangers are revealed to be dethroned kings: the Ottoman Sultan Ahmed III, Emperor Ivan VI of Russia, Charles Edward Stuart (an unsuccessful pretender to the English throne), Augustus III of Poland (deprived, at the time of writing, of his reign in the Electorate of Saxony due to the Seven Years' War), Stanisław Leszczyński, and Theodore of Corsica. On the way to Constantinople, Cacambo reveals that Cunégonde—now horribly ugly—currently washes dishes on the banks of the Propontis as a slave for a Transylvanian prince by the name of Rákóczi. After arriving at the Bosphorus, they board a galley where, to Candide's surprise, he finds Pangloss and Cunégonde's brother among the rowers. Candide buys their freedom and further passage at steep prices. They both relate how they survived, but despite the horrors he has been through, Pangloss's optimism remains unshaken: "I still hold to my original opinions, because, after all, I'm a philosopher, and it wouldn't be proper for me to recant, since Leibniz cannot be wrong, and since pre-established harmony is the most beautiful thing in the world, along with the plenum and subtle matter." Candide, the baron, Pangloss, Martin, and Cacambo arrive at the banks of the Propontis, where they rejoin Cunégonde and the old woman. Cunégonde has indeed become hideously ugly, but Candide nevertheless buys their freedom and marries Cunégonde to spite her brother, who forbids Cunégonde from marrying anyone but a baron of the Empire (he is secretly sold back into slavery). Paquette and Brother Giroflée—having squandered their three thousand piastres—are reconciled with Candide on a small farm which he just bought with the last of his finances. 
One day, the protagonists seek out a dervish known as a great philosopher of the land. Candide asks him why Man is made to suffer so, and what they all ought to do. The dervish responds by asking rhetorically why Candide is concerned about the existence of evil and good. The dervish describes human beings as mice on a ship sent by a king to Egypt; their comfort does not matter to the king. The dervish then slams his door on the group. Returning to their farm, Candide, Pangloss, and Martin meet a Turk whose philosophy is to devote his life only to simple work and not concern himself with external affairs. He and his four children cultivate a small area of land, and the work keeps them "free of three great evils: boredom, vice, and poverty." Candide, Pangloss, Martin, Cunégonde, Paquette, Cacambo, the old woman, and Brother Giroflée all set to work on this "commendable plan" on their farm, each exercising his or her own talents. Candide ignores Pangloss's insistence that all turned out for the best by necessity, instead telling him "we must cultivate our garden" ("il faut cultiver notre jardin"). Style As Voltaire himself described it, the purpose of Candide was to "bring amusement to a small number of men of wit". The author achieves this goal by combining wit with a parody of the classic adventure-romance plot. Candide is confronted with horrible events described in painstaking detail so often that it becomes humorous. Literary theorist Frances K. Barasch described Voltaire's matter-of-fact narrative as treating topics such as mass death "as coolly as a weather report". The fast-paced and improbable plot—in which characters narrowly escape death repeatedly, for instance—allows for compounding tragedies to befall the same characters over and over again. In the end, Candide is primarily, as described by Voltaire's biographer Ian Davidson, "short, light, rapid and humorous". Behind the playful façade of Candide which has amused so many, there lies very harsh criticism of contemporary European civilization which angered many others. European governments such as France, Prussia, Portugal and England are each attacked ruthlessly by the author: the French and Prussians for the Seven Years' War, the Portuguese for their Inquisition, and the British for the execution of John Byng. Organised religion, too, is harshly treated in Candide. For example, Voltaire mocks the Jesuit order of the Roman Catholic Church. Aldridge provides a characteristic example of such anti-clerical passages for which the work was banned: while in Paraguay, Cacambo remarks, "[The Jesuits] are masters of everything, and the people have no money at all …". Here, Voltaire suggests the Christian mission in Paraguay is taking advantage of the local population. Voltaire depicts the Jesuits holding the indigenous peoples as slaves while they claim to be helping them. Satire The main method of Candide's satire is to contrast ironically great tragedy and comedy. The story does not invent or exaggerate evils of the world—it displays real ones starkly, allowing Voltaire to simplify subtle philosophies and cultural traditions, highlighting their flaws. Thus Candide derides optimism, for instance, with a deluge of horrible, historical (or at least plausible) events with no apparent redeeming qualities. A simple example of the satire of Candide is seen in the treatment of the historic event witnessed by Candide and Martin in Portsmouth harbour. 
There, the duo spy an anonymous admiral, supposed to represent John Byng, being executed for failing to properly engage a French fleet. The admiral is blindfolded and shot on the deck of his own ship, merely "to encourage the others" ("pour encourager les autres", an expression Voltaire is credited with originating). This depiction of military punishment trivializes Byng's death. The dry, pithy explanation "to encourage the others" thus satirises a serious historical event in characteristically Voltairian fashion. For its classic wit, this phrase has become one of the more often quoted from Candide. Voltaire depicts the worst of the world and his pathetic hero's desperate effort to fit it into an optimistic outlook. Almost all of Candide is a discussion of various forms of evil: its characters rarely find even temporary respite. There is at least one notable exception: the episode of El Dorado, a fantastic village in which the inhabitants are simply rational, and their society is just and reasonable. The positivity of El Dorado may be contrasted with the pessimistic attitude of most of the book. Even in this case, the bliss of El Dorado is fleeting: Candide soon leaves the village to seek Cunégonde, whom he eventually marries only out of a sense of obligation. Another element of the satire focuses on what William F. Bottiglia, author of many published works on Candide, calls the "sentimental foibles of the age" and Voltaire's attack on them. Flaws in European culture are highlighted as Candide parodies adventure and romance clichés, mimicking the style of a picaresque novel. A number of archetypal characters thus have recognisable manifestations in Voltaire's work: Candide is supposed to be the drifting rogue of low social class, Cunégonde the sex interest, Pangloss the knowledgeable mentor and Cacambo the skilful valet. As the plot unfolds, readers find that Candide is no rogue, Cunégonde becomes ugly and Pangloss is a stubborn fool. The characters of Candide are unrealistic, two-dimensional, mechanical, and even marionette-like; they are simplistic and stereotypical. As the initially naïve protagonist eventually comes to a mature conclusion—however noncommittal—the novella is a bildungsroman, if not a very serious one. Garden motif Gardens are thought by many critics to play a critical symbolic role in Candide. The first location commonly identified as a garden is the castle of the Baron, from which Candide and Cunégonde are evicted much in the same fashion as Adam and Eve are evicted from the Garden of Eden in the Book of Genesis. Cyclically, the main characters of Candide conclude the novel in a garden of their own making, one which might represent celestial paradise. The third most prominent "garden" is El Dorado, which may be a false Eden. Other possibly symbolic gardens include the Jesuit pavilion, the garden of Pococurante, Cacambo's garden, and the Turk's garden. These gardens are probably references to the Garden of Eden, but it has also been proposed, by Bottiglia, for example, that the gardens refer also to the Encyclopédie, and that Candide's conclusion to cultivate "his garden" symbolises Voltaire's great support for this endeavour. Candide and his companions, as they find themselves at the end of the novella, are in a very similar position to Voltaire's tightly knit philosophical circle which supported the Encyclopédie: the main characters of Candide live in seclusion to "cultivate [their] garden", just as Voltaire suggested his colleagues leave society to write. 
In addition, there is evidence in the epistolary correspondence of Voltaire that he had elsewhere used the metaphor of gardening to describe writing the Encyclopédie. Another interpretative possibility is that Candide cultivating "his garden" suggests his engaging in only necessary occupations, such as feeding oneself and fighting boredom. This is analogous to Voltaire's own view on gardening: he was himself a gardener at his estates in Les Délices and Ferney, and he often wrote in his correspondence that gardening was an important pastime of his own, it being an extraordinarily effective way to keep busy. Philosophy Optimism Candide satirises various philosophical and religious theories that Voltaire had previously criticised. Primary among these is Leibnizian optimism (sometimes called Panglossianism after its fictional proponent), which Voltaire ridicules with descriptions of seemingly endless calamity. Voltaire demonstrates a variety of irredeemable evils in the world, leading many critics to contend that Voltaire's treatment of evil—specifically the theological problem of its existence—is the focus of the work. Heavily referenced in the text are the Lisbon earthquake, disease, and the sinking of ships in storms. Also, war, thievery, and murder—evils of human design—are explored as extensively in Candide as are environmental ills. Bottiglia notes Voltaire is "comprehensive" in his enumeration of the world's evils. He is unrelenting in attacking Leibnizian optimism. Fundamental to Voltaire's attack is Candide's tutor Pangloss, a self-proclaimed follower of Leibniz and a teacher of his doctrine. Ridicule of Pangloss's theories thus ridicules Leibniz himself, and Pangloss's reasoning is silly at best. For example, Pangloss's first teachings of the narrative absurdly mix up cause and effect. Following such flawed reasoning even more doggedly than Candide, Pangloss defends optimism. Whatever their horrendous fortune, Pangloss reiterates "all is for the best" and proceeds to "justify" the evil event's occurrence. A characteristic example of such theodicy is found in Pangloss's explanation of why it is good that syphilis exists. Candide, the impressionable and incompetent student of Pangloss, often tries to justify evil, fails, invokes his mentor and eventually despairs. It is by these failures that Candide is painfully cured (as Voltaire would see it) of his optimism. This critique of Voltaire's seems to be directed almost exclusively at Leibnizian optimism. Candide does not ridicule Voltaire's contemporary Alexander Pope, a later optimist of slightly different convictions. Candide does not discuss Pope's optimistic principle that "all is right", but Leibniz's, which states that "this is the best of all possible worlds". However subtle the difference between the two, Candide is unambiguous as to which is its subject. Some critics conjecture that Voltaire meant to spare Pope this ridicule out of respect, although Voltaire's Poème sur le désastre de Lisbonne may have been written as a more direct response to Pope's theories. This work is similar to Candide in subject matter, but very different from it in style: the Poème embodies a more serious philosophical argument than Candide. Conclusion The conclusion of the novel, in which Candide finally dismisses his tutor's optimism, leaves unresolved what philosophy the protagonist is to accept in its stead. This element of Candide has been written about voluminously,
Chapterhouse: Dune is a 1985 science fiction novel by Frank Herbert, the last in his Dune series of six novels. It rose to No. 2 on The New York Times Best Seller list. A direct follow-up to Heretics of Dune, the novel chronicles the continued struggles of the Bene Gesserit Sisterhood against the violent Honored Matres, who are succeeding in their bid to seize control of the universe and destroy the factions and planets that oppose them. Chapterhouse: Dune ends with a cliffhanger, and Herbert's subsequent death in 1986 left some overarching plotlines of the series unresolved. Two decades later, Herbert's son Brian Herbert, along with Kevin J. Anderson, published two sequels – Hunters of Dune (2006) and Sandworms of Dune (2007) – based in part on notes left behind by Frank Herbert for what he referred to as Dune 7, his own planned seventh novel in the Dune series. Plot The Bene Gesserit find themselves the target of the Honored Matres, whose conquest of the Old Empire is almost complete. The Matres are seeking to assimilate the technology and superhuman skills of the Bene Gesserit, and exterminate the Sisterhood itself. Now in command of the Bene Gesserit, Mother Superior Darwi Odrade continues to develop her drastic, secret plan to overcome the Honored Matres. The Bene Gesserit are also terraforming the planet Chapterhouse to accommodate the all-important sandworms, whose native planet Dune had been destroyed by the Matres. Sheeana, in charge of the project, expects sandworms to appear soon. The Honored Matres have also destroyed the entire Bene Tleilax civilization, with Tleilaxu Master Scytale the only one of his kind left alive. In Bene Gesserit captivity, Scytale possesses the Tleilaxu secret of ghola production, which he has reluctantly traded for the Sisterhood's protection. The first ghola produced is that of their recently deceased military genius, Miles Teg. The Bene Gesserit have two other prisoners on Chapterhouse: the latest Duncan Idaho ghola, and former Honored Matre Murbella, whom they have accepted as a novice despite their suspicion that she intends to escape back to the Honored Matres. Lampadas, a center for Bene Gesserit education, has been destroyed by the Honored Matres. The planet's Chancellor, Reverend Mother Lucilla, manages to escape carrying the shared-minds of millions of Reverend Mothers. Lucilla is forced to land on Gammu where she seeks refuge with an underground group of Jews. The Rabbi gives Lucilla sanctuary, but to save his people from the Matres he must deliver her to them. Before doing so, he reveals Rebecca, a "wild" Reverend Mother who has gained her Other Memory without Bene Gesserit training. Lucilla shares minds with Rebecca, who promises to take the memories of Lampadas safely back to the Sisterhood. Lucilla is then "betrayed", and taken before the Great Honored Matre Dama, who tries to persuade her to join the Honored Matres, preserving her life in exchange for Bene Gesserit secrets. The Honored Matres are particularly interested in learning to voluntarily modify their
body chemistry, a skill that atrophied among the Bene Gesserit who went out into the Scattering and evolved into the Honored Matres. From this, Lucilla deduces that the greater enemy that the Matres are fleeing from is making extensive use of biological warfare. Lucilla refuses to share this knowledge with the Matres, and Dama ultimately kills her. Back on Chapterhouse, Odrade confronts Duncan and forces him to admit that he is a Mentat, proving that he retains the memories of his many ghola lives. Meanwhile, Murbella collapses under the pressure of Bene Gesserit training, and realizes that she wants to be Bene
difference is largely conceptual rather than practical. An attribute generally used to characterize a bus is that power is provided by the bus for the connected hardware. This emphasizes the busbar origins of bus architecture as supplying switched or distributed power. This excludes, as buses, schemes such as serial RS-232, parallel Centronics, IEEE 1284 interfaces and Ethernet, since these devices also needed separate power supplies. Universal Serial Bus devices may use the bus supplied power, but often use a separate power source. This distinction is exemplified by a telephone system with a connected modem, where the RJ11 connection and associated modulated signalling scheme is not considered a bus, and is analogous to an Ethernet connection. A phone line connection scheme is not considered to be a bus with respect to signals, but the Central Office uses buses with cross-bar switches for connections between phones. However, this distinction, that power is provided by the bus, is not the case in many avionic systems, where data connections such as ARINC 429, ARINC 629, MIL-STD-1553B (STANAG 3838), and EFABus (STANAG 3910) are commonly referred to as "data buses" or, sometimes, "databuses". Such avionic data buses are usually characterized by having several pieces of equipment or Line Replaceable Items/Units (LRI/LRUs) connected to a common, shared medium. They may, as with ARINC 429, be simplex, i.e. have a single source LRI/LRU or, as with ARINC 629, MIL-STD-1553B, and STANAG 3910, be duplex, i.e. allow all the connected LRI/LRUs to act, at different times (half duplex), as transmitters and receivers of data. Bus multiplexing The simplest system bus has completely separate input data lines, output data lines, and address lines. To reduce cost, most microcomputers have a bidirectional data bus, re-using the same wires for input and output at different times. Some processors use a dedicated wire for each bit of the address bus, data bus, and the control bus. For example, the 64-pin STEbus is composed of 8 physical wires dedicated to the 8-bit data bus, 20 physical wires dedicated to the 20-bit address bus, 21 physical wires dedicated to the control bus, and 15 physical wires dedicated to various power buses. Bus multiplexing requires fewer wires, which reduces costs in many early microprocessors and DRAM chips. One common multiplexing scheme, address multiplexing, has already been mentioned. Another multiplexing scheme re-uses the address bus pins as the data bus pins, an approach used by conventional PCI and the 8086. The various "serial buses" can be seen as the ultimate limit of multiplexing, sending each of the address bits and each of the data bits, one at a time, through a single pin (or a single differential pair). History Over time, several groups of people worked on various computer bus standards, including the IEEE Bus Architecture Standards Committee (BASC), the IEEE "Superbus" study group, the open microprocessor initiative (OMI), the open microsystems initiative (OMI), the "Gang of Nine" that developed EISA, etc. First generation Early computer buses were bundles of wire that attached computer memory and peripherals. Anecdotally termed the "digit trunk", they were named after electrical power buses, or busbars. Almost always, there was one bus for memory, and one or more separate buses for peripherals. These were accessed by separate instructions, with completely different timings and protocols. One of the first complications was the use of interrupts. 
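Stepping back for a moment to the multiplexed address/data scheme described above, a minimal sketch of the two-phase idea may help; it is a toy C simulation, and the names, widths, and "strobe" behaviour are assumptions for illustration rather than any particular bus standard.

```c
#include <stdint.h>
#include <stdio.h>

/* A toy model of a multiplexed address/data bus: the same "wires"
 * (one uint32_t here) carry the address in one phase and the data in
 * the next. Everything below is illustrative only. */

static uint32_t shared_lines;          /* the multiplexed AD lines        */
static uint32_t latched_address;       /* address latched by the target   */
static uint32_t target_memory[256];    /* the target device's storage     */

static void address_phase(uint32_t address)
{
    shared_lines = address;            /* master drives the address       */
    latched_address = shared_lines;    /* target latches it on the strobe */
}

static void data_phase_write(uint32_t data)
{
    shared_lines = data;                                  /* same wires, new meaning */
    target_memory[latched_address % 256] = shared_lines;  /* target stores the data  */
}

static uint32_t data_phase_read(void)
{
    shared_lines = target_memory[latched_address % 256];  /* target drives the wires */
    return shared_lines;                                   /* master samples them     */
}

int main(void)
{
    address_phase(0x40);        /* phase 1: address                */
    data_phase_write(0xCAFE);   /* phase 2: data on the same wires */

    address_phase(0x40);
    printf("read back 0x%X\n", (unsigned)data_phase_read());  /* prints 0xCAFE */
    return 0;
}
```

Conventional PCI and the 8086 AD lines apply the same two-phase idea in hardware, trading an extra cycle and some latching logic for fewer pins.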
Early computer programs performed I/O by waiting in a loop for the peripheral to become ready. This was a waste of time for programs that had other tasks to do. Also, if the program attempted to perform those other tasks, it might take too long for the program to check again, resulting in loss of data. Engineers thus arranged for the peripherals to interrupt the CPU. The interrupts had to be prioritized, because the CPU can only execute code for one peripheral at a time, and some devices are more time-critical than others. High-end systems introduced the idea of channel controllers, which were essentially small computers dedicated to handling the input and output of a given bus. IBM introduced these on the IBM 709 in 1958, and they became a common feature of their platforms. Other high-performance vendors like Control Data Corporation implemented similar designs. Generally, the channel controllers would do their best to run all of the bus operations internally, moving data when the CPU was known to be busy elsewhere if possible, and only using interrupts when necessary. This greatly reduced CPU load, and provided better overall system performance. To provide modularity, memory and I/O buses can be combined into a unified system bus. In this case, a single mechanical and electrical system can be used to connect together many of the system components, or in some cases, all of them. Later computer programs began to share memory common to several CPUs. Access to this memory bus had to be prioritized, as well. The simple way to prioritize interrupts or bus access was with a daisy chain. In this case signals will naturally flow through the bus in physical or logical order, eliminating the need for complex scheduling. Minis and micros Digital Equipment Corporation (DEC) further reduced cost for mass-produced minicomputers, and mapped peripherals into the memory bus, so that the input and output devices appeared to be memory locations. This was implemented in the Unibus of the PDP-11 around 1969. Early microcomputer bus systems were essentially a passive backplane connected directly or through buffer amplifiers to the pins of the CPU. Memory and other devices would be added to the bus using the same address and data pins as the CPU itself used, connected in parallel. Communication was controlled by the CPU, which read and wrote data from the devices as if they are blocks of memory, using the same instructions, all timed by a central clock controlling the speed of the CPU. Still, devices interrupted the CPU by signaling on separate CPU pins. For instance, a disk drive controller would signal the CPU that new data was ready to be read, at which point the CPU would move the data by reading the "memory location" that corresponded to the disk drive. Almost all early microcomputers were built in this fashion, starting with the S-100 bus in the Altair 8800 computer system. In some instances, most notably in the IBM PC, although similar physical architecture can be employed, instructions to access peripherals (in and out) and memory (mov and others) have not been made uniform at all, and still generate distinct CPU signals, that could be used to implement a separate I/O bus. These simple bus systems had a serious drawback when used for general-purpose computers. All the equipment on the bus had to talk at the same speed, as it shared a single clock. Increasing the speed of the CPU becomes harder, because the speed of all the devices must increase as well. 
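As a rough illustration of the memory-mapped style just described, the bare-metal sketch below first polls a device the way the earliest programs did, then shows the interrupt-driven alternative. The register addresses, status bit, and handler name are invented for the example, not taken from any real controller, and the code assumes a freestanding environment rather than a hosted operating system.

```c
#include <stdint.h>

/* Invented memory-mapped registers for a hypothetical disk controller.
 * On a first-generation style bus these would be addresses decoded by the
 * controller card; the values and bit layout here are illustrative only. */
#define DISK_STATUS ((volatile uint8_t *)0xF000u)
#define DISK_DATA   ((volatile uint8_t *)0xF001u)
#define READY_BIT   0x01u

/* Polling: the CPU busy-waits, asking the device over the bus whether it
 * is ready. Every iteration is a bus cycle paced by the shared clock. */
uint8_t read_byte_by_polling(void)
{
    while ((*DISK_STATUS & READY_BIT) == 0) {
        /* spin: time the program could have spent on other work */
    }
    return *DISK_DATA;   /* an ordinary load; the bus routes it to the card */
}

/* Interrupt-driven: the controller signals the CPU on a separate pin, and
 * only then does the CPU read the "memory location" holding the data. */
volatile uint8_t last_byte_received;

void disk_interrupt_handler(void)    /* would be installed in the vector table */
{
    last_byte_received = *DISK_DATA;
}
```

Either way, each access runs at the pace of the shared bus clock, which is why a single slow device could hold back the whole machine.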
When it is not practical or economical to have all devices as fast as the CPU, the CPU must either enter a wait state, or work at a slower clock frequency temporarily, to talk to other devices in the computer. While acceptable in embedded systems, this problem was not tolerated for long in general-purpose, user-expandable computers. Such bus systems are also difficult to configure when constructed from common off-the-shelf equipment. Typically each added expansion card requires many jumpers in order to set memory addresses, I/O addresses, interrupt priorities, and interrupt numbers. Second generation "Second generation" bus systems like NuBus addressed some of these problems. They typically separated the computer into two "worlds", the CPU and memory on one side, and the various devices on the other. A bus controller accepted data from the CPU side to be moved to the peripherals side, thus shifting the communications protocol burden from the CPU itself. This allowed the CPU and memory side to evolve separately from the device bus, or just "bus". Devices on the bus could talk to each other with no CPU intervention. This led to much better "real world" performance, but also required the cards to be much more complex. These buses also often addressed speed issues by being "bigger" in terms of the size of the data path, moving from 8-bit parallel buses in the first generation, to 16 or 32-bit in the second, as well as adding software setup (now standardised as Plug-n-play) to supplant or replace the jumpers. However, these newer systems shared one quality with their earlier cousins, in that everyone on the bus had to talk at the same speed. While the CPU was now isolated and could increase speed, CPUs and memory continued to increase in speed much faster than the buses they talked to. The result was that the bus speeds were now very much slower than what a modern system needed, and the machines were left starved for data. A particularly common example of this problem was that video cards quickly outran even the newer bus systems like PCI, and computers began to include AGP just to drive the video card. By 2004 AGP was outgrown again by high-end video cards and other peripherals and has been replaced by the new PCI Express bus. An increasing number of external devices started employing their own bus systems as well. When disk drives were first introduced, they would be added to the machine with a card plugged into the bus, which is why computers have so many slots on the bus. But through the 1980s and 1990s, new systems like SCSI and IDE were introduced to serve this need, leaving most slots in modern systems empty. Today there are likely to be about five different buses in the typical machine, supporting various devices. Third generation "Third generation" buses have been emerging into the market since about 2001, including HyperTransport and InfiniBand.
They also tend to be very flexible in terms of their physical connections, allowing them to be used both as internal buses, as well as connecting different machines together. This can lead to complex problems when trying to service different requests, so much of the work on these systems concerns software design, as opposed to the hardware itself. In general, these third generation buses tend to look more like a network than the original concept of a bus, with a higher protocol overhead needed than early systems, while also allowing multiple devices to use the bus at once. 
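The network-like character of these links can be suggested with a toy packet format: rather than dedicating wires to address and data, a request is serialised into a header plus payload and routed to the target device, which is where the extra protocol overhead comes from. The structure below is purely illustrative and does not correspond to the real HyperTransport, InfiniBand, or PCI Express packet layouts.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A toy "packetised" read request, loosely in the spirit of third-generation
 * point-to-point links. Real protocols define very different headers; this
 * layout is invented for illustration. */
struct bus_packet {
    uint8_t  destination;   /* which device the switch should route this to */
    uint8_t  command;       /* read or write                                */
    uint16_t length;        /* payload length in bytes                      */
    uint64_t address;       /* target address inside the device             */
    uint8_t  payload[32];   /* data for writes, unused for read requests    */
};

enum { CMD_READ = 1, CMD_WRITE = 2 };

/* Serialise the request; a real link would frame it, add a CRC, and send it
 * down one or more serial lanes rather than simply copying the bytes. */
static size_t encode(const struct bus_packet *p, uint8_t *wire, size_t cap)
{
    size_t n = sizeof *p;
    if (n > cap) return 0;
    memcpy(wire, p, n);
    return n;
}

int main(void)
{
    struct bus_packet req = { .destination = 3, .command = CMD_READ,
                              .length = 8, .address = 0x1000 };
    uint8_t wire[64];
    size_t sent = encode(&req, wire, sizeof wire);
    printf("sent %zu-byte request packet to device %u\n", sent, (unsigned)req.destination);
    return 0;
}
```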
Buses such as Wishbone have been developed by the open source hardware movement in an attempt to further remove legal and patent constraints from computer design. The Compute Express Link (CXL) is an open standard interconnect for high-speed CPU-to-device and CPU-to-memory, designed to accelerate next-generation data center performance. Examples of internal computer buses Parallel ASUS Media Bus proprietary, used on some ASUS Socket 7 motherboards Computer Automated Measurement and Control (CAMAC) for instrumentation systems Extended ISA or EISA Industry Standard Architecture or ISA Low Pin Count or LPC MBus MicroChannel or MCA Multibus for industrial systems NuBus or IEEE 1196 OPTi local bus used on early Intel 80486 motherboards. Conventional PCI Parallel ATA (also known as Advanced Technology Attachment, ATA, PATA, IDE, EIDE, ATAPI, etc.), Hard disk drive, optical disk drive, tape drive peripheral attachment bus S-100 bus or IEEE 696, used in the Altair 8800 and similar microcomputers SBus or IEEE 1496 SS-50 Bus Runway bus, a proprietary front side CPU bus developed by Hewlett-Packard for use by its PA-RISC microprocessor family GSC/HSC, a proprietary peripheral bus developed by Hewlett-Packard for use by its PA-RISC microprocessor family Precision Bus, a proprietary bus developed by Hewlett-Packard for use by its HP3000 computer family STEbus STD Bus (for STD-80 [8-bit] and STD32 [16-/32-bit]), FAQ Unibus, a proprietary bus developed by Digital Equipment Corporation for their PDP-11 and early VAX computers. Q-Bus, a proprietary bus developed by Digital Equipment Corporation for their PDP and later VAX computers. VESA Local Bus or VLB or VL-bus VMEbus, the VERSAmodule Eurocard bus PC/104 PC/104-Plus PCI-104 PCI/104-Express PCI/104 Zorro II and Zorro III, used in Amiga computer systems Serial 1-Wire HyperTransport I²C I3C (bus) SLIMbus PCI Express or PCIe Serial ATA (SATA), Hard disk drive, solid state drive, optical disc drive, tape drive peripheral attachment bus Serial Peripheral Interface (SPI) bus UNI/O SMBus Examples of external computer buses Parallel HIPPI High Performance Parallel Interface IEEE-488 (also known as GPIB, General-Purpose Interface Bus, and HPIB, Hewlett-Packard Instrumentation Bus) PC Card, previously known as PCMCIA, much used in laptop computers and other portables, but fading with the introduction of USB and built-in network and modem connections Serial Camera Link CAN bus ("Controller Area Network") eSATA ExpressCard Fieldbus IEEE 1394 interface (FireWire) RS-232 RS-485 Thunderbolt USB Examples of internal/external computer buses Futurebus InfiniBand PCI Express External Cabling QuickRing Scalable Coherent Interface (SCI) Small Computer System Interface (SCSI), Hard disk drive and tape drive peripheral attachment
Cadillac may also refer to: People Antoine de la Mothe Cadillac, French explorer, founder of Detroit Marie-Therese Guyon Cadillac, American pioneer Cadillac Anderson (born 1964) nickname of U.S. basketball player Gregory Wayne Anderson Cadillac Williams (born 1982) nickname of U.S. American football player Carnell Lamar Williams Geography Cadillac (Montreal Metro), a metro station on the green line in Montreal Cadillac, Gironde, a commune in the Gironde department, in southwestern France Cadillac, Michigan, United States Cadillac, Saskatchewan, Canada Cadillac, a former municipality now part of Rouyn-Noranda, Quebec, Canada Cadillac Mountain, Maine, United States Cadillac Ranch (disambiguation) Lake Cadillac, a lake in
a 1989 song from Johnny Hallyday's eponymous album Cadillac "Cadillac", a 2011 song from the Original 7ven album Condensate "Cadillac" (Morgenshtern and Eldzhey song), a 2020 song by Russian rappers Morgenshtern and Eldzhey Other arts, entertainment, and media Cadillac, a guitar model made by Dean Guitars "The Cadillac", an episode of the television series Seinfeld Brands and enterprises Cadillac Gage, now part of Textron Marine & Land Systems Cadillac insurance plan in the United States Wine and grapes Burger (grape), a California-French wine grape that is also known as Cadillac Cadillac AOC, the appellation d'origine contrôlée Bordeaux wine produced in the French commune Muscadelle, a French wine grape that is also known as Cadillac Trebbiano, an Italian wine grape that is also known as Cadillac Other uses Cadillac, an alternative name for cocaine See also List of Cadillac vehicles, automobiles from GM division Cadillac Cadillac-en-Fronsadais,
piece, either one's own or an opponent's, to the empty space directly beyond it in the same line of direction. Red might advance the indicated piece by a chain of three hops in a single move. It is not mandatory to make the most hops possible. (In some instances a player may choose to stop the jumping sequence part way in order to impede the opponent's progress, or to align pieces for planned future moves.) Starting layouts Six players Can be played "all versus all", or three teams of two. When playing teams, teammates usually sit at opposite corners of the star, with each team member controlling their own colored set of pieces. The first team to advance both sets to their home destination corners is the winner. The remaining players usually continue play to determine second- and third-place finishers, etc. Four players The four-player game is the same as the game for six players, except that two opposite corners will be unused. Three players In a three-player game, all players control either one or two sets of pieces each. If one set is used, pieces race across the board into empty, opposite corners. If two sets are used, each player controls two differently colored sets of pieces at opposite corners of the star. Two players In a two-player game, each player plays one, two, or three sets of pieces. If one set is played, the pieces usually go into the opponent's starting corner, and the number of pieces per side is increased to 15 (instead of the usual 10). If two sets are played, the pieces can either go into the opponent's starting corners, or one of the players' two sets can go into an opposite empty corner. If three sets are played, the pieces usually go into the opponent's starting corners. Strategy A basic strategy is to create or find the longest hopping path that leads closest to home, or immediately into it. (Multiple-jump moves are obviously faster to advance pieces than step-by-step moves.) Since either player can make use of any hopping 'ladder' or 'chain' created, a more advanced strategy involves hindering an opposing player in addition to helping oneself make jumps across the board. Of equal importance are the players' strategies for emptying and filling their starting and home corners. Games between top players are rarely decided by more than a couple of moves. Differing numbers of players result in different starting layouts, in turn imposing different best-game strategies. For example, if a player's home destination corner starts empty (i.e. is not an opponent's starting corner), the player can freely build a 'ladder' or 'bridge' with their pieces between the two opposite ends. But if a player's opponent occupies the home corner, the player may need to wait for opponent pieces to clear before filling the home vacancies. Variants Fast-paced or Super Chinese Checkers While the standard rules allow hopping over only a single adjacent occupied position at a time (as in checkers), this version of the game allows pieces to catapult over multiple adjacent occupied positions in a line when hopping. In the fast-paced or Super Chinese Checkers variant popular in France, a piece may hop over a non-adjacent piece. A hop consists of jumping over a distant piece (friend or enemy) to a symmetrical position on the opposite side, in the same line of direction. (For example, if there are two empty positions between the jumping piece and the piece being jumped, the jumping piece lands leaving exactly two empty positions immediately beyond the jumped piece.) 
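The single-hop rule just described is easy to check mechanically; the C sketch below tests legality along one line of direction, treating that line as an array of cells. This abstraction is chosen purely for illustration and is not a complete board implementation.

```c
#include <stdbool.h>
#include <stdio.h>

/* One line of direction on the board, abstracted as an array of cells:
 * 0 = empty, 1 = occupied. A sketch of the Super Chinese Checkers hop rule:
 * jump over a single distant piece to the mirror-image cell beyond it,
 * with every cell in between empty. */
static bool super_hop_is_legal(const int cells[], int n, int from, int over)
{
    int landing = 2 * over - from;                 /* symmetric position beyond the hurdle */
    if (from < 0 || over < 0 || landing < 0 || from >= n || over >= n || landing >= n)
        return false;
    if (cells[from] == 0 || cells[over] == 0 || cells[landing] != 0)
        return false;                              /* need a piece, a hurdle, and an empty landing */
    int step = (over > from) ? 1 : -1;
    for (int i = from + step; i != over; i += step)
        if (cells[i] != 0) return false;           /* cells between piece and hurdle must be empty */
    for (int i = over + step; i != landing; i += step)
        if (cells[i] != 0) return false;           /* and symmetrically beyond the hurdle */
    return true;
}

int main(void)
{
    /* Piece at 0, hurdle at 3, two empty cells on each side of the hurdle. */
    int line[8] = { 1, 0, 0, 1, 0, 0, 0, 0 };
    printf("hop 0 over 3 to 6: %s\n",
           super_hop_is_legal(line, 8, 0, 3) ? "legal" : "illegal");
    return 0;
}
```

Chaining several such hops into a single move, as described next, amounts to repeating this test from each successive landing cell.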
As in the standard rules, a jumping move may consist of any number of a chain of hops. (When making a chain of hops, a piece is usually allowed to enter an empty corner, as long as it hops out again before the move is completed.) Jumping over two or more pieces in a hop is not allowed. Therefore, in this variant even more than in the standard version, it is sometimes strategically important to keep one's pieces bunched in order to prevent a long opposing hop. An alternative variant allows hops over any symmetrical arrangement, including pairs of pieces, pieces separated by empty positions, and so on. Capture In the capture variant, all sixty game pieces start out in the hexagonal field in the center of the gameboard. The center position is left unoccupied, so pieces form a symmetric hexagonal pattern. Color is irrelevant in this variant, so players take turns hopping any game piece over any other eligible game piece(s) on the board. The hopped-over pieces are captured (retired from the game, as in English draughts) and collected in the capturing player's bin. Only jumping moves are allowed; the game ends when no further jumps are possible. The player with the most captured pieces is the winner. The board is tightly packed at the start of the game; as more pieces are captured, the board frees up, often allowing multiple captures to take place in a single move. Two or more players can compete in this variant, but if there are more than six players, not everyone will get a fair turn. This variant resembles the game Leap Frog; the main difference is that in Leap Frog the board is square. Diamond game Diamond game (Japanese: ダイヤモンドゲーム) is a variant of Sternhalma played in South Korea and Japan.
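The symmetric landing rule of the fast-paced variant can be stated precisely in a few lines of code. The sketch below is purely illustrative and not drawn from any published rule set: it models one straight line of the board as a Python list (a representation chosen here for convenience) and checks whether a single long hop is legal, i.e. exactly one piece is jumped, that piece sits midway between the origin and the landing position, and every other intervening position is empty.

```python
def is_legal_super_hop(line, start, landing):
    """Check one straight-line hop under the fast-paced ("Super") rule.

    `line` is a list of consecutive positions along one straight line of the
    board; None marks an empty position, anything else a piece.  A hop is
    legal if exactly one piece lies between `start` and `landing`, that piece
    sits midway between them, and every other position strictly between the
    two endpoints is empty.
    """
    if line[start] is None or line[landing] is not None:
        return False                      # must hop a piece into an empty spot
    lo, hi = sorted((start, landing))
    between = range(lo + 1, hi)
    occupied = [i for i in between if line[i] is not None]
    if len(occupied) != 1:
        return False                      # exactly one piece may be jumped
    return occupied[0] - lo == hi - occupied[0]   # jumped piece must be midway


# Example: a piece at index 0 may hop over the piece at index 3 to index 6,
# because positions 1, 2, 4 and 5 are empty and 3 is midway between 0 and 6.
row = ['R', None, None, 'G', None, None, None]
assert is_legal_super_hop(row, 0, 6)      # legal symmetric hop
assert not is_legal_super_hop(row, 0, 5)  # landing not symmetric: illegal
```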
On September 2, 2004, Cantor and other organizations filed a civil lawsuit against Saudi Arabia for allegedly providing money to the hijackers and Al Qaeda. It was later joined in the suit by the Port Authority of New York. Most of the claims against Saudi Arabia were dismissed on January 18, 2005. In December 2013, Cantor Fitzgerald settled its lawsuit against American Airlines for $135 million. Cantor Fitzgerald had sued for loss of property and interruption of business, alleging that the airline had been negligent in allowing hijackers to board Flight 11. Recent history In 2003, the firm launched its fixed income sales and trading group. In 2006, the Federal Reserve added Cantor Fitzgerald & Co. to its list of primary dealers. In 2009, the firm launched Cantor Prime Services, a provider of multi-asset, perimeter brokerage prime brokerage platforms to exploit its clearing, financing, and execution capabilities. Cantor Fitzgerald began building its real estate business with the launch of CCRE in 2010. On December 5, 2014, two Cantor Fitzgerald analysts were said to be among the top 25 analysts on TipRanks. Cantor Fitzgerald has a prolific special-purpose acquisition company underwriting practice, having led all banks in SPAC underwriting activity in both 2018 and 2019. Philanthropy Edie Lutnick wrote An Unbroken Bond: The Untold Story of How the 658 Cantor Fitzgerald Families Faced the Tragedy of 9/11 and Beyond. All proceeds from the sale of the book benefit the Cantor Fitzgerald Relief Fund and the charities that it assists. The Cantor Fitzgerald Relief Fund provided $10 million to families affected by Hurricane Sandy. Howard Lutnick and the Relief Fund "adopted" 19 elementary schools in impacted areas by distributing $1,000 prepaid debit cards to each family from the schools. A total of $10 million in funds was given to families affected by the storm. Two days after the 2013 Moore tornado struck Moore, Oklahoma, killing 24 people and injuring hundreds, Lutnick pledged to donate $2 million to families affected by the tornado. The donation was distributed in the form of $1,000 debit cards given to affected families. Each year, on September 11, Cantor Fitzgerald and its affiliate, BGC Partners, donate 100% of their revenue to charitable causes on their annual Charity Day, which was originally established to raise money to assist the families of the Cantor employees who died in the World Trade Center attacks. Since its inception, Charity Day has raised $110 million for charities globally. Subsidiaries and affiliates The firm has many subsidiaries and affiliates such as the following: Aqua Securities is an alternative trading system for block trades that is currently used by nearly 200 institutions and brokers as an alternative to algorithmic trading of large orders. BGC Partners, named after fixed income trading innovator and founder B. Gerald Cantor, is a global brokerage company that services the wholesale financial markets and commercial real estate marketplace in New York, London, and other financial centers. BGC Partners includes Newmark Grubb Knight Frank, the fourth-largest real estate service provider in the US. Cantor Ventures is the corporate venture capital and enterprise development arm of the company. Led by Henrique De Castro, the group's current investments include delivery.com, Ritani, TopLine Game Labs, AdFin, Lucera, NewsWhip, and XIX Entertainment.
Delivery.com is an online destination for consumers to shop at their neighborhood merchants, including local restaurants, grocers, wine and liquor stores, florists, and other retail and service providers. Global Gaming Asset Management is an investment vehicle formed by Cantor and former executives of Las Vegas Sands to invest in, acquire, develop, manage, and advise casino operators and other gaming assets. Hollywood Stock Exchange, founded in 1996, is the world's virtual entertainment stock market. TopLine Game
push was made for the cane toad to be released in Australia to negate the pests ravaging the Queensland cane fields. As a result, 102 toads were collected from Hawaii and brought to Australia. Queensland's sugar scientists released the toad into cane fields in August 1935. After this initial release, the Commonwealth Department of Health decided to ban future introductions until a study was conducted into the feeding habits of the toad. The study was completed in 1936 and the ban lifted, when large-scale releases were undertaken; by March 1937, 62,000 toadlets had been released into the wild. The toads became firmly established in Queensland, increasing exponentially in number and extending their range into the Northern Territory and New South Wales. In 2010, one was found on the far western coast in Broome, Western Australia. However, the toad was generally unsuccessful in reducing the targeted grey-backed cane beetles (Dermolepida albohirtum), in part because the cane fields provided insufficient shelter for the predators during the day, and in part because the beetles live at the tops of sugar cane—and cane toads are not good climbers. Since its original introduction, the cane toad has had a particularly marked effect on Australian biodiversity. The population of a number of native predatory reptiles has declined, such as the varanid lizards Varanus mertensi, V. mitchelli, and V. panoptes, the land snakes Pseudechis australis and Acanthophis antarcticus, and the crocodile species Crocodylus johnstoni; in contrast, the population of the agamid lizard Amphibolurus gilberti—known to be a prey item of V. panoptes—has increased. Caribbean The cane toad was introduced to various Caribbean islands to counter a number of pests infesting local crops. While it was able to establish itself on some islands, such as Barbados, Jamaica, and Puerto Rico, other introductions, such as in Cuba before 1900 and in 1946, and on the islands of Dominica and Grand Cayman, were unsuccessful. The earliest recorded introductions were to Barbados and Martinique. The Barbados introductions were focused on the biological control of pests damaging the sugarcane crops, and while the toads became abundant, they have done even less to control the pests than in Australia. The toad was introduced to Martinique from French Guiana before 1944 and became established. Today, they reduce the mosquito and mole cricket populations. A third introduction to the region occurred in 1884, when toads appeared in Jamaica, reportedly imported from Barbados to help control the rodent population. While they had no significant effect on the rats, they nevertheless became well established. Other introductions include the release on Antigua—possibly before 1916, although this initial population may have died out by 1934 and been reintroduced at a later date— and Montserrat, which had an introduction before 1879 that led to the establishment of a solid population, which was apparently sufficient to survive the Soufrière Hills volcano eruption in 1995. In 1920, the cane toad was introduced into Puerto Rico to control the populations of white grub (Phyllophaga spp.), a sugarcane pest. Before this, the pests were manually collected by humans, so the introduction of the toad eliminated labor costs. A second group of toads was imported in 1923, and by 1932, the cane toad was well established. 
The population of white grubs dramatically decreased, and this was attributed to the cane toad at the annual meeting of the International Sugar Cane Technologists in Puerto Rico. However, there may have been other factors. The six-year period after 1931—when the cane toad was most prolific, and the white grub had a dramatic decline—had the highest-ever rainfall for Puerto Rico. Nevertheless, the cane toad was assumed to have controlled the white grub; this view was reinforced by a Nature article titled "Toads save sugar crop", and this led to large-scale introductions throughout many parts of the Pacific. The cane toad has been spotted in Carriacou and Dominica, the latter appearance occurring in spite of the failure of the earlier introductions. On September 8, 2013, the cane toad was also discovered on the island of New Providence in the Bahamas. The Philippines The cane toad was first introduced deliberately into the Philippines in 1930 as a biological control agent of pests in sugarcane plantations, after the success of the experimental introductions into Puerto Rico. It subsequently became the most ubiquitous amphibian in the islands. It still retains the common name of bakî or kamprag in the Visayan languages, a corruption of 'American frog', referring to its origins. It is also commonly known as "bullfrog" in Philippine English. Fiji The cane toad was introduced into Fiji to combat insects that infested sugarcane plantations. The introduction of the cane toad to the region was first suggested in 1933, following the successes in Puerto Rico and Hawaii. After considering the possible side effects, the national government of Fiji decided to release the toad in 1953, and 67 specimens were subsequently imported from Hawaii. Once the toads were established, a 1963 study concluded that, because the toad's diet included both harmful and beneficial invertebrates, it should be considered "economically neutral". Today, the cane toad can be found on all major islands in Fiji, although they tend to be smaller than their counterparts in other regions. New Guinea The cane toad was introduced into New Guinea to control the hawk moth larvae eating sweet potato crops. The first release occurred in 1937 using toads imported from Hawaii, with a second release the same year using specimens from the Australian mainland. Evidence suggests a third release in 1938, consisting of toads being used for human pregnancy tests—many species of toad were found to be effective for this task, and were employed for about 20 years after the discovery was announced in 1948. Initial reports argued that the toads were effective in reducing the levels of cutworms, and sweet potato yields were thought to be improving. As a result, these first releases were followed by further distributions across much of the region, although their effectiveness on other crops, such as cabbages, has been questioned; when the toads were released at Wau, the cabbages provided insufficient shelter and the toads rapidly left the immediate area for the superior shelter offered by the forest. A similar situation had previously arisen in the Australian cane fields, but this experience was either unknown or ignored in New Guinea. The cane toad has since become abundant in rural and urban areas. United States The cane toad naturally exists in South Texas, but attempts (both deliberate and accidental) have been made to introduce the species to other parts of the country.
These include introductions to Florida and to the islands of Hawaii, as well as largely unsuccessful introductions to Louisiana. Initial releases into Florida failed. Attempted introductions before 1936 and 1944, intended to control sugarcane pests, were unsuccessful as the toads failed to proliferate. Later attempts failed in the same way. However, the toad gained a foothold in the state after an accidental release by an importer at Miami International Airport in 1957, and deliberate releases by animal dealers in 1963 and 1964 established the toad in other parts of Florida. Today, the cane toad is well established in the state, from the Keys to north of Tampa, and it is gradually extending further northward. In Florida, the toad is regarded as a threat to native species and pets; so much so that the Florida Fish and Wildlife Conservation Commission recommends that residents kill them. Around 150 cane toads were introduced to Oahu in Hawaii in 1932, and the population swelled to 105,517 after 17 months. The toads were sent to the other islands, and more than 100,000 toads were distributed by July 1934; eventually over 600,000 were transported. Uses Other than its use as a biological control for pests, the cane toad has been employed in a number of commercial and noncommercial applications. Traditionally, within the toad's natural range in South America, the Embera-Wounaan would "milk" the toads for their toxin, which was then employed as an arrow poison. The toxins may have been used as an entheogen by the Olmec people. The toad has been hunted as a food source in parts of Peru, and eaten after the careful removal of the skin and parotoid glands. When properly prepared, the meat of the toad is considered healthy and a source of omega-3 fatty acids. More recently, the toad's toxins have been used in a number of new ways: bufotenin has been used in Japan as an aphrodisiac and a hair restorer, and in cardiac surgery in China to lower the heart rates of patients. New research has suggested that the cane toad's poison may have some applications in treating prostate cancer. Other modern applications of the cane toad include pregnancy testing, use as pets, laboratory research, and the production of leather goods. Pregnancy testing was conducted in the mid-20th century by injecting urine from a woman into a male toad's lymph sacs, and if spermatozoa appeared in the toad's urine, the patient was deemed to be pregnant. The tests using toads were faster than those employing mammals; the toads were easier to raise, and, although the initial 1948 discovery employed Bufo arenarum for the tests, it soon became clear that a variety of anuran species were suitable, including the cane toad. As a result, toads were employed in this task for around 20 years. As a laboratory animal, the cane toad has numerous advantages: they are plentiful, and easy and inexpensive to maintain and handle. The use of the cane toad in experiments started in the 1950s, and by the end of the 1960s, large numbers were being collected and exported to high schools and universities. Since then, a number of Australian states have introduced or tightened importation regulations. There are several commercial uses for dead cane toads. Cane toad skin is made into leather and novelty items. Stuffed cane toads, posed and accessorised, are merchandised at souvenir shops for tourists. Attempts have been made to produce fertiliser from toad carcasses.
Invasive species Cane toads pose a serious threat to native species when introduced to a new ecosystem. The cane toad is classified as an invasive species in over 20 countries, and multiple reports document a decline in native biodiversity following its arrival in a new area. The most documented region of the cane toad's invasion and subsequent effect on native species is Australia, where multiple surveys and observations of the toad's spread have been completed. The best way to illustrate this effect is through the plight of the northern quoll, as well as Mertens' water monitor, a large monitor lizard native to northern Australia. Two sites were chosen to study the effects of cane toads on the northern quoll, one of which was at Mary River ranger station, which is located in the southern region of Kakadu National Park. The other site was located at the north end of the park. In addition to these two sites, a third site was located at the East Alligator ranger station, and this site was used as a control site, where the cane toads would not interact with the northern quoll population. Monitoring of the quoll population began at the Mary River ranger station using radio tracking in 2002, months before the first cane toads arrived at the site. After the arrival of the cane toads, the population of northern quolls in the Mary River site plummeted between October and December 2002, and by March 2003, the northern quoll appeared to be extinct in this section of the park, as no northern quolls were caught in the trapping trips in the following two months. In contrast, the population of northern quolls in the control site at the East Alligator ranger station remained relatively constant, showing no symptoms of decline. The evidence from Kakadu National Park is compelling not only because of the timing of the population crash, just months after the arrival of the cane toad, but also because in the Mary River region 31% of mortalities within the quoll population were attributed to lethal toxic ingestion, while no signs of disease, parasite infestation, or any other obvious changes at the site were found that could have caused such a rapid decline. The most obvious evidence that supports the hypothesis that the invasion of the cane toads caused the local extinction of the northern quoll is that the closely monitored population of the control group, in the absence of cane toads, showed no signs of decline. In the case of Mertens' water monitor, only one region was monitored, but over the course of 18 months. This region is located 70 km south of Darwin, at the Manton Dam Recreation Area. Within the Manton Dam Recreation Area, 14 sites were set up to survey the population of water monitors, measuring abundance and site occupancy at each one. Seven surveys were conducted, each of which ran for 4 weeks and included 16 site visits, where each site was sampled twice per day for 2 consecutive days throughout the 4 weeks. Each site visit occurred between 7:30 and 10:30 am or between 4:00 and 7:00 pm, when Varanus mertensi can be viewed sunbathing on the shore or wrapped around a tree branch close to shore. The whole project lasted from December 2004 to May 2006, and had a total of 194 sightings of Varanus mertensi in 1568 site visits. Of the seven surveys, abundance was highest during the second survey, which took place in February 2005, 2 months into the project.
Following this measurement, the abundance declined in the next four surveys, before declining sharply after the second to last survey in February 2006. In the final survey taken in May 2006, only two V. mertensi lizards were observed. Cane toads were first recorded in the region of study during the second survey during February 2005, also when the water monitor abundance was at its highest over the course of the study. Numbers of the cane toad population stayed low for the next year after introduction, and then skyrocketed to its
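As a quick cross-check of the survey figures quoted above, the following snippet (the variable names are mine, not the study's) confirms that 14 sites, each visited 16 times in each of 7 surveys, gives the reported 1,568 site visits, and derives the naive sighting rate implied by 194 sightings.

```python
# Cross-check of the Manton Dam survey figures quoted above.
sites = 14               # survey sites in the Manton Dam Recreation Area
surveys = 7              # surveys between December 2004 and May 2006
visits_per_survey = 16   # 2 visits/day x 2 consecutive days x 4 weeks

total_visits = sites * surveys * visits_per_survey
print(total_visits)                          # 1568, matching the reported total

sightings = 194
print(round(sightings / total_visits, 3))    # ~0.124 sightings per site visit
```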
team ball game called or , akin to a chaotic version of hockey or football (depending on whether sticks were used), was regularly played in France and southern Britain between villages or parishes; it was attested in Cornwall as early as 1283. In the book Queen of Games: The History of Croquet, Nicky Smith presents two theories of the origin of the modern game of croquet, which took England by storm in the 1860s and then spread overseas. First origin theory The first explanation is that the ancestral game was introduced to Britain from France during the 1660–1685 reign of Charles II of England, Scotland and Ireland, and was played under the name of (among other spellings, today usually pall-mall), derived ultimately from Latin words for 'ball and mallet' (the latter also found in the name of the earlier French game, ). This was the explanation given in the ninth edition of Encyclopædia Britannica, dated 1877. In his 1801 book The Sports and Pastimes of the People of England, Joseph Strutt described the way pall-mall was played in England at the time:"Pale-maille is a game wherein a round box[wood] ball is struck with a mallet through a high arch of iron, which he that can do at the fewest blows, or at the number agreed upon, wins. It is to be observed, that there are two of these arches, that is one at either end of the alley. The game of mall was a fashionable amusement in the reign of Charles the Second, and the walk in Saint James's Park, now called the Mall, received its name from having been appropriated to the purpose of playing at mall, where Charles himself and his courtiers frequently exercised themselves in the practice of this pastime." While the name pall-mall and various games bearing this name also appeared elsewhere (France and Italy), the description above suggests that the croquet-like games in particular were popular in England by the early 17th century. Some other early modern sources refer to pall-mall being played over a large distance (as in golf); however, an image in Strutt's 1801 book shows a croquet-like ground billiards game (balls on ground, hoop, bats, and peg) being played over a , garden-sized distance. The image's caption describes the game as "a curious ancient pastime", confirming that croquet games were not new in early-19th-century England. In Samuel Johnson's 1755 dictionary, his definition of "pall-mall" clearly describes a game with similarities to modern croquet: "A play in which the ball is struck with a mallet through an iron ring". However, there is no evidence that pall-mall involved the croquet stroke which is the distinguishing characteristic of the modern game. Second origin theory The second theory is that the rules of the modern game of croquet arrived from Ireland during the 1850s, perhaps after being brought there from Brittany, where a similar game was played on the beaches. Regular contact between Ireland and France had continued since the Norman invasion of Ireland in 1169. By no later than the early 15th century, the game (itself ancestral to pall-mall and perhaps to indoor billiards) was popular in France, including in the courts of Henry II in the 16th century and Louis XIV of the 17th. At least one version of it, ('wheel') was a multi-ball lawn game. 
Records show a game called "crookey", similar to croquet, being played at Castlebellingham in County Louth, Ireland, in 1834, which was introduced to Galway in 1835 and played on the bishop's palace garden, and in the same year to the genteel Dublin suburb of Kingstown (today Dún Laoghaire) where it was first spelt as "croquet". There is, however, no pre-1858 Irish document that describes the way the game was played; in particular, there is no reference to the distinctive croquet stroke, which is described under "Variations: Association". The noted croquet historian Dr Prior, in his book of 1872, makes the categoric statement "One thing only is certain: it is from Ireland that croquet came to England and it was on the lawn of the late Lord Lonsdale that it was first played in this country." This was about 1851. John Jaques apparently claimed in a letter to Arthur Lillie in 1873 that he had himself seen the game played in Ireland, writing "I made the implements and published directions (such as they were) before Mr. Spratt [mentioned above] introduced the subject to me." Whatever the truth of the matter, Jaques certainly played an important role in popularising the game, producing editions of the rules in 1857, 1860, and 1864. Heyday and decline Croquet became highly popular as a social pastime in England during the 1860s. It was enthusiastically adopted and promoted by the Earl of Essex, who held lavish croquet parties at Cassiobury House, his stately home in Watford, Hertfordshire, and the Earl even launched his own Cassiobury brand croquet set. By 1867, Jaques had printed 65,000 copies of his Laws and Regulations of the game. It quickly spread to other Anglophone countries, including Australia, Canada, New Zealand, South Africa, and the United States. No doubt one of the attractions was that the game could be played by both sexes; this also ensured a certain amount of adverse comment. It is no coincidence that the game became popular at the same time as the cylinder lawn mower, since croquet can only be played well on a lawn that is flat and finely cut. By the late 1870s, however, croquet had been eclipsed by another fashionable game, lawn tennis, and many of the newly created croquet clubs, including the All England Club at Wimbledon, converted some or all of their lawns into tennis courts. There was a revival in the 1890s, but from then onwards, croquet was always a minority sport, with national individual participation amounting to a few thousand players. The All England Lawn Tennis and Croquet Club still has a croquet lawn, but has not hosted any significant tournaments. The English headquarters for the game is now in Cheltenham. The earliest known reference to croquet in Scotland is the booklet The Game of Croquet, its Laws and Regulations, which was published in the mid-1860s for the proprietor of Eglinton Castle, the Earl of Eglinton. On the page facing the title page is a picture of Eglinton Castle with a game of "croquet" in full swing. The croquet lawn existed on the northern terrace, between Eglinton Castle and the Lugton Water. The 13th Earl developed a variation on croquet named Captain Moreton's Eglinton Castle croquet, which had small bells on the eight hoops "to ring the changes", two pegs, a double hoop with a bell, and two tunnels for the ball to pass through. In 1865, the 'Rules of the Eglinton Castle and Cassiobury Croquet' was published by Edmund Routledge.
Several incomplete sets of this form of croquet are known to exist, and one complete set is still used for demonstration games in the West of Scotland.

Glossary of terms

Backward ball: The ball of a side that has scored fewer hoops (compare with 'forward ball').
Ball-in-hand: A ball that the striker can pick up to change its position, for example: any ball when it leaves the court (it has to be replaced on the yard-line); the striker's ball after making a roquet (it must be placed in contact with the roqueted ball); the striker's ball when the striker is entitled to a lift.
Ball in play: A ball after it has been played into the game, which is not a ball in hand or pegged out.
Baulk: An imaginary line on which a ball is placed for its first shot in the game, or when taking a lift. The A-baulk coincides with the western half of the yard line along the south boundary; the B-baulk occupies the eastern half of the north boundary yard line.
Bisque, half-bisque: A bisque is a free turn in a handicap match. A half-bisque is a restricted handicap turn in which no point may be scored.
Break down: To end a turn by making a mistake.
Continuation stroke: Either the bonus stroke played after running a hoop in order or the second bonus stroke played after making a roquet.
Croquet stroke: A stroke taken after making a roquet, in which the striker's ball and the roqueted ball are placed together in contact.
Double tap: A fault in which the mallet makes more than one audible sound when it strikes the ball.
Forward ball: The ball of a side that has scored more hoops (compare with 'backward ball').
Hoop: Metal U-shaped gate pushed into the ground. (Also called a wicket in the US, which is of the same etymology as wicket gate.)
Leave: The position of the balls after a successful break, in which the striker is able to leave the balls placed so as to make life as difficult as possible for the opponent.
Lift: A turn in which the player is entitled to remove the ball from its current position and play instead from either baulk line. A lift is permitted when a ball has been placed by the opponent in a position where it is wired from all other balls, and also in advanced play when the opponent has completed a break that includes hoops 1-back or 4-back.
Object ball: A ball which is going to be rushed.
Peg out: To cause a rover ball to strike the peg and conclude its active involvement in the game.
Peel: To send a ball other than the striker's ball through its target hoop.
Pioneer: A ball placed in a strategic position near the striker's next-but-one or next-but-two hoop, to assist in running that hoop later in the break.
Primary colours or first colours: The main croquet ball colours used, which are blue, red, black and yellow (in order of play). One player or team plays blue and black, the other red and yellow.
Push: A fault when the mallet pushes the striker's ball, rather than making a clean strike.
Roquet: (Second syllable rhymes with "play".) When the striker's ball hits a ball that the striker is then entitled to take a croquet shot with. At the start of a turn, the striker is entitled to roquet all the other three balls once. Once the striker's ball goes through its target hoop, it is again entitled to roquet the other balls once.
Rover ball: A ball that has run all 12 hoops and can be pegged out.
Rover hoop: The last hoop, indicated by a red top bar. The first hoop has a blue top.
Run a hoop: To send the striker's ball through a hoop. If the hoop is the hoop in order for the striker's ball, the striker earns a bonus stroke.
Rush: A roquet when the roqueted ball is sent to a specific position on the court, such as the next hoop for the striker's ball or close to a ball that the striker wishes to roquet next.
Scatter shot: A continuation stroke used to hit a ball which may not be roqueted, in order to send it to a less dangerous position.
Secondary colours or second colours (also known as alternate colours): The colours of the balls used in the second game played on the same court in double-banking: green, pink, brown and white (in order of play). Green and brown, versus pink and white, are played by the same player or pair.
Sextuple peel (SXP): To peel the partner ball through its last six hoops in the course of a single turn. Very few players have achieved this feat, but it is being seen increasingly at championship level.
Tice: A ball sent to a location that will entice an opponent to shoot at it but miss.
Triple peel (TP): To send a ball other than the striker's ball through its last three hoops, and then peg it out. A variant is the Triple Peel on Opponent (TPO), where the peelee is the opponent's ball rather than the partner ball. The significance of this manoeuvre is that in advanced play, making a break that includes the tenth hoop (called 4-back) is penalized by granting the opponent a lift (entitling him to take the next shot from either baulk line). Therefore, many breaks stop voluntarily with three hoops and the peg still to run.
Wired: When a hoop or the peg impedes the path of a striker's ball, or the swing of the mallet. A player will often endeavour to finish a turn with the opponent's balls wired from each other.
Yard line: An imaginary line from the boundary. Balls that go off the boundary are generally replaced on the yard line (but
golf) players will often attempt to move their opponents' balls to unfavourable positions. However, purely negative play is rarely a winning strategy: successful players (in all versions other than golf croquet) will use all four balls to set up a break for themselves, rather than simply making the game as difficult as possible for their opponents. At championship-standard association croquet, players can often make all 26 points (13 for each ball) in two turns. Croquet was an event at the 1900 Summer Olympics. Roque, an American variation on croquet, was an event at the 1904 Summer Olympics. Beginning in 1894, the Spalding Athletic Library issued official rules (with illustrations) as adopted by the National American Croquet Association. Association Association croquet is the name of an advanced game of croquet, played at all levels up to international level. It involves four balls teamed in pairs, with both balls going through every hoop for one pair to win. The game's distinguishing feature is the "croquet" shot: when certain balls hit other balls, extra shots are allowed. The six hoops are arranged three at each end of the court, with a centre peg. One side takes the blue and black balls, the other takes red and yellow. At each turn, players can choose to play with either of their balls for that turn. At the start of a turn, the player plays a stroke. If the player either hits the ball through the correct hoop ("runs" the hoop), or hits another ball (a "roquet"), the turn continues. Following a roquet, the player picks up his or her own ball and puts it down next to the ball that it hit. The next shot is played with the two balls touching: this is the "croquet stroke" from which the game takes its name. By varying the speed and angle at which the mallet hits the striker's ball, a good player can control the final position of both balls: the horizontal angle determines how far the balls diverge in direction, while the vertical angle and the amount of follow-through determine the relative distance that the two balls travel. After the croquet stroke, the player plays a "continuation" stroke, during which the player may again attempt to make a roquet or run a hoop. Each of the other three balls may be roqueted once in a turn before a hoop is run, after which they become available to be roqueted again. The winner of the game is the team that completes the set circuit of six hoops (and then back again the other way), with both balls, and then strikes the centre peg (making a total of 13 points per ball = 26). Good players may make "breaks" of several hoops in a single turn. The best players may take a ball round a full circuit in one turn. "Advanced play" (a variant of association play for expert players) gives penalties to a player who runs certain hoops in a turn, to allow the opponent a chance of getting back into the game; feats of skill such as triple peels or better, in which the partner ball (or occasionally an opponent ball) is caused to run a number of hoops in a turn by the striker's ball, help avoid these penalties. A handicap system ("bisques") provides less experienced players a chance of winning against more formidable opponents. Players of all ages and both sexes compete on level terms. The World Championships are organised by the World Croquet Federation (WCF) and usually take place every two or three years. The 2020 championships took place in Melbourne, Australia; the winner was Reg Bamford.
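The 26-point winning total mentioned above follows directly from the scoring structure: each ball scores a point for each of the six hoops run in each direction plus one for the peg. A minimal worked example (illustrative only; the variable names are my own):

```python
# Association croquet scoring, as described above: each ball runs the six
# hoops one way, the six hoops back the other way, then hits the peg.
hoops_each_way = 6
hoop_points_per_ball = 2 * hoops_each_way            # 12 hoop points
peg_point = 1
points_per_ball = hoop_points_per_ball + peg_point   # 13
winning_total = 2 * points_per_ball                  # both balls of a pair: 26
print(points_per_ball, winning_total)                # 13 26
```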
The current Women's Association Croquet World Champion (2015) is Miranda Chapman of England. The Australian team won the last MacRobertson International Croquet Shield tournament, which is the major international test tour trophy in association croquet. It is contested every three to four years between Australia, England, the United States and New Zealand. Historically the British have been the dominant force, winning 14 out of the 22 times that the event has been held. In individual competition, the UK is often divided by subnational country (England, Scotland and Wales), while Northern Ireland joins with the Republic in an All Ireland association (as it does in several other sports). The world's top 10 association croquet players as of December 2020 were Robert Fletcher (Australia), Reg Bamford (South Africa), Robert Fulford (England), Paddy Chapman (New Zealand), Matthew Essick (USA), Jonathan Kirby (Scotland), Simon Hockey (Australia), Malcolm Fletcher (Australia), Edward Wilson (Australia), Stephen Mulliner (England). Unlike most sports, men and women compete and are ranked together. Three women have won the British Open Championship: Lily Gower in 1905, Dorothy Steel in 1925, 1933, 1935 and 1936, and Hope Rotherham in 1960. While male players are in the majority at club level in the UK, the opposite is the case in Australia and New Zealand. The governing body in England is The Croquet Association, which has been the driving force of the development of the game. The laws and rules are now maintained by the World Croquet Federation. Golf In golf croquet, a hoop is won by the first ball to go through each hoop. Unlike association croquet, there are no additional turns for hitting other balls. Each player takes a stroke in turn, each trying to hit a ball through the same hoop. The sequence of play is blue, red, black, yellow. Blue and black balls play against red and yellow. When a hoop is won, the sequence of play continues as before. The winner of the game is the player/team who wins the most hoops. Golf croquet is the fastest-growing version of the game, owing largely to its simplicity and competitiveness. There is an especially large interest in competitive success by players in Egypt. Golf croquet is easier to learn and play, but requires strategic skills and accurate play. In comparison with association croquet, play is faster and balls are more likely to be lifted off the ground. In April 2013, Reg Bamford of South Africa beat Ahmed Nasr of Egypt in the final of the Golf Croquet World Championship in Cairo, becoming the first person to simultaneously hold the title in both association croquet and golf croquet. As of 2020, the Golf Croquet World Champion was Ben Rothman (USA) and the Women's Golf Croquet World Champion was Soha Mostafa (Egypt). In 2018, two international championships open to both sexes were won by women: in May, Rachel Gee of England beat Pierre Beaudry of Belgium to win the European Golf Croquet championship, and in October, Hanan Rashad of Egypt beat Yasser Fathy (also from Egypt) to win the World over-50s Golf Croquet championship. Garden Garden croquet is widely played in the UK. The rules are easy to learn and the game can be played on lawns of almost any size, but usually around by . The rules are similar to those described above for Association Croquet with three major differences: The starting point for all balls is a spot in from the boundary directly in front of hoop 1. 
If a striker's ball goes off, there is no penalty: it comes back on and the turn continues. In a croquet stroke, the croqueted ball does not have to move when the striker's ball is struck. This version of the game is easy for beginners to learn. The main Garden Croquet Club in the UK is the Bygrave Croquet Club, which is a private club with five lawns. Other clubs also use garden croquet as an introduction to the game, notably the Hampstead Heath Croquet Club and the Watford Croquet Club. American six-wicket The American-rules version of croquet, another six-hoop game, is the dominant version of the game in the United States and is also widely played in Canada. It is governed by the United States Croquet Association. Its genesis is mostly in association croquet, but it differs in a number of important ways that reflect the home-grown traditions of American "backyard" croquet. Two of the most notable differences are that the balls are always played in the same sequence (blue, red, black, yellow) throughout the game, and that a ball's "deadness" on other balls is carried over from turn to turn until the ball has been "cleared" by scoring its next hoop. A Deadness Board is used to keep track of deadness on all four balls. Tactics are simplified on the one hand by the strict sequence of play, and complicated on the other hand by the continuation of deadness. A further difference is the more restrictive boundary-line rules of American croquet. In the American game, roqueting a ball out of bounds or running a hoop out of bounds causes the turn to end, and balls that go out of bounds are replaced only from the boundary rather than as in association croquet. "Attacking" balls on the boundary line to bring them into play is thus far more challenging. Nine-wicket Nine-wicket croquet, sometimes called "backyard croquet", is played mainly in Canada and the United States, and is the game most recreational players in those countries call simply "croquet". In this version of croquet, there are nine wickets, two stakes, and up to six balls. The course is arranged in a double-diamond pattern, with one stake at each end of the course. Players start at one stake, navigate one side of the double diamond, hit the turning stake, then navigate the opposite side of the double diamond and hit the starting stake to end. If playing individually (Cutthroat), the first player to stake out is the winner. In partnership play, all members of a team must stake out, and a player might choose to avoid staking out (becoming a Rover) in order to help a lagging teammate. Each time a ball is roqueted, the striker gets two bonus shots. For the first bonus shot, the player has four options: (1) from a mallet-head distance or less away from the ball that was hit ("taking a mallet-head"); (2) from a position in contact with the ball that was hit, with the striker ball held steady by the striker's foot or hand (a "foot shot" or "hand shot"); (3) from a position in contact with the ball that was hit, with the striker ball not held by foot or hand (a "croquet shot"); (4) from where the striker ball stopped after the roquet. The second bonus shot ("continuation shot") is an ordinary shot played from where the striker ball came to rest. An alternate endgame is "poison": in this variant, a player who has scored the last wicket but not hit the starting stake becomes a "poison ball", which may eliminate other balls from the game by roqueting them. A non-poison ball that roquets a poison ball has the normal options.
A poison ball that hits a stake or passes through any wicket (possibly by the action of a non-poison player) is eliminated. The last person remaining is the winner. Ricochet This version of the game was invented by John Riches of Adelaide, Australia, with help from Tom Armstrong, in the 1980s. The game can be played by up to six people and is very easy to learn. For this reason it is often used as a stepping stone to association croquet. Ricochet has similar rules to association and garden croquet, except that when a ball is roqueted, the striker's ball remains live and two free shots are earned. This enables strikers to play their ball near to another opponent's ball and ricochet that too thus earning two more free shots. Running a hoop earns one free shot. One-ball One-ball croquet has become popular in recent years as a way of bringing AC (association) and GC (golf) players together. The rules are essentially those of association croquet, except that each player or team has only one ball rather than two. This makes it very hard to create a break, which leads to more interactive play. History The oldest document to bear the word croquet with a description of the modern game
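The "deadness" bookkeeping described above for the American six-wicket game maps naturally onto a small data structure. The sketch below is only an illustration of the idea, not USCA terminology or an official implementation: each ball carries a set of the balls it is currently dead on, a roquet adds to that set, and scoring the ball's next hoop clears it.

```python
class DeadnessBoard:
    """Toy model of the deadness tracking used in American six-wicket croquet.

    A ball becomes "dead" on any ball it has roqueted and stays dead on it
    until it scores ("clears") its next hoop, as described above.
    """

    BALLS = ("blue", "red", "black", "yellow")   # fixed sequence of play

    def __init__(self):
        self.dead_on = {ball: set() for ball in self.BALLS}

    def roquet(self, striker, target):
        """Record that `striker` has roqueted `target` and is now dead on it."""
        self.dead_on[striker].add(target)

    def is_dead(self, striker, target):
        return target in self.dead_on[striker]

    def score_hoop(self, striker):
        """Scoring the next hoop clears all of the striker's deadness."""
        self.dead_on[striker].clear()


board = DeadnessBoard()
board.roquet("blue", "red")
board.roquet("blue", "yellow")
print(board.is_dead("blue", "red"))    # True: blue may not roquet red again yet
board.score_hoop("blue")
print(board.is_dead("blue", "red"))    # False: deadness cleared by running the hoop
```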
soles; the slider shoe (usually known as a "slider") is designed for the sliding foot and the "gripper shoe" (usually known as a gripper) for the foot that kicks off from the hack. The slider is designed to slide and typically has a Teflon sole. It is worn by the thrower during delivery from the hack and by sweepers or the skip to glide down the ice when sweeping or otherwise traveling down the sheet quickly. Stainless steel and "red brick" sliders with lateral blocks of PVC on the sole are also available as alternatives to Teflon. Most shoes have a full-sole sliding surface, but some shoes have a sliding surface covering only the outline of the shoe and other enhancements with the full-sole slider. Some shoes have small disc sliders covering the front and heel portions or only the front portion of the foot, which allow more flexibility in the sliding foot for curlers playing with tuck deliveries. When a player is not throwing, the player's slider shoe can be temporarily rendered non-slippery by using a slip-on gripper. Ordinary athletic shoes may be converted to sliders by using a step-on or slip-on Teflon slider or by applying electrical or gaffer tape directly to the sole or over a piece of cardboard. This arrangement often suits casual or beginning players. The gripper is worn by the thrower on the foot that kicks off from the hack during delivery and is designed to grip the ice. It may have a normal athletic shoe sole or a special layer of rubbery material applied to the sole of a thickness to match the sliding shoe. The toe of the hack foot shoe may also have a rubberised coating on the top surface or a flap that hangs over the toe to reduce wear on the top of the shoe as it drags on the ice behind the thrower. Other equipment Other types of equipment include: Curling pants, made to be stretchy to accommodate the curling delivery. A stopwatch to time the stones over a fixed distance to calculate their speed. Stopwatches can be attached either to clothing or the broom. Curling gloves and mittens, to keep the hands warm and improve grip on the broom. Gameplay The purpose of a game is to score points by getting stones closer to the house centre, or the "button", than the other team's stones. Players from either team alternate in taking shots from the far side of the sheet. An end is complete when all eight rocks from each team have been delivered, a total of sixteen stones. If the teams are tied at the end of regulation, often extra ends are played to break the tie. The winner is the team with the highest score after all ends have been completed (see Scoring below). A game may be conceded if winning the game is infeasible. International competitive games are generally ten ends, so most of the national championships that send a representative to the World Championships or Olympics also play ten ends. However, there is a movement on the World Curling Tour to make the games only eight ends. Most tournaments on that tour are eight ends, as are the vast majority of recreational games. In international competition, each side is given 73 minutes to complete all of its throws. Each team is also allowed two minute-long timeouts per 10-end game. If extra ends are required, each team is allowed 10 minutes of playing time to complete its throws and one added 60-second timeout for each extra end. However, the "thinking time" system, in which the delivering team's game timer stops as soon as the shooter's rock crosses the t-line during the delivery, is becoming more popular, especially in Canada. 
This system allows each team 38 minutes per 10 ends, or 30 minutes per 8 ends, to make strategic and tactical decisions, with 4 minutes and 30 seconds an end for extra ends. The "thinking time" system was implemented after it was recognized that using shots which take more time for the stones to come to rest was being penalized in terms of the time the teams had available compared to teams which primarily use hits which require far less time per shot. Delivery The process of sliding a stone down the sheet is known as the delivery or throw. Players, with the exception of the skip, take turns throwing and sweeping; when one player (e.g., the lead) throws, the players not delivering (the second and third) sweep (see Sweeping, below). When the skip throws, the vice-skip takes their role. The skip, or the captain of the team, determines the desired stone placement and the required weight, turn, and line that will allow the stone to stop there. The placement will be influenced by the tactics at this point in the game, which may involve taking out, blocking, or tapping another stone. The weight of the stone is its velocity, which depends on the leg drive of the delivery rather than the arm. The turn or curl is the rotation of the stone, which gives it a curved trajectory. The line is the direction of the throw ignoring the effect of the turn. The skip may communicate the weight, turn, line, and other tactics by calling or tapping a broom on the ice. In the case of a takeout, guard, or a tap, the skip will indicate the stones involved. Before delivery, the running surface of the stone is wiped clean and the path across the ice swept with the broom if necessary, since any dirt on the bottom of a stone or in its path can alter the trajectory and ruin the shot. Intrusion by a foreign object is called a pick-up or pick. The thrower starts from the hack. The thrower's gripper shoe (with the non-slippery sole) is positioned against one of the hacks; for a right-handed curler the right foot is placed against the left hack and vice versa for a left-hander. The thrower, now in the hack, lines the body up with shoulders square to the skip's broom at the far end for line. The stone is placed in front of the foot now in the hack. Rising slightly from the hack, the thrower pulls the stone back (some older curlers may actually raise the stone in this backward movement) then lunges smoothly out from the hack pushing the stone ahead while the slider foot is moved in front of the gripper foot, which trails behind. The thrust from this lunge determines the weight, and hence the distance the stone will travel. Balance may be assisted by a broom held in the free hand with the back of the broom down so that it slides. One older writer suggests the player keep "a basilisk glance" at the mark. There are two common types of delivery currently, the typical flat-foot delivery and the Manitoba tuck delivery where the curler slides on the front ball of his foot. When the player releases the stone, a rotation (called the turn) is imparted by a slight clockwise or counter-clockwise twist of the handle from around the two or ten o'clock position to the twelve o'clock on release. A typical rate of turn is about rotations before coming to a rest. The stone must be released before its front edge crosses the near hog line. In major tournaments, the "Eye on the Hog" sensor is commonly used to enforce this rule. The sensor is in the handle of the stone and will indicate whether the stone was released before the near hog line. 
The lights on the stone handle will either light up green, indicating that the stone has been legally thrown, or red, in which case the illegally thrown stone will be immediately pulled from play instead of waiting for the stone to come to rest. The stone must clear the far hog line or else be removed from play (hogged); an exception is made if a stone fails to come to rest beyond the far hog line after rebounding from a stone in play just past the hog line. Sweeping After the stone is delivered, its trajectory is influenced by the two sweepers under instruction from the skip. Sweeping is done for several reasons: to make the stone travel farther, to decrease the amount of curl, and to clean debris from the stone's path. Sweeping is able to make the stone travel farther and straighter by slightly melting the ice under the brooms, thus decreasing the friction as the stone travels across that part of the ice. The stones curl more as they slow down, so sweeping early in travel tends to increase distance as well as straighten the path, and sweeping after sideways motion is established can increase the sideways distance. One of the basic technical aspects of curling is knowing when to sweep. When the ice in front of the stone is swept a stone will usually travel both farther and straighter, and in some situations one of those is not desirable. For example, a stone may be traveling too fast (said to have too much weight) but require sweeping to prevent curling into another stone. The team must decide which is better: getting by the other stone but traveling too far, or hitting the stone. Much of the yelling that goes on during a curling game is the skip and sweepers exchanging information about the stone's line and weight and deciding whether to sweep. The skip evaluates the path of the stone and calls to the sweepers to sweep as necessary to maintain the intended track. The sweepers themselves are responsible for judging the weight of the stone, ensuring that the length of travel is correct and communicating the weight of the stone back to the skip. Many teams use a number system to communicate in which of 10 zones the sweepers estimate the stone will stop. Some sweepers use stopwatches to time the stone from the back line or tee line to the nearest hog line to aid in estimating how far the stone will travel. Usually, the two sweepers will be on opposite sides of the stone's path, although depending on which side the sweepers' strengths lie this may not always be the case. Speed and pressure are vital to sweeping. In gripping the broom, one hand should be one third of the way from the top (non-brush end) of the handle while the other hand should be one third of the way from the head of the broom. The angle of the broom to the ice should be such that the most force possible can be exerted on the ice. The precise amount of pressure may vary from relatively light brushing ("just cleaning" - to ensure debris will not alter the stone's path) to maximum-pressure scrubbing. Sweeping is allowed anywhere on the ice up to the tee line; once the leading edge of a stone crosses the tee line only one player may sweep it. Additionally, if a stone is behind the tee line one player from the opposing team is allowed to sweep it. This is the only case that a stone may be swept by an opposing team member. In international rules, this player must be the skip, but if the skip is throwing, then the sweeping player must be the third. 
Burning a stone Occasionally, players may accidentally touch a stone with their broom or a body part. This is often referred to as burning a stone. Players touching a stone in such a manner are expected to call their own infraction as a matter of good sportsmanship. Touching a stationary stone when no stones are in motion (there is no delivery in progress) is not an infraction as long as the stone is struck in such a manner that its position is not altered, and this is a common way for the skip to indicate a stone that is to be taken out. When a stone is touched when stones are in play, the remedies vary between leaving the stones as they end up after the touch, replacing the stones as they would have been if no stone were touched, or removal of the touched stone from play. In non-officiated league play, the skip of the non-offending team has the final say on where the stones are placed after the infraction. Types of shots Many different types of shots are used to carefully place stones for strategic or tactical reasons; they fall into three fundamental categories as follows: Guards are thrown in front of the house in the free guard zone, usually to protect a stone or to make the opposing team's shot difficult. Guard shots include the centre-guard, on the centreline, and the corner-guards to the left or right sides of the centre line. See Free Guard Zone below. Draws are thrown only to reach the house. Draw shots include raise, come-around, and freeze shots. Takeouts are intended to remove stones from play and include the peel, hit-and-roll, and double shots. For a more complete listing, see Glossary of curling terms. Free guard zone The free guard zone is the area of the curling sheet between the hog line and tee line, excluding the house. Until five stones have been played (three from the side without hammer and two from the side with hammer), stones in the free guard zone may not be removed by an opponent's stone, although they can be moved within the playing area. If a stone in the free guard zone is knocked out of play, it is placed back in the position it was in before the shot was thrown and the opponent's stone is removed from play. This rule is known as the five-rock rule or the free guard zone rule (previous versions of the free guard zone rule only limited removing guards from play in the first three or four rocks). This rule, a relatively recent addition to curling, was added in response to a strategy by teams of gaining a lead in the game and then peeling all of the opponents' stones (knocking them out of play at an angle that caused the shooter's stone to also roll out of play, leaving no stones on the ice). By knocking all stones out the opponents could at best score one point, if they had the last stone of the end (called the hammer). If the team peeling the rocks had the hammer they could peel rock after rock which would blank the end (leave the end scoreless), keeping the last rock advantage for another end. This strategy had developed (mostly in Canada) as ice-makers had become skilled at creating a predictable ice surface and newer brushes allowed greater control over the rock. While a sound strategy, this made for an unexciting game. Observers at the time noted that if two teams equally skilled in the peel game faced each other on good ice, the outcome of the game would be predictable from who won the coin flip to have last rock (or had earned it in the schedule) at the beginning of the game. 
The 1990 Brier (Canadian men's championship) was considered boring to watch by many curling fans because of the amount of peeling, and the quick adoption of the free guard zone rule the following year reflected how disliked this aspect of the game had become. The free guard zone rule was originally called the Modified Moncton Rule and was developed from a suggestion made by Russ Howard for the Moncton 100 cashspiel in Moncton, New Brunswick, in January 1990. "Howard's Rule" (later known as the Moncton Rule), used for the tournament and based on a practice drill his team used, specified that the first four rocks in play could not be removed, no matter where they were, at any time during the end. This method of play was altered by restricting the area in which a stone was protected to the free guard zone, and only for the first four rocks thrown, and was adopted as a four-rock free guard zone rule for international competition shortly after. Canada kept to the traditional rules until a three-rock free guard zone rule was adopted for the 1993–94 season. After several years of having the three-rock rule used for the Canadian championships and the winners then having to adjust to the four-rock rule in the World Championships, the Canadian Curling Association adopted the four-rock free guard zone in the 2002–03 season. One strategy that has been developed by curlers in response to the free guard zone (Kevin Martin from Alberta is one of the best examples) is the "tick" game, where a shot is made attempting to knock (tick) the guard to the side, far enough that it is difficult or impossible to use but still remaining in play, while the shot itself rolls out of play. The effect is functionally identical to peeling the guard but significantly harder, as a shot that hits the guard too hard (knocking it out of play) results in its being replaced, while not hitting it hard enough can result in it still
being tactically useful for the opposition. There is also a greater chance that the shot will miss the guard entirely because of the greater accuracy required to make the shot. Because of the difficulty of making this type of shot, only the best teams will normally attempt it, and it does not dominate the game the way the peel formerly did.
Steve Gould from Manitoba popularized ticks played across the face of the guard stone. These are easier to make because they impart less speed on the object stone, therefore increasing the chance that it remains in play even if a bigger chunk of it is hit. With the tick shot reducing the effectiveness of the four-rock rule, the Grand Slam of Curling series of bonspiels adopted a five-rock rule in 2014. In 2017, the five-rock rule was adopted by the World Curling Federation and member organizations for official play, beginning in the 2018–19 season. Hammer The last rock in an end is called the hammer, and throwing the hammer gives a team a tactical advantage. Before the game, teams typically decide who gets the hammer in the first end either by chance (such as a coin toss), by a "draw-to-the-button" contest, where a representative of each team shoots to see who gets closer to the centre of the rings, or, particularly in tournament settings like the Winter Olympics, by a comparison of each team's win-loss record. In all subsequent ends, the team that did not score in the preceding end gets to throw second, thus having the hammer. In the event that neither team scores, called a blanked end, the hammer remains with the same team. Naturally, it is easier to score points with the hammer than without; the team with the hammer generally tries to score two or more points. If only one point is possible, the skip may try to avoid scoring at all in order to retain the hammer the next end, giving the team another chance to use the hammer advantage to try to score two points. Scoring without the hammer is commonly referred to as stealing, or a steal, and is much more difficult. Strategy Curling is a game of strategy, tactics, and skill. The strategy depends on the team's skill, the opponent's skill, the conditions of the ice, the score of the game, how many ends remain and whether the team has last-stone advantage (the hammer). A team may play an end aggressively or defensively. Aggressive playing will put a lot of stones in play by throwing mostly draws; this makes for an exciting game and is very risky but the reward can be very great. Defensive playing will throw a lot of hits preventing a lot of stones in play; this tends to be less exciting and less risky. A good drawing team will usually opt to play aggressively, while a good hitting team will opt to play defensively. If a team does not have the hammer in an end, it will opt to try to clog up the four-foot zone in the house to deny the opposing team access to the button. This can be done by throwing "centre line" guards in front of the house on the centre line, which can be tapped into the house later or drawn around. If a team has the hammer, they will try to keep this four-foot zone free so that they have access to the button area at all times. A team with the hammer may throw a corner guard as their first stone of an end placed in front of the house but outside the four-foot zone to utilize the free guard zone. Corner guards are key for a team to score two points in an end, because they can either draw around it later or hit and roll behind it, making the opposing team's shot to remove it more difficult. Ideally, the strategy in an end for a team with the hammer is to score two points or more. Scoring one point is often a wasted opportunity, as they will then lose last-rock advantage for the next end. 
If a team cannot score two points, they will often attempt to "blank an end" by removing any leftover opposition rocks and rolling out; or, if there are no opposition rocks, just throwing the rock through the house so that no team scores any points, and the team with the hammer can try again the next end to score two or more with it. Generally, a team without the hammer would want to either force the team with the hammer to only one point (so that they can get the hammer back) or "steal" the end by scoring one or more points of their own. Generally, the larger the lead a team will have in a game, the more defensively they should play. By hitting all of the opponent's stones, it removes opportunities for their getting multiple points, therefore defending the lead. If the leading team is quite comfortable, leaving their own stones in play can also be dangerous. Guards can be drawn around by the other team, and stones in the house can be tapped back (if they are in front of the tee line) or frozen onto (if they are behind the tee line). A frozen stone is difficult to remove because it is "frozen" (in front of and touching) to the opponent's stone. At this point, a team will opt for "peels", meaning that the stones they throw will be to not only hit their opposition stones, but to roll out of play as well. Peels are hits that are thrown with the most amount of power. Conceding a game It is common at any level for a losing team to terminate the match before all ends are completed if it believes it no longer has a realistic chance of winning. Competitive games end once the losing team has "run out of rocks"—that is, once it has fewer stones in play and available for play than the number of points needed to tie the game. Dispute resolution Most decisions about rules are left to the skips, although in official tournaments, decisions may be left to the officials. However, all scoring disputes are handled by the vice skip. No players other than the vice skip from each team should be in the house while score is being determined. In tournament play, the most frequent circumstance in which a decision has to be made by someone other than the vice skip is the failure of the vice skips to agree on which stone is closest to the button. An independent official (supervisor at Canadian and World championships) then measures the distances using a specially designed device that pivots at the centre of the button. When no independent officials are available, the vice skips measure the distances. Scoring The winner is the team having the highest number of accumulated points at the completion of ten ends. Points are scored at the conclusion of each of these ends as follows: when each team has thrown its eight stones, the team with the stone closest to the button wins that end; the winning team is then awarded one point for each of its own stones lying closer to the button than the opponent's closest stone. Only stones that are in the house are considered in the scoring. A stone is in the house if it lies within the zone or any portion of its edge lies over the edge of the ring. Since the bottom of the stone is rounded, a stone just barely in the house will not have any actual contact with the ring, which will pass under the rounded edge of the stone, but it still counts. This type of stone is known as a biter. It may not be obvious to the eye which of the two rocks is closer to the button (centre) or if a rock is actually biting or not. 
There are specialized devices to make these determinations, but these cannot be brought out until after an end is completed. Therefore, a team may make strategic decisions during an end based on assumptions of rock position that turn out to be incorrect. The score is marked on a scoreboard, of which there are two types: the baseball type and the club scoreboard. The baseball-style scoreboard was created for televised games for audiences not familiar with the club scoreboard. The ends are marked by columns 1 through 10 (or 11 for the possibility of an extra end to break ties) plus an additional column for the total. Below this are two rows, one for each team, containing the team's score for that end and their total score in the right-hand column. The club scoreboard is traditional and used in most curling clubs. Scoring on this board requires the use of only (up to) 11 digit cards, whereas with baseball-type scoring an unknown number of multiples of the digits (especially low digits like 1) may be needed. The numbered centre row represents various possible cumulative scores, and the numbers placed in the team rows represent the end in which that team achieved that cumulative score. If the red team scores three points in the first end (called a three-ender), then a 1 (indicating the first end) is placed beside the number 3 in the red row. If they score two more in the second end, then a 2 will be placed beside the 5 in the red row, indicating that the red team has five points in total (3+2). This scoreboard works because only one team can score points in an end. However, some confusion may arise if neither team scores points in an end; this is called a blank end. The blank end numbers are usually listed in the farthest column on the right in the row of the team that has the hammer (last-rock advantage), or in a special spot for blank ends. The men's final at the 2006 Winter Olympics provides an example of the difference between the two types. Eight points – all the rocks thrown by one team counting – is the highest score possible in an end, and is known as an "eight-ender" or "snowman". Scoring an eight-ender against a relatively competent team is very difficult; in curling, it is considered the equivalent of pitching a perfect game in baseball. Probably the best-known snowman came at the 2006 Players' Championships, when future (2007) World Champion Kelly Scott scored eight points in one of her games against 1998 World bronze medalist Cathy King. Curling culture Competition teams are normally named after the skip, for example, Team Martin after skip Kevin Martin.
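As an illustration of the end-scoring rule described above, the following is a minimal sketch in Python. It is a hypothetical helper, not taken from any official curling software: the function names and the approximate ring and stone radii used to decide whether a stone is in the house are assumptions chosen for the example. It awards the scoring team one point for each of its stones in the house that lies closer to the button than the opposing team's closest stone, and treats an end with no stones in the house as a blank end.

```python
# Minimal sketch of the end-scoring rule described above. Hypothetical helper,
# not from any official curling software; the radii below are approximate.
# Distances are measured from the centre of the button to the centre of each
# stone, in metres.

HOUSE_RADIUS = 1.83   # six-foot radius of the 12-foot ring (approximate)
STONE_RADIUS = 0.15   # approximate radius of a stone

def in_house(distance):
    # A "biter" still counts: the stone's edge only needs to overlap the ring.
    return distance <= HOUSE_RADIUS + STONE_RADIUS

def score_end(team_a_distances, team_b_distances):
    """Return (scoring_team, points) for one end.

    Each argument lists the button distances of the stones a team still has
    in play; only stones in the house can count. Exact measured ties are not
    handled in this sketch.
    """
    a = sorted(d for d in team_a_distances if in_house(d))
    b = sorted(d for d in team_b_distances if in_house(d))

    if not a and not b:
        return None, 0  # blank end: neither team scores, hammer is retained

    if not b or (a and a[0] < b[0]):
        nearest_opponent = b[0] if b else float("inf")
        return "A", sum(1 for d in a if d < nearest_opponent)

    nearest_opponent = a[0] if a else float("inf")
    return "B", sum(1 for d in b if d < nearest_opponent)

if __name__ == "__main__":
    # Team A's counting stones sit 0.3 m and 1.1 m from the button; Team B's
    # closest is 0.8 m. Only A's 0.3 m stone beats B's closest, so A scores one.
    print(score_end([0.3, 1.1, 4.0], [0.8, 2.5]))  # -> ('A', 1)
```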
Modern times When the Hillsborough disaster occurred in 1989, Fulham were on the second-bottom rung of the Football League, but following the Taylor Report, Fulham's ambitious chairman Jimmy Hill tabled plans in 1996 for an all-seater stadium. These plans never came to fruition, partly due to local residents' pressure groups, and by the time Fulham reached the Premier League, they still had standing areas in the ground, something virtually unheard of at the time. A year remained to do something about this (teams reaching the second tier for the first time are allowed a three-year period to reach the required standards for the top two divisions), but by the time the last league game was played there, against Leicester City on 27 April 2002, no building plans had been made. Two more Intertoto Cup games were played there later that year (against FC Haka of Finland and Egaleo FC of Greece), and the eventual solution was to decamp to Loftus Road, home of local rivals QPR. During this time, many Fulham fans went only to away games in protest at the move from Craven Cottage. 'Back to the Cottage', later to become the 'Fulham Supporters Trust', was set up as a fans' pressure group to persuade the chairman and his advisers that Craven Cottage was the only viable option for Fulham Football Club. After one and a half seasons at Loftus Road, no work had been done on the Cottage. In December 2003, plans were unveiled for £8 million worth of major refurbishment work to bring it in line with Premier League requirements. With planning permission granted, work began in January 2004 in order to meet the deadline of the new season. The work proceeded as scheduled and the club were able to return to their home for the start of the 2004–05 season. Their first game in the new-look 22,000 all-seater stadium was a pre-season friendly against Watford on 10 July 2004. Fenway Sports Group originally partnered with Fulham in 2009, due to the perceived heritage and quirks shared between the Cottage and Fenway Park, saying that no English club identifies with its stadium as much as Fulham. The current stadium was one of the Premier League's smallest grounds at the time of Fulham's relegation at the end of the 2013–14 season (it was third-smallest, after the KC Stadium and the Liberty Stadium). Much admired for its fine architecture, the stadium has hosted a number of international games in recent years, most of them involving Australia. The venue suits Australia because most of the country's top players are based in Europe, and West London has a significant community of expatriate Australians. It also hosted Greece against South Korea on 6 February 2007, and in 2011 Brazil played Ghana there in an international friendly and the ground staged the Women's Champions League final. Craven Cottage often hosts many other events such as 5-a-side football tournaments and weddings. Many fans also have Sunday lunch at the Riverside restaurant or the 'Cottage Cafe' on non-match days. Craven Cottage hosted the Oxbridge Varsity Football match annually between 1991 and 2000 and again in 2003, 2006 (the same day as the famous 'Boat Race'), 2008, 2009, and 2014, as well as a Soccer Aid warm-up match in 2006. The half-time entertainment often includes the SW6ers (previously called The Cravenettes), a group of female cheerleaders.
Other events have included brass bands, Michael Jackson (although just walking on the pitch, as opposed to performing), the band Travis playing, Arabic dancing, keepie uppie professionals and presentational awards. Most games also feature the 'Fulham flutter', a half-time draw, and a shoot-out competition of some kind, usually involving scoring through a 'hoop' or 'beat the goalie'. At the first home game of the season, there is a carnival where every Fulham fan is expected to turn up in black-and-white colours. There are usually live rock bands, player signings, clowns, stilt walkers, a steel (calypso) band, food stalls and a free training session for children in Bishops Park. The Fulham Ladies (before their demise) and Reserve teams occasionally play home matches at the Cottage. Other than this, they generally play at the club's training ground at Motspur Park or at Kingstonian and AFC Wimbledon's stadium, Kingsmeadow. Craven Cottage is known by several affectionate nicknames among fans, including: The (River) Cottage, The Fortress (or Fortress Fulham), Thameside, The Friendly Confines, SW6, Lord of the Banks, The House of Hope, The Pavilion of Perfection, The 'True' Fulham Palace and The Palatial Home. The stretch of the Thames alongside the Cottage is often referred to as 'Old Father' or The River of Dreams. The most accessible route to the ground is to walk through Bishops Park from Putney Bridge (the nearest Underground station), a route often known as 'The Green Mile' by Fulham fans (as it is roughly a mile walk through pleasant greenery). The Telegraph ranked the Cottage 9th out of 54 grounds to hold Premier League football. Plans On 27 July 2012, Fulham FC were granted permission to redevelop the Riverside Stand, increasing the capacity of Craven Cottage to 30,000 seats. Beforehand, various rumours had arisen, including plans to return to ground-sharing with QPR in a new 40,000-seater White City stadium, although these now appear firmly on hold following the construction of the Westfield shopping centre on the proposed site. The board seem to have moved away from their ambition to make Fulham the "Manchester United of the south" as it became clear how expensive such a plan would be. With large spaces of land at a premium in south-west London, Fulham appear to be committed to gradually increasing the ground's capacity, often during the summer between seasons. The capacity has been increased in some of these summer windows, for instance in 2008, when the Hammersmith End was slightly enlarged. Fulham announced in 2007 that they were planning to increase the capacity of Craven Cottage by 4,000 seats, but this has yet to be implemented. There were also proposals for a bridge to span the Thames, a redeveloped Riverside Stand and a museum. More substantial plans arose in October 2011 with the 'Fulham Forever' campaign. After Mohamed Al-Fayed sold the Harrods department store for £1.5 billion in May 2010, a detailed plan emerged that identified the Riverside Stand as the only viable area for expansion. The scheme involved demolishing the back of the Riverside Stand and adding a new tier of seating on top of the current one, together with a row of corporate boxes, bringing Craven Cottage up to a 30,000 capacity. Taking local residents into account, the proposal would reopen the riverside walk, reduce light pollution by removing the floodlight masts, add new access points to make match-day crowds more manageable, and give the new stand a design respectful of its position on the River Thames.
Buckingham Group Contracting were chosen in March 2013 as the construction company for the project. In May 2019, the club confirmed that work on the new Riverside Stand would commence in the summer of 2019. During the 2019–20 and 2020–21 seasons, the ground's capacity was temporarily reduced to 19,000. The ground as it stands Hammersmith End The Hammersmith End (or Hammy) is the northernmost stand in
largely motivated by Fulham's failure thus far to gain promotion to the top division of English football. There were also plans for Henry Norris to build a larger stadium on the other side of Stevenage Road, but there was little need after the merger idea failed. During this era, the Cottage was used for choir singing, marching bands and other performances, as well as Mass. In 1933 there were plans to demolish the ground and start again from scratch with a new 80,000-capacity stadium. These plans never materialised, mainly due to the Great Depression. On 8 October 1938, 49,335 spectators watched Fulham play Millwall. It was the largest attendance ever at Craven Cottage, and the record remains today, unlikely to be bettered as the ground is now an all-seater stadium with room for no more than 25,700. The ground hosted several football games for the 1948 Summer Olympics and is one of the last extant venues to have done so. Post-War It was not until after Fulham first reached the top division, in 1949, that further improvements were made to the stadium. In 1962 Fulham became the last side in the First Division to erect floodlights. The floodlights, highly modern for the time, were said to be the most expensive in Europe. The lights resembled large pylons, towering 50 metres over the ground, and were similar in appearance to those at the WACA. An electronic scoreboard was installed on the Riverside Terrace at the same time as the floodlights, and the flags of all the other First Division teams were flown from flagpoles there. Following the sale of Alan Mullery to Tottenham Hotspur in 1964 (for £72,500), the Hammersmith End had a roof put over it at a cost of approximately £42,500. Although Fulham were relegated, the development of Craven Cottage continued. The Riverside terracing, infamous for the fact that fans occupying it would turn their heads annually to watch The Boat Race pass, was replaced by what was officially named the 'Eric Miller Stand', Eric Miller being a director of the club at the time. The stand, which cost £334,000 and held 4,200 seats, was opened with a friendly game against Benfica in February 1972 (a match which featured Eusébio). Pelé was also to appear at the ground, in a friendly played against his club Santos F.C. The Miller Stand brought the seated capacity up to 11,000 out of a total of 40,000. Eric Miller committed suicide five years later in the wake of a political and financial scandal, having been involved in shady dealings aimed at moving Fulham away from the Cottage. The stand is now better known as the Riverside Stand. On Boxing Day 1963, Craven Cottage was the venue of the fastest hat-trick in the history of the English Football League, completed in less than three minutes by Graham Leggat. This helped his Fulham team to beat Ipswich 10–1 (a club record). The international record is held by Jimmy O'Connor, an Irish player who notched up his hat-trick in 2 minutes 14 seconds in 1967. Between 1980 and 1984, the Fulham rugby league club played their home games at the Cottage. They have since evolved into the London Crusaders, the London Broncos and Harlequins Rugby League before reverting to London Broncos ahead of the 2012 season. Craven Cottage held the team's largest ever crowd at any ground with 15,013, at a game against Wakefield Trinity on 15 February 1981.
Constantine III (Western Roman emperor) Constantine III (Byzantine emperor) Constantine IV Constantine V Constantine VI Constantine VII Porphyrogenitus Constantine VIII Constantine IX Monomachos Constantine X Doukas Constantine XI Palaiologos Emperors not enumerated Tiberius II, reigned officially as "Constantine" Constans II, reigned officially as "Constantine" Constantine (son of Leo V) Constantine (son of Theophilos) Constantine (son of Basil I) Constantine Doukas (co-emperor) Constantine Lekapenos Constantine Laskaris (?) Other rulers Constantine I, Prince of Armenia Constantine II, Prince of Armenia Constantine I, King of Armenia, also called Constantine III Constantine II, King of Armenia, also called Constantine IV Constantine III, King of Armenia, also called Constantine V Constantine IV, King of Armenia, also called Constantine VI Constantine of Baberon, regent of Zabel, and father of Hetoum I of Armenia, 13th century Constantine I (or Kuestantinos I) of Ethiopia, also known as Zara Yaqob Constantine II (or Kuestantinos II) of Ethiopia, also known as Eskender Constantine I of Greece Constantine II of Greece Constantine I of Arborea Constantín mac Fergusa, or Constantin of the Picts Constantín mac Cináeda, or Constantine I of Scotland Constantine II of Scotland Constantine III of Scotland Constantine I of Cagliari Constantine II of Cagliari Constantine III of Gallura Constantine I of Torres Constantine Tikh of Bulgaria Grand Duke Constantine Pavlovich of Russia Constantine Dragaš Constantine I of Georgia Constantine II of Georgia Constantine I of Imereti Constantine Mavrocordatos Constantine Ypsilantis Constantine (Briton), king in sub-Roman Britain Constantine of Strathclyde, supposed king of
obscure saints Constantine of Preslav, a medieval Bulgarian scholar Constantine or Causantín, Earl of Fife (fl. 1095–1128), a Scottish nobleman Constantine Stilbes (fl. 1070–1220), a Byzantine clergyman and poet Constantine the African (c. 1020–1087), a Tunisian doctor Constantine the Jew (d. c. 886), Byzantine monk Constantine-Silvanus (also called Silvanus), founder of the Paulicians Saint Cyril the Philosopher, whose original name was Constantine Fictional characters John Constantine, a fictional character appearing in DC Comics franchise, including Hellblazer Constantine (comic book), a comic book series replacing the earlier Hellblazer Constantine (film), a 2005 American film based on the DC Comic book character from the Hellblazer series Constantine (video game), an action-adventure video game based on the film Constantine (TV series), a 2014 NBC TV series, based on the comic book Hellblazer Constantine: City of Demons, a 2018 CW Seed animated web series Places Algeria Constantine, Algeria, the nation's third largest city and capital of Constantine Province Constantine Province, surrounding the city of the same name Beylik of Constantine, an administrative unit of the Regency of Algiers Constantine (departement), similar area during French Algeria Serbia Constantine the Great Airport, Niš, Serbia Switzerland Constantine, Switzerland, a municipality in the canton of Vaud United Kingdom Constantine Bay, near Padstow, Cornwall Constantine, Cornwall, near Falmouth Constantine College, York, a college of the University of York United States Constantine, Michigan, a village in St. Joseph county Other uses Order of Constantine Constantine (album), a 2007 album by Constantine Maroulis Constantine, a 2020 album by 40 Glocc Constantine, a frog character who resembles Kermit the Frog and is the foremost criminal in the 2014 film Muppets Most Wanted See also Constantin (disambiguation) Constantines,
of female composers by name List of female composers by birth date List of Australian female composers Genre Anime composer List of Carnatic composers List of film score composers List of major opera composers List of composers of musicals List of musicals by composer: A to L, M to Z List of ragtime composers List of sports television composers List of symphony composers List of acousmatic-music composers List of Spaghetti Western composers List of television theme music composers Era List of classical music composers by era List of Medieval composers List of Renaissance composers List of Baroque composers List
List of Romantic-era composers List of 20th-century classical composers List of 21st-century classical composers Nationality or ethnicity Chronological lists of classical composers by nationality List of composers by nationality Instrument List of composers for the classical guitar List of organ composers List of piano composers List of composers and their preferred lyricists List of string quartet composers Classification Chronological lists of classical composers List of Anglican church composers – See also Religious music
19 other states, Iowa does not prohibit municipal broadband from competing with the private cable TV monopoly. In 2020, Cedar Falls Utilities was recognized by PC Magazine as having the nation's fastest internet, by a factor of three. Media FM radio 88.1 KBBG 88.9 KWVI 89.5 KHKE 90.9 KUNI (FM) 92.3 KOEL-FM – Licensed to Oelwein with main studios in Waterloo 93.5 KCVM 94.5 KULT-LP 97.7 KCRR – Licensed to Grundy Center with main studios in Waterloo 98.5 KKHQ-FM 99.3 KWAY-FM – Located in Waverly 101.9 KNWS-FM 105.1 KCFI 105.7 KOKZ 107.9 KFMW AM radio 600 WMT – Located in Cedar Rapids 640 WOI – Located in Ames 950 KOEL – Located in Oelwein 1040 WHO – Located in Des Moines 1090 KNWS 1250 KCFI 1330 KPTY 1540 KXEL 1650 KCNZ Broadcast television 2 KGAN 2 (CBS) – Located in Cedar Rapids 7 KWWL 7 (NBC, The CW on DT2, Me-TV on DT3) 9 KCRG 9 (ABC) – Located in Cedar Rapids 12 KIIN 12 (PBS/IPTV) – Located in Iowa City 17 K17ET 17 / K31PO-D 44 (TBN) 20 KWKB 20 (Court TV Mystery) – Located in Iowa City 28 KFXA 28 (Fox) – Located in Cedar Rapids 32 KRIN 32 (PBS/IPTV) 40 KFXB-TV 40 (CTN) – Located in Dubuque Print The Courier, daily newspaper The Cedar Falls Times, weekly newspaper The Cedar Valley What Not, weekly advertiser Music The underground music scene in the Cedar Falls area from 1977 to present-day is well documented. The Wartburg College Art Gallery in Waverly, Iowa hosted a collaborative history of the bands, record labels, and music venues involved in the Cedar Falls music scene which ran from March 17 to April 14, 2007. This effort has been continued as a wiki-style website called The Secret History of the Cedar Valley. Notable people Actors Annabeth Gish – actress Gary Kroeger – actor, Saturday Night Live 1982–1985 Michael Mosley – actor, Scrubs Mark Steines – co-host, Entertainment Tonight, alumnus of University of Northern Iowa Joe Trotter — actor/comedian, Andersonville Athletes Trev Alberts – football player, 1993 Butkus Award (for best linebacker in NCAA Division I), All-American at Nebraska; a No. 1 draft choice of Indianapolis Colts, broadcaster, Director of Athletics at University of Nebraska-Omaha Don Denkinger – Major League Baseball umpire, made controversial call in 1985 World Series Travis Fulton – UFC fighter David Johnson – running back for NFL's Arizona Cardinals, UNI alumnus Bryce Paup – NFL player, UNI alumnus Chad Rinehart – NFL player, Boone High School, UNI Nick Ring – UFC fighter Edgar Seymour – Olympic bobsledder Terry Stotts – NBA player and coach Dedric Ward – NFL wide receiver, UNI alumnus Kurt Warner – NFL quarterback for St. Louis Rams, New York Giants and Arizona Cardinals, Super Bowl champion, UNI alumnus Ross Pierschbacher – NFL player Isaac Boettger – NFL player Military Robert Hibbs – Medal of Honor recipient Musicians Karen Holvik – classical soprano, currently on the faculty of the Eastman School of Music Nilo Hovey – acclaimed instrumental music pedagogue, author of numerous instrument method books House of Large Sizes – an alternative rock band Bonnie Koloc – folk singer, songwriter and musician, born in Waterloo, Iowa, attended UNI Spirit of the Stairway – Mathcore band Bill Stewart – jazz drummer and composer, attended UNI Tracie Spencer – singer Politicians Marv Diemer – Iowa state legislator Charles Grassley – U.S. Senator, attended UNI Gil Gutknecht – former Minnesota congressman Roger Jepsen – former U.S. Senator Scientists Gerald Guralnik – physicist, co-discoverer of the "Higgs Mechanism" Writers Bess Streeter Aldrich (1881–1954) – novelist R.V. 
Cassill – novelist and short story writer James Hearst – poet, farmer, professor of creative writing at UNI between 1941 and 1975 Helen Markley Miller (1896–1984) – writer of historical and biographical fiction for children about the Western United States Ruth Suckow Nuhn (1892–1960) – author of short stories and novels (including Country People, The Folks, New Hope) Ferner Nuhn (1903–1989) – literary critic, author of articles and essays, artist, Quaker activist Nancy Price – author of Sleeping with the Enemy Leland Sage – professor at UNI and historian Robert James Waller – author of The Bridges of Madison County, attended UNI Other Marc Andreessen – co-founder, Netscape Corporation Raja Chari – astronaut Adelia M. Hoyt (1865–1966) – Braille librarian, Library of Congress John H. Livingston – aviator and air racer Randy and Vicki Weaver – former John Deere employee and his wife, central figures in the Ruby Ridge incident Tim Dodd – popular STEM communicator and YouTube creator known as the "Everyday Astronaut" See also Black Hawk Hotel Cedar Falls Ice House Cedar Falls Utilities University of Northern Iowa Teaching and Research Greenhouse Further reading Brian C. Collins. Images of America: Cedar Falls, Iowa. Arcadia Publishing, Inc. 1998. External links City of Cedar Falls Cedar Falls Chamber of Commerce Cedar Falls Tourism and Visitors Bureau Cedar Falls Historical Society
Of the city's households, 26.9% had children under the age of 18 living with them, 48.9% were married couples living together, 7.5% had a female householder with no husband present, and 41.1% were non-families. 25.5% of all households were made up of individuals, and 9.4% had someone living alone who was 65 years of age or older. The average household size was 2.45 and the average family size was 2.91. The age distribution was 18.0% under the age of 18, 30.6% from 18 to 24, 20.5% from 25 to 44, 19.0% from 45 to 64, and 11.9% who were 65 years of age or older. The median age was 26 years. For every 100 females, there were 88.5 males. For every 100 females age 18 and over, there were 85.7 males. The median income for a household in the city was $70,226, and the median income for a family was $85,158. Males had a median income of $60,235 versus $50,312 for females. The per capita income for the city was $27,140. About 5.6% of families and 4.7% of the population were below the poverty line, including 8.5% of those under age 18, and 6.1% of those age 65 or over. Arts and culture In 1986, the City of Cedar Falls established the Cedar Falls Art and Culture Board, which oversees the operation of the City's Cultural Division and the James & Meryl Hearst Center for the Arts. Library The Cedar Falls Public Library is housed in the Adele Whitenach Davis building located at 524 Main Street. The 47,000-square-foot (4,400 m2) structure, designed by Struxture Architects, replaced the Carnegie-Dayton building in early 2004. As of the 2016 fiscal year, the library's holdings included approximately 8,000 audio materials, 12,000 video materials, and 104,000 books and periodicals, for a total of approximately 124,000 items. Patrons made 245,000 visits, taking advantage of circulation services as well as adult, teen, and youth programming. Circulation of library materials for fiscal year 2016 was 543,134. The library also provides more than 30 public computers offering internet access, office software suites, high-resolution color printing, wi-fi, and various games. The mission of the Cedar Falls Public Library is to promote literacy and provide open access to resources which facilitate lifelong learning. The library is a member of the Cedar Valley Library Consortium and shares an Integrated Library System (SirsiDynix Symphony) with the Waterloo Public Library. The library is managed by its director, Kelly Stern. Historical Society The Cedar Falls Historical Society has its offices in the Victorian Home and Carriage House Museum. It preserves Cedar Falls' history through its five museums, collection, archives, and public programs. Besides the Victorian House, the Society operates the Cedar Falls Ice House, Little Red Schoolhouse, and Behrens-Rapp Station. Education Cedar Falls hosts one of Iowa's three public universities, the University of Northern Iowa (UNI). Cedar Falls Community Schools, which covers most of the city limits, includes Cedar Falls High School, two junior high schools, and seven elementary schools. Waterloo Community School District covers a small section of Cedar Falls. There is a private Christian school, Valley Lutheran High School. Additionally, there is a private Catholic elementary school at St. Patrick Catholic Church, under the Roman Catholic Archdiocese of Dubuque. A significant renovation occurred beginning in May 2014. The Malcolm Price Lab School/Northern University High School was a state-funded K–12 school run by the university.
It closed in 2012 following cuts at UNI. Utilities and internet access The city owns its own electric, gas, water, and cable TV utilities. Because of this, Cedar Falls Utilities has been able to offer gigabit internet speeds to residents since January 14, 2015. Cedar Falls can do so because, unlike 19 other states, Iowa does not prohibit municipal broadband from competing with the private cable TV monopoly.
In the 1999 playoffs, the Indians were eliminated in the first round, losing the Division Series to the Red Sox despite taking a 2–0 lead in the series. In game three, Indians starter Dave Burba went down with an injury in the 4th inning. Four pitchers, including presumed game four starter Jaret Wright, surrendered nine runs in relief. Without a long reliever or emergency starter on the playoff roster, Hargrove started both Bartolo Colón and Charles Nagy in games four and five on only three days rest. The Indians lost game four 23–7 and game five 12–8. Four days later, Hargrove was dismissed as manager. In 2000, the Indians had a 44–42 start, but caught fire after the All-Star break and went 46–30 the rest of the way to finish 90–72. The team had one of the league's best offenses that year and a defense that yielded three Gold Gloves. However, they ended up five games behind the Chicago White Sox in the Central division and missed the wild card by one game to the Seattle Mariners. Mid-season trades brought Bob Wickman and Jake Westbrook to Cleveland. After the season, free-agent outfielder Manny Ramírez departed for the Boston Red Sox. In 2000, Larry Dolan bought the Indians for $320 million from Richard Jacobs, who, along with his late brother David, had paid $45 million for the club in 1986. The sale set a record at the time for the sale of a baseball franchise. 2001 saw a return to the postseason. After the departures of Ramírez and Sandy Alomar, Jr., the Indians signed Ellis Burks and former MVP Juan González, who helped the team win the Central division with a 91–71 record. One of the highlights came on August 5, when the Indians completed the biggest comeback in MLB history, rallying from a 14–2 deficit in the seventh inning to defeat the Seattle Mariners 15–14 in 11 innings. The Mariners, who won an MLB record-tying 116 games that season, had a strong bullpen, and Indians manager Charlie Manuel had already pulled many of his starters with the game seemingly out of reach. Seattle and Cleveland met in the first round of the postseason; however, the Mariners won the series 3–2. In the 2001–02 offseason, GM John Hart resigned and his assistant, Mark Shapiro, took the reins. 2002–2010: The Shapiro/Wedge years First "rebuilding of the team" Shapiro moved to rebuild by dealing aging veterans for younger talent. He traded Roberto Alomar to the New York Mets for a package that included outfielder Matt Lawton and prospects Alex Escobar and Billy Traber. When the team fell out of contention in mid-2002, Shapiro fired manager Charlie Manuel and traded pitching ace Bartolo Colón for prospects Brandon Phillips, Cliff Lee, and Grady Sizemore; acquired Travis Hafner from the Rangers for Ryan Drese and Einar Díaz; and picked up Coco Crisp from the St. Louis Cardinals for aging starter Chuck Finley. Jim Thome left after the season, going to the Phillies for a larger contract. Young Indians teams finished far out of contention in 2002 and 2003 under new manager Eric Wedge. They posted strong offensive numbers in 2004, but continued to struggle with a bullpen that blew more than 20 saves. A highlight of the season was a 22–0 victory over the New York Yankees on August 31, one of the worst defeats suffered by the Yankees in team history. In early 2005, the offense got off to a poor start. After a brief July slump, the Indians caught fire in August, and cut a 15.5 game deficit in the Central Division down to 1.5 games.
However, the Indians lost six of their last seven games, five of them by one run, and missed the playoffs by only two games. Shapiro was named Executive of the Year in 2005. For the 2006 season, the club made several roster changes, while retaining its nucleus of young players. The off-season was highlighted by the acquisition of top prospect Andy Marte from the Boston Red Sox. The Indians had a solid offensive season, led by career years from Travis Hafner and Grady Sizemore. Hafner, despite missing the last month of the season, tied the single-season grand slam record of six, set by Don Mattingly in 1987. Despite the solid offensive performance, the bullpen struggled with 23 blown saves (a Major League worst), and the Indians finished a disappointing fourth. Before the 2007 season, Shapiro signed veteran help for the bullpen and outfield. Veterans Aaron Fultz and Joe Borowski joined Rafael Betancourt in the Indians bullpen. The Indians improved significantly over the prior year and went into the All-Star break in second place. The team brought back Kenny Lofton in late July for his third stint in Cleveland. The Indians finished with a 96–66 record, tied with the Red Sox for the best in baseball, claiming their seventh Central Division title in 13 years and their first postseason trip since 2001. The Indians began their playoff run by defeating the Yankees in the ALDS three games to one. This series is best remembered for the swarm of bugs that overtook the field in the later innings of Game Two. They also jumped out to a three-games-to-one lead over the Red Sox in the ALCS. The season ended in disappointment when Boston swept the final three games to advance to the 2007 World Series. Despite the loss, Cleveland players took home a number of awards. Grady Sizemore, who had a .995 fielding percentage and only two errors in 405 chances, won the Gold Glove award, Cleveland's first since 2001. Indians pitcher CC Sabathia won the second Cy Young Award in team history with a 19–7 record, a 3.21 ERA and an MLB-leading 241 innings pitched. Eric Wedge was awarded the first Manager of the Year Award in team history. Shapiro was named Executive of the Year for the second time in 2007. Second "rebuilding of the team" The Indians struggled during the 2008 season. Injuries to sluggers Travis Hafner and Victor Martinez, as well as starting pitchers Jake Westbrook and Fausto Carmona, led to a poor start. The Indians, falling to last place for a short time in June and July, traded CC Sabathia to the Milwaukee Brewers for prospects Matt LaPorta, Rob Bryson, and Michael Brantley, and traded starting third baseman Casey Blake for catching prospect Carlos Santana. Pitcher Cliff Lee went 22–3 with an ERA of 2.54 and earned the AL Cy Young Award. Grady Sizemore had a career year, winning a Gold Glove Award and a Silver Slugger Award, and the Indians finished with a record of 81–81. Prospects for the 2009 season dimmed early when the Indians ended May with a record of 22–30. Shapiro made multiple trades: Cliff Lee and Ben Francisco to the Philadelphia Phillies for prospects Jason Knapp, Carlos Carrasco, Jason Donald and Lou Marson; Victor Martinez to the Boston Red Sox for prospects Bryan Price, Nick Hagadone and Justin Masterson; Ryan Garko to the Texas Rangers for Scott Barnes; and Kelly Shoppach to the Tampa Bay Rays for Mitch Talbot. The Indians finished the season tied for fourth in their division, with a record of 65–97.
The team announced on September 30, 2009, that Eric Wedge and all of the team's coaching staff were released at the end of the 2009 season. Manny Acta was hired as the team's 40th manager on October 25, 2009. On February 18, 2010, it was announced that Shapiro (following the end of the 2010 season) would be promoted to team President, with current President Paul Dolan becoming the new Chairman/CEO, and longtime Shapiro assistant Chris Antonetti filling the GM role. 2011–present: Antonetti/Chernoff/Francona era On January 18, 2011, longtime popular former first baseman and manager Mike Hargrove was brought in as a special adviser. The Indians started the 2011 season strong – going 30–15 in their first 45 games and seven games ahead of the Detroit Tigers for first place. Injuries led to a slump where the Indians fell out of first place. Many minor leaguers such as Jason Kipnis and Lonnie Chisenhall got opportunities to fill in for the injuries. The biggest news of the season came on July 30 when the Indians traded four prospects for Colorado Rockies star pitcher, Ubaldo Jiménez. The Indians sent their top two pitchers in the minors, Alex White and Drew Pomeranz along with Joe Gardner and Matt McBride. On August 25, the Indians signed the team leader in home runs, Jim Thome off of waivers. He made his first appearance in an Indians uniform since he left Cleveland after the 2002 season. To honor Thome, the Indians placed him at his original position, third base, for one pitch against the Minnesota Twins on September 25. It was his first appearance at third base since 1996, and his last for Cleveland. The Indians finished the season in 2nd place, 15 games behind the division champion Tigers. The Indians broke Progressive Field's Opening Day attendance record with 43,190 against the Toronto Blue Jays on April 5, 2012. The game went 16 innings, setting the MLB Opening Day record, and lasted 5 hours and 14 minutes. On September 27, 2012, with six games left in the Indians' 2012 season, Manny Acta was fired; Sandy Alomar, Jr. was named interim manager for the remainder of the season. On October 6, the Indians announced that Terry Francona, who managed the Boston Red Sox to five playoff appearances and two World Series between 2004 and 2011, would take over as manager for 2013. The Indians entered the 2013 season following an active offseason of dramatic roster turnover. Key acquisitions included free agent 1B/OF Nick Swisher and CF Michael Bourn. The team added prized right-handed pitching prospect Trevor Bauer, OF Drew Stubbs, and relief pitchers Bryan Shaw and Matt Albers in a three-way trade with the Arizona Diamondbacks and Cincinnati Reds that sent RF Shin-Soo Choo to the Reds, and Tony Sipp to the Arizona Diamondbacks Other notable additions included utility man Mike Avilés, catcher Yan Gomes, designated hitter Jason Giambi, and starting pitcher Scott Kazmir. The 2013 Indians increased their win total by 24 over 2012 (from 68 to 92), finishing in second place, one game behind Detroit in the Central division, but securing the number one seed in the American League Wild Card Standings. In their first postseason appearance since 2007, Cleveland lost the 2013 American League Wild Card Game 4–0 at home to Tampa Bay. Francona was recognized for the turnaround with the 2013 American League Manager of the Year Award. 
With an 85–77 record, the 2014 Indians had consecutive winning seasons for the first time since 1999–2001, but they were eliminated from playoff contention during the last week of the season and finished third in the AL Central. In 2015, after struggling through the first half of the season, the Indians finished 81–80 for their third consecutive winning season, which the team had not done since 1999–2001. For the second straight year, the Tribe finished third in the Central and was eliminated from the Wild Card race during the last week of the season. Following the departure of longtime team executive Mark Shapiro on October 6, the Indians promoted GM Chris Antonetti to President of Baseball Operations, assistant general manager Mike Chernoff to GM, and named Derek Falvey as assistant GM. Falvey was later hired by the Minnesota Twins in 2016, becoming their President of Baseball Operations. The Indians set what was then a franchise record for longest winning streak when they won their 14th consecutive game, a 2–1 win over the Toronto Blue Jays in 19 innings on July 1, 2016, at Rogers Centre. The team clinched the Central Division pennant on September 26, their eighth division title overall and first since 2007, as well as returning to the playoffs for the first time since 2013. They finished the regular season at 94–67, marking their fourth straight winning season, a feat not accomplished since the 1990s and early 2000s. The Indians began the 2016 postseason by sweeping the Boston Red Sox in the best-of-five American League Division Series, then defeated the Blue Jays in five games in the 2016 American League Championship Series to claim their sixth American League pennant and advance to the World Series against the Chicago Cubs. It marked the first appearance for the Indians in the World Series since 1997 and first for the Cubs since 1945. The Indians took a 3–1 series lead following a victory in Game 4 at Wrigley Field, but the Cubs rallied to take the final three games and won the series 4 games to 3. The Indians' 2016 success led to Francona winning his second AL Manager of the Year Award with the club. From August 24 through September 15 during the 2017 season, the Indians set a new American League record by winning 22 games in a row. On September 28, the Indians won their 100th game of the season, marking only the third time in history the team has reached that milestone. They finished the regular season with 102 wins, second-most in team history (behind 1954's 111 win team). The Indians earned the AL Central title for the second consecutive year, along with home-field advantage throughout the American League playoffs, but they lost the 2017 ALDS to the Yankees 3–2 after being up 2–0. In 2018, the Indians won their third consecutive AL Central crown with a 91–71 record, but were swept in the 2018 American League Division Series by the Houston Astros, who outscored Cleveland 21–6. In 2019, despite a two-game improvement, the Indians missed the playoffs as they trailed three games behind the Tampa Bay Rays for the second AL Wild Card berth. During the 2020 season (shortened to 60 games because of the COVID-19 pandemic), the Indians were 35–25, finishing second behind the Minnesota Twins in the AL Central, but qualified for the expanded playoffs. In the best-of-three AL Wild Card Series, the Indians lost to the Yankees in a two-game sweep to end their season. 
On December 18, 2020, the team confirmed that the Indians name would be dropped after the 2021 season, and then announced on July 23, 2021, that their new name will be the Cleveland Guardians . They played their last game under the Indians name on October 3, 2021, a 6-0 win over the Texas Rangers. They officially became the Guardians on November 19, 2021. Season-by-season results Rivalries Interleague The rivalry with fellow Ohio team the Cincinnati Reds is known as the Battle of Ohio or Buckeye Series and features the Ohio Cup trophy for the winner. Prior to 1997, the winner of the cup was determined by an annual pre-season baseball game, played each year at minor-league Cooper Stadium in the state capital of Columbus, and staged just days before the start of each new Major League Baseball season. A total of eight Ohio Cup games were played, with the Indians winning six of them. It ended with the start of interleague play in 1997. The winner of the game each year was awarded the Ohio Cup in postgame ceremonies. The Ohio Cup was a favorite among baseball fans in Columbus, with attendances regularly topping 15,000. Since 1997, the two teams have played each other as part of the regular season, with the exception of 2002. The Ohio Cup was reintroduced in 2008 and is presented to the team who wins the most games in the series that season. Initially, the teams played one three-game series per season, meeting in Cleveland in 1997 and Cincinnati the following year. The teams have played two series per season against each other since 1999, with the exception of 2002, one at each ballpark. A format change in 2013 made each series two games, except in years when the AL and NL Central divisions meet in interleague play, where it is usually extended to three games per series. Through the 2020 meetings, the Indians lead the series 66–51. An on-and-off rivalry with the Pittsburgh Pirates stems from the close proximity of the two cities, and features some carryover elements from the longstanding rivalry in the National Football League between the Cleveland Browns and Pittsburgh Steelers. Because the Indians' designated interleague rival is the Reds and the Pirates' designated rival is the Tigers, the teams have played periodically, with one three-game series per season from 1997 to 2001, 2003, 2006, 2009–12, 2015, and 2018. Since 2012, the Indians and Pirates play three or four games every three seasons when the AL Central plays the NL Central as part of the interleague play rotation. The Pirates lead the series 21–18. The teams will play six games in 2020 as MLB instituted an abbreviated schedule focusing on regional match-ups Divisional As the Guardians play 19 games every year with each of their AL Central competitors, several rivalries have developed. The Guardians have a geographic rivalry with the Detroit Tigers, highlighted in recent years by intense battles for the AL Central title. The matchup has some carryover elements from the Ohio State-Michigan rivalry, as well as the general historic rivalry between Michigan and Ohio dating back to the Toledo War. The Chicago White Sox are another rival, dating back to the 1959 season, when the Sox slipped past the Indians to win the AL pennant. The rivalry intensified when both clubs were moved to the new AL Central in 1994. During that season, the two teams challenged for the division title, with the Indians one game back of Chicago when the strike began in August. 
During a game in Chicago, the White Sox confiscated Albert Belle's corked bat, followed by an attempt by Indians pitcher Jason Grimsley to crawl through the Comiskey Park clubhouse ceiling to retrieve it. Belle later signed with the White Sox in 1997, adding additional intensity to the rivalry. Logos and uniforms The official team colors are navy blue, red, and white. Home The primary home uniform is white with navy blue piping around each sleeve, and the "winged G" logo on the right sleeve. Across the front of the jersey in script font is the word "Guardians" in red with a navy blue outline, with navy blue undershirts, belts, and socks. The alternate home jersey is red with a navy blue script "Guardians" trimmed in white on the front, navy blue piping on both sleeves, and the "winged G" logo on the right sleeve, with navy blue undershirts, belts, and socks. The home cap is navy blue with a red bill and features a red "diamond C" on the front. Road The primary road uniform is gray, with "Cleveland" in navy blue "diamond C" letters trimmed in red across the front of the jersey, navy blue piping around the sleeves, and navy blue undershirts, belts, and socks. The alternate road jersey is navy blue with "Cleveland" in red "diamond C" letters trimmed in white on the front of the jersey, and navy blue undershirts, belts, and socks. The road cap is similar to the home cap, with the only difference being that the bill is navy blue. Universal For all games, the team uses a navy blue batting helmet with a red "diamond C" on the front. Name and logo controversy The club name and its cartoon logo were long criticized for perpetuating Native American stereotypes. In 1997 and 1998, protesters were arrested after effigies were burned. Charges were dismissed in the 1997 case and were not filed in the 1998 case. Protesters arrested in the 1998 incident subsequently fought and lost a lawsuit alleging that their First Amendment rights had been violated. Bud Selig (then-Commissioner of Baseball) said in 2014 that he had never received a complaint about the logo. He said he was aware of protests against such mascots, but felt that individual teams such as the Indians and the Atlanta Braves, whose name has been criticized for similar reasons, should make their own decisions. An organized group of Native Americans, which had protested for many years, demonstrated against Chief Wahoo on Opening Day 2015, noting that 2015 marked the 100th anniversary of the team adopting the Indians name. Owner Paul Dolan, while stating his respect for the critics, said he mainly heard from fans who wanted to keep Chief Wahoo, and had no plans to change. On January 29, 2018, Major League Baseball announced that Chief Wahoo would be removed from the Indians' uniforms as of the 2019 season, stating that the logo was no longer appropriate for on-field use. The block "C" was promoted to the primary logo; at the time, there were no plans to change the team's name. In 2020, protests over the murder of George Floyd, a black man, by a Minneapolis police officer led Dolan to reconsider use of the Indians name. On July 3, 2020, on the heels of the Washington Redskins announcing that they would "undergo a thorough review" of that team's name, the Indians announced that they would "determine the best path forward" regarding the team's name and emphasized the need to "keep improving as an organization on issues of social justice". On December 13, 2020, it was reported that the Indians name would be dropped after the 2021 season. Although the team had hinted that it might move forward without a replacement name (in a similar manner to the Washington Football Team), it was announced via Twitter on July 23, 2021, that the team would be named the Guardians, after the Guardians of Traffic, eight large Art Deco statues on the Hope Memorial Bridge, located close to Progressive Field. The club, however, found itself amid a trademark dispute with a men's roller derby team called the Cleveland Guardians, which has competed in the Men's Roller Derby Association since 2016. In addition, two other entities attempted to preempt the team's use of the trademark by filing their own registrations with the U.S. Patent and Trademark Office. The roller derby team filed a federal lawsuit in the U.S. District Court for the Northern District of Ohio on October 27, 2021, seeking to block the baseball club's use of the name; the dispute was settled the following month, with both organizations continuing to use the Guardians name.
Plant species were introduced from Australia by the British authorities, notably rooikrans, to stabilise the sand of the Cape Flats and allow for a road connecting the peninsula with the rest of the African continent, and eucalyptus, to drain marshes. In 1859 the first railway line was built by the Cape Government Railways, and the railway system expanded rapidly in the 1870s. The discovery of diamonds in Griqualand West in 1867, and the Witwatersrand Gold Rush in 1886, prompted a flood of immigrants to South Africa. In 1895 the city's first public power station, the Graaff Electric Lighting Works, was opened. Conflicts between the Boer republics in the interior and the British colonial government resulted in the Second Boer War of 1899–1902, which Britain won. From 1891 to 1901, the city's population more than doubled from 67,000 to 171,000. As the 19th century came to an end, Cape Town's economic and political dominance of the Southern Africa region began to give way to the dominance of Johannesburg and Pretoria in the 20th century. South African period In 1910, Britain established the Union of South Africa, which unified the Cape Colony with the two defeated Boer Republics and the British colony of Natal. Cape Town became the legislative capital of the Union, and later of the Republic of South Africa. Prior to the mid-twentieth century, Cape Town was one of the most racially integrated cities in South Africa. In the 1948 national elections, the National Party won on a platform of apartheid (racial segregation) under the slogan of "swart gevaar" (Afrikaans for "black danger"). This led to the erosion and eventual abolition of the Cape's multiracial franchise, as well as to the Group Areas Act, which classified all areas according to race. Formerly multi-racial suburbs of Cape Town were either purged of residents deemed unlawful by apartheid legislation or demolished. The most infamous example of this in Cape Town was District Six. After it was declared a whites-only region in 1965, all housing there was demolished and over 60,000 residents were forcibly removed. Many of these residents were relocated to the Cape Flats. The earliest of the Cape Flats forced removals sent residents to Langa, particularly under the 1923 Natives (Urban Areas) Act. Langa is the oldest township in Cape Town and was the scene of much resistance against apartheid; its origins go back to the 19th century. Under apartheid, the Cape was considered a "Coloured labour preference area", to the exclusion of "Bantus", i.e. Africans. Notably, this policy was not advocated by any coloured political group; its implementation was a unilateral decision by the apartheid government. School students from Langa, Gugulethu and Nyanga in Cape Town reacted to the news of protests against Bantu Education in Soweto in June 1976 and organised gatherings and marches, which were met with resistance from the police. A number of school buildings were burnt down. Cape Town was home to many leaders of the anti-apartheid movement. On Robben Island, a former penitentiary island offshore from the city, many famous political prisoners were held for years. In one of the most famous moments marking the end of apartheid, Nelson Mandela made his first public speech since his imprisonment from the balcony of Cape Town City Hall hours after being released on 11 February 1990.
His speech heralded the beginning of a new era for the country, and the first democratic election was held four years later, on 27 April 1994. Nobel Square in the Victoria & Alfred Waterfront features statues of South Africa's four Nobel Peace Prize winners: Albert Luthuli, Desmond Tutu, F. W. de Klerk and Nelson Mandela. The city experienced a severe water shortage from 2015 to 2018. Since the beginning of the second decade of the twenty-first century, Cape Town and the Western Cape province have been home to a growing independence movement. In the 2021 municipal elections pro-independence parties garnered around 5% of the city's vote. Geography Cape Town is located at latitude 33.55° S (approximately the same as Sydney and Buenos Aires and equivalent to Casablanca and Los Angeles in the northern hemisphere) and longitude 18.25° E. Table Mountain, with its near-vertical cliffs and flat-topped summit over 1,000 m high, and with Devil's Peak and Lion's Head on either side, together form a dramatic mountainous backdrop enclosing the central area of Cape Town, the so-called City Bowl. A thin strip of cloud, known colloquially as the "tablecloth", sometimes forms on top of the mountain. To the immediate south, the Cape Peninsula is a scenic mountainous spine jutting southward into the Atlantic Ocean and terminating at Cape Point. There are over 70 peaks above 300 m within Cape Town's official city limits. Many of the city's suburbs lie on the large plain called the Cape Flats, which extends to the east and joins the peninsula to the mainland. The Cape Town region is characterised by an extensive coastline, rugged mountain ranges, coastal plains and inland valleys. Robben Island UNESCO declared Robben Island in the Western Cape a World Heritage Site in 1999. Robben Island is located in Table Bay, west of Bloubergstrand, and stands some 30 m above sea level. For nearly 400 years Robben Island was used as a prison where people were isolated, banished, and exiled. It was also used as a leper colony, a post office, a grazing ground, a mental hospital, and an outpost. Visitors can only access the island via the Robben Island Museum boat service, which runs three times daily until the beginning of the peak season (1 September). The ferries depart from the Nelson Mandela Gateway at the V&A Waterfront. Climate Cape Town has a warm Mediterranean climate (Köppen: Csb), with mild, moderately wet winters and dry, warm summers. Winter, which lasts from the beginning of June to the end of August, may see large cold fronts entering for limited periods from the Atlantic Ocean with significant precipitation and strong north-westerly winds. Winter months in the city are cool and bring most of the year's rain; in the Southern Suburbs, close to the mountains, rainfall is significantly higher than in the city itself. Summer, which lasts from December to March, is warm and dry. The region can get uncomfortably hot when the Berg Wind, meaning "mountain wind", blows from the Karoo interior. Spring and summer generally feature a strong wind from the south-east, known locally as the south-easter or the Cape Doctor, so called because it blows air pollution away. This wind is caused by a persistent high-pressure system over the South Atlantic to the west of Cape Town, known as the South Atlantic High, which shifts latitude seasonally, following the sun, and influencing the strength of the fronts and their northward reach.
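The Köppen Csb label mentioned above is shorthand for a set of monthly temperature and rainfall thresholds (temperate "C", dry-summer "s", warm-summer "b"). The sketch below applies one common formulation of those criteria to illustrative monthly values for a Cape Town-like station; both the exact thresholds (which vary slightly between Köppen variants) and the sample numbers are assumptions for illustration, not official climate data.

```python
# Illustrative check of the Koppen "Csb" label, using one common formulation
# of the criteria. The monthly values below are rough placeholders for a
# Cape Town-like station, not official climate figures.

# Jan..Dec mean temperature (deg C) and precipitation (mm), southern hemisphere.
temps = [21, 21, 20, 17, 15, 13, 12, 13, 14, 16, 18, 20]
rain = [15, 15, 20, 40, 70, 95, 100, 90, 50, 35, 20, 17]

summer = [11, 0, 1]  # Dec, Jan, Feb (southern-hemisphere summer)
winter = [5, 6, 7]   # Jun, Jul, Aug

# "C" (temperate): coldest month between 0 and 18 deg C, warmest month >= 10 deg C.
is_c = 0 < min(temps) < 18 and max(temps) >= 10

# "s" (dry summer): driest summer month < 40 mm and < 1/3 of wettest winter month.
driest_summer = min(rain[m] for m in summer)
wettest_winter = max(rain[m] for m in winter)
is_s = driest_summer < 40 and driest_summer < wettest_winter / 3

# "b" (warm summer): warmest month < 22 deg C and at least 4 months >= 10 deg C.
is_b = max(temps) < 22 and sum(t >= 10 for t in temps) >= 4

print("Csb" if (is_c and is_s and is_b) else "not Csb")  # -> Csb
```

With a dry summer and a warmest month below 22 °C, the sample values return the same Csb classification quoted in the text.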
Cape Town receives about 3,100 hours of sunshine per year. Water temperatures range greatly, from the cold waters of the Atlantic Seaboard to the much warmer waters of False Bay. Average annual ocean surface temperatures on the Atlantic Seaboard are similar to those of Californian waters, such as San Francisco or Big Sur, while those in False Bay are similar to northern Mediterranean temperatures, such as Nice or Monte Carlo. Unlike other parts of the country, the city does not have many thunderstorms; most of those that do occur happen from October to December and March to April. Flora and fauna Located in a Conservation International (CI) biodiversity hotspot as well as the unique Cape Floristic Region, the city of Cape Town has one of the highest levels of biodiversity of any equivalent area in the world. The city's protected areas form part of a World Heritage Site, and an estimated 2,200 species of plants are confined to Table Mountain – more than exist in the whole of the United Kingdom, which has 1,200 plant species and 67 endemic plant species. Many of these species, including a great many types of proteas, are endemic to the mountain and can be found nowhere else. The city is home to a total of 19 different vegetation types, of which several are endemic to the city and occur nowhere else in the world. It is also the only habitat of hundreds of endemic species, as well as hundreds of others which are severely restricted or threatened. This enormous species diversity is mainly because the city is uniquely located at the convergence point of several different soil types and micro-climates. Table Mountain has an unusually rich biodiversity. Its vegetation consists predominantly of several different types of the unique and rich Cape Fynbos. The main vegetation type is endangered Peninsula Sandstone Fynbos, but critically endangered Peninsula Granite Fynbos, Peninsula Shale Renosterveld and Afromontane forest occur in smaller portions on the mountain. Unfortunately, rapid population growth and urban sprawl have covered much of these ecosystems with development. Consequently, Cape Town now has over 300 threatened plant species and 13 which are now extinct. The Cape Peninsula, which lies entirely within the city of Cape Town, has the highest concentration of threatened species of any continental area of equivalent size in the world. Tiny remnant populations of critically endangered or near-extinct plants sometimes survive on roadsides, pavements and sports fields. The remaining ecosystems are partially protected through a system of over 30 nature reserves – including the massive Table Mountain National Park. Cape Town reached first place in the 2019 iNaturalist City Nature Challenge in two of the three categories: Most Observations and Most Species. This was the first entry by Capetonians in this annual competition to observe and record the local biodiversity over a four-day long weekend during what is considered the worst time of the year for local observations. A worldwide survey suggested that the extinction rate of endemic plants from the City of Cape Town is one of the highest in the world, at roughly three per year since 1900, partly a consequence of the very small and localised habitats and high endemicity. Suburbs Cape Town's urban geography is influenced by the contours of Table Mountain, the surrounding peaks of the Cape Peninsula, the Durbanville Hills, and the expansive lowland region known as the Cape Flats.
These geographic features in part divide the city into several commonly known groupings of suburbs (equivalent to districts outside South Africa), many of which developed historically together and share common attributes of language and culture. City Bowl The City Bowl is a natural amphitheatre-shaped area bordered by Table Bay and defined by the mountains of Signal Hill, Lion's Head, Table Mountain and Devil's Peak. The area includes the central business district of Cape Town, the harbour, the Company's Garden, and the residential suburbs of De Waterkant, Devil's Peak, District Six, Zonnebloem, Gardens, Bo-Kaap, Higgovale, Oranjezicht, Schotsche Kloof, Tamboerskloof, University Estate, Vredehoek, Walmer Estate and Woodstock. The Foreshore Freeway Bridge has stood in its unfinished state since construction officially ended in 1977. It was intended to be the Eastern Boulevard Highway in the city bowl, but is unfinished due to budget constraints. Atlantic Seaboard The Atlantic Seaboard lies west of the City Bowl and Table Mountain, and is characterised by its beaches, cliffs, promenade and hillside communities. The area includes, from north to south, the neighbourhoods of Green Point, Mouille Point, Three Anchor Bay, Sea Point, Fresnaye, Bantry Bay, Clifton, Camps Bay, Llandudno, and Hout Bay. The Atlantic Seaboard has some of the most expensive real estate in South Africa particularly on Nettleton and Clifton Roads in Clifton, Ocean View Drive and St Leon Avenue in Bantry Bay, Theresa Avenue in Bakoven and Fishermans Bend in Llandudno. Camps Bay is home to the highest concentration of multimillionaires in Cape Town and has the highest number of high-priced mansions in South Africa with more than 155 residential units exceeding R20 million (or $US1.8 million). Blaauwberg Blaauwberg is a coastal region of the Cape Town Metropolitan area and lies along the coast to the north of Cape Town, and includes the suburbs Bloubergstrand, Milnerton, Tableview, West Beach, Big Bay, Sunset Beach, Sunningdale, Parklands and Parklands North, as well as the exurbs of Atlantis, Mamre and Melkbosstrand. The Koeberg Nuclear Power Station is located within this area, and maximum housing density regulations are enforced in much of the nuclear plant area. Northern Suburbs The Northern Suburbs is a predominantly Afrikaans-speaking region of the Cape Town Metropolitan area and includes Bishop Lavis, Belhar, Bellville, Blue Downs, Bothasig, Burgundy Estate, Durbanville, Edgemead, Brackenfell, Elsie's River, Eerste River, Kraaifontein, Goodwood, Kensington, Maitland, Monte Vista, Panorama, Parow, Richwood, Kraaifontein and Kuils River. The Northern Suburbs are home to Tygerberg Hospital, the largest hospital in the Western Cape and second largest in South Africa. Southern Suburbs The Southern Suburbs lie along the eastern slopes of Table Mountain, southeast of the city centre. This area is predominantly English-speaking, and includes, from north to south, Observatory, Mowbray, Pinelands, Rosebank, Rondebosch, Rondebosch East, Newlands, Claremont, Lansdowne, Kenilworth, Bishopscourt, Constantia, Wynberg, Plumstead, Ottery, Bergvliet and Diep River. West of Wynberg lies Constantia which, in addition to being a wealthy neighbourhood, is a notable wine-growing region within the City of Cape Town, and attracts tourists for its well-known wine farms and Cape Dutch architecture. The Southern Suburbs is also well known as having some of the oldest, and most sought after residential areas within the City of Cape Town. 
South Peninsula The South Peninsula is a predominantly English-speaking area in the Cape Town Metropolitan area and is generally regarded as the area south of Muizenberg on False Bay and Noordhoek on the Atlantic Ocean, all the way to Cape Point. Until recently, this region was quite rural. Its population is growing quickly as new coastal developments proliferate and larger plots are subdivided to provide more compact housing. It includes Capri Village, Clovelly, Fish Hoek, Glencairn, Kalk Bay, Kommetjie, Masiphumelele, Muizenberg, Noordhoek, Ocean View, Scarborough, Simon's Town, St James, Sunnydale and Sun Valley. South Africa's largest naval base is located at Simon's Town harbour, and close by is Boulders Beach, the site of a large colony of African penguins. Cape Flats The Cape Flats is an expansive, low-lying, flat area situated to the southeast of the city centre. Because the region has a Mediterranean climate, the wettest months on the Cape Flats are from April to September, with 82% of its rainfall occurring in these months. The rainfall patterns on the Cape Flats vary with longitude, such that the eastern parts get a minimum of 214 mm per year while the central and western parts get 800 mm per year. A significant portion of this water ends up in the Cape Flats Aquifer, which lies beneath the central and southern parts of the Cape Flats. Most of the land of the Cape Flats is used for residential areas, the majority of which are formal, but with several informal settlements present. Light industrial areas are also found in the area. The Philippi Horticultural Area in the south-east is used for cultivation and contains many smallholdings. Helderberg The Helderberg is a small region in the Cape Town Metropolitan area located on the north-eastern corner of False Bay. It consists of Somerset West, Strand, Gordons Bay and a few other suburbs which were previously towns in the Helderberg district. The district takes its name from the imposing Helderberg Mountain, which reaches a height of . Government Cape Town is governed by a 231-member city council elected in a system of mixed-member proportional representation. The city is divided into 116 wards, each of which elects a councillor by first-past-the-post voting. The remaining 115 councillors are elected from party lists so that the total number of councillors for each party is proportional to the number of votes received by that party (a simplified illustrative sketch of such a top-up allocation follows below). In the 2021 municipal elections, the Democratic Alliance (DA) retained its majority, though diminished, taking 136 seats. The African National Congress lost substantially, receiving 43 seats. The Democratic Alliance candidate for the Cape Town mayoralty, Geordin Hill-Lewis, was elected mayor. Demographics According to the South African National Census of 2011, the population of the City of Cape Town metropolitan municipality (an area that includes suburbs and exurbs) is 3,740,026 people. This represents an annual growth rate of 2.6% compared to the results of the previous census in 2001, which found a population of 2,892,243 people. The sex ratio is 96, meaning that there are slightly more women than men. According to the 2016 City of Cape Town community survey, there were 4,004,793 people in the City of Cape Town metro. Out of this population, 42.6% identified as Black African, 39.9% identified as Coloured, 16.5% identified as White and 1.1% identified as Asian.
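The top-up allocation described under Government above can be illustrated with a minimal sketch. This is a hypothetical example: the party names, vote totals and ward results below are invented, and a simple largest-remainder rule stands in for the statutory seat-calculation formula, which differs in its details; the point is only to show how list seats are added so that each party's overall share of the 231 seats tracks its share of the vote.

# Simplified, hypothetical "top-up" allocation for a 231-seat mixed-member council:
# ward (first-past-the-post) winners are counted first, and party-list seats are then
# added so that each party's total share of seats roughly matches its share of the vote.

def allocate_list_seats(total_seats, votes, ward_seats):
    """Largest-remainder style allocation of the list (top-up) seats."""
    total_votes = sum(votes.values())
    # Proportional entitlement of the whole council for each party.
    quotas = {party: total_seats * v / total_votes for party, v in votes.items()}
    entitlement = {party: int(q) for party, q in quotas.items()}
    # Seats left over after integer rounding go to the largest fractional remainders.
    leftover = total_seats - sum(entitlement.values())
    for party in sorted(quotas, key=lambda p: quotas[p] - entitlement[p], reverse=True)[:leftover]:
        entitlement[party] += 1
    # List seats top up whatever the ward contests did not already provide.
    return {party: max(entitlement[party] - ward_seats.get(party, 0), 0) for party in votes}

if __name__ == "__main__":
    votes = {"Party A": 580_000, "Party B": 300_000, "Party C": 120_000}   # invented vote totals
    ward_seats = {"Party A": 70, "Party B": 36, "Party C": 10}             # invented results of the 116 ward contests
    print(allocate_list_seats(231, votes, ward_seats))                     # {'Party A': 64, 'Party B': 33, 'Party C': 18}

With these invented figures the 115 list seats come out as 64, 33 and 18, giving overall totals of 134, 69 and 28 councillors – roughly matching the parties' 58%, 30% and 12% vote shares.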
In 1944, 47% of the city-proper's population was White, 46% was Coloured, less than 6% was Black African and 1% was Asian, though these numbers did not represent wider Cape Town. Also, race definitions prior to the Population Registration Act of 1950 were extremely vague, and there would have been significant overlap between the populations identified as Coloured and as Black African. The repeal in 1986 of apartheid laws limiting the movement of people to Cape Town on the basis of race contributed to a period of rapid population growth. The population of Cape Town increased from just under 1.2 million in 1970 to 2.8 million by the year 2000, with the proportion of residents described as Black African increasing from 9.6% of the city's population to 32.3% over the same period. Of those residents who were asked about their first language, 35.7% spoke Afrikaans, 29.8% spoke Xhosa and 28.4% spoke English. 24.8% of the population is under the age of 15, while 5.5% is 65 or older. Of those residents aged 20 or older, 1.8% have no schooling, 8.1% have some schooling but did not finish primary school, 4.6% finished primary school but have no secondary schooling, 38.9% have some secondary schooling but did not finish Grade 12, 29.9% finished Grade 12 but have no higher education, and 16.7% have higher education. Overall, 46.6% have at least a Grade 12 education. Of those aged between 5 and 25, 67.8% are attending an educational institution. Amongst those aged between 15 and 65, the unemployment rate is 23.7%. The average annual household income is R161,762. The total number of households grew from 653,085 in 1996 to 1,068,572 in 2011, which represents an increase of 63.6%. The average number of household members declined from 3.92 in 1996 to 3.50 in 2011. Of those households, 78.4% are in formal structures (houses or flats), while 20.5% are in informal structures (shacks). 97.3% of City-supplied households have access to electricity, and 94.0% of households use electricity for lighting. 87.3% of households have piped water to the dwelling, while 12.0% have piped water through a communal tap. 94.9% of households have regular refuse collection service. 91.4% of households have a flush toilet or chemical toilet, while 4.5% still use a bucket toilet. 82.1% of households have a refrigerator, 87.3% have a television and 70.1% have a radio. Only 34.0% have a landline telephone, but 91.3% have a cellphone. 37.9% have a computer, and 49.3% have access to the Internet (either through a computer or a cellphone). Since the outbreak of the COVID-19 pandemic in South Africa, the South African media have reported that increasing numbers of wealthy and middle-class South Africans have started moving from inland areas of South Africa to coastal regions of the country, most notably Cape Town, in a phenomenon referred to as "semigration." Economy The city is South Africa's second main economic centre and Africa's third main economic hub city. It serves as the regional manufacturing centre of the Western Cape. In 2019 the city's gross metropolitan product (GMP) of R489 billion (US$33.04 billion) represented 71.1% of the Western Cape's total gross regional product (GRP) and 9.6% of South Africa's total GDP; the city also accounted for 11.1% of all employed people in the country and had a citywide GDP per capita of R111,364 (US$7,524). Since the global financial crisis of 2007, the city's economic growth rate has mirrored South Africa's decline in growth, whilst the city's population growth rate has remained steady at around 2% a year.
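As a quick check on the household-growth figure cited above (653,085 households in 1996 growing to 1,068,572 in 2011), the stated 63.6% increase follows directly from the two census counts:

\[
\frac{1{,}068{,}572 - 653{,}085}{653{,}085} \times 100\% \;=\; \frac{415{,}487}{653{,}085} \times 100\% \;\approx\; 63.6\%
\]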
Around 80% of the city's economic activity is generated by the tertiary sector of the economy, with the finance, retail, real estate, and food and beverage industries being the four largest contributors to the city's economic growth rate. With the highest number of successful information technology companies in Africa, Cape Town is an important centre for the industry on the continent. This includes an increasing number of companies in the space industry. Growing at an annual rate of 8.5% and with an estimated worth of R77 billion nationwide in 2010, the high-tech industry in Cape Town is becoming increasingly important to the city's economy. The city was recently named the most entrepreneurial city in South Africa, with the percentage of Capetonians pursuing business opportunities almost three times higher than the national average. Capetonians aged between 18 and 64 were 190% more likely than the national average to pursue new business, whilst in Johannesburg the same demographic group was only 60% more likely than the national average to pursue a new business. The city also has a number of entrepreneurship initiatives and universities that host technology startups such as Jumo, Yoco, Aerobotics, Luno and The Sun Exchange. Major companies Most companies headquartered in the city are insurance companies, retail groups, publishers, design houses, fashion designers, shipping companies, petrochemical companies, architects and advertising agencies. Some of the most notable companies headquartered in the city are food and fashion retailer Woolworths, supermarket chains Pick n Pay Stores and Shoprite, New Clicks Holdings Limited, fashion retailer Foschini Group, internet service provider MWEB, Mediclinic International, eTV, multinational mass media giant Naspers, and financial services giant Sanlam. Other notable companies include Belron, CapeRay (which develops, manufactures and supplies medical imaging equipment for the diagnosis of breast cancer), Ceres Fruit Juices, Coronation Fund Managers, Vida e Caffè and Capitec Bank. The city is a manufacturing base for several multinational companies including Johnson & Johnson, GlaxoSmithKline, Levi Strauss & Co., Adidas, Bokomo Foods, Yoco and Nampak. Amazon Web Services maintains one of its largest facilities in the world in Cape Town, with the city serving as the Africa headquarters for its parent company, Amazon. Inequality The city of Cape Town's Gini coefficient of 0.58 is lower than South Africa's Gini coefficient of 0.7, making it more equal than the country as a whole or any other major South African city, although still highly unequal by international standards. Between 2001 and 2010 the city's Gini coefficient, a measure of inequality, improved, dropping from 0.59 in 2007 to 0.57 in 2010, only to increase to 0.58 by 2017. Infrastructure Most goods are handled through the Port of Cape Town or Cape Town International Airport. Most major shipbuilding companies have offices in Cape Town. The province is also a centre of energy development for the country, with the existing Koeberg nuclear power station providing energy for the Western Cape's needs. Cape Town has four major commercial nodes, with the Cape Town Central Business District containing the majority of job opportunities and office space. Century City, the Bellville/Tygervalley strip and the Claremont commercial nodes are well established and contain many offices and corporate headquarters.
Tourism The Western Cape is an important tourist region in South Africa; the tourism industry accounts for 9.8% of the GDP of the province and employs 9.6% of the province's workforce. In 2010, over 1.5 million international tourists visited the area. Cape Town is a popular international tourist destination not only within South Africa but in Africa as a whole, owing to its mild climate, natural setting, and well-developed infrastructure. The city has several well-known natural features that attract tourists, most notably Table Mountain, which forms a large part of the Table Mountain National Park and is the back end of the City Bowl. Reaching the top of the mountain can be achieved either by hiking up or by taking the Table Mountain Cableway. Cape Point is the dramatic headland at the end of the Cape Peninsula. Many tourists also drive along Chapman's Peak Drive, a narrow road that links Noordhoek with Hout Bay, for the views of the Atlantic Ocean and nearby mountains. It is possible to either drive or hike up Signal Hill for closer views of the City Bowl and Table Mountain. Many tourists also visit Cape Town's beaches, which are popular with local residents. Due to the city's unique geography, it is possible to visit several different beaches in the same day, each with a different setting and atmosphere. Though the Cape's water ranges from cold to mild, the difference between the two sides of the city is dramatic. While the Atlantic Seaboard averages annual water temperatures barely above that of coastal California around , the False Bay coast is much warmer, averaging between annually. This is similar to water temperatures in much of the Northern Mediterranean (for example Nice). In summer, False Bay water averages slightly over , with a common high. Beaches located on the Atlantic Coast tend to have very cold water due to the Benguela Current, which originates in the Southern Ocean, whilst the water at False Bay beaches may be warmer by up to at the same moment due to the influence of the warm Agulhas Current. It is a common misconception that False Bay is part of the Indian Ocean and that Cape Point is both the meeting point of the Indian and Atlantic Oceans and the southernmost tip of Africa. The oceans in fact meet at the actual southernmost tip, Cape Agulhas, which lies approximately to the southeast. The misconception is fuelled by the relative warmth of the False Bay water compared to that of the Atlantic Seaboard, and by the many confusing instances of "Two Oceans" in names associated with Cape Town, such as the Two Oceans Marathon, the Two Oceans Aquarium, and places such as the Two Oceans wine farm. Both coasts are equally popular, although the beaches in affluent Clifton and elsewhere on the Atlantic Coast are better developed with restaurants and cafés, with a strip of restaurants and bars accessible to the beach at Camps Bay. The Atlantic seaboard, known as Cape Town's Riviera, is regarded as one of the most scenic routes in South Africa, running along the slopes of the Twelve Apostles to the boulders and white-sand beaches of Llandudno, with the route ending in Hout Bay, a diverse, bustling suburb with a harbour and a seal island. This fishing village is flanked by the Constantia valley and the picturesque Chapman's Peak Drive. Boulders Beach near Simon's Town is known for its colony of African penguins. Surfing is popular, and the city hosts the Red Bull Big Wave Africa surfing competition every year. The city has several notable cultural attractions.
The Victoria & Alfred Waterfront, built on top of part of the docks of the Port of Cape Town, is the city's most visited tourist attraction. It is also one of the city's most popular shopping venues, with several hundred shops as well as the Two Oceans Aquarium. The V&A also hosts the Nelson Mandela Gateway, through which ferries depart for Robben Island. It is possible to take a ferry from the V&A to Hout Bay, Simon's Town and the Cape fur seal colonies on Seal and Duiker Islands. Several companies offer tours of the Cape Flats, a mostly Coloured area, and Khayelitsha, a mostly Black township. The most popular areas for visitors to stay include Camps Bay, Sea Point, the V&A Waterfront, the City Bowl, Hout Bay, Constantia, Rondebosch, Newlands, and Somerset West. In November 2013, Cape Town was voted the best global city in The Daily Telegraph's annual Travel Awards. Cape Town offers tourists a range of air, land and sea-based adventure activities, including paragliding and skydiving.
The City of Cape Town works closely with Cape Town Tourism to promote the city both locally and internationally. The primary focus of Cape Town Tourism is to represent Cape Town as a tourist destination. Cape Town Tourism receives a portion of its funding from the City of Cape Town, while the remainder is made up of membership fees and own-generated funds. The Tristan da Cunha government owns and operates a lodging facility in Cape Town which charges discounted rates to Tristan da Cunha residents and non-resident natives. Culture Cape Town is noted for its architectural heritage, with the highest density of Cape Dutch-style buildings in the world. Cape Dutch style, which combines the architectural traditions of the Netherlands, Germany, France and Indonesia, is most visible in Constantia, the old government buildings in the Central Business District, and along Long Street. The Cape Town Minstrel Carnival, also known by its Afrikaans name, Kaapse Klopse, is a large minstrel festival held annually on 2 January, or "Tweede Nuwe Jaar" (Second New Year). Competing teams of minstrels parade in brightly coloured costumes, performing Cape Jazz, either carrying colourful umbrellas or playing an array of musical instruments. The Artscape Theatre Centre is the largest performing arts venue in Cape Town. The city also encloses the 36-hectare Kirstenbosch National Botanical Garden, which contains protected natural forest and fynbos along with a variety of animals and birds. There are over 7,000 species in cultivation at Kirstenbosch, including many rare and threatened species of the Cape Floristic Region. In 2004 the Cape Floristic Region, including Kirstenbosch, was declared a UNESCO World Heritage Site. Cape Town's transport system links it to the rest of South Africa; it serves as the gateway to other destinations within the province. The Cape Winelands, and in particular the towns of Stellenbosch, Paarl and Franschhoek, are popular day trips from the city for sightseeing and wine tasting. Whale watching is popular amongst tourists: southern right whales and humpback whales are seen off the coast during the breeding season (August to November), and Bryde's whales and killer whales can be seen at any time of the year. The nearby town of Hermanus is known for its Whale Festival, but whales can also be seen in False Bay. Heaviside's dolphins are endemic to the area and can be seen from the coast north of Cape Town; dusky dolphins live along the same coast and can occasionally be seen from the ferry to Robben Island. The only complete windmill in South Africa is Mostert's Mill in Mowbray. It was built in 1796 and restored in 1935 and again in 1995. Crime In recent years, the city has struggled with drugs, a surge in violent drug-related crime and, more recently, gang violence. In the Cape Flats alone, there were approximately 100,000 people in over 130 different gangs in 2018. While there are some alliances, this multitude of gangs and the divisions between them are also a cause of conflict between groups. At the same time, the economy has grown due to booms in the tourism and real estate industries. With a Gini coefficient of 0.58, Cape Town had the lowest inequality rate in South Africa in 2012. Since July 2019, widespread violent crime in poorer, gang-dominated areas of greater Cape Town has resulted in an ongoing military presence in these neighbourhoods.
Cape Town had the highest murder rate among large South African cities, at 77 murders per 100,000 people in the period April 2018 to March 2019, with 3,157 murders mostly occurring in poor townships created under the apartheid regime. Places of worship Most places of worship in the city are Christian churches and cathedrals: Zion Christian Church, Apostolic Faith Mission of South Africa, Assemblies of God, Baptist Union of Southern Africa (Baptist World Alliance), Methodist Church of Southern Africa (World Methodist Council), Anglican Church of Southern Africa (Anglican Communion), Presbyterian Church of Africa (World Communion of Reformed Churches), Roman Catholic Archdiocese of Cape Town (Catholic Church). Islam is the city's second-largest religion, with a long history in Cape Town that has resulted in a number of mosques and other Muslim religious sites spread across the city, such as the Auwal Mosque, South Africa's first mosque. Cape Town's significant Jewish population supports a number of synagogues, most notably the historic Gardens Shul. The Cape Town Progressive Jewish Congregation (CTPJC) also has three temples in the city. Other religious sites in the city include Hindu, Buddhist and Baháʼí temples. Media Several newspapers, magazines and printing facilities have their offices in the city. Independent News and Media publishes the major English-language papers in the city, the Cape Argus and the Cape Times. Naspers, the largest media conglomerate in South Africa, publishes Die Burger, the major Afrikaans-language paper. Cape Town has many local community newspapers. Some of the largest community newspapers in English are the Athlone News from Athlone, the Atlantic Sun, the Constantiaberg Bulletin from Constantiaberg, the City Vision from Bellville, the False Bay Echo from False Bay, the Helderberg Sun from Helderberg, the Plainsman from Mitchells Plain, the Sentinel News from Hout Bay, the Southern Mail from the Southern Peninsula, the Southern Suburbs Tatler from the Southern Suburbs, Table Talk from Table View and Tygertalk from Tygervalley/Durbanville. Afrikaans-language community newspapers include the Landbou-Burger and the Tygerburger. Vukani, based in the Cape Flats, is published in Xhosa. Cape Town is a centre for major broadcast media, with several radio stations that only broadcast within the city. 94.5 Kfm (94.5 MHz FM) and Good Hope FM (94–97 MHz FM) mostly play pop music. Heart FM (104.9 MHz FM), the former P4 Radio, plays jazz and R&B, while Fine Music Radio (101.3 FM) plays classical music and jazz, and Magic Music Radio
twice, to extend Chicago's losing streak to eight games. In a key play in the second game, on September 11, Cubs starter Dick Selma threw a surprise pickoff attempt to third baseman Ron Santo, who was nowhere near the bag or the ball. Selma's throwing error opened the gates to a Phillies rally. After that second Philly loss, the Cubs were 84–60 and the Mets had pulled ahead at 85–57. The Mets would not look back. The Cubs' eight-game losing streak finally ended the next day in St. Louis, but the Mets were in the midst of a ten-game winning streak, and the Cubs, wilting from team fatigue, generally deteriorated in all phases of the game.[1] The Mets (who had lost a record 120 games 7 years earlier) would go on to win the World Series. The Cubs, despite a respectable 92–70 record, would be remembered for having lost a remarkable 17½ games in the standings to the Mets in the last quarter of the season. 1977–1979: June Swoon Following the 1969 season, the club posted winning records for the next few seasons, but no playoff appearances. After the core players of those teams started to move on, the 1970s got worse for the team, and they became known as "the Loveable Losers." In 1977, the team found some life, but ultimately experienced one of its biggest collapses. The Cubs hit a high-water mark on June 28 at 47–22, boasting an game NL East lead, as they were led by Bobby Murcer (27 HR/89 RBI) and Rick Reuschel (20–10). However, the Philadelphia Phillies cut the lead to two by the All-Star break, as the Cubs sat 19 games over .500, but they swooned late in the season, going 20–40 after July 31. The Cubs finished in fourth place at 81–81, while Philadelphia surged, finishing with 101 wins. The following two seasons also saw the Cubs get off to a fast start, as the team rallied to over 10 games above .500 well into both seasons, only to again wear down and play poorly later on, ultimately settling back into mediocrity. This trait became known as the "June Swoon". Again, the Cubs' unusually high number of day games is often pointed to as one reason for the team's inconsistent late-season play. Longtime owner Philip K. Wrigley died in 1977. The Wrigley family sold the team to the Tribune Company in 1981, ending a 65-year family relationship with the Cubs. Tribune Company years (1981–2008) 1984: Heartbreak After over a dozen more subpar seasons, in 1981 the Cubs hired GM Dallas Green from Philadelphia to turn around the franchise. Green had managed the 1980 Phillies to the World Series title. One of his early moves as GM brought in a young Phillies minor-league third baseman named Ryne Sandberg, along with Larry Bowa, for Iván DeJesús. The 1983 Cubs had finished 71–91 under Lee Elia, who was fired by Green before the season ended. Green continued the culture of change and overhauled the Cubs' roster, front office and coaching staff prior to 1984. Jim Frey was hired to manage the 1984 Cubs, with Don Zimmer coaching third base and Billy Connors serving as pitching coach. Green shored up the 1984 roster with a series of transactions. In December 1983 Scott Sanderson was acquired from Montreal in a three-team deal with San Diego for Carmelo Martínez. Pinch hitter Richie Hebner (.333 BA in 1984) was signed as a free agent. In spring training, moves continued: LF Gary Matthews and CF Bobby Dernier came from Philadelphia on March 26 for Bill Campbell and a minor leaguer. Reliever Tim Stoddard (10–6, 3.82, 7 saves) was acquired the same day for a minor leaguer; veteran pitcher Ferguson Jenkins was released.
The team's commitment to contend was complete when Green made a midseason deal on June 15 to shore up the starting rotation due to injuries to Rick Reuschel (5–5) and Sanderson. The deal brought 1979 NL Rookie of the Year pitcher Rick Sutcliffe from the Cleveland Indians. Joe Carter (who was with the Triple-A Iowa Cubs at the time) and right fielder Mel Hall were sent to Cleveland for Sutcliffe and back-up catcher Ron Hassey (.333 with the Cubs in 1984). Sutcliffe (5–5 with the Indians) immediately joined Sanderson (8–5, 3.14), Eckersley (10–8, 3.03), Steve Trout (13–7, 3.41) and Dick Ruthven (6–10, 5.04) in the starting rotation. Sutcliffe proceeded to go 16–1 for the Cubs and capture the Cy Young Award. The Cubs' 1984 starting lineup was very strong. It consisted of LF Matthews (.291, 14–82, 101 runs, 17 SB), C Jody Davis (.256, 19–94), RF Keith Moreland (.279, 16–80), SS Larry Bowa (.223, 10 SB), 1B Leon "Bull" Durham (.279, 23–96, 16 SB), CF Dernier (.278, 45 SB), 3B Ron Cey (.240, 25–97), closer Lee Smith (9–7, 3.65, 33 saves) and 1984 NL MVP Ryne Sandberg (.314, 19–84, 114 runs, 19 triples, 32 SB). Reserve players Hebner, Thad Bosley, Henry Cotto, Hassey and Dave Owen produced exciting moments. The bullpen depth of Rich Bordi, George Frazier, Warren Brusstar and Dickie Noles did its job in getting the game to Smith or Stoddard. At the top of the order, Dernier and Sandberg were exciting, aptly coined "the Daily Double" by Harry Caray. With strong defense – Dernier (CF) and Sandberg (2B) each won the NL Gold Glove – solid pitching and clutch hitting, the Cubs were a well-balanced team. Following the "Daily Double", Matthews, Durham, Cey, Moreland and Davis gave the Cubs an order with no gaps to pitch around. Sutcliffe anchored a strong top-to-bottom rotation, and Smith was one of the top closers in the game. The shift in the Cubs' fortunes was characterized by the June 23 "NBC Saturday Game of the Week" contest against the St. Louis Cardinals; it has since been dubbed simply "The Sandberg Game." With the nation watching and Wrigley Field packed, Sandberg emerged as a superstar with not one, but two game-tying home runs against Cardinals closer Bruce Sutter. As Wrigley Field erupted at his shots in the 9th and 10th innings, Sandberg set the stage for a comeback win that cemented the Cubs as the team to beat in the East. No one would catch them. In early August the Cubs swept the Mets in a 4-game home series that further distanced them from the pack. An infamous Keith Moreland–Ed Lynch fight erupted after Lynch hit Moreland with a pitch, perhaps forgetting that Moreland was once a linebacker at the University of Texas. It was the second game of a doubleheader, and the Cubs had won the first game in part due to a three-run home run by Moreland. After the bench-clearing fight, the Cubs won the second game, and the sweep put the Cubs at 68–45. In 1984, each league had two divisions, East and West. The divisional winners met in a best-of-five series to advance to the World Series, in a "2–3" format: the first two games were played at the home of the team without home-field advantage, and the last three at the home of the team with home-field advantage. Thus the first two games were played at Wrigley Field and the next three at the home of their opponents, San Diego. A common and unfounded myth is that, since Wrigley Field did not have lights at that time, the National League decided to give home-field advantage to the winner of the NL West.
In fact, home-field advantage had rotated between the winners of the East and West since 1969, when the league expanded. In even-numbered years, the NL West had home-field advantage; in odd-numbered years, the NL East had home-field advantage. Since the NL East winner had had home-field advantage in 1983, the NL West winner was entitled to it in 1984. The confusion may stem from the fact that Major League Baseball did decide that, should the Cubs make it to the World Series, the American League winner would have home-field advantage. At the time, home-field advantage rotated between the two leagues: in odd-numbered years the AL had it, and in even-numbered years the NL had it. In the 1982 World Series the St. Louis Cardinals of the NL had home-field advantage; in the 1983 World Series the Baltimore Orioles of the AL had home-field advantage. In the NLCS, the Cubs easily won the first two games at Wrigley Field against the San Diego Padres. The Padres were the winners of the Western Division with Steve Garvey, Tony Gwynn, Eric Show, Goose Gossage and Alan Wiggins. With wins of 13–0 and 4–2, the Cubs needed to win only one of the next three games in San Diego to make it to the World Series. After being beaten 7–1 in Game 3, the Cubs lost Game 4 when Smith, with the game tied 5–5, allowed a game-winning home run to Garvey in the bottom of the ninth inning. In Game 5 the Cubs took a 3–0 lead into the 6th inning, and a 3–2 lead into the seventh, with Sutcliffe (who won the Cy Young Award that year) still on the mound. Then Leon Durham had a sharp grounder go under his glove. This critical error helped the Padres to a four-run 7th inning; they won the game 6–3 and kept Chicago out of the 1984 World Series against the Detroit Tigers. The loss ended a spectacular season for the Cubs, one that brought a slumbering franchise back to life and made the team relevant to a whole new generation of fans. The Padres would be defeated in five games by Sparky Anderson's Tigers in the World Series. The 1985 season brought high hopes. The club started out well, going 35–19 through mid-June, but injuries to Sutcliffe and others in the pitching staff contributed to a 13-game losing streak that pushed the Cubs out of contention. 1989: NL East division championship In 1989, the first full season with night baseball at Wrigley Field, Don Zimmer's Cubs were led by a core group of veterans in Ryne Sandberg, Rick Sutcliffe and Andre Dawson, who were boosted by a crop of youngsters such as Mark Grace, Shawon Dunston, Greg Maddux, Rookie of the Year Jerome Walton, and Rookie of the Year runner-up Dwight Smith. The Cubs won the NL East once again that season, winning 93 games. This time the Cubs met the San Francisco Giants in the NLCS. After splitting the first two games at home, the Cubs headed to the Bay Area, where, despite holding a lead at some point in each of the next three games, bullpen meltdowns and managerial blunders ultimately led to three straight losses. The Cubs couldn't overcome the efforts of Will Clark, whose home run off Maddux, just after a managerial visit to the mound, led Maddux to think Clark knew what pitch was coming. Afterward, Maddux would speak into his glove during any mound conversation, beginning what is a norm today. Mark Grace was 11-for-17 in the series with 8 RBI. Eventually, the Giants lost to the "Bash Brothers" and the Oakland A's in the famous "Earthquake Series."
1998: Wild card race and home run chase The 1998 season began on a somber note with the death of broadcaster Harry Caray. After the retirement of Sandberg and the trade of Dunston, the Cubs had holes to fill, and the signing of Henry Rodríguez to bat cleanup provided protection for Sammy Sosa in the lineup, as Rodríguez slugged 31 round-trippers in his first season in Chicago. Kevin Tapani led the club with a career-high 19 wins while Rod Beck anchored a strong bullpen and Mark Grace turned in one of his best seasons. The Cubs were swamped by media attention in 1998, and the team's two biggest headliners were Sosa and rookie flamethrower Kerry Wood. Wood's signature performance was one-hitting the Houston Astros, a game in which he tied the major league record of 20 strikeouts in nine innings. His torrid strikeout numbers earned Wood the nickname "Kid K," and ultimately earned him the 1998 NL Rookie of the Year award. Sosa caught fire in June, hitting a major league record 20 home runs in the month, and his home run race with Cardinals slugger Mark McGwire transformed the pair into international superstars in a matter of weeks. McGwire finished the season with a new major league record of 70 home runs, but Sosa's .308 average and 66 homers earned him the National League MVP Award. After a down-to-the-wire Wild Card chase with the San Francisco Giants, Chicago and San Francisco ended the regular season tied, and thus squared off in a one-game playoff at Wrigley Field. Third baseman Gary Gaetti hit the eventual game-winning homer in the playoff game. The win propelled the Cubs into the postseason for the first time since 1989 with a 90–73 regular-season record. Unfortunately, the bats went cold in October, as manager Jim Riggleman's club batted .183 and scored only four runs en route to being swept by Atlanta in the National League Division Series. The home run chase between Sosa, McGwire and Ken Griffey Jr. helped professional baseball bring in a new crop of fans as well as bringing back some fans who had been disillusioned by the 1994 strike. The Cubs retained many players who had experienced career years in 1998, but, after a fast start in 1999, they collapsed again (starting with being swept at the hands of the cross-town White Sox in mid-June) and finished at the bottom of the division for the next two seasons. 2001: Playoff push Despite losing fan favorite Grace to free agency and the lack of production from newcomer Todd Hundley, skipper Don Baylor's Cubs put together a good season in 2001. The season started with Mack Newton being brought in to preach "positive thinking." One of the biggest stories of the season transpired as the club made a midseason deal for Fred McGriff, which was drawn out for nearly a month as McGriff debated waiving his no-trade clause. The Cubs led the wild card race by 2.5 games in early September, but crumbled when Preston Wilson hit a three-run walk-off homer off closer Tom "Flash" Gordon, which halted the team's momentum. The team was unable to make another serious charge, and finished at 88–74, five games behind both Houston and St. Louis, who tied for first. Sosa had perhaps his finest season and Jon Lieber led the staff with a 20-win season. 2003: Five more outs The Cubs had high expectations in 2002, but the squad played poorly. On July 5, 2002, the Cubs promoted assistant general manager and player personnel director Jim Hendry to the general manager position. The club responded by hiring Dusty Baker and by making some major moves in 2003.
Most notably, they traded with the Pittsburgh Pirates for outfielder Kenny Lofton and third baseman Aramis Ramírez, and rode dominant pitching, led by Kerry Wood and Mark Prior, as the Cubs led the division down the stretch. Chicago halted St. Louis' run to the playoffs by taking four of five games from the Cardinals at Wrigley Field in early September, after which they won their first division title in 14 years. They then went on to defeat the Atlanta Braves in a dramatic five-game Division Series, the franchise's first postseason series win since beating the Detroit Tigers in the 1908 World Series. After losing an extra-inning game in Game 1, the Cubs rallied and took a three-games-to-one lead over the Wild Card Florida Marlins in the National League Championship Series. Florida shut the Cubs out in Game 5, but the Cubs returned home to Wrigley Field with young pitcher Mark Prior leading them in Game 6 as they took a 3–0 lead into the 8th inning. It was at this point that a now-infamous incident took place. Several spectators attempted to catch a foul ball off the bat of Luis Castillo. A Chicago Cubs fan by the name of Steve Bartman, of Northbrook, Illinois, reached for the ball and deflected it away from the glove of Moisés Alou, who was attempting to catch it for what would have been the second out of the eighth inning. Alou reacted angrily toward the stands and after the game stated that he would have caught the ball. Alou at one point recanted, saying he would not have been able to make the play, but later said this was just an attempt to make Bartman feel better and that he believed the whole incident should be forgotten. Interference was not called on the play, as the ball was ruled to be on the spectator side of the wall. Castillo was eventually walked by Prior. Two batters later, and to the chagrin of the packed stadium, Cubs shortstop Alex Gonzalez misplayed a potential inning-ending double-play ball, loading the bases. The error would lead to eight Florida runs and a Marlins victory. Despite sending Kerry Wood to the mound and holding a lead twice, the Cubs ultimately dropped Game 7 and failed to reach the World Series. The "Steve Bartman incident" was seen as the "first domino" in the turning point of the era, and the Cubs did not win a playoff game for the next eleven seasons. 2004–2006 In 2004, the Cubs were a consensus pick by most media outlets to win the World Series. The offseason acquisition of Derrek Lee (who was acquired in a trade with Florida for Hee-seop Choi) and the return of Greg Maddux only bolstered these expectations. Despite a mid-season deal for Nomar Garciaparra, misfortune struck the Cubs again. They led the Wild Card by 1.5 games over San Francisco and Houston on September 25. On that day, both teams lost, giving the Cubs a chance to increase the lead to 2.5 games with only eight games remaining in the season, but reliever LaTroy Hawkins blew a save to the Mets, and the Cubs lost the game in extra innings. The defeat seemingly deflated the team, as they proceeded to drop six of their last eight games while the Astros won the Wild Card. Despite the fact that the Cubs had won 89 games, this fallout was decidedly unlovable, as the Cubs traded superstar Sammy Sosa after he had left the season's final game early and then lied about it publicly.
Already a controversial figure in the clubhouse after his corked-bat incident, Sosa alienated much of his once-strong fan base as well as the few teammates still on good terms with him (many had grown tired of his loud salsa music in the locker room), and possibly tarnished his place in Cubs lore for years to come. The disappointing season also saw fans grow frustrated with the constant injuries to ace pitchers Mark Prior and Kerry Wood. Additionally, the 2004 season led to the departure of popular commentator Steve Stone, who had become increasingly critical of management during broadcasts and was verbally attacked by reliever Kent Mercker. Things were no better in 2005, despite a career year from first baseman Derrek Lee and the emergence of closer Ryan Dempster. The club struggled and suffered more key injuries, only managing to win 79 games after being picked by many to be a serious contender for the NL pennant. In 2006, the bottom fell out as the Cubs finished 66–96, last in the NL Central. 2007–2008: Back-to-back division titles After finishing last in the NL Central with 66 wins in 2006, the Cubs re-tooled and went from "worst to first" in 2007. In the offseason they signed Alfonso Soriano to an eight-year, $136 million contract and replaced manager Dusty Baker with fiery veteran manager Lou Piniella. After a rough start, which included a brawl between Michael Barrett and Carlos Zambrano, the Cubs overcame the Milwaukee Brewers, who had led the division for most of the season. The Cubs traded Barrett to the Padres, and later acquired catcher Jason Kendall from Oakland. Kendall was highly successful with his management of the pitching rotation and helped at the plate as well. By September, Geovany Soto became the full-time starter behind the plate, replacing the veteran Kendall. Winning streaks in June and July, coupled with a pair of dramatic, late-inning wins against the Reds, led to the Cubs ultimately clinching the NL Central with a record of 85–77. They met Arizona in the NLDS, but controversy followed as Piniella, in a move that has since come under scrutiny, pulled Carlos Zambrano after the sixth inning of a pitcher's duel with D-Backs ace Brandon Webb, to "...save Zambrano for (a potential) Game 4." The Cubs, however, were unable to come through, losing the first game and eventually stranding over 30 baserunners in a three-game Arizona sweep. The Tribune Company, in financial distress, was acquired by real estate mogul Sam Zell in December 2007. This acquisition included the Cubs. However, Zell did not take an active part in running the baseball franchise, instead concentrating on putting together a deal to sell it. The Cubs successfully defended their National League Central title in 2008, going to the postseason in consecutive years for the first time since 1906–08. The offseason was dominated by three months of unsuccessful trade talks with the Orioles involving 2B Brian Roberts, as well as the signing of Chunichi Dragons star Kosuke Fukudome. The team recorded its 10,000th win in April while establishing an early division lead. Reed Johnson and Jim Edmonds were added early on, and Rich Harden was acquired from the Oakland Athletics in early July. The Cubs headed into the All-Star break with the NL's best record and tied the league record with eight representatives at the All-Star Game, including catcher Geovany Soto, who was named Rookie of the Year.
The Cubs took control of the division by sweeping a four-game series in Milwaukee. On September 14, in a game moved to Miller Park due to Hurricane Ike, Zambrano pitched a no-hitter against the Astros, and six days later the team clinched by beating St. Louis at Wrigley. The club ended the season with a 97–64 record and met Los Angeles in the NLDS. The heavily favored Cubs took an early lead in Game 1, but James Loney's grand slam off Ryan Dempster changed the series' momentum. Chicago committed numerous critical errors and was outscored 20–6 in a Dodger sweep, which provided yet another sudden ending. The Ricketts era (2009–present) The Ricketts family acquired a majority interest in the Cubs in 2009, ending the Tribune years. Apparently handcuffed by the Tribune's bankruptcy and the pending sale of the club to the Ricketts siblings, led by chairman Thomas S. Ricketts, the Cubs began their quest for an NL Central three-peat with notice that less would be invested in contracts than in previous years. Chicago engaged St. Louis in a see-saw battle for first place into August 2009, but the Cardinals played at a torrid 20–6 pace that month, relegating their rivals to the Wild Card race, from which the Cubs were eliminated in the season's final week. The Cubs were plagued by injuries in 2009, and were only able to field their Opening Day starting lineup three times the entire season. Third baseman Aramis Ramírez injured his throwing shoulder in an early May game against the Milwaukee Brewers, sidelining him until early July and forcing journeyman players like Mike Fontenot and Aaron Miles into more prominent roles. Additionally, key players like Derrek Lee (who still managed to hit .306 with 35 home runs and 111 RBI that season), Alfonso Soriano, and Geovany Soto also nursed nagging injuries. The Cubs posted a winning record (83–78) for the third consecutive season, the first time the club had done so since 1972, and a new era of ownership under the Ricketts family was approved by MLB owners in early October. 2010–2014: The decline and rebuild Rookie Starlin Castro debuted in early May 2010 as the starting shortstop. The club played poorly in the early season, finding themselves 10 games under .500 at the end of June. In addition, long-time ace Carlos Zambrano was pulled from a game against the White Sox on June 25 after a tirade and shoving match with Derrek Lee, and was suspended indefinitely by Jim Hendry, who called the conduct "unacceptable." On August 22, Lou Piniella, who had already announced his retirement at the end of the season, announced that he would leave the Cubs prematurely to take care of his sick mother. Mike Quade took over as the interim manager for the final 37 games of the year. Despite being well out of playoff contention, the Cubs went 24–13 under Quade, the best record in baseball during that 37-game stretch, earning Quade the permanent manager position on October 19. On December 3, 2010, Cubs broadcaster and former third baseman Ron Santo died due to complications from bladder cancer and diabetes. He spent 13 seasons as a player with the Cubs, and at the time of his death was regarded as one of the greatest players not in the Hall of Fame. He was posthumously elected to the Major League Baseball Hall of Fame in 2012. Despite trading for pitcher Matt Garza and signing free-agent slugger Carlos Peña, the Cubs finished the 2011 season 20 games under .500 with a record of 71–91.
Weeks after the season came to an end, the club was rejuvenated in the form of a new philosophy, as new owner Tom Ricketts signed Theo Epstein away from the Boston Red Sox, naming him club President and giving him a five-year contract worth over $18 million, and subsequently dismissed manager Mike Quade. Epstein, a proponent of sabermetrics and one of the architects of the 2004 and 2007 World Series championships in Boston, brought along Jed Hoyer from the Padres to fill the role of GM and hired Dale Sveum as manager. Although the team had a dismal 2012 season, losing 101 games (the worst record since 1966), it was largely expected. The youth movement ushered in by Epstein and Hoyer began as longtime fan favorite Kerry Wood retired in May, followed by Ryan Dempster and Geovany Soto being traded to Texas at the All-Star break for a group of minor league prospects headlined by Christian Villanueva but also including a then little-regarded Kyle Hendricks. The development of Castro, Anthony Rizzo, Darwin Barney, Brett Jackson and pitcher Jeff Samardzija, as well as the replenishing of the minor-league system with prospects such as Javier Baez, Albert Almora, and Jorge Soler, became the primary focus of the season, a philosophy which the new management said would carry over at least through the 2013 season. The 2013 season played out much the same as the year before. Shortly before the trade deadline, the Cubs traded Matt Garza to the Texas Rangers for Mike Olt, Carl Edwards Jr., Neil Ramirez, and Justin Grimm. Three days later, the Cubs sent Alfonso Soriano to the New York Yankees for minor leaguer Corey Black. The midseason fire sale led to another last-place finish in the NL Central, with the club finishing at 66–96. Although there was a five-game improvement in the record from the year before, Anthony Rizzo and Starlin Castro seemed to take steps backward in their development. On September 30, 2013, Theo Epstein made the decision to fire manager Dale Sveum after just two seasons at the helm of the Cubs. The regression of several young players was thought to be the main reason, as the front office had said Sveum would not be judged based on wins and losses. In two seasons as skipper, Sveum finished with a record of 127–197. The 2013 season was also notable as the Cubs drafted future Rookie of the Year and MVP Kris Bryant with the second overall selection. On November 7, 2013, the Cubs hired San Diego Padres bench coach Rick Renteria to be the 53rd manager in team history. The Cubs finished the 2014 season in last place with a 73–89 record in Renteria's first and only season as manager. Despite the poor record, the Cubs improved in many areas during 2014, including rebound years by Anthony Rizzo and Starlin Castro, ending the season with a winning record at home for the first time since 2009, and compiling a 33–34 record after the All-Star Break. However, following the unexpected availability of Joe Maddon, who exercised a clause in his contract with Tampa Bay that was triggered on October 14 by the departure of general manager Andrew Friedman to the Los Angeles Dodgers, the Cubs relieved Renteria of his managerial duties on October 31, 2014. During the season, the Cubs drafted Kyle Schwarber with the fourth overall selection. Hall of Famer Ernie Banks died of a heart attack on January 23, 2015, shortly before his 84th birthday. The 2015 uniform carried a commemorative #14 patch on both its home and away jerseys in his honor.
2015–2019: Championship run On November 2, 2014, the Cubs announced that Joe Maddon had signed a five-year contract to be the 54th manager in team history. On December 10, 2014, Maddon announced that the team had signed free agent Jon Lester to a six-year, $155 million contract. Many other trades and acquisitions occurred during the offseason. The Cubs' opening day lineup contained five new players, including center fielder Dexter Fowler. Rookies Kris Bryant and Addison Russell were in the starting lineup by mid-April, and rookie Kyle Schwarber was added in mid-June. On August 30, Jake Arrieta threw a no-hitter against the Los Angeles Dodgers. The Cubs finished the 2015 season in third place in the NL Central with a record of 97–65, the third-best record in the majors, and earned a wild card berth. On October 7, in the 2015 National League Wild Card Game, Arrieta pitched a complete-game shutout and the Cubs defeated the Pittsburgh Pirates 4–0. The Cubs then defeated the Cardinals in the NLDS three games to one, qualifying for a return to the NLCS for the first time in 12 years, where they faced the New York Mets; it was the first time in franchise history that the Cubs had clinched a postseason series at Wrigley Field. The Mets, however, went on to sweep the Cubs in the NLCS.
The following year, the Cubs won the 2016 National League Championship Series and 2016 World Series, which ended a 71-year National League pennant drought and a 108-year World Series championship drought, both of which are record droughts in Major League Baseball. The 108-year drought was also the longest such occurrence in all major North American sports. Since the start of divisional play in 1969, the Cubs have appeared in the postseason 11 times through the 2020 season. The Cubs are known as "the North Siders", a reference to the location of Wrigley Field within the city of Chicago, and in contrast to the White Sox, whose home field (Guaranteed Rate Field) is located on the South Side. Through 2021, the franchise's all-time record is 11,087–10,521 (a .513 winning percentage). History Early club history 1876–1902: A National League charter member The Cubs began playing in 1870 as the Chicago White Stockings, joining the National League (NL) in 1876 as a charter member. Owner William Hulbert signed multiple star players, such as pitcher Albert Spalding and infielders Ross Barnes, Deacon White, and Adrian "Cap" Anson, to join the team prior to the NL's first season. The White Stockings played their home games at West Side Grounds and quickly established themselves as one of the new league's top teams. Spalding won forty-seven games and Barnes led the league in hitting at .429 as Chicago won the first-ever National League pennant, which at the time was the game's top prize. After back-to-back pennants in 1880 and 1881, Hulbert died, and Spalding, who had retired to start Spalding sporting goods, assumed ownership of the club. The White Stockings, with Anson acting as player-manager, captured their third consecutive pennant in 1882, and Anson established himself as the game's first true superstar. In 1885 and 1886, after winning NL pennants, the White Stockings met the champions of the short-lived American Association in that era's version of a World Series. Both seasons resulted in matchups with the St. Louis Brown Stockings, with the clubs tying in 1885 and with St. Louis winning in 1886. This was the genesis of what would eventually become one of the greatest rivalries in sports. In all, the Anson-led Chicago Base Ball Club won six National League pennants between 1876 and 1886. As a result, Chicago's club nickname transitioned, and by 1890 they had become known as the Chicago Colts, or sometimes "Anson's Colts", referring to Cap's influence within the club. Anson was the first player in history credited with collecting 3,000 career hits. After a disappointing record of 59–73 and a ninth-place finish in 1897, Anson was released by the club as both a player and manager. Due to Anson's absence from the club after 22 years, local newspaper reporters started to refer to the Colts as the "Orphans". After the 1900 season, the American League formed as a rival professional league, and incidentally the club's old White Stockings nickname (eventually shortened to White Sox) would be adopted by a new American League neighbor to the south. 1902–1920: A Cubs dynasty In 1902, Spalding, who by this time had revamped the roster to boast what would soon be one of the best teams of the early century, sold the club to Jim Hart. The franchise was nicknamed the Cubs by the Chicago Daily News in 1902, although it did not officially become the Chicago Cubs until the 1907 season. During this period, which has become known as baseball's dead-ball era, Cub infielders Joe Tinker, Johnny Evers, and Frank Chance were made famous as a double-play combination by Franklin P.
Adams' poem "Baseball's Sad Lexicon." The poem first appeared in the July 18, 1910 edition of the New York Evening Mail. Mordecai "Three-Finger" Brown, Jack Taylor, Ed Reulbach, Jack Pfiester, and Orval Overall were several key pitchers for the Cubs during this time period. With Chance acting as player-manager from 1905 to 1912, the Cubs won four pennants and two World Series titles over a five-year span. Although they fell to the "Hitless Wonders" White Sox in the 1906 World Series, the Cubs recorded a record 116 victories and the best winning percentage (.763) in Major League history. With mostly the same roster, Chicago won back-to-back World Series championships in 1907 and 1908, becoming the first Major League club to play three times in the Fall Classic and the first to win it twice. However, the Cubs would not win another World Series until 2016; this remains the longest championship drought in North American professional sports. The next season, veteran catcher Johnny Kling left the team to become a professional pocket billiards player. Some historians think Kling's absence was significant enough to prevent the Cubs from also winning a third straight title in 1909, as they finished 6 games out of first place. When Kling returned the next year, the Cubs won the pennant again, but lost to the Philadelphia Athletics in the 1910 World Series. In 1914, advertising executive Albert Lasker obtained a large block of the club's shares and before the 1916 season assumed majority ownership of the franchise. Lasker brought in a wealthy partner, Charles Weeghman, the proprietor of a popular chain of lunch counters who had previously owned the Chicago Whales of the short-lived Federal League. As principal owners, the pair moved the club from the West Side Grounds to the much newer Weeghman Park, which had been constructed for the Whales only two years earlier, where they remain to this day. The Cubs responded by winning a pennant in the war-shortened season of 1918, where they played a part in another team's curse: the Boston Red Sox defeated Grover Cleveland Alexander's Cubs four games to two in the 1918 World Series, Boston's last Series championship until 2004. Beginning in 1916, Bill Wrigley of chewing-gum fame acquired an increasing quantity of stock in the Cubs. By 1921 he was the majority owner, maintaining that status into the 1930s. Meanwhile, the year 1919 saw the start of the tenure of Bill Veeck, Sr. as team president. Veeck would hold that post throughout the 1920s and into the 30s. The management team of Wrigley and Veeck came to be known as the "double-Bills." The Wrigley years (1921–1945) 1929–1938: Every three years Near the end of the first decade of the double-Bills' guidance, the Cubs won the NL Pennant in 1929 and then achieved the unusual feat of winning a pennant every three years, following up the 1929 flag with league titles in 1932, 1935, and 1938. Unfortunately, their success did not extend to the Fall Classic, as they fell to their AL rivals each time. The '32 series against the Yankees featured Babe Ruth's "called shot" at Wrigley Field in game three. There were some historic moments for the Cubs as well; In 1930, Hack Wilson, one of the top home run hitters in the game, had one of the most impressive seasons in MLB history, hitting 56 home runs and establishing the current runs-batted-in record of 191. 
That 1930 club, which boasted six eventual Hall of Fame members (Wilson, Gabby Hartnett, Rogers Hornsby, George "High Pockets" Kelly, Kiki Cuyler and manager Joe McCarthy), established the current team batting average record of .309. In 1935 the Cubs claimed the pennant in thrilling fashion, winning a record 21 games in a row in September. The '38 club saw Dizzy Dean lead the team's pitching staff and provided a historic moment when they won a crucial late-season game at Wrigley Field over the Pittsburgh Pirates with a walk-off home run by Gabby Hartnett, which became known in baseball lore as "The Homer in the Gloamin'". After the "Double-Bills" (Wrigley and Veeck) died in 1932 and 1933 respectively, P.K. Wrigley, son of Bill Wrigley, took over as majority owner. He was unable to extend his father's baseball success beyond 1938, and the Cubs slipped into years of mediocrity, although the Wrigley family would retain control of the team until 1981. 1945: "The Curse of the Billy Goat" The Cubs enjoyed one more pennant at the close of World War II, finishing 98–56. Due to the wartime travel restrictions, the first three games of the 1945 World Series were played in Detroit, where the Cubs won two games, including a one-hitter by Claude Passeau, and the final four were played at Wrigley. The Cubs lost the series, and did not return until the 2016 World Series. After losing the 1945 World Series to the Detroit Tigers, the Cubs finished with a respectable 82–71 record in the following year, but this was only good enough for third place. In the following two decades, the Cubs played mostly forgettable baseball, finishing among the worst teams in the National League on an almost annual basis. From 1947 to 1966, they only notched one winning season. Longtime infielder-manager Phil Cavarretta, who had been a key player during the 1945 season, was fired during spring training in 1954 after admitting the team was unlikely to finish above fifth place. Although shortstop Ernie Banks would become one of the star players in the league during the next decade, finding help for him proved a difficult task, as quality players such as Hank Sauer were few and far between. This, combined with poor ownership decisions such as the College of Coaches, and the ill-fated trade of future Hall of Fame member Lou Brock to the Cardinals for pitcher Ernie Broglio (who won only seven games over the next three seasons), hampered on-field performance. 1969: Fall of '69 The late 1960s brought hope of a renaissance, with third baseman Ron Santo, pitcher Ferguson Jenkins, and outfielder Billy Williams joining Banks. After losing a dismal 103 games in 1966, the Cubs brought home consecutive winning records in '67 and '68, marking the first time a Cub team had accomplished that feat in over two decades. In 1969 the Cubs, managed by Leo Durocher, built a substantial lead in the newly created National League Eastern Division by mid-August. Ken Holtzman pitched a no-hitter on August 19, and the division lead grew to 8 games over the St. Louis Cardinals and 9 games over the New York Mets. After the game of September 2, the Cubs' record was 84–52 with the Mets in second place at 77–55. But then a losing streak began just as a Mets winning streak was beginning. The Cubs lost the final game of a series at Cincinnati, then came home to play the resurgent Pittsburgh Pirates (who would finish in third place). After losing the first two games by scores of 9–2 and 13–4, the Cubs led going into the ninth inning.
A win would be a positive springboard since the Cubs were to play a crucial series with the Mets the next day. But Willie Stargell drilled a two-out, two-strike pitch from the Cubs' ace reliever, Phil Regan, onto Sheffield Avenue to tie the score in the top of the ninth. The Cubs would lose 7–5 in extra innings.[6] Burdened by a four-game losing streak, the Cubs traveled to Shea Stadium for a short two-game set. The Mets won both games, and the Cubs left New York with a record of 84–58, just a half game in front. More of the same followed in Philadelphia, as a 99-loss Phillies team nonetheless defeated the Cubs twice, extending Chicago's losing streak to eight games. In a key play in the second game, on September 11, Cubs starter Dick Selma threw a surprise pickoff attempt to third baseman Ron Santo, who was nowhere near the bag or the ball. Selma's throwing error opened the gates to a Phillies rally. After that second Philly loss, the Cubs were 84–60 and the Mets had pulled ahead at 85–57. The Mets would not look back. The Cubs' eight-game losing streak finally ended the next day in St. Louis, but the Mets were in the midst of a ten-game winning streak, and the Cubs, wilting from team fatigue, generally deteriorated in all phases of the game.[1] The Mets (who had lost a record 120 games seven years earlier) would go on to win the World Series. The Cubs, despite a respectable 92–70 record, would be remembered for having lost a remarkable 17½ games in the standings to the Mets in the last quarter of the season. 1977–1979: June Swoon Following the 1969 season, the club posted winning records for the next few seasons, but no playoff action. After the core players of those teams started to move on, the 70s got worse for the team, and they became known as "the Loveable Losers." In 1977, the team found some life, but ultimately experienced one of its biggest collapses. The Cubs hit a high-water mark on June 28 at 47–22, boasting a sizeable NL East lead, as they were led by Bobby Murcer (27 HR, 89 RBI) and Rick Reuschel (20–10). However, the Philadelphia Phillies cut the lead to two by the All-Star break, as the Cubs sat 19 games over .500, but they swooned late in the season, going 20–40 after July 31. The Cubs finished in fourth place at 81–81, while Philadelphia surged, finishing with 101 wins. The following two seasons also saw the Cubs get off to a fast start, as the team rallied to over 10 games above .500 well into both seasons, only to again wear down and play poorly later on, ultimately settling back into mediocrity. This trait became known as the "June Swoon". Again, the Cubs' unusually high number of day games is often pointed to as one reason for the team's inconsistent late-season play. Wrigley died in 1977. The Wrigley family sold the team to the Tribune Company in 1981, ending a 65-year family relationship with the Cubs. Tribune Company years (1981–2008) 1984: Heartbreak After over a dozen more subpar seasons, in 1981 the Cubs hired GM Dallas Green from Philadelphia to turn around the franchise. Green had managed the 1980 Phillies to the World Series title. One of his early GM moves brought in a young Phillies minor-league 3rd baseman named Ryne Sandberg, along with Larry Bowa, for Iván DeJesús. The 1983 Cubs had finished 71–91 under Lee Elia, who was fired by Green before the season ended. Green continued the culture of change and overhauled the Cubs' roster, front office and coaching staff prior to 1984.
Jim Frey was hired to manage the 1984 Cubs, with Don Zimmer coaching 3rd base and Billy Connors serving as pitching coach. Green shored up the 1984 roster with a series of transactions. In December 1983 Scott Sanderson was acquired from Montreal in a three-team deal with San Diego for Carmelo Martínez. Pinch hitter Richie Hebner (.333 BA in 1984) was signed as a free agent. In spring training, moves continued: LF Gary Matthews and CF Bobby Dernier came from Philadelphia on March 26 for Bill Campbell and a minor leaguer. Reliever Tim Stoddard (10–6, 3.82 ERA, 7 saves) was acquired the same day for a minor leaguer; veteran pitcher Ferguson Jenkins was released. The team's commitment to contend was complete when Green made a midseason deal on June 15 to shore up the starting rotation due to injuries to Rick Reuschel (5–5) and Sanderson. The deal brought 1979 NL Rookie of the Year pitcher Rick Sutcliffe from the Cleveland Indians. Joe Carter (who was with the Triple-A Iowa Cubs at the time) and right fielder Mel Hall were sent to Cleveland for Sutcliffe and back-up catcher Ron Hassey (.333 with the Cubs in 1984). Sutcliffe (5–5 with the Indians) immediately joined Sanderson (8–5, 3.14), Dennis Eckersley (10–8, 3.03), Steve Trout (13–7, 3.41) and Dick Ruthven (6–10, 5.04) in the starting rotation. Sutcliffe proceeded to go 16–1 for the Cubs and capture the Cy Young Award. The Cubs' 1984 starting lineup was very strong. It consisted of LF Matthews (.291, 14 HR, 82 RBI, 101 runs, 17 SB), C Jody Davis (.256, 19 HR, 94 RBI), RF Keith Moreland (.279, 16 HR, 80 RBI), SS Larry Bowa (.223, 10 SB), 1B Leon "Bull" Durham (.279, 23 HR, 96 RBI, 16 SB), CF Dernier (.278, 45 SB), 3B Ron Cey (.240, 25 HR, 97 RBI), closer Lee Smith (9–7, 3.65, 33 saves) and 1984 NL MVP Ryne Sandberg (.314, 19 HR, 84 RBI, 114 runs, 19 triples, 32 SB). Reserve players Hebner, Thad Bosley, Henry Cotto, Hassey and Dave Owen produced exciting moments. The bullpen depth of Rich Bordi, George Frazier, Warren Brusstar and Dickie Noles did its job in getting the game to Smith or Stoddard. At the top of the order, Dernier and Sandberg were exciting, aptly coined "the Daily Double" by Harry Caray. With strong defense (Dernier in center field and Sandberg at second base both won NL Gold Gloves), solid pitching and clutch hitting, the Cubs were a well-balanced team. Following the "Daily Double", Matthews, Durham, Cey, Moreland and Davis gave the Cubs an order with no gaps to pitch around. Sutcliffe anchored a strong top-to-bottom rotation, and Smith was one of the top closers in the game. The shift in the Cubs' fortunes was epitomized by the June 23 "NBC Saturday Game of the Week" contest against the St. Louis Cardinals; it has since been dubbed simply "The Sandberg Game." With the nation watching and Wrigley Field packed, Sandberg emerged as a superstar with not one, but two game-tying home runs against Cardinals closer Bruce Sutter. With his shots in the 9th and 10th innings, Sandberg sent Wrigley Field into an uproar and set the stage for a comeback win that cemented the Cubs as the team to beat in the East. No one would catch them. In early August the Cubs swept the Mets in a 4-game home series that further distanced them from the pack. An infamous Keith Moreland-Ed Lynch fight erupted after Lynch hit Moreland with a pitch, perhaps forgetting Moreland was once a linebacker at the University of Texas. It was the second game of a doubleheader and the Cubs had won the first game in part due to a three-run home run by Moreland. After the bench-clearing fight, the Cubs won the second game, and the sweep put the Cubs at 68–45.
In 1984, each league had two divisions, East and West, and the divisional winners met in a best-of-five series to advance to the World Series. In the "2–3" format, the first two games were played at the home of the team that did not have home-field advantage and the last three at the home of the team that did. Thus the first two games were played at Wrigley Field and the next three at the home of their opponents, San Diego. A common and unfounded myth is that, since Wrigley Field did not have lights at that time, the National League decided to give home-field advantage to the winner of the NL West. In fact, home-field advantage had rotated between the winners of the East and West since 1969, when the league expanded. In even-numbered years, the NL West had home-field advantage. In odd-numbered years, the NL East had home-field advantage. Since the NL East winners had had home-field advantage in 1983, the NL West winners were entitled to it. The confusion may stem from the fact that Major League Baseball did decide that, should the Cubs make it to the World Series, the American League winner would have home-field advantage. At the time, World Series home-field advantage rotated between the two leagues: in odd-numbered years the AL had it, and in even-numbered years the NL did. In the 1982 World Series the St. Louis Cardinals of the NL had home-field advantage. In the 1983 World Series the Baltimore Orioles of the AL had home-field advantage. In the NLCS, the Cubs easily won the first two games at Wrigley Field against the San Diego Padres. The Padres were the winners of the Western Division with Steve Garvey, Tony Gwynn, Eric Show, Goose Gossage and Alan Wiggins. With wins of 13–0 and 4–2, the Cubs needed to win only one game of the next three in San Diego to make it to the World Series. After being beaten 7–1 in Game 3, the Cubs lost Game 4 when Smith, with the game tied 5–5, allowed a game-winning home run to Garvey in the bottom of the ninth inning. In Game 5 the Cubs took a 3–0 lead into the 6th inning, and a 3–2 lead into the seventh, with Sutcliffe (who won the Cy Young Award that year) still on the mound. Then Leon Durham had a sharp grounder go under his glove. The critical error helped the Padres to a four-run 7th inning; they won the game 6–3 and kept Chicago out of the 1984 World Series against the Detroit Tigers. The loss ended a spectacular season for the Cubs, one that brought alive a slumbering franchise and made the team relevant to a whole new generation of fans. The Padres would be defeated in 5 games by Sparky Anderson's Tigers in the World Series. The 1985 season brought high hopes. The club started out well, going 35–19 through mid-June, but injuries to Sutcliffe and others in the pitching staff contributed to a 13-game losing streak that pushed the Cubs out of contention. 1989: NL East division championship In 1989, the first full season with night baseball at Wrigley Field, Don Zimmer's Cubs were led by a core group of veterans in Ryne Sandberg, Rick Sutcliffe and Andre Dawson, who were boosted by a crop of youngsters such as Mark Grace, Shawon Dunston, Greg Maddux, Rookie of the Year Jerome Walton, and Rookie of the Year Runner-Up Dwight Smith. The Cubs won the NL East once again that season, winning 93 games. This time the Cubs met the San Francisco Giants in the NLCS.
After splitting the first two games at home, the Cubs headed to the Bay Area, where despite holding a lead at some point in each of the next three games, bullpen meltdowns and managerial blunders ultimately led to three straight losses. The Cubs couldn't overcome the efforts of Will Clark, whose home run off Maddux, just after a managerial visit to the mound, led Maddux to think Clark knew what pitch was coming. Afterward, Maddux would speak into his glove during any mound conversation, beginning what is a norm today. Mark Grace was 11–17 in the series with 8 RBI. Eventually, the Giants lost to the "Bash Brothers" and the Oakland A's in the famous "Earthquake Series."
its music changed through random processes. Coldcut and Hex presented this multimedia project as an example of the forthcoming convergence of pop music and computer-game characters. In 1992, Hex's first single - "Global Chaos" / "Digital Love Opus 1" - combined techno and ambient music with rave-style interactive visuals. In November of that year, Hex released Global Chaos CDTV, which took advantage of the possibilities of the new CD-ROM medium. The Global Chaos CDTV disk (which contained the Top Banana game, interactive visuals and audio) was a forerunner of the "CD+" concept, uniting music, graphics, and video games into one. This multi-dimensional entertainment product received wide coverage in the national media, including features on Dance Energy, Kaleidoscope on BBC Radio 4, What's Up Doc? on ITV and Reportage on BBC Two. i-D Magazine was quoted as saying, "It's like your TV tripping". Coldcut videos were made for most songs, often by Hexstatic, and used a lot of stock and sampled footage. Their "Timber" video, which created an AV collage using techniques analogous to audio sample collage, was put on heavy rotation on MTV. Stuart Warren Hill of Hexstatic referred to this technique as "What you see is what you hear". "Timber" (which appears on both Let Us Play, Coldcut's fourth album, and Let Us Replay, their fifth) won awards for its innovative use of repetitive video clips synced to the music, including being shortlisted in the Edinburgh Television and Film Festival's top five music videos of the year in 1998. Coldcut began integrating video sampling into their live DJ gigs at the time, and incorporated multimedia content that led the press to credit the act with segueing "into the computer age". Throughout the 90s, Hex created visuals for Coldcut's live performances, and developed the CD-ROM portion of Coldcut's Let Us Play and Let Us Replay, in addition to software developed specifically for the album's world tour. Hex's inclusion of music videos and "playtools" (playful art/music software programs) on Coldcut's CD-ROMs was well ahead of the curve at the time, offering viewers/listeners a high level of interactivity. Playtools such as My Little Funkit and Playtime were the prototypes for Ninja Jamm, the app Coldcut designed and launched 16 years later. Playtime followed on from Coldcut and Hex's Synopticon installation, developing the auto-cutup algorithm and using other random processes to generate surprising combinations. Coldcut and Hex performed live using Playtime at the 1st Sonar Festival in 1994. Playtime was also used to generate the backing track for Coldcut's collaboration with Jello Biafra, "Every Home a Prison". In 1994 Coldcut and Hex contributed an installation to the Glasgow Gallery of Modern Art. The piece, called Generator, was installed in the Fire Gallery. Generator was an interactive installation which allowed users to mix sound, video, text and graphics and make their own audio-visual mix, modelled on the techniques and technology used by Coldcut in clubs and live performance events. It consisted of two consoles: the left controlling how the sounds were played, the right controlling how the images were played. As part of the JAM exhibition of "Style, Music and Media" at the Barbican Art Gallery in 1996, Coldcut and Hex were commissioned to produce an interactive audiovisual piece called Synopticon.
Conceived and designed by Robert Pepperell and Matt Black, the digital culture synthesiser allows users to "remix" sounds, images, text and music in a partially random, partially controlled way. The year 1996 also brought the Coldcut name back to More and Black, and the pair celebrated with 70 Minutes of Madness, a mix CD that became part of the Journeys by DJ series. The release was credited with "bringing to wider attention the sort of freestyle mixing the pair were always known for through their radio show on KISS FM, Solid Steel, and their steady club dates". It was voted "Best Compilation of All Time" by Jockey Slut in 1998. In February 1997, they released a double-pack single "Atomic Moog 2000" / "Boot the System", the first Coldcut release on Ninja Tune. It was not eligible for the UK chart because the inclusion of the "Natural Rhythm" video on the CD broke the chart's time and format restrictions. In August 1997, a reworking of the early track "More Beats + Pieces" gave them their first UK Top 40 hit since 1989. The album Let Us Play! followed in September and also made the Top 40. The fourth album by Coldcut, Let Us Play! paid homage to the greats who had inspired them. Their first album to be released on Ninja Tune, it featured guest appearances by Grandmaster Flash, Steinski, Jello Biafra, Jimpster, The Herbaliser, Talvin Singh, Daniel Pemberton and Selena Saliva. Coldcut's cut 'n' paste method on the album was compared to that of Dadaism and William Burroughs. Hex collaborated with Coldcut to produce the multimedia CD-ROM for the album. Hex later evolved the software into the engine that was used on the Let Us Play! world tour. In 1997, Matt Black - alongside Cambridge-based developers Camart - created the real-time video manipulation software VJAMM. It allowed users to be a "digital video jockey", remixing and collaging sound and images and triggering audio and visual samples simultaneously, bringing futuristic technology to the audio-visual field. VJAMM rivalled some of the features of high-end and high-cost tech at the time. The VJAMM technology, praised as proof of how far computers had changed the face of live music, became seminal in both Coldcut's live sets (which were called a "revelation" by Melody Maker) and their DJ sets. Their CCTV live show was featured at major festivals including Glastonbury, Roskilde, Sónar, the Montreux Jazz Festival, and John Peel's Meltdown. The "beautifully simple and devastatingly effective" software was deemed revolutionary, and became recognized as a major factor in the evolution of clubs. It eventually earned a place in the American Museum of the Moving Image's permanent collection. As quoted by The Independent, Coldcut's rallying cry was "Don't hate the media, be the media". NME was quoted as saying: "Veteran duo Coldcut are so cool they invented the remix - now they are doing the same for television." Also working with Camart, Black designed the DJamm software in 1998, which Coldcut used on laptops for their live shows, providing the audio bed alongside VJAMM's audiovisual samples. Matt Black explained they designed DJamm so they "could perform electronic music in a different way – i.e., not just taking a session band out to reproduce what you put together in the studio using samples. It had a relationship to DJing, but was more interactive and more effective." At the time, DJamm was pioneering in its ability to shuffle sliced loops into intricate sequences, enabling users to split loops into any number of parts.
In 1999, Coldcut released Let Us Replay!, a double-disc remix album on which their classic tunes were remixed by the likes of Cornelius (whose contribution was heralded as a highlight of the album), Irresistible Force, Shut Up And Dance, Carl Craig and J Swinscoe. Let Us Replay! pieces together "short sharp shocks that put the mental in 'experimental' and still bring the breaks till the breakadawn". It also includes a few live tracks from the duo's innovative world tour. The CD-ROM of the album, which also contained a free demo of the VJamm software, was one of the earliest audiovisual CD-ROMs on the market, and Muzik claimed it deserved to "have them canonized...it's like buying an entire mini studio for under $15". 2000s In 2000, the Solid Steel show moved to BBC London. Coldcut continued to forge interesting collaborations, including 2001's Re:volution EP, for which Coldcut created their own political party (The Guilty Party). Featuring scratches and samples of Tony Blair and William Hague speeches, the 3-track EP included Nautilus' "Space Journey", which won an Intermusic contest in 2000. The video was widely played on MTV. With "Space Journey", Coldcut were arguably the first group to give fans access to the multitrack parts, or "stems", of their songs, building on the idea of interactivity and sharing from Let Us Play. In 2001, Coldcut produced tracks for the Sega music video game Rez. Rez replaced typical video-game sound effects with electronic music; the player created sounds and melodies, intended to simulate a form of synesthesia. The soundtrack also featured Adam Freeland and Oval. In 2002, while utilizing VJamm and Detraktor, Coldcut and Juxta remixed Herbie Hancock's classic "Rockit", creating both an audio and a video remix. Working with Marcus Clements in 2002, Coldcut released the sample manipulation algorithm from their DJamm software as a standalone VST plugin that could be used in other software, naming it the "Coldcutter". Also in 2002, Coldcut and UK VJs Headspace (now mainly performing as the VJamm Allstars) developed Gridio, an interactive, immersive audio-visual installation for the Pompidou Centre as part of the Sonic Process exhibition. The Sonic Process exhibition was launched at the MACBA in Barcelona in conjunction with Sónar, featuring Gridio as its centerpiece. In 2003, a commission for Graz led to a specially built version of Gridio, in a cave inside the castle mountain in Austria. Gridio was later commissioned by O2 for two simultaneous customised installations at the O2 Wireless Festivals in Leeds and London in 2007. That same year, Gridio was featured as part of Optronica at the opening week of the new BFI Southbank development in London. In 2003, Black worked with Penny Rimbaud (ex-Crass) on Crass Agenda's Savage Utopia project. Black performed the piece with Rimbaud, Eve Libertine and other players at London's Vortex Jazz Club. In 2004, Coldcut collaborated with American video mashup artist TV Sheriff to produce their cut-up entitled "Revolution USA". The tactical-media project (coordinated with Canadian art duo NomIg) followed on from the UK version and extended the premise "into an open access participatory project". Through the multimedia political art project, over 12 gigabytes of footage from the last 40 years of US politics were made accessible to download, allowing participants to create a cut-up over a Coldcut beat.
Coldcut also collaborated with TV Sheriff and NomIg to produce two audiovisual pieces, "World of Evil" (2004) and "Revolution '08" (2008), both composed of footage from the United States presidential elections of the respective years. The music used was composed by Coldcut, with "Revolution '08" featuring a remix by the Qemists. Later that year, a collaboration with the British Antarctic Survey (BAS) led to the psychedelic art documentary Wavejammer. Coldcut was given access to the BAS archive in order to create sounds and visuals for the short film. 2004 also saw Coldcut produce a radio play for BBC Radio 3 in conjunction with author Hari Kunzru, incidentally called Sound Mirrors. Coldcut returned with the single "Everything Is Under Control" at the end of 2005, featuring Jon Spencer (of Jon Spencer Blues Explosion) and Mike Ladd. It was followed in 2006 by their fifth studio album Sound Mirrors, which was described as "one of the most vital and imaginative records Jon Moore and Matt Black have ever made", and saw the duo "continue, impressively, to find new ways to present political statements through a gamut of pristine electronics and breakbeats" (Future Music, 2007). The fascinating array of guest vocalists included Soweto Kinch, Annette Peacock, Amiri Baraka, and Saul Williams. The latter followed on from Coldcut's remix of Williams' "The Pledge" for a project with DJ Spooky. A 100-date audiovisual world tour commenced for Sound Mirrors, which was considered "no small feat in terms of technology or human effort". Coldcut was accompanied by scratch DJ Raj and AV artist Juxta, in addition to guest vocalists from the album, including UK rapper Juice Aleem, Roots Manuva, Mpho Skeef, Jon Spencer and house legend Robert Owens. Three further singles were released from the album including the Top 75 hit "True Skool" with Roots Manuva. The same track appeared on the soundtrack of the video game FIFA Street 2. Sponsored by the British Council, in 2005 Coldcut introduced
AV mixing to India with the Union project, alongside collaborators Howie B and Aki Nawaz of Fun-Da-Mental. Coldcut created an A/V remix of the Bollywood hit movie Kal Ho Naa Ho. In 2006, Coldcut performed an A/V set based on "Music for 18 Musicians" as part of Steve Reich's 70th birthday gig at the Barbican Centre in London. Their version was originally created for the 1999 album Reich Remixed. Coldcut remixed another classic song in 2007: Nina Simone's "Save Me". This was part of a remix album called Nina Simone: Remixed & Re-imagined, featuring remixes from Tony Humphries, Francois K and Chris Coco. In February 2007, Coldcut and Mixmaster Morris created a psychedelic AV obituary/tribute to Robert Anton Wilson, the '60s author of the Illuminatus! Trilogy (Robert Anton Wilson tribute show, Queen Elizabeth Hall, London, 18 March 2007).
The tribute featured graphic novel writer Alan Moore, artist Bill Drummond and a performance by experimental theatre legend Ken Campbell. Coldcut and Morris' hour-and-a-half performance resembled a documentary being remixed on the fly, cutting up nearly 15 hours' worth of Wilson's lectures. In 2008, an international group of party organisers, activists and artists including Coldcut received a grant from the Intelligent Energy Department of the European Union, to create a project that promoted intelligent energy and environmental awareness to the youth of Europe. The result was Energy Union, at once a piece of VJ cinema, a political campaign, a music tour, a party, an art exhibition and a social media hub. Energy Union toured 12 EU countries throughout 2009 and 2010, completing 24 events in total. Coldcut created the Energy Union show for the tour, a one-hour audio/visual montage on the theme of intelligent energy. In presenting new ideas for climate, environmental and energy communication strategies, the Energy Union tour was well received, and reached a widespread audience in cities across the UK, Germany, Belgium, The Netherlands, Croatia, Slovenia, Austria, Hungary, Bulgaria, Spain and the Czech Republic. Also in 2008, Coldcut was asked to remix the theme song for the British cult TV show Doctor Who for the program's 40th anniversary. In October 2008, Coldcut celebrated the legacy of the BBC Radiophonic Workshop (the place where the Doctor Who theme was created) with a live DJ mix at London's legendary Roundhouse. The live mix incorporated classic Radiophonic Workshop compositions with extended sampling of the original gear. Additionally in 2008, Coldcut remixed "Ourselves", a Japanese No. 1 hit from the single "&" by Ayumi Hamasaki. This mix was included on the album Ayu-mi-x 6: Gold. Starting in 2009, Matt Black, with musician/artist/coder Paul Miller (creator of the TX Modular Open Source synth), developed Granul8, a new type of visual effects source that Black termed a "granular video synthesiser". Granul8 allows the use of realtime VJ techniques including video feedback combined with VDMX VJ software. From 2009 onwards, Black has been collaborating with coder and psychedelic mathematician William Rood to create a forthcoming project called Liveloom, a social media AV mixer. Recent work In 2010, Coldcut celebrated 20 years of releasing music with its label, Ninja Tune. A book entitled Ninja Tune: 20 Years of Beats and Pieces was released on 12 August 2010, and an exhibition was held at Black Dog Publishing's Black Dog Space in London, showcasing artwork, design and photography from the label's 20-year history. A compilation album was released on 20 September in two formats:
also known as multi-sensory cooking, modernist cuisine, culinary physics, and experimental cuisine by some chefs. In addition, international trade brings new foodstuffs and ingredients to existing cuisines and leads to changes. The introduction of hot pepper to China from South America around the end of the 17th century greatly influenced Sichuan cuisine, which combines the original taste (with use of Sichuan pepper) with the taste of the newly introduced hot pepper, creating a unique mala flavor that's mouth-numbingly spicy and pungent. Global cuisine A global cuisine is a cuisine that is practiced around the world, and can be categorized according to the common use of major foodstuffs, including grains, produce and cooking fats. Regional cuisines Regional cuisines can vary based on availability and usage of specific ingredients, local cooking traditions and practices, as well as overall cultural differences. Such factors can be more-or-less uniform across wide swaths of territory, or vary intensely within individual regions. For example, in Central America and northern South America, corn (maize), both fresh and dried, is a staple food, and is used in many different ways. In northern Europe, wheat, rye, and fats of animal origin predominate, while in southern Europe olive oil is ubiquitous and rice is more prevalent. In Italy, the cuisine of the north, featuring butter and rice, stands in contrast to that of the south, with its wheat pasta and olive oil. In some parts of China, rice is the staple, while in others this role is filled by noodles and bread. Throughout the Middle East and Mediterranean, common ingredients include lamb, olive oil, lemons, peppers, and rice. The vegetarianism practiced in much of India has made pulses (crops harvested solely for the dry seed) such as chickpeas and lentils as important as wheat or rice. From India to Indonesia, the extensive use of spices is characteristic; coconuts and seafood are also used throughout the region both as foodstuffs and as seasonings. African cuisine African cuisines use a combination of locally available fruits, cereals and vegetables, as well as milk and meat products. In some parts of the continent, the traditional diet features a preponderance of milk, curd and whey products. In much of tropical Africa, however, cow's milk is rare and cannot be produced locally (owing to various diseases that affect livestock). The continent's diverse demographic makeup is reflected in the many different eating and drinking habits, dishes, and preparation techniques of its manifold populations. Asian cuisines Asian cuisines are many and varied, and include East Asian cuisine, South Asian cuisine, Southeast Asian cuisine, Central Asian cuisine and West Asian cuisine. Ingredients common to East Asia and Southeast Asia (due to overseas Chinese influence) include rice, ginger, garlic, sesame seeds, chilies, dried onions, soy, and tofu, with stir frying, steaming, and deep frying being common cooking methods. While rice is common to most regional cuisines in Asia, different varieties are popular in the different regions: Basmati rice is popular in South Asia, jasmine rice in Southeast Asia, long-grain rice in China and short-grain rice in Japan and Korea. Curry is also a common ingredient found in South Asia, Southeast Asia, and East Asia (notably Japanese curry); however, it is not
popular in West Asian and Central Asian cuisines. Curry dishes originating in South Asia usually have a yogurt base, those from Southeast Asia a coconut milk base, and those from East Asia a stewed meat and vegetable base. South Asian cuisine and Southeast Asian cuisine are often characterized by their extensive use of spices and herbs native to the tropical regions of Asia. European cuisine European cuisine (alternatively, "Western cuisine") includes the cuisines of Europe and other Western countries. European cuisine includes non-indigenous cuisines of North America, Australasia, Oceania, and Latin America as well. The term is used by East Asians to contrast with East Asian styles of cooking. When used in English, the term may refer more specifically to cuisine in (Continental) Europe; in this context, a synonym is Continental cuisine, especially in British English. Oceanian cuisine Oceanian cuisines include Australian cuisine, New Zealand cuisine, and the cuisines from many other islands or island groups throughout Oceania. Australian cuisine consists of immigrant Anglo-Celtic-derived cuisine, bushfood prepared and eaten by Aboriginal Australian peoples, and various newer Asian influences. New Zealand cuisine also consists of European-inspired dishes, such as pavlova, and native Maori cuisine. Across Oceania, staples include the kumara (sweet potato) and taro, which has long been a staple from Papua New Guinea to the South Pacific. On most islands in the South Pacific, fish is widely consumed because of the proximity to the ocean. Cuisines of the Americas The cuisines of the Americas are found across North and South America, and are based on the cuisines of the countries from which the immigrant people came, primarily Europe.
However, traditional European cuisine has been adapted by the addition of many local and native ingredients, and many techniques have been added to traditional foods as well. Native American cuisine is prepared by indigenous populations across the continent, and its influences can be seen in multi-ethnic Latin American cuisine. Many staple foods eaten across the continent, such as corn (maize), beans, and potatoes, have native origins. The regional cuisines are North American cuisine, Mexican cuisine, Central American cuisine, South American cuisine, and Caribbean cuisine. See also: Culinary art, Diet food, Dish (food), Food group, Food photography, Food preparation, Food presentation, Foodpairing, Haute cuisine, Kitchen, List of cuisines, List of foods, List of nutrition guides, Meal, Outline of cuisines, Outline of food preparation, Portion size, Recipe, Restaurant, Traditional food, Whole food.
Many multimedia data streams contain both audio and video, and often some metadata that permits synchronization of audio and video. Each of these three streams may be handled by different programs, processes, or hardware; but for the multimedia data streams to be useful in stored or transmitted form, they must be encapsulated together in a container format. Lower-bitrate codecs allow more users, but they also have more distortion. Beyond the initial increase in distortion, lower-bitrate codecs also achieve their lower bit rates by using more complex algorithms that make certain assumptions, such as those about the media and the packet loss rate. Other codecs may not make those same assumptions. When a user with a low-bitrate codec talks to a user with another codec, additional distortion is introduced by each transcoding. Audio Video Interleave (AVI) is sometimes erroneously described as a codec, but AVI is actually a container format, while a codec is a software or hardware tool that encodes or decodes audio or video into or from some audio or video format. Audio and video encoded with many codecs might be put into an AVI container, although AVI is not an ISO standard. There are also other well-known container formats, such as Ogg, ASF, QuickTime, RealMedia, Matroska, and DivX Media Format. MPEG transport stream, MPEG program stream, MP4, and ISO base media file format are examples of container formats that are ISO standardized. Codecs are also used as a disguise for malware: an attacker bundles viruses or other malware with what is presented as a codec download, offered through a pop-up alert or ad. When a user clicks or downloads that supposed codec, the malware is installed on the computer instead. Once a fake codec is installed, it is often used to access private data, corrupt an entire computer system, or keep spreading the malware. One of the previously most common ways to spread malware was fake AV pages, and with the rise of
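The cumulative distortion from transcoding can be sketched numerically. The toy Python example below is not a real codec; it simply models two hypothetical lossy codecs as uniform quantizers with different step sizes and prints the error of a single encode versus an encode followed by a transcode, under those stated assumptions. The step sizes, the test tone and the function names are all arbitrary choices for the example.

import math

def lossy_encode(signal, step):
    # Model a lossy codec as uniform quantization with a given step size.
    return [round(x / step) * step for x in signal]

def rms_error(a, b):
    # Root-mean-square difference between two equal-length signals.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Hypothetical source: one second of a 440 Hz tone sampled at 8 kHz.
source = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]

direct = lossy_encode(source, 0.07)                           # encoded once with "codec B"
transcoded = lossy_encode(lossy_encode(source, 0.05), 0.07)   # "codec A", then re-encoded with "codec B"

print("single encode RMS error:", round(rms_error(source, direct), 4))
print("transcoded RMS error:   ", round(rms_error(source, transcoded), 4))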
from the original uncompressed sound or images, depending on the codec and the settings used. The most widely used lossy data compression technique in digital media is based on the discrete cosine transform (DCT), used in compression standards such as JPEG images, H.26x and MPEG video, and MP3 and AAC audio. Smaller data sets ease the strain on relatively expensive storage sub-systems such as non-volatile memory and hard disk, as well as write-once-read-many formats such as CD-ROM, DVD and Blu-ray Disc. Lower data rates also reduce cost and improve performance when the data is transmitted, e.g. over the internet. Media codecs Two principal techniques are used in codecs, pulse-code modulation and delta modulation. Codecs are often designed to emphasize certain aspects of the media to be encoded. For example, a digital video (using a DV codec) of a sports event needs to encode motion well but not necessarily exact colors, while a video of an art exhibit needs to encode color and surface texture well. Audio codecs for cell phones need to have very low latency between source encoding and playback. In contrast, audio codecs for recording or broadcast can use high-latency audio compression techniques to achieve higher fidelity at a lower bit-rate. There are thousands of audio and video codecs, ranging in cost from free to hundreds of dollars or more. This variety of codecs can create compatibility and obsolescence issues. The impact is lessened for older formats, for which free or nearly-free codecs have existed for a long time. The older formats are often ill-suited to modern applications, however, such as playback in small portable devices. For example, raw uncompressed PCM audio (44.1 kHz, 16 bit stereo, as represented on an audio CD or in
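As a rough illustration of the figures above: raw CD-style PCM at 44,100 samples per second, 16 bits per sample and 2 channels works out to 44,100 × 16 × 2 = 1,411,200 bits per second, roughly 1.41 Mbit/s, before any compression. The Python sketch below is not JPEG, MP3 or any real standard, just a minimal demonstration of the DCT idea those formats build on: transform a block of samples, keep only the largest coefficients, and reconstruct an approximation. The block size, the test signal and the number of coefficients kept are arbitrary choices for the example.

import math

def dct(block):
    # Orthonormal DCT-II of a list of samples.
    n = len(block)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n)) for i, x in enumerate(block))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def idct(coeffs):
    # Inverse of the orthonormal DCT-II above (a DCT-III).
    n = len(coeffs)
    out = []
    for i in range(n):
        s = 0.0
        for k, c in enumerate(coeffs):
            scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
            s += scale * c * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
        out.append(s)
    return out

# Hypothetical 32-sample block of a smooth signal.
block = [math.sin(2 * math.pi * i / 32) + 0.3 * math.sin(2 * math.pi * 3 * i / 32) for i in range(32)]

coeffs = dct(block)
keep = 6  # the "lossy" step: keep coefficients at or above the 6th-largest magnitude
threshold = sorted([abs(c) for c in coeffs], reverse=True)[keep - 1]
compressed = [c if abs(c) >= threshold else 0.0 for c in coeffs]

approx = idct(compressed)
err = max(abs(a - b) for a, b in zip(block, approx))
print("max reconstruction error keeping", keep, "of 32 coefficients:", round(err, 4))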
with discovering 15 asteroids, and he observed nearly 800 asteroids during his search for Pluto and years of follow-up searches looking for another candidate for the postulated Planet X. Tombaugh is also credited with the discovery of periodic comet 274P/Tombaugh–Tenagra. He also discovered hundreds of variable stars, as well as star clusters, galaxy clusters, and a galaxy supercluster. Interest in UFOs Tombaugh was probably the most eminent astronomer to have reported seeing unidentified flying objects. On August 20, 1949, Tombaugh saw several unidentified objects near Las Cruces, New Mexico. He described them as six to eight rectangular lights, stating: "I doubt that the phenomenon was any terrestrial reflection, because... nothing of the kind has ever appeared before or since... I was so unprepared for such a strange sight that I was really petrified with astonishment." Tombaugh observed these rectangles of light for about 3 seconds and his wife saw them for about seconds. He never supported the interpretation as a spaceship that has often been attributed to him. He considered other possibilities, with a temperature inversion as the most likely cause: "From my own studies of the solar system I cannot entertain any serious possibility for intelligent life on other planets, not even for Mars... The logistics of visitations from planets revolving around the nearer stars is staggering. In consideration of the hundreds of millions of years in the geologic time scale when such visits may have possibly occurred, the odds of a single visit in a given century or millennium are overwhelmingly against such an event. A much more likely source of explanation is some natural optical phenomenon in our own atmosphere. In my 1949 sightings the faintness of the object, together with the manner of fading in intensity as it traveled away from the zenith towards the southeastern horizon, is quite suggestive of a reflection from an optical boundary or surface of slight contrast in refractive index, as in an inversion layer. I have never seen anything like it before or since, and I have spent a lot of time where the night sky could be seen well. This suggests that the phenomenon involves a comparatively rare set of conditions or circumstances to produce it, but nothing like the odds of an interstellar visitation." Another sighting by Tombaugh a year or two later while at a White Sands observatory was of an object of −6 magnitude, four times brighter than Venus at its brightest, going from the zenith to the southern horizon in about 3 seconds. The object executed the same maneuvers as in Tombaugh's first sighting. Tombaugh later reported having seen three of the mysterious green fireballs, which suddenly appeared over New Mexico in late 1948 and continued at least through the early 1950s. A researcher on Project Twinkle reported that Tombaugh "... never observed an unexplainable aerial object despite his continuous and extensive observations of the sky." According to an entry in "UFO updates", Tombaugh said: "I have seen three objects in the last seven years which defied any explanation of known phenomenon, such as Venus, atmospheric optic, meteors or planes. I am a professional, highly skilled, professional astronomer. In addition I have seen three green fireballs which were unusual in behavior from normal green fireballs... I think that several reputable scientists are being unscientific in refusing to entertain the possibility of extraterrestrial origin and nature."
Shortly after this, in January 1957, in an Associated Press article in the Alamogordo Daily News titled "Celestial Visitors May Be Invading Earth's Atmosphere", Tombaugh was again quoted on his sightings and opinion about them. "Although our own solar system is believed to support no other life than on Earth, other stars in the galaxy may have hundreds of thousands of habitable worlds. Races on these worlds may have been able to utilize the tremendous amounts of power required to bridge the space between the stars ...". Tombaugh stated that he had observed celestial phenomena which he could not explain, but had seen none personally since 1951 or 1952. "These things, which do appear to be directed, are unlike any other phenomena I ever observed. Their apparent lack of obedience to the ordinary laws of celestial motion gives credence." In 1949, Tombaugh had also told the Naval missile director at White Sands Missile Range, Commander Robert McLaughlin, that he had seen a bright flash on Mars on August 27, 1941, which he now attributed to an atomic blast. Tombaugh also noted that the first atomic bomb tested in New Mexico would have lit up the dark side of the Earth like a neon sign and that Mars was coincidentally quite close at the time, the implication apparently being that the atomic test would have been visible from Mars. In June 1952, Dr. J. Allen Hynek, an astronomer acting as a scientific consultant to the Air Force's Project Blue Book UFO study, secretly conducted a survey of fellow astronomers on UFO sightings and attitudes while attending an astronomy convention. Tombaugh and four other astronomers, including Dr. Lincoln LaPaz of the University of New Mexico, told Hynek about their sightings. Tombaugh also told Hynek that his telescopes were at the Air Force's disposal for taking photos of UFOs,
This ruled out classification as an asteroid, and they decided this was the ninth planet that Lowell had predicted. The discovery was made on Tuesday, February 18, 1930, using images taken the previous month. Three classical mythological names were about equally popular among proposals for the new planet: Minerva, Cronus and Pluto. However, Minerva was already in use and the primary supporter of Cronus was widely disliked, leaving Pluto as the front-runner. Outside of Lowell staff, it was first proposed by an 11-year-old English schoolgirl, Venetia Burney. In its favor was that the Pluto of Roman mythology was able to render himself invisible, and that its first two letters formed Percival Lowell's initials. In order to avoid the name changes suffered by Neptune, the name was proposed to both the American Astronomical Society and the Royal Astronomical Society, both of which approved it unanimously. The name was officially adopted on May 1, 1930. Following the discovery, it was recognized that Pluto wasn't massive enough to be the expected ninth planet, and some astronomers began to consider it the first of a new class of object – and Tombaugh did search for additional trans-Neptunian objects for years, though due to the lack of any further discoveries he concluded that Pluto was indeed a planet. The idea that Pluto was not a true planet remained a minority position until the discovery of other Kuiper belt objects in the late 1990s, which showed that it did not orbit alone but was at best the largest of a number of icy bodies in its region of space. After it was shown that at least one such body, dubbed Eris, was more massive than Pluto, the International Astronomical Union (IAU) reclassified Pluto on August 24, 2006, as a dwarf planet, leaving eight planets in the Solar System. Tombaugh's widow Patricia stated after the IAU's decision that while he might have been disappointed with the change since he had resisted attempts to remove Pluto's planetary status in his lifetime, he would have accepted the decision now if he were alive. She noted that he "was a scientist. He would understand they had a real problem when they start finding several of these things flying around the place." Hal Levison offered this perspective on Tombaugh's place in history: "Clyde Tombaugh discovered the Kuiper Belt. That's a helluva lot more interesting than the ninth planet." Further search Tombaugh continued searching for over a decade after the discovery of Pluto, and the lack of further discoveries left him satisfied that no other object of a comparable apparent magnitude existed near the ecliptic. No more trans-Neptunian objects were discovered until 15760 Albion, in 1992. However, more recently the relatively bright object Makemake has been discovered. It has a relatively high orbital inclination, but at the time of Tombaugh's discovery of Pluto, Makemake was only a few degrees from the ecliptic, near the border of Taurus and Auriga, at an apparent magnitude of 16. This position was also very near the galactic equator, making it almost impossible to find such an object within the dense concentration of background stars of the Milky Way. In the fourteen years of looking for planets, until he was drafted in July 1943, Tombaugh looked for motion in 90 million star images (two each of 45 million stars).
supervise the administration of the Principality of Transylvania as the head of the Transylvanian chancellery at Kraków. Christopher ordered the imprisonment of Ferenc Dávid, a leading theologian of the Unitarian Church of Transylvania, who started to condemn the adoration of Jesus. He supported his brother's efforts to settle the Jesuits in Transylvania. Early life Christopher was the third of the four sons of Stephen Báthory of Somlyó and Catherine Telegdi. His father was a supporter of John Zápolya, King of Hungary, who made him voivode of Transylvania in February 1530. Christopher was born in Báthorys' castle at Szilágysomlyó (now Șimleu Silvaniei in Romania) in the same year. His father died in 1534. His brother, Andrew, and their kinsman, Tamás Nádasdy, took charge of Christopher's education. Christopher visited England, France, Italy, Spain, and the Holy Roman Empire in his youth. He also served as a page in Emperor Charles V's court. Career Christopher entered the service of John Zápolya's widow, Isabella Jagiellon, in the late 1550s. At the time, Isabella administered the eastern territories of the Kingdom of Hungary on behalf of her son, John Sigismund Zápolya. She wanted to persuade Henry II of France to withdraw his troops from three fortresses that the Ottomans had captured in Banat, so she sent Christopher to France to start negotiations in 1557. John Sigismund took charge of the administration of his realm after his mother died on 15 November 1559. He retained his mother's advisors, including Christopher who became one of his most influential officials. After the rebellion of Melchior Balassa, Christopher persuaded John Sigismund to fight for his realm instead of fleeing to Poland in 1562. Christopher was one of the commanders of John Sigismund's troops during the ensuing war against the Habsburg rulers of the western territories of the Kingdom of Hungary, Ferdinand and Maximilian, who tried to reunite the kingdom under their rule. Christopher defeated Maximilian's commander, Lazarus von Schwendi, forcing him to lift the siege of Huszt (now Khust in Ukraine) in 1565. After the death of John Sigismund, the Diet of Transylvania elected Christopher's younger brother, Stephen Báthory, voivode (or ruler) on 25 May 1571. Stephen made Christopher captain of Várad (now Oradea in Romania). The following year, the Ottoman Sultan, Selim II (who was the overlord of Transylvania), acknowledged the hereditary right of the Báthory family to rule the province. Reign Stephen Báthory was elected King of Poland on 15 December 1575. He adopted the title of Prince of Transylvania and made Christopher voivode on 14 January 1576. An Ottoman delegation confirmed Christopher's appointment at the Diet in Gyulafehérvár (now Alba Iulia in
Romania) in July. The sultan's charter (or ahidnâme) sent to Christopher emphasized that he should keep the peace along the frontiers. Stephen set up a separate chancellery in Kraków to keep an eye on the administration of Transylvania. The head of the new chancellery, Márton Berzeviczy, and Christopher cooperated closely. Anti-Trinitarian preachers began to condemn the worshiping of Jesus in Partium and Székely Land in 1576, although the Diet had already forbidden all doctrinal innovations. Ferenc Dávid, the most influential leader of the Unitarian Church of Transylvania, openly joined the dissenters in the autumn of 1578. Christopher invited Fausto Sozzini, a leading Anti-Trinitarian theologian, to Transylvania to convince Dávid that the new teaching was erroneous. Since Dávid refused to obey, Christopher held a Diet and the "Three Nations" (including the Unitarian delegates) ordered Dávid's imprisonment. Christopher also supported his brother's attempts to strengthen the position of the Roman Catholic Church in Transylvania. He granted estates to the Jesuits to promote the establishment of a college in Kolozsvár (now Cluj-Napoca in Romania) on 5 May 1579. Christopher fell seriously ill after his second wife, Elisabeth Bocskai, died in early 1581. After a false rumor of Christopher's death reached Istanbul, Koca Sinan Pasha offered Transylvania to Pál Márkházy, whom Christopher had forced into exile. Although Christopher's only surviving son Sigismund was still a minor, the Diet elected him as voivode before Christopher's death, because they wanted to prevent the appointment of Márkházy. Christopher died in Gyulafehérvár on 27 May 1581. He was buried in the Jesuits' church in Gyulafehérvár, almost two years later, on 14 March 1583. Family Christopher's first wife, Catherina Danicska, was a Polish noblewoman, but only the Hungarian form of her name is known. Their eldest son, Balthasar Báthory, moved to Kraków shortly after Stephen Báthory was crowned King of Poland;
software modules and accompanying documentation for 39,000 distributions, written in the Perl programming language by over 12,000 contributors. CPAN can denote either the archive network or the Perl program that acts as an interface to the network and as an automated software installer (somewhat like a package manager). Most software on CPAN is free and open source software. History CPAN was conceived in 1993 and has been active online since October 1995. It is based on the CTAN model and began as a place to unify the structure of scattered Perl archives. Role Like many programming languages, Perl has mechanisms to use external libraries of code, so that one file can contain common routines used by several programs. Perl calls these modules. Perl modules are typically installed in one of several directories whose paths are built into the Perl interpreter when it is first compiled; on Unix-like operating systems, common paths include /usr/lib/perl5, /usr/local/lib/perl5, and several of their subdirectories. Perl comes with a small set of core modules. Some of these perform bootstrapping tasks, such as ExtUtils::MakeMaker, which is used to create Makefiles for building and installing other extension modules; others, like List::Util, are merely commonly used. CPAN's main purpose is to help programmers locate modules and programs not included in the Perl standard distribution. Its structure is decentralized. Authors maintain and improve their own modules. Forking, and creating competing modules for the same task or purpose, is common. There is a third-party bug tracking system that is automatically set up for any uploaded distribution, but authors may opt to use a different bug tracking system such as GitHub. Similarly, though GitHub is a popular location to store the source for distributions, it may be stored anywhere the author prefers, or may not be publicly accessible at all. Maintainers may grant permissions to others to maintain or take over their modules, and permissions may be granted by admins for those wishing to take over abandoned modules. Previous versions of updated distributions are retained on CPAN until deleted by the uploader, and a secondary mirror network called BackPAN retains distributions even if they are deleted from CPAN. Also, the complete history of the CPAN and all its modules is available as the GitPAN project, allowing one to easily see the complete history of all the modules and making it easy to maintain forks. CPAN is also used to distribute new versions of Perl, as well as related projects, such as Parrot and Raku. Structure Files on the CPAN are referred to as distributions. A distribution may consist of one or more modules, documentation files, or programs packaged in a common archiving format, such as a gzipped tar archive or a ZIP file. Distributions will often contain installation scripts (usually called Makefile.PL or Build.PL) and test scripts which can be run to verify that the contents of the distribution are functioning properly. New distributions are uploaded to the Perl Authors Upload Server, or PAUSE (see the section Uploading distributions with PAUSE). In 2003, distributions started to include metadata files, called META.yml, indicating the distribution's name, version, dependencies, and other useful information; however, not all distributions contain metadata. When metadata is not present in a distribution, PAUSE's software will try to analyze the code in the distribution to look for the same information; this is not necessarily very reliable.
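The mapping from a hierarchical module name to an installed file, as described above, can be sketched in a few lines. The Python snippet below is only an illustration of the convention, not Perl's actual implementation: a name such as Lingua::EN::Inflect corresponds to the relative path Lingua/EN/Inflect.pm, which the interpreter looks for in each library directory in turn; the directory list and function names here are assumptions made for the example.

import os

def module_to_relative_path(module_name):
    # Convert a Perl module name like 'Lingua::EN::Inflect' to 'Lingua/EN/Inflect.pm'.
    return os.path.join(*module_name.split("::")) + ".pm"

def find_module(module_name, lib_dirs):
    # Return the first matching file in the search path, or None if not installed.
    rel = module_to_relative_path(module_name)
    for lib in lib_dirs:
        candidate = os.path.join(lib, rel)
        if os.path.isfile(candidate):
            return candidate
    return None

# Example search path, loosely modelled on the directories mentioned above.
search_path = ["/usr/lib/perl5", "/usr/local/lib/perl5"]
print(find_module("Lingua::EN::Inflect", search_path))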
In 2010, version 2 of this specification was created to be used via a new file called META.json, with the YAML format file often also included for backward compatibility. With thousands of distributions, CPAN needs to be structured to be useful. Authors often place their modules in the natural hierarchy of Perl module names (such as Apache::DBI or Lingua::EN::Inflect) according to purpose or domain, though this is not enforced. CPAN module distributions usually have names in the form of CGI-Application-3.1 (where the :: used in the module's name has been replaced with a dash, and the version number has been appended
to the name), but this is only a convention; many prominent distributions break the convention, especially those that contain multiple modules. Security restrictions prevent a distribution from ever being replaced with an identical filename, so virtually all distribution names do include a version number. Components The distribution infrastructure of CPAN consists of its worldwide network of more than 250 mirrors in more than 60 countries. Each full mirror hosts around 31 gigabytes of data. Most mirrors update themselves hourly, daily or bidaily from the CPAN master site. Some sites are major FTP servers which mirror lots of other software, but others are simply servers owned by companies that use Perl heavily.
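The naming convention just described is mechanical enough to express in code. The short Python sketch below is only an illustration of that convention (module name with :: replaced by dashes, version appended), not a tool from the CPAN toolchain; the module names and version numbers used are arbitrary examples.

def distribution_name(main_module, version):
    # Build a conventional CPAN distribution name, e.g. ('CGI::Application', '3.1') -> 'CGI-Application-3.1'.
    return "%s-%s" % (main_module.replace("::", "-"), version)

def archive_filename(main_module, version, ext="tar.gz"):
    # A typical archive filename for such a distribution, e.g. 'CGI-Application-3.1.tar.gz'.
    return "%s.%s" % (distribution_name(main_module, version), ext)

print(distribution_name("CGI::Application", "3.1"))      # CGI-Application-3.1
print(archive_filename("Lingua::EN::Inflect", "1.905"))  # Lingua-EN-Inflect-1.905.tar.gz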
There are at least two mirrors on every continent except Antarctica. Several search engines have been written to help Perl programmers sort through the CPAN. The official search engine, search.cpan.org, includes textual search, a browsable index of modules, and extracted copies of all distributions currently on the CPAN. On 16 May 2018, the Perl Foundation announced that search.cpan.org would be shut down on 29 June 2018 (after 19 years of operation), due to its aging codebase and maintenance burden. Users would be transitioned and redirected to the third-party alternative MetaCPAN. CPAN Testers are a group of volunteers who download and test distributions as they are uploaded to CPAN. This enables the authors to have their modules tested on many platforms and environments that they would otherwise not have access to, thus helping to promote portability, as well as a degree of quality. Smoke testers send reports, which are then collated and used for a variety of presentation websites, including the main reports site, statistics and dependencies. Authors can upload new distributions to the CPAN through the Perl Authors Upload Server (PAUSE). To do so, they must request a PAUSE account. Once registered, they may use a web interface at pause.perl.org, or an FTP interface, to upload files to their directory and delete them. Modules in the upload will only be indexed as canonical if the module name has not been used before (granting first-come permission to the uploader), or if the uploader has permission for that name, and if the module is a higher version than any existing entry. This can be specified through PAUSE's web interface. CPAN.pm, CPANPLUS, and cpanminus There is also a Perl core module named CPAN; it is usually differentiated from the repository itself by using the name CPAN.pm. CPAN.pm is mainly an interactive shell which can be used to search for, download, and install distributions. An interactive shell called cpan is also provided in the Perl core, and is the usual way
Rockies were MLB's first team based in the Mountain Time Zone. They have reached the Major League Baseball postseason five times, each time as the National League wild card team. Twice (1995 and 2009) they were eliminated in the first round of the playoffs. In 2007, the Rockies advanced to the World Series, only to be swept by the Boston Red Sox. The team's stretch run was among the greatest ever for a Major League Baseball team. Having a record of 76-72 at the start of play on September 16, the Rockies proceeded to win 14 of their final 15 regular season games. The stretch culminated with a 9-8, 13-inning victory over the San Diego Padres in a one-game playoff for the wild card berth. Colorado then swept their first seven playoff games to win the NL pennant (thus, at the start of the World Series, the Rockies had won a total of 21 out of 22 games). Fans and media nicknamed their improbable October run "Rocktober". Colorado made postseason berths in 2017 and 2018. In 2018, the Rockies became the first team since the 1922 Philadelphia Phillies to play in four cities against four teams in five days, including the 162nd game of the regular season, the NL West tie-breaker, the NL Wild Card Game and NLDS Game 1, eventually losing to the Milwaukee Brewers in the NLDS. Like their expansion brethren, the Miami Marlins, they have never won a division title since their establishment; along with the Marlins and the Pittsburgh Pirates, they are one of three MLB teams that have never won their current division. The Rockies have played their home games at Coors Field since 1995. Their newest spring training home, Salt River Fields at Talking Stick in Scottsdale, Arizona, opened in March 2011 and is shared with the Arizona Diamondbacks. Controversies On June 1, 2006, USA Today reported that Rockies management, including manager Clint Hurdle, had instituted an explicitly Christian code of conduct for the team's players, banning men's magazines (such as Maxim and Playboy) and sexually explicit music from the team's clubhouse. The article sparked controversy, and soon after, The Denver Post published an article featuring many Rockies players contesting the claims made in the USA Today article. Former Rockies pitcher Jason Jennings was among them: "[The article in USA Today] was just bad. I am not happy at all. Some of the best teammates I have ever had are the furthest thing from Christian," Jennings said. "You don't have to be a Christian to have good character. They can be separate. [The article] was misleading." On October 17, 2007, a week before the first game of the 2007 World Series against the Boston Red Sox, the Colorado Rockies announced that tickets were to be available to the general public via online sales only, despite prior arrangements to sell the tickets at local retail outlets. Five days later, on October 22, California-based ticket vendor Paciolan, Inc., the sole contractor authorized by the Colorado Rockies to distribute tickets, was forced to suspend sales after less than an hour due to an overwhelming number of attempts to purchase tickets. An official release from the baseball organization claimed that they were the victims of a denial-of-service attack. These claims, however, were unsubstantiated, and neither the Rockies nor Paciolan have sought investigation into the matter. The United States Federal Bureau of Investigation started its own investigation into the claims. Ticket sales resumed the next day, with all three home games selling out within two and a half hours.
Season record Uniforms The Rockies' home uniform is white with purple pinstripes, and the Rockies are the first team in Major League history to wear purple pinstripes. The front of the uniform is emblazoned with the team name in silver trimmed in black, and letters and numerals are in black trimmed in silver. During the Rockies' inaugural season, they went without names on the back of their home uniforms, but added them for the following season. In 2000, numerals were added to the chest. The Rockies' road uniform is grey with purple piping. The front of the uniform originally featured the team name in silver trimmed in purple, but was changed the next season to purple with white trim. Letters and numerals are in purple with white trim. In the 2000 season, piping was replaced with pinstripes, "Colorado" was emblazoned in front, chest numerals were placed, and black trim was added to the letters. Prior to the 2012 season, the Rockies brought back the purple piping on their road uniforms, but kept the other elements of their 2000 uniform change. The Rockies originally wore an alternate black uniform during their maiden 1993 season, but for only a few games. The uniform featured the team name in silver with purple trim, and letters and numerals in purple with white trim. In the 2005 season, the Rockies started wearing black sleeveless alternate uniforms, featuring "Colorado", letters and numerals in silver with purple and white trim. The uniforms also included black undershirts, and for a few games in 2005, purple undershirts. From 2002 to 2011, the Rockies wore alternate versions of their pinstriped white uniform, featuring the interlocking "CR" on the left chest and numerals on the right chest. This design featured sleeves until 2004, when they went with a vest design with black undershirts. In addition to the black sleeveless alternate uniform, the Rockies also wear
a purple alternate uniform, which they first unveiled in the 2000 season. The design featured "Colorado" in silver with black and white trim, and letters and numerals in black with white trim. At the start of the 2012 season, the Rockies introduced "Purple Mondays", in which the team wore its purple uniform every Monday game day, though it continued to wear the uniform on other days of the week as well. Prior to 2019, the Rockies always wore their white pinstriped pants regardless of what uniform top they wore during home games. However, the Rockies have since added alternate white non-pinstriped pants to pair with either their black or purple alternate uniforms at home, as neither uniform contained pinstripes. The Rockies currently wear an all-black cap with "CR" in purple trimmed in silver and a purple-brimmed variation as an alternate. The team previously wore an all-purple cap with "CR" in black trimmed in silver, and in the 2018 season, caps with the "CR" in silver to commemorate the team's 25th anniversary. Baseball Hall of Famers In 2020, Larry Walker was the first Colorado Rockies player to be inducted into the Baseball Hall of Fame. Colorado Sports Hall of Fame Retired numbers Todd Helton is the first Colorado player to have his number (17) retired, which was done on Sunday, August 17, 2014. Newly elected Hall of Fame member Larry Walker was to have his number (33) retired on April 19, 2020 at Coors Field, but this ceremony was postponed to September 25, 2021 due to the COVID-19 pandemic. Jackie Robinson's number, 42, was retired throughout all of baseball in 1997. Keli McGregor had worked with the Rockies since their inception in 1993, rising from senior director of operations to team president in 2002, until his death on April 20, 2010. He is honored at Coors Field alongside Helton, Walker, and Robinson with his initials. Out of circulation, but not retired The Rockies have not re-issued Carlos Gonzalez's uniform number 5 since he left the team after 2018.
Individual awards
NL MVP: 1997 – Larry Walker
NLCS MVP: 2007 – Matt Holliday
NL Rookie of the Year: 2002 – Jason Jennings
NL Comeback Player of the Year: 2017 – Greg Holland; 2020 – Daniel Bard
Silver Slugger Award: Dante Bichette (1995), Vinny Castilla (1995, 1997–1998), Andrés Galarraga (1996), Eric Young (1996), Ellis Burks (1996), Larry Walker (1997, 1999), Mike Hampton (2001–2002), Todd Helton (2000–2003), Matt Holliday (2006–2008), Carlos González (2010, 2015), Troy Tulowitzki (2010–2011), Michael Cuddyer (2013), Nolan Arenado (2015–2018), Charlie Blackmon (2016–2017), Trevor Story (2018–2019), Germán Márquez (2018)
Hank Aaron Award: 2000 – Todd Helton
Gold Glove Award: Larry Walker (1997–1999, 2001–2002), Neifi Pérez (2000), Todd Helton (2001–2002, 2004), Carlos González (2010, 2012–2013), Troy Tulowitzki (2010–2011), Nolan Arenado (2013–2019), DJ LeMahieu (2014, 2017–2018)
Manager of the Year Award: 1995 – Don Baylor; 2009 – Jim Tracy
NL Batting Champion: Andrés Galarraga (1993), Larry Walker (1998, 1999, 2001), Todd Helton (2000), Matt Holliday (2007), Carlos González (2010), Michael Cuddyer (2013), Justin Morneau (2014), DJ LeMahieu (2016), Charlie Blackmon (2017)
DHL Hometown Heroes (2006): Larry Walker – voted by MLB fans as the most outstanding player in the history of the franchise, based on on-field performance, leadership quality and character value
Team awards: Warren Giles Trophy (National League champion), 2007; Baseball America Organization of the Year, 2007
Team records (single-game, single-season, career)
Championships
National League Champions: 2007 (preceded by the St. Louis Cardinals; succeeded by the Philadelphia Phillies)
National League Wild Card Winners: 1995 (the first NL Wild Card winners; succeeded by the Los Angeles Dodgers); 2007 (preceded by the Los Angeles Dodgers; succeeded by the Milwaukee Brewers); 2009 (preceded by the Milwaukee Brewers; succeeded by the Atlanta Braves); 2018 (preceded by the Arizona Diamondbacks; succeeded by the Washington Nationals)
National League Wild Card Runner-Up: 2017
Roster
Home attendance
The Rockies led MLB in home attendance for the first seven years of their existence, and their inaugural season still holds the MLB all-time record for single-season home attendance (in the strike-shortened seasons, the Rockies played 57 and 72 home games).
Minor league affiliations
The Colorado Rockies farm system consists of seven minor league affiliates.
Radio and television
As of 2010, the Rockies' flagship radio station is KOA 850 AM, with some late-season games broadcast on KHOW 630 AM due to conflicts with Denver Broncos games. The Rockies Radio Network is composed of 38 affiliate stations in eight states. As of 2019, Jack Corrigan is the radio announcer, serving as a backup TV announcer whenever Drew Goodman is not available.
In January 2020, longtime KOA radio announcer Jerry Schemmel was let go by KOA's parent company for budgetary reasons. As of 2013, Spanish-language radio broadcasts of the Rockies are heard on KNRV 1150 AM.
CA in cement chemist notation, CCN) and mayenite Ca12Al14O33 (12 CaO · 7 Al2O3, or C12A7 in CCN). Strength develops by hydration to calcium aluminate hydrates. They are well adapted for use in refractory (high-temperature-resistant) concretes, e.g., for furnace linings. Calcium sulfoaluminate cements are made from clinkers that include ye'elimite (Ca4(AlO2)6SO4, or C4A3Ŝ in CCN) as a primary phase. They are used in expansive cements, in ultra-high early strength cements, and in "low-energy" cements. Hydration produces ettringite, and specialized physical properties (such as expansion or rapid reaction) are obtained by adjustment of the availability of calcium and sulfate ions. Their use as a low-energy alternative to Portland cement has been pioneered in China, where several million tonnes per year are produced. Energy requirements are lower because of the lower kiln temperatures required for reaction and the lower amount of limestone (which must be endothermically decarbonated) in the mix. In addition, the lower limestone content and lower fuel consumption lead to a CO2 emission around half that associated with Portland clinker. However, SO2 emissions are usually significantly higher. "Natural" cements, corresponding to certain cements of the pre-Portland era, are produced by burning argillaceous limestones at moderate temperatures. The level of clay components in the limestone (around 30–35%) is such that large amounts of belite (the low-early-strength, high-late-strength mineral in Portland cement) are formed without the formation of excessive amounts of free lime. As with any natural material, such cements have highly variable properties. Geopolymer cements are made from mixtures of water-soluble alkali metal silicates and aluminosilicate mineral powders such as fly ash and metakaolin. Polymer cements are made from organic chemicals that polymerise; producers often use thermoset materials. While they are often significantly more expensive, they can give a waterproof material that has useful tensile strength. Sorel cement is a hard, durable cement made by combining magnesium oxide and a magnesium chloride solution. Fiber mesh cement, or fiber-reinforced concrete, is cement that incorporates fibrous materials such as synthetic fibers, glass fibers, natural fibers, and steel fibers. The fibers are distributed evenly throughout the wet concrete. The purpose of fiber mesh is to reduce water loss from the concrete as well as to enhance its structural integrity. When used in plasters, fiber mesh increases cohesiveness, tensile strength and impact resistance, and reduces shrinkage; ultimately, the main purpose of these combined properties is to reduce cracking. Setting, hardening and curing Cement starts to set when mixed with water, which causes a series of hydration chemical reactions. The constituents slowly hydrate, and the mineral hydrates solidify and harden. The interlocking of the hydrates gives cement its strength. Contrary to popular belief, hydraulic cement does not set by drying out; proper curing requires maintaining the appropriate moisture content necessary for the hydration reactions during the setting and hardening processes. If hydraulic cements dry out during the curing phase, the resulting product can be insufficiently hydrated and significantly weakened. A curing temperature of at least 5 °C and no more than 30 °C is recommended.
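As a simplified illustration of the hydration reactions described above, two commonly quoted approximate equations can be written in cement chemist notation (C = CaO, A = Al2O3, S = SiO2, H = H2O); the exact hydrate assemblages formed in practice depend on temperature and composition, so these stoichiometries should be read as indicative sketches rather than exact chemistry:
\[
\mathrm{CA} + 10\,\mathrm{H} \rightarrow \mathrm{CAH_{10}} \qquad \text{(calcium aluminate cement, low-temperature hydrate)}
\]
\[
2\,\mathrm{C_3S} + 6\,\mathrm{H} \rightarrow \mathrm{C_3S_2H_3} + 3\,\mathrm{CH} \qquad \text{(alite in Portland cement; the silicate hydrate approximates C–S–H, CH is portlandite)}
\]
Both reactions consume water, which is why the moisture retention described above matters throughout setting and hardening.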
Concrete at a young age must be protected against water evaporation caused by direct insolation, elevated temperature, low relative humidity and wind. The interfacial transition zone (ITZ) is a region of the cement paste around the aggregate particles in concrete. In this zone, a gradual transition in the microstructural features occurs. This zone can be up to 35 micrometers wide, and other studies have shown that the width can be up to 50 micrometers. The average content of unreacted clinker phase decreases and porosity increases towards the aggregate surface. Similarly, the content of ettringite increases in the ITZ. Safety issues Bags of cement routinely have health and safety warnings printed on them, because not only is cement highly alkaline, but the setting process is also exothermic. As a result, wet cement is strongly caustic (pH = 13.5) and can easily cause severe skin burns if not promptly washed off with water. Similarly, dry cement powder in contact with mucous membranes can cause severe eye or respiratory irritation. Some trace elements, such as chromium, from impurities naturally present in the raw materials used to produce cement may cause allergic dermatitis. Reducing agents such as ferrous sulfate (FeSO4) are often added to cement to convert the carcinogenic hexavalent chromate (CrO42−) into trivalent chromium (Cr3+), a less toxic chemical species. Cement users also need to wear appropriate gloves and protective clothing. Cement industry in the world In 2010, world production of hydraulic cement was led by China with 1,800 million tonnes, India with 220 million tonnes, and the USA with 63.5 million tonnes; together, the world's three most populous states accounted for over half of the world total. World cement production capacity in 2010 showed a similar picture, with the top three states (China, India, and the USA) accounting for just under half of the world total capacity. Over 2011 and 2012, global consumption continued to climb, rising to 3585 Mt in 2011 and 3736 Mt in 2012, while annual growth rates eased to 8.3% and 4.2%, respectively. China, representing an increasing share of world cement consumption, remains the main engine of global growth. By 2012, Chinese demand was recorded at 2160 Mt, representing 58% of world consumption. Annual growth rates, which reached 16% in 2010, appear to have softened, slowing to 5–6% over 2011 and 2012, as China's economy targets a more sustainable growth rate. Outside of China, worldwide consumption climbed by 4.4% to 1462 Mt in 2010, 5% to 1535 Mt in 2011, and finally 2.7% to 1576 Mt in 2012. Iran is now the third-largest cement producer in the world and has increased its output by over 10% from 2008 to 2011. Due to climbing energy costs in Pakistan and other major cement-producing countries, Iran is in a unique position as a trading partner, utilizing its own surplus petroleum to power clinker plants. Now a top producer in the Middle East, Iran is further increasing its dominant position in local markets and abroad. The performance in North America and Europe over the 2010–12 period contrasted strikingly with that of China, as the global financial crisis evolved into a sovereign debt crisis and recession for many economies in this region. Cement consumption levels for this region fell by 1.9% in 2010 to 445 Mt, recovered by 4.9% in 2011, then dipped again by 1.1% in 2012.
The performance in the rest of the world, which includes many emerging economies in Asia, Africa and Latin America and represented some 1020 Mt of cement demand in 2010, was positive and more than offset the declines in North America and Europe. Annual consumption growth was recorded at 7.4% in 2010, moderating to 5.1% and 4.3% in 2011 and 2012, respectively. As of year-end 2012, the global cement industry consisted of 5673 cement production facilities, including both integrated and grinding plants, of which 3900 were located in China and 1773 in the rest of the world. Total cement capacity worldwide was recorded at 5245 Mt in 2012, with 2950 Mt located in China and 2295 Mt in the rest of the world. China "For the past 18 years, China consistently has produced more cement than any other country in the world. [...] (However,) China's cement export peaked in 1994 with 11 million tonnes shipped out and has been in steady decline ever since. Only 5.18 million tonnes were exported out of China in 2002. Offered at $34 a ton, Chinese cement is pricing itself out of the market as Thailand is asking as little as $20 for the same quality." In 2006, it was estimated that China manufactured 1.235 billion tonnes of cement, which was 44% of the world total cement production. "Demand for cement in China is expected to advance 5.4% annually and exceed 1 billion tonnes in 2008, driven by slowing but healthy growth in construction expenditures. Cement consumed in China will amount to 44% of global demand, and China will remain the world's largest national consumer of cement by a large margin." In 2010, 3.3 billion tonnes of cement was consumed globally. Of this, China accounted for 1.8 billion tonnes. Environmental impacts Cement manufacture causes environmental impacts at all stages of the process. These include emissions of airborne pollution in the form of dust and gases, noise and vibration when operating machinery and during blasting in quarries, and damage to the countryside from quarrying. Equipment to reduce dust emissions during quarrying and manufacture of cement is widely used, and equipment to trap and separate exhaust gases is coming into increased use. Environmental protection also includes the re-integration of quarries into the countryside after they have been closed down by returning them to nature or re-cultivating them. CO2 emissions Carbon concentration in cement spans from ≈5% in cement structures to ≈8% in the case of roads in cement. Cement manufacturing releases CO2 into the atmosphere both directly, when calcium carbonate is heated to produce lime and carbon dioxide, and indirectly, through the use of energy whose production involves the emission of CO2. The cement industry produces about 10% of global human-made CO2 emissions, of which 60% is from the chemical process, and 40% from burning fuel. A Chatham House study from 2018 estimates that the 4 billion tonnes of cement produced annually account for 8% of worldwide CO2 emissions. Nearly 900 kg of CO2 are emitted for every 1000 kg of Portland cement produced. In the European Union, the specific energy consumption for the production of cement clinker has been reduced by approximately 30% since the 1970s. This reduction in primary energy requirements is equivalent to approximately 11 million tonnes of coal per year with corresponding benefits in reduction of CO2 emissions. This accounts for approximately 5% of anthropogenic CO2.
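To put the split between process and fuel emissions into perspective, the decarbonation step and its approximate mass balance can be sketched as follows; the clinker CaO content of about 65% used here is a typical value assumed purely for illustration:
\[
\mathrm{CaCO_3} \rightarrow \mathrm{CaO} + \mathrm{CO_2},
\qquad
\frac{M_{\mathrm{CO_2}}}{M_{\mathrm{CaCO_3}}} = \frac{44}{100} = 0.44,
\qquad
0.65 \times \frac{44}{56} \approx 0.51\ \text{kg CO}_2\ \text{per kg clinker}
\]
In other words, roughly half a tonne of CO2 per tonne of clinker comes from the chemistry alone, which is consistent with the roughly 60% "chemical process" share of the nearly 900 kg of CO2 per tonne of cement quoted above.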
The majority of carbon dioxide emissions in the manufacture of Portland cement (approximately 60%) are produced from the chemical decomposition of limestone to lime, an ingredient in Portland cement clinker. These emissions may be reduced by lowering the clinker content of cement. They can also be reduced by alternative fabrication methods such as intergrinding cement with sand, or with slag or other pozzolan-type minerals, to a very fine powder. To reduce the transport of heavier raw materials and to minimize the associated costs, it is more economical to build cement plants closer to the limestone quarries than to the consumer centers. In certain applications, lime mortar reabsorbs some of the CO2 that was released in its manufacture, and it has a lower energy requirement in production than mainstream cement. Newly developed cement types from Novacem and Eco-cement can absorb carbon dioxide from ambient air during hardening. Carbon capture and storage is about to be trialed, but its financial viability is uncertain. Heavy metal emissions in the air In some circumstances, mainly depending on the origin and the composition of the raw materials used, the high-temperature calcination process of limestone and
normal technique was to use brick facing material as the formwork for an infill of mortar mixed with an aggregate of broken pieces of stone, brick, potsherds, recycled chunks of concrete, or other building rubble. Middle Ages Any preservation of this knowledge in literature from the Middle Ages is unknown, but medieval masons and some military engineers actively used hydraulic cement in structures such as canals, fortresses, harbors, and shipbuilding facilities. A mixture of lime mortar and aggregate with brick or stone facing material was used in the Eastern Roman Empire as well as in the West into the Gothic period. The German Rhineland continued to use hydraulic mortar throughout the Middle Ages, having local pozzolana deposits called trass. 16th century Tabby is a building material made from oyster shell lime, sand, and whole oyster shells to form a concrete. The Spanish introduced it to the Americas in the sixteenth century. 18th century The technical knowledge for making hydraulic cement was formalized by French and British engineers in the 18th century. John Smeaton made an important contribution to the development of cements while planning the construction of the third Eddystone Lighthouse (1755–59) in the English Channel, now known as Smeaton's Tower. He needed a hydraulic mortar that would set and develop some strength in the twelve-hour period between successive high tides. He performed experiments with combinations of different limestones and additives, including trass and pozzolanas, did exhaustive market research on the available hydraulic limes, visiting their production sites, and noted that the "hydraulicity" of the lime was directly related to the clay content of the limestone used to make it. Smeaton was a civil engineer by profession, and took the idea no further. On the South Atlantic seaboard of the United States, tabby relying on the oyster-shell middens of earlier Native American populations was used in house construction from the 1730s to the 1860s. In Britain particularly, good quality building stone became ever more expensive during a period of rapid growth, and it became a common practice to construct prestige buildings from the new industrial bricks and to finish them with a stucco to imitate stone. Hydraulic limes were favored for this, but the need for a fast set time encouraged the development of new cements. Most famous was Parker's "Roman cement". This was developed by James Parker in the 1780s, and finally patented in 1796. It was, in fact, nothing like the material used by the Romans, but was a "natural cement" made by burning septaria – nodules that are found in certain clay deposits, and that contain both clay minerals and calcium carbonate. The burnt nodules were ground to a fine powder. This product, made into a mortar with sand, set in 5–15 minutes. The success of "Roman cement" led other manufacturers to develop rival products by burning artificial hydraulic lime cements of clay and chalk. Roman cement quickly became popular, but was largely replaced by Portland cement in the 1850s. 19th century Apparently unaware of Smeaton's work, the Frenchman Louis Vicat identified the same principle in the first decade of the nineteenth century. Vicat went on to devise a method of combining chalk and clay into an intimate mixture, and, burning this, produced an "artificial cement" in 1817 considered the "principal forerunner" of Portland cement; "...Edgar Dobbs of Southwark patented a cement of this kind in 1811."
In Russia, Egor Cheliev created a new binder by mixing lime and clay. His results were published in 1822 in his book A Treatise on the Art to Prepare a Good Mortar, published in St. Petersburg. A few years later, in 1825, he published another book, which described various methods of making cement and concrete, and the benefits of cement in the construction of buildings and embankments. Portland cement, the most common type of cement in general use around the world as a basic ingredient of concrete, mortar, stucco, and non-speciality grout, was developed in England in the mid-19th century, and usually originates from limestone. James Frost produced what he called "British cement" in a similar manner around the same time, but did not obtain a patent until 1822. In 1824, Joseph Aspdin patented a similar material, which he called Portland cement, because the render made from it was similar in color to the prestigious Portland stone quarried on the Isle of Portland, Dorset, England. However, Aspdin's cement was nothing like modern Portland cement, but was a first step in its development, called a proto-Portland cement. Joseph Aspdin's son William Aspdin had left his father's company and, in his cement manufacturing, apparently accidentally produced calcium silicates in the 1840s, a middle step in the development of Portland cement. William Aspdin's innovation was counterintuitive for manufacturers of "artificial cements", because they required more lime in the mix (a problem for his father), a much higher kiln temperature (and therefore more fuel), and the resulting clinker was very hard and rapidly wore down the millstones, which were the only available grinding technology of the time. Manufacturing costs were therefore considerably higher, but the product set reasonably slowly and developed strength quickly, thus opening up a market for use in concrete. The use of concrete in construction grew rapidly from 1850 onward, and was soon the dominant use for cements. Thus Portland cement began its predominant role. Isaac Charles Johnson further refined the production of meso-Portland cement (middle stage of development) and claimed he was the real father of Portland cement. Setting time and "early strength" are important characteristics of cements. Hydraulic limes, "natural" cements, and "artificial" cements all rely on their belite (2 CaO · SiO2, abbreviated as C2S) content for strength development. Belite develops strength slowly. Because they were burned at temperatures below , they contained no alite (3 CaO · SiO2, abbreviated as C3S), which is responsible for early strength in modern cements. The first cement to consistently contain alite was made by William Aspdin in the early 1840s: this was what we call today "modern" Portland cement. Because of the air of mystery with which William Aspdin surrounded his product, others (e.g., Vicat and Johnson) have claimed precedence in this invention, but recent analysis of both his concrete and raw cement has shown that William Aspdin's product made at Northfleet, Kent was a true alite-based cement. However, Aspdin's methods were "rule-of-thumb": Vicat is responsible for establishing the chemical basis of these cements, and Johnson established the importance of sintering the mix in the kiln. In the US, the first large-scale use of cement was Rosendale cement, a natural cement mined from a massive deposit of dolomite discovered in the early 19th century near Rosendale, New York.
Rosendale cement was extremely popular for the foundation of buildings (e.g., Statue of Liberty, Capitol Building, Brooklyn Bridge) and lining water pipes. Sorel cement, or magnesia-based cement, was patented in 1867 by the Frenchman Stanislas Sorel. It was stronger than Portland cement but its poor water resistance (leaching) and corrosive properties (pitting corrosion due to the presence of leachable chloride anions and the low pH (8.5–9.5) of its pore water) limited its use as reinforced concrete for building construction. The next development in the manufacture of Portland cement was the introduction of the rotary kiln. It produced a clinker mixture that was both stronger, due to increased alite (C3S) formation at the higher temperature it achieved (1450 °C), and more homogeneous. Because raw material is constantly fed into a rotary kiln, it allowed a continuous manufacturing process to replace lower-capacity batch production processes. 20th century Calcium aluminate cements were patented in 1908 in France by Jules Bied for better resistance to sulfates. Also in 1908, Thomas Edison experimented with pre-cast concrete in houses in Union, N.J. In the US, after World War One, the long curing time of at least a month for Rosendale cement made it unpopular for constructing highways and bridges, and many states and construction firms turned to Portland cement. Because of the switch to Portland cement, by the end of the 1920s only one of the 15 Rosendale cement companies had survived. But in the early 1930s, builders discovered that, while Portland cement set faster, it was not as durable, especially for highways, to the point that some states stopped building highways and roads with cement. Bertrain H. Wait, an engineer whose company had helped construct New York City's Catskill Aqueduct, was impressed with the durability of Rosendale cement, and came up with a blend of both Rosendale and Portland cements that had the good attributes of both. It was highly durable and had a much faster setting time. Wait convinced the New York Commissioner of Highways to construct an experimental section of highway near New Paltz, New York, using one sack of Rosendale to six sacks of Portland cement. It was a success, and for decades the Rosendale-Portland cement blend was used in highway and bridge construction. Cementitious materials have been used as a nuclear waste immobilizing matrix for more than a half-century. Technologies of waste cementation have been developed and deployed at industrial scale in many countries. Cementitious wasteforms require a careful selection and design process adapted to each specific type of waste to satisfy the strict waste acceptance criteria for long-term storage and disposal. Modern cements Modern hydraulic development began with the start of the Industrial Revolution (around 1800), driven by three main needs: hydraulic cement render (stucco) for finishing brick buildings in wet climates; hydraulic mortars for masonry construction of harbor works, etc., in contact with sea water; and the development of strong concretes. Modern cements are often Portland cement or Portland cement blends, but industry also uses other cements. Portland cement Portland cement, a form of hydraulic cement, is by far the most common type of cement in general use around the world.
This cement is made by heating limestone (calcium carbonate) with other materials (such as clay) to in a kiln, in a process known as calcination that liberates a molecule of carbon dioxide from the calcium carbonate to form calcium oxide, or quicklime, which then chemically combines with the other materials in the mix to form calcium silicates and other cementitious compounds. The resulting hard substance, called 'clinker', is then ground with a small amount of gypsum into a powder to make ordinary Portland cement, the most commonly used type of cement (often referred to as OPC). Portland cement is a basic ingredient of concrete, mortar, and most non-specialty grout. The most common use for Portland cement is to make concrete. Concrete is a composite material made of aggregate (gravel and sand), cement, and water. As a construction material, concrete can be cast in almost any shape, and once it hardens, can be a structural (load bearing) element. Portland cement may be grey or white. Portland cement blend Portland cement blends are often available as inter-ground mixtures from cement producers, but similar formulations are often also mixed from the ground components at the concrete mixing plant. Portland blast-furnace slag cement, or blast furnace cement (ASTM C595 and EN 197-1 nomenclature respectively), contains up to 95% ground granulated blast furnace slag, with the rest Portland clinker and a little gypsum. All compositions produce high ultimate strength, but as slag content is increased, early strength is reduced, while sulfate resistance increases and heat evolution diminishes. Used as an economic alternative to Portland sulfate-resisting and low-heat cements. Portland-fly ash cement contains up to 40% fly ash under ASTM standards (ASTM C595), or 35% under EN standards (EN 197–1). The fly ash is pozzolanic, so that ultimate strength is maintained. Because fly ash addition allows a lower concrete water content, early strength can also be maintained. Where good quality cheap fly ash is available, this can be an economic alternative to ordinary Portland cement. Portland pozzolan cement includes fly ash cement, since fly ash is a pozzolan, but also includes cements made from other natural or artificial pozzolans. In countries where volcanic ashes are available (e.g., Italy, Chile, Mexico, the Philippines), these cements are often the most common form in use. The maximum replacement ratios are generally defined as for Portland-fly ash cement. Portland silica fume cement. Addition of silica fume can yield exceptionally high strengths, and cements containing 5–20% silica fume are occasionally produced, with 10% being the maximum allowed addition under EN 197–1. However, silica fume is more usually added to Portland cement at the concrete mixer. Masonry cements are used for preparing bricklaying mortars and stuccos, and must not be used in concrete. They are usually complex proprietary formulations containing Portland clinker and a number of other ingredients that may include limestone, hydrated lime, air entrainers, retarders, waterproofers, and coloring agents. They are formulated to yield workable mortars that allow rapid and consistent masonry work. Subtle variations of masonry cement in North America are plastic cements and stucco cements. These are designed to produce a controlled bond with masonry blocks. 
Expansive cements contain, in addition to Portland clinker, expansive clinkers (usually sulfoaluminate clinkers), and are designed to offset the effects of drying shrinkage normally encountered in hydraulic cements. This cement can make concrete for floor slabs (up to 60 m square) without contraction joints. White blended cements may be made using white clinker (containing little or no iron) and white supplementary materials such as high-purity metakaolin. Colored cements serve decorative purposes. Some standards allow the addition of pigments to produce colored Portland cement. Other standards (e.g., ASTM) do not allow pigments in Portland cement, and colored cements are sold as blended hydraulic cements. Very finely ground cements are cement mixed with sand or with slag or other pozzolan type minerals that are extremely finely ground together. Such cements can have the same physical characteristics as normal cement but with 50% less cement, particularly due to their increased surface area for the chemical reaction. Even with intensive grinding they can use up to 50% less energy (and thus less carbon emissions) to fabricate than ordinary Portland cements. Other cements Pozzolan-lime cements are mixtures of ground pozzolan and lime. These are the cements the Romans used, and are present in surviving Roman structures like the Pantheon in Rome. They develop strength slowly, but their ultimate strength can be very high. The hydration products that produce strength are essentially the same as those in Portland cement. Slag-lime cements—ground granulated blast-furnace slag is not hydraulic on its own,
Hod Eller and left-hander Harry "Slim" Sallee. The Reds finished ahead of John McGraw's New York Giants, and then won the world championship in eight games over the Chicago White Sox. By 1920, the "Black Sox" scandal had brought a taint to the Reds' first championship. After 1926 and well into the 1930s, the Reds were second-division dwellers. Eppa Rixey, Dolf Luque and Pete Donohue were pitching stars, but the offense never lived up to the pitching. By 1931, the team was bankrupt, the Great Depression was in full swing, and Redland Field was in a state of disrepair. Championship baseball and revival (1933–1940) Powel Crosley, Jr., an electronics magnate who, with his brother Lewis M. Crosley, produced radios, refrigerators and other household items, bought the Reds out of bankruptcy in 1933, and hired Larry MacPhail to be the general manager. Crosley had started WLW radio, the Reds flagship radio broadcaster, and the Crosley Broadcasting Corporation in Cincinnati, where he was also a prominent civic leader. MacPhail began to develop the Reds' minor league system and expanded the Reds' fan base. Throughout the rest of the decade, the Reds became a team of "firsts". The now-renamed Crosley Field became the host of the first night game in 1935, which was also the first baseball fireworks night (the fireworks at the game were shot by Joe Rozzi of Rozzi's Famous Fireworks). Johnny Vander Meer became the only pitcher in major league history to throw back-to-back no-hitters in 1938. Thanks to Vander Meer, Paul Derringer and second baseman/third baseman-turned-pitcher Bucky Walters, the Reds had a solid pitching staff. The offense came around in the late 1930s. By 1938, the Reds, now led by manager Bill McKechnie, were out of the second division, finishing fourth. Ernie Lombardi was named the National League's Most Valuable Player in 1938. By 1939, the Reds were National League champions, only to be swept in the World Series by the New York Yankees. In 1940, the Reds repeated as NL Champions, and for the first time in 21 years, they captured a world championship, beating the Detroit Tigers 4 games to 3. Frank McCormick was the 1940 NL MVP; other position players included Harry Craft, Lonny Frey, Ival Goodman, Lew Riggs, and Bill Werber. 1941–1969 World War II and age finally caught up with the Reds, as the team finished mostly in the second division throughout the 1940s and early 1950s. In 1944, Joe Nuxhall (who was later to become part of the radio broadcasting team), at age 15, pitched for the Reds on loan from Wilson Junior High School in Hamilton, Ohio. He became the youngest player ever to appear in a major league game, a record that still stands today. Ewell "The Whip" Blackwell was the main pitching stalwart before arm problems cut short his career. Ted Kluszewski was the NL home run leader in 1954. The rest of the offense was a collection of over-the-hill players and not-ready-for-prime-time youngsters. In April 1953, the Reds announced a preference to be called the "Redlegs", saying that the name of the club had been "Red Stockings" and then "Redlegs". A newspaper speculated that it was due to the developing political connotation of the word "red" to mean Communism. From 1956 to 1960, the club's logo was altered to remove the term "REDS" from the inside of the "wishbone C" symbol. The word "REDS" reappeared on the 1961 uniforms, but the point of the "C" was removed. The traditional home uniform logo was reinstated in 1967.
In 1956, the Redlegs, led by National League Rookie of the Year Frank Robinson, hit 221 HR to tie the NL record. By 1961, Robinson was joined by Vada Pinson, Wally Post, Gordy Coleman, and Gene Freese. Pitchers Joey Jay, Jim O'Toole and Bob Purkey led the staff. The Reds captured the 1961 National League pennant, holding off the Los Angeles Dodgers and San Francisco Giants, only to be defeated by the perennially powerful New York Yankees in the World Series. The Reds had winning teams during the rest of the 1960s, but did not produce any championships. They won 98 games in 1962, paced by Purkey's 23, but finished third. In 1964, they lost the pennant by one game to the St. Louis Cardinals after having taken first place when the Philadelphia Phillies collapsed in September. Their beloved manager Fred Hutchinson died of cancer just weeks after the end of the 1964 season. The failure of the Reds to win the 1964 pennant led to owner Bill DeWitt selling off key components of the team in anticipation of relocating the franchise. In response to DeWitt's threatened move, women of Cincinnati banded together to form the Rosie Reds to urge DeWitt to keep the franchise in Cincinnati. The Rosie Reds are still in existence, and are currently the oldest fan club in Major League Baseball. After the 1965 season, DeWitt executed what is remembered as the most lopsided trade in baseball history, sending former MVP Frank Robinson to the Baltimore Orioles for pitchers Milt Pappas and Jack Baldschun, and outfielder Dick Simpson. Robinson went on to win the MVP and triple crown in the American League for 1966, and led Baltimore to its first-ever World Series title in a sweep of the Los Angeles Dodgers. The Reds did not recover from this trade until the rise of the "Big Red Machine" in the 1970s. Starting in the early 1960s, the Reds' farm system began producing a series of stars, including Jim Maloney (the Reds' pitching ace of the 1960s), Pete Rose, Tony Pérez, Johnny Bench, Lee May, Tommy Helms, Bernie Carbo, Hal McRae, Dave Concepción, and Gary Nolan. The tipping point came in 1967, with the appointment of Bob Howsam as general manager. That same year, the Reds avoided a move to San Diego when the city of Cincinnati and Hamilton County agreed to build a state-of-the-art, downtown stadium on the edge of the Ohio River. The Reds entered into a 30-year lease in exchange for the stadium commitment keeping the franchise in Cincinnati. In a series of strategic moves, Howsam brought in key personnel to complement the homegrown talent. The Reds' final game at Crosley Field, where they had played since 1912, was played on June 24, 1970, with a 5–4 victory over the San Francisco Giants. Under Howsam's administration starting in the late 1960s, all players coming to the Reds were required to shave and cut their hair for the next three decades in order to present the team as wholesome in an era of turmoil. The rule was controversial, but persisted well into the ownership of Marge Schott. On at least one occasion, in the early 1980s, enforcement of this rule lost the Reds the services of star reliever and Ohio native Rollie Fingers, who would not shave his trademark handlebar mustache in order to join the team. The rule was not officially rescinded until 1999, when the Reds traded for slugger Greg Vaughn, who had a goatee. The New York Yankees continue to have a similar rule today, although Yankees players are permitted to have mustaches. 
Much like when players leave the Yankees today, players who left the Reds took advantage with their new teams; Pete Rose, for instance, grew his hair out much longer than would be allowed by the Reds once he signed with the Philadelphia Phillies in 1979. The Reds' rules also included conservative uniforms. In Major League Baseball, a club generally provides most of the equipment and clothing needed for play. However, players are required to supply their gloves and shoes themselves. Many players enter into sponsorship arrangements with shoe manufacturers, but until the mid-1980s, the Reds had a strict rule requiring players to wear only plain black shoes with no prominent logo. Reds players decried what they considered to be the boring color choice, as well as the denial of the opportunity to earn more money through shoe contracts. In 1985, a compromise was struck in which players could paint red marks on their black shoes, and they were allowed to wear all-red shoes the following year. The Big Red Machine (1970–1976) In , little-known George "Sparky" Anderson was hired as manager of the Reds, and the team embarked upon a decade of excellence, with a lineup that came to be known as "the Big Red Machine". Playing at Crosley Field until June 30, 1970, when they moved into Riverfront Stadium, a new 52,000-seat multi-purpose venue on the shores of the Ohio River, the Reds began the 1970s with a bang by winning 70 of their first 100 games. Johnny Bench, Tony Pérez, Pete Rose, Lee May, and Bobby Tolan were the early offensive leaders of this era. Gary Nolan, Jim Merritt, Wayne Simpson, and Jim McGlothlin led a pitching staff which also contained veterans Tony Cloninger and Clay Carroll, as well as youngsters Pedro Borbón and Don Gullett. The Reds breezed through the 1970 season, winning the NL West and capturing the NL pennant by sweeping the Pittsburgh Pirates in three games. By the time the club got to the World Series, however, the pitching staff had run out of gas, and the veteran Baltimore Orioles, led by Hall of Fame third baseman and World Series MVP Brooks Robinson, beat the Reds in five games. After the disastrous season (the only year in the decade in which the team finished with a losing record), the Reds reloaded by trading veterans Jimmy Stewart, May and Tommy Helms to the Houston Astros for Joe Morgan, César Gerónimo, Jack Billingham, Ed Armbrister, and Denis Menke. Meanwhile, Dave Concepción blossomed at shortstop. 1971 was also the year a key component of future world championships was acquired, when George Foster was traded to the Reds from the San Francisco Giants in exchange for shortstop Frank Duffy. The Reds won the NL West in baseball's first-ever strike-shortened season, and defeated the Pittsburgh Pirates in a five-game playoff series. They then faced the Oakland Athletics in the World Series, in which six of the seven games were decided by one run. With powerful slugger Reggie Jackson sidelined by an injury incurred during Oakland's playoff series, Ohio native Gene Tenace got a chance to play in the series, delivering four home runs that tied the World Series record for homers, propelling Oakland to a dramatic seven-game series win. This was one of the few World Series in which no starting pitcher for either side pitched a complete game. The Reds won a third NL West crown in after a dramatic second-half comeback that saw them make up games on the Los Angeles Dodgers after the All-Star break. However, they lost the NL pennant to the New York Mets in five games in the NLCS.
In game 1, Tom Seaver faced Jack Billingham in a classic pitching duel, with all three runs of the 2–1 margin being scored on home runs. John Milner provided New York's run off Billingham, while Pete Rose tied the game in the seventh inning off Seaver, setting the stage for a dramatic game-ending home run by Johnny Bench in the bottom of the ninth. The New York series provided plenty of controversy surrounding the riotous behavior of Shea Stadium fans towards Pete Rose when he and Bud Harrelson scuffled after a hard slide by Rose into Harrelson at second base during the fifth inning of game 3. A full bench-clearing fight resulted after Harrelson responded to Rose's aggressive move to prevent him from completing a double play by calling him a name. This also led to two more incidents in which play was stopped. The Reds trailed 9–3, and New York's manager Yogi Berra and legendary outfielder Willie Mays, at the request of National League president Warren Giles, appealed to fans in left field to restrain themselves. The next day the series was extended to a fifth game when Rose homered in the 12th inning to tie the series at two games each. The Reds won 98 games in , but finished second to the 102-win Los Angeles Dodgers. The 1974 season started off with much excitement, as the Atlanta Braves were in town to open the season with the Reds. Hank Aaron entered opening day with 713 home runs, one shy of tying Babe Ruth's record of 714. The first pitch Aaron swung at in the 1974 season was the record-tying home run off Jack Billingham. The next day, the Braves benched Aaron, hoping to save him for his record-breaking home run on their season-opening homestand. Then-commissioner Bowie Kuhn ordered Braves management to play Aaron the next day, when he narrowly missed a historic home run in the fifth inning. Aaron went on to set the record in Atlanta two nights later. The 1974 season also saw the debut of Hall of Fame radio announcer Marty Brennaman, after Al Michaels left the Reds to broadcast for the San Francisco Giants. By 1975, the Big Red Machine lineup had solidified with the "Great Eight" starting team of Johnny Bench (catcher), Tony Pérez (first base), Joe Morgan (second base), Dave Concepción (shortstop), Pete Rose (third base), Ken Griffey (right field), César Gerónimo (center field), and George Foster (left field). The starting pitchers included Don Gullett, Fred Norman, Gary Nolan, Jack Billingham, Pat Darcy, and Clay Kirby. The bullpen featured Rawly Eastwick and Will McEnaney, who combined for 37 saves, and veterans Pedro Borbón and Clay Carroll. On Opening Day, Rose still played in left field and Foster was not a starter, while John Vukovich, an off-season acquisition, was the starting third baseman. While Vukovich was a superb fielder, he was a weak hitter. In May, with the team off to a slow start and trailing the Dodgers, Sparky Anderson made a bold move by moving Rose to third base, a position where he had very little experience, and inserting Foster in left field. This was the jolt that the Reds needed to propel them into first place, with Rose proving to be reliable on defense, while adding Foster to the outfield gave the offense some added punch. During the season, the Reds compiled two notable streaks: winning 41 out of 50 games in one stretch, and going a month without committing an error on defense. In the 1975 season, Cincinnati clinched the NL West with 108 victories before sweeping the Pittsburgh Pirates in three games to win the NL pennant.
They went on to face the Boston Red Sox in the World Series, splitting the first four games and taking game 5. After a three-day rain delay, the two teams met in game 6, considered by many to be the best World Series game ever. The Reds were ahead 6–3 with 5 outs left when the Red Sox tied the game on former Red Bernie Carbo's three-run home run, his second pinch-hit three-run homer in the series. After a few close calls either way, Carlton Fisk hit a dramatic 12th-inning home run off the foul pole in left field to give the Red Sox a 7–6 win and force a deciding game 7. Cincinnati prevailed the next day when Morgan's RBI single won game 7 and gave the Reds their first championship in 35 years. The Reds have not lost a World Series game since Carlton Fisk's home run, a span of nine straight wins. The following season saw a return of the same starting eight in the field. The starting rotation was again led by Nolan, Gullett, Billingham, and Norman, while the addition of rookies Pat Zachry and Santo Alcalá rounded out an underrated staff in which four of the six had ERAs below 3.10. Eastwick, Borbón and McEnaney shared closer duties, recording 26, 8 and 7 saves, respectively. The Reds won the NL West by ten games and went undefeated in the postseason, sweeping the Philadelphia Phillies (winning game 3 in their final at-bat) to return to the World Series, where they beat the Yankees at the newly renovated Yankee Stadium in the first Series held there since 1964. This was only the second-ever sweep of the Yankees in the World Series, and the Reds became the first NL team since the 1921–22 New York Giants to win consecutive World Series championships. To date, the 1975 and 1976 Reds were the last NL team to repeat as champions. Beginning with the 1970 National League pennant, the Reds beat one of the two Pennsylvania-based clubs, the Philadelphia Phillies or the Pittsburgh Pirates, to win each of their pennants (the Pirates in 1970, 1972, 1975, and 1990, and the Phillies in 1976), making the Big Red Machine part of the rivalry between the two Pennsylvania teams. In 1979, Pete Rose added further fuel to that rivalry when he signed with the Phillies and helped them win their first World Series in . The Machine dismantled (1977–1989) The late 1970s brought turmoil and change to the Reds. Popular Tony Pérez was sent to the Montreal
Expos after the 1976 season, breaking up the Big Red Machine's starting lineup. Manager Sparky Anderson and general manager Bob Howsam later considered this trade to be the biggest mistake of their careers. Starting pitcher Don Gullett left via free agency and signed with the New York Yankees. In an effort to fill that gap, a trade with the Oakland Athletics for starting ace Vida Blue was arranged during the 1976–77 offseason. However, Bowie Kuhn, then-commissioner of baseball, vetoed the trade in order to maintain competitive balance in baseball; some have suggested that the actual reason had more to do with Kuhn's continued feud with Athletics owner Charlie Finley. On June 15, 1977, the Reds acquired pitcher Tom Seaver from the New York Mets for Pat Zachry, Doug Flynn, Steve Henderson, and Dan Norman. In other deals that proved to be less successful, the Reds traded Gary Nolan to the California Angels for Craig Hendrickson; Rawly Eastwick to the St. Louis Cardinals for Doug Capilla; and Mike Caldwell to the Milwaukee Brewers for Rick O'Keeffe and Garry Pyka, as well as Rick Auerbach from Texas. The end of the Big Red Machine era was heralded by the replacement of general manager Bob Howsam with Dick Wagner. In his last season as a Red, Rose gave baseball a thrill as he challenged Joe DiMaggio's 56-game hitting streak, tying for the second-longest streak ever at 44 games. The streak came to an end in Atlanta when Rose struck out in his fifth at-bat of the game against Gene Garber. Rose also earned his 3,000th hit that season, on his way to becoming baseball's all-time hits leader when he rejoined the Reds in the mid-1980s. The year also witnessed the only no-hitter of Hall of Fame pitcher Tom Seaver's career, coming against the St. Louis Cardinals on June 16, 1978. After the 1978 season and two straight second-place finishes, Wagner fired manager Anderson in a move that proved to be unpopular. Pete Rose, who had played almost every position for the team except pitcher, shortstop and catcher since 1963, signed with Philadelphia as a free agent. By , the starters were Bench (c), Dan Driessen (1b), Morgan (2b), Concepción (ss), and Ray Knight (3b), with Griffey, Foster and Gerónimo again in the outfield. The pitching staff had experienced a complete turnover since 1976, except for Fred Norman. In addition to ace starter Tom Seaver, the remaining starters were Mike LaCoss, Bill Bonham and Paul Moskau. In the bullpen, only Borbón remained. Dave Tomlin and Mario Soto worked middle relief, with Tom Hume and Doug Bair closing. The Reds won the 1979 NL West behind the pitching of Seaver, but were dispatched in the NL playoffs by the Pittsburgh Pirates. Game 2 featured a controversial play in which a ball hit by Pittsburgh's Phil Garner was caught by Reds outfielder Dave Collins but was ruled a trap, setting the Pirates up to take a 2–1 lead. The Pirates swept the series 3 games to 0 and went on to win the World Series against the Baltimore Orioles.
The 1981 team fielded a strong lineup, with only Concepción, Foster and Griffey retaining their spots from the 1975–76 heyday. After Johnny Bench was able to play only a few games as catcher each year after 1980 due to ongoing injuries, Joe Nolan took over as starting catcher. Driessen and Bench shared first base, and Knight starred at third. Morgan and Gerónimo had been replaced at second base and center field by Ron Oester and Dave Collins, respectively. Mario Soto posted a banner year starting on the mound, only surpassed by the outstanding performance of Seaver's Cy Young runner-up season. LaCoss, Bruce Berenyi and Frank Pastore rounded out the starting rotation. Hume again led the bullpen as closer, joined by Bair and Joe Price. In , the Reds had the best overall record in baseball, but finished second in the division in both of the half-seasons that resulted from a mid-season players' strike, and missed the playoffs. To commemorate this, a team photo was taken, accompanied by a banner that read "Baseball's Best Record 1981". By , the Reds were a shell of the original Red Machine, having lost 101 games that year. Johnny Bench, after an unsuccessful transition to third base, retired a year later. After the heartbreak of 1981, general manager Dick Wagner pursued the strategy of ridding the team of veterans, including third baseman Knight and the entire starting outfield of Griffey, Foster and Collins. Bench, after being able to catch only seven games in 1981, was moved from platooning at first base to be the starting third baseman; Alex Treviño became the regular starting catcher. The outfield was staffed with Paul Householder, César Cedeño and future Colorado Rockies and Pittsburgh Pirates manager Clint Hurdle on opening day. Hurdle was an immediate bust, and rookie Eddie Milner took his place in the starting outfield early in the year. The highly touted Householder struggled throughout the year despite extensive playing time. Cedeño, while providing steady veteran play, was a disappointment, unable to recapture his glory days with the Houston Astros. The starting rotation featured the emergence of a dominant Mario Soto and strong years by Pastore and Bruce Berenyi, but Seaver was injured all year, and their efforts were wasted without a strong offensive lineup. Tom Hume still led the bullpen along with Joe Price, but the colorful Brad "The Animal" Lesley was unable to consistently excel, and former All-Star Jim Kern was also a disappointment. Kern was also publicly upset over having to shave off his prominent beard to join the Reds, and helped force the issue of getting traded during mid-season by growing it back. The season also saw the midseason firing of manager John McNamara, who was replaced as skipper by Russ Nixon. The Reds fell to the bottom of the Western Division for the next few years. After the 1982 season, Seaver was traded back to the Mets. The following season found Dann Bilardello behind the plate, Bench returning to part-time duty at first base, and rookies Nick Esasky taking over at third base and Gary Redus taking over from Cedeño. Tom Hume's effectiveness as a closer had diminished, and no other consistent relievers emerged. Dave Concepción was the sole remaining starter from the Big Red Machine era. Wagner's tenure ended in 1983, when Howsam, the architect of the Big Red Machine, was brought back. The popular Howsam began his second term as Reds' general manager by signing Cincinnati native Dave Parker as a free agent from Pittsburgh.
In , the Reds began to move up, depending on trades and some minor leaguers. That season, Dave Parker, Dave Concepción and Tony Pérez were in Cincinnati uniforms. In August 1984, Pete Rose was reacquired and hired to be the Reds' player-manager. After raising the franchise from the grave, Howsam gave way to the administration of Bill Bergesch, who attempted to build the team around a core of highly regarded young players in addition to veterans like Parker. However, he was unable to capitalize on an excess of young and highly touted position players, including Kurt Stillwell, Tracy Jones, and Kal Daniels, by trading them for pitching. Despite the emergence of Tom Browning, who won 20 games as a rookie in , the rotation was devastated by the early demise of Mario Soto's career to arm injury. Under Bergesch, the Reds finished second four times from 1985 to . Among the highlights, Rose became the all-time hits leader, Tom Browning threw a perfect game, Eric Davis became the first player in baseball history to hit at least 35 home runs and steal 50 bases, and Chris Sabo was the 1988 National League Rookie of the Year. The Reds also had a bullpen star in John Franco, who was with the team from 1984 to 1989. Rose once had Concepción pitch late in a game at Dodger Stadium. In , following the release of the Dowd Report, which accused Rose of betting on baseball games, Rose was banned from baseball by Commissioner Bart Giamatti, who declared Rose guilty of "conduct detrimental to baseball". Controversy also swirled around Reds owner Marge Schott, who was accused several times of making ethnic and racial slurs. World championship and the end of an era (1990–2002) In , general manager Bergesch was replaced by Murray Cook, who initiated a series of deals that would finally bring the Reds back to the championship, starting with the acquisitions of Danny Jackson and José Rijo. An aging Dave Parker was let go after a revival of his career in Cincinnati following the Pittsburgh drug trials. Barry Larkin emerged as the starting shortstop over Kurt Stillwell, who, along with reliever Ted Power, was traded for Jackson. In , Cook was succeeded by Bob Quinn, who put the final pieces of the championship puzzle together with the acquisitions of Hal Morris, Billy Hatcher and Randy Myers. In , the Reds, under new manager Lou Piniella, shocked baseball by leading the NL West from wire to wire, making them the only NL team to do so. Winning their first nine games, they started off 33–12 and maintained their lead throughout the year. Led by Chris Sabo, Barry Larkin, Eric Davis, Paul O'Neill, and Billy Hatcher in the field, and by José Rijo, Tom Browning and the "Nasty Boys" of Rob Dibble, Norm Charlton and Randy Myers on the mound, the Reds took out the Pirates in the NLCS. The Reds then swept the heavily favored Oakland Athletics in four straight games, extending their winning streak in World Series play to nine consecutive games. This Series, however, saw Eric Davis severely bruise a kidney diving for a fly ball in game 4, and his play was greatly limited the next year. In , Quinn was replaced in the front office by Jim Bowden. On the field, manager Lou Piniella wanted outfielder Paul O'Neill to be a power hitter to fill the void Eric Davis left when he was traded to the Los Angeles Dodgers in exchange for Tim Belcher. However, O'Neill hit only .246 with 14 homers. The Reds returned to winning after a losing season in , but 90 wins was only enough for second place behind the division-winning Atlanta Braves.
Before the season ended, Piniella got into an altercation with reliever Rob Dibble. In the offseason, Paul O'Neill was traded to the New York Yankees for outfielder Roberto Kelly, who was a disappointment for the Reds over the next couple of years, while O'Neill led a downtrodden Yankees franchise to a return to glory. Around this time, the Reds replaced their "Big Red Machine" era uniforms with a pinstriped uniform with no sleeves. For the 1993 season, Piniella was replaced by fan favorite Tony Pérez, but he lasted only 44 games at the helm before being replaced by Davey Johnson. With Johnson steering the team, the Reds made steady progress. In 1994, the Reds were placed in the newly created National League Central Division with the Chicago Cubs, St. Louis Cardinals, and fellow rivals Pittsburgh Pirates and Houston Astros. By the time the strike hit, the Reds were a half-game ahead of the Houston Astros for first place in the NL Central. In 1995, the Reds won the division thanks to MVP Barry Larkin. After defeating the NL West champion Dodgers in the first NLDS since 1981, they lost to the Atlanta Braves in the NLCS. During the 1996 season, team owner Marge Schott announced mid-season that Johnson would be gone by the end of the year, regardless of outcome, to be replaced by former Reds third baseman Ray Knight. Johnson and Schott had never gotten along, and she did not approve of Johnson living with his fiancée before they were married. In contrast, Knight and his wife, professional golfer Nancy Lopez, were friends of Schott. The team took a dive under Knight, who was unable to complete two full seasons as manager and was subject to complaints in the press about his strict managerial style. In 1999, the Reds won 96 games, led by manager Jack McKeon, but lost to the New York Mets in a one-game playoff. Earlier that year, Schott sold controlling interest in the Reds to Cincinnati businessman Carl Lindner. Despite an 85–77 finish in 2000, and being named 1999 NL manager of the year, McKeon was fired after the 2000 season. The Reds did not have another winning season until 2010. Contemporary era (2003–present) Riverfront Stadium, by then known as Cinergy Field, was demolished in 2002. Great American Ball Park opened in 2003, with high expectations for a team led by local favorites, including outfielder Ken Griffey, Jr., shortstop Barry Larkin and first baseman Sean Casey. Although attendance improved considerably with the new ballpark, the Reds continued to lose. Schott had not invested much in the farm system since the early 1990s, leaving the team relatively thin on talent. After years of promises that the club was rebuilding toward the opening of the new ballpark, general manager Jim Bowden and manager Bob Boone were fired on July 28. This broke up the father-son combo of manager Bob Boone and third baseman Aaron Boone, and the latter was soon traded to the New York Yankees. Tragedy struck in November when Dernell Stenson, a promising young outfielder, was shot and killed during a carjacking. Following the season, Dan O'Brien was hired as the Reds' 16th general manager. The 2004 and 2005 seasons continued the trend of big hitting, poor pitching and poor records. Griffey, Jr. joined the 500 home run club in 2004, but was again hampered by injuries. Adam Dunn emerged as a consistent home run hitter, including a home run against José Lima. He also broke the major league record for strikeouts in 2004.
Although a number of free agents were signed before 2005, the Reds were quickly in last place, and manager Dave Miley was forced out midway through the 2005 season and replaced by Jerry Narron. Like many other small-market clubs, the Reds dispatched some of their veteran players and began entrusting their future to a young nucleus that included Adam Dunn and Austin Kearns. 2004 saw the opening of the Cincinnati Reds Hall of Fame (HOF), which had been in existence in name only since the 1950s, with player plaques, photos and other memorabilia scattered throughout their front offices. Ownership and management desired a standalone facility where the public could walk through interactive displays, see locker room recreations, watch videos of classic Reds moments, and peruse historical items. The first floor houses a movie theater that resembles an older, ivy-covered brick wall ball yard. The hallways contain vintage photographs, and the rear of the building features a three-story wall containing a baseball for every hit Pete Rose had during his career. The third floor contains interactive exhibits including a pitcher's mound, radio booth and children's area where the fundamentals of baseball are taught through videos featuring former Reds players. Robert Castellini took over as controlling owner from Lindner in 2006. Castellini promptly fired general manager Dan O'Brien and hired Wayne Krivsky. The Reds made a run at the playoffs, but ultimately fell short. The 2007 season was again mired in mediocrity. Midway through the season, Jerry Narron was fired as manager and replaced by Pete Mackanin. The Reds ended up posting a winning record under Mackanin, but finished the season in fifth place in the Central Division. Mackanin was manager in an interim capacity only, and the Reds, seeking a big name to fill the spot, ultimately brought in Dusty Baker. Early in the 2008 season, Krivsky was fired and replaced by Walt Jocketty. Although the Reds did not win under Krivsky, he is credited with revamping the farm system and signing young talent that could potentially lead the team to success in the future. The Reds failed to post winning records in both 2008 and 2009. In 2010, with NL MVP Joey Votto and Gold Glovers Brandon Phillips and Scott Rolen, the Reds posted a 91–71 record and were NL Central champions. The following week, the Reds became only the second team in MLB history to be no-hit in a postseason game, when Philadelphia's Roy Halladay shut down the National League's number one offense in game 1 of the NLDS. The Reds went on to lose the NLDS to Philadelphia in a three-game sweep. After coming off their surprising 2010 NL Central Division title, the Reds fell short of many expectations for the 2011 season. Multiple injuries and inconsistent starting pitching played a big role in their mid-season collapse, along with a less productive offense as compared to the previous year. The Reds ended the 2011 season at 79–83. In 2012, however, the Reds won the NL Central Division title. On September 28, Homer Bailey threw a 1–0 no-hitter against the Pittsburgh Pirates, marking the first Reds no-hitter since Tom Browning's perfect game in 1988. Finishing with a 97–65 record, the Reds earned the second seed in the Division Series and a match-up with the eventual World Series champion, the San Francisco Giants. After taking a 2–0 lead with road victories at AT&T Park, they headed home looking to win the series.
However, they lost three straight at their home ballpark, becoming the first National League team since the Chicago Cubs in 1984 to lose a division series after leading 2–0. In the offseason, the team traded outfielder Drew Stubbs, as part of a three-team deal with the Arizona Diamondbacks and Cleveland Indians, to the Indians, and in turn received right fielder Shin-Soo Choo. On July 2, 2013, Homer Bailey pitched a no-hitter against the San Francisco Giants for a 4–0 Reds victory, making him the third pitcher in Reds history with two complete-game no-hitters in his career. Following six consecutive losses to close out the 2013 season, including a loss to the Pittsburgh Pirates at PNC Park in the National League wild-card playoff game, the Reds decided to fire Dusty Baker. During his six years as manager, Baker led the Reds to the playoffs three times; however, they never advanced beyond the first round. On October 22, 2013, the Reds hired pitching coach Bryan Price to replace Baker as manager. Under Bryan Price, the Reds were led by pitchers Johnny Cueto and the hard-throwing Cuban Aroldis Chapman, while the offense was led by all-star third baseman Todd Frazier, Joey Votto, and Brandon Phillips. Despite that star power, the Reds never got off to a good start, ending the season in fourth place in the division with a 76–86 record. During the offseason, the Reds traded pitchers Alfredo Simón to the Tigers and Mat Latos to the Marlins. In return, they acquired young talents such as Eugenio Suárez and Anthony DeSclafani. They also acquired veteran slugger Marlon Byrd from the Phillies to play left field. The Reds' 2015 season wasn't much better, as they finished with the second-worst record in the league at 64–98, their worst finish since 1982. The Reds were forced to trade star pitchers Johnny Cueto (to the Kansas City Royals) and Mike Leake (to the San Francisco Giants), receiving minor league pitching prospects for both. Shortly after the season's end, the Reds traded home run derby champion Todd Frazier to the Chicago White Sox, and closing pitcher Aroldis Chapman to the New York Yankees. In 2016, the Reds broke the then-record for home runs allowed during a single season. They held this record until the 2019 season, when it was broken by the Baltimore Orioles. The previous record-holder was the 1996 Detroit Tigers with 241 longballs yielded to opposing teams. The Reds went 68–94, and again were one of the worst teams in MLB. The Reds traded outfielder Jay Bruce to the Mets just before the July 31 non-waiver trade deadline in exchange for two prospects, infielder Dilson Herrera and pitcher Max Wotell. During the offseason, the Reds traded Brandon Phillips to the Atlanta Braves in exchange for two minor league pitchers. On September 25, 2020, the Reds earned their first postseason berth since 2013, ultimately earning the seventh seed in the expanded 2020 playoffs. The 2020 season had been shortened to 60 games as a result of the COVID-19 pandemic. The Reds lost their first-round series against the Atlanta Braves two games to none.
containing leafy vegetables such as spinach and sometimes okra amongst others, widely distributed in the Caribbean, with a distinctively mixed African and indigenous character. The variety of dessert dishes in the area also reflects the mixed origins of the recipes. In some areas, black cake, a derivative of English Christmas pudding, may be served, especially on special occasions. Over time, food from the Caribbean has evolved into a narrative device through which Caribbean culture has been accentuated and promoted. However, studying Caribbean culture through a literary lens runs the risk of generalizing exoticist ideas about food practices from the tropics. Some food theorists argue that this depiction of Caribbean food in various forms of media contributes to inaccurate conceptions of the region's culinary practices, which are much more grounded in unpleasant historical events. Therefore, it can be argued that the connection between the idea of the Caribbean being the ultimate paradise and Caribbean food being exotic is based on inaccurate information. By location Anguillian cuisine Antigua and Barbuda cuisine Barbadian cuisine Bahamian cuisine Belizean cuisine Bermudian cuisine Cayman Islands cuisine Colombian cuisine Costa Rican cuisine Cuban cuisine Curaçaoan cuisine Dominica cuisine Dominican Republic cuisine French
In addition, the population has created styles that are unique to the region. Caribbean dishes Ingredients that are common in most islands' dishes are rice, plantains, beans, cassava, cilantro, bell peppers, chickpeas, tomatoes, sweet potatoes, coconut, and any of various meats that are locally available like beef, poultry, pork or fish. A characteristic seasoning for the region is a green herb-and-oil-based marinade called sofrito, which imparts a flavor profile which is quintessentially Caribbean in character. Ingredients may include garlic, onions, scotch bonnet peppers, celery, green onions, and herbs like cilantro, Mexican mint, chives, marjoram, rosemary, tarragon and thyme. This green seasoning is used for a variety of dishes like curries, stews and roasted meats. Traditional dishes are so important to regional culture that, for example, the local version of Caribbean goat stew has been chosen as the official national dish of Montserrat and is also one of the signature dishes of St. Kitts and Nevis. Another popular dish in the Anglophone Caribbean is called "cook-up", or pelau. Ackee and saltfish is another popular dish that is unique to Jamaica. Callaloo
and defend it from possible Russian intervention if a war between Austria-Hungary and Serbia took place. When Russia enacted a general mobilization, Germany viewed the act as provocative. The Russian government promised Germany that its general mobilization did not mean preparation for war with Germany but was a reaction to the events between Austria-Hungary and Serbia. The German government regarded the Russian promise of no war with Germany to be nonsense in light of its general mobilization, and Germany, in turn, mobilized for war. On 1 August, Germany sent an ultimatum to Russia stating that since both Germany and Russia were in a state of military mobilization, an effective state of war existed between the two countries. Later that day, France, an ally of Russia, declared a state of general mobilization. In August 1914, Germany waged war on Russia, citing Russian aggression as demonstrated by the mobilization of the Russian army, which had resulted in Germany mobilizing in response. After Germany declared war on Russia, France, with its alliance with Russia, prepared a general mobilization in expectation of war. On 3 August 1914, Germany responded to this action by declaring war on France. Germany, facing a two-front war, enacted what was known as the Schlieffen Plan, which involved German armed forces needing to move through Belgium and swing south into France and towards the French capital of Paris. This plan was hoped to quickly gain victory against the French and allow German forces to concentrate on the Eastern Front. Belgium was a neutral country and would not accept German forces crossing its territory. Germany disregarded Belgian neutrality and invaded the country to launch an offensive towards Paris. This caused Great Britain to declare war against the German Empire, as the action violated the Treaty of London that both nations signed in 1839 guaranteeing Belgian neutrality and defense of the kingdom if a nation reneged. Subsequently, several states declared war on Germany in late August 1914, with Italy declaring war on Austria-Hungary in 1915 and Germany on 27 August 1916, the United States declaring war on Germany on 6 April 1917 and Greece declaring war on Germany in July 1917. Colonies and dependencies Europe Upon its founding in 1871, the German Empire controlled Alsace-Lorraine as an "imperial territory" incorporated from France after the Franco-Prussian War. It was held as part of Germany's sovereign territory. Africa Germany held multiple African colonies at the time of World War I. All of Germany's African colonies were invaded and occupied by Allied forces during the war. Kamerun, German East Africa, and German Southwest Africa were German colonies in Africa. Togoland was a German protectorate in Africa. Asia The Kiautschou Bay concession was a German dependency in East Asia leased from China in 1898. Japanese forces occupied it following the Siege of Tsingtao. Pacific German New Guinea was a German protectorate in the Pacific. It was occupied by Australian forces in 1914. German Samoa was a German protectorate following the Tripartite Convention. It was occupied by the New Zealand Expeditionary Force in 1914. Austria-Hungary War justifications Austria-Hungary regarded the assassination of Archduke Franz Ferdinand as being orchestrated with the assistance of Serbia. The country viewed the assassination as setting a dangerous precedent of encouraging the country's South Slav population to rebel and threaten to tear apart the multinational country. 
Austria-Hungary formally sent an ultimatum to Serbia demanding a full-scale investigation of Serbian government complicity in the assassination and complete compliance by Serbia in agreeing to the terms demanded by Austria-Hungary. Serbia submitted to accept most of the demands. However, Austria-Hungary viewed this as insufficient and used this lack of full compliance to justify military intervention. These demands have been viewed as a diplomatic cover for what was going to be an inevitable Austro-Hungarian declaration of war on Serbia. Russia had warned Austria-Hungary that the Russian government would not tolerate Austria-Hungary invading Serbia. However, with Germany supporting Austria-Hungary's actions, the Austro-Hungarian government hoped that Russia would not intervene and that the conflict with Serbia would remain a regional conflict. Austria-Hungary's invasion of Serbia resulted in Russia declaring war on the country, and Germany, in turn, declared war on Russia, setting off the beginning of the clash of alliances that resulted in the World War. Territory Austria-Hungary was internally divided into two states with their own governments, joined in communion through the Habsburg throne. Austrian Cisleithania contained various duchies and principalities but also the Kingdom of Bohemia, the Kingdom of Dalmatia, the Kingdom of Galicia and Lodomeria. Hungarian Transleithania comprised the Kingdom of Hungary and the Kingdom of Croatia-Slavonia. In Bosnia and Herzegovina, sovereign authority was shared by both Austria and Hungary. Ottoman Empire War justifications The Ottoman Empire joined the war on the side of the Central Powers in November 1914. The Ottoman Empire had gained strong economic connections with Germany through the Berlin-to-Baghdad railway project that was still incomplete at the time. The Ottoman Empire made a formal alliance with Germany signed on 2 August 1914. The alliance treaty expected that the Ottoman Empire would become involved in the conflict in a short amount of time. However, for the first several months of the war, the Ottoman Empire maintained neutrality though it allowed a German naval squadron to enter and stay near the strait of Bosphorus. Ottoman officials informed the German government that the country needed time to prepare for conflict. Germany provided financial aid and weapons shipments to the Ottoman Empire. After pressure escalated from the German government demanding that the Ottoman Empire fulfill its treaty obligations, or else Germany would expel the country from the alliance and terminate economic and military assistance, the Ottoman government entered the war with the recently acquired cruisers from Germany, the Yavuz Sultan Selim (formerly SMS Goeben) and the Midilli (formerly SMS Breslau) launching a naval raid on the Russian port of Odessa, thus engaging in military action in accordance with its alliance obligations with Germany. Russia and the Triple Entente declared war on the Ottoman Empire. Bulgaria War justifications Bulgaria was still resentful after its defeat in July 1913 at the hands of Serbia, Greece and Romania. It signed a treaty of defensive alliance with the Ottoman Empire on 19 August 1914. It was the last country to join the Central Powers, which Bulgaria did in October 1915 by declaring war on Serbia. It invaded Serbia in conjunction with German and Austro-Hungarian forces. Bulgaria held claims on the region of Vardar Macedonia then held by Serbia following the Balkan Wars of 1912–1913 and the Treaty of Bucharest (1913). 
As a condition of entering World War I on the side of the Central Powers, Bulgaria was granted the right to reclaim that territory.
Declarations of war Co-belligerents South African Republic In opposition to offensive operations by the Union of South Africa, which had joined the war, Boer army officers of what is now known as the Maritz Rebellion "refounded" the South African Republic in September 1914. Germany assisted the rebels, some of whom operated in and out of the German colony of German South-West Africa. The rebels were all defeated or captured by South African government forces by 4 February 1915. Senussi Order The Senussi Order was a Muslim political-religious tariqa (Sufi order) and clan in Libya, previously under Ottoman control, which had been lost to Italy in 1912. In 1915, they were courted by the Ottoman Empire and Germany, and Grand Senussi Ahmed Sharif as-Senussi declared jihad and attacked the Italians in Libya and British-controlled Egypt in the Senussi Campaign. Sultanate of Darfur In 1915, the Sultanate of Darfur renounced allegiance to the Sudan government and aligned with the Ottomans. The Anglo-Egyptian Darfur Expedition preemptively acted in March 1916 to prevent an attack on Sudan and took control of the Sultanate by November 1916. Zaian Confederation The Zaian Confederation began to fight with France in the Zaian War to prevent French expansion into Morocco. The fighting began in 1914 and continued after the First World War ended, lasting until 1921. The Central Powers (mainly the Germans) attempted to incite unrest in the hope of diverting French resources from Europe. Client states With the Bolshevik attack of late 1917, the General Secretariat of Ukraine sought military protection first from the Central Powers and later from the armed forces of the Entente. The Ottoman Empire also had its own allies in Azerbaijan and the Northern Caucasus.
The three nations fought alongside each other under the Army of Islam in the Battle of Baku. German client states Poland The Kingdom of Poland was a client state of Germany proclaimed in 1916 and established on 14 January 1917. This government was recognized by the emperors of Germany and Austria-Hungary in November 1916, and it adopted a constitution in 1917. The decision to create a Polish State was taken by Germany in order to attempt to legitimize its military occupation amongst the Polish inhabitants, following upon German propaganda sent to Polish inhabitants in 1915 that German soldiers were arriving as liberators to free Poland from subjugation by Russia. The German government utilized the state alongside punitive threats to induce Polish landowners living in the German-occupied Baltic territories to move to the state and sell their Baltic property to Germans in exchange for moving to Poland. Efforts were made to induce similar emigration of Poles from Prussia to the state. Lithuania The Kingdom of Lithuania was a client state of Germany created on 16 February 1918. Belarus The Belarusian People's Republic was a client state of Germany created on 9 March 1918. Ukraine The Ukrainian State was a client state of Germany led by Hetman Pavlo Skoropadskyi from 29 April 1918, after the government of the Ukrainian People's Republic was overthrown. Courland and Semigallia The Duchy of Courland and Semigallia was a client state of Germany created on 8 March 1918. Baltic State The Baltic State also known as the "United Baltic Duchy", was proclaimed on 22 September 1918 by the Baltic German ruling class. It was to encompass the former Estonian governorates and incorporate the recently established Courland and Semigallia into a unified state. An armed force in the form of the Baltische Landeswehr was created in November 1918, just before the surrender of Germany, which would participate in the Russian Civil War in the Baltics. Finland Finland had existed as an autonomous Grand Duchy of Russia since 1809, and the collapse of the Russian Empire in 1917 gave it its independence. Following the end of the Finnish Civil War, in which Germany supported the "White" against the Soviet-backed labour movement, in May 1918, there were moves to create a Kingdom of Finland. A German prince was elected, but the Armistice intervened. Crimea The Crimean Regional Government was a
Modern conservatism in different countries Many sources refer to any political parties on the right of the political spectrum as conservative despite having no connection with historical conservatism. In most cases, these parties do not use the term conservative in their name or self-identify as conservative. Below is a partial list of such political parties. Australia The Liberal Party of Australia adheres to the principles of social conservatism and liberal conservatism. It is liberal in the sense of economics. Other conservative parties are the National Party of Australia, a sister party of the Liberals, Family First Party, Democratic Labor Party, Shooters, Fishers and Farmers Party, Australian Conservatives, and Katter's Australian Party. The second largest party in the country is the Australian Labor Party and its dominant faction is Labor Right, a socially conservative element. Australia undertook significant economic reform under the Labor Party in the mid-1980s. Consequently, issues like protectionism, welfare reform, privatization and deregulation are no longer debated in the political space as they are in Europe or North America. Moser and Catley explain: "In America, 'liberal' means left-of-center, and it is a pejorative term when used by conservatives in adversarial political debate. In Australia, of course, the conservatives are in the Liberal Party". Jupp points out that, "[the] decline in English influences on Australian reformism and radicalism, and appropriation of the symbols of Empire by conservatives continued under the Liberal Party leadership of Sir Robert Menzies, which lasted until 1966". Brazil Conservatism in Brazil originates from the cultural and historical tradition of Brazil, whose cultural roots are Luso-Iberian and Roman Catholic. Brazilian conservatism from the 20th century on includes names such as Gerardo Melo Mourão and Otto Maria Carpeaux in literature; Oliveira Lima and Oliveira Torres in historiography; Sobral Pinto and Miguel Reale in law; Plinio Corrêa de Oliveira and Father Paulo Ricardo in the Catholic Church; Roberto Campos and Mario Henrique Simonsen in economics; Carlos Lacerda in the political arena; and Olavo de Carvalho in philosophy. Brazil Union, Progressistas, Republicans, Liberal Party, Brazilian Labour Renewal Party, Patriota, Brazilian Labour Party, Social Christian Party and Brasil 35 are the conservative parties in Brazil. Germany Conservatism developed alongside nationalism in Germany, culminating in Germany's victory over France in the Franco-Prussian War, the creation of the unified German Empire in 1871 and the simultaneous rise of Otto von Bismarck on the European political stage. Bismarck's "balance of power" model maintained peace in Europe for decades at the end of the 19th century. His "revolutionary conservatism" was a conservative state-building strategy designed to make ordinary Germans—not just the Junker elite—more loyal to state and emperor; to this end, he created the modern welfare state in Germany in the 1880s.
Kees van Kersbergen and Barbara Vis have analyzed this strategy. Bismarck also enacted universal male suffrage in the new German Empire in 1871. He became a great hero to German conservatives, who erected many monuments to his memory after he left office in 1890. With the rise of Nazism in 1933, agrarian movements faded and were supplanted by a more command-based economy and forced social integration. Though Adolf Hitler succeeded in garnering the support of many German industrialists, prominent traditionalists openly and secretly opposed his policies of euthanasia, genocide and attacks on organized religion, including Claus von Stauffenberg, Dietrich Bonhoeffer, Henning von Tresckow, Bishop Clemens August Graf von Galen and the monarchist Carl Friedrich Goerdeler. More recently, the work of conservative Christian Democratic Union leader and Chancellor Helmut Kohl helped bring about German reunification, along with the closer European integration in the form of the Maastricht Treaty. Today, German conservatism is often associated with politicians such as Chancellor Angela Merkel, whose tenure has been marked by attempts to save the common European currency (Euro) from demise. The German conservatives are divided under Merkel due to the refugee crisis in Germany, and many conservatives in the CDU/CSU oppose the refugee and migrant policies developed under Merkel. India In India, the Bharatiya Janata Party (BJP), led by Narendra Modi, represents conservative politics. The BJP is the largest right-wing conservative party in the world. It promotes cultural nationalism, Hindu nationalism, an aggressive foreign policy against Pakistan and a conservative social and fiscal policy. Italy By 1945, the extreme-right fascist movement of Benito Mussolini was discredited. After World War II, the conservative parties in Italy were dominated by the centrist Christian Democracy (DC) party. With its landslide victory over the left in 1948, the Center (including progressive and conservative factions) was in power and was, says Denis Mack Smith, "moderately conservative, reasonably tolerant of everything which did not touch religion or property, but above all Catholic and sometimes clerical." It dominated politics until the DC party's dissolution in 1994. In 1994, the media tycoon and entrepreneur Silvio Berlusconi founded the liberal conservative party Forza Italia (FI). Berlusconi won three elections in 1994, 2001 and 2008, governing the country for almost ten years as Prime Minister. Forza Italia formed a coalition with right-wing regional party Lega Nord while in government. Besides FI, conservative ideas are now mainly expressed by the New Centre-Right party led by Angelino Alfano; Berlusconi later formed a new party, which is a rebirth of Forza Italia, thus founding a new conservative movement. Alfano served as Minister of Foreign Affairs. After the 2018 election, Lega Nord and the Five Star Movement formed a right-wing populist government, which later failed. Russia Under Vladimir Putin, the dominant leader since 1999, Russia has promoted explicitly conservative policies in social, cultural and political matters, both at home and abroad. Putin has attacked globalism and economic liberalism. Russian conservatism is unique in some respects as it supports economic intervention with a mixed economy, with a strong nationalist sentiment and social conservatism with its views being largely populist.
Russian conservatism as a result opposes libertarian ideals such as the aforementioned concept of economic liberalism found in other conservative movements around the world. Putin has accordingly promoted new think tanks that bring together like-minded intellectuals and writers. For example, the Izborsky Club, founded in 2012 by Aleksandr Prokhanov, stresses Russian nationalism, the restoration of Russia's historical greatness and systematic opposition to liberal ideas and policies. Vladislav Surkov, a senior government official, has been one of the key ideologists during Putin's presidency. In cultural and social affairs, Putin has collaborated closely with the Russian Orthodox Church. Mark Woods provides specific examples of how the Church under Patriarch Kirill of Moscow has backed the expansion of Russian power into Crimea and eastern Ukraine. More broadly, The New York Times reported in September 2016 on how the Church's policy prescriptions support the Kremlin's appeal to social conservatives. South Korea South Korea's major conservative party, the People Power Party, has changed its form throughout its history. First it was the Democratic-Liberal Party (민주자유당, Minju Ja-yudang), and its first head was Roh Tae-woo, the first President of the Sixth Republic of South Korea. The Democratic-Liberal Party was founded by the merger of Roh Tae-woo's Democratic Justice Party, Kim Young Sam's Reunification Democratic Party and Kim Jong-pil's New Democratic Republican Party. Through election, its second leader, Kim Young-sam, became the fourteenth President of Korea. When the conservative party was beaten by the opposition party in the general election, it changed its form again to follow the party members' demand for reforms. It became the New Korea Party, but it changed again one year later after President Kim Young-sam was blamed by citizens for the International Monetary Fund crisis. It changed its name to the Grand National Party (GNP). After the late Kim Dae-jung assumed the presidency in 1998, the GNP remained the opposition party until Lee Myung-bak won the presidential election of 2007. Singapore Singapore's only conservative party is the People's Action Party (PAP). It is currently in government and has been in government since independence in 1965. It has promoted conservative values in the form of Asian democracy and values or 'shared values'. The main party on the left of the political spectrum in Singapore is the Workers' Party (WP). United States The meaning of "conservatism" in the United States has little in common with the way the word is used elsewhere. As Ribuffo (2011) notes, "what Americans now call conservatism much of the world calls liberalism or neoliberalism". American conservatism is a broad system of political beliefs in the United States that is characterized by respect for American traditions, support for Judeo-Christian values, economic liberalism, anti-communism and a defense of Western culture. Liberty within the bounds of conformity to conservatism is a core value, with a particular emphasis on strengthening the free market, limiting the size and scope of government and opposition to high taxes and government or labor union encroachment on the entrepreneur. In early American politics, it was the Democratic Party practicing 'conservatism' in its attempts to maintain the social and economic institution of slavery. Democratic president Andrew Johnson, as one commonly known example, was considered a conservative.
"The Democrats were often called conservative and embraced that label. Many of them were conservative in the sense that they wanted things to be like they were in the past, especially as far as race was concerned." In 1892, Democrat Grover Cleveland won the election on a conservative platform, that argued for maintaining the gold standard, reducing tariffs, and supporting a laisse faire approach to government intervention. Since the 1950s, conservatism in the United States has been chiefly associated with the Republican Party. However, during the era of segregation, many Southern Democrats were conservatives and they played a key role in the conservative coalition that largely controlled domestic policy in Congress from 1937 to 1963. The conservative Democrats continued to have influence in the US politics until 1994's Republican Revolution, when the American South shifted from solid Democrat to solid Republican, while maintaining its conservative values. The major conservative party in the United States today is the Republican Party, also known as the GOP (Grand Old Party). Modern American conservatives consider individual liberty, as long as it conforms to conservative values, small government, deregulation of the government, economic liberalism, and free trade, as the fundamental trait of democracy, which contrasts with modern American liberals, who generally place a greater value on social equality and social justice. Other major priorities within American conservatism include support for the traditional family, law and order, the right to bear arms, Christian values, anti-communism and a defense of "Western civilization from the challenges of modernist culture and totalitarian governments". Economic conservatives and libertarians favor small government, low taxes, limited regulation and free enterprise. Some social conservatives see traditional social values threatened by secularism, so they support school prayer and oppose abortion and homosexuality. Neoconservatives want to expand American ideals throughout the world and show a strong support for Israel. Paleoconservatives, in opposition to multiculturalism, press for restrictions on immigration. Most US conservatives prefer Republicans over Democrats and most factions favor a strong foreign policy and a strong military. The conservative movement of the 1950s attempted to bring together these divergent strands, stressing the need for unity to prevent the spread of "godless communism", which Reagan later labeled an "evil empire". During the Reagan administration, conservatives also supported the so-called "Reagan Doctrine" under which the US as part of a Cold War strategy provided military and other support to guerrilla insurgencies that were fighting governments identified as socialist or communist. The Reagan administration also adopted neoliberalism and Reaganomics (pejoratively referred to as trickle-down economics), resulting in the 1980s economic growth and trillion-dollar deficits. Other modern conservative positions include opposition to big government and opposition to environmentalism. On average, American conservatives desire tougher foreign policies than liberals do. Economic liberalism, deregulation and social conservatism are major principles of the Republican Party. The Tea Party movement, founded in 2009, had proven a large outlet for populist American conservative ideas. 
Their stated goals included rigorous adherence to the US Constitution, lower taxes, and opposition to a growing role for the federal government in health care. Electorally, the movement was considered a key force in Republicans reclaiming control of the US House of Representatives in 2010. Psychology Following the Second World War, psychologists conducted research into the different motives and tendencies that account for ideological differences between left and right. The early studies focused on conservatives, beginning with Theodor W. Adorno's The Authoritarian Personality (1950) based on the F-scale personality test. This book has been heavily criticized on theoretical and methodological grounds, but some of its findings have been confirmed by further empirical research. In 1973, British psychologist Glenn Wilson published an influential book providing evidence that a general factor underlying conservative beliefs is "fear of uncertainty." A meta-analysis of research literature by Jost, Glaser, Kruglanski, and Sulloway in 2003 found that many factors, such as intolerance of ambiguity and need for cognitive closure, contribute to the degree of one's political conservatism and its manifestations in decision-making. A study by Kathleen Maclay stated these traits "might be associated with such generally valued characteristics as personal commitment and unwavering loyalty". The research also suggested that while most people are resistant to change, liberals are more tolerant of it. According to psychologist Bob Altemeyer, individuals who are politically conservative tend to rank high in right-wing authoritarianism (RWA) on his RWA scale. This finding was echoed by Adorno. A study done on Israeli and Palestinian students in Israel found that RWA scores of right-wing party supporters were significantly higher than those of left-wing party supporters. However, a 2005 study by H. Michael Crowson and colleagues suggested a moderate gap between RWA and other conservative positions, stating that their "results indicated that conservatism is not synonymous with RWA". Psychologist Felicia Pratto and her colleagues have found evidence to support the idea that a high social dominance orientation (SDO) is strongly correlated with conservative political views and opposition to social engineering to promote equality, though Pratto's findings have been highly controversial as Pratto and her colleagues found that high SDO scores were highly correlated with measures of prejudice. However, David J. Schneider argued for a more complex relationship between the three factors, writing that "correlations between prejudice and political conservatism are reduced virtually to zero when controls for SDO are instituted, suggesting that the conservatism–prejudice link is caused by SDO". Conservative political theorist Kenneth Minogue criticized Pratto's work, saying: "It is characteristic of the conservative temperament to value established identities, to praise habit and to respect prejudice, not because it is irrational, but because such things anchor the darting impulses of human beings in solidities of custom which we do not often begin to value until we are already losing them. Radicalism often generates youth movements, while conservatism is a condition found among the mature, who have discovered what it is in life they most value".
A 1996 study on the relationship between racism and conservatism found that the correlation was stronger among more educated individuals, though "anti-Black affect had essentially no relationship with political conservatism at any level of educational or intellectual sophistication". They also found that the correlation between racism and conservatism could be entirely accounted for by their mutual relationship with social dominance orientation. In his 2008 book, Gross National Happiness, Arthur C. Brooks presents the finding that conservatives are roughly twice as happy as liberals. A 2008 study demonstrates that conservatives tend to be happier than liberals because of their tendency to justify the current state of affairs and because they're less bothered by inequalities in society. In fact, as income inequality increases, this difference in relative happiness increases because conservatives, more so than liberals, possess an ideological buffer against the negative hedonic effects of economic inequality. A 2012 study disputed this. A 2009 study found that conservatism and cognitive ability are negatively correlated. It found that conservatism has a negative correlation with SAT, Vocabulary, and Analogy test scores, measures of education (such as gross enrollment in primary, secondary, and tertiary levels), and performance on math and reading assignments from the PISA. It also found that conservatism correlates with components of the Failed States Index and "several other measures of economic and political development of nations." Nonetheless, in a Brazilian sample, the highest IQs were found among centre-rightists and centrists, even after correcting for gender, age, education and income. Personality psychology research has shown that conservatism is positively correlated to conscientiousness and negatively correlated with openness to new experiences. Because conscientiousness is positively related to job performance, a 2021 study found that conservative service workers earn higher ratings, evaluations, and tips than liberal ones. See also Conservatism in Australia Conservatism in Canada Conservatism in Hong Kong Conservatism in India Conservatism in New Zealand Conservatism in North America Conservatism in Pakistan Conservatism in Russia Conservatism in South Korea Conservatism in Taiwan Conservatism in the United Kingdom Conservatism in the United States Black conservatism Fiscal conservatism Liberal conservatism Libertarian conservatism National conservatism Social conservatism Traditionalist conservatism
adopted its present name in 1945. It was consistently the largest political party in Luxembourg, and dominated politics throughout the 20th century. Norway The Conservative Party of Norway (Norwegian: Høyre, literally "right") was formed by the old upper class of state officials and wealthy merchants to fight the populist democracy of the Liberal Party, but lost power in 1884, when parliamentarian government was first practised. It formed its first government under parliamentarism in 1889 and continued to alternate in power with the Liberals until the 1930s, when Labour became the dominant political party. It has elements both of paternalism, stressing the responsibilities of the state, and of economic liberalism. It first returned to power in the 1960s. During Kåre Willoch's premiership in the 1980s, much emphasis was laid on liberalizing the credit and housing market, and abolishing the NRK TV and radio monopoly, while supporting law and order in criminal justice and traditional norms in education. Sweden Sweden's conservative party, the Moderate Party, was formed in 1904, two years after the founding of the Liberal Party. The party emphasizes tax reductions, deregulation of private enterprise and privatization of schools, hospitals, and kindergartens. Switzerland There are a number of conservative parties in Switzerland's parliament, the Federal Assembly. These include the largest, the Swiss People's Party (SVP), the Christian Democratic People's Party (CVP) and the Conservative Democratic Party of Switzerland (BDP), which is a splinter of the SVP created in the aftermath of the election of Eveline Widmer-Schlumpf to the Federal Council. The right-wing parties have a majority in the Federal Assembly. The Swiss People's Party (SVP or UDC) was formed from the 1971 merger of the Party of Farmers, Traders and Citizens, formed in 1917, and the smaller Swiss Democratic Party, formed in 1942. The SVP emphasized agricultural policy and was strong among farmers in German-speaking Protestant areas. As Switzerland considered closer relations with the European Union in the 1990s, the SVP adopted a more militant protectionist and isolationist stance. This stance has allowed it to expand into German-speaking Catholic mountainous areas. The Anti-Defamation League, a non-Swiss lobby group based in the United States, has accused them of manipulating issues such as immigration, Swiss neutrality and welfare benefits, awakening antisemitism and racism. The Council of Europe has called the SVP "extreme right", although some scholars dispute this classification. For instance, Hans-Georg Betz describes it as "populist radical right". The SVP has been the largest party since 2003. Ukraine The authoritarian Ukrainian State headed by Pavlo Skoropadskyi represented the conservative movement. The 1918 Hetman government, which appealed to the tradition of the 17th–18th century Cossack Hetman state, represented the conservative strand in Ukraine's struggle for independence. It had the support of the proprietary classes and of conservative and moderate political groups. Vyacheslav Lypynsky was a main ideologue of Ukrainian conservatism. United Kingdom According to historian James Sack, English conservatives celebrate Edmund Burke, who was Irish, as their intellectual father. Burke was affiliated with the Whig Party, which eventually became the Liberal Party, but the modern Conservative Party is generally thought to derive from the Tory party, and the MPs of the modern Conservative Party are still frequently referred to as Tories.
Shortly after Burke's death in 1797, conservatism revived as a mainstream political force as the Whigs suffered a series of internal divisions. This new generation of conservatives derived their politics not from Burke, but from his predecessor, the Viscount Bolingbroke (1678–1751), who was a Jacobite and traditional Tory, lacking Burke's sympathies for Whiggish policies such as Catholic emancipation and American independence (famously attacked by Samuel Johnson in "Taxation No Tyranny"). In the first half of the 19th century, many newspapers, magazines, and journals promoted loyalist or right-wing attitudes in religion, politics and international affairs. Burke was seldom mentioned, but William Pitt the Younger (1759–1806) became a conspicuous hero. The most prominent journals included The Quarterly Review, founded in 1809 as a counterweight to the Whigs' Edinburgh Review and the even more conservative Blackwood's Edinburgh Magazine. Sack finds that the Quarterly Review promoted a balanced Canningite toryism as it was neutral on Catholic emancipation and only mildly critical of Nonconformist Dissent; it opposed slavery and supported the current poor laws; and it was "aggressively imperialist". The high-church clergy of the Church of England read the Orthodox Churchman's Magazine which was equally hostile to Jewish, Catholic, Jacobin, Methodist and Unitarian spokesmen. Anchoring the ultra Tories, Blackwood's Edinburgh Magazine stood firmly against Catholic emancipation and favoured slavery, cheap money, mercantilism, the Navigation Acts and the Holy Alliance. Conservatism evolved after 1820, embracing free trade in 1846 and a commitment to democracy, especially under Disraeli. The effect was to significantly strengthen conservatism as a grassroots political force. Conservatism no longer was the philosophical defense of the landed aristocracy, but had been refreshed into redefining its commitment to the ideals of order, both secular and religious, expanding imperialism, strengthened monarchy and a more generous vision of the welfare state as opposed to the punitive vision of the Whigs and liberals. As early as 1835, Disraeli attacked the Whigs and utilitarians as slavishly devoted to an industrial oligarchy, while he described his fellow Tories as the only "really democratic party of England" and devoted to the interests of the whole people. Nevertheless, inside the party there was a tension between the growing numbers of wealthy businessmen on the one side and the aristocracy and rural gentry on the other. The aristocracy gained strength as businessmen discovered they could use their wealth to buy a peerage and a country estate. Although conservatives opposed attempts to allow greater representation of the middle class in parliament, they conceded that electoral reform could not be reversed and promised to support further reforms so long as they did not erode the institutions of church and state. These new principles were presented in the Tamworth Manifesto of 1834, which historians regard as the basic statement of the beliefs of the new Conservative Party. Some conservatives lamented the passing of a pastoral world where the ethos of noblesse oblige had promoted respect from the lower classes. They saw the Anglican Church and the aristocracy as balances against commercial wealth. They worked toward legislation for improved working conditions and urban housing. This viewpoint would later be called Tory democracy. 
However, since Burke, there has always been tension between traditional aristocratic conservatism and the wealthy business class. In 1834, Tory Prime Minister Robert Peel issued the Tamworth Manifesto in which he pledged to endorse moderate political reform. This marked the beginning of the transformation of British conservatism from High Tory reactionism towards a more modern form based on "conservation". The party became known as the Conservative Party as a result, a name it has retained to this day. However, Peel would also be the root of a split in the party between the traditional Tories (led by the Earl of Derby and Benjamin Disraeli) and the "Peelites" (led first by Peel himself, then by the Earl of Aberdeen). The split occurred in 1846 over the issue of free trade, which Peel supported, versus protectionism, supported by Derby. The majority of the party sided with Derby whilst about a third split away, eventually merging with the Whigs and the radicals to form the Liberal Party. Despite the split, the mainstream Conservative Party accepted the doctrine of free trade in 1852. In the second half of the 19th century, the Liberal Party faced political schisms, especially over Irish Home Rule. Leader William Gladstone (himself a former Peelite) sought to give Ireland a degree of autonomy, a move that elements in both the left and right wings of his party opposed. These split off to become the Liberal Unionists (led by Joseph Chamberlain), forming a coalition with the Conservatives before merging with them in 1912. The Liberal Unionist influence dragged the Conservative Party towards the left as Conservative governments passed a number of progressive reforms at the turn of the 20th century. By the late 19th century, the traditional business supporters of the Liberal Party had joined the Conservatives, making them the party of business and commerce. After a period of Liberal dominance before the First World War, the Conservatives gradually became more influential in government, regaining full control of the cabinet in 1922. In the inter-war period, conservatism was the major ideology in Britain as the Liberal Party vied with the Labour Party for control of the left. After the Second World War, the first Labour government (1945–1951) under Clement Attlee embarked on a program of nationalization of industry and the promotion of social welfare. The Conservatives generally accepted those policies until the 1980s. In the 1980s, the Conservative government of Margaret Thatcher, guided by neoliberal economics, reversed many of Labour's programmes. The Conservative Party also adopted soft Eurosceptic politics and opposed a federal Europe. Other conservative political parties, such as the United Kingdom Independence Party (UKIP, founded in 1993), Northern Ireland's Ulster Unionist Party (UUP) and the Democratic Unionist Party (DUP, founded in 1971), began to appear, although they have yet to make any significant impact at Westminster (the DUP comprises the largest political party in the ruling coalition in the Northern Ireland Assembly), and from 2017 to 2019 the DUP provided support for the Conservative minority government.
Australia The Liberal Party of Australia adheres to the principles of social conservatism and liberal conservatism. It is liberal in the economic sense. Other conservative parties are the National Party of Australia, a sister party of the Liberals, the Family First Party, the Democratic Labor Party, the Shooters, Fishers and Farmers Party, the Australian Conservatives, and Katter's Australian Party. The second largest party in the country is the Australian Labor Party, and its dominant faction is Labor Right, a socially conservative element. Australia undertook significant economic reform under the Labor Party in the mid-1980s. Consequently, issues like protectionism, welfare reform, privatization and deregulation are no longer debated in the political space as they are in Europe or North America. Moser and Catley explain: "In America, 'liberal' means left-of-center, and it is a pejorative term when used by conservatives in adversarial political debate. In Australia, of course, the conservatives are in the Liberal Party". Jupp points out that, "[the] decline in English influences on Australian reformism and radicalism, and appropriation of the symbols of Empire by conservatives continued under the Liberal Party leadership of Sir Robert Menzies, which lasted until 1966". Brazil Conservatism in Brazil originates from the cultural and historical tradition of Brazil, whose cultural roots are Luso-Iberian and Roman Catholic. Brazilian conservatism from the 20th century on includes names such as Gerardo Melo Mourão and Otto Maria Carpeaux in literature; Oliveira Lima and Oliveira Torres in historiography; Sobral Pinto and Miguel Reale in law; Plinio Corrêa de Oliveira and Father Paulo Ricardo in the Catholic Church; Roberto Campos and Mario Henrique Simonsen in economics; Carlos Lacerda in the political arena; and Olavo de Carvalho in philosophy. Brazil Union, Progressistas, Republicans, Liberal Party, Brazilian Labour Renewal Party, Patriota, Brazilian Labour Party, Social Christian Party and Brasil 35 are the conservative parties in Brazil. Germany Conservatism developed alongside nationalism in Germany, culminating in Germany's victory over France in the Franco-Prussian War, the creation of the unified German Empire in 1871 and the simultaneous rise of Otto von Bismarck on the European political stage. Bismarck's "balance of power" model maintained peace in Europe for decades at the end of the 19th century. His "revolutionary conservatism" was a conservative state-building strategy designed to make ordinary Germans, not just the Junker elite, more loyal to state and emperor; to that end, he created the modern welfare state in Germany in the 1880s. Kees van Kersbergen and Barbara Vis have analysed the reasoning behind this strategy. Bismarck also enacted universal male suffrage in the new German Empire in 1871. He became a great hero to German conservatives, who erected many monuments to his memory after he left office in 1890. With the rise of Nazism in 1933, agrarian movements faded and were supplanted by a more command-based economy and forced social integration. Though Adolf Hitler succeeded in garnering the support of many German industrialists, prominent traditionalists, including Claus von Stauffenberg, Dietrich Bonhoeffer, Henning von Tresckow, Bishop Clemens August Graf von Galen and the monarchist Carl Friedrich Goerdeler, openly and secretly opposed his policies of euthanasia, genocide and attacks on organized religion.
More recently, the work of conservative Christian Democratic Union leader and Chancellor Helmut Kohl helped bring about German reunification, along with closer European integration in the form of the Maastricht Treaty. Today, German conservatism is often associated with politicians such as Chancellor Angela Merkel, whose tenure has been marked by attempts to save the common European currency (Euro) from demise. The German conservatives are divided under Merkel due to the refugee crisis in Germany, and many conservatives in the CDU/CSU oppose the refugee and migrant policies developed under Merkel. India In India, the Bharatiya Janata Party (BJP), led by Narendra Modi, represents conservative politics. The BJP is the largest right-wing conservative party in the world. It promotes cultural nationalism, Hindu nationalism, an aggressive foreign policy against Pakistan and a conservative social and fiscal policy. Italy By 1945, the extreme-right fascist movement of Benito Mussolini was discredited. After World War II, the conservative parties in Italy were dominated by the centrist Christian Democracy (DC) party. With its landslide victory over the left in 1948, the Center (including progressive and conservative factions) was in power and was, says Denis Mack Smith, "moderately conservative, reasonably tolerant of everything which did not touch religion or property, but above all Catholic and sometimes clerical." It dominated politics until the DC party's dissolution in 1994. In 1994, the media tycoon and entrepreneur Silvio Berlusconi founded the liberal conservative party Forza Italia (FI). Berlusconi won three elections in 1994, 2001 and 2008, governing the country for almost ten years as Prime Minister. Forza Italia formed a coalition with the right-wing regional party Lega Nord while in government. Besides FI, conservative ideas are now mainly expressed by the New Centre-Right party led by Angelino Alfano, who served as Minister of Foreign Affairs, while Berlusconi has formed a new party, a rebirth of Forza Italia, thus founding a new conservative movement. After the 2018 election, Lega Nord and the Five Star Movement formed a right-wing populist government, which later failed. Russia Under Vladimir Putin, the dominant leader since 1999, Russia has promoted explicitly conservative policies in social, cultural and political matters, both at home and abroad. Putin has attacked globalism and economic liberalism. Russian conservatism is unique in some respects: it supports economic intervention and a mixed economy, combines strong nationalist sentiment with social conservatism, and is largely populist in outlook. As a result, Russian conservatism opposes libertarian ideals such as the economic liberalism found in other conservative movements around the world. Putin has accordingly promoted new think tanks that bring together like-minded intellectuals and writers. For example, the Izborsky Club, founded in 2012 by Aleksandr Prokhanov, stresses Russian nationalism, the restoration of Russia's historical greatness and systematic opposition to liberal ideas and policies. Vladislav Surkov, a senior government official, has been one of the key ideologists during Putin's presidency. In cultural and social affairs, Putin has collaborated closely with the Russian Orthodox Church. Mark Woods provides specific examples of how the Church under Patriarch Kirill of Moscow has backed the expansion of Russian power into Crimea and eastern Ukraine.
More broadly, The New York Times reported in September 2016 on how the Church's policy prescriptions support the Kremlin's appeal to social conservatives. South Korea South Korea's major conservative party, the People Power Party, has changed its form throughout its history. First it was the Democratic Liberal Party (민주자유당, Minju Ja-yudang), and its first head was Roh Tae-woo, who was the first President of the Sixth Republic of South Korea. The Democratic Liberal Party was founded through the merger of Roh Tae-woo's Democratic Justice Party, Kim Young-sam's Reunification Democratic Party and Kim Jong-pil's New Democratic Republican Party. Through the next election, its second leader, Kim Young-sam, became the fourteenth President of South Korea. When the conservative party was beaten by the opposition party in the general election, it changed its form again to follow the party members' demand for reforms. It became the New Korea Party, but it changed again one year later, since President Kim Young-sam was blamed by the public for the financial crisis that forced the country to seek an International Monetary Fund bailout. It changed its name to the Grand National Party (GNP). After the late Kim Dae-jung assumed the presidency in 1998, the GNP remained the opposition party until Lee Myung-bak won the presidential election of 2007. Singapore Singapore's only conservative party is the People's Action Party (PAP). It is currently in government and has been in government since independence in 1965. It has promoted conservative values in the form of Asian democracy and values, or 'shared values'. The main party on the left of the political spectrum in Singapore is the Workers' Party (WP). United States The meaning of "conservatism" in the United States has little in common with the way the word is used elsewhere. As Ribuffo (2011) notes, "what Americans now call conservatism much of the world calls liberalism or neoliberalism". American conservatism is a broad system of political beliefs in the United States that is characterized by respect for American traditions, support for Judeo-Christian values, economic liberalism, anti-communism and a defense of Western culture. Liberty within the bounds of conformity to conservatism is a core value, with a particular emphasis on strengthening the free market, limiting the size and scope of government and opposition to high taxes and government or labor union encroachment on the entrepreneur. In early American politics, it was the Democratic Party that practiced 'conservatism' in its attempts to maintain the social and economic institution of slavery. Democratic president Andrew Johnson, to cite one well-known example, was considered a conservative. "The Democrats were often called conservative and embraced that label. Many of them were conservative in the sense that they wanted things to be like they were in the past, especially as far as race was concerned." In 1892, Democrat Grover Cleveland won the election on a conservative platform that argued for maintaining the gold standard, reducing tariffs, and supporting a laissez-faire approach to government intervention. Since the 1950s, conservatism in the United States has been chiefly associated with the Republican Party. However, during the era of segregation, many Southern Democrats were conservatives and they played a key role in the conservative coalition that largely controlled domestic policy in Congress from 1937 to 1963.
The conservative Democrats continued to have influence in US politics until the Republican Revolution of 1994, when the American South shifted from solidly Democratic to solidly Republican while maintaining its conservative values. The major conservative party in the United States today is the Republican Party, also known as the GOP (Grand Old Party). Modern American conservatives consider individual liberty (as long as it conforms to conservative values), small government, deregulation, economic liberalism, and free trade to be the fundamental traits of democracy, in contrast with modern American liberals, who generally place a greater value on social equality and social justice. Other major priorities within American conservatism include support for the traditional family, law and order, the right to bear arms, Christian values, anti-communism and a defense of "Western civilization from the challenges of modernist culture and totalitarian governments". Economic conservatives and libertarians favor small government, low taxes, limited regulation and free enterprise. Some social conservatives see traditional social values threatened by secularism, so they support school prayer and oppose abortion and homosexuality. Neoconservatives want to expand American ideals throughout the world and show strong support for Israel. Paleoconservatives, in opposition to multiculturalism, press for restrictions on immigration. Most US conservatives prefer Republicans over Democrats and most factions favor a strong foreign policy and a strong military. The conservative movement of the 1950s attempted to bring together these divergent strands, stressing the need for unity to prevent the spread of "godless communism", which Reagan later labeled an "evil empire". During the Reagan administration, conservatives also supported the so-called "Reagan Doctrine", under which the US, as part of a Cold War strategy, provided military and other support to guerrilla insurgencies that were fighting governments identified as socialist or communist. The Reagan administration also adopted neoliberalism and Reaganomics (pejoratively referred to as trickle-down economics), resulting in the economic growth of the 1980s and trillion-dollar deficits. Other modern conservative positions include opposition to big government and opposition to environmentalism. On average, American conservatives desire tougher foreign policies than liberals do. Economic liberalism, deregulation and social conservatism are major principles of the Republican Party. The Tea Party movement, founded in 2009, proved to be a major outlet for populist American conservative ideas. Its stated goals included rigorous adherence to the US Constitution, lower taxes, and opposition to a growing role for the federal government in health care. Electorally, it was considered a key force in Republicans reclaiming control of the US House of Representatives in 2010. Psychology Following the Second World War, psychologists conducted research into the different motives and tendencies that account for ideological differences between left and right. The early studies focused on conservatives, beginning with Theodor W. Adorno's The Authoritarian Personality (1950), based on the F-scale personality test. This book has been heavily criticized on theoretical and methodological grounds, but some of its findings have been confirmed by further empirical research.
In 1973, British psychologist Glenn Wilson published an influential book providing evidence that a general factor underlying conservative beliefs is "fear of uncertainty." A meta-analysis of research literature by Jost, Glaser, Kruglanski, and Sulloway in 2003 found that many factors, such as intolerance of ambiguity and need for cognitive closure, contribute to the degree of one's political conservatism and its manifestations in decision-making. A study by Kathleen Maclay stated these traits "might be associated with such generally valued characteristics as personal commitment and unwavering loyalty". The research also suggested that while most people are resistant to change, liberals are more tolerant of it. According to psychologist Bob Altemeyer, individuals who are politically conservative tend to rank high in right-wing authoritarianism (RWA) on his RWA scale. This finding was echoed by Adorno. A study done on Israeli and Palestinian students in Israel found that RWA scores of right-wing party supporters were significantly higher than those of left-wing party supporters. However, a 2005 study by H. Michael Crowson and colleagues suggested a moderate gap between RWA and other conservative positions, stating that their "results indicated that conservatism is not synonymous with RWA". Psychologist Felicia Pratto and her colleagues have found evidence to support the idea that a high social dominance orientation (SDO) is strongly correlated with conservative political views and opposition to social engineering to promote equality, though Pratto's findings have been highly controversial as Pratto and her colleagues found that high SDO scores were highly correlated with measures of prejudice. However, David J. Schneider argued for a more complex relationship between the three factors, writing that "correlations between prejudice and political conservative are reduced virtually to zero when controls for SDO are instituted, suggesting that the conservatism–prejudice link is caused by SDO" (the partial-correlation logic behind such statistical controls is sketched at the end of this section). Conservative political theorist Kenneth Minogue criticized Pratto's work, saying: "It is characteristic of the conservative temperament to value established identities, to praise habit and to respect prejudice, not because it is irrational, but because such things anchor the darting impulses of human beings in solidities of custom which we do not often begin to value until we are already losing them. Radicalism often generates youth movements, while conservatism is a condition found among the mature, who have discovered what it is in life they most value". A 1996 study on the relationship between racism and conservatism found that the correlation was stronger among more educated individuals, though "anti-Black affect had essentially no relationship with political conservatism at any level of educational or intellectual sophistication". They also found that the correlation between racism and conservatism could be entirely accounted for by their mutual relationship with social dominance orientation. In his 2008 book, Gross National Happiness, Arthur C. Brooks presents the finding that conservatives are roughly twice as happy as liberals. A 2008 study demonstrates that conservatives tend to be happier than liberals because of their tendency to justify the current state of affairs and because they are less bothered by inequalities in society.
In fact, as income inequality increases, this difference in relative happiness increases because conservatives, more so than liberals, possess an ideological buffer against the negative hedonic effects of economic inequality.
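To make the statistical "control" mentioned above concrete, the standard partial-correlation formula can be used; this is an illustrative gloss on the method, not a formula taken from the cited studies. Writing $C$ for conservatism, $P$ for prejudice and $S$ for social dominance orientation, the correlation between $C$ and $P$ after controlling for $S$ is

$$ r_{CP\cdot S} \;=\; \frac{r_{CP} - r_{CS}\, r_{PS}}{\sqrt{\bigl(1 - r_{CS}^{2}\bigr)\bigl(1 - r_{PS}^{2}\bigr)}}. $$

A raw correlation $r_{CP}$ that is positive while $r_{CP\cdot S}$ is close to zero is the pattern Schneider describes: the conservatism–prejudice association is statistically accounted for by the shared relationship of both variables with SDO.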
Classical liberalism was the dominant political theory in Britain from the early 19th century until the First World War. Its notable victories were the Catholic Emancipation Act of 1829, the Reform Act of 1832 and the repeal of the Corn Laws in 1846. The Anti-Corn Law League brought together a coalition of liberal and radical groups in support of free trade under the leadership of Richard Cobden and John Bright, who opposed aristocratic privilege, militarism, and public expenditure and believed that the backbone of Great Britain was the yeoman farmer. Their policies of low public expenditure and low taxation were adopted by William Gladstone when he became Chancellor of the Exchequer and later Prime Minister. Classical liberalism was often associated with religious dissent and nonconformism. Although classical liberals aspired to a minimum of state activity, they accepted the principle of government intervention in the economy from the early 19th century on, with passage of the Factory Acts. From around 1840 to 1860, laissez-faire advocates of the Manchester School and writers in The Economist were confident that their early victories would lead to a period of expanding economic and personal liberty and world peace, but they would face reversals as government intervention and activity continued to expand from the 1850s. Jeremy Bentham and James Mill, although advocates of laissez-faire, non-intervention in foreign affairs, and individual liberty, believed that social institutions could be rationally redesigned through the principles of utilitarianism. The Conservative Prime Minister Benjamin Disraeli rejected classical liberalism altogether and advocated Tory democracy. By the 1870s, Herbert Spencer and other classical liberals concluded that historical development was turning against them. By the First World War, the Liberal Party had largely abandoned classical liberal principles. The changing economic and social conditions of the 19th century led to a division between neo-classical and social (or welfare) liberals, who while agreeing on the importance of individual liberty differed on the role of the state. Neo-classical liberals, who called themselves "true liberals", saw Locke's Second Treatise as the best guide and emphasised "limited government", while social liberals supported government regulation and the welfare state. Herbert Spencer in Britain and William Graham Sumner in the United States were the leading neo-classical liberal theorists of the 19th century. The evolution from classical to social/welfare liberalism is for example reflected in Britain in the evolution of the thought of John Maynard Keynes. United States In the United States, liberalism took a strong root because it had little opposition to its ideals, whereas in Europe liberalism was opposed by many reactionary or feudal interests such as the nobility; the aristocracy, including army officers; the landed gentry; and the established church. Thomas Jefferson adopted many of the ideals of liberalism, but in the Declaration of Independence changed Locke's "life, liberty and property" to the more socially liberal "Life, Liberty and the pursuit of Happiness". As the United States grew, industry became a larger and larger part of American life; and during the term of its first populist President, Andrew Jackson, economic questions came to the forefront. The economic ideas of the Jacksonian era were almost universally the ideas of classical liberalism.
Freedom, according to classical liberals, was maximised when the government took a "hands off" attitude toward the economy. Historian Kathleen G. Donohue argues: "[A]t the center of classical liberal theory [in Europe] was the idea of laissez-faire. To the vast majority of American classical liberals, however, laissez-faire did not mean no government intervention at all. On the contrary, they were more than willing to see government provide tariffs, railroad subsidies, and internal improvements, all of which benefited producers. What they condemned was intervention on behalf of consumers." The leading magazine The Nation espoused liberalism every week starting in 1865 under the influential editor Edwin Lawrence Godkin (1831–1902). The ideas of classical liberalism remained essentially unchallenged until a series of depressions, thought to be impossible according to the tenets of classical economics, led to economic hardship from which the voters demanded relief. In the words of William Jennings Bryan, "You shall not crucify mankind upon a cross of gold". Classical liberalism remained the orthodox belief among American businessmen until the Great Depression. The Great Depression in the United States saw a sea change in liberalism, with priority shifting from the producers to consumers. Franklin D. Roosevelt's New Deal represented the dominance of modern liberalism in politics for decades, a development described by Arthur Schlesinger Jr. Alan Wolfe summarizes the viewpoint that there is a continuous liberal understanding that includes both Adam Smith and John Maynard Keynes. The view that modern liberalism is a continuation of classical liberalism is not universally shared. James Kurth, Robert E. Lerner, John Micklethwait, Adrian Wooldridge and several other political scholars have argued that classical liberalism still exists today, but in the form of American conservatism. According to Deepak Lal, only in the United States does classical liberalism continue to be a significant political force through American conservatism. American libertarians also claim to be the true continuation of the classical liberal tradition. Intellectual sources John Locke Central to classical liberal ideology was the classical liberals' interpretation of John Locke's Second Treatise of Government and A Letter Concerning Toleration, which had been written as a defence of the Glorious Revolution of 1688. Although these writings were considered too radical at the time for Britain's new rulers, they later came to be cited by Whigs, radicals and supporters of the American Revolution. However, much of later liberal thought was absent in Locke's writings or scarcely mentioned, and his writings have been subject to various interpretations. For example, there is little mention of constitutionalism, the separation of powers and limited government. James L. Richardson identified five central themes in Locke's writing: individualism, consent, the concepts of the rule of law and government as trustee, the significance of property and religious toleration. Although Locke did not develop a theory of natural rights, he envisioned individuals in the state of nature as being free and equal. The individual, rather than the community or institutions, was the point of reference. Locke believed that individuals had given consent to government and therefore authority derived from the people rather than from above. This belief would influence later revolutionary movements.
As a trustee, government was expected to serve the interests of the people, not the rulers; and rulers were expected to follow the laws enacted by legislatures. Locke also held that the main purpose of men uniting into commonwealths and governments was for the preservation of their property. Despite the ambiguity of Locke's definition of property, which limited property to "as much land as a man tills, plants, improves, cultivates, and can use the product of", this principle held great appeal to individuals possessed of great wealth. Locke held that the individual had the right to follow his own religious beliefs and that the state should not impose a religion against Dissenters, but there were limitations. No tolerance should be shown for atheists, who were seen as amoral, or for Catholics, who were seen as owing allegiance to the Pope over their own national government. Adam Smith Adam Smith's The Wealth of Nations, published in 1776, was to provide most of the ideas of economics, at least until the publication of John Stuart Mill's Principles of Political Economy in 1848. Smith addressed the motivation for economic activity, the causes of prices and the distribution of wealth and the policies the state should follow to maximise wealth. Smith wrote that as long as supply, demand, prices and competition were left free of government regulation, the pursuit of material self-interest, rather than altruism, would maximise the wealth of a society through profit-driven production of goods and services. An "invisible hand" directed individuals and firms to work toward the public good as an unintended consequence of efforts to maximise their own gain. This provided a moral justification for the accumulation of wealth, which had previously been viewed by some as sinful. He assumed that workers could be paid wages as low as was necessary for their survival, which was later transformed by David Ricardo and Thomas Robert Malthus into the "iron law of wages". His main emphasis was on the benefit of free internal and international trade, which he thought could increase wealth through specialisation in production. He also opposed restrictive trade preferences, state grants of monopolies and employers' organisations and trade unions. Government should be limited to defence, public works and the administration of justice, financed by taxes based on income. Smith's economics was carried into practice in the nineteenth century with the lowering of tariffs in the 1820s, the repeal in 1834 of the Poor Relief Act that had restricted the mobility of labour, and the end of the rule of the East India Company over India in 1858. Classical economics In addition to Smith's legacy, Say's law, Thomas Robert Malthus' theories of population and David Ricardo's iron law of wages became central doctrines of classical economics. The pessimistic nature of these theories provided a basis for criticism of capitalism by its opponents and helped perpetuate the tradition of calling economics the "dismal science". Jean-Baptiste Say was a French economist who introduced Smith's economic theories into France and whose commentaries on Smith were read in both France and Britain. Say challenged Smith's labour theory of value, believing that prices were determined by utility, and he also emphasised the critical role of the entrepreneur in the economy. However, neither of those observations was accepted by British economists at the time.
His most important contribution to economic thinking was Say's law, which was interpreted by classical economists as meaning that there could be no overproduction in a market and that there would always be a balance between supply and demand. This general belief influenced government policies until the 1930s. Following this law, since the economic cycle was seen as self-correcting, government did not intervene during periods of economic hardship because intervention was seen as futile. Malthus wrote two books, An Essay on the Principle of Population (published in 1798) and Principles of Political Economy (published in 1820). The second book, which was a rebuttal of Say's law, had little influence on contemporary economists. However, his first book became a major influence on classical liberalism. In that book, Malthus claimed that population growth would outstrip food production because population grew geometrically while food production grew arithmetically (a contrast sketched formally in the note at the end of this passage). As people were provided with food, they would reproduce until their growth outstripped the food supply. Nature would then provide a check to growth in the forms of vice and misery. No gains in income could prevent this and any welfare for the poor would be self-defeating. The poor were in fact responsible for their own problems, which could have been avoided through self-restraint. Ricardo, who was an admirer of Smith, covered many of the same topics, but while Smith drew conclusions from broadly empirical observations, Ricardo used deduction, drawing conclusions by reasoning from basic assumptions. While Ricardo accepted Smith's labour theory of value, he acknowledged that utility could influence the price of some rare items. Rents on agricultural land were seen as the production that was surplus to the subsistence required by the tenants. Wages were seen as the amount required for workers' subsistence and to maintain current population levels. According to his iron law of wages, wages could never rise beyond subsistence levels. Ricardo explained profits as a return on capital, which itself was the product of labour, but a conclusion many drew from his theory was that profit was a surplus appropriated by capitalists to which they were not entitled. Utilitarianism Utilitarianism provided the political justification for the implementation of economic liberalism.
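A schematic note on the Malthusian contrast referred to above (the notation is modern and purely illustrative, not Malthus's own): writing population as a geometric progression and food supply as an arithmetic one gives

$$ P(t) = P_0\, r^{t}, \qquad F(t) = F_0 + c\, t, \qquad r > 1,\ c > 0, $$

so the ratio $P(t)/F(t)$ grows without bound as $t$ increases, however close $r$ is to 1 and however large the constant increment $c$. This is the formal core of the claim that an unchecked population must eventually outstrip a linearly growing food supply.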
Classical liberalism developed in the 19th century, building on ideas from the previous century as a response to urbanization and to the Industrial Revolution in Europe and North America. Notable liberal individuals whose ideas contributed to classical liberalism include John Locke, Jean-Baptiste Say, Thomas Robert Malthus, and David Ricardo. It drew on classical economics, especially the economic ideas espoused by Adam Smith in Book One of The Wealth of Nations, and on a belief in natural law, progress, and utilitarianism. Classical liberalism, in contrast to liberal branches like social liberalism, looks more negatively on social policies, taxation and state involvement in the lives of individuals, and it advocates deregulation. Until the Great Depression and the rise of social liberalism, classical liberalism was simply known as economic liberalism. The term classical liberalism was applied as a retronym, to distinguish earlier 19th-century liberalism from social liberalism. By modern standards, in the United States the bare term liberalism often means social liberalism, but in Europe and Australia it often means classical liberalism. In the United States, classical liberalism is mainly conservative on economic issues but tends toward cultural liberalism on issues such as LGBT rights or abortion, and so it can have a different meaning from classical liberalism as used in other countries. In Europe, liberalism, whether social (especially radical) or conservative, is in itself classical liberalism, so the term classical liberalism there mainly refers to centre-right economic liberalism. Evolution of core beliefs Core beliefs of classical liberals included new ideas, which departed both from the older conservative idea of society as a family and from the later sociological concept of society as a complex set of social networks. Classical liberals believed that individuals are "egoistic, coldly calculating, essentially inert and atomistic" and that society is no more than the sum of its individual members. Classical liberals agreed with Thomas Hobbes that government had been created by individuals to protect themselves from each other and that the purpose of government should be to minimize conflict between individuals that would otherwise arise in a state of nature. These beliefs were complemented by a belief that labourers could be best motivated by financial incentive. This belief led to the passage of the Poor Law Amendment Act 1834, which limited the provision of social assistance, based on the idea that markets are the mechanism that most efficiently leads to wealth. Adopting Thomas Robert Malthus's population theory, they saw poor urban conditions as inevitable; they believed population growth would outstrip food production and regarded that consequence as desirable, because starvation would help limit population growth. They opposed any income or wealth redistribution, believing it would be dissipated by the lowest orders. Drawing on ideas of Adam Smith, classical liberals believed that it is in the common interest that all individuals be able to secure their own economic self-interest. They were critical of what would come to be the idea of the welfare state as interfering in a free market. Despite Smith's resolute recognition of the importance and value of labour and of labourers, classical liberals criticized labour's group rights being pursued at the expense of individual rights while accepting corporations' rights, which led to inequality of bargaining power.
Classical liberals argued that individuals should be free to obtain work from the highest-paying employers, while the profit motive would ensure that products that people desired were produced at prices they would pay. In a free market, both labour and capital would receive the greatest possible reward, while production would be organized efficiently to meet consumer demand. Classical liberals argued for what they called a minimal state, limited to the following functions: a government to protect individual rights and to provide services that cannot be provided in a free market; a common national defence to provide protection against foreign invaders; laws to provide protection for citizens from wrongs committed against them by other citizens, which included protection of private property, enforcement of contracts and common law; building and maintaining public institutions; and public works that included a stable currency, standard weights and measures and building and upkeep of roads, canals, harbours, railways, communications and postal services. Classical liberals asserted that rights are of a negative nature and therefore stipulate that other individuals and governments are to refrain from interfering with the free market, opposing social liberals who assert that individuals have positive rights, such as the right to vote, the right to an education, the right to health care, and the right to a living wage. For society to guarantee positive rights, it requires taxation over and above the minimum needed to enforce negative rights. Core beliefs of classical liberals did not necessarily include democracy nor government by a majority vote by citizens because "there is nothing in the bare idea of majority rule to show that majorities will always respect the rights of property or maintain rule of law". For example, James Madison argued for a constitutional republic with protections for individual liberty over a pure democracy, reasoning that in a pure democracy a "common passion or interest will, in almost every case, be felt by a majority of the whole ... and there is nothing to check the inducements to sacrifice the weaker party". In the late 19th century, classical liberalism developed into neoclassical liberalism, which argued for government to be as small as possible to allow the exercise of individual freedom. In its most extreme form, neoclassical liberalism advocated social Darwinism. Right-libertarianism is a modern form of neoclassical liberalism. However, Edwin Van de Haar states that although libertarianism is influenced by classical liberal thought, there are significant differences between them. Classical liberalism refuses to give priority to liberty over order and therefore does not exhibit the hostility to the state which is the defining feature of libertarianism. As such, right-libertarians believe classical liberals favor too much state involvement, arguing that they do not have enough respect for individual property rights and lack sufficient trust in the workings of the free market and its spontaneous order, which leads them to support a much larger state. Right-libertarians also criticize classical liberals as being too supportive of central banks and monetarist policies. Typology of beliefs Friedrich Hayek identified two different traditions within classical liberalism, namely the British tradition and the French tradition.
Hayek saw the British philosophers Bernard Mandeville, David Hume, Adam Smith, Adam Ferguson, Josiah Tucker and William Paley as representative of a tradition that articulated beliefs in empiricism, the common law and in traditions and institutions which had spontaneously evolved but were imperfectly understood. The French tradition included Jean-Jacques Rousseau, Marquis de Condorcet, the Encyclopedists and the Physiocrats. This tradition believed in rationalism and sometimes showed hostility to tradition and religion. Hayek conceded that the national labels did not exactly correspond to those belonging to each tradition since he saw the Frenchmen Montesquieu, Benjamin Constant and Alexis de Tocqueville as belonging to the British tradition and the British Thomas Hobbes, Joseph Priestley, Richard Price and Thomas Paine as belonging to the French tradition. Hayek also rejected the label laissez-faire as originating from the French tradition and alien to the beliefs of Hume and Smith. Guido De Ruggiero also identified differences between "Montesquieu and Rousseau, the English and the democratic types of liberalism" and argued that there was a "profound contrast between the two Liberal systems". He claimed that the spirit of "authentic English Liberalism" had "built up its work piece by piece without ever destroying what had once been built, but basing upon it every new departure". This liberalism had "insensibly adapted ancient institutions to modern needs" and "instinctively recoiled from all abstract proclamations of principles and rights". Ruggiero claimed that this liberalism was challenged by what he called the "new Liberalism of France" that was characterised by egalitarianism and a "rationalistic consciousness". In 1848, Francis Lieber distinguished between what he called "Anglican and Gallican Liberty". Lieber asserted that "independence in the highest degree, compatible with safety and broad national guarantees of liberty, is the great aim of Anglican liberty, and self-reliance is the chief source from which it draws its strength". On the other hand, Gallican liberty "is sought in government ... . [T]he French look for the highest degree of political civilisation in organisation, that is, in the highest degree of interference by public power". History Great Britain Classical liberalism in Britain traces its roots to the Whigs and radicals, and was heavily influenced by French physiocracy. Whiggery had become a dominant ideology following the Glorious Revolution of 1688 and was associated with supporting the British Parliament, upholding the rule of law, and defending landed property. The origins of rights were seen as being in an ancient constitution, which had existed from time immemorial. These rights, which some Whigs considered to include freedom of the press and freedom of speech, were justified by custom rather than as natural rights. These Whigs believed that the power of the executive had to be constrained. While they supported limited suffrage, they saw voting as a privilege rather than as a right. However, there was no consistency in Whig ideology and diverse writers including John Locke, David Hume, Adam Smith and Edmund Burke were all influential among Whigs, although none of them were universally accepted. From the 1790s to the 1820s, British radicals concentrated on parliamentary and electoral reform, emphasising natural rights and popular sovereignty. Richard Price and Joseph Priestley adapted the language of Locke to the ideology of radicalism. 
The radicals saw parliamentary reform as a first step toward dealing with their many grievances, including the treatment of Protestant Dissenters, the slave trade, high prices, and high taxes. There was greater unity among classical liberals than there had been among Whigs. Classical liberals were committed to individualism, liberty, and equal rights. They believed these goals required a free economy with minimal government interference. Some elements of Whiggery were uncomfortable with the commercial nature of classical liberalism. These elements became associated with conservatism.
Greek, Phoenician, Carthaginian and Etruscan traders introduced domestic cats to southern Europe. During the Roman Empire they were introduced to Corsica and Sardinia before the beginning of the 1st millennium. By the 5th century BC, they were familiar animals around settlements in Magna Graecia and Etruria. By the end of the Roman Empire in the 5th century, the Egyptian domestic cat lineage had arrived in a Baltic Sea port in northern Germany. During domestication, cats have undergone only minor changes in anatomy and behavior, and they are still capable of surviving in the wild. Several natural behaviors and characteristics of wildcats may have pre-adapted them for domestication as pets. These traits include their small size, social nature, obvious body language, love of play and relatively high intelligence. Captive Leopardus cats may also display affectionate behavior toward humans but were not domesticated. House cats often mate with feral cats, producing hybrids such as the Kellas cat in Scotland. Hybridisation between domestic and other Felinae species is also possible. Development of cat breeds started in the mid-19th century. An analysis of the domestic cat genome revealed that the ancestral wildcat genome was significantly altered in the process of domestication, as specific mutations were selected to develop cat breeds. Most breeds are founded on random-bred domestic cats. Genetic diversity of these breeds varies between regions, and is lowest in purebred populations, which show more than 20 deleterious genetic disorders. Characteristics Size The domestic cat has a smaller skull and shorter bones than the European wildcat. It averages about in head-to-body length and in height, with about long tails. Males are larger than females. Adult domestic cats typically weigh between . Skeleton Cats have seven cervical vertebrae (as do most mammals); 13 thoracic vertebrae (humans have 12); seven lumbar vertebrae (humans have five); three sacral vertebrae (as do most mammals, but humans have five); and a variable number of caudal vertebrae in the tail (humans have only vestigial caudal vertebrae, fused into an internal coccyx). The extra lumbar and thoracic vertebrae account for the cat's spinal mobility and flexibility. Attached to the spine are 13 ribs, the shoulder, and the pelvis. Unlike human arms, cat forelimbs are attached to the shoulder by free-floating clavicle bones which allow them to pass their body through any space into which they can fit their head. Skull The cat skull is unusual among mammals in having very large eye sockets and a powerful specialized jaw. Within the jaw, cats have teeth adapted for killing prey and tearing meat. When it overpowers its prey, a cat delivers a lethal neck bite with its two long canine teeth, inserting them between two of the prey's vertebrae and severing its spinal cord, causing irreversible paralysis and death. Compared to other felines, domestic cats have narrowly spaced canine teeth relative to the size of their jaw, which is an adaptation to their preferred prey of small rodents, which have small vertebrae. The premolar and first molar together compose the carnassial pair on each side of the mouth, which efficiently shears meat into small pieces, like a pair of scissors. These are vital in feeding, since cats' small molars cannot chew food effectively, and cats are largely incapable of mastication.
Although cats tend to have better teeth than most humans, with decay generally less likely because of a thicker protective layer of enamel, a less damaging saliva, less retention of food particles between teeth, and a diet mostly devoid of sugar, they are nonetheless subject to occasional tooth loss and infection. Claws Cats have protractible and retractable claws. In their normal, relaxed position, the claws are sheathed with the skin and fur around the paw's toe pads. This keeps the claws sharp by preventing wear from contact with the ground and allows the silent stalking of prey. The claws on the fore feet are typically sharper than those on the hind feet. Cats can voluntarily extend their claws on one or more paws. They may extend their claws in hunting or self-defense, climbing, kneading, or for extra traction on soft surfaces. Cats shed the outside layer of their claw sheaths when scratching rough surfaces. Most cats have five claws on their front paws, and four on their rear paws. The dewclaw is proximal to the other claws. More proximally is a protrusion which appears to be a sixth "finger". This special feature of the front paws, on the inside of the wrists has no function in normal walking, but is thought to be an antiskidding device used while jumping. Some cat breeds are prone to having extra digits (“polydactyly”). Polydactylous cats occur along North America's northeast coast and in Great Britain. Ambulation The cat is digitigrade. It walks on the toes, with the bones of the feet making up the lower part of the visible leg. Unlike most mammals, it uses a "pacing" gait and moves both legs on one side of the body before the legs on the other side. It registers directly by placing each hind paw close to the track of the corresponding fore paw, minimizing noise and visible tracks. This also provides sure footing for hind paws when navigating rough terrain. As it speeds up walking to trotting, its gait changes to a "diagonal" gait: The diagonally opposite hind and fore legs move simultaneously. Balance Most breeds of cat have a noted fondness for sitting in high places, or perching. A higher place may serve as a concealed site from which to hunt; domestic cats strike prey by pouncing from a perch such as a tree branch. Another possible explanation is that height gives the cat a better observation point, allowing it to survey its territory. A cat falling from heights of up to can right itself and land on its paws. During a fall from a high place, a cat reflexively twists its body and rights itself to land on its feet using its acute sense of balance and flexibility. This reflex is known as the cat righting reflex. A cat always rights itself in the same way during a fall, if it has enough time to do so, which is the case in falls of or more. How cats are able to right themselves when falling has been investigated as the "falling cat problem". Senses Vision Cats have excellent night vision and can see at only one-sixth the light level required for human vision. This is partly the result of cat eyes having a tapetum lucidum, which reflects any light that passes through the retina back into the eye, thereby increasing the eye's sensitivity to dim light. Large pupils are an adaptation to dim light. The domestic cat has slit pupils, which allow it to focus bright light without chromatic aberration. At low light, a cat's pupils expand to cover most of the exposed surface of its eyes. 
The domestic cat has rather poor color vision and only two types of cone cells, optimized for sensitivity to blue and yellowish green; its ability to distinguish between red and green is limited. A response to middle wavelengths from a system other than the rod cells might be due to a third type of cone. This appears to be an adaptation to low light levels rather than representing true trichromatic vision. Hearing The domestic cat's hearing is most acute in the range of 500 Hz to 32 kHz. It can detect an extremely broad range of frequencies ranging from 55 Hz to 79,000 Hz. It can hear a range of 10.5 octaves, while humans and dogs can hear ranges of about 9 octaves. Its hearing sensitivity is enhanced by its large movable outer ears, the pinnae, which amplify sounds and help detect the location of a noise. It can detect ultrasound, which enables it to detect ultrasonic calls made by rodent prey. Recent research has shown that cats have socio-spatial cognitive abilities to create mental maps of owners' locations based on hearing owners' voices. The ability to track something out of sight is called object permanence and it is found in humans, primates, and some non-primates. Smell Cats have an acute sense of smell, due in part to their well-developed olfactory bulb and a large surface of olfactory mucosa, about in area, which is about twice that of humans. Cats and many other animals have a Jacobson's organ in their mouths that is used in the behavioral process of flehmening. It allows them to sense certain aromas in a way that humans cannot. Cats are sensitive to pheromones such as 3-mercapto-3-methylbutan-1-ol, which they use to communicate through urine spraying and marking with scent glands. Many cats also respond strongly to plants that contain nepetalactone, especially catnip, as they can detect that substance at less than one part per billion. About 70–80% of cats are affected by nepetalactone. This response is also produced by other plants, such as silver vine (Actinidia polygama) and the herb valerian; it may be caused by the smell of these plants mimicking a pheromone and stimulating cats' social or sexual behaviors. Taste Cats have relatively few taste buds compared to humans (470 or so versus more than 9,000 on the human tongue). Domestic and wild cats share a taste receptor gene mutation that keeps their sweet taste buds from binding to sugary molecules, leaving them with no ability to taste sweetness. Their taste buds instead respond to acids, amino acids like protein, and bitter tastes. Cats also have a distinct temperature preference for their food, preferring food with a temperature around which is similar to that of a fresh kill and routinely rejecting food presented cold or refrigerated (which would signal to the cat that the "prey" item is long dead and therefore possibly toxic or decomposing). Whiskers To aid with navigation and sensation, cats have dozens of movable whiskers (vibrissae) over their body, especially their faces. These provide information on the width of gaps and on the location of objects in the dark, both by touching objects directly and by sensing air currents; they also trigger protective blink reflexes to protect the eyes from damage. Behavior Outdoor cats are active both day and night, although they tend to be slightly more active at night. Domestic cats spend the majority of their time in the vicinity of their homes but can range many hundreds of meters from this central point. 
They establish territories that vary considerably in size, in one study ranging from . The timing of cats' activity is quite flexible and varied, which means house cats may be more active in the morning and evening, as a response to greater human activity at these times. Cats conserve energy by sleeping more than most animals, especially as they grow older. The daily duration of sleep varies, usually between 12 and 16 hours, with 13 to 14 hours being the average. Some cats can sleep as much as 20 hours. The term "cat nap" for a short rest refers to the cat's tendency to fall asleep (lightly) for a brief period. While asleep, cats experience short periods of rapid eye movement sleep often accompanied by muscle twitches, which suggests they are dreaming. Sociability The social behavior of the domestic cat ranges from widely dispersed individuals to feral cat colonies that gather around a food source, based on groups of co-operating females. Within such groups, one cat is usually dominant over the others. Each cat in a colony holds a distinct territory, with sexually active males having the largest territories, which are about 10 times larger than those of female cats and may overlap with several females' territories. These territories are marked by urine spraying, by rubbing objects at head height with secretions from facial glands, and by defecation. Between these territories are neutral areas where cats watch and greet one another without territorial conflicts. Outside these neutral areas, territory holders usually chase away stranger cats, at first by staring, hissing, and growling and, if that does not work, by short but noisy and violent attacks. Despite this colonial organization, cats do not have a social survival strategy or a pack mentality, and always hunt alone. Life in proximity to humans and other domestic animals has led to a symbiotic social adaptation in cats, and cats may express great affection toward humans or other animals. Ethologically, the human keeper of a cat functions as a sort of surrogate for the cat's mother. Adult cats live their lives in a kind of extended kittenhood, a form of behavioral neoteny. Their high-pitched sounds may mimic the cries of a hungry human infant, making them particularly difficult for humans to ignore. Some pet cats are poorly socialized. In particular, older cats may show aggression toward newly arrived kittens, including biting and scratching; this type of behavior is known as feline asocial aggression. Domestic cats' scent rubbing behavior toward humans or other cats is thought to be a feline means of social bonding. Communication Domestic cats use many vocalizations for communication, including purring, trilling, hissing, growling/snarling, grunting, and several different forms of meowing. Their body language, including the position of the ears and tail, relaxation of the whole body, and kneading of the paws, all serve as indicators of mood. The tail and ears are particularly important social signal mechanisms in cats. A raised tail indicates a friendly greeting, and flattened ears indicate hostility. Tail-raising also indicates the cat's position in the group's social hierarchy, with dominant individuals raising their tails less often than subordinate ones. Feral cats are generally silent. Nose-to-nose touching is also a common greeting and may be followed by social grooming, which is solicited by one of the cats raising and tilting its head.
Purring may have developed as an evolutionary advantage as a signaling mechanism of reassurance between mother cats and nursing kittens. Post-nursing cats often purr as a sign of contentment: when being petted, becoming relaxed, or eating. The mechanism by which cats purr is elusive; the cat has no unique anatomical feature that is clearly responsible for the sound. Grooming Cats are known for spending considerable amounts of time licking their coats to keep them clean. The cat's tongue has backward-facing spines about 500 μm long, which are called papillae. These contain keratin, which makes them rigid, so the papillae act like a hairbrush. Some cats, particularly longhaired cats, occasionally regurgitate hairballs of fur that have collected in their stomachs from grooming. These clumps of fur are usually sausage-shaped and about long. Hairballs can be prevented with remedies that ease elimination of the hair through the gut, as well as regular grooming of the coat with a comb or stiff brush. Fighting Among domestic cats, males are more likely to fight than females. Among feral cats, the most common reason for cat fighting is competition between two males to mate with a female. In such cases, most fights are won by the heavier male. Another common reason for fighting in domestic cats is the difficulty of establishing territories within a small home. Female cats also fight over territory or to defend their kittens. Neutering will decrease or eliminate this behavior in many cases, suggesting that the behavior is linked to sex hormones. When cats become aggressive, they try to make themselves appear larger and more threatening by raising their fur, arching their backs, turning sideways and hissing or spitting. Often, the ears are pointed down and back to avoid damage to the inner ear and potentially to listen for any changes behind them while focused forward. They may also vocalize loudly and bare their teeth in an effort to further intimidate their opponent. Fights usually consist of grappling and delivering powerful slaps to the face and body with the forepaws as well as bites. Cats also throw themselves to the ground in a defensive posture to rake their opponent's belly with their powerful hind legs. Serious damage is rare, as the fights are usually short in duration, with the loser running away with little more than a few scratches to the face and ears. Fights for mating rights are typically more severe and injuries may include deep puncture wounds and lacerations. Normally, serious injuries from fighting are limited to infections of scratches and bites, though these can occasionally kill cats if untreated. In addition, bites are probably the main route of transmission of feline immunodeficiency virus. Sexually active males are usually involved in many fights during their lives, and often have decidedly battered faces with obvious scars and cuts to the ears and nose.
wild ancestor are diploid and both possess 38 chromosomes and roughly 20,000 genes. The leopard cat (Prionailurus bengalensis) was tamed independently in China around 5500 BC. This line of partially domesticated cats leaves no trace in the domestic cat populations of today. Domestication The earliest known indication for the taming of an African wildcat (F. lybica) was excavated close by a human Neolithic grave in Shillourokambos, southern Cyprus, dating to about 7500–7200 BC. Since there is no evidence of native mammalian fauna on Cyprus, the inhabitants of this Neolithic village most likely brought the cat and other wild mammals to the island from the Middle Eastern mainland. Scientists therefore assume that African wildcats were attracted to early human settlements in the Fertile Crescent by rodents, in particular the house mouse (Mus musculus) and were tamed by Neolithic farmers. This mutual relationship between early farmers and tamed cats lasted thousands of years. As agricultural practices spread, so did tame and domesticated cats. Wildcats of Egypt contributed to the maternal gene pool of the domestic cat at a later time. The earliest known evidence for the occurrence of the domestic cat in Greece dates to around 1200 BC. Greek, Phoenician, Carthaginian and Etruscan traders introduced domestic cats to southern Europe. During the Roman Empire they were introduced to Corsica and Sardinia before the beginning of the 1st millennium. By the 5th century BC, they were familiar animals around settlements in Magna Graecia and Etruria. By the end of the Roman Empire in the 5th century, the Egyptian domestic cat lineage had arrived in a Baltic Sea port in northern Germany. During domestication, cats have undergone only minor changes in anatomy and behavior, and they are still capable of surviving in the wild. Several natural behaviors and characteristics of wildcats may have pre-adapted them for domestication as pets. These traits include their small size, social nature, obvious body language, love of play and relatively high intelligence. Captive Leopardus cats may also display affectionate behavior toward humans but were not domesticated. House cats often mate with feral cats, producing hybrids such as the Kellas cat in Scotland. Hybridisation between domestic and other Felinae species is also possible. Development of cat breeds started in the mid 19th century. An analysis of the domestic cat genome revealed that the ancestral wildcat genome was significantly altered in the process of domestication, as specific mutations were selected to develop cat breeds. Most breeds are founded on random-bred domestic cats. Genetic diversity of these breeds varies between regions, and is lowest in purebred populations, which show more than 20 deleterious genetic disorders. Characteristics Size The domestic cat has a smaller skull and shorter bones than the European wildcat. It averages about in head-to-body length and in height, with about long tails. Males are larger than females. Adult domestic cats typically weigh between . Skeleton Cats have seven cervical vertebrae (as do most mammals); 13 thoracic vertebrae (humans have 12); seven lumbar vertebrae (humans have five); three sacral vertebrae (as do most mammals, but humans have five); and a variable number of caudal vertebrae in the tail (humans have only vestigial caudal vertebrae, fused into an internal coccyx). The extra lumbar and thoracic vertebrae account for the cat's spinal mobility and flexibility. 
Attached to the spine are 13 ribs, the shoulder, and the pelvis. Unlike human arms, cat forelimbs are attached to the shoulder by free-floating clavicle bones which allow them to pass their body through any space into which they can fit their head. Skull The cat skull is unusual among mammals in having very large eye sockets and a powerful specialized jaw. Within the jaw, cats have teeth adapted for killing prey and tearing meat. When it overpowers its prey, a cat delivers a lethal neck bite with its two long canine teeth, inserting them between two of the prey's vertebrae and severing its spinal cord, causing irreversible paralysis and death. Compared to other felines, domestic cats have narrowly spaced canine teeth relative to the size of their jaw, which is an adaptation to their preferred prey of small rodents, which have small vertebrae. The premolar and first molar together compose the carnassial pair on each side of the mouth, which efficiently shears meat into small pieces, like a pair of scissors. These are vital in feeding, since cats' small molars cannot chew food effectively, and cats are largely incapable of mastication. Although cats tend to have better teeth than most humans, with decay generally less likely because of a thicker protective layer of enamel, a less damaging saliva, less retention of food particles between teeth, and a diet mostly devoid of sugar, they are nonetheless subject to occasional tooth loss and infection. Claws Cats have protractible and retractable claws. In their normal, relaxed position, the claws are sheathed with the skin and fur around the paw's toe pads. This keeps the claws sharp by preventing wear from contact with the ground and allows the silent stalking of prey. The claws on the fore feet are typically sharper than those on the hind feet. Cats can voluntarily extend their claws on one or more paws. They may extend their claws in hunting or self-defense, climbing, kneading, or for extra traction on soft surfaces. Cats shed the outside layer of their claw sheaths when scratching rough surfaces. Most cats have five claws on their front paws, and four on their rear paws. The dewclaw is proximal to the other claws. More proximally is a protrusion which appears to be a sixth "finger". This special feature of the front paws, on the inside of the wrists has no function in normal walking, but is thought to be an antiskidding device used while jumping. Some cat breeds are prone to having extra digits (“polydactyly”). Polydactylous cats occur along North America's northeast coast and in Great Britain. Ambulation The cat is digitigrade. It walks on the toes, with the bones of the feet making up the lower part of the visible leg. Unlike most mammals, it uses a "pacing" gait and moves both legs on one side of the body before the legs on the other side. It registers directly by placing each hind paw close to the track of the corresponding fore paw, minimizing noise and visible tracks. This also provides sure footing for hind paws when navigating rough terrain. As it speeds up walking to trotting, its gait changes to a "diagonal" gait: The diagonally opposite hind and fore legs move simultaneously. Balance Most breeds of cat have a noted fondness for sitting in high places, or perching. A higher place may serve as a concealed site from which to hunt; domestic cats strike prey by pouncing from a perch such as a tree branch. Another possible explanation is that height gives the cat a better observation point, allowing it to survey its territory. 
A cat falling from heights of up to can right itself and land on its paws. During a fall from a high place, a cat reflexively twists its body and rights itself to land on its feet using its acute sense of balance and flexibility. This reflex is known as the cat righting reflex. A cat always rights itself in the same way during a fall, if it has enough time to do so, which is the case in falls of or more. How cats are able to right themselves when falling has been investigated as the "falling cat problem". Senses Vision Cats have excellent night vision and can see at only one-sixth the light level required for human vision. This is partly the result of cat eyes having a tapetum lucidum, which reflects any light that passes through the retina back into the eye, thereby increasing the eye's sensitivity to dim light. Large pupils are an adaptation to dim light. The domestic cat has slit pupils, which allow it to focus bright light without chromatic aberration. At low light, a cat's pupils expand to cover most of the exposed surface of its eyes. The domestic cat has rather poor color vision and only two types of cone cells, optimized for sensitivity to blue and yellowish green; its ability to distinguish between red and green is limited. A response to middle wavelengths from a system other than the rod cells might be due to a third type of cone. This appears to be an adaptation to low light levels rather than representing true trichromatic vision. Hearing The domestic cat's hearing is most acute in the range of 500 Hz to 32 kHz. It can detect an extremely broad range of frequencies ranging from 55 Hz to 79,000 Hz. It can hear a range of 10.5 octaves, while humans and dogs can hear ranges of about 9 octaves. Its hearing sensitivity is enhanced by its large movable outer ears, the pinnae, which amplify sounds and help detect the location of a noise. It can detect ultrasound, which enables it to detect ultrasonic calls made by rodent prey. Recent research has shown that cats have socio-spatial cognitive abilities to create mental maps of owners' locations based on hearing owners' voices. The ability to track something out of sight is called object permanence and it is found in humans, primates, and some non-primates. Smell Cats have an acute sense of smell, due in part to their well-developed olfactory bulb and a large surface of olfactory mucosa, about in area, which is about twice that of humans. Cats and many other animals have a Jacobson's organ in their mouths that is used in the behavioral process of flehmening. It allows them to sense certain aromas in a way that humans cannot. Cats are sensitive to pheromones such as 3-mercapto-3-methylbutan-1-ol, which they use to communicate through urine spraying and marking with scent glands. Many cats also respond strongly to plants that contain nepetalactone, especially catnip, as they can detect that substance at less than one part per billion. About 70–80% of cats are affected by nepetalactone. This response is also produced by other plants, such as silver vine (Actinidia polygama) and the herb valerian; it may be caused by the smell of these plants mimicking a pheromone and stimulating cats' social or sexual behaviors. Taste Cats have relatively few taste buds compared to humans (470 or so versus more than 9,000 on the human tongue). Domestic and wild cats share a taste receptor gene mutation that keeps their sweet taste buds from binding to sugary molecules, leaving them with no ability to taste sweetness. 
Their taste buds instead respond to acids, amino acids like protein, and bitter tastes. Cats also have a distinct temperature preference for their food, preferring food with a temperature around which is similar to that of a fresh kill and routinely rejecting food presented cold or refrigerated (which would signal to the cat that the "prey" item is long dead and therefore possibly toxic or decomposing). Whiskers To aid with navigation and sensation, cats have dozens of movable whiskers (vibrissae) over their body, especially their faces. These provide information on the width of gaps and on the location of objects in the dark, both by touching objects directly and by sensing air currents; they also trigger protective blink reflexes to protect the eyes from damage. Behavior Outdoor cats are active both day and night, although they tend to be slightly more active at night. Domestic cats spend the majority of their time in the vicinity of their homes but can range many hundreds of meters from this central point. They establish territories that vary considerably in size, in one study ranging from . The timing of cats' activity is quite flexible and varied, which means house cats may be more active in the morning and evening, as a response to greater human activity at these times. Cats conserve energy by sleeping more than most animals, especially as they grow older. The daily duration of sleep varies, usually between 12 and 16 hours, with 13 and 14 being the average. Some cats can sleep as much as 20 hours. The term "cat nap" for a short rest refers to the cat's tendency to fall asleep (lightly) for a brief period. While asleep, cats experience short periods of rapid eye movement sleep often accompanied by muscle twitches, which suggests they are dreaming. Sociability The social behavior of the domestic cat ranges from widely dispersed individuals to feral cat colonies that gather around a food source, based on groups of co-operating females. Within such groups, one cat is usually dominant over the others. Each cat in a colony holds a distinct territory, with sexually active males having the largest territories, which are about 10 times larger than those of female cats and may overlap with several females' territories. These territories are marked by urine spraying, by rubbing objects at head height with secretions from facial glands, and by defecation. Between these territories are neutral areas where cats watch and greet one another without territorial conflicts. Outside these neutral areas, territory holders usually chase away stranger cats, at first by staring, hissing, and growling and, if that does not work, by short but noisy and violent attacks. Despite this colonial organization, cats do not have a social survival strategy or a pack mentality, and always hunt alone. Life in proximity to humans and other domestic animals has led to a symbiotic social adaptation in cats, and cats may express great affection toward humans or other animals. Ethologically, the human keeper of a cat functions as a sort of surrogate for the cat's mother. Adult cats live their lives in a kind of extended kittenhood, a form of behavioral neoteny. Their high-pitched sounds may mimic the cries of a hungry human infant, making them particularly difficult for humans to ignore. Some pet cats are poorly socialized. In particular, older cats show aggressiveness toward newly arrived kittens, which include biting and scratching; this type of behavior is known as feline asocial aggression. 
Domestic cats' scent rubbing behavior toward humans or other cats is thought to be a feline means for social bonding. Communication Domestic cats use many vocalizations for communication, including purring, trilling, hissing, growling/snarling, grunting, and several different forms of meowing. Their body language, including position of ears and tail, relaxation of the whole body, and kneading of the paws, are all indicators of mood. The tail and ears are particularly important social signal mechanisms in cats. A raised tail indicates a friendly greeting, and flattened ears indicates hostility. Tail-raising also indicates the cat's position in the group's social hierarchy, with dominant individuals raising their tails less often than subordinate ones. Feral cats are generally silent. Nose-to-nose touching is also a common greeting and may be followed by social grooming, which is solicited by one of the cats raising and tilting its head. Purring may have developed as an evolutionary advantage as a signaling mechanism of reassurance between mother cats and nursing kittens. Post-nursing cats often purr as a sign of contentment: when being petted, becoming relaxed, or eating. The mechanism by which cats purr is elusive; the cat has no unique anatomical feature that is clearly responsible for the sound. Grooming Cats are known for spending considerable amounts of time licking their coats to keep them clean. The cat's tongue has backward-facing spines about 500 μm long, which are called papillae. These contain keratin which makes them rigid so the papillae act like a hairbrush. Some cats, particularly longhaired cats, occasionally regurgitate hairballs of fur that have collected in their stomachs from grooming. These clumps of fur are usually sausage-shaped and about long. Hairballs can be prevented with remedies that ease elimination of the hair through the gut, as well as regular grooming of the coat with a comb or stiff brush. Fighting Among domestic cats, males are more likely to fight than females. Among feral cats, the most common reason for cat fighting is competition between two males to mate with a female. In such cases, most fights are won by the heavier male. Another common reason for fighting in domestic cats is the difficulty of establishing territories within a small home. Female cats also fight over territory or to defend their kittens. Neutering will decrease or eliminate this behavior in many cases, suggesting that the behavior is linked to sex hormones. When cats become aggressive, they try to make themselves appear larger and more threatening by raising their fur, arching their backs, turning sideways and hissing or spitting. Often, the ears are pointed down and back to avoid damage to the inner ear and potentially listen for any changes behind them while focused forward. They may also vocalize loudly and bare their teeth in an effort to further intimidate their opponent. Fights usually consist of grappling and delivering powerful slaps to the face and body with the forepaws as well as bites. Cats also throw themselves to the ground in a defensive posture to rake their opponent's belly with their powerful hind legs. Serious damage is rare, as the fights are usually short in duration, with the loser running away with little more than a few scratches to the face and ears. Fights for mating rights are typically more severe and injuries may include deep puncture wounds and lacerations. 
Normally, serious injuries from fighting are limited to infections of scratches and bites, though these can occasionally kill cats if untreated. In addition, bites are probably the main route of transmission of feline immunodeficiency virus. Sexually active males are usually involved in many fights during their lives, and often have decidedly battered faces with obvious scars and cuts to their ears and nose. Hunting and feeding The shape and structure of cats' cheeks is insufficient to allow them to take in liquids using suction. Therefore, when drinking they lap with the tongue to draw liquid upward into their mouths. Lapping at a rate of four times a second, the cat touches the smooth tip of its tongue to the surface of the water, and quickly retracts it like a corkscrew, drawing water upward. Feral cats and free-fed house cats consume several small meals in a day. The frequency and size of meals varies between individuals. They select food based on its temperature, smell and texture; they dislike chilled foods and respond most strongly to moist foods rich in amino acids, which are similar to meat. Cats reject novel flavors (a response termed neophobia) and learn quickly to avoid foods that have tasted unpleasant in the past. It is also a common misconception that cats like milk/cream, as they tend to avoid sweet food and milk. Most adult cats are lactose intolerant; the sugar in milk is not easily digested and may cause soft stools or diarrhea. Some also develop odd eating habits and like to eat or chew on things like wool, plastic, cables, paper, string, aluminum foil, or even coal. This condition, pica, can threaten their health, depending on the amount and toxicity of the items eaten. Cats hunt small prey, primarily birds and rodents, and are often used as a form of pest control. Cats use two hunting strategies, either stalking prey actively, or waiting in ambush until an animal comes close enough to be captured. The strategy used depends on the prey species in the area, with cats waiting in ambush outside burrows, but tending to actively stalk birds. Domestic cats are a major predator of wildlife in the United States, killing an estimated 1.3 to 4.0 billion birds and 6.3 to 22.3 billion mammals annually. Certain species appear more susceptible than others; for example, 30% of house sparrow mortality is linked to the domestic cat. In the recovery of ringed robins (Erithacus rubecula) and dunnocks (Prunella modularis), 31% of deaths were a result of cat predation. In parts of North America, the presence of larger carnivores such as coyotes which prey on cats and other small predators reduces the effect of predation by cats and other small predators such as opossums and raccoons on bird numbers and variety. Perhaps the best-known element of cats' hunting behavior, which is commonly misunderstood and often appalls cat owners because it looks like torture, is that cats often appear to "play" with prey by releasing it after capture. This cat and mouse behavior is due to an instinctive imperative to ensure that the prey is weak enough to be killed without endangering the cat. Another poorly understood element of cat hunting behavior is the presentation of prey to human guardians. One explanation is that cats adopt humans into their social group and share excess kill with others in the group according to the dominance hierarchy, in which humans are reacted to as if they are at, or near, the top. 
Another explanation is that they attempt to teach their guardians to hunt or to help their human as if feeding "an elderly cat, or an inept kitten". This hypothesis is inconsistent with the fact that male cats also bring home prey, despite males having negligible involvement in raising kittens. Play Domestic cats, especially young kittens, are known for their love of play. This behavior mimics hunting and is important in helping kittens learn to stalk, capture, and kill prey. Cats also engage in play fighting, with each other and with humans. This behavior may be a way for cats to practice the skills needed for real combat, and might also reduce any fear they associate with launching attacks on other animals. Cats also tend to play with toys more when they are hungry. Owing to the close similarity between play and hunting, cats prefer to play with objects that resemble prey, such as small furry toys that move rapidly, but rapidly lose interest. They become habituated to a toy they have played with before. String is often used as a toy, but if it is eaten, it can become caught at the base of the cat's tongue and then move into the intestines, a medical emergency which can cause serious illness, even death. Owing to the risks posed by cats eating string, it is sometimes replaced with a laser pointer's dot, which cats may chase.
by indie bands Mineral, The Gloria Record, and Bright Eyes Slang Crank (person), a pejorative term for a person who holds an unshakable belief that most of his or her contemporaries consider to be false. Prank call or crank call, a false telephone call Crank, slang term for powdered substituted amphetamines, especially methamphetamine Other uses Cranks (restaurant), a chain of English wholefood vegetarian restaurants Crank (surname), a surname; see there for a list of notable people with the surname Crank conjecture, a term coined by Freeman Dyson to explain congruence patterns in integer partitions Crank of a partition, a certain integer associated with a partition of an integer
Crank (mechanism), in mechanical engineering, a bent portion of an axle or shaft, or an arm keyed at right angles to the end of a shaft, by which motion is imparted to or received from it Crankset, the component of a bicycle drivetrain that converts the reciprocating motion of the rider's legs into rotational motion Crankshaft, the part of a piston engine which translates reciprocating linear piston motion into rotation Crank machine, a machine used to deliver hard labour in early Victorian prisons in the United Kingdom Places Crank, Merseyside, a village near Rainford, England Crank Halt railway station in the village of Crank Cranks, Kentucky, United States Popular culture Crank (film), a 2006 film starring Jason Statham Crank: High Voltage, the 2009 sequel Crank (Hoodoo Gurus
Increasingly, taxonomists try to avoid naming taxa that are not clades; that is, taxa that are not monophyletic. Some of the relationships between organisms that the molecular biology arm of cladistics has revealed include that fungi are closer relatives to animals than they are to plants, archaea are now considered different from bacteria, and multicellular organisms may have evolved from archaea. The term "clade" is also used with a similar meaning in other fields besides biology, such as historical linguistics; see Cladistics § In disciplines other than biology. Etymology The term "clade" was coined in 1957 by the biologist Julian Huxley to refer to the result of cladogenesis, the evolutionary splitting of a parent species into two distinct species, a concept Huxley borrowed from Bernhard Rensch. Many commonly named groups – rodents and insects, for example – are clades because, in each case, the group consists of a common ancestor with all its descendant branches. Rodents, for example, are a branch of mammals that split off after the end of the period when the clade Dinosauria stopped being the dominant terrestrial vertebrates 66 million years ago. The original population and all its descendants are a clade. The rodent clade corresponds to the order Rodentia, and insects to the class Insecta. These clades include smaller clades, such as chipmunk or ant, each of which consists of even smaller clades. The clade "rodent" is in turn included in the mammal, vertebrate and animal clades. History of nomenclature and taxonomy The idea of a clade did not exist in pre-Darwinian Linnaean taxonomy, which was based by necessity only on internal or external morphological similarities between organisms. Many of the better known animal groups in Linnaeus' original Systema Naturae (mostly vertebrate groups) do represent clades. The phenomenon of convergent evolution is responsible for many cases of misleading similarities in the
controversial. As an example, the full current classification of Anas platyrhynchos (the mallard duck) on Wikispecies lists 40 clades from Eukaryota down. The name of a clade is conventionally a plural, where the singular refers to each member individually. A unique exception is the reptile clade Dracohors, which was formed by haplology from Latin "draco" and "cohors", i.e. "the dragon cohort"; its form with a suffix added should be e.g. "dracohortian". Definition A clade is by definition monophyletic, meaning that it contains one ancestor (which can be an organism, a population, or a species) and all its descendants. The ancestor can be known or unknown; any and all members of a clade can be extant or extinct. Clades and phylogenetic trees The science that tries to reconstruct phylogenetic trees and thus discover clades is called phylogenetics or cladistics, the latter term coined by Ernst Mayr (1965) and derived from "clade". The results of phylogenetic/cladistic analyses are tree-shaped diagrams called cladograms; they, and all their branches, are phylogenetic hypotheses. Three methods of defining clades are featured in phylogenetic nomenclature: node-, stem-, and apomorphy-based (see Phylogenetic nomenclature § Phylogenetic definitions of clade names for detailed definitions). Terminology The relationship between clades can be described in several ways: A clade located within a clade is said to be nested within that clade. In the diagram, the hominoid clade, i.e. the apes and humans, is nested within the primate clade. Two clades are sisters if they have an immediate common ancestor. In the diagram, lemurs and lorises are sister clades, while humans and tarsiers are not. A clade A is basal to a clade B if A branches off the lineage leading to B before the first branch leading only to members of B. In the adjacent diagram, the strepsirrhine/prosimian clade is basal to the hominoid/ape clade. In this example, both the haplorhines and the prosimians could equally be considered the most basal groupings; it is better to say that the prosimians are the sister group to the rest of the primates. This also avoids unintended and misconceived connotations about evolutionary advancement, complexity, diversity, ancestor status, and antiquity, e.g. due to the impact of sampling diversity and extinction. Basal clades should not be confused with stem groupings, as the latter are associated with paraphyletic or unresolved groupings. Age The age of a clade can be measured in two ways: crown age and stem age. The crown age of a clade is the age of the most recent common ancestor of all of its living species; the stem age is the age at which the clade's lineage diverged from its sister clade.
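The node-based notion of a clade — one ancestor together with all of its descendants — maps naturally onto a simple tree check. The sketch below is illustrative only: it hard-codes a toy primate tree loosely based on the examples above (lemurs, lorises, tarsiers, hominoids), with simplified, assumed labels rather than any published phylogeny, and tests whether a set of leaf taxa is monophyletic by comparing it with the full leaf set under its most recent common ancestor.

```python
# Minimal sketch: test whether a set of leaf taxa forms a clade (is monophyletic)
# in a rooted tree. The topology and labels are a simplified toy example.

TREE = {
    "Primates": ["Strepsirrhini", "Haplorhini"],
    "Strepsirrhini": ["lemurs", "lorises"],
    "Haplorhini": ["tarsiers", "Simiiformes"],
    "Simiiformes": ["monkeys", "Hominoidea"],
    "Hominoidea": ["gibbons", "humans"],
}

def leaves(node):
    """All leaf taxa descending from (and including) `node`."""
    children = TREE.get(node, [])
    if not children:
        return {node}
    out = set()
    for child in children:
        out |= leaves(child)
    return out

def mrca(taxa, root="Primates"):
    """Most recent common ancestor of `taxa`: walk down the tree while a single
    child still contains every queried taxon."""
    node = root
    while True:
        for child in TREE.get(node, []):
            if taxa <= leaves(child):
                node = child
                break
        else:
            return node

def is_clade(taxa):
    """A set of leaves is a clade iff it equals the full leaf set of its MRCA."""
    taxa = set(taxa)
    return leaves(mrca(taxa)) == taxa

print(is_clade({"lemurs", "lorises"}))   # True  - sister taxa form a clade
print(is_clade({"humans", "tarsiers"}))  # False - their MRCA has other descendants too
print(is_clade(leaves("Haplorhini")))    # True  - an ancestor with all its descendants
```

Under this toy topology, {lemurs, lorises} passes the test because the two are sister taxa, while {humans, tarsiers} fails because their most recent common ancestor also has other descendants.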
all national telecom operators and provides international voice and SMS roaming in 121 countries and across 227 operators through prepaid and postpaid roaming tariffs. MTN also has a national ISP license, which the company received in November 2008. MTN was the first company to introduce the popular per-second billing system in the country (also known as "pay as you talk"), allowing its subscribers to transparently track their talk-time and receive billing summaries via SMS. The scheme was so popular that other GSM companies quickly adopted this method. Internet Afghanistan was given legal control of the ".af" domain in 2003, and the Afghanistan Network Information Center (AFGNIC) was established to administer domain names. As of 2016, there are at least 55 internet service providers (ISPs) in the country. Internet use in Afghanistan has also grown rapidly, with over 5 million users as of 2016. According to the Ministry of Communications, the following are some of the different ISPs operating in Afghanistan: TiiTACS Internet Services AfSat Afghan Telecom Neda CeReTechs Insta Telecom Global Services (P) Limited Rana Technologies Global Entourage Services LiwalNet Vizocom Movj Technology Television There are over 106 television operators in Afghanistan and 320 television transmitters, many of which are based in Kabul, while others broadcast from other provinces. Selected foreign channels are also shown to the public in Afghanistan, but with the use of the internet, over 3,500 international TV channels may be accessed in Afghanistan. Radio There are an estimated 150 FM radio operators throughout the country. Broadcasts are in Dari, Pashto, English, Uzbeki and a number of other languages. Radio listeners are generally decreasing and are being slowly outnumbered by television. Of Afghanistan's six main cities, Kandahar and Khost have the most radio listeners. Kabul and Jalalabad have a moderate number of listeners, while Mazar-e-Sharif and especially Herat have very few. Postal service In 1870, a central post office was established at Bala Hissar in Kabul and a post office in the capital of each province. The service was slowly expanded over the years, and postal offices had been established in every large city by 1918. Afghanistan became a member of the Universal Postal Union in 1928, and the postal administration was elevated to the Ministry of Communication in 1934. Civil war disrupted the issuing of official stamps during the 1980s–90s, but by 1999 the postal service was operating again. Postal services to/from Kabul worked remarkably well all throughout the war years. Postal services to/from Herat resumed in 1997. The Afghan government has reported to the UPU several times about illegal stamps being issued and sold in 2003 and 2007. Afghanistan Post has been reorganizing the postal service in the 2000s with assistance from Pakistan Post. The Afghanistan Postal Commission was formed to prepare a written policy for the development of the postal sector, which will form the basis of a new postal services law governing licensing of postal services providers. The project was expected to finish by 2008. Satellite In January 2014 the Afghan Ministry of Communications and Information Technology signed an agreement with Eutelsat for the use of satellite resources to enhance deployment of Afghanistan's national broadcasting and telecommunications infrastructure as well as its international connectivity. Afghansat 1 was officially launched in May 2014.
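The per-second ("pay as you talk") billing described above is easiest to see with a small comparison. The sketch below is a minimal illustration, not MTN's actual tariff: the rate and call length are made-up values chosen only to show why billing by the second, rather than rounding up to whole minutes, was perceived as more transparent.

```python
# Illustrative only: per-second vs. per-minute billing for a single call.
# The rate below is a hypothetical tariff in generic currency units, not an actual MTN price.
import math

RATE_PER_MINUTE = 6.0                 # hypothetical price per minute
RATE_PER_SECOND = RATE_PER_MINUTE / 60

def cost_per_second(duration_s: int) -> float:
    """'Pay as you talk': every second is billed, nothing is rounded up."""
    return duration_s * RATE_PER_SECOND

def cost_per_minute(duration_s: int) -> float:
    """Traditional billing: the call duration is rounded up to whole minutes."""
    return math.ceil(duration_s / 60) * RATE_PER_MINUTE

call = 95  # a 1 minute 35 second call
print(f"per-second billing: {cost_per_second(call):.2f}")  # 9.50
print(f"per-minute billing: {cost_per_minute(call):.2f}")  # 12.00
```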
Communication in Afghanistan is under the control of the Ministry of Communications and Information Technology (MCIT). It has expanded rapidly since the Karzai administration took over in late 2001, with the establishment of wireless carriers, internet services, radio stations and television channels. The Afghan government signed a $64.5 million agreement in 2006 with China's ZTE on the establishment of a countrywide optical fiber cable network. The project began to improve telephone, internet, television and radio broadcast services throughout Afghanistan. About 90% of the country's population had access to communication services in 2014. Afghanistan uses its own space satellite called Afghansat 1. There are about 18 million mobile phone users in the country. Telecom companies include Afghan Telecom, Afghan Wireless, Etisalat, MTN, Roshan, Salaam. 20% of the population have access to the internet. Telephone There are about 32 million GSM mobile phone subscribers in Afghanistan as of 2016, with over 114,192 fixed telephone lines and over 264,000 CDMA subscribers. Mobile communications have improved because of the introduction of wireless carriers into this developing country. The first was Afghan Wireless, a US-based company founded by Ehsan Bayat. The second was Roshan, which began providing services to all major cities within Afghanistan. There are also a number of VSAT stations in major cities such as Kabul, Kandahar, Herat, Mazari Sharif, and Jalalabad, providing international and domestic voice/data connectivity. The international calling code for Afghanistan is +93. The following is a partial list of mobile phone companies in the country: Afghan Wireless (provides 4G services); Etisalat (provides 4G services); MTN Group (provides 4G services); Roshan; Salaam Network. All the companies providing communication services are obligated to deliver 2.5% of their income to the communication development fund annually. According to the Ministry of Communication and Information Technology there are 4,760 active towers throughout the country, which cover 85% of the population. The Ministry of Communication and Information Technology plans to expand its services in remote parts of the country, where the remaining 15% of the population will be covered with the installation of 700 new towers. Phone calls in Afghanistan have been monitored by the National Security Agency, according to WikiLeaks. MTN-Afghanistan According to a three-year duopoly agreement between the MCIT and mobile operators AWCC and Roshan, no mobile operator could enter the Afghan telecom market until July 2006. The third GSM license was awarded to Areeba in September 2005 for a period of 15 years and a total license fee of $40.1 million. Areeba was a subsidiary of the Lebanon-based firm Investcom in consortium with Alokozai-FZE. After commencing services in July 2006, Areeba had an estimated subscribership of 200,000 by the end of that year. Areeba was later acquired by the South African-based Mobile Telephone Network (MTN) in mid-2007 as part of a $5.53 billion global merger between the two companies. MTN-Afghanistan is a subsidiary of the South African-based MTN Group, a multinational telecommunications company operating across the Middle East and Africa. MTN is the majority (90%) shareholder, while the International Finance Corporation (IFC), at 9%, is also a debt and equity shareholder of MTN-Afghanistan. MTN operates at 900–1800 MHz GSM band,
first "Bishop of Prussia" at the Fourth Council of the Lateran. His seat as a bishop remained at Oliwa Abbey on the western side of the Vistula, whereas the pagan Prussian (later East Prussian) territory was on the eastern side of it. The attempts by Konrad of Masovia to subdue the Prussian lands had picked long-term and intense border quarrels, whereby the Polish lands of Masovia, Cuyavia and even Greater Poland became subject to continuous Prussian raids. Bishop Christian asked the new Pope Honorius III for the consent to start another Crusade, however a first campaign in 1217 proved a failure and even the joint efforts by Duke Konrad with the Polish High Duke Leszek I the White and Duke Henry I the Bearded of Silesia in 1222/23 only led to the reconquest of Chełmno Land but did not stop the Prussian invasions. At least Christian was able to establish the Diocese of Chełmno east of the Vistula, adopting the episcopal rights from the Masovian Bishop of Płock, confirmed by both Duke Konrad and the Pope. Duke Konrad of Masovia still was not capable to end the Prussian attacks on his territory and in 1226 began to conduct negotiations with the Teutonic Knights under Grand Master Hermann von Salza in order to strengthen his forces. As von Salza initially hesitated to offer his services, Christian created the military Order of Dobrzyń (Fratres Milites Christi) in 1228, however to little avail. Meanwhile, von Salza had to abandon his hope to establish an Order's State in the Burzenland region of Transylvania, which had led to an éclat with King Andrew II of Hungary. He obtained a charter by Emperor Frederick II issued in the 1226 Golden Bull of Rimini, whereby Chełmno Land would be the unshared possession of the Teutonic Knights, which was confirmed by Duke Konrad of Masovia in the 1230 Treaty
of Kruszwica. Christian ceded his possessions to the new State of the Teutonic Order and in turn was appointed Bishop of Chełmno the next year. Bishop Christian continued his mission in Sambia (Samland), where from 1233 to 1239 he was held captive by pagan Prussians; he was freed in exchange for five other hostages, who in turn were released for a ransom of 800 marks granted to him by Pope Gregory IX. He had to contend with the Knights' constant curtailment of his autonomy and asked the Roman Curia for mediation. In 1243, the Papal
for the first time since the Arab League boycotted the company in 1968. In April 2007, in Canada, the name "Coca-Cola Classic" was changed back to "Coca-Cola". The word "Classic" was removed because "New Coke" was no longer in production, eliminating the need to differentiate between the two. The formula remained unchanged. In January 2009, Coca-Cola stopped printing the word "Classic" on the labels of bottles sold in parts of the southeastern United States. The change was part of a larger strategy to rejuvenate the product's image. The word "Classic" was removed from all Coca-Cola products by 2011. In November 2009, due to a dispute over wholesale prices of Coca-Cola products, Costco stopped restocking its shelves with Coke and Diet Coke for two months; a separate pouring rights deal in 2013 saw Coke products removed from Costco food courts in favor of Pepsi. Some Costco locations (such as the ones in Tucson, Arizona) additionally sell imported Coca-Cola from Mexico with cane sugar instead of corn syrup from separate distributors. Coca-Cola introduced the 7.5-ounce mini-can in 2009, and on September 22, 2011, the company announced price reductions, asking retailers to sell eight-packs for $2.99. That same day, Coca-Cola announced the 12.5-ounce bottle, to sell for 89 cents. A 16-ounce bottle has sold well at 99 cents since being re-introduced, but the price was going up to $1.19. In 2012, Coca-Cola resumed business in Myanmar after 60 years of absence due to U.S.-imposed investment sanctions against the country. Coca-Cola's bottling plant is located in Yangon and is part of the company's five-year plan and $200 million investment in Myanmar. Coca-Cola with its partners is to invest US$5 billion in its operations in India by 2020. In February 2021, as a plan to combat plastic waste, Coca-Cola said that it would start selling its sodas in bottles made from 100% recycled plastic material in the United States, and by 2030 planned to recycle one bottle or can for each one it sold. Coca-Cola started by selling 2000 paper bottles to see if they held up due to the risk of safety and of changing the taste of the drink. Production Listed ingredients Carbonated water Sugar (sucrose or high-fructose corn syrup (HFCS) depending on country of origin) Caffeine Phosphoric acid Caramel color (E150d) Natural flavorings A typical can of Coca-Cola (12 fl ounces/355 ml) contains 38 grams of sugar, 50 mg of sodium, 0 grams fat, 0 grams potassium, and 140 calories. On May 5, 2014, Coca-Cola said it is working to remove a controversial ingredient, brominated vegetable oil, from all of its drinks. Formula of natural flavorings The exact formula of Coca-Cola's natural flavorings (but not its other ingredients, which are listed on the side of the bottle or can) is a trade secret. The original copy of the formula was held in SunTrust Bank's main vault in Atlanta for 86 years. Its predecessor, the Trust Company, was the underwriter for the Coca-Cola Company's initial public offering in 1919. On December 8, 2011, the original secret formula was moved from the vault at SunTrust Banks to a new vault containing the formula which will be on display for visitors to its World of Coca-Cola museum in downtown Atlanta. According to Snopes, a popular myth states that only two executives have access to the formula, with each executive having only half the formula. 
However, several sources state that while Coca-Cola does have a rule restricting access to only two executives, each knows the entire formula and others, in addition to the prescribed duo, have known the formulation process. On February 11, 2011, Ira Glass said on his PRI radio show, This American Life, that TAL staffers had found a recipe in "Everett Beal's Recipe Book", reproduced in the February 28, 1979, issue of The Atlanta Journal-Constitution, that they believed was either Pemberton's original formula for Coca-Cola, or a version that he made either before or after the product hit the market in 1886. The formula basically matched the one found in Pemberton's diary. Coca-Cola archivist Phil Mooney acknowledged that the recipe "could be a precursor" to the formula used in the original 1886 product, but emphasized that Pemberton's original formula is not the same as the one used in the current product. Use of stimulants in formula When launched, Coca-Cola's two key ingredients were cocaine and caffeine. The cocaine was derived from the coca leaf and the caffeine from kola nut (also spelled "cola nut" at the time), leading to the name Coca-Cola. Coca leaf Pemberton called for five ounces of coca leaf per gallon of syrup (approximately 37 g/L), a significant dose; in 1891, Candler claimed his formula (altered extensively from Pemberton's original) contained only a tenth of this amount. Coca-Cola once contained an estimated nine milligrams of cocaine per glass. (For comparison, a typical dose or "line" of cocaine is 50–75 mg.) In 1903, it was removed. After 1904, instead of using fresh leaves, Coca-Cola started using "spent" leaves – the leftovers of the cocaine-extraction process with trace levels of cocaine. Since then, Coca-Cola has used a cocaine-free coca leaf extract. Today, that extract is prepared at a Stepan Company plant in Maywood, New Jersey, the only manufacturing plant authorized by the federal government to import and process coca leaves, which it obtains from Peru and Bolivia. Stepan Company extracts cocaine from the coca leaves, which it then sells to Mallinckrodt, the only company in the United States licensed to purify cocaine for medicinal use. Long after the syrup had ceased to contain any significant amount of cocaine, in North Carolina "dope" remained a common colloquialism for Coca-Cola, and "dope-wagons" were trucks that transported it. Kola nuts for caffeine The kola nut acts as a flavoring and the original source of caffeine in Coca-Cola. It contains about 2.0 to 3.5% caffeine, and has a bitter flavor. In 1911, the U.S. government sued in United States v. Forty Barrels and Twenty Kegs of Coca-Cola, hoping to force the Coca-Cola Company to remove caffeine from its formula. The court found that the syrup, when diluted as directed, would result in a beverage containing 1.21 grains (or 78.4 mg) of caffeine per serving. The case was decided in favor of the Coca-Cola Company at the district court, but subsequently in 1912, the U.S. Pure Food and Drug Act was amended, adding caffeine to the list of "habit-forming" and "deleterious" substances which must be listed on a product's label. In 1913 the case was appealed to the Sixth Circuit in Cincinnati, where the ruling was affirmed, but then appealed again in 1916 to the Supreme Court, where the government effectively won as a new trial was ordered. The company then voluntarily reduced the amount of caffeine in its product, and offered to pay the government's legal costs to settle and avoid further litigation. 
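For readers unfamiliar with the grain as a unit, the parenthetical conversion above follows directly from the standard definition of the grain (1 grain = 64.79891 mg):

$$1.21\ \text{grains} \times 64.79891\ \tfrac{\text{mg}}{\text{grain}} \approx 78.4\ \text{mg of caffeine per serving}$$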
Coca-Cola contains 34 mg of caffeine per 12 fluid ounces (9.8 mg per 100 ml). Franchised production model The actual production and distribution of Coca-Cola follows a franchising model. The Coca-Cola Company only produces a syrup concentrate, which it sells to bottlers throughout the world, who hold Coca-Cola franchises for one or more geographical areas. The bottlers produce the final drink by mixing the syrup with filtered water and sweeteners, putting the mixture into cans and bottles, and carbonating it, which the bottlers then sell and distribute to retail stores, vending machines, restaurants, and foodservice distributors. The Coca-Cola Company owns minority shares in some of its largest franchises, such as Coca-Cola Enterprises, Coca-Cola Amatil, Coca-Cola Hellenic Bottling Company, and Coca-Cola FEMSA, as well as some smaller ones, such as Coca-Cola Bottlers Uzbekistan, but fully independent bottlers produce almost half of the volume sold in the world. Independent bottlers are allowed to sweeten the drink according to local tastes. The bottling plant in Skopje, Macedonia, received the 2009 award for "Best Bottling Company". Geographic spread Since it announced its intention to begin distribution in Myanmar in June 2012, Coca-Cola has been officially available in every country in the world except Cuba and North Korea. However, it is reported to be available in both countries as a grey import. Coca-Cola has been a point of legal discussion in the Middle East. In the early 20th century, a fatwa was created in Egypt to discuss the question of "whether Muslims were permitted to drink Coca-Cola and Pepsi cola." The fatwa states: "According to the Muslim Hanefite, Shafi'ite, etc., the rule in Islamic law of forbidding or allowing foods and beverages is based on the presumption that such things are permitted unless it can be shown that they are forbidden on the basis of the Qur'an." The Muslim jurists stated that, unless the Qu'ran specifically prohibits the consumption of a particular product, it is permissible to consume. Another clause was discussed, whereby the same rules apply if a person is unaware of the condition or ingredients of the item in question. Brand portfolio This is a list of variants of Coca-Cola introduced around the world. In addition to the caffeine-free version of the original, additional fruit flavors have been included over the years. Not included here are versions of Diet Coke and Coca-Cola Zero Sugar; variant versions of those no-calorie colas can be found at their respective articles. Caffeine-Free Coca-Cola (1983–present) – Coca-Cola without the caffeine. Coca-Cola Cherry (1985–present) – Coca-Cola with a cherry flavor. Was available in Canada starting in 1996. Originally marketed as Cherry Coke (Cherry Coca-Cola) in North America until 2006. New Coke / Coca-Cola II (1985–2002) – An unpopular formula change, remained after the original formula quickly returned and was later rebranded as Coca-Cola II until its full discontinuation in 2002. In 2019, New Coke was re-introduced to the market to promote the third season of the Netflix original series, Stranger Things. Golden Coca-Cola (2001) was a limited edition produced by Beijing Coca-Cola company to celebrate Beijing's successful bid to host the Olympics. Coca-Cola with Lemon (2001–2005) – Coca-Cola with a lemon flavor. 
Available in: Australia, American Samoa, Austria, Belgium, Brazil, China, Denmark, Federation of Bosnia and Herzegovina, Finland, France, Germany, Hong Kong, Iceland, Korea, Luxembourg, Macau, Malaysia, Mongolia, Netherlands, New Caledonia, New Zealand, Réunion, Singapore, Spain, Switzerland, Taiwan, Tunisia, United Kingdom, United States and West Bank-Gaza Coca-Cola Vanilla (2002–2005; 2007–present) – Coca-Cola with a vanilla flavor. Available in: Austria, Australia, China, Czech Republic, Canada, Finland, France, Germany, Hong Kong, New Zealand, Malaysia, Slovakia, South-Africa, Sweden, Switzerland, United Kingdom and United States. It was reintroduced in June 2007 by popular demand. Coca-Cola with Lime (2005–present) – Coca-Cola with a lime flavor. Available in Belgium, Lithuania, Netherlands, Singapore, Canada, the United Kingdom, and the United States. Coca-Cola Raspberry (2005; 2009–present) – Coca-Cola with a raspberry flavor. Originally only available in New Zealand. Available in: Australia, United States, and the United Kingdom in Coca-Cola Freestyle fountain since 2009. Coca-Cola Black Cherry Vanilla (2006–2007) – Coca-Cola with a combination of black cherry and vanilla flavor. It replaced and was replaced by Vanilla Coke in June 2007. Coca-Cola Blāk (2006–2008) – Coca-Cola with a rich coffee flavor, formula depends on the country. Only available in the United States, France, Canada, Czech Republic, Bosnia and Herzegovina, Bulgaria and Lithuania Coca-Cola Citra (2005–present) – Coca-Cola with a citrus flavor. Only available in Bosnia and Herzegovina, New Zealand, and Japan. Coca-Cola Orange (2007) – Coca-Cola with an orange flavor. Was available in the United Kingdom and Gibraltar for a limited time. In Germany, Austria, and Switzerland it is sold under the label Mezzo Mix. Currently available in Coca-Cola Freestyle fountain outlets in the United States since 2009 and in the United Kingdom since 2014. Coca-Cola Life (2013–2020) – A version of Coca-Cola with stevia and sugar as sweeteners rather than simply sugar. Coca-Cola Ginger (2016–present) – A version that mixes in the taste of ginger beer. Available in Australia, New Zealand, and as a limited edition in Vietnam. Coca-Cola Orange Vanilla (2019–2021) – Coca-Cola with an orange vanilla flavor (intended to imitate the flavor of an orange Creamsicle). Made available nationwide in the United States on February 25, 2019. Coca-Cola Energy (2019–present) – An energy drink with a flavor similar to standard Coca-Cola, with guarana, vitamin B3 (niacinamide), vitamin B6 (pyridoxine hydrochloride), and extra caffeine. Introduced in 2019 in the United Kingdom, and released in the United States and Canada in January 2020. Also available in zero-sugar, cherry, and zero-sugar + cherry variants. In May 2021, the company announced they would discontinue the product in North America but it will remain available in other places and it will focus on its traditional beverages. Coca-Cola Cinnamon (2019–2020) – Coca-Cola with cinnamon flavor. Released in October 2019 in the United States as a limited release for the 2019 holiday season. Made available again in 2020 for the holiday season. Coca-Cola Cherry Vanilla (2020–present) – Coca-Cola with cherry vanilla flavor. Released in the United States on February 10, 2020. Coca-Cola with Coffee (2019–present) – Coca-Cola, with coffee. Introduced in 2019 in various European markets, and released in the United States and Canada in January 2021. 
Available in dark blend, vanilla and caramel versions, and also in zero-sugar dark blend and vanilla variants. Logo design The Coca-Cola logo was created by John Pemberton's bookkeeper, Frank Mason Robinson, in 1885. Robinson came up with the name and chose the logo's distinctive cursive script. The writing style used, known as Spencerian Script, was developed in the mid-19th century and was the dominant form of formal handwriting in the United States during that period. Robinson also played a significant role in early Coca-Cola advertising. His promotional suggestions to Pemberton included giving away thousands of free drink coupons and plastering the city of Atlanta with publicity banners and streetcar signs. Coca-Cola came under scrutiny in Egypt in 1951 because of a conspiracy theory that the Coca-Cola logo, when reflected in a mirror, spells out "No Mohammed no Mecca" in Arabic. Contour bottle design The Coca-Cola bottle, called the "contour bottle" within the company, was created by bottle designer Earl R. Dean and Coca-Cola's general counsel, Harold Hirsch. In 1915, The Coca-Cola Company was represented by their general counsel to launch a competition among its bottle suppliers as well as any competition entrants to create a new bottle for their beverage that would distinguish it from other beverage bottles, "a bottle which a person could recognize even if they felt it in the dark, and so shaped that, even if broken, a person could tell at a glance what it was." Chapman J. Root, president of the Root Glass Company of Terre Haute, Indiana, turned the project over to members of his supervisory staff, including company auditor T. Clyde Edwards, plant superintendent Alexander Samuelsson, and Earl R. Dean, bottle designer and supervisor of the bottle molding room. Root and his subordinates decided to base the bottle's design on one of the soda's two ingredients, the coca leaf or the kola nut, but were unaware of what either ingredient looked like. Dean and Edwards went to the Emeline Fairbanks Memorial Library and were unable to find any information about coca or kola. Instead, Dean was inspired by a picture of the gourd-shaped cocoa pod in the Encyclopædia Britannica. Dean made a rough sketch of the pod and returned to the plant to show Root. He explained to Root how he could transform the shape of the pod into a bottle. Root gave Dean his approval. Faced with the upcoming scheduled maintenance of the mold-making machinery, over the next 24 hours Dean sketched out a concept drawing which was approved by Root the next morning. Chapman Root approved the prototype bottle and a design patent was issued on the bottle in November 1915. The prototype never made it to production since its middle diameter was larger than its base, making it unstable on conveyor belts. Dean resolved this issue by decreasing the bottle's middle diameter. During the 1916 bottler's convention, Dean's contour bottle was chosen over other entries and was on the market the same year. By 1920, the contour bottle became the standard for The Coca-Cola Company. A revised version was also patented in 1923. Because the Patent Office releases the Patent Gazette on Tuesday, the bottle was patented on December 25, 1923, and was nicknamed the "Christmas bottle." Today, the contour Coca-Cola bottle is one of the most recognized packages on the planet..."even in the dark!". As a reward for his efforts, Dean was offered a choice between a $500 bonus or a lifetime job at the Root Glass Company. 
He chose the lifetime job and kept it until the Owens-Illinois Glass Company bought out the Root Glass Company in the mid-1930s. Dean went on to work in other Midwestern glass factories. Raymond Loewy updated the design in 1955 to accommodate larger formats. Others have attributed inspiration for the design not to the cocoa pod, but to a Victorian hooped dress. In 1944, Associate Justice Roger J. Traynor of the Supreme Court of California took advantage of a case involving a waitress injured by an exploding Coca-Cola bottle to articulate the doctrine of strict liability for defective products. Traynor's concurring opinion in Escola v. Coca-Cola Bottling Co. is widely recognized as a landmark case in U.S. law today. Examples Designer bottles Karl Lagerfeld is the latest designer to have created a collection of aluminum bottles for Coca-Cola. Lagerfeld is not the first fashion designer to create a special version of the famous Coca-Cola Contour bottle. A number of other limited edition bottles by fashion designers for Coca-Cola Light soda have been created in the last few years, including Jean Paul Gaultier. In 2009, in Italy, Coca-Cola Light had a Tribute to Fashion to celebrate 100 years of the recognizable contour bottle. Well known Italian designers Alberta Ferretti, Blumarine, Etro, Fendi, Marni, Missoni, Moschino, and Versace each designed limited edition bottles. In 2019, Coca-Cola shared the first beverage bottle made with ocean plastic. Competitors Pepsi, the flagship product of PepsiCo, The Coca-Cola Company's main rival in the soft drink industry, is usually second to Coke in sales, and outsells Coca-Cola in some markets. RC Cola, now owned by the Dr Pepper Snapple Group, the third-largest soft drink manufacturer, is also widely available. Around the world, many local brands compete with Coke. In South and Central America Kola Real, also known as Big Cola, is a growing competitor to Coca-Cola. On the French island of Corsica, Corsica Cola, made by brewers of the local Pietra beer, is a growing competitor to Coca-Cola. In the French region of Brittany, Breizh Cola is available. In Peru, Inca Kola outsells Coca-Cola, which led The Coca-Cola Company to purchase the brand in 1999. In Sweden, Julmust outsells Coca-Cola during the Christmas season. In Scotland, the locally produced Irn-Bru was more popular than Coca-Cola until 2005, when Coca-Cola and Diet Coke began to outpace its sales. In the former East Germany, Vita Cola, invented during Communist rule, is gaining popularity. In India, Coca-Cola ranked third behind the leader, Pepsi-Cola, and local drink Thums Up. The Coca-Cola Company purchased Thums Up in 1993. , Coca-Cola held a 60.9% market-share in India. Tropicola, a domestic drink, is served in Cuba instead of Coca-Cola, due to a United States embargo. French brand Mecca Cola and British brand Qibla Cola are competitors to Coca-Cola in the Middle East. In Turkey, Cola Turka, in Iran and the Middle East, Zamzam Cola and Parsi Cola, in some parts of China, China Cola, in the Czech Republic and Slovakia, Kofola, in Slovenia, Cockta, and the inexpensive Mercator Cola, sold only in the country's biggest supermarket chain, Mercator, are some of the brand's competitors. Classiko Cola, made by Tiko Group, the largest manufacturing company in Madagascar, is a competitor to Coca-Cola in many regions. In 2021, Coca-Cola petitioned to cancel registrations for the marks Thums Up and Limca issued to Meenaxi Enterprise, Inc. based on misrepresentation of source. 
The Trademark Trial and Appeal Board concluded that "Meenaxi engaged in blatant misuse in a manner calculated to trade on the goodwill and reputation of Coca-Cola in an attempt to confuse consumers in the United States that its Thums Up and Limca marks were licensed or produced by the source of the same types of cola and lemon-lime soda sold under these marks for decades in India."

Advertising

Coca-Cola's advertising has significantly affected American culture, and it is frequently credited with inventing the modern image of Santa Claus as an old man in a red-and-white suit. Although the company did start using the red-and-white Santa image in the 1930s, with its winter advertising campaigns illustrated by Haddon Sundblom, the motif was already common. Coca-Cola was not even the first soft drink company to use the modern image of Santa Claus in its advertising: White Rock Beverages used Santa in advertisements for its ginger ale in 1923, after first using him to sell mineral water in 1915. Before Santa Claus, Coca-Cola relied on images of smartly dressed young women to sell its beverages. Coca-Cola's first such advertisement appeared in 1895, featuring the young Bostonian actress Hilda Clark as its spokeswoman. 1941 saw the first use of the nickname "Coke" as an official trademark for the product, with a series of advertisements informing consumers that "Coke means Coca-Cola". In 1971, a song from a Coca-Cola commercial called "I'd Like to Teach the World to Sing", produced by Billy Davis, became a hit single. During the 1980s the term "cola wars" emerged, describing the ongoing battle between Coca-Cola and Pepsi for supremacy in the soft drink industry. Coca-Cola and Pepsi were competing with new products, global expansion, US marketing initiatives and sport sponsorships (Steve M. McKelvey, "Coca-Cola vs. PepsiCo — A 'Super' Battleground for the Cola Wars?", Sport Marketing Quarterly 15 (2006): 114–123, via CiteSeerX 10.1.1.392.5206). Coke's advertising is pervasive, as one of Woodruff's stated goals was to ensure that everyone on Earth drank Coca-Cola as their preferred beverage. This is especially true in southern areas of the United States, such as Atlanta, where Coke was born. Some Coca-Cola television commercials from 1960 through 1986 were written and produced by former Atlanta radio veteran Don Naylor (WGST 1936–1950, WAGA 1951–1959) during his career as a producer for the McCann Erickson advertising agency. Many of these early television commercials for Coca-Cola featured movie stars, sports heroes, and popular singers. During the 1980s, Pepsi-Cola ran a series of television advertisements showing people participating in taste tests demonstrating that, according to the commercials, "fifty percent of the participants who said they preferred Coke actually chose the Pepsi." Statisticians pointed out the problematic nature of a 50/50 result: most likely, the taste tests showed that in blind tests, most people cannot tell the difference between Pepsi and Coke. Coca-Cola ran ads to combat Pepsi's ads in an incident sometimes referred to as the cola wars; one of Coke's ads compared the so-called Pepsi challenge to two chimpanzees deciding which tennis ball was furrier. Thereafter, Coca-Cola regained its leadership in the market. Selena was a spokesperson for Coca-Cola from 1989 until the time of her death.
She filmed three commercials for the company. During 1994, to commemorate her five years with the company, Coca-Cola issued special Selena Coke bottles. The Coca-Cola Company purchased Columbia Pictures in 1982, and began inserting Coke-product images into many of its films. After a few early successes during Coca-Cola's ownership, Columbia began to underperform, and the studio was sold to Sony in 1989. Coca-Cola has gone through a number of different advertising slogans in its long history, including "The pause that refreshes", "I'd like to buy the world a Coke", and "Coke is it". In 1999, The Coca-Cola Company introduced the Coke Card, a loyalty program that offered deals on items like clothes, entertainment and food when the cardholder purchased a Coca-Cola Classic. The scheme was cancelled after three years, with a Coca-Cola spokesperson declining to state why. The company then introduced another loyalty campaign in 2006, My Coke Rewards. This allows consumers to earn points by entering codes from specially marked packages of Coca-Cola products into a website. These points can be redeemed for various prizes or sweepstakes entries. In Australia in 2011, Coca-Cola began the "Share a Coke" campaign, where the Coca-Cola logo was removed from the bottles and replaced with first names. Coca-Cola used the 150 most popular names in Australia to print on the bottles. The campaign was paired with a website page, Facebook page, and an online "share a virtual
drink: colas. The Coca-Cola Company produces concentrate, which is then sold to licensed Coca-Cola bottlers throughout the world. The bottlers, who hold exclusive territory contracts with the company, produce the finished product in cans and bottles from the concentrate, in combination with filtered water and sweeteners. A typical can contains of sugar (usually in the form of high-fructose corn syrup in North America). The bottlers then sell, distribute, and merchandise Coca-Cola to retail stores, restaurants, and vending machines throughout the world. The Coca-Cola Company also sells concentrate for soda fountains of major restaurants and foodservice distributors. The Coca-Cola Company has on occasion introduced other cola drinks under the Coke name. The most common of these is Diet Coke, along with others including Caffeine-Free Coca-Cola, Diet Coke Caffeine-Free, Coca-Cola Zero Sugar, Coca-Cola Cherry, Coca-Cola Vanilla, and special versions with lemon, lime, and coffee. Coca-Cola was called Coca-Cola Classic from July 1985 to 2009, to distinguish it from "New Coke". Based on Interbrand's "best global brand" study of 2020, Coca-Cola was the world's sixth most valuable brand. In 2013, Coke products were sold in over 200 countries worldwide, with consumers drinking more than 1.8 billion company beverage servings each day. Coca-Cola ranked No. 87 in the 2018 Fortune 500 list of the largest United States corporations by total revenue. History 19th century historical origins Confederate Colonel John Pemberton, wounded in the American Civil War and addicted to morphine, also had a medical degree and began a quest to find a substitute for the problematic drug. In 1885 at Pemberton's Eagle Drug and Chemical House, his drugstore in Columbus, Georgia, he registered Pemberton's French Wine Coca nerve tonic. Pemberton's tonic may have been inspired by the formidable success of Vin Mariani, a French-Corsican coca wine, but his recipe additionally included the African kola nut, the beverage's source of caffeine. It is also worth noting that a Spanish drink called "Kola Coca" was presented at a contest in Philadelphia in 1885, a year before the official birth of Coca-Cola. The rights for this Spanish drink were bought by Coca-Cola in 1953. In 1886, when Atlanta and Fulton County passed prohibition legislation, Pemberton responded by developing Coca-Cola, a nonalcoholic version of Pemberton's French Wine Coca. It was marketed as "Coca-Cola: The temperance drink", which appealed to many people as the temperance movement enjoyed wide support during this time. The first sales were at Jacob's Pharmacy in Atlanta, Georgia, on May 8, 1886, where it initially sold for five cents a glass. Drugstore soda fountains were popular in the United States at the time due to the belief that carbonated water was good for the health, and Pemberton's new drink was marketed and sold as a patent medicine, Pemberton claiming it a cure for many diseases, including morphine addiction, indigestion, nerve disorders, headaches, and impotence. Pemberton ran the first advertisement for the beverage on May 29 of the same year in the Atlanta Journal. By 1888, three versions of Coca-Cola – sold by three separate businesses – were on the market. A co-partnership had been formed on January 14, 1888, between Pemberton and four Atlanta businessmen: J.C. Mayfield, A.O. Murphey, C.O. Mullahy, and E.H. Bloodworth. 
Not codified by any signed document, a verbal statement given by Asa Candler years later asserted under testimony that he had acquired a stake in Pemberton's company as early as 1887. John Pemberton declared that the name "Coca-Cola" belonged to his son, Charley, but the other two manufacturers could continue to use the formula. Charley Pemberton's record of control over the "Coca-Cola" name was the underlying factor that allowed for him to participate as a major shareholder in the March 1888 Coca-Cola Company incorporation filing made in his father's place. Charley's exclusive control over the "Coca-Cola" name became a continual thorn in Asa Candler's side. Candler's oldest son, Charles Howard Candler, authored a book in 1950 published by Emory University. In this definitive biography about his father, Candler specifically states: " on April 14, 1888, the young druggist Asa Griggs Candler purchased a one-third interest in the formula of an almost completely unknown proprietary elixir known as Coca-Cola." The deal was actually between John Pemberton's son Charley and Walker, Candler & Co. – with John Pemberton acting as cosigner for his son. For $50 down and $500 in 30 days, Walker, Candler & Co. obtained all of the one-third interest in the Coca-Cola Company that Charley held, all while Charley still held on to the name. After the April 14 deal, on April 17, 1888, one-half of the Walker/Dozier interest shares were acquired by Candler for an additional $750. Company In 1892, Candler set out to incorporate a second company: "The Coca-Cola Company" (the current corporation). When Candler had the earliest records of the "Coca-Cola Company" destroyed in 1910, the action was claimed to have been made during a move to new corporation offices around this time. After Candler had gained a better foothold on Coca-Cola in April 1888, he nevertheless was forced to sell the beverage he produced with the recipe he had under the names "Yum Yum" and "Koke". This was while Charley Pemberton was selling the elixir, although a cruder mixture, under the name "Coca-Cola", all with his father's blessing. After both names failed to catch on for Candler, by the middle of 1888, the Atlanta pharmacist was quite anxious to establish a firmer legal claim to Coca-Cola, and hoped he could force his two competitors, Walker and Dozier, completely out of the business, as well. John Pemberton died suddenly on August 16, 1888. Asa Candler then decided to move swiftly forward to attain full control of the entire Coca-Cola operation. Charley Pemberton, an alcoholic and opium addict, unnerved Asa Candler more than anyone else. Candler is said to have quickly maneuvered to purchase the exclusive rights to the name "Coca-Cola" from Pemberton's son Charley immediately after he learned of Dr. Pemberton's death. One of several stories states that Candler approached Charley's mother at John Pemberton's funeral and offered her $300 in cash for the title to the name. Charley Pemberton was found on June 23, 1894, unconscious, with a stick of opium by his side. Ten days later, Charley died at Atlanta's Grady Hospital at the age of 40. In Charles Howard Candler's 1950 book about his father, he stated: "On August 30 [1888], he Asa Candler became the sole proprietor of Coca-Cola, a fact which was stated on letterheads, invoice blanks and advertising copy." With this action on August 30, 1888, Candler's sole control became technically all true. 
Candler had negotiated with Margaret Dozier and her brother Woolfolk Walker a full payment amounting to $1,000, which all agreed Candler could pay off with a series of notes over a specified time span. By May 1, 1889, Candler was now claiming full ownership of the Coca-Cola beverage, with a total investment outlay by Candler for the drink enterprise over the years amounting to $2,300. In 1914, Margaret Dozier, as co-owner of the original Coca-Cola Company in 1888, came forward to claim that her signature on the 1888 Coca-Cola Company bill of sale had been forged. Subsequent analysis of other similar transfer documents had also indicated John Pemberton's signature had most likely been forged as well, which some accounts claim was precipitated by his son Charley. On September 12, 1919, Coca-Cola Co. was purchased by a group of investors for $25 million and reincorporated in Delaware. The company publicly offered 500,000 shares of the company for $40 a share. In 1986, The Coca-Cola Company merged with two of their bottling operators (owned by JTL Corporation and BCI Holding Corporation) to form Coca-Cola Enterprises Inc. (CCE). In December 1991, Coca-Cola Enterprises merged with the Johnston Coca-Cola Bottling Group, Inc. Origins of bottling The first bottling of Coca-Cola occurred in Vicksburg, Mississippi, at the Biedenharn Candy Company on March 12, 1894. The proprietor of the bottling works was Joseph A. Biedenharn. The original bottles were Hutchinson bottles, very different from the much later hobble-skirt design of 1915 now so familiar. A few years later two entrepreneurs from Chattanooga, Tennessee, namely Benjamin F. Thomas and Joseph B. Whitehead, proposed the idea of bottling and were so persuasive that Candler signed a contract giving them control of the procedure for only one dollar. Candler later realized that he had made a grave mistake. Candler never collected his dollar, but in 1899, Chattanooga became the site of the first Coca-Cola bottling company. Candler remained very content just selling his company's syrup. The loosely termed contract proved to be problematic for The Coca-Cola Company for decades to come. Legal matters were not helped by the decision of the bottlers to subcontract to other companies, effectively becoming parent bottlers. This contract specified that bottles would be sold at 5¢ each and had no fixed duration, leading to the fixed price of Coca-Cola from 1886 to 1959. 20th century The first outdoor wall advertisement that promoted the Coca-Cola drink was painted in 1894 in Cartersville, Georgia. Cola syrup was sold as an over-the-counter dietary supplement for upset stomach. By the time of its 50th anniversary, the soft drink had reached the status of a national icon in the US. In 1935, it was certified kosher by Atlanta rabbi Tobias Geffen. With the help of Harold Hirsch, Geffen was the first person outside the company to see the top-secret ingredients list after Coke faced scrutiny from the American Jewish population regarding the drink's kosher status. Consequently, the company made minor changes in the sourcing of some ingredients so it could continue to be consumed by America's Jewish population, including during Passover. The longest running commercial Coca-Cola soda fountain anywhere was Atlanta's Fleeman's Pharmacy, which first opened its doors in 1914. Jack Fleeman took over the pharmacy from his father and ran it until 1995; closing it after 81 years. 
On July 12, 1944, the one-billionth gallon of Coca-Cola syrup was manufactured by The Coca-Cola Company. Cans of Coke first appeared in 1955. New Coke On April 23, 1985, Coca-Cola, amid much publicity, attempted to change the formula of the drink with "New Coke". Follow-up taste tests revealed most consumers preferred the taste of New Coke to both Coke and Pepsi but Coca-Cola management was unprepared for the public's nostalgia for the old drink, leading to a backlash. The company gave in to protests and returned to the old formula under the name Coca-Cola Classic, on July 10, 1985. "New Coke" remained available and was renamed Coke II in 1992; it was discontinued in 2002. 21st century On July 5, 2005, it was revealed that Coca-Cola would resume operations in Iraq for the first time since the Arab League boycotted the company in 1968. In April 2007, in Canada, the name "Coca-Cola Classic" was changed back to "Coca-Cola". The word "Classic" was removed because "New Coke" was no longer in production, eliminating the need to differentiate between the two. The formula remained unchanged. In January 2009, Coca-Cola stopped printing the word "Classic" on the labels of bottles sold in parts of the southeastern United States. The change was part of a larger strategy to rejuvenate the product's image. The word "Classic" was removed from all Coca-Cola products by 2011. In November 2009, due to a dispute over wholesale prices of Coca-Cola products, Costco stopped restocking its shelves with Coke and Diet Coke for two months; a separate pouring rights deal in 2013 saw Coke products removed from Costco food courts in favor of Pepsi. Some Costco locations (such as the ones in Tucson, Arizona) additionally sell imported Coca-Cola from Mexico with cane sugar instead of corn syrup from separate distributors. Coca-Cola introduced the 7.5-ounce mini-can in 2009, and on September 22, 2011, the company announced price reductions, asking retailers to sell eight-packs for $2.99. That same day, Coca-Cola announced the 12.5-ounce bottle, to sell for 89 cents. A 16-ounce bottle has sold well at 99 cents since being re-introduced, but the price was going up to $1.19. In 2012, Coca-Cola resumed business in Myanmar after 60 years of absence due to U.S.-imposed investment sanctions against the country. Coca-Cola's bottling plant is located in Yangon and is part of the company's five-year plan and $200 million investment in Myanmar. Coca-Cola with its partners is to invest US$5 billion in its operations in India by 2020. In February 2021, as a plan to combat plastic waste, Coca-Cola said that it would start selling its sodas in bottles made from 100% recycled plastic material in the United States, and by 2030 planned to recycle one bottle or can for each one it sold. Coca-Cola started by selling 2000 paper bottles to see if they held up due to the risk of safety and of changing the taste of the drink. Production Listed ingredients Carbonated water Sugar (sucrose or high-fructose corn syrup (HFCS) depending on country of origin) Caffeine Phosphoric acid Caramel color (E150d) Natural flavorings A typical can of Coca-Cola (12 fl ounces/355 ml) contains 38 grams of sugar, 50 mg of sodium, 0 grams fat, 0 grams potassium, and 140 calories. On May 5, 2014, Coca-Cola said it is working to remove a controversial ingredient, brominated vegetable oil, from all of its drinks. 
Formula of natural flavorings The exact formula of Coca-Cola's natural flavorings (but not its other ingredients, which are listed on the side of the bottle or can) is a trade secret. The original copy of the formula was held in SunTrust Bank's main vault in Atlanta for 86 years. Its predecessor, the Trust Company, was the underwriter for the Coca-Cola Company's initial public offering in 1919. On December 8, 2011, the original secret formula was moved from the vault at SunTrust Banks to a new vault containing the formula which will be on display for visitors to its World of Coca-Cola museum in downtown Atlanta. According to Snopes, a popular myth states that only two executives have access to the formula, with each executive having only half the formula. However, several sources state that while Coca-Cola does have a rule restricting access to only two executives, each knows the entire formula and others, in addition to the prescribed duo, have known the formulation process. On February 11, 2011, Ira
The cofinality of a partially ordered set A is the least of the cardinalities of the cofinal subsets of A. This definition of cofinality relies on the axiom of choice, as it uses the fact that every non-empty set of cardinal numbers has a least member. The cofinality of a partially ordered set A can alternatively be defined as the least ordinal x such that there is a function from x to A with cofinal image. This second definition makes sense without the axiom of choice. If the axiom of choice is assumed, as will be the case in the rest of this article, then the two definitions are equivalent. Cofinality can be similarly defined for a directed set and is used to generalize the notion of a subsequence in a net.

Examples

The cofinality of a partially ordered set with a greatest element is 1, as the set consisting only of the greatest element is cofinal (and must be contained in every other cofinal subset). In particular, the cofinality of any nonzero finite ordinal, or indeed any finite directed set, is 1, since such sets have a greatest element. Every cofinal subset of a partially ordered set must contain all maximal elements of that set. Thus the cofinality of a finite partially ordered set is equal to the number of its maximal elements. In particular, let A be a set of size n, and consider the set of subsets of A containing no more than m elements. This is partially ordered under inclusion and the subsets with m elements are maximal. Thus the cofinality of this poset is n choose m. A subset of the natural numbers N is cofinal in N if and only if it is infinite, and therefore the cofinality of ℵ0 is ℵ0. Thus ℵ0 is a regular cardinal. The cofinality of the real numbers with their usual ordering is ℵ0, since N is cofinal in R. The usual ordering of R is not order isomorphic to c, the cardinality of the real numbers, which has cofinality strictly greater than ℵ0. This demonstrates that the cofinality depends on the order; different orders on the same set may have different cofinality.

Properties

If A admits a totally ordered cofinal subset, then we can find a subset B that is well-ordered and cofinal in A. Any subset of B is also well-ordered. Two cofinal subsets of B with minimal cardinality (i.e. their cardinality is the cofinality of B) need not be order isomorphic: for example, if B = ω + ω, then both ω + ω and {ω + n : n < ω}, viewed as subsets of B, have the countable cardinality of the cofinality of B but are not order isomorphic. But cofinal subsets of B with minimal order type will be order isomorphic.

Cofinality of ordinals and other well-ordered sets

The cofinality of an ordinal α is the smallest ordinal δ that is the order type of a cofinal subset of α.
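As a worked check of the "n choose m" example and of the regular/singular distinction above (the specific numbers below are illustrative additions and do not appear in the passage itself), one can compute:
\[
A = \{1,2,3\}, \qquad P = \{\, S \subseteq A : |S| \le 2 \,\}.
\]
The maximal elements of \(P\) under inclusion are the three two-element subsets \(\{1,2\}, \{1,3\}, \{2,3\}\); every cofinal subset of \(P\) must contain all of them, and they are themselves cofinal, so
\[
\operatorname{cf}(P) = \binom{3}{2} = 3.
\]
By contrast, among cardinals, \(\aleph_0\) is regular because every cofinal subset of \(\mathbb{N}\) is infinite, whereas \(\aleph_\omega = \sup_{n<\omega}\aleph_n\) is singular: \(\{\aleph_n : n < \omega\}\) is a countable cofinal subset, so \(\operatorname{cf}(\aleph_\omega) = \aleph_0 < \aleph_\omega\).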
rulers for much the same purpose. In the first millennium BCE, the Castro culture emerged in northwestern Portugal and Spain in the region extending from the Douro river up to the Minho, but soon expanding north along the coast, and east following the river valleys. It was an autochthonous evolution of Atlantic Bronze Age communities. In 2008, the origins of the Celts were attributed to this period by John T. Koch and supported by Barry Cunliffe. The Ave River Valley in Portugal was the core region of this culture, with a large number of small settlements (the castros), but also settlements known as citadels or oppida by the Roman conquerors. These had several rings of walls and the Roman conquest of the citadels of Abobriga, Lambriaca and Cinania around 138 BCE was possible only by prolonged siege. Ruins of notable citadels still exist, and are known by archaeologists as Citânia de Briteiros, Citânia de Sanfins, Cividade de Terroso and Cividade de Bagunte.

167–160 BC

Rebels who took power in the city but with the citadel still held by the former rulers could by no means regard their tenure of power as secure. One such incident played an important part in the history of the Maccabean Revolt against the Seleucid Empire. The Hellenistic garrison of Jerusalem and local supporters of the Seleucids held out for many years in the Acra citadel, making Maccabean rule in the rest of Jerusalem precarious. When finally gaining possession of the place, the Maccabeans pointedly destroyed and razed the Acra, though they constructed another citadel for their own use in a different part of Jerusalem.

400–1600

At various periods, and particularly during the Middle Ages and the Renaissance, the citadel – having its own fortifications, independent of the city walls – was the last defence of a besieged army, often held after the town had been conquered. Locals and defending armies have often held out citadels long after the city had fallen. For example, in the 1543 Siege of Nice the Ottoman forces led by Barbarossa conquered and pillaged the town and took many captives, but the citadel held out. In the Philippines, the Ivatan people of the northern islands of Batanes often built fortifications to protect themselves during times of war. They built their so-called idjangs on hills and elevated areas. These fortifications were likened to European castles because of their purpose. Usually, the only entrance to the castles would be via a rope ladder that would only be lowered for the villagers and could be kept away when invaders arrived.

1600 to the present

In time of war the citadel in many cases afforded retreat to the people living in the areas around the town. However, citadels were often used also to protect a garrison or political power from the inhabitants of the town where it was located, being designed to ensure loyalty from the town that they defended. This was used, for example, during the Dutch Wars of 1664–1667, when King Charles II of England constructed a Royal Citadel at Plymouth, an important channel port which needed to be defended from a possible naval attack. However, due to Plymouth's support for the Parliamentarians in the then-recent English Civil War, the Plymouth Citadel was so designed that its guns could fire on the town as well as on the sea approaches. Barcelona had a great citadel built in 1714 to intimidate the Catalans against repeating their mid-17th- and early-18th-century rebellions against the Spanish central government. In the 19th century, when the political climate had liberalized enough to permit it, the people of Barcelona had the citadel torn down, and replaced it with the city's main central park, the Parc de la Ciutadella. A similar example is the Citadella in Budapest, Hungary. The attack on the Bastille in the French Revolution – though afterwards remembered mainly for the release of the handful of prisoners incarcerated there – was to a considerable degree motivated by the structure's being a Royal citadel in the midst of revolutionary Paris. Similarly, after Garibaldi's overthrow of Bourbon rule in Palermo, during the 1860 Unification of Italy, Palermo's Castellamare Citadel – symbol of the hated and oppressive former rule – was ceremoniously demolished. Following Belgium gaining its independence in 1830, a Dutch garrison under General David Hendrik Chassé held out in Antwerp Citadel between 1830 and 1832, while the city had already become part of the independent Belgium. The Siege of the Alcázar in the Spanish Civil War, in which the Nationalists held out against a much larger Republican force for two months until relieved, shows that in some cases a citadel can be
theory relates the word to the old French maillier, meaning to hammer (related to the modern English word malleable). In modern French, maille refers to a loop or stitch. The Arabic words "burnus" (a burnoose; a hooded cloak, also a chasuble worn by Coptic priests) and "barnaza" (to bronze) suggest an Arabic influence for the Carolingian armour known as "byrnie" (see below). The first attestations of the word mail are in Old French and Anglo-Norman: maille, maile, or male or other variants, which became mailye, maille, maile, male, or meile in Middle English. The modern usage of terms for mail armour is highly contested in popular and, to a lesser degree, academic culture. Medieval sources referred to armour of this type simply as mail; however, chain-mail has become a commonly used, if incorrect, neologism coined no later than 1786, appearing in Francis Grose's A Treatise on Ancient Armour and Weapons, and brought to popular attention no later than 1822 in Sir Walter Scott's novel The Fortunes of Nigel. Since then the word mail has been commonly, if incorrectly, applied to other types of armour, such as in plate-mail (first attested in Grose's Treatise in 1786). The more correct term is plate armour. Civilizations that used mail invented specific terms for each garment made from it. The standard terms for European mail armour derive from French: leggings are called chausses, a hood is a mail coif, and mittens, mitons. A mail collar hanging from a helmet is a camail or aventail. A shirt made from mail is a hauberk if knee-length and a haubergeon if mid-thigh length. A layer (or layers) of mail sandwiched between layers of fabric is called a jazerant. A waist-length coat in medieval Europe was called a byrnie, although the exact construction of a byrnie is unclear, including whether it was constructed of mail or other armour types. Noting that the byrnie was the "most highly valued piece of armour" to the Carolingian soldier, Bennet, Bradbury, DeVries, Dickie, and Jestice indicate that: There is some dispute among historians as to what exactly constituted the Carolingian byrnie. Relying... only on artistic and some literary sources because of the lack of archaeological examples, some believe that it was a heavy leather jacket with metal scales sewn onto it. It was also quite long, reaching below the hips and covering most of the arms. Other historians claim instead that the Carolingian byrnie was nothing more than a coat of mail, but longer and perhaps heavier than traditional early medieval mail. Without more certain evidence, this dispute will continue.

In Europe

The use of mail as battlefield armour was common during the Iron Age and the Middle Ages, becoming less common over the course of the 16th and 17th centuries when plate armour and more advanced firearms were developed. It is believed that the Roman Republic first came into contact with mail fighting the Gauls in Cisalpine Gaul, now Northern Italy. The Roman army adopted the technology for their troops in the form of the lorica hamata which was used as a primary form of armour through the Imperial period. After the fall of the Western Empire, much of the infrastructure needed to create plate armour diminished. Eventually the word "mail" came to be synonymous with armour. It was typically an extremely prized commodity, as it was expensive and time-consuming to produce and could mean the difference between life and death in a battle. Mail from dead combatants was frequently looted and was used by the new owner or sold for a lucrative price. As time went on and infrastructure improved, it came to be used by more soldiers. The oldest intact mail hauberk still in existence is thought to have been worn by Leopold III, Duke of Austria, who died in 1386 during the Battle of Sempach. Eventually with the rise of the lanced cavalry charge, impact warfare, and high-powered crossbows, mail came to be used as a secondary armour to plate for the mounted nobility. By the 14th century, articulated plate armour was commonly used to supplement mail. Eventually mail was supplanted by plate for the most part, as it provided greater protection against windlass crossbows, bludgeoning weapons, and lance charges while maintaining most of the mobility of mail. However, it was still widely used by many soldiers, along with brigandines and padded jacks. These three types of armour made up the bulk of the equipment used by soldiers, with mail being the most expensive. It was sometimes more expensive than plate armour. Mail typically persisted longer in less technologically advanced areas such as Eastern Europe but was in use throughout Europe into the 16th century. During the late 19th and early 20th century, mail was used as a material for bulletproof vests, most notably by the Wilkinson Sword Company. Results were unsatisfactory; Wilkinson mail worn by the Khedive of Egypt's regiment of "Iron Men" was manufactured from split rings which proved to be too brittle, and the rings would fragment when struck by bullets and aggravate the injury. The riveted mail armour worn by the opposing Sudanese Mahdists did not have the same problem but also proved to be relatively useless against the firearms of British forces at the Battle of Omdurman. During World War I, Wilkinson Sword transitioned from mail to a lamellar design which was the precursor to the flak jacket. Also during World War I, a mail fringe, designed by Captain Cruise of the British Infantry, was added to helmets to protect the face. This proved unpopular with soldiers, in spite of being proven to defend against a three-ounce (100 g) shrapnel round fired at a distance of . A protective face mask or splatter mask had a mail veil and was used by early tank crews as a measure against flying steel fragments (spalling) inside the vehicle.

In Asia

Mail armour was introduced to the Middle East and Asia through the Romans and was adopted by the Sassanid Persians starting in the 3rd century AD, where it was supplemental to the scale and lamellar armour already used.

West Asia, India and China

Mail was commonly also used as horse armour for cataphracts and heavy cavalry as well as armour for the soldiers themselves. Asian mail could be just as heavy as the European variety and sometimes had prayer symbols stamped on the rings as a sign of their craftsmanship as well as for divine protection. Indeed, mail armour is mentioned in the Quran as being a gift revealed by Allah to David: 21:80 It was We Who taught him the making of coats of mail for your benefit, to guard you from each other's violence: will ye then be grateful? (Yusuf Ali's translation) From the Abbasid Caliphate, mail was quickly adopted in Central Asia by Timur (Tamerlane) and the Sogdians and by India's Delhi Sultanate. Mail armour was introduced by the Turks in the late 12th century and commonly used by the Turks and by the Mughal and Suri armies, where it eventually became the armour of choice in India. Indian mail was constructed with alternating rows of solid links and round riveted links and it was often integrated with plate protection (mail and plate armour). Mail and plate armour was commonly used in India until the Battle of Plassey by the Nawabs of Bengal and the subsequent British conquest of the sub-continent. The Ottoman Empire and the other Islamic gunpowder empires used mail armour as well as mail and plate armour, and it was used in their armies until the 18th century by heavy cavalry and elite units such as the Janissaries. They spread its use into North Africa where it was adopted by Mamluk Egyptians and the Sudanese who produced it until the early 20th century. Ottoman mail was constructed with alternating rows of solid links and round riveted links. The Persians used mail armour as well as mail and plate armour. Persian mail and Ottoman mail were often quite similar in appearance. Mail was introduced to China when its allies in Central Asia paid tribute to the Tang Emperor in 718 by giving him a coat of "link armour" assumed to be mail. China first encountered the armour in 384 when its allies in the nation of Kuchi arrived wearing "armour similar to chains". Once in China, mail was imported but was not produced widely. Due to its flexibility, comfort, and rarity, it was typically the armour of high-ranking guards and those who could afford the exotic import (to show off their social status) rather than the armour of the rank and file, who used more common brigandine, scale, and lamellar types. However, it was one of the few military products that China imported from foreigners. Mail spread to Korea slightly later where it was imported as the armour of imperial guards and generals.

Japanese mail armour

In Japan mail is called kusari which means chain. When the word kusari is used in conjunction with an armoured item it usually means that mail makes up the majority of the armour composition. An example of this would be kusari gusoku which means chain armour. Kusari jackets, hoods, gloves, vests, shin guards, shoulder guards, thigh guards, and other armoured clothing were produced, even kusari tabi socks. Kusari was used in samurai armour at least from the time of the Mongol invasion (1270s) but particularly from the Nambokucho Period (1336–1392). The Japanese used many different weave methods including a square 4-in-1 pattern (so gusari), a hexagonal 6-in-1 pattern (hana gusari) and a European 4-in-1 (nanban gusari). The rings of Japanese mail were much smaller than their European counterparts; they would be used in patches to link together plates and to drape over vulnerable areas such as the armpits. Riveted kusari was known and used in Japan. On page 58 of the book Japanese Arms & Armor: Introduction by H. Russell Robinson, there is a picture of Japanese riveted kusari, and this quote from the translated reference of the 1800 book The Manufacture of Armour and Helmets in Sixteenth-Century Japan shows that the Japanese not only knew of and used riveted kusari but that they manufactured it as well: "... karakuri-namban (riveted namban), with stout links each closed by a rivet. Its invention is credited to Fukushima Dembei Kunitaka, pupil of Hojo Awa no Kami Ujifusa, but it is also said to be derived directly from foreign models. It is heavy because the links are tinned (biakuro-nagashi) and these are also sharp-edged because they are punched out of iron plate." Butted or split (twisted) links made up the majority of kusari links used by the Japanese. Links were either butted together meaning that the ends touched each other and were not riveted, or the kusari was constructed with links where the wire was turned or twisted two or more times; these split links are similar to the modern split ring commonly used on keychains. The rings were lacquered black to prevent rusting, and were always stitched onto a backing of cloth or leather. The kusari was sometimes concealed entirely between layers of cloth. Kusari gusoku or chain armour was commonly used during the Edo period (1603 to 1868) as a stand-alone defense. According to George Cameron Stone, "Entire suits of mail kusari gusoku were worn on occasions, sometimes under the ordinary clothing." Ian Bottomley, in his book Arms and Armor of the Samurai: The History of Weaponry in Ancient Japan, shows a picture of a kusari armour and mentions kusari katabira (chain jackets) with detachable arms being worn by samurai police officials during the Edo period. The end of the samurai era in the 1860s, along with the 1876 ban on wearing swords in public, marked the end of any practical use for mail and other armour in Japan. Japan turned to a conscription army and uniforms replaced armour.

Effectiveness

Mail armour provided an effective defense against slashing blows by edged weapons and some forms of penetration by many thrusting and piercing weapons; in fact, a study conducted at the Royal Armouries at Leeds concluded that "it is almost impossible to penetrate using any conventional medieval weapon". Generally speaking, mail's resistance to weapons is determined by four factors: linkage type (riveted, butted, or welded), material used (iron versus bronze or steel), weave density (a tighter weave needs a thinner weapon to surpass), and ring thickness (generally ranging from 18 to 14 gauge (1.02–1.63 mm diameter) wire in most examples). Mail, if a warrior could afford it, provided a significant advantage when combined with competent fighting techniques. When the mail was not riveted, a thrust from most sharp weapons could penetrate it. However, when mail was riveted, only a strong well-placed thrust from certain spears, or thin or dedicated mail-piercing swords like the estoc, could penetrate, and a pollaxe or halberd blow could break through the armour. Strong projectile weapons such as
in capturing Cerberus. And both Diodorus Siculus and Apollodorus say that Heracles was initiated into the Mysteries, in preparation for his descent into the underworld. According to Diodorus, Heracles went to Athens, where Musaeus, the son of Orpheus, was in charge of the initiation rites, while according to Apollodorus, he went to Eumolpus at Eleusis. Heracles also had the help of Hermes, the usual guide of the underworld, as well as Athena. In the Odyssey, Homer has Hermes and Athena as his guides. And Hermes and Athena are often shown with Heracles on vase paintings depicting Cerberus' capture. By most accounts, Heracles made his descent into the underworld through an entrance at Tainaron, the most famous of the various Greek entrances to the underworld. The place is first mentioned in connection with the Cerberus story in the rationalized account of Hecataeus of Miletus (fl. 500–494 BC), and Euripides, Seneca, and Apolodorus, all have Heracles descend into the underworld there. However Xenophon reports that Heracles was said to have descended at the Acherusian Chersonese near Heraclea Pontica, on the Black Sea, a place more usually associated with Heracles' exit from the underworld (see below). Heraclea, founded c. 560 BC, perhaps took its name from the association of its site with Heracles' Cerberian exploit. Theseus and Pirithous While in the underworld, Heracles met the heroes Theseus and Pirithous, where the two companions were being held prisoner by Hades for attempting to carry off Hades' wife Persephone. Along with bringing back Cerberus, Heracles also managed (usually) to rescue Theseus, and in some versions Pirithous as well. According to Apollodorus, Heracles found Theseus and Pirithous near the gates of Hades, bound to the "Chair of Forgetfulness, to which they grew and were held fast by coils of serpents", and when they saw Heracles, "they stretched out their hands as if they should be raised from the dead by his might", and Heracles was able to free Theseus, but when he tried to raise up Pirithous, "the earth quaked and he let go." The earliest evidence for the involvement of Theseus and Pirithous in the Cerberus story, is found on a shield-band relief (c. 560 BC) from Olympia, where Theseus and Pirithous (named) are seated together on a chair, arms held out in supplication, while Heracles approaches, about to draw his sword. The earliest literary mention of the rescue occurs in Euripides, where Heracles saves Theseus (with no mention of Pirithous). In the lost play Pirithous, both heroes are rescued, while in the rationalized account of Philochorus, Heracles was able to rescue Theseus, but not Pirithous. In one place Diodorus says Heracles brought back both Theseus and Pirithous, by the favor of Persephone, while in another he says that Pirithous remained in Hades, or according to "some writers of myth" that neither Theseus, nor Pirithous returned. Both are rescued in Hyginus. Capture There are various versions of how Heracles accomplished Cerberus' capture. According to Apollodorus, Heracles asked Hades for Cerberus, and Hades told Heracles he would allow him to take Cerberus only if he "mastered him without the use of the weapons which he carried", and so, using his lion-skin as a shield, Heracles squeezed Cerberus around the head until he submitted. In some early sources Cerberus' capture seems to involve Heracles fighting Hades. Homer (Iliad 5.395–397) has Hades injured by an arrow shot by Heracles. 
A scholium to the Iliad passage, explains that Hades had commanded that Heracles "master Cerberus without shield or Iron". Heracles did this, by (as in Apollodorus) using his lion-skin instead of his shield, and making stone points for his arrows, but when Hades still opposed him, Heracles shot Hades in anger. Consistent with the no iron requirement, on an early-sixth-century BC lost Corinthian cup, Heracles is shown attacking Hades with a stone, while the iconographic tradition, from c. 560 BC, often shows Heracles using his wooden club against Cerberus. Euripides, has Amphitryon ask Heracles: "Did you conquer him in fight, or receive him from the goddess [i.e. Persephone]? To which, Heracles answers: "In fight", and the Pirithous fragment says that Heracles "overcame the beast by force". However, according to Diodorus, Persephone welcomed Heracles "like a brother" and gave Cerberus "in chains" to Heracles. Aristophanes, has Heracles seize Cerberus in a stranglehold and run off, while Seneca has Heracles again use his lion-skin as shield, and his wooden club, to subdue Cerberus, after which a quailing Hades and Persephone, allow Heracles to lead a chained and submissive Cerberus away. Cerberus is often shown being chained, and Ovid tells that Heracles dragged the three headed Cerberus with chains of adamant. Exit from the underworld There were several locations which were said to be the place where Heracles brought up Cerberus from the underworld. The geographer Strabo (63/64 BC – c. AD 24) reports that "according to the myth writers" Cerberus was brought up at Tainaron, the same place where Euripides has Heracles enter the underworld. Seneca has Heracles enter and exit at Tainaron. Apollodorus, although he has Heracles enter at Tainaron, has him exit at Troezen. The geographer Pausanias tells us that there was a temple at Troezen with "altars to the gods said to rule under the earth", where it was said that, in addition to Cerberus being "dragged" up by Heracles, Semele was supposed to have been brought up out of the underworld by Dionysus. Another tradition had Cerberus brought up at Heraclea Pontica (the same place which Xenophon had earlier associated with Heracles' descent) and the cause of the poisonous plant aconite which grew there in abundance. Herodorus of Heraclea and Euphorion said that when Heracles brought Cerberus up from the underworld at Heraclea, Cerberus "vomited bile" from which the aconite plant grew up. Ovid, also makes Cerberus the cause of the poisonous aconite, saying that on the "shores of Scythia", upon leaving the underworld, as Cerberus was being dragged by Heracles from a cave, dazzled by the unaccustomed daylight, Cerberus spewed out a "poison-foam", which made the aconite plants growing there poisonous. Seneca's Cerberus too, like Ovid's, reacts violently to his first sight of daylight. Enraged, the previously submissive Cerberus struggles furiously, and Heracles and Theseus must together drag Cerberus into the light. Pausanias reports that according to local legend Cerberus was brought up through a chasm in the earth dedicated to Clymenus (Hades) next to the sanctuary of Chthonia at Hermione, and in Euripides' Heracles, though Euripides does not say that Cerberus was brought out there, he has Cerberus kept for a while in the "grove of Chthonia" at Hermione. Pausanias also mentions that at Mount Laphystion in Boeotia, that there was a statue of Heracles Charops ("with bright eyes"), where the Boeotians said Heracles brought up Cerberus. 
Other locations which perhaps were also associated with Cerberus being brought out of the underworld include, Hierapolis, Thesprotia, and Emeia near Mycenae. Presented to Eurystheus, returned to Hades In some accounts, after bringing Cerberus up from the underworld, Heracles paraded the captured Cerberus through Greece. Euphorion has Heracles lead Cerberus through Midea in Argolis, as women and children watch in fear, and Diodorus Siculus says of Cerberus, that Heracles "carried him away to the amazement of all and exhibited him to men." Seneca has Juno complain of Heracles "highhandedly parading the black hound through Argive cities" and Heracles greeted by laurel-wreathed crowds, "singing" his praises. Then, according to Apollodorus, Heracles showed Cerberus to Eurystheus, as commanded, after which he returned Cerberus to the underworld. However, according to Hesychius of Alexandria, Cerberus escaped, presumably returning to the underworld on his own. Principal sources The earliest mentions of Cerberus (c. 8th – 7th century BC) occur in Homer's Iliad and Odyssey, and Hesiod's Theogony. Homer does not name or describe Cerberus, but simply refers to Heracles being sent by Eurystheus to fetch the "hound of Hades", with Hermes and Athena as his guides, and, in a possible reference to Cerberus' capture, that Heracles shot Hades with an arrow. According to Hesiod, Cerberus was the offspring of the monsters Echidna and Typhon, was fifty-headed, ate raw flesh, and was the "brazen-voiced hound of Hades", who fawns on those that enter the house of Hades, but eats those who try to leave. Stesichorus (c. 630 – 555 BC) apparently wrote a poem called Cerberus, of which virtually nothing remains. However the early-sixth-century BC-lost Corinthian cup from Argos, which showed a single head, and snakes growing out from many places on his body, was possibly influenced by Stesichorus' poem. The mid-sixth-century BC cup from Laconia gives Cerberus three heads and a snake tail, which eventually becomes the standard representation. Pindar (c. 522 – c. 443 BC) apparently gave Cerberus one hundred heads. Bacchylides (5th century BC) also mentions Heracles bringing Cerberus up from the underworld, with no further details. Sophocles (c. 495 – c. 405 BC), in his Women of Trachis, makes Cerberus three-headed, and in his Oedipus at Colonus, the Chorus asks that Oedipus be allowed to pass the gates of the underworld undisturbed by Cerberus, called here the "untamable Watcher of Hades". Euripides (c. 480 – 406 BC) describes Cerberus as three-headed, and three-bodied, says that Heracles entered the underworld at Tainaron, has Heracles say that Cerberus was not given to him by Persephone, but rather he fought and conquered Cerberus, "for I had been lucky enough to witness the rites of the initiated", an apparent reference to his initiation into the Eleusinian Mysteries, and says that the capture of Cerberus was the last of Heracles' labors. The lost play Pirthous (attributed to either Euripides or his late contemporary Critias) has Heracles say that he came to the underworld at the command of Eurystheus, who had ordered him to bring back Cerberus alive, not because he wanted to see Cerberus, but only because Eurystheus thought Heracles would not be able to accomplish the task, and that Heracles "overcame the beast" and "received favour from the gods". Plato (c. 
425 – 348 BC) refers to Cerberus' composite nature, citing Cerberus, along with Scylla and the Chimera, as an example from "ancient fables" of a creature composed of many animal forms "grown together in one". Euphorion of Chalcis (3rd century BC) describes Cerberus as having multiple snake tails, and eyes that flashed, like sparks from a blacksmith's forge, or the volcaninc Mount Etna. From Euphorion, also comes the first mention of a story which told that at Heraclea Pontica, where Cerberus was brought out of the underworld, by Heracles, Cerberus "vomited bile" from which the poisonous aconite plant grew up. According to Diodorus Siculus (1st century BC), the capture of Cerberus was the eleventh of Heracles' labors, the twelfth and last being stealing the Apples of the Hesperides. Diodorus says that Heracles thought it best to first go to Athens to take part in the Eleusinian Mysteries, "Musaeus, the son of Orpheus, being at that time in charge of the initiatory rites", after which, he entered into the underworld "welcomed like a brother by Persephone", and "receiving the dog Cerberus in chains he carried him away to the amazement of all and exhibited him to men." In Virgil's Aeneid (1st century BC), Aeneas and the Sibyl encounter Cerberus in a cave, where he "lay at vast length", filling the cave "from end to end", blocking the entrance to the underworld. Cerberus is described as "triple-throated", with "three fierce mouths", multiple "large backs", and serpents writhing around his neck. The Sibyl throws Cerberus a loaf laced with honey and herbs to induce sleep, enabling Aeneas to enter the underworld, and so apparently for Virgil—contradicting Hesiod—Cerberus guarded the underworld against entrance. Later Virgil describes Cerberus, in his bloody cave, crouching over half-gnawed bones. In his Georgics, Virgil refers to Cerberus, his "triple jaws agape" being tamed by Orpheus' playing his lyre. Horace (65 – 8 BC) also refers to Cerberus yielding to Orphesus' lyre, here Cerberus has a single dog head, which "like a Fury's is fortified by a hundred snakes", with a "triple-tongued mouth" oozing "fetid breath and gore". Ovid (43 BC – AD 17/18) has Cerberus' mouth produce venom, and like Euphorion, makes Cerberus the cause of the poisonous plant aconite. According to Ovid, Heracles dragged Cerberus from the underworld, emerging from a cave "where 'tis fabled, the plant grew / on soil infected by Cerberian teeth", and dazzled by the daylight, Cerberus spewed out a "poison-foam", which made the aconite plants growing there poisonous. Seneca, in his tragedy Hercules Furens gives a detailed description of Cerberus and his capture. Seneca's Cerberus has three heads, a mane of snakes, and a snake tail, with his three heads being covered in gore, and licked by the many snakes which surround them, and with hearing so acute that he can hear "even ghosts". Seneca has Heracles use his lion-skin as shield, and his wooden club, to beat Cerberus into submission, after which Hades and Persephone, quailing on their thrones, let Heracles lead a chained and submissive Cerberus away. But upon leaving the underworld, at his first sight of daylight, a frightened Cerberus struggles furiously, and Heracles, with the help of Theseus (who had been held captive by Hades, but released, at Heracles' request) drag Cerberus into the light. Seneca, like Diodorus, has Heracles parade the captured Cerberus through Greece. 
Apollodorus' Cerberus has three dog-heads, a serpent for a tail, and the heads of many snakes on his back. According to Apollodorus, Heracles' twelfth and final labor was to bring back Cerberus from Hades. Heracles first went to Eumolpus to be initiated into the Eleusinian Mysteries. Upon his entering the underworld, all the dead flee Heracles except for Meleager and the Gorgon Medusa. Heracles drew his sword against Medusa, but Hermes told Heracles that the dead are mere "empty phantoms". Heracles asked Hades (here called Pluto) for Cerberus, and Hades said that Heracles could take Cerberus provided he was able to subdue him without using weapons. Heracles found Cerberus at the gates of Acheron, and with his arms around Cerberus, though being bitten by Cerberus' serpent tail, Heracles squeezed until Cerberus submitted. Heracles carried Cerberus away, showed him to Eurystheus, then returned Cerberus to the underworld. In an apparently unique version of the story, related by the sixth-century AD Pseudo-Nonnus, Heracles descended into Hades to abduct Persephone, and killed Cerberus on his way back up. Iconography The capture of Cerberus was a popular theme in ancient Greek and Roman art. The earliest depictions date from the beginning of the sixth century BC. One of the two earliest depictions, a Corinthian cup (c. 590–580 BC) from Argos (now lost), shows a naked Heracles, with quiver on his back and bow in his right hand, striding left, accompanied by Hermes. Heracles threatens Hades with a stone, who flees left, while a goddess, perhaps Persephone or possibly Athena, standing in front of Hades' throne, prevents the attack. Cerberus, with a single canine head and snakes rising from his head and body, flees right. On the far right a column indicates the entrance to Hades' palace. Many of the elements of this scene—Hermes, Athena, Hades, Persephone, and a column or portico—are common occurrences in later works. The other earliest depiction, a relief pithos fragment from Crete (c. 590–570 BC), is thought to show a single lion-headed Cerberus with a snake (open-mouthed) over his back being led to the right. A mid-sixth-century BC Laconian cup by the Hunt Painter adds several new features to the scene which also become common in later works: three heads, a snake tail, Cerberus' chain and Heracles' club. Here Cerberus has three canine heads, is covered by a shaggy coat of snakes, and has a tail which ends in a snake head. He is being held on a chain leash by Heracles who holds his club raised over head. In Greek art, the vast majority of depictions of Heracles and Cerberus occur on Attic vases. Although the lost Corinthian cup shows Cerberus with a single dog head, and the relief pithos fragment (c. 590–570 BC) apparently shows a single lion-headed Cerberus, in Attic vase painting Cerberus usually has two dog heads. In other art, as in the Laconian cup, Cerberus is usually three-headed. Occasionally in Roman art Cerberus is shown with a large central lion head and two smaller dog heads on either side. As in the Corinthian and
(c. 8th – 7th century BC), Cerberus has fifty heads, while Pindar (c. 522 – c. 443 BC) gave him one hundred heads. However, later writers almost universally give Cerberus three heads. An exception is the Latin poet Horace's Cerberus which has a single dog head, and one hundred snake heads. Perhaps trying to reconcile these competing traditions, Apollodorus's Cerberus has three dog heads and the heads of "all sorts of snakes" along his back, while the Byzantine poet John Tzetzes (who probably based his account on Apollodorus) gives Cerberus fifty heads, three of which were dog heads, the rest being the "heads of other beasts of all sorts". In art Cerberus is most commonly depicted with two dog heads (visible), never more than three, but occasionally with only one. On one of the two earliest depictions (c. 590–580 BC), a Corinthian cup from Argos (see below), now lost, Cerberus was shown as a normal single-headed dog. The first appearance of a three-headed Cerberus occurs on a mid-sixth-century BC Laconian cup (see below). Horace's many snake-headed Cerberus followed a long tradition of Cerberus being part snake. This is perhaps already implied as early as in Hesiod's Theogony, where Cerberus' mother is the half-snake Echidna, and his father the snake-headed Typhon. In art Cerberus is often shown as being part snake, for example the lost Corinthian cup showed snakes protruding from Cerberus' body, while the mid sixth-century BC Laconian cup gives Cerberus a snake for a tail. In the literary record, the first certain indication of Cerberus' serpentine nature comes from the rationalized account of Hecataeus of Miletus (fl. 500–494 BC), who makes Cerberus a large poisonous snake. Plato refers to Cerberus' composite nature, and Euphorion of Chalcis (3rd century BC) describes Cerberus as having multiple snake tails, and presumably in connection to his serpentine nature, associates Cerberus with the creation of the poisonous aconite plant. Virgil has snakes writhe around Cerberus' neck, Ovid's Cerberus has a venomous mouth, necks "vile with snakes", and "hair inwoven with the threatening snake", while Seneca gives Cerberus a mane consisting of snakes, and a single snake tail. Cerberus was given various other traits. According to Euripides, Cerberus not only had three heads but three bodies, and according to Virgil he had multiple backs. Cerberus ate raw flesh (according to Hesiod), had eyes which flashed fire (according to Euphorion), a three-tongued mouth (according to Horace), and acute hearing (according to Seneca). The Twelfth Labour of Heracles Cerberus' only mythology concerns his capture by Heracles. As early as Homer we learn that Heracles was sent by Eurystheus, the king of Tiryns, to bring back Cerberus from Hades the king of the underworld. According to Apollodorus, this was the twelfth and final labour imposed on Heracles. In a fragment from a lost play Pirithous, (attributed to either Euripides or Critias) Heracles says that, although Eurystheus commanded him to bring back Cerberus, it was not from any desire to see Cerberus, but only because Eurystheus thought that the task was impossible. Heracles was aided in his mission by his being an initiate of the Eleusinian Mysteries. Euripides has his initiation being "lucky" for Heracles in capturing Cerberus. And both Diodorus Siculus and Apollodorus say that Heracles was initiated into the Mysteries, in preparation for his descent into the underworld. 
According to Diodorus, Heracles went to Athens, where Musaeus, the son of Orpheus, was in charge of the initiation rites, while according to Apollodorus, he went to Eumolpus at Eleusis. Heracles also had the help of Hermes, the usual guide of the underworld, as well as Athena. In the Odyssey, Homer has Hermes and Athena as his guides. And Hermes and Athena are often shown with Heracles on vase paintings depicting Cerberus' capture. By most accounts, Heracles made his descent into the underworld through an entrance at Tainaron, the most famous of the various Greek entrances to the underworld. The place is first mentioned in connection with the Cerberus story in the rationalized account of Hecataeus of Miletus (fl. 500–494 BC), and Euripides, Seneca, and Apollodorus all have Heracles descend into the underworld there. However, Xenophon reports that Heracles was said to have descended at the Acherusian Chersonese near Heraclea Pontica, on the Black Sea, a place more usually associated with Heracles' exit from the underworld (see below). Heraclea, founded c. 560 BC, perhaps took its name from the association of its site with Heracles' Cerberian exploit. Theseus and Pirithous While in the underworld, Heracles met the heroes Theseus and Pirithous, who were being held prisoner by Hades for attempting to carry off Hades' wife Persephone. Along with bringing back Cerberus, Heracles also managed (usually) to rescue Theseus, and in some versions Pirithous as well. According to Apollodorus, Heracles found Theseus and Pirithous near the gates of Hades, bound to the "Chair of Forgetfulness, to which they grew and were held fast by coils of serpents", and when they saw Heracles, "they stretched out their hands as if they should be raised from the dead by his might", and Heracles was able to free Theseus, but when he tried to raise up Pirithous, "the earth quaked and he let go." The earliest evidence for the involvement of Theseus and Pirithous in the Cerberus story is found on a shield-band relief (c. 560 BC) from Olympia, where Theseus and Pirithous (named) are seated together on a chair, arms held out in supplication, while Heracles approaches, about to draw his sword. The earliest literary mention of the rescue occurs in Euripides, where Heracles saves Theseus (with no mention of Pirithous). In the lost play Pirithous, both heroes are rescued, while in the rationalized account of Philochorus, Heracles was able to rescue Theseus, but not Pirithous. In one place Diodorus says Heracles brought back both Theseus and Pirithous, by the favor of Persephone, while in another he says that Pirithous remained in Hades, or according to "some writers of myth" that neither Theseus nor Pirithous returned. Both are rescued in Hyginus. Capture There are various versions of how Heracles accomplished Cerberus' capture. According to Apollodorus, Heracles asked Hades for Cerberus, and Hades told Heracles he would allow him to take Cerberus only if he "mastered him without the use of the weapons which he carried", and so, using his lion-skin as a shield, Heracles squeezed Cerberus around the head until he submitted. In some early sources Cerberus' capture seems to involve Heracles fighting Hades. Homer (Iliad 5.395–397) has Hades injured by an arrow shot by Heracles. A scholium to the Iliad passage explains that Hades had commanded that Heracles "master Cerberus without shield or iron".
Heracles did this, by (as in Apollodorus) using his lion-skin instead of his shield, and making stone points for his arrows, but when Hades still opposed him, Heracles shot Hades in anger. Consistent with the no iron requirement, on an early-sixth-century BC lost Corinthian cup, Heracles is shown attacking Hades with a stone, while the iconographic tradition, from c. 560 BC, often shows Heracles using his wooden club against Cerberus. Euripides, has Amphitryon ask Heracles: "Did you conquer him in fight, or receive him from the goddess [i.e. Persephone]? To which, Heracles answers: "In fight", and the Pirithous fragment says that Heracles "overcame the beast by force". However, according to Diodorus, Persephone welcomed Heracles "like a brother" and gave Cerberus "in chains" to Heracles. Aristophanes, has Heracles seize Cerberus in a stranglehold and run off, while Seneca has Heracles again use his lion-skin as shield, and his wooden club, to subdue Cerberus, after which a quailing Hades and Persephone, allow Heracles to lead a chained and submissive Cerberus away. Cerberus is often shown being chained, and Ovid tells that Heracles dragged the three headed Cerberus with chains of adamant. Exit from the underworld There were several locations which were said to be the place where Heracles brought up Cerberus from the underworld. The geographer Strabo (63/64 BC – c. AD 24) reports that "according to the myth writers" Cerberus was brought up at Tainaron, the same place where Euripides has Heracles enter the underworld. Seneca has Heracles enter and exit at Tainaron. Apollodorus, although he has Heracles enter at Tainaron, has him exit at Troezen. The geographer Pausanias tells us that there was a temple at Troezen with "altars to the gods said to rule under the earth", where it was said that, in addition to Cerberus being "dragged" up by Heracles, Semele was supposed to have been brought up out of the underworld by Dionysus. Another tradition had Cerberus brought up at Heraclea Pontica (the same place which Xenophon had earlier associated with Heracles' descent) and the cause of the poisonous plant aconite which grew there in abundance. Herodorus of Heraclea and Euphorion said that when Heracles brought Cerberus up from the underworld at Heraclea, Cerberus "vomited bile" from which the aconite plant grew up. Ovid, also makes Cerberus the cause of the poisonous aconite, saying that on the "shores of Scythia", upon leaving the underworld, as Cerberus was being dragged by Heracles from a cave, dazzled by the unaccustomed daylight, Cerberus spewed out a "poison-foam", which made the aconite plants growing there poisonous. Seneca's Cerberus too, like Ovid's, reacts violently to his first sight of daylight. Enraged, the previously submissive Cerberus struggles furiously, and Heracles and Theseus must together drag Cerberus into the light. Pausanias reports that according to local legend Cerberus was brought up through a chasm in the earth dedicated to Clymenus (Hades) next to the sanctuary of Chthonia at Hermione, and in Euripides' Heracles, though Euripides does not say that Cerberus was brought out there, he has Cerberus kept for a while in the "grove of Chthonia" at Hermione. Pausanias also mentions that at Mount Laphystion in Boeotia, that there was a statue of Heracles Charops ("with bright eyes"), where the Boeotians said Heracles brought up Cerberus. 
Some people and organizations, notably Microsoft, use the term camel case only for lower camel case, designating Pascal case for the upper camel case. Camel case is distinct from title case, which capitalises all words but retains the spaces between them, and from Tall Man lettering, which uses capitals to emphasize the differences between similar-looking product names such as "predniSONE" and "predniSOLONE". Camel case is also distinct from snake case, which uses underscores interspersed with lowercase letters (sometimes with the first letter capitalized). A combination of snake and camel case (identifiers Written_Like_This) is recommended in the Ada 95 style guide. Variations and synonyms The original name of the practice, used in media studies, grammars and the Oxford English Dictionary, was "medial capitals". Other synonyms include: camelBack (or camel-back) notation or CamelCaps CapitalizedWords or CapWords for upper camel case in Python compoundNames Embedded caps (or embedded capitals) HumpBack (or hump-back) notation InterCaps or intercapping (abbreviation of Internal Capitalization) mixedCase for lower camel case in Python PascalCase for upper camel case (after the Pascal programming language) Smalltalk case WikiWord or WikiCase (especially in older wikis) The earliest known occurrence of the term "InterCaps" on Usenet is in an April 1990 post to the group alt.folklore.computers by Avi Rappoport. The earliest use of the name "Camel Case" occurs in 1995, in a post by Newton Love. Love has since said, "With the advent of programming languages having these sorts of constructs, the humpiness of the style made me call it HumpyCase at first, before I settled on CamelCase. I had been calling it CamelCase for years. ... The citation above was just the first time I had used the name on USENET." Traditional use in natural language In word combinations The use of medial capitals as a convention in the regular spelling of everyday texts is rare, but is used in some languages as a solution to particular problems which arise when two words or segments are combined. In Italian, pronouns can be suffixed to verbs, and because the honorific form of second-person pronouns is capitalized, this can produce a sentence like non ho trovato il tempo di risponderLe ("I have not found time to answer you" – where Le means "to you"). In German, the medial capital letter I, called Binnen-I, is sometimes used in a word like StudentInnen ("students") to indicate that both Studenten ("male students") and Studentinnen ("female students") are intended simultaneously. However, mid-word capitalisation does not conform to German orthography apart from proper names like McDonald; the previous example could be correctly written using parentheses as Student(inn)en, analogous to "congress(wo)men" in English. In Irish, camel case is used when an inflectional prefix is attached to a proper noun, for example ("in Galway"), from ("Galway"); ("the Scottish person"), from ("Scottish person"); and ("to Ireland"), from ("Ireland"). In recent Scottish Gaelic orthography, a hyphen has been inserted: . This convention is also used by several written Bantu languages (e.g. isiZulu, "Zulu language") and several indigenous languages of Mexico (e.g. Nahuatl, Totonacan, Mixe–Zoque, and some Oto-Manguean languages). In Dutch, when capitalizing the digraph ij, both the letter I and the letter J are capitalized, for example in the country name IJsland ("Iceland"). 
In Chinese pinyin, camel case is sometimes used for place names so that readers can more easily pick out the different parts of
the name. For example, places like Beijing (北京), Qinhuangdao (秦皇岛), and Daxing'anling (大兴安岭) can be written as BeiJing, QinHuangDao, and DaXingAnLing respectively, with the number of capital letters equaling the number of Chinese characters. Writing word compounds only by the initial letter of each character is also acceptable in some cases, so Beijing can be written as BJ, Qinghuangdao as QHD, and Daxing'anling as DXAL. In English, medial capitals are usually only found in Scottish or Irish "Mac-" or "Mc-" names, where for example MacDonald, McDonald, and Macdonald are common spelling variants of the same name, and in Anglo-Norman "Fitz-" names, where for example both FitzGerald and Fitzgerald are found. In their English style guide The King's English, first published in 1906, H. W. and F. G. Fowler suggested that medial capitals could be used in triple compound words where hyphens would cause ambiguity—the examples they give are KingMark-like (as against King Mark-like) and Anglo-SouthAmerican (as against Anglo-South American). However, they described the system as "too hopelessly contrary to use at present." In transliterations In the scholarly transliteration of languages written in other scripts, medial capitals are used in similar situations. For example, in transliterated Hebrew, ha'Ivri means "the Hebrew person" or "the Jew" and b'Yerushalayim means "in Jerusalem". In Tibetan proper names like rLobsang, the "r" stands for a prefix glyph in the original script that functions as tone marker rather than a normal letter. Another example is tsIurku, a Latin transcription of the Chechen term for the capping stone of the characteristic Medieval defensive towers of Chechenia and Ingushetia; the capital letter "I" here denoting a phoneme distinct from the one transcribed as "i". In abbreviations Medial capitals are traditionally used in abbreviations to reflect the capitalization that the words would have when written out in full, for example in the academic titles PhD or BSc. A more recent example is NaNoWriMo, a contraction of National Novel Writing Month and the designation for both the annual event and the nonprofit organization that runs it. In German, the names of statutes are abbreviated using embedded capitals, e.g. StGB for Strafgesetzbuch (Criminal Code), PatG for Patentgesetz (Patent Act), BVerfG for Bundesverfassungsgericht (Federal Constitutional Court), or the very common GmbH, for Gesellschaft mit beschränkter Haftung (private limited company). In this context, there can even be three or more camel case capitals, e.g. in TzBfG for Teilzeit- und Befristungsgesetz (Act on Part-Time and Limited Term Occupations). In French, camel case acronyms such as OuLiPo (1960) were favored for a time as alternatives to initialisms. Camel case is often used to transliterate initialisms into alphabets where two letters may be required to represent a single character of the original alphabet, e.g., DShK from Cyrillic ДШК. History of modern technical use Chemical formulae The first systematic and widespread use of medial capitals for technical purposes was the notation for chemical formulae invented by the Swedish chemist Jacob Berzelius in 1813. To replace the multitude of naming and symbol conventions used by chemists until that time, he proposed to indicate each chemical element by a symbol of one or two letters, the first one being capitalized. The capitalization allowed formulae like "NaCl" to be written without spaces and still be parsed without ambiguity. 
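To make this concrete, here is a minimal, hypothetical sketch in Python (not drawn from any real chemistry library) showing that the capital letters alone are enough to recover the element boundaries in a formula written without spaces:

import re

def split_formula(formula):
    # Each element symbol starts with one capital letter, optionally followed
    # by lowercase letters ("Na", "Cl", "Uue") and an optional count.
    # Illustrative only: it ignores parentheses, hydrates, charges and other
    # complications of real chemical formulae.
    return re.findall(r"[A-Z][a-z]*\d*", formula)

print(split_formula("NaCl"))     # ['Na', 'Cl']
print(split_formula("C6H12O6"))  # ['C6', 'H12', 'O6']
print(split_formula("NaHCO3"))   # ['Na', 'H', 'C', 'O3']

Written entirely in lowercase, a string such as "nacl" could not be split into symbols unambiguously, which is precisely the ambiguity the capitalization convention removes.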
Berzelius' system continues to be used, augmented with three-letter symbols such as "Uue" for unconfirmed or unknown elements and abbreviations for some common substituents (especially in the field of organic chemistry, for instance "Et" for "ethyl-"). This has been further extended to describe the amino acid sequences of proteins and other similar domains. Early use in trademarks Since the early 20th century, medial capitals have occasionally been used for corporate names and product trademarks, for example: DryIce Corporation (1925), which marketed the solid form of carbon dioxide (CO2) as "Dry Ice", thus leading to its common name; CinemaScope and VistaVision, rival widescreen movie formats (1953); ShopKo (1962), retail stores, later renamed Shopko; MisterRogers Neighborhood (1968), the TV series also called Mister Rogers' Neighborhood; ChemGrass (1965), later renamed AstroTurf (1967); ConAgra (1971), formerly Consolidated Mills; MasterCraft (1968), a sports boat manufacturer; AeroVironment (1971); PolyGram (1972), formerly the Grammophon-Philips Group; United HealthCare (1977); MasterCard (1979), formerly Master Charge; and SportsCenter (1979). Computer programming In the 1970s and 1980s, medial capitals were adopted as a standard or alternative naming convention for multi-word identifiers in several programming languages. The precise origin of the convention in computer programming has not yet been settled. The proceedings of a 1954 conference occasionally and informally referred to IBM's Speedcoding system as "SpeedCo". Christopher Strachey's paper on GPM (1965) shows a program that includes some medial capital identifiers, including "NextCh" and "WriteSymbol". Multiple-word descriptive identifiers with embedded spaces such as end of file or char table cannot be used in most programming languages because the spaces between the words would be parsed as delimiters between tokens. The alternative of running the words together as in endoffile or chartable is difficult to understand and possibly misleading; for example, chartable is an English word (able to be charted), whereas charTable means a table of chars. Some early programming languages, notably Lisp (1958) and COBOL (1959), addressed this problem by allowing a hyphen ("-") to be used between words of compound identifiers, as in "END-OF-FILE": Lisp because it worked well with prefix notation (a Lisp parser would not treat a hyphen in the middle of a symbol as a subtraction operator) and COBOL because its operators were individual English words. This convention remains in use in these languages, and is also common in program names entered on a command line, as in Unix. However, this solution was not adequate for mathematically oriented languages such as FORTRAN (1955) and ALGOL (1958), which used the hyphen as an infix subtraction operator. FORTRAN ignored blanks altogether, so programmers could use embedded spaces in variable names. However, this feature was not very useful since the early versions of the language restricted identifiers to no more than six characters. Exacerbating the problem, common punched card character sets of the time were uppercase only and lacked other special characters. It was only in the late 1960s that the widespread adoption of the ASCII character set made both lowercase and the underscore character _ universally available.
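As a concrete illustration of the conventions discussed in this section (a hypothetical sketch only; the helper below is not taken from any real library or style guide), the same multi-word identifier can be rendered mechanically in each style:

def render(words, style):
    # `words` is the identifier already split into its parts, e.g. ["end", "of", "file"].
    # Illustrative only; real projects follow the style guide of their own language.
    if style == "camelCase":       # lower camel case
        return words[0].lower() + "".join(w.capitalize() for w in words[1:])
    if style == "PascalCase":      # upper camel case
        return "".join(w.capitalize() for w in words)
    if style == "snake_case":      # underscores, as in C identifiers such as end_of_file
        return "_".join(w.lower() for w in words)
    if style == "COBOL-CASE":      # hyphenated and uppercased, as in "END-OF-FILE" above
        return "-".join(w.upper() for w in words)
    raise ValueError("unknown style: " + style)

for style in ("camelCase", "PascalCase", "snake_case", "COBOL-CASE"):
    print(style, "->", render(["end", "of", "file"], style))
# camelCase -> endOfFile
# PascalCase -> EndOfFile
# snake_case -> end_of_file
# COBOL-CASE -> END-OF-FILE

Which of these renderings is preferred is, as the following paragraphs note, largely an accident of language history and of the character sets that happened to be available.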
Some languages, notably C, promptly adopted underscores as word separators, and identifiers such as end_of_file are still prevalent in C programs and libraries (as well as in later languages influenced by C, such as Perl and
the same root. In fact, most persistent and flourishing empires throughout history in both hemispheres were centered in regions fertile for cereals. Historian Max Ostrovsky argues that this historic pattern never changed, not even in the Industrial Age. He stresses that all modern great powers have traditionally remained first and foremost great cereal powers. The "finest hour" of the Axis powers "ended precisely the moment they threw themselves against the two largest cereal lebensraums" (the United States and the USSR). The outcome of the Cold War followed the Soviet Union's grave and long-lasting cereal crisis, exacerbated by the cereal embargo imposed on the USSR in 1980. And the most productive "cereal lebensraum", called "the grain basket of the world", has dominated the world ever since. Having analyzed the mechanism at work behind this pattern, Ostrovsky argued that cereal power determines the percentage of manpower available to non-agricultural sectors, including the heavy industry vital for military power. He emphasized that, chronologically, the Industrial Revolution follows the modern Agricultural Revolution, and that, spatially, the world's industrial regions are bound to cereal regions. A map of global illumination taken from space is said to indicate the industrial regions by its brightest parts; these regions coincide with cereal regions. Ostrovsky formulated a universal indicator of national power valid for all periods: the total cereal tonnage produced by one percent of a nation's manpower. For the present, this indicator demonstrates a unipolar international hierarchy. The Green Revolution During the second half of the 20th century there was a significant increase in the production of high-yield cereal crops worldwide, especially wheat and rice, due to an initiative known as the Green Revolution. The strategies developed by the Green Revolution focused on fending off starvation and increasing yield-per-plant, and were very successful in raising overall yields of cereal grains, but did not give sufficient attention to nutritional quality. These modern high-yield cereal crops tend to have low-quality proteins, with essential amino acid deficiencies, are high in carbohydrates, and lack balanced essential fatty acids, vitamins, minerals and other quality factors. So-called ancient grains and heirloom varieties have seen an increase in popularity with the "organic" movements of the early 21st century, but there is a tradeoff in yield-per-plant, putting pressure on resource-poor areas as food crops are replaced with cash crops. Cultivation While each individual species has its own peculiarities, the cultivation of all cereal crops is similar. Most are annual plants; consequently one planting yields one harvest. Wheat, rye, triticale, oats, barley, and spelt are the "cool-season" cereals. These are hardy plants that grow well in moderate weather and cease to grow in hot weather (approximately , but this varies by species and variety). The "warm-season" cereals are tender and prefer hot weather. Barley and rye are the hardiest cereals, able to overwinter in the subarctic and Siberia. Many cool-season cereals are grown in the tropics. However, some are only grown in cooler highlands, where it may be possible to grow multiple crops per year. For the past few decades, there has also been increasing interest in perennial grain plants. This interest developed due to advantages in erosion control, reduced need for fertiliser, and potential lowered costs to the farmer.
Though research is still in early stages, The Land Institute in Salina, Kansas, has been able to create a few cultivars that produce a fairly good crop yield. Planting The warm-season cereals are grown in tropical lowlands year-round and in temperate climates during the frost-free season. Rice is commonly grown in flooded fields, though some strains are grown on dry land. Other warm-climate cereals, such as sorghum, are adapted to arid conditions. Cool-season cereals are well adapted to temperate climates. Most varieties of a particular species are either winter or spring types. Winter varieties are sown in the autumn, germinate and grow vegetatively, then become dormant during winter. They resume growing in the springtime and mature
in late spring or early summer. This cultivation system makes optimal use of water and frees the land for another crop early in the growing season. Winter varieties do not flower until springtime because they require vernalization: exposure to low temperatures for a genetically determined length of time. Where winters are too warm for vernalization or exceed the hardiness of the crop (which varies by species and variety), farmers grow spring varieties. Spring cereals are planted in early springtime and mature later that same summer, without vernalization. Spring cereals typically require more irrigation and yield less than winter cereals. Harvesting Once the cereal plants have grown their seeds, they have completed their life cycle. The plants die, become brown, and dry. As soon as the parent plants and their seed kernels are reasonably dry, harvest can begin. In developed countries, cereal crops are universally machine-harvested, typically using a combine harvester, which cuts, threshes, and winnows the grain during a single pass across the field. In developing countries, a variety of harvesting methods are in use, depending on the cost of labor, from combines to hand tools such as the scythe or grain cradle.
If a crop is harvested during humid weather, the grain may not dry adequately in the field to prevent spoilage during its storage. In this case, the grain is sent to a dehydrating facility, where artificial heat dries it. In North America, farmers commonly deliver their newly harvested grain to a grain elevator, a large storage facility that consolidates the crops of many farmers. The farmer may sell the grain at the time of delivery or maintain ownership of a share of grain in the pool for later sale. Storage facilities should be protected from small grain pests, rodents and birds. Production statistics The following table shows the annual production of cereals in 1961, 1980, 2000, 2010, 2018 and 2019/2020. Maize, wheat, and rice together accounted for 89% of all cereal production worldwide in 2012, and 43% of the global supply of food energy in 2009, while the production of oats and rye has drastically fallen from their 1960s levels. Other cereals not included in FAO statistics include: Teff, an ancient grain that is a staple in Ethiopia and grown in sub-Saharan Africa as a grass primarily for feeding horses. It
and Austrian) of Europe, as well as the Ottoman Empire, rupturing the Eastern Christian communities that had existed on its territory. The Christian empires were replaced by secular, even anti-clerical republics seeking to definitively keep the churches out of politics. The only surviving monarchy with an established church, Britain, was severely damaged by the war, lost most of Ireland due to Catholic–Protestant infighting, and was starting to lose grip on its colonies. Classical culture Western culture, throughout most of its history, has been nearly equivalent to Christian culture, and many of the population of the Western hemisphere could broadly be described as cultural Christians. The notion of "Europe" and the "Western World" has been intimately connected with the concept of "Christianity and Christendom"; many even attribute Christianity for being the link that created a unified European identity. Historian Paul Legutko of Stanford University said the Catholic Church is "at the center of the development of the values, ideas, science, laws, and institutions which constitute what we call Western civilization." Though Western culture contained several polytheistic religions during its early years under the Greek and Roman Empires, as the centralized Roman power waned, the dominance of the Catholic Church was the only consistent force in Western Europe. Until the Age of Enlightenment, Christian culture guided the course of philosophy, literature, art, music and science. Christian disciplines of the respective arts have subsequently developed into Christian philosophy, Christian art, Christian music, Christian literature etc. Art and literature, law, education, and politics were preserved in the teachings of the Church, in an environment that, otherwise, would have probably seen their loss. The Church founded many cathedrals, universities, monasteries and seminaries, some of which continue to exist today. Medieval Christianity created the first modern universities. The Catholic Church established a hospital system in Medieval Europe that vastly improved upon the Roman valetudinaria. These hospitals were established to cater to "particular social groups marginalized by poverty, sickness, and age," according to historian of hospitals, Guenter Risse. Christianity also had a strong impact on all other aspects of life: marriage and family, education, the humanities and sciences, the political and social order, the economy, and the arts. Christianity had a significant impact on education and science and medicine as the church created the bases of the Western system of education, and was the sponsor of founding universities in the Western world as the university is generally regarded as an institution that has its origin in the Medieval Christian setting. Many clerics throughout history have made significant contributions to science and Jesuits in particular have made numerous significant contributions to the development of science. The cultural influence of Christianity includes social welfare, founding hospitals, economics (as the Protestant work ethic), natural law (which would later influence the creation of international law), politics, architecture, literature, personal hygiene, and family life. Christianity played a role in ending practices common among pagan societies, such as human sacrifice, slavery, infanticide and polygamy. Art and literature Writings and poetry Christian literature is writing that deals with Christian themes and incorporates the Christian world view. 
This constitutes a huge body of extremely varied writing. Christian poetry is any poetry that contains Christian teachings, themes, or references. The influence of Christianity on poetry has been great in any area that Christianity has taken hold. Christian poems often directly reference the Bible, while others provide allegory. Supplemental arts Christian art is art produced in an attempt to illustrate, supplement and portray in tangible form the principles of Christianity. Virtually all Christian groupings use or have used art to some extent. The prominence of art and the media, style, and representations change; however, the unifying theme is ultimately the representation of the life and times of Jesus and in some cases the Old Testament. Depictions of saints are also common, especially in Anglicanism, Roman Catholicism, and Eastern Orthodoxy. Illumination An illuminated manuscript is a manuscript in which the text is supplemented by the addition of decoration. The earliest surviving substantive illuminated manuscripts are from the period AD 400 to 600, primarily produced in Ireland, Constantinople and Italy. The majority of surviving manuscripts are from the Middle Ages, although many illuminated manuscripts survive from the 15th century Renaissance, along with a very limited number from Late Antiquity. Most illuminated manuscripts were created as codices, which had superseded scrolls; some isolated single sheets survive. A very few illuminated manuscript fragments survive on papyrus. Most medieval manuscripts, illuminated or not, were written on parchment (most commonly of calf, sheep, or goat skin), but most manuscripts important enough to illuminate were written on the best quality of parchment, called vellum, traditionally made of unsplit calfskin, though high quality parchment from other skins was also called parchment. Iconography Christian art began, about two centuries after Christ, by borrowing motifs from Roman Imperial imagery, classical Greek and Roman religion and popular art. Religious images are used to some extent by the Abrahamic Christian faith, and often contain highly complex iconography, which reflects centuries of accumulated tradition. In the Late Antique period iconography began to be standardised, and to relate more closely to Biblical texts, although many gaps in the canonical Gospel narratives were plugged with matter from the apocryphal gospels. Eventually the Church would succeed in weeding most of these out, but some remain, like the ox and ass in the Nativity of Christ. An icon is a religious work of art, most commonly a painting, from Eastern Christianity. Christianity has used symbolism from its very beginnings. In both East and West, numerous iconic types of Christ, Mary and saints and other subjects were developed; the number of named types of icons of Mary, with or without the infant Christ, was especially large in the East, whereas Christ Pantocrator was much the commonest image of Christ. Christian symbolism invests objects or actions with an inner meaning expressing Christian ideas. Christianity has borrowed from the common stock of significant symbols known to most periods and to all regions of the world. Religious symbolism is effective when it appeals to both the intellect and the emotions. Especially important depictions of Mary include the Hodegetria and Panagia types. 
Traditional models evolved for narrative paintings, including large cycles covering the events of the Life of Christ, the Life of the Virgin, parts of the Old Testament, and, increasingly, the lives of popular saints. Especially in the West, a system of attributes developed for identifying individual figures of saints by a standard appearance and symbolic objects held by them; in the East they were more likely to identified by text labels. Each saint has a story and a reason why he or she led an exemplary life. Symbols have been used to tell these stories throughout the history of the Church. A number of Christian saints are traditionally represented by a symbol or iconic motif associated with their life, termed an attribute or emblem, in order to identify them. The study of these forms part of iconography in Art history. They were particularly Architecture Christian architecture encompasses a wide range of both secular and religious styles from the foundation of Christianity to the present day, influencing the design and construction of buildings and structures in Christian culture. Buildings were at first adapted from those originally intended for other purposes but, with the rise of distinctively ecclesiastical architecture, church buildings came to influence secular ones which have often imitated religious architecture. In the 20th century, the use of new materials, such as concrete, as well as simpler styles has had its effect upon the design of churches and arguably the flow of influence has been reversed. From the birth of Christianity to the present, the most significant period of transformation for Christian architecture in the west was the Gothic cathedral. In the east, Byzantine architecture was a continuation of Roman architecture. Philosophy Christian philosophy is a term to describe the fusion of various fields of philosophy with the theological doctrines of Christianity. Scholasticism, which means "that [which] belongs to the school", and was a method of learning taught by the academics (or school people) of medieval universities c. 1100–1500. Scholasticism originally started to reconcile the philosophy of the ancient classical philosophers with medieval Christian theology. Scholasticism is not a philosophy or theology in itself but a tool and method for learning which places emphasis on dialectical reasoning. Christian civilization Medieval conditions The Byzantine Empire, which was the most sophisticated culture during antiquity, suffered under Muslim conquests limiting its scientific prowess during the Medieval period. Christian Western Europe had suffered a catastrophic loss of knowledge following the fall of the Western Roman Empire. But thanks to the Church scholars such as Aquinas and Buridan, the West carried on at least the spirit of scientific inquiry which would later lead to Europe's taking the lead in science during the Scientific Revolution using translations of medieval works. Medieval technology refers to the technology used in medieval Europe under Christian rule. After the Renaissance of the 12th century, medieval Europe saw a radical change in the rate of new inventions, innovations in the ways of managing traditional means of production, and economic growth. The period saw major technological advances, including the adoption of gunpowder and the astrolabe, the invention of spectacles, and greatly improved water mills, building techniques, agriculture in general, clocks, and ships. The latter advances made possible the dawn of the Age of Exploration. 
The development of water mills was impressive, and extended from agriculture to sawmills both for timber and stone, probably derived from Roman technology. By the time of the Domesday Book, most large villages in Britain had mills. They also were widely used in mining, as described by Georg Agricola in De Re Metallica for raising ore from shafts, crushing ore, and even powering bellows. Significant in this respect were advances within the fields of navigation. The compass and astrolabe along with advances in shipbuilding, enabled the navigation of the World Oceans and thus domination of the worlds economic trade. Gutenberg’s printing press made possible a dissemination of knowledge to a wider population, that would not only lead to a gradually more egalitarian society, but one more able to dominate other cultures, drawing from a vast reserve of knowledge and experience. Renaissance innovations During the Renaissance, great advances occurred in geography, astronomy, chemistry, physics, math, manufacturing, and engineering. The rediscovery of ancient scientific texts was accelerated after the Fall of Constantinople, and the invention of printing which would democratize learning and allow a faster propagation of new ideas. Renaissance technology is the set of artifacts and customs, spanning roughly the 14th through the 16th century. The era is marked by such profound technical advancements like the printing press, linear perspectivity, patent law, double shell domes or Bastion fortresses. Draw-books of the Renaissance artist-engineers such as
structure of the power of the clergy; for on the one hand they were unimpeded by the narrowing egoism of the family, and on the other their apparent superiority to the call of the flesh added to the awe in which lay sinners held them.... In the latter half of the period in which they ruled, the clergy were as free from family cares as even Plato could desire. Later Middle Ages and Renaissance After the collapse of Charlemagne's empire, the southern remnants of the Holy Roman Empire became a collection of states loosely connected to the Holy See of Rome. Tensions between Pope Innocent III and secular rulers ran high, as the pontiff exerted control over their temporal counterparts in the west and vice versa. The pontificate of Innocent III is considered the height of temporal power of the papacy. The Corpus Christianum described the then-current notion of the community of all Christians united under the Roman Catholic Church. The community was to be guided by Christian values in its politics, economics and social life. Its legal basis was the corpus iuris canonica (body of canon law). In the East, Christendom became more defined as the Byzantine Empire's gradual loss of territory to an expanding Islam and the muslim conquest of Persia. This caused Christianity to become important to the Byzantine identity. Before the East–West Schism which divided the Church religiously, there had been the notion of a universal Christendom that included the East and the West. After the East–West Schism, hopes of regaining religious unity with the West were ended by the Fourth Crusade, when Crusaders conquered the Byzantine capital of Constantinople and hastened the decline of the Byzantine Empire on the path to its destruction. With the breakup of the Byzantine Empire into individual nations with nationalist Orthodox Churches, the term Christendom described Western Europe, Catholicism, Orthodox Byzantines, and other Eastern rites of the Church. The Catholic Church's peak of authority over all European Christians and their common endeavours of the Christian community — for example, the Crusades, the fight against the Moors in the Iberian Peninsula and against the Ottomans in the Balkans — helped to develop a sense of communal identity against the obstacle of Europe's deep political divisions. The popes, formally just the bishops of Rome, claimed to be the focus of all Christendom, which was largely recognised in Western Christendom from the 11th century until the Reformation, but not in Eastern Christendom. Moreover, this authority was also sometimes abused, and fostered the Inquisition and anti-Jewish pogroms, to root out divergent elements and create a religiously uniform community. Ultimately, the Inquisition was done away with by order of Pope Innocent III. Christendom ultimately was led into specific crisis in the late Middle Ages, when the kings of France managed to establish a French national church during the 14th century and the papacy became ever more aligned with the Holy Roman Empire of the German Nation. Known as the Western Schism, western Christendom was a split between three men, who were driven by politics rather than any real theological disagreement for simultaneously claiming to be the true pope. The Avignon Papacy developed a reputation for corruption that estranged major parts of Western Christendom. The Avignon schism was ended by the Council of Constance. 
Before the modern period, Christendom was in a general crisis at the time of the Renaissance Popes because of the moral laxity of these pontiffs and their willingness to seek and rely on temporal power as secular rulers did. Many in the Catholic Church's hierarchy in the Renaissance became increasingly entangled with insatiable greed for material wealth and temporal power, which led to many reform movements, some merely wanting a moral reformation of the Church's clergy, while others repudiated the Church and separated from it in order to form new sects. The Italian Renaissance produced ideas or institutions by which men living in society could be held together in harmony. In the early 16th century, Baldassare Castiglione (The Book of the Courtier) laid out his vision of the ideal gentleman and lady, while Machiavelli cast a jaundiced eye on "la verità effetuale delle cose" — the actual truth of things — in The Prince, composed, humanist style, chiefly of parallel ancient and modern examples of Virtù. Some Protestant movements grew up along lines of mysticism or renaissance humanism (cf. Erasmus). The Catholic Church fell partly into general neglect under the Renaissance Popes, whose inability to govern the Church by showing personal example of high moral standards set the climate for what would ultimately become the Protestant Reformation. During the Renaissance, the papacy was mainly run by the wealthy families and also had strong secular interests. To safeguard Rome and the connected Papal States the popes became necessarily involved in temporal matters, even leading armies, as the great patron of arts Pope Julius II did. It during these intermediate times popes strove to make Rome the capital of Christendom while projecting it, through art, architecture, and literature, as the center of a Golden Age of unity, order, and peace. Professor Frederick J. McGinness described Rome as essential in understanding the legacy the Church and its representatives encapsulated best by The Eternal City: No other city in Europe matches Rome in its traditions, history, legacies, and influence in the Western world. Rome in the Renaissance under the papacy not only acted as guardian and transmitter of these elements stemming from the Roman Empire but also assumed the role as artificer and interpreter of its myths and meanings for the peoples of Europe from the Middle Ages to modern times... Under the patronage of the popes, whose wealth and income were exceeded only by their ambitions, the city became a cultural center for master architects, sculptors, musicians, painters, and artisans of every kind...In its myth and message, Rome had become the sacred city of the popes, the prime symbol of a triumphant Catholicism, the center of orthodox Christianity, a new Jerusalem. It is clearly noticeable that the popes of the Italian Renaissance have been subjected by many writers with an overly harsh tone. Pope Julius II, for example, was not only an effective secular leader in military affairs, a deviously effective politician but foremost one of the greatest patron of the Renaissance period and person who also encouraged open criticism from noted humanists. The blossoming of renaissance humanism was made very much possible due to the universality of the institutions of Catholic Church and represented by personalities such as Pope Pius II, Nicolaus Copernicus, Leon Battista Alberti, Desiderius Erasmus, sir Thomas More, Bartolomé de Las Casas, Leonardo da Vinci and Teresa of Ávila. 
George Santayana in his work The Life of Reason postulated the tenets of the all encompassing order the Church had brought and as the repository of the legacy of classical antiquity: The enterprise of individuals or of small aristocratic bodies has meantime sown the world which we call civilised with some seeds and nuclei of order. There are scattered about a variety of churches, industries, academies, and governments. But the universal order once dreamt of and nominally almost established, the empire of universal peace, all-permeating rational art, and philosophical worship, is mentioned no more. An unformulated conception, the prerational ethics of private privilege and national unity, fills the background of men's minds. It represents feudal traditions rather than the tendency really involved in contemporary industry, science, or philanthropy. Those dark ages, from which our political practice is derived, had a political theory which we should do well to study; for their theory about a universal empire and a Catholic church was in turn the echo of a former age of reason, when a few men conscious of ruling the world had for a moment sought to survey it as a whole and to rule it justly. Reformation and Early Modern era Developments in western philosophy and European events brought change to the notion of the Corpus Christianum. The Hundred Years' War accelerated the process of transforming France from a feudal monarchy to a centralized state. The rise of strong, centralized monarchies denoted the European transition from feudalism to capitalism. By the end of the Hundred Years' War, both France and England were able to raise enough money through taxation to create independent standing armies. In the Wars of the Roses, Henry Tudor took the crown of England. His heir, the absolute king Henry VIII establishing the English church. In modern history, the Reformation and rise of modernity in the early 16th century entailed a change in the Corpus Christianum. In the Holy Roman Empire, the Peace of Augsburg of 1555 officially ended the idea among secular leaders that all Christians must be united under one church. The principle of cuius regio, eius religio ("whose the region is, his religion") established the religious, political and geographic divisions of Christianity, and this was established with the Treaty of Westphalia in 1648, which legally ended the concept of a single Christian hegemony in the territories of the Holy Roman Empire, despite the Catholic Church's doctrine that it alone is the one true Church founded by Christ. Subsequently, each government determined the religion of their own state. Christians living in states where their denomination was not the established one were guaranteed the right to practice their faith in public during allotted hours and in private at their will. At times there were mass expulsions of dissenting faiths as happened with the Salzburg Protestants. Some people passed as adhering to the official church, but instead lived as Nicodemites or crypto-protestants. The European wars of religion are usually taken to have ended with the Treaty of Westphalia (1648), or arguably, including the Nine Years' War and the War of the Spanish Succession in this period, with the Treaty of Utrecht of 1713. In the 18th century, the focus shifts away from religious conflicts, either between Christian factions or against the external threat of Islamic factions. 
End of Christendom The European Miracle, the Age of Enlightenment and the formation of the great colonial empires, together with the beginning of the decline of the Ottoman Empire, mark the end of the geopolitical "history of Christendom". Instead, the focus of Western history shifts to the development of the nation-state, accompanied by increasing atheism and secularism, culminating with the French Revolution and the Napoleonic Wars at the turn of the 19th century. Writing in 1997, Canadian theology professor Douglas John Hall argued that Christendom had either fallen already or was in its death throes; although its end was gradual and not as clear to pin down as its 4th-century establishment, the "transition to the post-Constantinian, or post-Christendom, situation (...) has already been in process for a century or two," beginning with the 18th-century rationalist Enlightenment and the French Revolution (the first attempt to topple the Christian establishment). American Catholic bishop Thomas John Curry stated (2001) that the end of Christendom came about because modern governments refused to "uphold the teachings, customs, ethos, and practice of Christianity." He argued the First Amendment to the United States Constitution (1791) and the Second Vatican Council's Declaration on Religious Freedom (1965) are two of the most important documents setting the stage for its end. According to British historian Diarmaid MacCulloch (2010), Christendom was 'killed' by the First World War (1914–18), which led to the fall of the three main Christian empires (Russian, German and Austrian) of Europe, as well as the Ottoman Empire, rupturing the Eastern Christian communities that had existed on its territory. The Christian empires were replaced by secular, even anti-clerical republics seeking to definitively keep the churches out of politics. The only surviving monarchy with an established church, Britain, was severely damaged by the war, lost most of Ireland due to Catholic–Protestant infighting, and was starting to lose its grip on its colonies. Classical culture Western culture, throughout most of its history, has been nearly equivalent to Christian culture, and much of the population of the Western hemisphere could broadly be described as cultural Christians. The notion of "Europe" and the "Western World" has been intimately connected with the concept of "Christianity and Christendom"; many even credit Christianity with being the link that created a unified European identity. Historian Paul Legutko of Stanford University said the Catholic Church is "at the center of the development of the values, ideas, science, laws, and institutions which constitute what we call Western civilization." Though Western culture contained several polytheistic religions during its early years under the Greek and Roman Empires, as centralized Roman power waned, the Catholic Church became the only consistent force in Western Europe. Until the Age of Enlightenment, Christian culture guided the course of philosophy, literature, art, music and science. Christian disciplines of the respective arts have subsequently developed into Christian philosophy, Christian art, Christian music, Christian literature, etc. Art and literature, law, education, and politics were preserved in the teachings of the Church, in an environment that otherwise would probably have seen their loss. The Church founded many cathedrals, universities, monasteries and seminaries, some of which continue to exist today.
Medieval Christianity created the first modern universities. The Catholic Church established a hospital system in Medieval Europe that vastly improved upon the Roman valetudinaria. These hospitals were established to cater to "particular social groups marginalized by poverty, sickness, and age," according to the historian of hospitals Guenter Risse. Christianity also had a strong impact on all other aspects of life: marriage and family, education, the humanities and sciences, the political and social order, the economy, and the arts. Christianity had a significant impact on education, science, and medicine: the church created the bases of the Western system of education and sponsored the founding of universities in the Western world, the university being an institution generally regarded as having its origin in the medieval Christian setting. Many clerics throughout
of dogs by their more elongated, less rounded shape. Unlike those of dogs, the upper canines of coyotes extend past the mental foramina. Taxonomy and evolution History At the time of the European colonization of the Americas, coyotes were largely confined to open plains and arid regions of the western half of the continent. In early post-Columbian historical records, determining whether the writer is describing coyotes or wolves is often difficult. One record from 1750 in Kaskaskia, Illinois, written by a local priest, noted that the "wolves" encountered there were smaller and less daring than European wolves. Another account from the early 1800s in Edwards County mentioned wolves howling at night, though these were likely coyotes. This species was encountered several times during the Lewis and Clark Expedition (1804–1806), though it was already well known to European traders on the upper Missouri. Meriwether Lewis, writing on 5 May 1805, in northeastern Montana, described the coyote in these terms: The coyote was first scientifically described by naturalist Thomas Say in September 1819, on the site of Lewis and Clark's Council Bluffs, up the Missouri River from the mouth of the Platte, during a government-sponsored expedition with Major Stephen Long. He had the first edition of the Lewis and Clark journals in hand, which contained Biddle's edited version of Lewis's observations dated 5 May 1805. His account was published in 1823. Say was the first person to document the difference between a "prairie wolf" (coyote) and, on the next page of his journal, a wolf which he named Canis nubilus (Great Plains wolf). Say described the coyote as: Naming and etymology The earliest written reference to the species comes from the naturalist Francisco Hernández's Plantas y Animales de la Nueva España (1651), where it is described as a "Spanish fox" or "jackal". The first published usage of the word "coyote" (which is a Spanish borrowing of its Nahuatl name coyōtl) comes from the historian Francisco Javier Clavijero's Historia de México in 1780. Its first use in English occurred in William Bullock's Six months' residence and travels in Mexico (1824), where it is variously transcribed as cayjotte and cocyotie. The word's spelling was standardized as "coyote" by the 1880s. Alternative English names for the coyote include "prairie wolf", "brush wolf", "cased wolf", "little wolf" and "American jackal". Its binomial name Canis latrans translates to "barking dog", a reference to the many vocalizations it produces. Evolution Fossil record Xiaoming Wang and Richard H. Tedford, two of the foremost authorities on carnivore evolution, proposed that the genus Canis was the descendant of the coyote-like Eucyon davisi and its remains first appeared in the Miocene 6 million years ago (Mya) in the southwestern US and Mexico. By the Pliocene (5 Mya), the larger Canis lepophagus appeared in the same region, and by the early Pleistocene (1 Mya) C. latrans (the coyote) was in existence. They proposed that the progression from Eucyon davisi to C. lepophagus to the coyote was linear evolution. Additionally, C. latrans and C. aureus are closely related to C. edwardii, a species that appeared earliest, spanning the mid-Blancan (late Pliocene) to the close of the Irvingtonian (late Pleistocene), and coyote remains indistinguishable from C. latrans were contemporaneous with C. edwardii in North America. Johnston describes C. lepophagus as having a more slender skull and skeleton than the modern coyote.
Ronald Nowak found that the early populations had small, delicate, narrowly proportioned skulls that resemble small coyotes and appear to be ancestral to C. latrans. C. lepophagus was similar in weight to modern coyotes, but had shorter limb bones that indicate a less cursorial lifestyle. The coyote represents a more primitive form of Canis than the gray wolf, as shown by its relatively small size and its comparatively narrow skull and jaws, which lack the grasping power necessary to hold the large prey in which wolves specialize. This is further corroborated by the coyote's sagittal crest, which is low or totally flattened, thus indicating a weaker bite than that of wolves. The coyote is not a specialized carnivore as the wolf is, as shown by the larger chewing surfaces on the molars, reflecting the species' relative dependence on vegetable matter. In these respects, the coyote resembles the fox-like progenitors of the genus more so than the wolf. The oldest fossils that fall within the range of the modern coyote date to 0.74–0.85 Ma (million years ago) in Hamilton Cave, West Virginia; 0.73 Ma in Irvington, California; 0.35–0.48 Ma in Porcupine Cave, Colorado, and in Cumberland Cave, Pennsylvania. Modern coyotes arose 1,000 years after the Quaternary extinction event. Compared to their modern Holocene counterparts, Pleistocene coyotes (C. l. orcutti) were larger and more robust, likely in response to larger competitors and prey. Pleistocene coyotes were likely more specialized carnivores than their descendants, as their teeth were more adapted to shearing meat, showing fewer grinding surfaces suited for processing vegetation. Their reduction in size occurred within 1,000 years of the Quaternary extinction event, when their large prey died out. Furthermore, Pleistocene coyotes were unable to exploit the big-game hunting niche left vacant after the extinction of the dire wolf (Aenocyon dirus), as it was rapidly filled by gray wolves, which likely actively killed off the large coyotes, with natural selection favoring the modern gracile morph. DNA evidence In 1993, a study proposed that the wolves of North America display skull traits more similar to the coyote than do wolves from Eurasia. In 2010, a study found that the coyote was a basal member of the clade that included the Tibetan wolf, the domestic dog, the Mongolian wolf and the Eurasian wolf, with the Tibetan wolf diverging early from wolves and domestic dogs. In 2016, a whole-genome DNA study proposed, based on the assumptions made, that all of the North American wolves and coyotes diverged from a common ancestor about 51,000 years ago. The study also indicated that all North American wolves have a significant amount of coyote ancestry and all coyotes some degree of wolf ancestry, and that the red wolf and eastern wolf are highly admixed with different proportions of gray wolf and coyote ancestry. The proposed timing of the wolf/coyote divergence conflicts with the finding of a coyote-like specimen in strata dated to 1 Mya. Genetic studies relating to wolves or dogs have inferred phylogenetic relationships based on the only reference genome available, that of the Boxer dog. In 2017, the first reference genome of the wolf Canis lupus lupus was mapped to aid future research. In 2018, a study looked at the genomic structure and admixture of North American wolves, wolf-like canids, and coyotes, using specimens from across their entire range and mapping the largest dataset of nuclear genome sequences against the wolf reference genome.
The study supports the findings of previous studies that North American gray wolves and wolf-like canids were the result of complex gray wolf and coyote mixing. A polar wolf from Greenland and a coyote from Mexico represented the purest specimens. The coyotes from Alaska, California, Alabama, and Quebec show almost no wolf ancestry. Coyotes from Missouri, Illinois, and Florida exhibit 5–10% wolf ancestry. There was 40%:60% wolf to coyote ancestry in red wolves, 60%:40% in Eastern timber wolves, and 75%:25% in the Great Lakes wolves. There was 10% coyote ancestry in Mexican wolves and the Atlantic Coast wolves, 5% in Pacific Coast and Yellowstone wolves, and less than 3% in Canadian archipelago wolves. If a third canid had been involved in the admixture of the North American wolf-like canids, then its genetic signature would have been found in coyotes and wolves, but it has not been. In 2018, whole genome sequencing was used to compare members of the genus Canis. The study indicates that the common ancestor of the coyote and gray wolf has genetically admixed with a ghost population of an extinct unidentified canid. The canid was genetically close to the dhole and had evolved after the divergence of the African wild dog from the other canid species. The basal position of the coyote compared to the wolf is proposed to be due to the coyote retaining more of the mitochondrial genome of this unknown canid. Subspecies Nineteen subspecies are recognized. Geographic variation in coyotes is not great, though taken as a whole, the eastern subspecies (C. l. thamnos and C. l. frustor) are large, dark-colored animals, with a gradual paling in color and reduction in size westward and northward (C. l. texensis, C. l. latrans, C. l. lestes, and C. l. incolatus), a brightening of ochraceous tones (deep orange or brown) towards the Pacific coast (C. l. ochropus, C. l. umpquensis), a reduction in size in Aridoamerica (C. l. microdon, C. l. mearnsi) and a general trend towards dark reddish colors and short muzzles in Mexican and Central American populations. Hybridization Coyotes occasionally mate with domestic dogs, sometimes producing crosses colloquially known as "coydogs". Such matings are rare in the wild, as the mating cycles of dogs and coyotes do not coincide, and coyotes are usually antagonistic towards dogs. Hybridization usually only occurs when coyotes are expanding into areas where conspecifics are few, and dogs are the only alternatives. Even then, pup survival rates are lower than normal, as dogs do not form pair bonds with coyotes, thus making the rearing of pups more difficult. In captivity, F1 hybrids (first generation) tend to be more mischievous and less manageable as pups than dogs, and are less trustworthy on maturity than wolf-dog hybrids. Hybrids vary in appearance, but generally retain the coyote's usual characteristics. F1 hybrids tend to be intermediate in form between dogs and coyotes, while F2 hybrids (second generation) are more varied. Both F1 and F2 hybrids resemble their coyote parents in terms of shyness and intrasexual aggression. Hybrids are fertile and can be successfully bred through four generations. Melanistic coyotes owe their black pelts to a mutation that first arose in domestic dogs. A population of nonalbino white coyotes in Newfoundland owes its coloration to a melanocortin 1 receptor mutation inherited from Golden Retrievers. Coyotes have hybridized with wolves to varying degrees, particularly in eastern North America.
The so-called "eastern coyote" of northeastern North America probably originated in the aftermath of the extermination of gray and eastern wolves in the northeast, thus allowing coyotes to colonize former wolf ranges and mix with the remnant wolf populations. This hybrid is smaller than either the gray or eastern wolf, and holds smaller territories, but is in turn larger and holds more extensive home ranges than the typical western coyote. , the eastern coyote's genetic makeup is fairly uniform, with minimal influence from eastern wolves or western coyotes. Adult eastern coyotes are larger than western coyotes, with female eastern coyotes weighing 21% more than male western coyotes. Physical differences become more apparent by the age of 35 days, with eastern coyote pups having longer legs than their western counterparts. Differences in dental development also occurs, with tooth eruption being later, and in a different order in the eastern coyote. Aside from its size, the eastern coyote is physically similar to the western coyote. The four color phases range from dark brown to blond or reddish blond, though the most common phase is gray-brown, with reddish legs, ears, and flanks. No significant differences exist between eastern and western coyotes in aggression and fighting, though eastern coyotes tend to fight less, and are more playful. Unlike western coyote pups, in which fighting precedes play behavior, fighting among eastern coyote pups occurs after the onset of play. Eastern coyotes tend to reach sexual maturity at two years of age, much later than in western coyotes. Eastern and red wolves are also products of varying degrees of wolf-coyote hybridization. The eastern wolf probably was a result of a wolf-coyote admixture, combined with extensive backcrossing with parent gray wolf populations. The red wolf may have originated during a time of declining wolf populations in the Southeastern Woodlands, forcing a wolf-coyote hybridization, as well as backcrossing with local parent coyote populations to the extent that about 75–80% of the modern red wolf's genome is of coyote derivation. Behavior Social and reproductive behaviors Like the Eurasian golden jackal, the coyote is gregarious, but not as dependent on conspecifics as more social canid species like wolves are. This is likely because the coyote is not a specialized hunter of large prey as the latter species is. The basic social unit of a coyote pack is a family containing a reproductive female. However, unrelated coyotes may join forces for companionship, or to bring down prey too large to attack singly. Such "nonfamily" packs are only temporary, and may consist of bachelor males, nonreproductive females and subadult young. Families are formed in midwinter, when females enter estrus. Pair bonding can occur 2–3 months before actual copulation takes place. The copulatory tie can last 5–45 minutes. A female entering estrus attracts males by scent marking and howling with increasing frequency. A single female in heat can attract up to seven reproductive males, which can follow her for as long as a month. Although some squabbling may occur among the males, once the female has selected a mate and copulates, the rejected males do not intervene, and move on once they detect other estrous females. Unlike the wolf, which has been known to practice both monogamous and bigamous matings, the coyote is strictly monogamous, even in areas with high coyote densities and abundant food. 
Females that fail to mate sometimes assist their sisters or mothers in raising their pups, or join their siblings until the next time they can mate. The newly mated pair then establishes a territory and either constructs their own den or cleans out abandoned badger, marmot, or skunk earths. During the pregnancy, the male frequently hunts alone and brings back food for the female. The female may line the den with dried grass or with fur pulled from her belly. The gestation period is 63 days, with an average litter size of six, though the number fluctuates depending on coyote population density and the abundance of food. Coyote pups are born in dens, hollow trees, or under ledges, and weigh at birth. They are altricial, and are completely dependent on milk for their first 10 days. The incisors erupt at about 12 days, the canines at 16, and the second premolars at 21. Their eyes open after 10 days, by which point the pups become increasingly mobile, walking by 20 days, and running at the age of six weeks. The parents begin supplementing the pups' diet with regurgitated solid food after 12–15 days. By the age of four to six weeks, when their milk teeth are fully functional, the pups are given small food items such as mice, rabbits, or pieces of ungulate carcasses, with lactation steadily decreasing after two months. Unlike wolf pups, coyote pups begin seriously fighting (as opposed to play fighting) prior to engaging in play behavior. A common play behavior includes the coyote "hip-slam". By three weeks of age, coyote pups bite each other with less inhibition than wolf pups. By the age of four to five weeks, pups have established dominance hierarchies, and are by then more likely to play than fight. The male plays an active role in feeding, grooming, and guarding the pups, but abandons them if the female goes missing before the pups are completely weaned. The den is abandoned by June to July, and the pups follow their parents in patrolling their territory and hunting. Pups may leave their families in August, though they can remain for much longer. The pups attain adult dimensions at eight months and gain adult weight a month later. Territorial and sheltering behaviors Individual feeding territories vary in size from , with the general concentration of coyotes in a given area depending on food abundance, adequate denning sites, and competition with conspecifics and other predators. The coyote generally does not defend its territory outside of the denning season, and is much less aggressive towards intruders than the wolf is, typically chasing and sparring with them, but rarely killing them. Conflicts between coyotes can arise during times of food shortage. Coyotes mark their territories by raised-leg urination and ground-scratching. Like wolves, coyotes use a den (usually the deserted holes of other species) when gestating and rearing young, though they may occasionally give birth under sagebrushes in the open. Coyote dens can be located in canyons, washouts, coulees, banks, rock bluffs, or level ground. Some dens have been found under abandoned homestead shacks, grain bins, drainage pipes, railroad tracks, hollow logs, thickets, and thistles. The den is continuously dug and cleaned out by the female until the pups are born. Should the den be disturbed or infested with fleas, the pups are moved into another den. A coyote den can have several entrances and passages branching out from the main chamber. A single den can be used year after year.
Hunting and feeding behaviors While the popular consensus is that olfaction is very important for hunting, two studies that experimentally investigated the role of olfactory, auditory, and visual cues found that visual cues are the most important ones for hunting in red foxes and coyotes. When hunting large prey, the coyote often works in pairs or small groups. Success in killing large ungulates depends on factors such as snow depth and crust density. Younger animals usually avoid participating in such hunts, with the breeding pair typically doing most of the work. Unlike the wolf, which attacks large prey from the rear, the coyote approaches from the front, lacerating its prey's head and throat. Like other canids, the coyote caches excess food. Coyotes catch mouse-sized rodents by pouncing, whereas ground squirrels are chased. Although coyotes can live in large groups, small prey is typically caught singly. Coyotes have been observed to kill porcupines in pairs, using their paws to flip the rodents on their backs, then attacking the soft underbelly. Only old and experienced coyotes can successfully prey on porcupines, with many predation attempts by young coyotes resulting in them being injured by their prey's quills. Coyotes sometimes urinate on their food, possibly to claim ownership over it. Recent evidence demonstrates that at least some coyotes have become more nocturnal in hunting, presumably to avoid humans. Coyotes may occasionally form mutualistic hunting relationships with American badgers, assisting each other in digging up rodent prey. The relationship between the two species may occasionally border on apparent "friendship", as some coyotes have been observed laying their heads on their badger companions or licking their faces without protest. The amicable interactions between coyotes and badgers were known to pre-Columbian civilizations, as shown on a Mexican jar dated to 1250–1300 CE depicting the relationship between the two. Food scraps, pet food, and animal feces may attract a coyote to a trash can. Communication Body language Being both a gregarious and a solitary animal, the coyote has a visual and vocal repertoire whose variability is intermediate between that of the solitary foxes and the highly social wolf. The aggressive behavior of the coyote bears more similarities to that of foxes than it does to that of wolves and dogs. An aggressive coyote arches its back and lowers its tail. Unlike dogs, which solicit playful behavior by performing a "play-bow" followed by a "play-leap", play in coyotes consists of a bow, followed by side-to-side head flexions and a series of "spins" and "dives". Although coyotes will sometimes bite their playmates' scruff as dogs do, they typically approach low, and make upward-directed bites. Pups fight each other regardless of sex, while among adults, aggression is typically reserved for members of the same sex. Combatants approach each other waving their tails and snarling with their jaws open, though fights are typically silent. Males tend to fight in a vertical stance, while females fight on all four paws. Fights among females tend to be more serious than ones among males, as females seize their opponents' forelegs, throat, and shoulders. Vocalizations The coyote has been described as "the most vocal of all [wild] North American mammals". Its loudness and range of vocalizations were the cause of its binomial name Canis latrans, meaning "barking dog". At least 11 different vocalizations are known in adult coyotes.
These sounds are divided into three categories: agonistic and alarm, greeting, and contact. Vocalizations of the first category include woofs, growls, huffs, barks, bark howls, yelps, and high-frequency whines. Woofs are used as low-intensity threats or alarms and are usually heard near den sites, prompting the pups to immediately retreat into their burrows. Growls are used as threats at short distances but have also been heard among pups playing and copulating males. Huffs are high-intensity threat vocalizations produced by rapid expiration of air. Barks can be classed as both long-distance threat vocalizations and alarm calls. Bark howls may serve similar functions. Yelps are emitted as a sign of submission, while high-frequency whines are produced by dominant animals acknowledging the submission of subordinates. Greeting vocalizations include low-frequency whines, 'wow-oo-wows', and group yip howls. Low-frequency whines are emitted by submissive animals and are usually accompanied by tail wagging and muzzle nibbling. The sound known as 'wow-oo-wow' has been described as a "greeting song". The group yip howl is emitted when two or more pack members reunite and may be the final act of a complex greeting ceremony. Contact calls include lone howls and group howls, as well as the previously mentioned group yip howls. The lone howl is the most iconic sound of
the coyote and may serve the purpose of announcing the presence of a lone individual separated from its pack. Group howls are used both as substitute group yip howls and as responses to either lone howls, group howls, or group yip howls. Ecology Habitat Prior to the near extermination of wolves and cougars, the coyote was most numerous in grasslands inhabited by bison, pronghorn, elk, and other deer, doing particularly well in short-grass areas with prairie dogs, though it was just as much at home in semiarid areas with sagebrush and jackrabbits or in deserts inhabited by cactus, kangaroo rats, and rattlesnakes. As long as it was not in direct competition with the wolf, the coyote ranged from the Sonoran Desert to the alpine regions of adjoining mountains or the plains and mountainous areas of Alberta. With the extermination of the wolf, the coyote's range expanded to encompass broken forests from the tropics of Guatemala to the northern slope of Alaska. Coyotes walk around per day, often along trails such as logging roads and paths; they may use iced-over rivers as travel routes in winter. They are often crepuscular, being more active around evening and the beginning of the night than during the day. Like many canids, coyotes are competent swimmers, reported to be able to travel at least across water. Diet The coyote is ecologically the North American equivalent of the Eurasian golden jackal. Likewise, the coyote is highly versatile in its choice of food, but is primarily carnivorous, with 90% of its diet consisting of meat. Prey species include bison (largely as carrion), white-tailed deer, mule deer, moose, elk, bighorn sheep, pronghorn, rabbits, hares, rodents, birds (especially galliformes, roadrunners, young water birds and pigeons and doves), amphibians (except toads), lizards, snakes, turtles and tortoises, fish, crustaceans, and insects. Coyotes may be picky about the prey they target, as animals such as shrews, moles, and brown rats do not occur in their diet in proportion to their numbers. However, terrestrial and/or burrowing small mammals such as ground squirrels and associated species (marmots, prairie dogs, chipmunks), as well as voles, pocket gophers, kangaroo rats and other ground-favoring rodents, may be quite common foods, especially for lone coyotes. More unusual prey include fishers, young black bear cubs, harp seals and rattlesnakes. Coyotes kill rattlesnakes mostly for food (but also to protect their pups at their dens) by teasing the snakes until they stretch out, then biting their heads and snapping and shaking the snakes.
Birds taken by coyotes may range in size from thrashers, larks and sparrows to adult wild turkeys and, rarely, brooding adult swans and pelicans. If working in packs or pairs, coyotes may have access to larger prey than lone individuals normally take, such as various prey weighing more than . In some cases, packs of coyotes have dispatched much larger prey such as adult Odocoileus deer, cow elk, pronghorns and wild sheep, although the young fawns, calves and lambs of these animals are considerably more often taken, even by packs, as are domestic sheep and domestic cattle. In some cases, coyotes can bring down prey weighing up to or more. When it comes to adult ungulates such as wild deer, coyotes often exploit them when vulnerable, such as those that are infirm, stuck in snow or ice, otherwise winter-weakened, or heavily pregnant, whereas less wary domestic ungulates may be more easily exploited. Although coyotes prefer fresh meat, they will scavenge when the opportunity presents itself. Excluding the insects, fruit, and grass eaten, the coyote requires an estimated of food daily, or annually. The coyote readily cannibalizes the carcasses of conspecifics, with coyote fat having been successfully used by coyote hunters as a lure or poisoned bait. The coyote's winter diet consists mainly of large ungulate carcasses, with very little plant matter. Rodent prey increases in importance during the spring, summer, and fall. The coyote feeds on a variety of different produce, including blackberries, blueberries, peaches, pears, apples, prickly pears, chapotes, persimmons, peanuts, watermelons, cantaloupes, and carrots. During the winter and early spring, the coyote eats large quantities of grass, such as green wheat blades. It sometimes eats unusual items such as cotton cake, soybean meal, domestic animal droppings, beans, and cultivated grain such as maize, wheat, and sorghum. In coastal California, coyotes now consume a higher percentage of marine-based food than their ancestors, which is thought to be due to the extirpation of the grizzly bear from this region. In Death Valley, coyotes may consume great quantities of hawkmoth caterpillars or beetles in the spring flowering months. Enemies and competitors In areas where the ranges of coyotes and gray wolves overlap, interference competition and predation by wolves have been hypothesized to limit local coyote densities. Coyote ranges expanded during the 19th and 20th centuries following the extirpation of wolves, while coyotes were driven to extinction on Isle Royale after wolves colonized the island in the 1940s. One study conducted in Yellowstone National Park, where both species coexist, concluded that the coyote population in the Lamar River Valley declined by 39% following the reintroduction of wolves in the 1990s, while coyote populations in wolf-inhabited areas of the Grand Teton National Park are 33% lower than in areas where they are absent. Wolves have been observed to not tolerate coyotes in their vicinity, though coyotes have been known to trail wolves to feed on their kills. Coyotes may compete with cougars in some areas. In the eastern Sierra Nevada, coyotes compete with cougars over mule deer. Cougars normally outcompete and dominate coyotes, and may kill them occasionally, thus reducing coyote predation pressure on smaller carnivores such as foxes and bobcats.
Coyotes that are killed are sometimes not eaten, perhaps indicating that these comprise competitive interspecies interactions; however, there are multiple confirmed cases of cougars also eating coyotes. In northeastern Mexico, cougar predation on coyotes continues apace, but coyotes were absent from the prey spectrum of sympatric jaguars, apparently due to differing habitat usages. Other than by gray wolves and cougars, predation on adult coyotes is relatively rare, but multiple other predators can be occasional threats. In some cases, adult coyotes have been preyed upon by both American black and grizzly bears, American alligators, large Canada lynx and golden eagles. At kill sites and carrion, coyotes, especially if working alone, tend to be dominated by wolves, cougars, bears, wolverines and, usually but not always, eagles (i.e., bald and golden). When larger, more powerful, or more aggressive predators such as these come to a shared feeding site, a coyote may either try to fight, wait until the other predator is done, or occasionally share a kill, but if a major danger such as wolves or an adult cougar is present, the coyote will tend to flee. Coyotes rarely kill healthy adult red foxes, and have been observed to feed or den alongside them, though they often kill foxes caught in traps. Coyotes may kill fox kits, but this is not a major source of mortality. In southern California, coyotes frequently kill gray foxes, and these smaller canids tend to avoid areas with high coyote densities. In some areas, coyotes share their ranges with bobcats. These two similarly sized species rarely physically confront one another, though bobcat populations tend to diminish in areas with high coyote densities. However, several studies have demonstrated interference competition between coyotes and bobcats, and in all cases coyotes dominated the interaction. Multiple researchers reported instances of coyotes killing bobcats, whereas bobcats killing coyotes is rarer. Coyotes attack bobcats using a bite-and-shake method similar to what is used on medium-sized prey. Coyotes (both single individuals and groups) have been known to occasionally kill bobcats; in most cases, the bobcats were relatively small specimens, such as adult females and juveniles. However, coyote attacks (by an unknown number of coyotes) on adult male bobcats have occurred. In California, coyote and bobcat populations are not negatively correlated across different habitat types, but predation by coyotes is an important source of mortality in bobcats. Biologist Stanley Paul Young noted that
its volume. Compressor may also refer to:
A device that performs Compression (disambiguation)
Compressor (audio signal processor), for dynamic range compression
Compressor (software), a video and audio media compression and encoding application
See also
Compression (disambiguation)
Compaction (disambiguation)
Decompression (disambiguation)
Expansion
in 1973 and Ace Books picked up the line, reprinting the older volumes with new trade dress and continuing to release new ones). Howard's original stories received additional edits by de Camp, and de Camp also decided to create additional Conan stories to publish alongside the originals, working with Björn Nyberg and especially Lin Carter. These new stories were created from a mixture of already-complete Howard stories with different settings and characters that were altered to feature Conan and the Hyborian setting instead, incomplete fragments and outlines for Conan stories that were never completed by Howard, and all-new pastiches. Lastly, de Camp created prefaces for each story, fitting them into a timeline of Conan's life that he created. For roughly 40 years, the original versions of Howard's Conan stories remained out of print. In 1977, the publisher Berkley Books issued three volumes using the earliest published form of the texts from Weird Tales and thus no de Camp edits, with Karl Edward Wagner as series editor, but these were halted by action from de Camp before the remaining three intended volumes could be released. In the 1980s and 1990s, the copyright holders permitted Howard's stories to go out of print entirely as the public demand for sword & sorcery dwindled, but continued to release the occasional new Conan novel by other authors such as Leonard Carpenter, Roland Green, and Harry Turtledove. In 2000, the British publisher Gollancz Science Fiction issued a two-volume, complete edition of Howard's Conan stories as part of its Fantasy Masterworks imprint, which included several stories that had never seen print in their original form. The Gollancz edition mostly used the versions of the stories as published in Weird Tales. The two volumes were combined and the stories restored to chronological order as The Complete Chronicles of Conan: Centenary Edition (Gollancz Science Fiction, 2006; edited and with an Afterword by Steve Jones). In 2003, another British publisher, Wandering Star Books, made an effort both to restore Howard's original manuscripts and to provide a more scholarly and historical view of the Conan stories. It published hardcover editions in England, which were republished in the United States by the Del Rey imprint of Ballantine Books. The first book, Conan of Cimmeria: Volume One (1932–1933) (2003; published in the US as The Coming of Conan the Cimmerian) includes Howard's notes on his fictional setting as well as letters and poems concerning the genesis of his ideas. This was followed by Conan of Cimmeria: Volume Two (1934) (2004; published in the US as The Bloody Crown of Conan) and Conan of Cimmeria: Volume Three (1935–1936) (2005; published in the US as The Conquering Sword of Conan). These three volumes include all the original Conan stories. Setting The stories occur in the pseudo-historical "Hyborian Age", set after the destruction of Atlantis and before the rise of any known ancient civilization. This is a specific epoch in a fictional timeline created by Howard for many of the low fantasy tales of his artificial legendary. The reasons behind the invention of the Hyborian Age were perhaps commercial. Howard had an intense love for history and historical dramas, but he also recognized the difficulties and the time-consuming research work needed in maintaining historical accuracy. Also, the poorly-stocked libraries in the rural part of Texas where Howard lived did not have the material needed for such historical research. 
By conceiving "a vanished age" and by choosing names that resembled human history, Howard avoided anachronisms and the need for lengthy exposition. According to "The Phoenix on the Sword", the adventures of Conan take place "Between the years when the oceans drank Atlantis and the gleaming cities, and the years of the rise of the Sons of Aryas." Personality and character Conan is a Cimmerian. The writings of Robert E. Howard (particularly his essay "The Hyborian Age") suggest that his Cimmerians are based on the Celts or perhaps the historic Cimmerians. Conan was born on a battlefield and is the son of a village blacksmith. Conan matured quickly as a youth and, by age fifteen, he was already a respected warrior who had participated in the destruction of the Aquilonian fortress of Venarium. After its demise, he was struck by wanderlust and began the adventures chronicled by Howard, encountering skulking monsters, evil wizards, tavern wenches, and beautiful princesses. He roamed throughout the Hyborian Age nations as a thief, outlaw, mercenary, and pirate. As he grew older, he began commanding vast units of warriors and escalating his ambitions. In his forties, he seized the crown from the tyrannical king of Aquilonia, the most powerful kingdom of the Hyborian Age, having strangled the previous ruler on the steps of his own throne. Conan's adventures often result in him performing heroic feats, though his motivation for doing so is largely his own survival or personal gain. A conspicuous element of Conan's character is his chivalry. He is extremely reluctant to fight women (even when they fight him) and has a strong tendency to save a damsel in distress. In "Jewels of Gwahlur", he has to make a split-second decision whether to save the dancing girl Muriela or the chest of priceless gems he spent months searching for; without hesitation, he rescues Muriela and allows the treasure to be irrevocably lost. In "The Black Stranger", Conan saves the exiled Zingaran Lady Belesa at considerable risk to himself, giving her as a parting gift a fortune in gems large enough for a comfortable and wealthy life in Zingara, while asking for no favors in return. Reviewer Jennifer Bard also noted that when Conan is in a pirate crew or a robber gang led by another male, his tendency is to subvert and undermine the leader's authority, and eventually supplant (and often kill) him (e.g. "Pool of the Black One", "A Witch Shall be Born", "Shadows in the Moonlight"). Conversely, in "Queen of the Black Coast", it is noted that Conan "generally agreed to Belit's plan. Hers was the mind that directed their raids, his the arm that carried out her ideas. It was a good life." And at the end of "Red Nails", Conan and Valeria seem to be headed towards a reasonably amicable piratical partnership. George Baxter noted: "Conan's recorded history mentions him as being prominently involved, at one time or another, with four different pirate fraternities, on two different seas, as well being a noted leader of land robbers at three different locales. Yet, we hardly ever see him involved in, well, robbing people. To be sure, he speaks about it often and with complete candor: "We Kozaks took to plundering the outlying dominions of Koth, Zamora, and Turan impartially" he says in "Shadows in the Moonlight". But that was before the story began. And "We're bound for waters where the seaports are fat, and the merchant ships are crammed with plunder!" Conan declares at the end of "The Pool of the Black One".
But this plundering will take place after the story ends. When we see Conan onstage, we see him do many other things: he intervenes in the politics and dynastic struggles of various kingdoms; he hunts for hidden treasure; he explores desert islands and lost cities; he fights countless terrible monsters and evil sorcerers; he saves countless beautiful women and makes them fall in love with him... What we virtually never see Conan do is engage in the proper business of an armed robber, on land or by sea—which is to attack people who never threatened or provoked you, take away their possessions by main force, and run your sword through them if they dare to resist. A bit messy business, that. Armchair adventurers, who like to enjoy a good yarn in the perfect safety and comfort of their suburban homes, might not have liked to read it." Appearance Conan has "sullen", "smoldering", and "volcanic" blue eyes with a black "square-cut mane". Howard once describes him as having a hairy chest and, while comic book interpretations often portray Conan as wearing a loincloth or other minimalist clothing to give him a more barbaric image, Howard describes the character as wearing whatever garb is typical for the kingdom and culture in which Conan finds himself. Howard never gave a strict height or weight for Conan in a story, only describing him in loose terms like "giant" and "massive". In the tales, no human is ever described as being stronger than Conan, although a few are mentioned as taller (including the strangler Baal-Pteor) or of larger bulk. In a letter to P. Schuyler Miller and John D. Clark in 1936, only three months before Howard's death, Conan is described as standing 6 ft (183 cm) and weighing 180 lb (82 kg) when he takes part in an attack on Venarium at only 15 years old, though far from fully grown. At one point, when Conan meets Juma in Kush, he is described as being as tall as his friend, at nearly 7 ft in height. Conan himself says in "Beyond the Black River" that he had "...not yet seen 15 snows" at the Battle of Venarium. "At Vanarium he was already a formidable antagonist, though only fifteen. He stood six feet tall [1.83 m] and weighed 180 pounds [82 kg], though he lacked much of having his full growth." Although Conan is muscular, Howard frequently compares his agility and way of moving to that of a panther (see, for instance, "Jewels of Gwahlur", "Beyond the Black River", or "Rogues in the House"). His skin is frequently characterized as bronzed from constant exposure to the sun. In his younger years, he is often depicted wearing a light chain shirt and a horned helmet, though appearances vary with different stories. During his reign as king of Aquilonia, Conan was ... a tall man, mightily shouldered and deep of chest, with a massive corded neck and heavily muscled limbs. He was clad in silk and velvet, with the royal lions of Aquilonia worked in gold upon his rich jupon, and the crown of Aquilonia shone on his square-cut black mane; but the great sword at his side seemed more natural to him than the regal accoutrements. His brow was low and broad, his eyes a volcanic blue that smoldered as if with some inner fire. His dark, scarred, almost sinister face was that of a fighting-man, and his velvet garments could not conceal the hard, dangerous lines of his limbs. Howard imagined the Cimmerians as a pre-Celtic people with mostly black hair and blue or grey eyes. Ethnically, the Cimmerians to which Conan belongs are descendants of the Atlanteans, though they do not remember their ancestry.
In his fictional historical essay "The Hyborian Age", Howard describes how the people of Atlantis—the land where his character King Kull originated—had to move east after a great cataclysm changed the face of the world and sank their island, settling where Ireland and Scotland would eventually be located. Thus they are (in Howard's work) the ancestors of the Irish and Scottish (the Celtic Gaels) and not the Picts, the other ancestor of modern Scots who also appear in Howard's work. In the same work, Howard also described how the Cimmerians eventually moved south and east after the age of Conan (presumably in the vicinity of the Black Sea, where the historical Cimmerians dwelt). Abilities Despite his brutish appearance, Conan uses his brains as well as his brawn. The Cimmerian is a highly skilled warrior, possibly without peer with a sword, but his travels have given him vast experience in other trades, especially as a thief. He's also a talented commander, tactician, and strategist, as well as a born leader. In addition, Conan has advanced knowledge of languages and codes and is able to recognize, or even decipher, certain ancient or secret signs and writings. For example, in "Jewels of Gwahlur" Howard states: "In his roaming about the world the giant adventurer had picked up a wide smattering of knowledge, particularly including the speaking and reading of many alien tongues. Many a sheltered scholar would have been astonished at the Cimmerian's linguistic abilities." He also has incredible stamina, enabling him to go without sleep for a few days. In "A Witch Shall be Born", Conan fights armed men until he is overwhelmed, captured, and crucified, before going an entire night and day without water. However, Conan still possesses the strength to pull the nails from his feet, while hoisting himself into a horse's saddle and riding for ten miles. Another noticeable trait is his sense of humor, largely absent in the comics and movies, but very much a part of Howard's original vision of the character (particularly apparent in "Xuthal of the Dusk", also known as "The Slithering Shadow.") His sense of humor can also be rather grimly ironic, as was demonstrated by how he unleashes his own version of justice on the treacherous—and ill-fated—innkeeper Aram Baksh in "Shadows in Zamboula". He is a loyal friend to those true to him, with a barbaric code of conduct that often marks him as more honorable than the more sophisticated people he meets in his travels. Indeed, his straightforward nature and barbarism are constants in all the tales. Conan is a formidable combatant both armed and unarmed. With his back to the wall, Conan is capable of engaging and killing opponents by the score. This is seen in several stories, such as "Queen of the Black Coast", "The Scarlet Citadel", and "A Witch Shall Be Born". Conan is not superhuman, though; he needed the providential help of Zelata's wolf to defeat four Nemedian soldiers in Howard's novel The Hour of the Dragon. Some of his hardest victories have come from fighting single opponents of inhuman strength: one such as Thak, an ape-like humanoid from "Rogues in the House", or the strangler Baal-Pteor in "Shadows in Zamboula". Conan is far from untouchable and has been captured or defeated several times (on one occasion, knocking himself out after drunkenly running into a wall). Influences Howard frequently corresponded with H. P. Lovecraft, and the two would sometimes insert references or elements of each other's settings in their works. 
Later editors reworked many of the original Conan stories by Howard, thus diluting this connection. Nevertheless, many of Howard's unedited Conan stories are arguably part of the Cthulhu Mythos. Additionally, many of the Conan stories by Howard, de Camp, and Carter used geographical place names from Clark Ashton Smith's Hyperborean Cycle. Original Robert E. Howard Conan stories Conan stories published in Weird Tales "The Phoenix on the Sword" (novelette; vol. 20, #6, December 1932) "The Scarlet Citadel" (novelette; vol. 21, #1, January 1, 1933) "The Tower of the Elephant" (novelette; vol. 21, #3, March 1933) "Black Colossus" (novelette; vol. 21, #6, June 1933) "The Slithering Shadow" (novelette; vol. 22, #3, September 1933, alternate title "Xuthal of the Dusk") "The Pool of the Black One" (novelette; vol. 22, #4, October 1933) "Rogues in the House" (novelette; vol. 23, #1, January 1934) "Iron Shadows in the Moon" (novelette; vol. 23, #4, April 1934, published as "Shadows in the Moonlight") "Queen of the Black Coast" (novelette; vol. 23, #5, May 1934) "The Devil in Iron" (novelette; vol. 24, #2, August 1934) "The People of the Black Circle" (novella; vol. 24, #3–5, September–November 1934) "A Witch Shall Be Born" (novelette; vol. 24, #6, December 1934) "Jewels of Gwahlur" (novelette; vol. 25, #3, March 1935, author's original title "The Servants of Bit-Yakin") "Beyond the Black River" (novella; vol. 25, #5–6, May–June 1935) "Shadows in Zamboula" (novelette; vol. 26, #5, November 1935, author's original title "The Man-Eaters of Zamboula") "The Hour of the Dragon" (novel; vol. 26, #6 & vol. 27, #1–4, December 1935, January–April 1936) "Red Nails" (novella; vol. 28, #1–3, July, September, October 1936) Conan stories published in Fantasy Fan magazine "Gods of the North" (March 1934) – published as The Frost-Giant's Daughter in The Coming of Conan, 1953. Conan stories not published in Howard's lifetime "The God in the Bowl" – Published in Space Science Fiction, Sep. 1952. "The Black Stranger" – Published in Fantasy Magazine, Feb. 1953. "The Vale of Lost Women" – Published in The Magazine of Horror, Spring 1967. Unfinished Conan stories by Howard "Drums of Tombalku" – Fragment. Published in Conan the Adventurer, 1966. "The Hall of the Dead" – Synopsis. Published in The Magazine of Fantasy and Science Fiction, February 1967. "The Hand of Nergal" – Fragment. Published in Conan, 1967. "The Snout in the Dark" – Fragment. Published in Conan of Cimmeria, 1969. A number of untitled synopses for Conan stories also exist. Other Conan-related material by Howard "Wolves Beyond the Border" – A non-Conan story set in Conan's world. Fragment. Published in 1967 in Conan the Usurper "The Hyborian Age" – An essay written in 1932. Published in 1938 in The Hyborian Age. "Cimmeria" – A poem written in 1932. Published in 1965 in The Howard Collector. Book editions The character of Conan has proven durably popular, resulting in Conan stories by later writers such as Poul Anderson, Leonard Carpenter, Lin Carter, L. Sprague de Camp, Roland J. Green, John C. Hocking, Robert Jordan, Sean A. Moore, Björn Nyberg, Andrew J. Offutt, Steve Perry, John Maddox Roberts, Harry Turtledove, and Karl Edward Wagner. Some of these writers have finished incomplete Conan manuscripts by Howard. Others were created by rewriting Howard stories which originally featured entirely different characters from entirely different milieus. Most, however, are completely original works. 
In total, more than fifty novels and dozens of short stories featuring the Conan character have been written by authors other than Howard. The Gnome Press edition (1950–1957) was the first hardcover collection of Howard's Conan stories, including all the original Howard material known to exist at the time, some left unpublished in his lifetime. The later volumes contain some stories rewritten by L. Sprague de Camp (like "The Treasure of Tranicos"), including several non-Conan Howard stories, mostly historical exotica situated in the Levant at the time of the Crusades, which he turned into Conan yarns. The Gnome edition also issued the first Conan story written by an author other than Howard—the final volume published, which is by Björn Nyberg and revised by de Camp. The Lancer/Ace editions (1966–1977), under the direction of de Camp and Lin Carter, were the first comprehensive paperbacks, compiling the material from the Gnome Press series together in a chronological order with all the remaining original Howard material, including that left unpublished in his lifetime and fragments and outlines. These were completed by de Camp and Carter. The series also included Howard stories originally featuring other protagonists that were rewritten by de Camp as Conan stories. New Conan stories written entirely by de Camp and Carter were added as well. Lancer Books went out of business before bringing out the entire series, the publication of which was completed by Ace Books. Eight of the eventual twelve volumes published featured dynamic cover paintings by Frank Frazetta that, for many fans, presented the definitive, iconic impression of Conan and his world. For decades to come, most other portrayals of the Cimmerian and his imitators were heavily influenced by the cover paintings of this series. Most editions after the Lancer/Ace series have been of either the
original Howard stories or Conan material by others, but not both. The exceptions are the Ace Maroto editions (1978–1981), which include both new material by other authors and older material by Howard, though the latter are some of the non-Conan tales by Howard rewritten as Conan stories by de Camp. Notable later editions of the original Howard Conan stories include the Donald M. Grant editions (1974–1989, incomplete), the Berkley editions (1977), the Gollancz editions (2000–2006), and the Wandering Star/Del Rey editions (2003–2005). Later series of new Conan material include the Bantam editions (1978–1982) and Tor editions (1982–2004).
Conan chronologies In an attempt to provide a coherent timeline that fits the numerous adventures of Conan penned by Robert E. Howard and later writers, various "Conan chronologies" have been prepared by many people from the 1930s onward. Note that no consistent timeline has yet accommodated every single Conan story. The following are the principal theories that have been advanced over the years. Miller/Clark chronology – A Probable Outline of Conan's Career (1936) was the first effort to put the tales in chronological order. Completed by P. Schuyler Miller and John Drury Clark, the chronology was later revised by Clark and L. Sprague de Camp in An Informal Biography of Conan the Cimmerian (1952). Robert Jordan chronology – A Conan Chronology by Robert Jordan (1987) was a new chronology written by Conan writer Robert Jordan that included all written Conan material up to that point. It was heavily influenced by the Miller/Clark/de Camp chronologies, though it departed from them in a number of idiosyncratic instances. William Galen Gray chronology – Timeline of Conan's Journeys (1997, rev. 2004) was fan William Galen Gray's attempt to create "a chronology of all the stories, both Howard and pastiche." Drawing on the earlier Miller/Clark and Jordan chronologies, it represents the ultimate expression of their tradition to date. Joe Marek chronology – Joe Marek's chronology is limited to stories written (or devised) by Howard, though within that context it is essentially a revision of the Miller/Clark tradition to better reflect the internal evidence of the stories and avoid forcing Conan into what he perceives as a "mad dash" around the Hyborian world within timeframes too rapid to be credible. Dale Rippke chronology – The Darkstorm Conan Chronology (2003) was a completely revised and heavily researched chronology, radically repositioning a number of stories and including only those stories written or devised by Howard. The Dark Horse comic series follows this chronology. Media Films Conan the Barbarian (1982) and Conan the Destroyer (1984) The very first Conan cinematic project was planned by Edward Summer. Summer envisioned a series of Conan films, much like the James Bond franchise. He outlined six stories for this film series, but none were ever made. An original screenplay by Summer and Roy Thomas was written, but their lore-authentic screen story was never filmed. However, the resulting film, Conan the Barbarian (1982), was a combination of director John Milius' ideas and plots from Conan stories (some written by Howard's successors, notably Lin Carter and L. Sprague de Camp). The addition of a Nietzschean motto and of Conan's life philosophy was crucial for bringing the spirit of Howard's literature to the screen. The plot of Conan the Barbarian (1982) begins with Conan being enslaved by the Vanir raiders of Thulsa Doom, a malevolent warlord who is responsible for the slaying of Conan's parents and the genocide of his people. Later, Thulsa Doom becomes the cult leader of a religion that worships Set, a snake god. The vengeful Conan, the archer Subotai, and the thief Valeria set out on a quest to rescue a princess held captive by Thulsa Doom. The film was directed by John Milius and produced by Dino De Laurentiis. The character of Conan was played by Jorge Sanz as a child and Arnold Schwarzenegger as an adult. It was Schwarzenegger's breakthrough role as an actor. This film was followed by a less popular sequel, Conan the Destroyer, in 1984.
This sequel was a more typical fantasy-genre film and was even less faithful to Howard's Conan stories, being just a picaresque story of an assorted bunch of adventurers. The third film in the Conan trilogy was planned for 1987 to be titled Conan the Conqueror. The director was to be either Guy Hamilton or John Guillermin. Since Arnold Schwarzenegger was committed to the film Predator and De Laurentiis's contract with the star had expired after his obligation to Red Sonja and Raw Deal, he wasn't keen to negotiate a new one; thus the third Conan film sank into development hell. The script was eventually turned into Kull the Conqueror. Conan the Barbarian (2011) There were rumors in the late 1990s of another Conan sequel, a story about an older Conan titled King Conan: Crown of Iron, but Schwarzenegger's election in 2003 as governor of California ended this project. Warner Bros. spent seven years trying to get the project off the ground. However, in June 2007 the rights reverted to Paradox Entertainment, though all drafts made under Warner remained with them. In August 2007, it was announced that Millennium Films had acquired the rights to the project. Production was aimed for a Spring 2006 start, with the intention of having stories more faithful to the Robert E. Howard creation. In June 2009, Millennium hired Marcus Nispel to direct. In January 2010, Jason Momoa was selected for the role of Conan. The film was released in August 2011, and met poor critical reviews and box office results. The Legend of Conan In 2012, producers Chris Morgan and Frederick Malmberg announced plans for a sequel to the 1982 Conan the Barbarian titled The Legend of Conan, with Arnold Schwarzenegger reprising his role as Conan. A year later, Deadline reported that Andrea Berloff would write the script. Years passed since the initial announcement as Schwarzenegger worked on other films, but as late as 2016, Schwarzenegger affirmed his enthusiasm for making the film, saying, "Interest is high ... but we are not rushing." The script was finished, and Schwarzenegger and Morgan were meeting with possible directors. In April 2017, producer Chris Morgan stated that Universal had dropped the project, although there was a possibility of a TV show. The story of the film was supposed to be set 30 years after the first, with some inspiration from Clint Eastwood's Unforgiven. Television There have been three television series related to Conan: Conan the Adventurer is an animated television series produced by Jetlag Productions and Sunbow Productions that debuted on September 13, 1992, ran for 65 episodes and concluded on November 23, 1993. The series involved Conan chasing Serpent Men across the world in an attempt to release his parents from eternal imprisonment as living statues. Conan and the Young Warriors is an animated television series that premiered in 1994 and ran for 13 episodes. DiC Entertainment produced the show and CBS aired this series as a spin-off to the previous animated series. This cartoon took place after the finale of Conan the Adventurer with Wrath-Amon vanquished and Conan's family returned to life from living stone. Conan soon finds that the family of one of his friends are being turned into wolves by an evil sorceress and he must train three warriors in order to aid him in rescuing them. Conan the Adventurer is a live-action television series that premiered on September 22, 1997, and ran for 22 episodes. It starred German bodybuilder
Bay of Pigs Invasion fiasco, and was subsequently banned. The banned essay was included in Marker's first volume of collected film commentaries, Commentaires I, published in 1961. The following year Marker published Coréennes, a collection of photographs and essays on conditions in Korea. La Jetée and Le Joli Mai (1962–1966) Marker became known internationally for the short film La Jetée (The Pier) in 1962. It tells the story of a post-nuclear-war experiment in time travel, using a series of filmed photographs developed as a photomontage of varying pace, with limited narration and sound effects. In the film, a survivor of a futuristic Third World War is obsessed with distant and disconnected memories of a pier at Orly Airport, the image of a mysterious woman, and a man's death. Scientists experimenting in time travel choose him for their studies, and the man travels back in time to contact the mysterious woman, only to discover that the death he witnessed at Orly Airport was his own. Except for one shot of the woman mentioned above sleeping and suddenly waking up, the film is composed entirely of photographs by Jean Chiabaud and stars Davos Hanich as the man, Hélène Châtelain as the woman and filmmaker William Klein as a man from the future. La Jetée was the inspiration for Mamoru Oshii's 1987 debut live-action feature The Red Spectacles (and later for parts of Oshii's 2001 film Avalon) and also inspired Terry Gilliam's 12 Monkeys (1995) and Jonás Cuarón's Year of the Nail (2007) as well as many of Mira Nair's shots in her 2006 film The Namesake. While making La Jetée, Marker was simultaneously making the 150-minute documentary essay-film Le joli mai, released in 1963. Beginning in the spring of 1962, Marker and his camera operator Pierre Lhomme shot 55 hours of footage interviewing random people on the streets of Paris. The questions, asked by the unseen Marker, range from their personal lives to social and political issues of relevance at the time. As he had with montages of landscapes and indigenous art, Marker created a film essay that contrasted and juxtaposed a variety of lives with his signature commentary (spoken by Marker's friends, singer-actor Yves Montand in the French version and Simone Signoret in the English version). The film has been compared to the cinéma vérité films of Jean Rouch, and was criticized by cinéma vérité practitioners at the time. The term "cinéma vérité" was itself anathema to Marker, who never used it. Instead, he preferred his own term "ciné, ma vérité", meaning "cinéma, my truth". It was shown in competition at the 1963 Venice Film Festival, where it won the award for Best First Work. It also won the Golden Dove Award at the Leipzig DOK Festival. After the documentary Le Mystère Koumiko in 1965, Marker made Si j'avais quatre dromadaires, an essay-film that, like La Jetée, is a photomontage of over 800 photographs Marker had taken over the previous 10 years in 26 countries. The commentary involves a conversation between a fictitious photographer and two friends, who discuss the photos. The film's title is an allusion to a poem by Guillaume Apollinaire. It was the last film in which Marker included "travel footage" for many years. SLON and ISKRA (1967–1974) In 1967 Marker published his second volume of collected film essays, Commentaires II.
That same year, Marker organized the omnibus film Loin du Vietnam, a protest against the Vietnam War with segments contributed by Marker, Jean-Luc Godard, Alain Resnais, Agnès Varda, Claude Lelouch, William Klein, Michele Ray and Joris Ivens. The film includes footage of the war, from both sides, as well as anti-war protests in New York and Paris and other anti-war activities. From this initial collection of filmmakers with left-wing political agendas, Marker created the group S.L.O.N. (Société pour le lancement des oeuvres nouvelles, "Society for launching new works", but also the Russian word for "elephant"). SLON was a film collective whose objectives were to make films and to encourage industrial workers to create film collectives of their own. Its members included Valérie Mayoux, Jean-Claude Lerner, Alain Adair and John Tooker. Marker is usually credited as director or co-director of all of the films made by SLON. After the events of May 1968, Marker felt a moral obligation to abandon his own personal film career and devote himself to SLON and its activities. SLON's first film, À bientôt, j'espère (Rhodiacéta), made in 1968, was about a strike at a Rhodiacéta factory in France. Later that year SLON made La Sixième face du pentagone, about an anti-war protest in Washington, D.C.; it was a reaction to what SLON considered to be the unfair and censored reportage of such events on mainstream television. The film was shot by François Reichenbach, who received co-director credit. La Bataille des dix millions was made in 1970 with Mayoux as co-director and Santiago Álvarez as cameraman and is about the 1970 sugar crop in Cuba and its disastrous effects on the country. In 1971, SLON made Le Train en marche, a new prologue to Soviet filmmaker Aleksandr Medvedkin's 1935 film Schastye, which had recently been re-released in France. In 1974, SLON became I.S.K.R.A. (Images, Sons, Kinescope, Réalisations, Audiovisuelles, but also the name of Vladimir Lenin's political newspaper Iskra, which is also a Russian word for "spark"). Return to personal work (1974–1986) In 1974 Marker returned to his personal work and made a film outside of ISKRA. La Solitude du chanteur de fond is a one-hour documentary about Marker's friend Yves Montand's benefit concert for Chilean refugees. The concert was Montand's first public performance in four years, and the documentary includes film clips from his long career as a singer and actor. Marker had been working on a film about Chile with ISKRA since 1973, collaborating with Belgian sociologist Armand Mattelart and ISKRA members Valérie Mayoux and Jacqueline Meppiel to shoot and collect the visual materials, which Marker then edited together and provided the commentary for. The resulting film was the two-and-a-half-hour documentary La Spirale, released in 1975. The film chronicles events in Chile, beginning with the 1970 election of socialist President Salvador Allende and ending with his overthrow and death in the coup of 1973. Marker then began work on one of his most ambitious films, A Grin Without a Cat, released in 1977. The film's title refers to the Cheshire Cat from Alice in Wonderland. The metaphor compares the promise of the global socialist movement before May 1968 (the grin) with its actual presence in the world after May 1968 (the cat). The film's original French title is Le fond de l'air est rouge, which means "the air is essentially red", or "revolution is in the air", implying that the socialist movement was everywhere around the world.
The film was intended to be an all-encompassing portrait of political movements since May 1968, a summation of the work he had taken part in for ten years. The film is divided into two parts: the first half focuses on the hopes and idealism before May 1968, and the second half on the disillusion and disappointments since those events. Marker begins the film with the Odessa Steps sequence from Sergei Eisenstein's film The Battleship Potemkin, which Marker points out is a fictitious creation of Eisenstein's that has nonetheless influenced the image of the historical event. Marker used very little commentary in this film, but the film's montage structure and preoccupation with memory make it a Marker film. Upon release, the film was criticized for not addressing many current issues of the New Left such as the women's movement, sexual liberation and worker self-management. The film was re-released in the US in 2002. In the late 1970s, Marker traveled extensively throughout the world, including an extended period in Japan. From this inspiration, he first published the photo-essay Le Dépays in 1982, and then used the experience for his next film Sans Soleil, released in 1982. Sans Soleil stretches the limits of what could be called a documentary. It is an essay, a montage, mixing pieces of documentary with fiction and philosophical comments, creating an atmosphere of dream and science fiction. The main themes are Japan, Africa, memory and travel. A sequence in the middle of the film takes place in San Francisco, and heavily references Alfred Hitchcock's Vertigo. Marker has said that Vertigo is the only film "capable of portraying impossible memory, insane memory." The film's commentary is credited to the fictitious cameraman Sandor Krasna and is read, in the form of letters, by an unnamed woman. Though centered on Japan, the film was also shot in such other countries as Guinea-Bissau, Ireland and Iceland. Sans Soleil was shown at the 1983 Berlin Film Festival, where it won the OCIC Award. It was also awarded the Sutherland Trophy at the 1983 British Film Institute Awards. In 1984, Marker was invited by producer Serge Silberman to document the making of Akira Kurosawa's film Ran. From this, Marker made A.K., released in 1985. The film focuses more on Kurosawa's remote but polite personality than on the making of the film. The film was screened in the Un Certain Regard section at the 1985 Cannes Film Festival, before Ran itself had been released. In 1985, Marker's long-time friend and neighbor Simone Signoret died of cancer. Marker then made the one-hour TV documentary Mémoires pour Simone as a tribute to her in 1986. Multimedia and later career (1987–2012) Beginning with Sans Soleil, Marker developed a deep interest in digital technology. From 1985 to 1988, he worked on a conversational program (a prototypical chatbot) called "Dialector", which he wrote in Applesoft BASIC on an Apple II. He incorporated audiovisual elements in addition to the snippets of dialogue and poetry that "Computer" exchanged with the user. Version 6 of this program was revived from a floppy disk (with Marker's help and permission) and emulated online in 2015. His interests in digital technology also led to his film Level Five (1996) and Immemory (1998, 2008), an interactive multimedia CD-ROM produced for the Centre Pompidou (French-language version) and from Exact Change (English version).
Marker created a 19-minute multimedia piece in 2005 for the Museum of Modern Art in New York City titled Owls at Noon Prelude: The Hollow Men which was influenced by T. S. Eliot's poem. Marker lived in Paris, and very rarely granted interviews. One exception was a lengthy interview with Libération in 2003 in which he explained his approach to filmmaking. When asked for a picture of himself, he usually offered a photograph of a cat instead. (Marker was represented in Agnes Varda's 2008 documentary The Beaches of Agnes by a cartoon drawing of a cat, speaking in a technologically altered voice.) Marker's own cat was named Guillaume-en-égypte. In 2009, Marker commissioned an Avatar of Guillaume-en-Egypte to represent him in machinima works. The avatar was created by Exosius Woolley and first appeared in the short film / machinima, Ouvroir the Movie by Chris Marker. In the 2007 Criterion Collection release of La Jetée and Sans Soleil, Marker included a short essay, "Working on a
Shoestring Budget". He confessed to shooting all of Sans Soleil with a silent film camera, and recording all the audio on a primitive audio cassette recorder. Marker also reminds the reader that only one short scene in La Jetée is of a moving image, as Marker could only borrow a movie camera for one afternoon while working on the film.
From 2007 through 2011 Marker collaborated with the art dealer and publisher Peter Blum on a variety of projects that were exhibited at the Peter Blum galleries in New York City's Soho and Chelsea neighborhoods. Marker's works were also exhibited at the
with spread lips. The vowel [u] is produced with the tongue as far back and as high in the mouth as is possible, with protruded lips. This sound can be approximated by adopting the posture to whistle a very low note, or to blow out a candle. And [ɑ] is produced with the tongue as low and as far back in the mouth as possible. The other vowels are 'auditorily equidistant' between these three 'corner vowels', at four degrees of aperture or 'height': close (high tongue position), close-mid, open-mid, and open (low tongue position). These degrees of aperture plus the front-back distinction define 8 reference points on a mixture of articulatory and auditory criteria. These eight vowels are known as the eight 'primary cardinal vowels', and vowels like these are common in the world's languages. The lip positions can be reversed with the lip position for the corresponding vowel on the opposite side of the front-back dimension, so that e.g. Cardinal 1 can be produced with rounding somewhat similar to that of Cardinal 8; these are known as 'secondary cardinal vowels'. Sounds such as these are claimed to be less common in the world's languages. Other vowel sounds are also recognised on the vowel chart of the International Phonetic Alphabet. Jones argued that to be able to use the cardinal vowel system effectively one must undergo training with an expert phonetician, working both on the recognition and the production of the vowels. Cardinal vowels are not vowels of any particular language, but a measuring system. However, some languages contain a vowel or vowels that are close to the cardinal vowels. An example of such a language is Ngwe, which is spoken in Cameroon. It has been cited as a language with a vowel system that has 8 vowels which are rather similar to the 8 primary cardinal vowels (Ladefoged 1971:67). Cardinal vowels 19–22 were added by David Abercrombie. In IPA Numbers, cardinal vowels 1–18 have the same numbers but added to 300. Limits on the accuracy of the system The usual explanation of the cardinal vowel system implies that the competent user can reliably distinguish between sixteen Primary and Secondary vowels plus a small number of
central vowels. The provision of diacritics by the International Phonetic Association further implies that intermediate values may also be reliably recognized, so that a phonetician might be able to produce and recognize not only a close-mid front unrounded vowel and an open-mid front unrounded vowel but also a mid front unrounded vowel, a centralized mid front unrounded vowel, and so on. This suggests a range of vowels nearer to forty or fifty than to twenty in number. Empirical evidence for this ability in trained phoneticians is hard to come by. Ladefoged, in a series of pioneering experiments published in the 1950s and 60s, studied how trained phoneticians coped with the vowels of a dialect of Scottish Gaelic. He asked eighteen phoneticians to listen to a recording of ten words spoken by a native speaker of Gaelic and to place the vowels on a cardinal vowel quadrilateral. He then studied the degree of agreement or disagreement among the phoneticians. Ladefoged himself drew attention to the fact that the phoneticians who were trained in the British tradition established by Daniel Jones were closer to each
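To make the numbering convention mentioned earlier concrete (cardinal vowels 1–18 take IPA Numbers equal to their cardinal number plus 300), here is a minimal sketch listing the eight primary cardinal vowels with the symbols conventionally used for them and deriving the corresponding IPA Numbers; it is an illustrative aid only, not part of any phonetic standard or tool.

```python
# The eight primary cardinal vowels with their conventional IPA symbols.
PRIMARY_CARDINALS = {
    1: "i",  # close front unrounded
    2: "e",  # close-mid front unrounded
    3: "ɛ",  # open-mid front unrounded
    4: "a",  # open front unrounded
    5: "ɑ",  # open back unrounded
    6: "ɔ",  # open-mid back rounded
    7: "o",  # close-mid back rounded
    8: "u",  # close back rounded
}

def ipa_number(cardinal: int) -> int:
    """IPA Number for cardinal vowels 1-18: the cardinal number plus 300."""
    if not 1 <= cardinal <= 18:
        raise ValueError("cardinal vowel numbers run from 1 to 18")
    return 300 + cardinal

for n, symbol in PRIMARY_CARDINALS.items():
    print(f"Cardinal {n} [{symbol}] -> IPA Number {ipa_number(n)}")
```

Running the sketch prints, for example, "Cardinal 1 [i] -> IPA Number 301" and "Cardinal 8 [u] -> IPA Number 308", which is just the plus-300 rule stated above applied to the primary set.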
housing units at an average density of 676.8 per square mile (261.3/km). The racial makeup of the city was 81.54% White, 10.85% Black or African American, 0.39% Native American, 4.30% Asian, 0.04% Pacific Islander, 0.81% from other races, and 2.07% from two or more races. Hispanic or Latino of any race were 2.05% of the population. There were 33,689 households, out of which 26.1% had children under the age of 18 living with them, 38.2% were married couples living together, 10.3% had a female householder with no husband present, and 48.7% were non-families. 33.1% of all households were made up of individuals, and 6.5% had someone living alone who was 65 years of age or older. The average household size was 2.26 and the average family size was 2.92. In the city, the population was spread out, with 19.7% under the age of 18, 26.7% from 18 to 24, 28.7% from 25 to 44, 16.2% from 45 to 64, and 8.6% who were 65 years of age or older. The median age was 27 years. For every 100 females, there were 91.8 males. For every 100 females age 18 and over, there were 89.1 males. The median income for a household in the city was $33,729, and the median income for a family was $52,288. Males had a median income of $34,710 versus $26,694 for females. The per capita income for the city was $19,507. About 9.4% of families and 19.2% of the population were below the poverty line, including 14.8% of those under age 18 and 5.2% of those age 65 or over. However, traditional statistics of income and poverty can be misleading when applied to cities with high student populations, such as Columbia. Economy Columbia's economy is historically dominated by education, healthcare, and insurance. Jobs in government are also common, either in Columbia or a half-hour south in Jefferson City. The Columbia Regional Airport and the Missouri River Port of Rocheport connect the region with trade and transportation. With a Gross Metropolitan Product of $9.6 billion in 2018, Columbia's economy makes up 3% of the Gross State Product of Missouri. Columbia's metro area economy is slightly larger than the economy of Rwanda. Insurance corporations headquartered in Columbia include Shelter Insurance and the Columbia Insurance Group. Other organizations include StorageMart, Veterans United Home Loans, MFA Incorporated, the Missouri State High School Activities Association, and MFA Oil. Companies such as Socket, Datastorm Technologies, Inc. (no longer existent), Slackers CDs and Games, Carfax, and MBS Textbook Exchange were all founded in Columbia. Top employers According to Columbia's 2018 Comprehensive Annual Financial Report, the top employers in the city are: Culture The Missouri Theatre Center for the Arts and Jesse Auditorium are Columbia's largest fine arts venues. Ragtag Cinema annually hosts the True/False Film Festival. In 2008, filmmaker Todd Sklar completed the film Box Elder, which was filmed entirely in and around Columbia and the University of Missouri. The North Village Arts District, located on the north side of downtown, is home to galleries, restaurants, theaters, bars, music venues, and the Missouri Contemporary Ballet. The University of Missouri's Museum of Art and Archaeology displays 14,000 works of art and archaeological objects in five galleries for no charge to the public. Libraries include the Columbia Public Library, the University of Missouri Libraries, with over three million volumes in Ellis Library, and the State Historical Society of Missouri. 
Music The "We Always Swing" Jazz Series and the Roots N Blues Festival is held in Columbia. "9th Street Summerfest" (now hosted in Rose Park at Rose Music Hall) closes part of that street several nights each summer to hold outdoor performances and has featured Willie Nelson (2009), Snoop Dogg (2010), The Flaming Lips (2010), Weird Al Yankovic (2013), and others. The "University Concert Series" regularly includes musicians and dancers from various genres, typically in Jesse Hall. Other musical venues in town include the Missouri Theatre, the University's multipurpose Hearnes Center, the University's Mizzou Arena, The Blue Note, and Rose Music Hall. Shelter Gardens, a park on the campus of Shelter Insurance headquarters, also hosts outdoor performances during the summer. The University of Missouri School of Music attracts hundreds of musicians to Columbia, student performances are held in Whitmore Recital Hall. Among many non-profit organizations for classical music are included the "Odyssey Chamber Music Series", "Missouri Symphony", "Columbia Community Band", and "Columbia Civic Orchestra". Founded in 2006, the "Plowman Chamber Music Competition" is a biennial competition held in March/April of odd-numbered years, considered to be one of the finest, top five chamber music competitions in the nation. Theater Columbia has multiple opportunities to watch and perform in theatrical productions. Ragtag Cinema is one of the most well known theaters in Columbia. The city is home to Stephens College, a private institution known for performing arts. Their season includes multiple plays and musicals. The University of Missouri and Columbia College also present multiple productions a year. The city's three public high schools are also known for their productions. Rock Bridge High School performs a musical in November and two plays in the spring. Hickman High School also performs a similar season with two musical performances (one in the fall, and one in the spring) and 2 plays (one in the winter, and one at the end of their school year). The newest high school, Battle High, opened in 2013 and also is known for their productions. Battle presents a musical in the fall and a play in the spring, along with improv nights and more productions throughout the year. The city is also home to the indoor/outdoor theatre Maplewood Barn Theatre in Nifong Park and other community theatre programs such as Columbia Entertainment Company, Talking Horse Productions, Pace Youth Theatre and TRYPS. Sports The University of Missouri's sports teams, the Missouri Tigers, play a significant role in the city's sports culture. Faurot Field at Memorial Stadium, which has a capacity of 71,168, hosts home football games. The Hearnes Center and Mizzou Arena are two other large sport and event venues, the latter being the home arena for Mizzou's basketball team. Taylor Stadium is host to their baseball team and was the regional host for the 2007 NCAA Baseball Championship. Columbia College has several men and women collegiate sports teams as well. In 2007, Columbia hosted the National Association of Intercollegiate Athletics Volleyball National Championship, which the Lady Cougars participated in. Columbia also hosts the Show-Me State Games, a non-profit program of the Missouri Governor's Council on Physical Fitness and Health. They are the largest state games in the United States. Situated midway between St. Louis and Kansas City, Columbians will often have allegiances to the professional sports teams housed there, such as the St. 
Louis Cardinals, the Kansas City Royals, the Kansas City Chiefs, the St. Louis Blues, Sporting Kansas City, and St. Louis FC. Cuisine Columbia has many bars and restaurants that provide diverse styles of cuisine, due in part to having three colleges. One such establishment is the historic Booches bar, restaurant, and pool hall, which was established in 1884 and is frequented by college students. Shakespeare's Pizza is known across the nation for its college town pizza. Parks and recreation Throughout the city are many parks and trails for public use. Among the most frequently used is the MKT, a spur that connects to the Katy Trail just south of Columbia proper. The MKT ranked second in the nation for "Best Urban Trail" in the 2015 USA Today's 10 Best Readers' Choice Awards. This 10-foot-wide trail, built on the old railbed of the MKT railroad, begins in downtown Columbia in Flat Branch Park at 4th and Cherry Streets. The all-weather crushed limestone surface provides opportunities for walking, jogging, running, and bicycling. Stephens Lake Park is the highlight of Columbia's park system and is known for its 11-acre fishing/swimming lake, mature trees, and historical significance in the community. It serves as the center for outdoor winter sports, a variety of community festivals such as the Roots N Blues Festival, and outdoor concert series at the amphitheater. Stephens Lake has reservable shelters, playgrounds, a swimming beach and spraygrounds, art sculptures, waterfalls, and walking trails. Rock Bridge State Park is open year round, giving visitors the chance to scramble, hike, and bicycle through a scenic environment. Rock Bridge State Park contains some of the most popular hiking trails in the state, including the Gans Creek Wild Area. Media The city has two daily morning newspapers: the Columbia Missourian and the Columbia Daily Tribune. The Missourian is directed by professional editors and staffed by Missouri School of Journalism students who do reporting, design, copy editing, information graphics, photography, and multimedia. The Missourian publishes the weekly city magazine, Vox. With a daily circulation of nearly 20,000, the Daily Tribune is the most widely read newspaper in central Missouri. The University of Missouri has the independent official bi-weekly student newspaper, The Maneater, and the quarterly literary magazine The Missouri Review. The now-defunct Prysms Weekly was also published in Columbia. In late 2009, KCOU News launched full operations out of KCOU 88.1 FM on the MU campus. The entirely student-run news organization airs a weekday newscast, The Pulse. The city has four television channels. Columbia Access Television (CAT or CAT-TV) is the public access channel. CPSTV is the education access channel, managed by Columbia Public Schools as a function of the Columbia Public Schools Community Relations Department. The Government Access channel broadcasts City Council, Planning and Zoning Commission, and Board of Adjustment meetings. Television Radio Columbia has 19 radio stations as well as stations licensed from Jefferson City, Macon, and Lake of the Ozarks.
AM
KFAL 900 kHz • Country
KWOS 950 kHz • News/Talk
KFRU 1400 kHz • News/Talk
KTGR 1580 kHz • Sports (ESPN Radio)
FM
KCOU 88.1 MHz • College
KOPN 89.5 MHz • Public
KMUC 90.5 MHz • Classical
KBIA 91.3 MHz • News (NPR)
KMFC 92.1 MHz • Christian (K-Love)
KWJK 93.1 MHz •
The roots of Columbia's three economic foundations—education, medicine, and insurance— can be traced to the city's incorporation in 1821. Original plans for the town set aside land for a state university. In 1833, Columbia Baptist Female College opened, which later became Stephens College. Columbia College, distinct from today's and later to become the University of Missouri, was founded in 1839. When the state legislature decided to establish a state university, Columbia raised three times as much money as any competing city, and James S. Rollins donated the land that is today the Francis Quadrangle. Soon other educational institutions were founded in Columbia, such as Christian Female College, the first college for women west of the Mississippi, which later became Columbia College. The city benefited from being a stagecoach stop of the Santa Fe and Oregon trails, and later from the Missouri–Kansas–Texas Railroad. In 1822, William Jewell set up the first hospital. In 1830, the first newspaper began; in 1832, the first theater in the state was opened; and in 1835, the state's first agricultural fair was held. By 1839, the population of 13,000 and wealth of Boone County was exceeded in Missouri only by that of St. Louis County, which, at that time, included the City of St. Louis. Columbia's infrastructure was relatively untouched by the Civil War. As a slave state, Missouri had many residents with Southern sympathies, but it stayed in the Union. The majority of the city was pro-Union; however, the surrounding agricultural areas of Boone County and the rest of central Missouri were decidedly pro-Confederate. Because of this, the University of Missouri became a base from which Union troops operated. No battles were fought within the city because the presence of Union troops dissuaded Confederate guerrillas from attacking, though several major battles occurred at nearby Boonville and Centralia. After Reconstruction, race relations in Columbia followed the Southern pattern of increasing violence of whites against blacks in efforts to suppress voting and free movement: George Burke, a black man who worked at the university, was lynched in 1889. In the spring of 1923, James T. Scott, an African-American janitor at the University of Missouri, was arrested on allegations of raping a university professor's daughter. He was taken from the county jail and lynched on April 29 before a white mob of several hundred, hanged from the Old Stewart Road Bridge. In the 21st century, a number of efforts have been undertaken to recognize Scott's death. In 2010 his death certificate was changed to reflect that he was never tried or convicted of charges, and that he had been lynched. In 2011 a headstone was put at his grave at Columbia Cemetery; it includes his wife's and parents' names and dates, to provide a fuller account of his life. In 2016, a marker was erected at the lynching site to memorialize Scott. In 1901, Rufus Logan established The Columbia Professional newspaper to serve Columbia's large African American population. In 1963, University of Missouri System and the Columbia College system established their headquarters in Columbia. The insurance industry also became important to the local economy as several companies established headquarters in Columbia, including Shelter Insurance, Missouri Employers Mutual, and Columbia Insurance Group. State Farm Insurance has a regional office in Columbia. In addition, the now-defunct Silvey Insurance was a large local employer. 
Columbia became a transportation crossroads when U.S. Route 63 and U.S. Route 40 (which was improved as present-day Interstate 70) were routed through the city. Soon after, the city opened the Columbia Regional Airport. By 2000, the city's population was nearly 85,000. In 2017, Columbia was in the path of totality for the Solar eclipse of August 21, 2017. The city was expecting upwards of 400,000 tourists coming to view the eclipse. Geography Columbia, in northern mid-Missouri, is away from both St. Louis and Kansas City, and north of the state capital of Jefferson City. The city is near the Missouri River, between the Ozark Plateau and the Northern Plains. According to the United States Census Bureau, the city has a total area of , of which is land and is water. Topography The city generally slopes from the highest point in the Northeast to the lowest point in the Southwest towards the Missouri River. Prominent tributaries of the river are Perche Creek, Hinkson Creek, and Flat Branch Creek. Along these and other creeks in the area can be found large valleys, cliffs, and cave systems such as that in Rock Bridge State Park just south of the city. These creeks are largely responsible for numerous stream valleys giving Columbia hilly terrain similar to the Ozarks while also having prairie flatland typical of northern Missouri. Columbia also operates several greenbelts with trails and parks throughout town. Animal life Large mammal found in the city include urbanized coyotes, red foxes, and numerous whitetail deer. Eastern gray squirrel, and other rodents are abundant, as well as cottontail rabbits and the nocturnal opossum and raccoon. Large bird species are abundant in parks and include the Canada goose, mallard duck, as well as shorebirds, including the great egret and great blue heron. Turkeys are also common in wooded areas and can occasionally be seen on the MKT recreation trail. Populations of bald eagles are found by the Missouri River. The city is on the Mississippi Flyway, used by migrating birds, and has a large variety of small bird species, common to the eastern U.S. The Eurasian tree sparrow, an introduced species, is limited in North America to the counties surrounding St. Louis. Columbia has large areas of forested and open land and many of these areas are home to wildlife. Climate Columbia has a humid continental climate (Köppen Dfa) marked by sharp seasonal contrasts in temperature, and is in USDA Plant Hardiness Zone 6a. The monthly daily average temperature ranges from in January to in July, while the high reaches or exceeds on an average of 35 days per year, on two days, while two nights of sub- lows can be expected. Precipitation tends to be greatest and most frequent in the latter half of spring, when severe weather is also most common. Snow averages per season, mostly from December to March, with occasional November accumulation and falls in April being rarer; historically seasonal snow accumulation has ranged from in 2005–06 to in 1977–78. Extreme temperatures have ranged from on February 12, 1899 to on July 12 and 14, 1954. Readings of or are uncommon, the last occurrences being January 7, 2014 and July 31, 2012. Cityscape Columbia's most significant and well-known architecture is found in buildings located in its downtown area and on the university campuses. The University of Missouri's Jesse Hall and the neo-gothic Memorial Union have become icons of the city. The David R. Francis Quadrangle is an example of Thomas Jefferson's academic village concept. 
Four historic districts located within the city are listed on the National Register of Historic Places: Downtown Columbia, the East Campus Neighborhood, Francis Quadrangle, and the North Ninth Street Historic District. The downtown skyline is relatively low and is dominated by the 10-story Tiger Hotel and the 15-story Paquin Tower. Downtown Columbia is an area of approximately one square mile surrounded by the University of Missouri on the south, Stephens College to the east, and Columbia College on the north. The area serves as Columbia's financial and business district. Since the early-21st century, a large number of high-rise apartment complexes have been built in downtown Columbia. Many of these buildings also offer mixed-use business and retail space on the lower levels. These developments have not been without criticism, with some expressing concern the buildings hurt the historic feel of the area, or that the city does not yet have the infrastructure to support them. The city's historic residential core lies in a ring around downtown, extending especially to the west along Broadway, and south into the East Campus Neighborhood. The city government recognizes 63 neighborhood associations. The city's most dense commercial areas are primarily along Interstate 70, U.S. Route 63, Stadium Boulevard, Grindstone Parkway, and Downtown. Demographics 2010 census As of the census of 2010, 108,500 people, 43,065 households, and 21,418 families resided in the city. The population density was . There were 46,758 housing units at an average density of . The racial makeup of the city was 79.0% White, 11.3% African American, 0.3% Native American, 5.2% Asian, 0.1% Pacific Islander, 1.1% from other races, and 3.1% from two or more races. Hispanic or Latino of any race were 3.4% of the population. There were 43,065 households, of which 26.1% had children under the age of 18 living with them, 35.6% were married couples living together, 10.6% had a female householder with no husband present, 3.5% had a male householder with no wife present, and 50.3% were non-families. 32.0% of all households were made up of individuals, and 6.6% had someone living alone who was 65 years of age or older. The average household size was 2.32 and the average family size was 2.94. In the city the population was spread out, with 18.8% of residents under the age of 18; 27.3% between the ages of 18 and 24; 26.7% from 25 to 44; 18.6% from 45 to 64; and 8.5% who were 65 years of age or older. The median age in the city was 26.8 years. The gender makeup of the city was 48.3% male and 51.7% female. 2000 census As of the census of 2000, there were 84,531 people, 33,689 households, and 17,282 families residing in the city. The population density was 1,592.8 people per square mile (615.0/km). There were 35,916 housing units at an average density of 676.8 per square mile (261.3/km).
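As a small worked example of how the "males per 100 females" figures quoted in the census material relate to the reported gender shares, the sketch below applies the usual ratio to the 2010 makeup above; the calculation is added here purely for illustration and is not part of the census source.

```python
# 2010 gender makeup quoted above: 48.3% male, 51.7% female.
male_share = 48.3
female_share = 51.7

# "Males per 100 females" is the ratio of the two shares scaled to 100.
males_per_100_females = 100 * male_share / female_share
print(f"Males per 100 females (2010): {males_per_100_females:.1f}")  # about 93.4
# For comparison, the 2000 census reported 91.8 males per 100 females.
```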
promotions eight years in a row. In 1905–06 the team played only friendly games but joined, and won, the Lewisham League Division III for the 1906–07 season. For the 1907–08 season the team contested the Lewisham League, Woolwich League and entered the Woolwich Cup. It was also around this time the Addicks nickname was first used in the local press although it may have been in use before then. In the 1908–09 season Charlton Athletic were playing in the Blackheath and District League and by 1910–11 had progressed to the Southern Suburban League. During this period Charlton Athletic won the Woolwich Cup four times, the championship of the Woolwich League three times, won the Blackheath League twice and the Southern Suburban League three times. They became a senior side in 1913 the same year that nearby Woolwich Arsenal relocated to North London. At the outbreak of World War One, Charlton were one of the first clubs to close down to take part in the "Greater Game" overseas. The club was reformed in 1917, playing mainly friendlies to raise funds for charities connected to the war and for the Woolwich Memorial Hospital Cup, the trophy for which Charlton donated. It had previously been the Woolwich Cup that the team had won outright following three consecutive victories. After the war, they joined the Kent League for one season (1919–20) before becoming professional, appointing Walter Rayner as the first full-time manager. They were accepted by the Southern League and played just a single season (1920–21) before being voted into the Football League. Charlton's first Football League match was against Exeter City in August 1921, which they won 1–0. In 1923, Charlton became "giant killers" in the FA Cup beating top flight sides Manchester City, West Bromwich Albion, and Preston North End before losing to eventual winners Bolton Wanderers in the Quarter-Finals. Later that year, it was proposed that Charlton merge with Catford Southend to create a larger team with bigger support. In the 1923–24 season Charlton played in Catford at The Mount stadium and wore the colours of "The Enders", light and dark blue vertical stripes. However, the move fell through and the Addicks returned to the Charlton area in 1924, returning to the traditional red and white colours in the process. Charlton finished second bottom in the Football League in 1926 and were forced to apply for re-election which was successful. Three years later the Addicks won the Division Three championship in 1929 and they remained at the Division Two level for four years. After relegation into the Third Division south at the end of the 1932–33 season the club appointed Jimmy Seed as manager and he oversaw the most successful period in Charlton's history either side of the Second World War. Seed, an ex-miner who had made a career as a footballer despite suffering the effects of poison gas in the First World War, remains the most successful manager in Charlton's history. He is commemorated in the name of a stand at the Valley. Seed was an innovative thinker about the game at a time when tactical formations were still relatively unsophisticated. He later recalled "a simple scheme that enabled us to pull several matches out of the fire" during the 1934–35 season: when the team was in trouble "the centre-half was to forsake his defensive role and go up into the attack to add weight to the five forwards." 
The organisation Seed brought to the team proved effective and the Addicks gained successive promotions from the Third Division to the First Division between 1934 and 1936, becoming the first club to ever do so. Charlton finally secured promotion to the First Division by beating local rivals West Ham United at the Boleyn Ground, with their centre-half John Oakes playing on despite concussion and a broken nose. In 1937, Charlton finished runners up in the First Division, in 1938 finished fourth and 1939 finished third. They were the most consistent team in the top flight of English football over the three seasons immediately before the Second World War. This continued during the war years and they won the Football League War Cup and appeared in finals. Post-war success and fall from grace (1946–1984) Charlton reached the 1946 FA Cup Final, but lost 4–1 to Derby County at Wembley. Charlton's Bert Turner scored an own goal in the 80th minute before equalising for the Addicks a minute later to take them into extra time, but they conceded three further goals in the extra period. When the full league programme resumed in 1946–47 Charlton could finish only 19th in the First Division, just above the relegation spots, but they made amends with their performance in the FA Cup, reaching the 1947 FA Cup Final. This time they were successful, beating Burnley 1–0, with Chris Duffy scoring the only goal of the day. In this period of renewed football attendances, Charlton became one of only 13 English football teams to average over 40,000 as their attendance during a full season. The Valley was the largest football ground in the League, drawing crowds in excess of 70,000. However, in the 1950s little investment was made either for players or to The Valley, hampering the club's growth. In 1956, the then board undermined Jimmy Seed and asked for his resignation; Charlton were relegated the following year. From the late 1950s until the early 1970s, Charlton remained a mainstay of the Second Division before relegation to the Third Division in 1972 caused the team's support to drop, and even a promotion in 1975 back to the second division did little to re-invigorate the team's support and finances. In 1979–80 Charlton were relegated again to the Third Division, but won immediate promotion back to the Second Division in 1980–81. This was a turning point in the club's history leading to a period of turbulence and change including further promotion and exile. A change in management and shortly after a change in club ownership led to severe problems, such as the reckless signing of former European Footballer of the Year Allan Simonsen, and the club looked like it would go out of business. The "exiled" years (1985–1992) In 1984 financial matters came to a head and the club went into administration, to be reformed as Charlton Athletic (1984) Ltd. although the club's finances were still far from secure. They were forced to leave the Valley just after the start of the 1985–86 season, after its safety was criticised by Football League officials in the wake of the Bradford City stadium fire. The club began to groundshare with Crystal Palace at Selhurst Park and this arrangement looked to be for the long-term, as Charlton did not have enough funds to revamp the Valley to meet safety requirements. 
Despite the move away from the Valley, Charlton were promoted to the First Division as Second Division runners-up at the end of 1985–86, and remained at this level for four years (achieving a highest league finish of 14th) often with late escapes, most notably against Leeds in 1987, where the Addicks triumphed in extra-time of the play-off final replay to secure their top flight place. In 1987 Charlton also returned to Wembley for the first time since the 1947 FA Cup final for the Full Members Cup final against Blackburn. Eventually, Charlton were relegated in 1990 along with Sheffield Wednesday and bottom club Millwall. Manager Lennie Lawrence remained in charge for one more season before he accepted an offer to take charge of Middlesbrough. He was replaced by joint player-managers Alan Curbishley and Steve Gritt. The pair had unexpected success in their first season finishing just outside the play-offs, and 1992–93 began promisingly and Charlton looked good bets for promotion in the new Division One (the new name of the old Second Division following the formation of the Premier League). However, the club was forced to sell players such as Rob Lee to help pay for a return to the Valley, while club fans formed the Valley Party, nominating candidates to stand in local elections in 1990, pressing the local council to enable the club's return to the Valley - finally achieved in December 1992. In March 1993, defender Tommy Caton, who had been out of action due to injury since January 1991, announced his retirement from playing on medical advice. He died suddenly at the end of the following month at the age of 30. Premier League years (1998–2007) In 1995, new chairman Richard Murray appointed Alan Curbishley as sole manager of Charlton. Under his sole leadership Charlton made an appearance in the play-off in 1996 but were eliminated by Crystal Palace in the semi-finals and the following season brought a disappointing 15th-place finish. 1997–98 was Charlton's best season for years. They reached the Division One play-off final and battled against Sunderland in a thrilling game which ended with a 4–4 draw after extra time. Charlton won 7–6 on penalties, with the match described as "arguably the most dramatic game of football in Wembley's history", and were promoted to the Premier League. Charlton's first Premier League campaign began promisingly (they went top after two games) but they were unable to keep up their good form and were soon battling relegation. The battle was lost on the final day of the season but the club's board kept faith in Curbishley, confident that they could bounce back. Curbishley rewarded the chairman's loyalty with the Division One title in 2000 which signalled a return to the Premier League. After the club's return, Curbishley proved an astute spender and by 2003 he had succeeded in establishing Charlton in the top flight. Charlton spent much of the 2003–04 Premier League season challenging for a Champions League place, but a late-season slump in form and the sale of star player Scott Parker to Chelsea, left Charlton in seventh place, which was still the club's highest finish since the 1950s. Charlton were unable to build on this level of achievement and Curbishley departed in 2006, with the club still established as a solid mid-table side. In May 2006, Iain Dowie was named as Curbishley's successor, but was sacked after 12 league matches in November 2006, with only two wins. 
Les Reed replaced Dowie as manager; however, he too failed to improve Charlton's position in the league table, and on Christmas Eve 2006 Reed was replaced by former player Alan Pardew. Although results did improve, Pardew was unable to keep Charlton up and relegation was confirmed in the penultimate match of the season. Return to the Football League (2007–2014) Charlton's return to the second tier of English football was a disappointment, with their promotion campaign tailing off to an 11th-place finish. Early in the following season the Addicks were linked with a foreign takeover, but this was swiftly denied by the club. On 10 October 2008, Charlton received an indicative offer for the club from a Dubai-based diversified investment company. However, the deal later fell through. The full significance of this soon became apparent as the club recorded net losses of over £13 million for that financial year. Pardew left on 22 November after a 2–5 home loss to Sheffield United that saw the team fall into the relegation places. Matters did not improve under caretaker manager Phil Parkinson, and the team went a club-record 18 games without a win before finally achieving a 1–0 away victory over Norwich City in an FA Cup Third Round replay; Parkinson was hired on a permanent basis. The team were relegated to League One after a 2–2 draw against Blackpool on 18 April 2009. After spending almost the entire 2009–10 season in the top six of League One, Charlton were defeated in the Football League One play-offs semi-final second leg on penalties against Swindon Town. After a change in ownership, Parkinson and Charlton legend Mark Kinsella left after a poor run of results. Another Charlton legend, Chris Powell, was appointed manager of the club in January 2011, winning his first game in charge 2–0 over Plymouth at the Valley. This was Charlton's first league win since November. Powell's bright start continued with a further three victories, before running into a downturn which saw the club go 11 games in succession without a win. Yet the fans' respect for Powell saw him come under remarkably little criticism. The club's fortunes picked up towards the end of the season, but the improvement left them far short of the play-offs. In a busy summer, Powell brought in 19 new players, and after a successful season Charlton Athletic won promotion back to the Championship with a 1–0 away win at Carlisle United on 14 April 2012. A week later, on 21 April 2012, they were confirmed as champions after a 2–1 home win over Wycombe Wanderers. Charlton then lifted the League One trophy on 5 May 2012, having been in the top position since 15 September 2011, and after recording a 3–2 victory over Hartlepool United, recorded their highest ever league points total of 101, the highest in any professional European league that year. In the first season back in the Championship, the 2012–13 season saw Charlton finish in ninth place with 65 points, just three points short of the play-off places for the Premier League. Duchâtelet's ownership (2014–2019) In early January 2014, during the 2013–14 season, Belgian businessman Roland Duchâtelet took over Charlton as owner in a deal worth £14 million. This made Charlton a part of a network of football clubs owned by Duchâtelet. On 11 March 2014, two days after an FA Cup quarter-final loss to Sheffield United, and with Charlton sitting bottom of the table, Powell was sacked; leaked private emails suggested that this was due to a rift with the owner.
New manager Jose Riga, despite having to join Charlton long after the transfer window had closed, was able to improve Charlton's form and eventually guide them to 18th place, successfully avoiding relegation. After Riga's departure to manage Blackpool, former Millwall player Bob Peeters was appointed as manager in May 2014 on a 12-month contract. Charlton started strong, but a long run of draws meant that after only 25 games in charge Peeters was dismissed with the team in 14th place. His replacement, Guy Luzon, ensured there was no relegation battle by winning most of the remaining matches, resulting in a 12th-place finish. The 2015–16 season began promisingly but results under Luzon deteriorated and on 24 October 2015 after a 3–0 defeat at home to Brentford he was sacked. Luzon said in a News Shopper interview that he "was not the one who chose how to do the recruitment" as the reason why he failed as manager. Karel Fraeye was appointed "interim head coach", but was sacked after 14 games and just two wins, with the club then second from bottom in the Championship. On 14 January 2016, Jose Riga was appointed head coach for a second spell, but could not prevent Charlton from being relegated to League One for the 2016–17 season. Riga resigned at the end of the season. To many fans, the managerial changes and subsequent relegation to League One were symptomatic of the mismanagement of the club under Duchâtelet's ownership and
Then followed Woolwich Common (1907–1908), Pound Park (1908–1913), and Angerstein Lane (1913–1915). After the end of the First World War, a chalk quarry known as the Swamps was identified as Charlton's new ground, and in the summer of 1919 work began to create the level playing area and remove debris from the site. The first match at this site, now known as the club's current ground The Valley, was in September 1919. Charlton stayed at The Valley until 1923, when the club moved to The Mount stadium in Catford as part of a proposed merger with Catford Southend Football Club. However, after this move collapsed in 1924 Charlton returned to The Valley. During the 1930s and 1940s, significant improvements were made to the ground, making it one of the largest in the country at that time. In 1938 the highest attendance to date at the ground was recorded at over 75,000 for a FA Cup match against Aston Villa. During the 1940s and 1950s the attendance was often above 40,000, and Charlton had one of the largest support bases in the country. However, after the club's relegation little investment was made in The Valley as it fell into decline. In the 1980s matters came to a head as the ownership of the club and The Valley was divided. The large East Terrace had been closed down by the authorities after the Bradford City stadium fire and the ground's owner wanted to use part of the site for housing. In September 1985, Charlton made the controversial move to ground-share with South London neighbours Crystal Palace at Selhurst Park. This move was unpopular with supporters and in the late 1980s significant steps were taken to bring about the club's return to The Valley. A single issue political party, the Valley Party, contested the 1990 local Greenwich Borough Council elections on a ticket of reopening the stadium, capturing 11% of the vote, aiding the club's return. The Valley Gold investment scheme was created to help supporters fund the return to The Valley, and several players were also sold to raise funds. For the 1991–92 season and part of the 1992–93 season, the Addicks played at West Ham's Upton Park as Wimbledon had moved into Selhurst Park alongside Crystal Palace. Charlton finally returned to The Valley in December 1992, celebrating with a 1–0 victory against Portsmouth. Since the return to The Valley, three sides of the ground have been completely redeveloped turning The Valley into a modern, all-seater stadium with a 27,111 capacity which is the biggest in South London. There are plans in place to increase the ground's capacity to approximately 31,000 and even around 40,000 in the future. Supporters The bulk of the club's support base comes from South East London and Kent, particularly the London boroughs of Greenwich, Bexley and Bromley. Supporters played a key role in the return of the club to The Valley in 1992 and were rewarded by being granted a voice on the board in the form of an elected supporter director. Any season ticket holder could put themselves forward for election, with a certain number of nominations, and votes were cast by all season ticket holders over the age of 18. The last such director, Ben Hayes, was elected in 2006 to serve until 2008, when the role was discontinued as a result of legal issues. Its functions were replaced by a fans forum, which met for the first time in December 2008 and is still active to this day. Nicknames Charlton's most common nickname is The Addicks. 
The origin of this name is from a local fishmonger, Arthur "Ikey" Bryan, who rewarded the team with meals of haddock and chips. The progression of the nickname can be seen in the book The Addicks Cartoons: An Affectionate Look into the Early History of Charlton Athletic, which covers the pre-First World War history of Charlton through a narrative based on 56 cartoons which appeared in the now defunct Kentish Independent. The very first cartoon, from 31 October 1908, calls the team the Haddocks. By 1910, the name had changed to Addicks although it also appeared as Haddick. The club also have two other nicknames, The Robins, adopted in 1931, and The Valiants, chosen in a fan competition in the 1960s which also led to the adoption of the sword badge which is still in use. The Addicks nickname never went away and was revived by fans after the club lost its Valley home in 1985 and went into exile at Crystal Palace. It is now once again the official nickname of the club. Charlton fans' chants have included "Valley, Floyd Road", a song noting the stadium's address to the tune of "Mull of Kintyre". . In popular culture Charlton Athletic featured in the ITV one-off drama Albert's Memorial, shown on 12 September 2010 and starring David Jason and David Warner. In the long-running BBC sitcom Only Fools and Horses, Rodney Charlton Trotter is named after the club. In the BBC sitcom Brush Strokes the lead character Jacko was a Charlton fan, reflecting the real life allegiance to the club of the actor who portrayed him, Karl Howman. Charlton's ground and the then manager, Alan Curbishley, made appearances in the Sky One TV series Dream Team. Charlton Athletic has also featured in a number of book publications, in both the realm of fiction and factual/sports writing. These include works by Charlie Connelly and Paul Breen's work of popular fiction which is entitled "The Charlton Men". The book is set against Charlton's successful 2011–12 season when they won the League One title and promotion back to the Championship in concurrence with the 2011 London riots. Timothy Young, the protagonist in Out of the Shelter, a novel by David Lodge, supports Charlton Athletic. The book describes Timothy listening to Charlton's victory in the 1947 FA Cup Final on the radio. Colours and crest Charlton have used a number of crests and badges during their history, although the current design has not been changed since 1968. The first known badge, from the 1930s, consisted of the letters CAF in the shape of a club from a pack of cards. In the 1940s, Charlton used a design featuring a robin sitting in a football within a shield, sometimes with the letters CAFC in the four-quarters of the shield, which was worn for the 1946 FA Cup Final. In the late 1940s and early 1950s, the crest of the former metropolitan borough of Greenwich was used as a symbol for the club but this was not used on the team's shirts. In 1963, a competition was held to find a new badge for the club, and the winning entry was a hand holding a sword, which complied with Charlton's nickname of the time, the Valiants. Over the next five years modifications were made to this design, such as the addition of a circle surrounding the hand and sword and including the club's name in the badge. By 1968, the design had reached the one known today, and has been used continuously from this year, apart from a period in the 1970s when just the letters CAFC appeared on the team's shirts. 
With the exception of one season, Charlton have always played in red and white - colours chosen by the boys who founded Charlton Athletic in 1905 after having to play their first matches in the borrowed kits of their local rivals Woolwich Arsenal, who also played in red and white. The exception came during part of the 1923–24 season when Charlton wore the colours of Catford Southend as part of the proposed move to Catford, which were light and dark blue stripes. However, after the move fell through, Charlton returned to wearing red and white as their home colours. Kit sponsors and manufacturers The sponsors were as follows: Rivalries Charlton's main rivals are their South London neighbours, Crystal Palace and Millwall. Unlike those rivals Charlton have never competed in football's fourth tier. Crystal Palace In 1985, Charlton was forced to ground-share with Crystal Palace after safety concerns at The Valley. They played their home fixtures at the Glaziers' Selhurst Park stadium until 1991. The arrangement was seen by Crystal Palace chairman Ron Noades as essential for the future of football, but it was unpopular with both sets of fans. Charlton fans campaigned for a return to The Valley throughout their time at Selhurst Park. In 2005, Palace were relegated by Charlton at the Valley after a 2–2 draw. Palace needed a win to survive. However, with seven minutes left, Charlton equalised, relegating their rivals. Post-match, there was a well-publicised altercation between the two chairmen of the respective clubs, Richard Murray and Simon Jordan. Since their first meeting in the Football League in 1925, Charlton have won 17, drawn 13 and lost 26 games against Palace. The teams last met in 2015, a 4–1 win for Palace in the League Cup. Millwall Charlton are closest in proximity to Millwall than any other club, with The Valley and The Den being less than four miles () apart. They last met in July 2020, a 1–0 win for Millwall at the Valley. Since their first Football League game in 1921, Charlton have won 12, drawn 26 and lost 37. The Addicks have not beaten Millwall in the last twelve fixtures between the sides and their last win came in March 1996 at The Valley. Players First-team squad Out on loan Under-23s Development squad Academy squad Former players Player of the Year Club officials As of 18 January 2022 Coaching staff Managerial history Chairman Honours Football League First Division (1st tier) Runners-up – 1936–37 Football League Second Division / Football League First Division (2nd tier) Champions – 1999–2000 Runners-up – 1935–36, 1985–86 Play-off winners – 1986–87, 1997–98 Football League Third Division / Football League One (3rd tier) Champions – 2011–12 Promoted (old Division 3) – 1974–75, 1980–81 Play-off winners – 2018–19 Football League Third Division South Champions – 1928–29, 1934–35 FA Cup Winners – 1946–47 Runners-up – 1946–46 Full Members Cup Runners-up – 1986–87 Football League War Cup Joint winners – 1943–44 Kent Senior Cup Winners – 1994–95, 2012–13, 2014–15 Runners-up – 2015–16 Records Goalkeeper Sam Bartram is Charlton's record appearance maker, having played a total of 623 times between 1934 and 1956. But for six years lost to the Second World War, when no league football was played, this tally would be far higher. 
Keith Peacock is the club's second-highest appearance maker, with 591 games between 1961 and 1979. He was also the first-ever substitute in a Football League game, replacing injured goalkeeper Mike Rose after 11 minutes of a match against Bolton Wanderers on 21 August 1965. Defender and midfielder Radostin Kishishev is Charlton's record international appearance maker, having received 42 caps for Bulgaria while a Charlton player. In total, 12 Charlton players have received full England caps. The first was Seth Plum, in 1923, and the most recent was Darren Bent, in 2006. Luke Young, with 7 caps, is Charlton's most capped England international. Charlton's record goalscorer is Derek Hales, who scored 168 times in all competitions in 368 matches during two spells with the club. Counting only league goals, Stuart Leary is the club's record scorer with 153 goals between 1951 and 1962. The record number of goals scored in one season is 33, scored by Ralph Allen in the 1934–35 season. Charlton's record home attendance is 75,031, set on 12 February 1938 for an FA Cup match against Aston Villa. The record all-seated attendance is 27,111, The Valley's current capacity. This record was first set in September 2005 in a Premier League match against Chelsea and has since been equalled several times.
technique) requires specialized equipment and techniques that adapt to the condition of the snow. Trail preparation employs snow machines which tow snow-compaction, texturing and track-setting devices. Groomers must adapt such equipment to the condition of the snow—crystal structure, temperature, degree of compaction, moisture content, etc. Depending on the initial condition of the snow, grooming may achieve an increase in density for new-fallen snow or a decrease in density for icy or compacted snow. Cross-country ski facilities may incorporate a course design that meets homologation standards for such organizations as the International Olympic Committee, the International Ski Federation, or national standards. Standards address course distances, degree of difficulty with maximums in elevation difference and steepness—both up and downhill, plus other factors. Some facilities have night-time lighting on select trails—called lysløype (light trails) in Norwegian and elljusspår (electric-light trails) in Swedish. The first lysløype opened in 1946 in Nordmarka and at Byåsen (Trondheim). Competition Cross-country ski competition encompasses a variety of formats for races over courses of varying lengths according to rules sanctioned by the International Ski Federation (FIS) and by national organizations, such as the U.S. Ski and Snowboard Association and Cross Country Ski Canada. It also encompasses cross-country ski marathon events, sanctioned by the Worldloppet Ski Federation, cross-country ski orienteering events, sanctioned by the International Orienteering Federation, and Paralympic cross-country skiing, sanctioned by the International Paralympic Committee. FIS-sanctioned competition The FIS Nordic World Ski Championships have been held in various numbers and types of events since 1925 for men and since 1954 for women. From 1924 to 1939, the World Championships were held every year, including the Winter Olympic Games. After World War II, the World Championships were held every four years from 1950 to 1982. Since 1985, the World Championships have been held in odd-numbered years. Notable cross-country ski competitions include the Winter Olympics, the FIS Nordic World Ski Championships, and the FIS World Cup events (including the Holmenkollen). Other sanctioned competition Cross-country ski marathons—races with distances greater than 40 kilometers—have two cup series, the Ski Classics, which started in 2011, and the Worldloppet. Skiers race in classic or free-style (skating) events, depending on the rules of the race. Notable ski marathons, include the Vasaloppet in Sweden, Birkebeineren in Norway, the Engadin Skimarathon in Switzerland, the American Birkebeiner, the Tour of Anchorage in Anchorage, Alaska, and the Boreal Loppet, held in Forestville, Quebec, Canada. Biathlon combines cross-country skiing and rifle shooting. Depending on the shooting performance, extra distance or time is added to the contestant's total running distance/time. For each shooting round, the biathlete must hit five targets; the skier receives a penalty for each missed target, which varies according to the competition rules. Ski orienteering is a form of cross-country skiing competition that requires navigation in a landscape, making optimal route choices at racing speeds. Standard orienteering maps are used, but with special green overprinting of trails and tracks to indicate their navigability in snow; other symbols indicate whether any roads are snow-covered or clear. 
Standard skate-skiing equipment is used, along with a map holder attached to the chest. It is one of the four orienteering disciplines recognized by the International Orienteering Federation. Upper body strength is especially important because of frequent double poling along narrow snow trails. Paralympic cross-country ski competition is an adaptation of cross-country skiing for athletes with disabilities. Paralympic cross-country skiing includes standing events, sitting events (for wheelchair users), and events for visually impaired athletes under the rules of the International Paralympic Committee. These are divided into several categories for people who are missing limbs, have amputations, are blind, or have any other physical disability, to continue their sport. Techniques Cross-country skiing has two basic propulsion techniques, which apply to different surfaces: classic (undisturbed snow and tracked snow) and skate skiing (firm, smooth snow surfaces). The classic technique relies on a wax or texture on the ski bottom under the foot for traction on the snow to allow the skier to slide the other ski forward in virgin or tracked snow. With the skate skiing technique a skier slides on alternating skis on a firm snow surface at an angle from each other in a manner similar to ice skating. Both techniques employ poles with baskets that allow the arms to participate in the propulsion. Specialized equipment is adapted to each technique and each type of terrain. A variety of turns are used, when descending. Poles contribute to forward propulsion, either simultaneously (usual for the skate technique) or in alternating sequence (common for the classical technique as the "diagonal stride"). Double poling is also used with the classical technique when higher speed can be achieved on flats and slight downhills than is available in the diagonal stride, which is favored to achieve higher power going uphill. Classic The classic style is often used on prepared trails (pistes) that have pairs of parallel grooves (tracks) cut into the snow. It is also the most usual technique where no tracks have been prepared. With this technique, each ski is pushed forward from the other stationary ski in a striding and gliding motion, alternating foot to foot. With the "diagonal stride" variant the poles are planted alternately on the opposite side of the forward-striding foot; with the "kick-double-pole" variant the poles are planted simultaneously with every other stride. At times, especially with gentle descents, double poling is the sole means of propulsion. On uphill terrain, techniques include the "side step" for steep slopes, moving the skis perpendicular to the fall line, the "herringbone" for moderate slopes, where the skier takes alternating steps with the skis splayed outwards, and, for gentle slopes, the skier uses the diagonal technique with shorter strides and greater arm force on the poles. Skate skiing With skate skiing, the skier provides propulsion on a smooth, firm snow surface by pushing alternating skis away from one another at an angle, in a manner similar to ice skating. Skate-skiing usually involves a coordinated use of poles and the upper body to add impetus, sometimes with a double pole plant each time the ski is extended on a temporarily "dominant" side ("V1") or with a double pole plant each time the ski is extended on either side ("V2"). Skiers climb hills with these techniques by widening the angle of the "V" and by making more frequent, shorter strides and more forceful use of poles. 
A variant of the technique is the "marathon skate" or "Siitonen step", where the skier leaves one ski in the track while skating outwards to the side with the other ski. Turns Turns, used while descending or for braking, include the snowplough (or "wedge turn"), the stem christie (or "wedge christie"), parallel turn, and the Telemark turn. The step turn is used for maintaining speed during descents or out of track on flats. Equipment Equipment comprises skis, poles, boots and bindings; these vary according to: Technique, classic vs skate Terrain, which may vary from groomed trails to wilderness Performance level, from recreational use to competition at the elite level Skis Skis used in cross-country are lighter and narrower than those used in alpine skiing. Ski bottoms are designed to provide a gliding
surface and, for classic skis, a traction zone under foot. The base of the gliding surface is a plastic material that is designed both to minimize friction and, in many cases, to accept waxes. Glide wax may be used on the tails and tips of classic skis and across the length of skate skis. Types Each type of ski is sized and designed differently. Length affects maneuverability; camber affects pressure on the snow beneath the feet of the skier; side-cut affects the ease of turning; width affects forward friction; overall area on the snow affects bearing capacity; and tip geometry affects the ability to penetrate new snow or to stay in a track. Each of the following ski types has a different combination of these attributes: Classic skis: Designed for skiing in tracks. For adult skiers (between 155 cm/50 kg and 185 cm/75 kg), recommended lengths are between 180 and 210 centimetres (approximately 115% of the skier's height). Traction comes from a "grip zone" underfoot that when bearing the skier's weight engages either a textured gripping surface or a grip wax. Accordingly, these skis are classified as "waxable" or "waxless". Recreational waxless skis generally require little attention and are adapted for casual use.
Waxable skis, if prepared correctly, provide better grip and glide. When the skier's weight is distributed on both skis, the ski's camber diminishes the pressure of the grip zone on the snow and promotes bearing on the remaining area of the ski—the "glide zone". A test for stiffness of camber is made with a piece of paper under the skier's foot, standing on skis on a flat, hard surface—the paper should be pinned throughout the grip zone of the ski on which all the skier's weight is placed, but slide freely when the skier's weight is bearing equally on both skis. Skate skis: Designed for skiing on groomed surfaces. Recommended lengths are between 170 and 200 centimetres (up to 110% of the skier's height) for adult skiers. The entire bottom of each skate ski is a glide zone—prepared for maximum glide. Traction comes from the skier pushing away from the edge of the previous ski onto the next ski. Back country skis: Designed for ski touring on natural snow conditions. Recommended lengths are between 150 and 195 centimeters for adult skiers, depending on height and weight of the user. Back country skis are typically heavier and wider than classic and skate skis; they often have metal edges for better grip on hard snow; and their greater sidecut helps to carve turns. The geometry of a back country ski depends on its purpose—skis suited for forested areas where loose powder can predominate may be shorter and wider than those selected for open, exposed areas where compacted snow may prevail. Sidecut on Telemark skis promotes turning in forest and rugged terrain. Width and short length aid turning in loose and deep snow. Longer, narrower and more rigid skis with sharp edges are suited for snow that has been compacted by wind or freeze-thaw. Touring ski design may represent a general-purpose compromise among these different ski conditions, plus being acceptable for use in groomed tracks. Traction may come from a textured or waxed grip zone, as with classic skis, or from ski skins, which are applied to the ski bottom for long, steep ascents and have hairs or mechanical texture that prevents sliding backwards. Gliding surface Glide waxes enhance the speed of the gliding surface, and are applied by ironing them onto the ski and then polishing the ski bottom. Three classes of glide wax are
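To make the sizing rules of thumb above concrete (classic skis roughly 115% of the skier's height within the 180–210 cm range, skate skis up to about 110% within 170–200 cm), here is a minimal Python sketch that turns a height into a suggested length. The percentages and length ranges are taken from the text; the function name and the clamping behaviour are illustrative assumptions, and real sizing also depends on the skier's weight and on manufacturer charts.

def suggested_ski_length(height_cm, style="classic"):
    # Rules of thumb from the text: classic ~115% of height (180-210 cm),
    # skate up to ~110% of height (170-200 cm). Illustrative only; the
    # skier's weight and manufacturer sizing charts also matter.
    if style == "classic":
        factor, low, high = 1.15, 180, 210
    elif style == "skate":
        factor, low, high = 1.10, 170, 200
    else:
        raise ValueError("style must be 'classic' or 'skate'")
    return max(low, min(high, round(height_cm * factor)))

print(suggested_ski_length(180, "classic"))  # 207
print(suggested_ski_length(180, "skate"))    # 198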
three. Notable events On 26 April 1949, broke in two as she was being towed into Rio de Janeiro harbour. Much of her cargo of oranges was washed up upon the beach. On December 31, 1994, the New Year's Eve celebrations featured a Rod Stewart concert with an attendance of 3.5 million, making it the largest concert crowd ever. More recently, the beach has been a site for huge free concerts unrelated to the year-end festivities. On March 21, 2005, Lenny Kravitz performed there in front of 300,000 people, on a Monday night. On February 18, 2006, a Saturday, The Rolling Stones surpassed that mark by far, attracting over 1.5 million people to the beach. On July 7, 2007, the beach hosted the Brazilian leg of the Live Earth concerts, which attracted 400,000 people. As the headliner, Lenny Kravitz got to play the venue a second time, with Jorge Benjor, Macy Gray, O Rappa and Pharrell as the main opening acts. On October 2, 2009, 100,000 people filled the beach for a huge beach party as the IOC announced Rio would be hosting the 2016 Olympics. 11 of the 15 FIFA Beach Soccer World Cups have taken place here. On July 28, 2013, the beach hosted the final event of the World Youth Day 2013. About 3 million people including 3 presidents joined Pope Francis when he celebrated the holy mass. From May till July, 2014 the United Buddy Bears exhibit was held on the Copacabana promenade and attracted more than 1,000,000 people. The presentation consisted of more than 140 bear sculptures, each two metres high and designed by a different artist. In August 2016, Copacabana Beach was the site of beach volleyball in the Olympic Games. New Year's Eve in Copacabana The fireworks display in Rio de Janeiro to celebrate New Year's Eve is one of the largest in the world, lasting 15 to 20 minutes. It is estimated that 2,000,000 people go to Copacabana Beach to see the spectacle. The festival also includes a concert that extends throughout the night. The celebration has become one of the biggest tourist attractions of Rio de Janeiro, attracting visitors from all over Brazil as well as from different parts of the world, and the city hotels generally stay fully booked. The celebration is broadcast live on major Brazilian networks including TV Globo. History New Year's Eve has been celebrated on Copacabana beach since the 1950s when cults of African origin such as Candomblé and Umbanda gathered in small groups dressed in white for ritual celebrations. The first fireworks display occurred in 1976, sponsored by a hotel on the waterfront and this has been repeated ever since. In the 1990s the city saw it as a great opportunity to promote the city and organized and expanded the event. An assessment made during the New Year's Eve 1992 highlighted the risks associated with increasing crowd numbers on Copacabana beach after the fireworks display. Since the 1993-94 event concerts have been held on the beach to retain the public. The result was a success with egress spaced out over a period of 2 hours without the previous turmoil, although critics claimed that it denied the spirit of the New Year's tradition of a religious festival with fireworks by the sea. The following year Rod Stewart beat attendance records. Finally, the Tribute to Tom Jobim - with Gal Costa, Gilberto Gil, Caetano Veloso, Chico Buarque, and Paulinho da Viola - consolidated the shows at the Copacabana Réveillon. There was a need to transform the fireworks display in a show of the same quality. The fireworks display was created by entrepreneurs
third-place vote. Prior to 1970, writers only voted for the best pitcher and used a formula of one point per vote. History The Cy Young Award was first introduced in 1956 by Commissioner of Baseball Ford C. Frick in honor of Hall of Fame pitcher Cy Young, who died in 1955. Originally given to the single best pitcher in the major leagues, the award changed its format over time. From 1956 to 1966, the award was given to one pitcher in Major League Baseball. After Frick retired in 1967, William Eckert became the new Commissioner of Baseball. Due to fan requests, Eckert announced that the Cy Young Award would be given out both in the American League and the National League. From 1956 to 1958, a pitcher was not allowed to win the award on more than one occasion; this rule was eliminated in 1959. After a tie in the 1969 voting for the Cy Young Award, the process was changed, in which each writer was to vote for three pitchers: the first-place vote received five points, the second-place vote received three points, and the third-place vote received one point. The first recipient of the Cy Young Award was Don Newcombe of the Dodgers. In 1957, Warren Spahn became the first left-handed pitcher to win the award. In 1963, Sandy Koufax became the first pitcher to win the award in a unanimous vote; two years later he became the first multiple winner. In 1978, Gaylord Perry (age 40) became the oldest pitcher to receive the award, a record that stood until broken in 2004 by Roger Clemens (age 42). The youngest recipient was Dwight Gooden (age 20 in 1985). In 2012, R. A. Dickey became the first knuckleball pitcher to win the award. In 1974, Mike Marshall became the first relief pitcher to win the award. In 1992, Dennis Eckersley was the first modern closer (first player to be used almost exclusively in ninth-inning situations) to win the award, and since then only one other relief pitcher has won the award, Éric Gagné in 2003 (also a closer). A total of nine relief pitchers have won the Cy Young Award across both leagues. Steve Carlton in 1982 became the first pitcher to win more than three Cy Young Awards, while Greg Maddux in 1994 became the first to win at least three in a row (and received a fourth straight the following year), a feat later repeated by Randy Johnson. Winners Major Leagues combined (1956–1966) American League (1967–present) National League (1967–present) Multiple winners Twenty-one (21) pitchers have won the award multiple times. Roger Clemens currently holds the record for the most awards won, with seven – his first and last wins separated by eighteen years. Greg Maddux (1992–1995) and Randy Johnson (1999–2002) share the record for the most consecutive awards won (4). Clemens, Johnson, Pedro Martínez, Gaylord Perry, Roy Halladay and Max Scherzer are the only pitchers to have won the award in both the American League and National League; Sandy Koufax is the only pitcher who won multiple awards during the period when only one award was presented for all of Major League Baseball. Roger Clemens was the youngest pitcher to win a second Cy Young Award, while Tim Lincecum is the youngest pitcher to do so in the National League and Clayton Kershaw is the youngest left-hander to do so. Clayton Kershaw is the youngest pitcher to win a third Cy Young Award. Clemens
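As a worked illustration of the balloting arithmetic described above (three pitchers per ballot, with five points for a first-place vote, three for second and one for third), the following Python sketch tallies points across a set of ballots. The ballots shown are made up for the example and are not actual BBWAA results.

from collections import Counter

# Point values for the three-pitcher ballot described above:
# 1st place = 5, 2nd = 3, 3rd = 1.
POINTS = {1: 5, 2: 3, 3: 1}

def tally_ballots(ballots):
    # Each ballot is an ordered list of three names (1st, 2nd, 3rd choice).
    totals = Counter()
    for ballot in ballots:
        for place, pitcher in enumerate(ballot, start=1):
            totals[pitcher] += POINTS[place]
    return totals.most_common()  # highest point total first

# Hypothetical ballots from three writers
ballots = [
    ["Pitcher A", "Pitcher B", "Pitcher C"],
    ["Pitcher A", "Pitcher C", "Pitcher B"],
    ["Pitcher B", "Pitcher A", "Pitcher C"],
]
print(tally_ballots(ballots))  # [('Pitcher A', 13), ('Pitcher B', 9), ('Pitcher C', 5)]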
On 31 March 1492 Ferdinand II of Aragon and Isabella I of Castile, the rulers of Spain who financed Christopher Columbus' voyage to the New World just a few months later in 1492, declared that all Jews in their territories should either convert to Christianity or leave the country. While some converted, many others left for Portugal, France, Italy (including the Papal States), Netherlands, Poland, the Ottoman Empire, and North Africa. Many of those who had fled to Portugal were later expelled by King Manuel in 1497 or left to avoid forced conversion and persecution. Renaissance to the 17th century Cum Nimis Absurdum On 14 July 1555, Pope Paul IV issued the papal bull Cum nimis absurdum, which revoked all the rights of the Jewish community and placed religious and economic restrictions on Jews in the Papal States, renewed anti-Jewish legislation and subjected Jews to various degradations and restrictions on their personal freedom. The bull established the Roman Ghetto and required Jews of Rome, which had existed as a community since before Christian times and which numbered about 2,000 at the time, to live in it. The Ghetto was a walled quarter with three gates that were locked at night. Jews were also restricted to one synagogue per city. Paul IV's successor, Pope Pius IV, enforced the creation of other ghettos in most Italian towns, and his successor, Pope Pius V, recommended them to other bordering states. Protestant Reformation Martin Luther at first made overtures towards the Jews, believing that the "evils" of Catholicism had prevented their conversion to Christianity. When his call to convert to his version of Christianity was unsuccessful, he became hostile to them. In his book On the Jews and Their Lies, Luther excoriates them as "venomous beasts, vipers, disgusting scum, cankers, devils incarnate." He provided detailed recommendations for a pogrom against them, calling for their permanent oppression and expulsion, writing "Their private houses must be destroyed and devastated, they could be lodged in stables. Let the magistrates burn their synagogues and let whatever escapes be covered with sand and mud. Let them be forced to work, and if this avails nothing, we will be compelled to expel them like dogs in order not to expose ourselves to incurring divine wrath and eternal damnation from the Jews and their lies." At one point he wrote: "...we are at fault in not slaying them...", a passage that "may be termed the first work of modern antisemitism, and a giant step forward on the road to the Holocaust." Luther's harsh comments about the Jews are seen by many as a continuation of medieval Christian antisemitism. In his final sermon shortly before his death, however, Luther preached: "We want to treat them with Christian love and to pray for them, so that they might become converted and would receive the Lord." 18th century In accordance with the anti-Jewish precepts of the Russian Orthodox Church, Russia's discriminatory policies towards Jews intensified when the partition of Poland in the 18th century resulted, for the first time in Russian history, in the possession of land with a large Jewish population. This land was designated as the Pale of Settlement from which Jews were forbidden to migrate into the interior of Russia. In 1772 Catherine II, the empress of Russia, forced the Jews living in the Pale of Settlement to stay in their shtetls and forbade them from returning to the towns that they occupied before the partition of Poland.
19th century Throughout the 19th century and into the 20th, the Roman Catholic Church still incorporated strong antisemitic elements, despite increasing attempts to separate anti-Judaism (opposition to the Jewish religion on religious grounds) and racial antisemitism. Brown University historian David Kertzer, working from the Vatican archive, has argued in his book The Popes Against the Jews that in the 19th and early 20th centuries the Roman Catholic Church adhered to a distinction between "good antisemitism" and "bad antisemitism". The "bad" kind promoted hatred of Jews because of their descent. This was considered un-Christian because the Christian message was intended for all of humanity regardless of ethnicity; anyone could become a Christian. The "good" kind criticized alleged Jewish conspiracies to control newspapers, banks, and other institutions, to care only about accumulation of wealth, etc. Many Catholic bishops wrote articles criticizing Jews on such grounds, and, when they were accused of promoting hatred of Jews, they would remind people that they condemned the "bad" kind of antisemitism. Kertzer's work is not without critics. Scholar of Jewish-Christian relations Rabbi David G. Dalin, for example, criticized Kertzer in the Weekly Standard for using evidence selectively. Opposition to the French Revolution The counter-revolutionary Catholic royalist Louis de Bonald stands out among the earliest figures to explicitly call for the reversal of Jewish emancipation in the wake of the French Revolution. Bonald's attacks on the Jews are likely to have influenced Napoleon's decision to limit the civil rights of Alsatian Jews. Bonald's article Sur les juifs (1806) was one of the most venomous screeds of its era and furnished a paradigm which combined anti-liberalism, a defense of a rural society, traditional Christian antisemitism, and the identification of Jews with bankers and finance capital, which would in turn influence many subsequent right-wing reactionaries such as Roger Gougenot des Mousseaux, Charles Maurras, and Édouard Drumont, nationalists such as Maurice Barrès and Paolo Orano, and antisemitic socialists such as Alphonse Toussenel. Bonald furthermore declared that the Jews were an "alien" people, a "state within a state", and should be forced to wear a distinctive mark to more easily identify and discriminate against them. In the 1840s, the popular counter-revolutionary Catholic journalist Louis Veuillot propagated Bonald's arguments against the Jewish "financial aristocracy" along with vicious attacks against the Talmud and the Jews as a "deicidal people" driven by hatred to "enslave" Christians. Gougenot des Mousseaux's Le Juif, le judaïsme et la judaïsation des peuples chrétiens (1869) has been called a "Bible of modern antisemitism" and was translated into German by Nazi ideologue Alfred Rosenberg. Between 1882 and 1886 alone, French priests published twenty antisemitic books blaming France's ills on the Jews and urging the government to consign them back to the ghettos, expel them, or hang them from the gallows. In Italy, the Jesuit priest Antonio Bresciani's highly popular 1850 novel L'Ebreo di Verona (The Jew of Verona) shaped religious anti-Semitism for decades, as did his work for La Civiltà Cattolica, which he helped launch. Pope Pius VII (1800–1823) had the walls of the Jewish ghetto in Rome rebuilt after the Jews were emancipated by Napoleon, and Jews were restricted to the ghetto through the end of the Papal States in 1870.
Official Catholic organizations, such as the Jesuits, banned candidates "who are descended from the Jewish race unless it is clear that their father, grandfather, and great-grandfather have belonged to the Catholic Church" until 1946. 20th century In Russia, under the Tsarist regime, antisemitism intensified in the early years of the 20th century and was given official favour when the secret police forged the notorious Protocols of the Elders of Zion, a document purported to be a transcription of a plan by Jewish elders to achieve global domination. Violence against the Jews in the Kishinev pogrom in 1903 was continued after the 1905 revolution by the activities of the Black Hundreds. The Beilis Trial of 1913 showed that it was possible to revive the blood libel accusation in Russia. Catholic writers such as Ernest Jouin, who published the Protocols in French, seamlessly blended racial and religious anti-Semitism, as in his statement that "from the triple viewpoint of race, of nationality, and of religion, the Jew has become the enemy of humanity." Pope Pius XI praised Jouin for "combating our mortal [Jewish] enemy" and appointed him to high papal office as a protonotary apostolic. WWI to the eve of WWII In 1916, in the midst of the First World War, American Jews petitioned Pope Benedict XV on behalf of the Polish Jews. Nazi antisemitism During a meeting with the Roman Catholic Bishop of Osnabrück on April 26, 1933, Hitler declared: “I have been attacked because of my handling of the Jewish question. The Catholic Church considered the Jews pestilent for fifteen hundred years, put them in ghettos, etc., because it recognized the Jews for what they were. In the epoch of liberalism the danger was no longer recognized. I am moving back toward the time in which a fifteen-hundred-year-long tradition was implemented. I do not set race over religion, but I recognize the representatives of this race as pestilent for the state and for the Church, and perhaps I am thereby doing Christianity a great service by pushing them out of schools and public functions.” The transcript of the discussion does not contain any response by Bishop Berning. Martin Rhonheimer does not consider this unusual because in his opinion, for a Catholic Bishop in 1933 there was nothing particularly objectionable "in this historically correct reminder". The Nazis used Martin Luther's book, On the Jews and Their Lies (1543), to justify their claim that their ideology was morally righteous. Luther even went so far as to advocate the murder of Jews who refused to convert to Christianity by writing that "we are at fault in not slaying them." Archbishop Robert Runcie asserted that: "Without centuries of Christian antisemitism, Hitler's passionate hatred would never have been so fervently echoed... because for centuries Christians have held Jews collectively responsible for the death of Jesus. On Good Friday Jews have, in times past, cowered behind locked doors with fear of a Christian mob seeking 'revenge' for deicide. Without the poisoning of Christian minds through the centuries, the Holocaust is unthinkable." The dissident Catholic priest Hans Küng has written that "Nazi anti-Judaism was the work of godless, anti-Christian criminals. But it would not have been possible without the almost two thousand years' pre-history of 'Christian' anti-Judaism..."
The consensus among historians is that Nazism as a whole was either unrelated or actively opposed to Christianity, and Hitler was strongly critical of it, although Germany remained mostly Christian during the Nazi era. The document Dabru Emet was issued by over 220 rabbis and intellectuals from all branches of Judaism in 2000 as a statement about Jewish-Christian relations. This document states,"Nazism was not a Christian phenomenon. Without the long history of Christian anti-Judaism and Christian violence against Jews, Nazi ideology could not have taken hold nor could it have been carried out. Too many Christians participated in, or were sympathetic to, Nazi atrocities against Jews. Other Christians did not protest sufficiently against these atrocities. But Nazism itself was not an inevitable outcome of Christianity." According to American historian Lucy Dawidowicz, antisemitism has a long history within Christianity. The line of "antisemitic descent" from Luther, the author of On the Jews and Their Lies, to Hitler is "easy to draw." In her The War Against the Jews, 1933-1945, she contends that Luther and Hitler were obsessed by the "demonologized universe" inhabited by Jews. Dawidowicz writes that the similarities between Luther's anti-Jewish writings and modern antisemitism are no coincidence, because they derived from a common history of Judenhass, which can be traced to Haman's advice to Ahasuerus. Although modern German antisemitism also has its roots in German nationalism and the liberal revolution of 1848, Christian antisemitism she writes is a foundation that was laid by the Roman Catholic Church and "upon which Luther built." Collaborating Christians German Christians (movement) Gleichschaltung Hanns Kerrl, Minister for Ecclesiastical Affairs Positive Christianity (the approved Nazi version of Christianity) Protestant Reich Church Opposition to the Holocaust The Confessing Church was, in 1934, the first Christian opposition group. The Catholic Church officially condemned the Nazi theory of racism in Germany in 1937 with the encyclical "Mit brennender Sorge", signed by Pope Pius XI, and Cardinal Michael von Faulhaber led the Catholic opposition, preaching against racism. Many individual Christian clergy and laypeople of all denominations had to pay for their opposition with their lives, including: the Catholic priest, Maximilian Kolbe. the Lutheran pastor Dietrich Bonhoeffer the Catholic parson of the Berlin Cathedral, Bernhard Lichtenberg. the mostly Catholic members of the Munich-based resistance group the White Rose which was led by Hans and Sophie Scholl. By the 1940s, few Christians were willing to publicly oppose Nazi policy, but many Christians secretly helped save the lives of Jews. There are many sections of Israel's Holocaust Remembrance Museum, Yad Vashem, which are dedicated to honoring these "Righteous Among the Nations". Pope Pius XII Before he became Pope, Cardinal Pacelli addressed the International Eucharistic Congress in Budapest on 25–30 May 1938 during which he made reference to the Jews "whose lips curse [Christ] and whose hearts reject him even today"; at this time antisemitic laws were in the process of being formulated in Hungary. 
The 1937 encyclical Mit brennender Sorge was issued by Pope Pius XI, but drafted by the future Pope Pius XII and read from the pulpits of all German Catholic churches, it condemned Nazi ideology and has been characterized by scholars as the "first great official public document to dare to confront and criticize Nazism" and "one of the greatest such condemnations ever issued by the Vatican." In the summer of 1942, Pius explained to his college of Cardinals the reasons for the great gulf that existed between Jews and Christians at the theological level: "Jerusalem has responded to His call and to His grace with the same rigid blindness and stubborn ingratitude that has led it along the path of guilt to the murder of God." Historian Guido Knopp describes these comments of Pius as being "incomprehensible" at a time when "Jerusalem was being murdered by the million". This traditional adversarial relationship with Judaism would be reversed in Nostra aetate, which was issued during the Second Vatican Council. Prominent members of the Jewish community have contradicted the criticisms of Pius and spoke highly of his efforts to protect Jews. The Israeli historian Pinchas Lapide interviewed war survivors and concluded that Pius XII "was instrumental in saving at least 700,000, but probably as many as 860,000 Jews from certain death at Nazi hands". Some historians dispute this estimate. "White Power" movement The Christian Identity movement, the Ku Klux Klan and other White supremacist groups have expressed antisemitic views. They claim that their antisemitism is based on purported Jewish control of the media, control of international banks, involvement in radical left-wing politics, and the Jews' promotion of multiculturalism, anti-Christian groups, liberalism and perverse organizations. They rebuke charges of racism by claiming that Jews who share their views maintain membership in their organizations. A racial belief which is common among these groups, but not universal among them, is an alternative history doctrine concerning the descendants of the Lost Tribes of Israel. In some of its forms, this doctrine absolutely denies the view that modern Jews have any ethnic connection to the Israel of the Bible. Instead, according to extreme forms of this doctrine, the true Israelites and the true humans are the members of the Adamic (white) race. These groups are often rejected and they are not even considered Christian groups by mainstream Christian denominations and the vast majority of Christians around the world. Post World War II antisemitism Antisemitism remains a substantial problem in Europe and to a greater or lesser degree, it also exists in many other nations, including Eastern Europe and the former Soviet Union, and tensions between some Muslim immigrants and Jews have increased across Europe. The US State Department reports that antisemitism has increased dramatically in Europe and Eurasia since 2000. While it has been on the decline since the 1940s, a measurable amount of antisemitism still exists in the United States, although acts of violence are rare. For example, the influential Evangelical preacher Billy Graham and the then-president Richard Nixon were caught on tape in the early 1970s while they were discussing matters like how to address the Jews' control of the American media. 
This belief in Jewish conspiracies and domination of the media was similar to those of Graham's former mentors: William Bell Riley chose Graham to succeed him as the second president of Northwestern Bible and Missionary Training School and evangelist Mordecai Ham led the meetings where Graham first believed in Christ. Both held strongly antisemitic views. The 2001 survey by the Anti-Defamation League reported 1432 acts of antisemitism in the United States that year. The figure included 877 acts of harassment, including verbal intimidation, threats and physical assaults. A minority of American churches engage in anti-Israel activism, including support for the controversial BDS (Boycott, Divestment and Sanctions) movement. While not directly indicative of anti-semitism, this activism often conflates the Israeli government's treatment of Palestinians with that of Jesus, thereby promoting the anti-semitic doctrine of Jewish guilt. Many Christian Zionists are also accused of anti-semitism, such as John
published a detailed study of the description of Jews in the New Testament, and the historical effects that such passages have had in the Christian community throughout history. Similar studies of such verses have been made by both Christian and Jewish scholars, including Professors Clark Williamsom (Christian Theological Seminary), Hyam Maccoby (The Leo Baeck Institute), Norman A. Beck (Texas Lutheran College), and Michael Berenbaum (Georgetown University). Most rabbis feel that these verses are antisemitic, and many Christian scholars, in America and Europe, have reached the same conclusion. Another example is John Dominic Crossan's 1995 book, titled Who Killed Jesus? Exposing the Roots of Anti-Semitism in the Gospel Story of the Death of Jesus. Some biblical scholars have also been accused of holding antisemitic beliefs. Bruce J. Malina, a founding member of The Context Group, has come under criticism for going as far as to deny the Semitic ancestry of modern Israelis. He then ties this back to his work on first century cultural anthropology. Church Fathers After Paul's death, Christianity emerged as a separate religion, and Pauline Christianity emerged as the dominant form of Christianity, especially after Paul, James and the other apostles agreed on a compromise set of requirements. Some Christians continued to adhere to aspects of Jewish law, but they were few in number and often considered heretics by the Church. One example is the Ebionites, who seem to have denied the virgin birth of Jesus, the physical Resurrection of Jesus, and most of the books that were later canonized as the New Testament. For example, the Ethiopian Orthodox still continue Old Testament practices such as the Sabbath. As late as the 4th century Church Father John Chrysostom complained that some Christians were still attending Jewish synagogues. The Church Fathers identified Jews and Judaism with heresy and declared the people of Israel to be extra Deum (lat. "outside of God"). Saint Peter of Antioch referred to Christians that refused to worship religious images as having "Jewish minds". In the early second century AD, the heretic Marcion of Sinope ( 85 – 160 AD) declared that the Jewish God was a different God, inferior to the Christian one, and rejected the Jewish scriptures as the product of a lesser deity. Marcion's teachings, which were extremely popular, rejected Judaism not only as an incomplete revelation, but as a false one as well, but, at the same time, allowed less blame to be placed on the Jews personally for having not recognized Jesus, since, in Marcion's worldview, Jesus was not sent by the lesser Jewish God, but by the supreme Christian God, whom the Jews had no reason to recognize. In combating Marcion, orthodox apologists conceded that Judaism was an incomplete and inferior religion to Christianity, while also defending the Jewish scriptures as canonical. The Church Father Tertullian ( 155 – 240 AD) had a particularly intense personal dislike towards the Jews and argued that the Gentiles had been chosen by God to replace the Jews, because they were worthier and more honorable. Origen of Alexandria ( 184 – 253) was more knowledgeable about Judaism than any of the other Church Fathers, having studied Hebrew, met Rabbi Hillel the Younger, consulted and debated with Jewish scholars, and been influenced by the allegorical interpretations of Philo of Alexandria. Origen defended the canonicity of the Old Testament and defended Jews of the past as having been chosen by God for their merits. 
Nonetheless, he condemned contemporary Jews for not understanding their own Law, insisted that Christians were the "true Israel", and blamed the Jews for the death of Christ. He did, however, maintain that Jews would eventually attain salvation in the final apocatastasis. Hippolytus of Rome ( 170 – 235 AD) wrote that the Jews had "been darkened in the eyes of your soul with a darkness utter and everlasting." Patristic bishops of the patristic era such as Augustine argued that the Jews should be left alive and suffering as a perpetual reminder of their murder of Christ. Like his anti-Jewish teacher, Ambrose of Milan, he defined Jews as a special subset of those damned to hell. As "Witness People", he sanctified collective punishment for the Jewish deicide and enslavement of Jews to Catholics: "Not by bodily death, shall the ungodly race of carnal Jews perish ... 'Scatter them abroad, take away their strength. And bring them down O Lord. Augustine claimed to "love" the Jews but as a means to convert them to Christianity. Sometimes he identified all Jews with the evil Judas and developed the doctrine (together with Cyprian) that there was "no salvation outside the Church". Other Church Fathers, such as John Chrysostom, went further in their condemnation. The Catholic editor Paul Harkins wrote that St. John Chrysostom's anti-Jewish theology "is no longer tenable (..) For these objectively unchristian acts he cannot be excused, even if he is the product of his times." John Chrysostom held, as most Church Fathers did, that the sins of all Jews were communal and endless, to him his Jewish neighbours were the collective representation of all alleged crimes of all preexisting Jews. All Church Fathers applied the passages of the New Testament concerning the alleged advocation of the crucifixion of Christ to all Jews of his day, the Jews were the ultimate evil. However, John Chrysostom went so far to say that because Jews rejected the Christian God in human flesh, Christ, they therefore deserved to be killed: "grew fit for slaughter." In citing the New Testament, he claimed that Jesus was speaking about Jews when he said, "as for these enemies of mine who did not want me to reign over them, bring them here and slay them before me." St. Jerome identified Jews with Judas Iscariot and the immoral use of money ("Judas is cursed, that in Judas the Jews may be accursed... their prayers turn into sins"). Jerome's homiletical assaults, that may have served as the basis for the anti-Jewish Good Friday liturgy, contrasts Jews with the evil, and that "the ceremonies of the Jews are harmful and deadly to Christians", whoever keeps them was doomed to the devil: "My enemies are the Jews; they have conspired in hatred against Me, crucified Me, heaped evils of all kinds upon Me, blasphemed Me." Ephraim the Syrian wrote polemics against Jews in the 4th century, including the repeated accusation that Satan dwells among them as a partner. The writings were directed at Christians who were being proselytized by Jews. Ephraim feared that they were slipping back into Judaism; thus, he portrayed the Jews as enemies of Christianity, like Satan, to emphasize the contrast between the two religions, namely, that Christianity was Godly and true and Judaism was Satanic and false. Like John Chrysostom, his objective was to dissuade Christians from reverting to Judaism by emphasizing what he saw as the wickedness of the Jews and their religion. 
Middle Ages Bernard of Clairvaux said "For us the Jews are Scripture's living words, because they remind us of what Our Lord suffered. They are not to be persecuted, killed, or even put to flight." Jews were subjected to a wide range of legal disabilities and restrictions in Medieval Europe. Jews were excluded from many trades, the occupations varying with place and time, and determined by the influence of various non-Jewish competing interests. Often Jews were barred from all occupations but money-lending and peddling, with even these at times forbidden. Jews' association with money-lending would carry on throughout history in the stereotype of Jews being greedy and perpetuating capitalism. In the later medieval period, the number of Jews who were permitted to reside in certain places was limited; they were concentrated in ghettos, and they were also not allowed to own land; they were forced to pay discriminatory taxes whenever they entered cities or districts other than their own. The Oath More Judaico, the form of oath required from Jewish witnesses, in some places developed bizarre or humiliating forms, e.g. in the Swabian law of the 13th century, the Jew would be required to stand on the hide of a sow or a bloody lamb. The Fourth Lateran Council, which was held in 1215, was the first council to proclaim that Jews were required to wear something which distinguished them as Jews (the same requirement was also imposed on Muslims). On many occasions, Jews were accused of blood libels, the supposed drinking of the blood of Christian children in mockery of the Christian Eucharist. Sicut Judaeis Sicut Judaeis (the "Constitution for the Jews") was the official position of the papacy regarding Jews throughout the Middle Ages and later. The first bull was issued in about 1120 by Calixtus II, intended to protect Jews who suffered during the First Crusade, and was reaffirmed by many popes, even until the 15th century, although they were not always strictly upheld. The bull forbade, among other things, Christians from coercing Jews to convert, or to harm them, or to take their property, or to disturb the celebration of their festivals, or to interfere with their cemeteries, on pain of excommunication. Popular antisemitism Antisemitism in popular European Christian culture escalated beginning in the 13th century. Blood libels and host desecration drew popular attention and led to many cases of persecution against Jews. Many believed Jews poisoned wells to cause plagues. In the case of blood libel it was widely believed that the Jews would kill a child before Easter and needed Christian blood to bake matzo. Throughout history, if a Christian child was murdered, accusations of blood libel would arise no matter how small the Jewish population was. The Church often added to the fire by portraying the dead child as a martyr who had been tortured and who, like Jesus, was believed to have miraculous powers. Sometimes the children were even made into saints. Antisemitic imagery such as Judensau and Ecclesia et Synagoga recurred in Christian art and architecture. Anti-Jewish Easter holiday customs such as the Burning of Judas continue to the present time. In Iceland, one of the hymns repeated in the days leading up to Easter includes the lines, The righteous Law of Moses The Jews here misapplied, Which their deceit exposes, Their hatred and their pride. The judgement is the Lord's. When by falsification The foe makes accusation, It's His to make awards.
Persecutions and expulsions During the Middle Ages in Europe, persecutions and formal expulsions of Jews were liable to occur at intervals, although this was also the case for other minority communities, whether religious or ethnic. There were particular outbursts of riotous persecution during the Rhineland massacres of 1096 in Germany accompanying the lead-up to the First Crusade, many involving the crusaders as they travelled to the East. There were many local expulsions from cities by local rulers and city councils. In Germany the Holy Roman Emperor generally tried to restrain persecution, if only for economic reasons, but he was often unable to exert much influence. In the Edict of Expulsion, King Edward I expelled all the Jews from England in 1290 (only after ransoming some 3,000 of the most wealthy of them), on the accusation of usury and undermining loyalty to the dynasty. In 1306 there was a wave of persecution in France, and there were widespread persecutions of Jews during the Black Death, as many Christians blamed the Jews for causing or spreading the plague. As late as 1519, the Imperial city of Regensburg took advantage of the recent death of Emperor Maximilian I to expel its 500 Jews. Expulsion of Jews from Spain The largest expulsion of Jews followed the Reconquista, or reunification of Spain, and preceded the expulsion of the Muslims who would not convert, in spite of the protection of their religious rights promised by the Treaty of Granada (1491). On 31 March 1492 Ferdinand II of Aragon and Isabella I of Castile, the rulers of Spain who financed Christopher Columbus' voyage to the New World just a few months later in 1492, declared that all Jews in their territories should
On 19 June 2012, the USAF ordered its 224th and final C-17 to replace one that crashed in Alaska in July 2010. In September 2013, Boeing announced that C-17 production was starting to close down. In October 2014, the main wing spar of the 279th and last aircraft was completed; this C-17 was delivered in 2015, after which Boeing closed the Long Beach plant. Production of spare components was to continue until at least 2017. The C-17 is projected to be in service for several decades. In February 2014, Boeing was engaged in sales talks with "five or six" countries for the remaining 15 C-17s; thus Boeing decided to build 10 aircraft without confirmed buyers in anticipation of future purchases. In May 2015, The Wall Street Journal reported that Boeing expected to book a charge of under $100 million and cut 3,000 positions associated with the C-17 program, and it also suggested that Airbus' lower-cost A400M Atlas had taken international sales away from the C-17. Design The C-17 Globemaster III is a strategic transport aircraft, able to airlift cargo close to a battle area. The size and weight of U.S. mechanized firepower and equipment have grown in recent decades from increased air mobility requirements, particularly for large or heavy non-palletized outsize cargo. It has a length of and a wingspan of , and uses about 8% composite materials, mostly in secondary structure and control surfaces. The C-17 is powered by four Pratt & Whitney F117-PW-100 turbofan engines, which are based on the commercial Pratt & Whitney PW2040 used on the Boeing 757. Each engine is rated at of thrust. The engine's thrust reversers direct engine exhaust air upwards and forward, reducing the chances of foreign object damage by ingestion of runway debris, and providing enough reverse thrust to back up the aircraft while taxiing. The thrust reversers can also be used in flight at idle-reverse for added drag in maximum-rate descents. In vortex surfing tests performed by two C-17s, up to 10% fuel savings were reported. For cargo operations the C-17 requires a crew of three: pilot, copilot, and loadmaster. The cargo compartment is long by wide by high. The cargo floor has rollers for palletized cargo but it can be flipped to provide a flat floor suitable for vehicles and other rolling stock. Cargo is loaded through a large aft ramp that accommodates rolling stock, such as a 69-ton (63-metric-ton) M1 Abrams main battle tank, other armored vehicles, trucks, and trailers, along with palletized cargo. Maximum payload of the C-17 is , and its maximum takeoff weight is . With a payload of and an initial cruise altitude of , the C-17 has an unrefueled range of about on the first 71 aircraft, and on all subsequent extended-range models that include a sealed center wing bay as a fuel tank. Boeing informally calls these aircraft the C-17 ER. The C-17's cruise speed is about (Mach 0.74). It is designed to airdrop 102 paratroopers and their equipment. According to Boeing, the maximum unloaded range is 6,230 nautical miles (10,026 km). The C-17 is designed to operate from runways as short as and as narrow as . The C-17 can also operate from unpaved, unimproved runways (although with a greater chance of damaging the aircraft). The thrust reversers can be used to move the aircraft backwards and reverse direction on narrow taxiways using a three- (or more) point turn.
The plane is designed for 20 man-hours of maintenance per flight hour and a 74% mission availability rate. Operational history United States Air Force The first production C-17 was delivered to Charleston Air Force Base, South Carolina, on 14 July 1993. The first C-17 unit, the 17th Airlift Squadron, became operationally ready on 17 January 1995. It has broken 22 records for oversized payloads. The C-17 was awarded U.S. aviation's most prestigious award, the Collier Trophy, in 1994. A Congressional report on operations in Kosovo and Operation Allied Force noted that "One of the great success stories...was the performance of the Air Force's C-17A". It flew half of the strategic airlift missions in the operation; the type could use small airfields, easing operations, and rapid turnaround times also led to efficient utilization. In 2006, eight C-17s were delivered to March Joint Air Reserve Base, California, controlled by the Air Force Reserve Command (AFRC) and assigned to the 452d Air Mobility Wing; C-17s were subsequently assigned to AMC's 436th Airlift Wing and its AFRC "associate" unit, the 512th Airlift Wing, at Dover Air Force Base, Delaware, supplementing the Lockheed C-5 Galaxy. The Mississippi Air National Guard's 172d Airlift Group received its first of eight C-17s in 2006. In 2011, the New York Air National Guard's 105th Airlift Wing at Stewart Air National Guard Base transitioned from the C-5 to the C-17. C-17s delivered military supplies during Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq, as well as humanitarian aid in the aftermath of the 2010 Haiti earthquake and the 2011 Sindh floods, delivering thousands of food rations and tons of medical and emergency supplies. On 26 March 2003, 15 USAF C-17s participated in the biggest combat airdrop since the United States invasion of Panama in December 1989: the night-time airdrop of 1,000 paratroopers from the 173rd Airborne Brigade occurred over Bashur, Iraq. These airdrops were followed by C-17s ferrying M1 Abrams, M2 Bradleys, M113s and artillery. USAF C-17s have also assisted allies in their airlift requirements, such as moving Canadian vehicles to Afghanistan in 2003 and supporting Australian forces during the Australian-led military deployment to East Timor in 2006. In 2006, USAF C-17s flew 15 Canadian Leopard C2 tanks from Kyrgyzstan into Kandahar in support of NATO's Afghanistan mission. In 2013, five USAF C-17s supported French operations in Mali, operating with other nations' C-17s (the RAF, NATO and the RCAF deployed a single C-17 each). Since 1999, C-17s have been flying annually to Antarctica on Operation Deep Freeze in support of the US Antarctic Research Program, replacing the C-141s used in prior years. The initial flight was flown by the USAF 62nd Airlift Wing. The C-17s fly round trip between Christchurch Airport and McMurdo Station around October each year, taking 5 hours to fly each way. In 2006, the C-17 flew its first Antarctic airdrop mission, delivering 70,000 pounds of supplies. Airdrops continued during the four following years. A C-17 accompanies the President of the United States on domestic and foreign visits, consultations, and meetings. It is used to transport the Presidential Limousine, Marine One, and security detachments. On several occasions, a C-17 has been used to transport the President himself, temporarily gaining the Air Force One call sign while doing so.
Debate arose over follow-on C-17 orders; the USAF requested line shutdown while Congress called for further production. In FY2007, the USAF requested $1.6 billion in response to "excessive combat use" of the C-17 fleet. In 2008, USAF General Arthur Lichte, Commander of Air Mobility Command, indicated before a House of Representatives subcommittee on air and land forces a need to extend production to another 15 aircraft to increase the total to 205, and that C-17 production might continue to satisfy airlift requirements. The USAF finally decided to cap its C-17 fleet at 223 aircraft; the final delivery was on 12 September 2013. In 2015, as part of a missile-defense test at Wake Island, simulated medium-range ballistic missiles were launched from C-17s against THAAD missile defense systems and the USS John Paul Jones (DDG-53). In early 2020, palletized munitions, known as "Combat Expendable Platforms", were tested from C-17s and C-130Js with results the USAF considered positive. On 15 August 2021, USAF C-17 02-1109 from the 62nd Airlift Wing and 446th Airlift Wing at Joint Base Lewis-McChord departed Hamid Karzai International Airport in Kabul, Afghanistan, while crowds of people trying to escape the 2021 Taliban offensive ran alongside the aircraft. The C-17 lifted off with people holding on to the outside, and at least two died after falling from the aircraft. An unknown number of people may have been crushed and killed by the retracting landing gear; human remains were found in the landing-gear stowage. Also that day, C-17 01-0186 from the 816th Expeditionary Airlift Squadron at Al Udeid Air Base transported 823 Afghan citizens from Hamid Karzai International Airport on a single flight, setting a new record for the type; the previous record was over 670 people during a 2013 typhoon evacuation from Tacloban, Philippines. Royal Air Force Boeing marketed the C-17 to many European nations including Belgium, Germany, France, Italy, Spain and the United Kingdom. The Royal Air Force (RAF) has established an aim of having interoperability and some weapons and capabilities commonality with the USAF. The 1998 Strategic Defence Review identified a requirement for a strategic airlifter. The Short-Term Strategic Airlift competition commenced in September of that year, but the tender was canceled in August 1999, with some bids identified by ministers as too expensive, including the Boeing/BAe C-17 bid, and others unsuitable. The project continued, with the C-17 seen as the favorite. In the light of Airbus A400M delays, the UK Secretary of State for Defence, Geoff Hoon, announced in May 2000 that the RAF would lease four C-17s at an annual cost of £100 million from Boeing for an initial seven years with an optional two-year extension. The RAF had the option to buy or return the aircraft to Boeing. The UK committed to upgrading its C-17s in line with the USAF so that if they were returned, the USAF could adopt them. The lease agreement restricted the C-17's operational use, meaning that the RAF could not use them for para-drop, airdrop, rough-field, low-level operations and air-to-air refueling. The first C-17 was delivered to the RAF at Boeing's Long Beach facility on 17 May 2001 and flown to RAF Brize Norton by a crew from No. 99 Squadron. The RAF's fourth C-17 was delivered on 24 August 2001. The RAF aircraft were some of the first to take advantage of the new center wing fuel tank found in Block 13 aircraft.
In RAF service, the C-17 has not been given an official service name and designation (for example, the C-130J is referred to as Hercules C4 or C5), but is referred to simply as the C-17 or "C-17A Globemaster". Although it was to be a fallback for the A400M, the Ministry of Defence (MoD) announced on 21 July 2004 that it had elected to buy its four C-17s at the end of the lease, even though the A400M appeared to be closer to production. The C-17 gives the RAF strategic capabilities that it would not wish to lose, for example a maximum payload of compared to the A400M's . The C-17's capabilities allow the RAF to use it as an airborne hospital for medical evacuation missions. Another C-17 was ordered in August 2006, and delivered on 22 February 2008. The four leased C-17s were to be purchased later in 2008. Due to fears that the A400M might suffer further delays, the MoD announced in 2006 that it planned to acquire three more C-17s, for a total of eight, with delivery in 2009–2010. On 3 December 2007, the MoD announced a contract for a sixth C-17, which was received on 11 June 2008. On 18 December 2009, Boeing confirmed that the RAF had ordered a seventh C-17, which was delivered on 16 November 2010. The UK announced the purchase of its eighth C-17 in February 2012. The RAF showed interest in buying a ninth C-17 in November 2013. On 13 January 2013, the RAF deployed two C-17s from RAF Brize Norton to the French Évreux Air Base, transporting French armored vehicles to the Malian capital of Bamako during the French intervention in Mali. In June 2015, an RAF C-17 was used to medically evacuate four victims of the 2015 Sousse attacks from Tunisia. Royal Australian Air Force The Royal Australian Air Force (RAAF) began investigating an acquisition of strategic transport aircraft in 2005. In late 2005, the then Minister for Defence, Robert Hill, stated that such aircraft were being considered due to the limited availability of strategic airlift aircraft from partner nations and air freight companies. The C-17 was favored over the A400M as it was a "proven aircraft" and in production. One major RAAF requirement was the ability to airlift the Army's M1 Abrams tanks; another was immediate delivery. Though unstated, commonality with the USAF and the RAF was also considered advantageous. RAAF aircraft were ordered directly from the USAF production run and are identical to American C-17s even in paint scheme, the only difference being the national markings, allowing deliveries to commence within nine months of commitment to the program. On 2 March 2006, the Australian government announced the purchase of three aircraft and one option, with an entry into service date of 2006. In July 2006 a fixed-price contract was awarded to Boeing to deliver four C-17s for (). Australia also signed a US$80.7M contract to join the global 'virtual fleet' C-17 sustainment program, under which the RAAF's C-17s receive the same upgrades as the USAF's fleet. The RAAF took delivery of its first C-17 in a ceremony at Boeing's plant at Long Beach, California on 28 November 2006. Several days later the aircraft flew from Hickam Air Force Base, Hawaii to Defence Establishment Fairbairn, Canberra, arriving on 4 December 2006. The aircraft was formally accepted in a ceremony at Fairbairn shortly after arrival. The second aircraft was delivered to the RAAF on 11 May 2007 and the third was delivered on 18 December 2007. The fourth Australian C-17 was delivered on 19 January 2008. All the Australian C-17s are operated by No.
36 Squadron and are based at RAAF Base Amberley in Queensland. On 18 April 2011, Boeing announced that Australia had signed an agreement with the U.S. government to acquire a fifth C-17 due to an increased demand for humanitarian and disaster-relief missions. The aircraft was delivered to the RAAF on 14 September 2011. On 23 September 2011, Australian Minister for Defence Materiel Jason Clare announced that the government was seeking information from the U.S. about the price and delivery schedule for a sixth Globemaster. In November 2011, Australia requested a sixth C-17 through the U.S. Foreign Military Sales program; it was ordered in June 2012 and delivered on 1 November 2012. In August 2014, Defence Minister David Johnston announced the intention to purchase one or two additional C-17s. On 3 October 2014, Johnston announced the government's approval to buy two C-17s at a total cost of (). The United States Congress approved the sale under the Foreign Military Sales program. Prime Minister Tony Abbott confirmed in April 2015 that two additional aircraft were to be ordered; both were delivered by 4 November 2015, adding to the six C-17s the RAAF already had. Royal Canadian Air Force The Canadian Forces had a long-standing need for strategic airlift for military and humanitarian operations around the world. They had followed a pattern similar to the German Air Force in leasing Antonovs and Ilyushins for many requirements, including deploying the Disaster Assistance Response Team to tsunami-stricken Sri Lanka in 2005; the Canadian Forces had relied entirely on leased An-124 Ruslans for a Canadian Army deployment to Haiti in 2003. A combination of leased Ruslans, Ilyushins and USAF C-17s was also used to move heavy equipment to Afghanistan. In 2002, the Canadian Forces Future Strategic Airlifter Project began to study alternatives, including long-term leasing arrangements. On 5 July 2006, the Canadian government issued a notice of intent to negotiate with Boeing to procure four airlifters for the Canadian Forces Air Command (the Royal Canadian Air Force after August 2011). On 1 February 2007, Canada awarded a contract for four C-17s with delivery beginning in August 2007. Like Australia, Canada was granted airframes originally slated for the USAF to accelerate delivery. The official Canadian designation is CC-177 Globemaster III. On 23 July 2007, the first Canadian C-17 made its initial flight. It was turned over to Canada on 8 August, and participated in the Abbotsford International Airshow on 11 August prior to arriving at its new home base at 8 Wing, CFB Trenton, Ontario on 12 August. Its first operational mission was to deliver disaster relief to Jamaica following Hurricane Dean that month. The last of the initial four aircraft was delivered in April 2008. On 19 December 2014, it was reported that Canada intended to purchase one more C-17. On 30 March 2015, Canada's fifth C-17 arrived at CFB Trenton. The aircraft are assigned to 429 Transport Squadron based at CFB Trenton. On 14 April 2010, a Canadian C-17 landed for the first time at CFS Alert, the world's most northerly airport. Canadian Globemasters have been deployed in support of numerous missions worldwide, including Operation Hestia after the earthquake in Haiti, providing airlift as part of Operation Mobile and support to the Canadian mission in Afghanistan.
After Typhoon Haiyan hit the Philippines in 2013, Canadian C-17s established an air bridge between the two nations, deploying Canada's DART and delivering humanitarian supplies and equipment. In 2014, they supported Operation Reassurance and Operation Impact. Strategic Airlift Capability program At the 2006 Farnborough Airshow, a number of NATO member nations signed a letter of intent to jointly purchase and operate several C-17s within the Strategic Airlift Capability (SAC). As of 2010, the SAC members were Bulgaria, Estonia, Hungary, Lithuania, the Netherlands, Norway, Poland, Romania, Slovenia and the U.S., along with two Partnership for Peace countries, Finland and Sweden. The purchase was for two C-17s, and a third was contributed by the U.S. On 14 July 2009, Boeing delivered the first C-17 under the SAC program. The second and third C-17s were delivered in September and October 2009. The SAC C-17s are based at Pápa Air Base, Hungary. The Heavy Airlift Wing is hosted by Hungary, which acts as the flag nation. The aircraft are manned in a similar fashion to the NATO E-3 AWACS aircraft. The C-17 flight crews are multi-national, but each mission is assigned to an individual member nation based on the SAC's annual flight hour share agreement. The NATO Airlift Management Programme Office (NAMPO) provides management and support for the Heavy Airlift Wing. NAMPO is a part of the NATO Support Agency (NSPA). In September 2014, Boeing stated that the three C-17s supporting SAC missions had achieved a readiness rate of nearly 94 percent over the previous five years and had supported over 1,000 missions. Indian Air Force In June 2009, the Indian Air Force (IAF) selected the C-17 for its Very Heavy Lift Transport Aircraft requirement to replace several types of transport aircraft. In January 2010, India requested 10 C-17s through the U.S.'s Foreign Military Sales program; the sale was approved by Congress in June 2010. On 23 June 2010, the IAF successfully test-landed a USAF C-17 at Gaggal Airport, India, to complete the IAF's C-17 trials. In February 2011, the IAF and Boeing agreed terms for the order of 10 C-17s with an option for six more; the US$4.1 billion order was approved by the Indian Cabinet Committee on Security on 6 June 2011. Deliveries began in June 2013 and were to continue to 2014. In 2012, the IAF reportedly finalized plans to buy six more C-17s in its five-year plan for 2017–2022. The C-17 provides strategic airlift, the ability to deploy special forces, and the ability to operate in diverse terrain – from Himalayan air bases in North India at to Indian Ocean bases in South India. The C-17s are based at Hindon Air Force
Station and are operated by No. 81 Squadron IAF Skylords. The first C-17 was delivered in January 2013 for testing and training; it was officially accepted on 11 June 2013. The second C-17 was delivered on 23 July 2013 and put into service immediately. IAF Chief of Air Staff Norman AK Browne called it "a major component in the IAF's modernization drive" while taking delivery of the aircraft at Boeing's Long Beach factory. On 2 September 2013, the Skylords squadron with three C-17s officially entered IAF service. The Skylords regularly fly missions within India, such as to high-altitude bases at Leh and Thoise. The IAF first used the C-17 to transport an infantry battalion's equipment to Port Blair on the Andaman Islands on 1 July 2013. Foreign deployments to date include Tajikistan in August 2013, and Rwanda to support Indian peacekeepers. One C-17 was used for transporting relief materials during Cyclone Phailin. The sixth aircraft was received in July 2014. In June 2017, the U.S. Department of State approved the potential sale of one C-17 to India under a proposed $366 million U.S. Foreign Military Sale. This aircraft, the last C-17 produced, increased the IAF's fleet to 11 C-17s. In March 2018, a contract was awarded for completion by 22 August 2019. Qatar Boeing delivered Qatar's first C-17 on 11 August 2009 and the second on 10 September 2009 for the Qatar Emiri Air Force. Qatar received its third C-17 in 2012, and its fourth C-17 on 10 December 2012. In June 2013, The New York Times reported that Qatar was allegedly using its C-17s to ship weapons from Libya to the Syrian opposition during the civil war via Turkey. On 15 June 2015, it was announced at the Paris Airshow that Qatar agreed to order four additional C-17s from the five remaining "white tail" C-17s to double Qatar's C-17 fleet. One Qatari C-17 bears the civilian markings of government-owned Qatar Airways, although the airplane is owned and operated by the Qatar Emiri Air Force.
This is because some airports are closed to airplanes with military markings. United Arab Emirates In February 2009, the United Arab Emirates Air Force agreed to buy four C-17s. In January 2010, a contract was signed for six C-17s. In May 2011, the first C-17 was handed over and the final was received in June 2012. Kuwait Kuwait requested the purchase of one C-17 in September 2010 and a second in April 2013 through the U.S.'s Foreign Military Sales (FMS) program. The nation ordered two C-17s; the first was delivered on 13 February 2014. Proposed operators In 2015, New Zealand's Minister of Defence, Gerry Brownlee was considering the purchase of two C-17s for the Royal New Zealand Air Force at an estimated cost of $600 million as a heavy air transport option. However, the New Zealand Government eventually decided not to acquire the C-17. Variants C-17A: Initial military airlifter version. C-17A "ER": Unofficial name for C-17As with extended range due to the addition of the center wing tank. This upgrade was incorporated in production beginning in 2001 with Block 13 aircraft. Block 16: This software/hardware upgrade was a major improvement of the improved Onboard Inert Gas-Generating System (OBIGGS II), a new weather radar, an improved stabilizer strut system and other avionics. Block 21: Adds ADS-B capability, IFF modification, communication/navigation upgrades and improved flight management. C-17B: A proposed tactical airlifter version with double-slotted flaps, an additional main landing gear on the center fuselage, more powerful engines, and other systems for shorter landing and take-off distances. Boeing offered the C-17B to the U.S. military in 2007 for carrying the Army's Future Combat Systems (FCS) vehicles and other equipment. MD-17: Proposed variant for civilian operators, later redesignated as BC-17 after 1997 merger. Operators Royal Australian Air Force – 8 C-17A ERs in service as of Jan. 2018. No. 36 Squadron Royal Canadian Air Force – 5 CC-177 (C-17A ER) aircraft in use as of Jan. 2018. 429 Transport Squadron, CFB Trenton Indian Air Force – 11 C-17s as of Aug. 2019. No. 81 Squadron (Skylords), Hindon AFS Kuwait Air Force – 2 C-17s as of Jan. 2018 Europe The multi-nation Strategic Airlift Capability Heavy Airlift Wing – 3 C-17s in service as of Jan. 2018, including 1 C-17 contributed by the USAF; based at Pápa Air Base, Hungary. Qatari Emiri Air Force – 8 C-17As in use as of Jan. 2018 United Arab Emirates Air Force – 8 C-17As in operation as of Jan. 2018 Royal Air Force – 8 C-17A ERs in use as of Jan. 2018 No. 
99 Squadron, RAF Brize Norton United States Air Force – 222 C-17s in service (157 Active, 47 Air National Guard, 18 Air Force Reserve) 60th Air Mobility Wing – Travis Air Force Base, California 21st Airlift Squadron 62d Airlift Wing – McChord AFB, Washington 4th Airlift Squadron 7th Airlift Squadron 8th Airlift Squadron 10th Airlift Squadron - (2003–2016) 305th Air Mobility Wing – Joint Base McGuire–Dix–Lakehurst, New Jersey 6th Airlift Squadron 385th Air Expeditionary Group – Al Udeid Air Base, Qatar 816th Expeditionary Airlift Squadron 436th Airlift Wing – Dover Air Force Base, Delaware 3d Airlift Squadron 437th Airlift Wing – Charleston Air Force Base, South Carolina 14th Airlift Squadron 15th Airlift Squadron 16th Airlift Squadron 17th Airlift Squadron - (1993–2015) 3d Wing – Elmendorf Air Force Base, Alaska 517th Airlift Squadron (Associate) 15th Wing – Hickam Air Force Base, Hawaii 535th Airlift Squadron 97th Air Mobility Wing – Altus AFB, Oklahoma 58th Airlift Squadron 412th Test Wing – Edwards AFB, California 418th Flight Test Squadron Air Force Reserve 315th Airlift Wing (Associate) – Charleston AFB, South Carolina 300th Airlift Squadron 317th Airlift Squadron 701st Airlift Squadron 349th Air Mobility Wing (Associate) – Travis AFB, California 301st Airlift Squadron 445th Airlift Wing – Wright-Patterson AFB, Ohio 89th Airlift Squadron 446th Airlift Wing (Associate) – McChord AFB, Washington 97th Airlift Squadron 313th Airlift Squadron 728th Airlift Squadron 452d Air Mobility Wing – March ARB, California 729th Airlift Squadron 507th Air Refueling Wing – Tinker AFB, Oklahoma 730th Air Mobility Training Squadron (Altus AFB) 512th Airlift Wing (Associate) – Dover AFB, Delaware 326th Airlift Squadron 514th Air Mobility Wing (Associate) – Joint Base McGuire–Dix–Lakehurst, New Jersey 732d Airlift Squadron 911th Airlift Wing – Pittsburgh Air Reserve Station, Pennsylvania 758th Airlift Squadron Air National Guard 105th Airlift Wing – Stewart ANGB, New York 137th Airlift Squadron 145th Airlift Wing – Charlotte Air National Guard Base, North Carolina 156th Airlift Squadron 154th Wing – Hickam AFB, Hawaii 204th Airlift Squadron (Associate) 164th Airlift Wing – Memphis ANGB, Tennessee 155th Airlift Squadron 167th Airlift Wing – Shepherd Field ANGB, West Virginia 167th Airlift Squadron 172d Airlift Wing – Allen C. Thompson Field ANGB, Mississippi 183d Airlift Squadron 176th Wing – Elmendorf AFB, Alaska 144th Airlift Squadron Accidents and notable incidents On 10 September 1998, a USAF C-17 (AF Serial No.96-0006) delivered Keiko the whale to Vestmannaeyjar, Iceland, a runway, and suffered a landing gear failure during landing. There were no injuries, but the landing gear sustained major damage. After receiving temporary repairs, it flew to a nearby city for further repairs. On 10 December 2003, a USAF C-17 (AF Serial No. 98-0057) was hit by a surface-to-air missile after take-off from Baghdad, Iraq. One engine was disabled and the aircraft returned for a safe landing. It was repaired and returned to service. On 6 August 2005, a USAF C-17 (AF Serial No. 01-0196) ran off the runway at Bagram Air Base in Afghanistan while attempting to land, destroying its nose and main landing gear. After two months to make it flightworthy, a test pilot flew the aircraft to Boeing's Long Beach facility as the temporary repairs imposed performance limitations. In October 2006, it returned to service following repairs. On 30 January 2009, a USAF C-17 (AF Serial No. 
96-0002 – "Spirit of the Air Force") made a gear-up landing at Bagram Air Base. It was ferried from Bagram AB, making several stops along the way, to Boeing's Long Beach plant for extensive repairs. The USAF Aircraft Accident Investigation Board concluded the cause was the crew's failure to follow the pre-landing checklist and lower the landing gear. On 28 July 2010, a USAF C-17 (AF Serial No. 00-0173 – "Spirit of the Aleutians") crashed at Elmendorf Air Force Base, Alaska, while practicing for the 2010 Arctic Thunder Air Show, killing all four aboard. It crashed near a railroad, disrupting rail operations. A military investigation found pilot error to have caused a stall. This is the C-17's only fatal crash and the only hull-loss incident. On 23 January 2012, a USAF C-17 (AF Serial No. 07-7189), assigned to the 437th Airlift Wing, Joint Base Charleston, South Carolina, landed on runway 34R at Forward Operating Base Shank, Afghanistan. The crew did not realize the required stopping distance exceeded the runway's length and thus were unable to stop. It came to rest approximately 700 feet from the runway's end upon an embankment, causing major structural damage but no injuries. After nine months of repairs to make it airworthy, the C-17 flew to Long Beach. It returned to service at a reported cost of $69.4 million. On 20 July 2012, a USAF C-17 of the 305th Air Mobility Wing, flying from McGuire AFB, New Jersey to MacDill Air Force Base in Tampa, Florida, mistakenly landed at nearby Peter O. Knight Airport. The landing followed an extended-duration flight from Europe to Southwest Asia to embark military passengers before returning to the U.S. There were no injuries and no damage to the aircraft or the runway. It took off with ease a short time later from Knight's 3,580-foot runway and flew on to MacDill AFB. The USAF attributed the mistaken landing to pilot error and fatigue; both airfields' main runways were only a few miles apart and shared the same magnetic heading. On 9 April 2021, USAF C-17 10-0223 suffered a fire in its undercarriage after landing at Charleston AFB following a flight from RAF Mildenhall, United Kingdom. The fire spread to the fuselage before it was extinguished. Bibliography Bonny, Danny, Barry Fryer and Martyn Swann. AMARC MASDC III, The Aerospace Maintenance and Regeneration Center, Davis-Monthan AFB, AZ, 1997–2005. Surrey, UK: British Aviation Research Group, 2006. Department of Defense. Kosovo/Operation Allied Force After-Action Report, DIANE Publishing, 31 January 2000. Gertler, Jeremiah. "Air Force C-17 Aircraft Procurement: Background and Issues for
Caber can refer to: Caber toss, a sport Places Caber, Çivril, a village in Çivril District, Denizli Province, Turkey Caber, Sarayköy, a village
in Sarayköy District, Denizli Province, Turkey Çabër, a village in Zubin Potok, Mitrovica district, Kosovo Other uses CaBER, Capillary Breakup Extensional Rheometer Caber (comics), a deity in Marvel Comics Caber Music,
used for this task, meaning that 32 or 64 bits of reference count storage must be allocated for each object. On some systems, it may be possible to mitigate this overhead by using a tagged pointer to store the reference count in unused areas of the object's memory. Often, an architecture does not actually allow programs to access the full range of memory addresses that could be stored in its native pointer size; certain number of high bits in the address is either ignored or required to be zero. If an object reliably has a pointer at a certain location, the reference count can be stored in the unused bits of the pointer. For example, each object in Objective-C has a pointer to its class at the beginning of its memory; on the ARM64 architecture using iOS 7, 19 unused bits of this class pointer are used to store the object's reference count. Speed overhead (increment/decrement) In naive implementations, each assignment of a reference and each reference falling out of scope often require modifications of one or more reference counters. However, in a common case when a reference is copied from an outer scope variable into an inner scope variable, such that the lifetime of the inner variable is bounded by the lifetime of the outer one, the reference incrementing can be eliminated. The outer variable "owns" the reference. In the programming language C++, this technique is readily implemented and demonstrated with the use of const references. Reference counting in C++ is usually implemented using "smart pointers" whose constructors, destructors and assignment operators manage the references. A smart pointer can be passed by reference to a function, which avoids the need to copy-construct a new smart pointer (which would increase the reference count on entry into the function and decrease it on exit). Instead the function receives a reference to the smart pointer which is produced inexpensively. The Deutsch-Bobrow method of reference counting capitalizes on the fact that most reference count updates are in fact generated by references stored in local variables. It ignores these references, only counting references in the heap, but before an object with reference count zero can be deleted, the system must verify with a scan of the stack and registers that no other reference to it still exists. A further substantial decrease in the overhead on counter updates can be obtained by update coalescing introduced by Levanoni and Petrank. Consider a pointer that in a given interval of the execution is updated several times. It first points to an object O1, then to an object O2, and so forth until at the end of the interval it points to some object On. A reference counting algorithm would typically execute rc(O1)--, rc(O2)++, rc(O2)--, rc(O3)++, rc(O3)--, ..., rc(On)++. But most of these updates are redundant. In order to have the reference count properly evaluated at the end of the interval it is enough to perform rc(O1)-- and rc(On)++. Levanoni and Petrank measured an elimination of more than 99% of the counter updates in typical Java benchmarks. Requires atomicity When used in a multithreaded environment, these modifications (increment and decrement) may need to be atomic operations such as compare-and-swap, at least for any objects which are shared, or potentially shared among multiple threads. Atomic operations are expensive on a multiprocessor, and even more expensive if they have to be emulated with software algorithms. 
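As a rough, illustrative sketch of the counter traffic and atomicity issues discussed above, the following C++ fragment shows a minimal intrusive reference-counted smart pointer. The RefCounted and IntrusivePtr names are hypothetical (production code would typically use something like boost::intrusive_ptr or std::shared_ptr), and the sketch omits weak references and does nothing about reference cycles.

```cpp
#include <atomic>
#include <cstdio>
#include <utility>

// Hypothetical base class: each object carries its own (intrusive) reference count.
struct RefCounted {
    std::atomic<long> ref_count{0};
    virtual ~RefCounted() = default;
};

// Hypothetical smart pointer: copying bumps the count, destruction drops it.
template <typename T>
class IntrusivePtr {
public:
    IntrusivePtr() = default;
    explicit IntrusivePtr(T* p) : ptr_(p) { acquire(); }
    IntrusivePtr(const IntrusivePtr& other) : ptr_(other.ptr_) { acquire(); }
    IntrusivePtr(IntrusivePtr&& other) noexcept : ptr_(std::exchange(other.ptr_, nullptr)) {}
    ~IntrusivePtr() { release(); }

    IntrusivePtr& operator=(IntrusivePtr other) noexcept {  // copy-and-swap
        std::swap(ptr_, other.ptr_);
        return *this;
    }

    T* operator->() const { return ptr_; }
    T& operator*() const { return *ptr_; }

private:
    void acquire() {
        // The increment can be relaxed: it only happens while another reference
        // already keeps the object alive, so it never races with destruction.
        if (ptr_) ptr_->ref_count.fetch_add(1, std::memory_order_relaxed);
    }
    void release() {
        // The decrement must synchronize so the deleting thread sees all prior writes.
        if (ptr_ && ptr_->ref_count.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete ptr_;
    }
    T* ptr_ = nullptr;
};

struct Widget : RefCounted {
    void hello() const { std::puts("hello"); }
};

// Passing by const reference performs no counter updates at all: the caller's
// reference outlives the call, so it "owns" the count for the duration.
void use(const IntrusivePtr<Widget>& w) { w->hello(); }

int main() {
    IntrusivePtr<Widget> a(new Widget);  // count: 1
    IntrusivePtr<Widget> b = a;          // copy -> atomic increment, count: 2
    use(a);                              // no reference-count traffic
}   // b and a go out of scope -> two atomic decrements; object deleted at zero
```

Copy construction and destruction are where the atomic increments and decrements come from; passing the pointer by const reference, as use() does, avoids that traffic entirely, which is exactly the optimization described above for references whose lifetime is bounded by an outer reference.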
It is possible to avoid this issue by adding per-thread or per-CPU reference counts and only accessing the global reference count when the local reference counts become or are no longer zero (or, alternatively, using a binary tree of reference counts, or even giving up deterministic destruction in exchange for not having a global reference count at all), but this adds significant memory overhead and thus tends to be only useful in special cases (it is used, for example, in the reference counting of Linux kernel modules). Update coalescing by Levanoni and Petrank can be used to eliminate all atomic operations from the write-barrier. Counters are never updated by the program threads in the course of program execution. They are only modified by the collector, which executes as a single additional thread with no synchronization. This method can be used as a stop-the-world mechanism for parallel programs, and also with a concurrent reference counting collector. Not real-time Naive implementations of reference counting do not generally provide real-time behavior, because any pointer assignment can potentially cause a number of objects bounded only by total allocated memory size to be recursively freed while the thread is unable to perform other work. It is possible to avoid this issue by delegating the freeing of unreferenced objects to other threads, at the cost of extra overhead. Escape analysis Escape analysis is a compile-time technique that can convert heap allocations to stack allocations, thereby reducing the amount of garbage collection to be done. This analysis determines whether an object allocated inside a function is accessible outside of it. If a function-local allocation is found to be accessible to another function or thread, the allocation is said to "escape" and cannot be done on the stack. Otherwise, the object may be allocated directly on the stack and released when the function returns, bypassing the heap and associated memory management costs; a short sketch of the idea follows at the end of this passage. Availability Generally speaking, higher-level programming languages are more likely to have garbage collection as a standard feature. In some languages that do not have built-in garbage collection, it can be added through a library, as with the Boehm garbage collector for C and C++. Most functional programming languages, such as ML, Haskell, and APL, have garbage collection built in. Lisp is especially notable as both the first functional programming language and the first language to introduce garbage collection. Other dynamic languages, such as Ruby and Julia (but not Perl 5 or PHP before version 5.3, which both use reference counting), JavaScript and ECMAScript also tend to use GC. Object-oriented programming languages such as Smalltalk, RPL and Java usually provide integrated garbage collection. Notable exceptions are C++ and Delphi, which have destructors. BASIC BASIC and Logo have often used garbage collection for variable-length data types, such as strings and lists, so as not to burden programmers with memory management details. On the Altair 8800, programs with many string variables and little string space could cause long pauses due to garbage collection. Similarly, the Applesoft BASIC interpreter's garbage collection algorithm repeatedly scans the string descriptors for the string having the highest address in order to compact it toward high memory, resulting in poor performance, with pauses anywhere from a few seconds to a few minutes.
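The escape-analysis sketch promised above is given here as a small, hypothetical C++ example. Whether a particular C++ compiler actually performs this heap-to-stack conversion or allocation elision is implementation-dependent (managed runtimes such as Java's HotSpot and the Go compiler apply the idea more systematically); the function and variable names are invented for illustration only.

```cpp
#include <memory>
#include <numeric>
#include <vector>

// The buffer never leaves this function: no pointer or reference to it is
// returned or stored elsewhere, so an escape analysis may treat the heap
// allocation as a candidate for stack allocation or outright elision.
int sum_to(int n) {
    std::vector<int> buffer(n);                  // does not escape
    std::iota(buffer.begin(), buffer.end(), 1);  // fill with 1..n
    return std::accumulate(buffer.begin(), buffer.end(), 0);
}

// The allocation "escapes": ownership is handed to the caller, so the object
// must outlive the function and has to remain heap-allocated.
std::unique_ptr<std::vector<int>> make_buffer(int n) {
    auto buffer = std::make_unique<std::vector<int>>(n);
    return buffer;                               // escapes via the return value
}

int main() {
    int s = sum_to(100);                         // 5050; allocation may be elided
    auto kept = make_buffer(100);                // allocation must stay on the heap
    return (s == 5050 && kept->size() == 100) ? 0 : 1;
}
```

The contrast is purely conceptual: an allocation whose address never leaves the function is a candidate for stack allocation or removal, while one that is returned or stored elsewhere must remain on the heap and be reclaimed later.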
A replacement garbage collector for Applesoft BASIC by Randy Wigginton identifies a group of strings in every pass over the heap, reducing collection time dramatically. BASIC.System, released with ProDOS in 1983, provides a windowing garbage collector for BASIC that is many times faster. Objective-C While Objective-C traditionally had no garbage collection, with the release of OS X 10.5 in 2007 Apple introduced garbage collection for Objective-C 2.0, using an in-house developed runtime collector. However, with the 2012 release of OS X 10.8, garbage collection was deprecated in favor of LLVM's automatic reference counting (ARC), which was introduced with OS X 10.7. Furthermore, since May 2015 Apple has forbidden the use of garbage collection for new OS X applications in the App Store. For iOS, garbage collection has never been introduced due to problems in application responsivity and performance; instead, iOS uses ARC. Limited environments
Garbage collection is rarely used on embedded or
molar and reduction of its parastyle distinguish these late Cenozoic canids and are the essential differences that identify their clade. The cat-like feliforms and dog-like caniforms emerged within the Carnivoramorpha around 45–42 Mya (million years ago). The Canidae first appeared in North America during the Late Eocene (37.8–33.9 Mya). They did not reach Eurasia until the Miocene, or South America until the Late Pliocene. Phylogenetic relationships This cladogram shows the phylogenetic position of canids within Caniformia, based on fossil finds: Evolution The Canidae today includes a diverse group of some 34 species ranging in size from the maned wolf with its long limbs to the short-legged bush dog. Modern canids inhabit forests, tundra, savannahs, and deserts throughout tropical and temperate parts of the world. The evolutionary relationships between the species have been studied in the past using morphological approaches, but more recently, molecular studies have enabled the investigation of phylogenetic relationships. In some species, genetic divergence has been suppressed by the high level of gene flow between different populations; where species have hybridized, large hybrid zones exist. Eocene epoch Carnivorans evolved after the extinction of the non-avian dinosaurs 66 million years ago. Around 50 million years ago, or earlier, in the Paleocene, the carnivorans split into two main divisions: caniforms (dog-like) and feliforms (cat-like). By 40 Mya, the first identifiable member of the dog family had arisen. Named Prohesperocyon wilsoni, its fossilized remains have been found in what is now the southwestern part of Texas. The chief features which identify it as a canid include the loss of the upper third molar (part of a trend toward a more shearing bite), and the structure of the middle ear, which has an enlarged bulla (the hollow bony structure protecting the delicate parts of the ear). Prohesperocyon probably had slightly longer limbs than its predecessors, and also had parallel and closely touching toes, which differ markedly from the splayed arrangements of the digits in bears. The canid family soon subdivided into three subfamilies, each of which diverged during the Eocene: Hesperocyoninae (about 39.74–15 Mya), Borophaginae (about 34–2 Mya), and Caninae (about 34–0 Mya). The Caninae are the only surviving subfamily and include all present-day canids: wolves, foxes, coyotes, jackals, and domestic dogs. Members of each subfamily showed an increase in body mass with time, and some exhibited specialized hypercarnivorous diets that made them prone to extinction. Oligocene epoch By the Oligocene, all three subfamilies of canids (Hesperocyoninae, Borophaginae, and Caninae) had appeared in the fossil records of North America. The earliest and most primitive branch of the Canidae was the Hesperocyoninae lineage, which included the coyote-sized Mesocyon of the Oligocene (38–24 Mya). These early canids probably evolved for the fast pursuit of prey in a grassland habitat; they resembled modern viverrids in appearance. Hesperocyonines eventually became extinct in the middle Miocene. One of the early members of the Hesperocyonines, the genus Hesperocyon, gave rise to Archaeocyon and Leptocyon. These branches led to the borophagine and canine radiations. Miocene epoch Around 9–10 Mya, during the Late Miocene, the Canis, Urocyon, and Vulpes genera expanded from southwestern North America, where the canine radiation began.
The success of these canines was related to the development of lower carnassials that were capable of both mastication and shearing. Around 8 Mya, the Beringian land bridge allowed members of the genus Eucyon to enter Asia, and they continued on to colonize Europe. Pliocene epoch During the Pliocene, around 4–5 Mya, Canis lepophagus appeared in North America. This was small and sometimes coyote-like; others were wolf-like in characteristics. C. latrans (the coyote) is theorized to have descended from C. lepophagus. The formation of the Isthmus of Panama, about 3 Mya, joined South America to North America, allowing canids to invade South America, where they diversified. However, the most recent common ancestor of the South American canids lived in North America some 4 Mya, and more than one incursion across the new land bridge is likely. One of the resulting lineages consisted of the gray fox (Urocyon cinereoargenteus) and the now-extinct dire wolf (Aenocyon dirus). The other lineage consisted of the so-called South American endemic species: the maned wolf (Chrysocyon brachyurus), the short-eared dog (Atelocynus microtis), the bush dog (Speothos venaticus), the crab-eating fox (Cerdocyon thous), and the South American foxes (Lycalopex spp.). The monophyly of this group has been established by molecular means. Pleistocene epoch During the Pleistocene, the North American wolf line appeared with Canis edwardii, clearly identifiable as a wolf; Canis rufus appeared, possibly a direct descendant of C. edwardii. Around 0.8 Mya, Canis armbrusteri emerged in North America. A large wolf, it was found all over North and Central America and was eventually supplanted by its descendant, the dire wolf, which then spread into South America during the Late Pleistocene. By 0.3 Mya, a number of subspecies of the gray wolf (C. lupus) had developed and had spread throughout Europe and northern Asia. The gray wolf colonized North America during the late Rancholabrean era across the Bering land bridge, with at least three separate invasions, each consisting of one or more different Eurasian gray wolf clades. MtDNA studies have shown that there are at least four extant C. lupus lineages. The dire wolf shared its habitat with the gray wolf, but became extinct in a large-scale extinction event that occurred around 11,500 years ago. It may have been more of a scavenger than a hunter; its molars appear to be adapted for crushing bones, and it may have gone extinct as a result of the extinction of the large herbivorous animals on whose carcasses it relied. In 2015, a study of mitochondrial genome sequences and whole-genome nuclear sequences of African and Eurasian canids indicated that extant wolf-like canids have colonized Africa from Eurasia at least five times throughout the Pliocene and Pleistocene, which is consistent with fossil evidence suggesting that much of African canid fauna diversity resulted from the immigration of Eurasian ancestors, likely coincident with Plio-Pleistocene climatic oscillations between arid and humid conditions. When comparing the African and Eurasian golden jackals, the study concluded that the African specimens represented a distinct monophyletic lineage that should be recognized
as a separate species, Canis anthus (African golden wolf).
According to a phylogeny derived from nuclear sequences, the Eurasian golden jackal (Canis aureus) diverged from the wolf/coyote lineage 1.9 Mya, but the African golden wolf separated 1.3 Mya. Mitochondrial genome sequences indicated the Ethiopian wolf diverged from the wolf/coyote lineage slightly prior to that. Wild canids are found on every continent except Antarctica, and inhabit a wide range of different habitats, including deserts, mountains, forests, and grasslands. They vary in size from the fennec fox, which may be as little as in length and weigh , to the gray wolf, which may be up to long, and can weigh up to . Only a few species are arboreal—the gray fox, the closely related island fox and the raccoon dog habitually climb trees. All canids have a similar basic form, as exemplified by the gray wolf, although the relative length of muzzle, limbs, ears, and tail vary considerably between species. With the exceptions of the bush dog, the raccoon dog and some domestic dog breeds, canids have relatively long legs and lithe bodies, adapted for chasing prey. The tails are bushy and the length and quality of the pelage vary with the season. The muzzle portion of the skull is much more elongated than that of the cat family. The zygomatic arches are wide, there is a transverse lambdoidal ridge at the rear of the cranium and in some species, a sagittal crest running from front to back. The bony orbits around the eye never form a complete ring and the auditory bullae are smooth and rounded. Females have three to seven pairs of mammae. All canids are digitigrade, meaning they walk on their toes. The tip of the nose is always naked, as are the cushioned pads on the soles of the feet. These latter consist of a single pad behind the tip of each toe and a more-or-less three-lobed central pad under the roots of the digits. Hairs grow between the pads and in the Arctic fox, the sole of the foot is densely covered with hair at some times of the year. With the exception of the four-toed African wild dog (Lycaon pictus), five toes are on the forefeet, but the pollex (thumb) is reduced and does not reach the ground. On the hind feet are four toes, but in some domestic dogs, a fifth vestigial toe, known as a dewclaw, is sometimes present, but has no anatomical connection to the rest of the foot. The slightly curved nails are not retractile and more-or-less blunt. The penis in male canids
recurvata - its upturning tail - which is not found in any other canid. In 1999, a study of mitochondrial DNA indicated that the domestic dog may have originated from multiple wolf populations, with the dingo and New Guinea singing dog "breeds" having developed at a time when human populations were more isolated from each other. In the third edition of Mammal Species of the World published in 2005, the mammalogist W. Christopher Wozencraft listed under the wolf Canis lupus some 36 wild subspecies, and proposed two additional subspecies: familiaris Linnaeus, 1758 and dingo Meyer, 1793. Wozencraft included hallstromi – the New Guinea singing dog – as a taxonomic synonym for the dingo. Wozencraft referred to the mDNA study as one of the guides in forming his decision, and listed the 38 subspecies under the biological common name of "wolf", with the nominate subspecies being the Eurasian wolf (Canis lupus lupus) based on the type specimen that Linnaeus studied in Sweden. However, the classification of several of these canines as either species or subspecies has recently been challenged. List of extant subspecies Living subspecies recognized by MSW3 and divided into Old World and New World: Eurasia and Australasia Sokolov and Rossolimo (1985) recognised nine Old World subspecies of wolf. These were C. l. lupus, C. l. albus, C. l. pallipes, C. l. cubanensis, C. l. campestris, C. l. chanco, C. l. desortorum, C. l. hattai, and C. l. hodophilax. In his 1995 statistical analysis of skull morphometrics, mammalogist Robert Nowak recognized the first four of those subspecies, synonymized campestris, chanco and desortorum with C. l. lupus, but did not examine the two Japanese subspecies. In addition, he recognized C. l. communis as a subspecies distinct from C. l. lupus. In 2003, Nowak also recognized the distinctiveness of C. l. , C. l. hattai, C. l. italicus, and C. l. hodophilax. In 2005, MSW3 included C. l. filchneri. In 2003, two forms were distinguished in southern China and Inner Mongolia as being separate from C. l. chanco and C. l. filchneri and have yet to be named. North America For North America, in 1944 the zoologist Edward Goldman recognized as many as 23 subspecies based on morphology. In 1959, E. Raymond Hall proposed that there had been 24 subspecies of lupus in North America. In 1970, L. David Mech proposed that there was "probably far too many subspecific designations...in use", as most did not exhibit enough points of differentiation to be classified as separate subspecies. The 24 subspecies were accepted by many authorities in 1981 and these were based on morphological or geographical differences, or a unique history. In 1995, the American mammologist Robert M. Nowak analyzed data on the skull morphology of wolf specimens from around the world. For North America, he proposed that there were only five subspecies of the wolf. These include a large-toothed Arctic wolf named C. l. arctos, a large wolf from Alaska and western Canada named C. l. occidentalis, a small wolf from southeastern Canada named C. l. lycaon, a small wolf from the southwestern U.S. named C. l. baileyi and a moderate-sized wolf that was originally found from Texas to Hudson Bay and from Oregon to Newfoundland named C. l. nubilus. The taxonomic classification of Canis lupus in Mammal Species of the World (3rd edition, 2005) listed 27 subspecies of North American wolf, corresponding to the 24 Canis lupus subspecies and the three Canis rufus subspecies of Hall (1981). 
The table below shows the extant subspecies, with the extinct ones listed in the following section. List of historically extinct subspecies Subspecies recognized by MSW3 which have gone extinct over the past 150 years: Subspecies discovered since the publishing of MSW3 in 2005 which have gone extinct over the past 150 years: Disputed subspecies Global In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group considered the New Guinea singing dog and the dingo to be feral dogs (Canis familiaris). In 2020, a literature review of canid domestication stated that modern dogs were not descended from the same Canis lineage as modern wolves, and proposes that dogs may be descended from a Pleistocene wolf closer in size to a village dog. In 2021, the American Society of Mammalogists also considered dingos a feral dog (Canis familiaris) population. Eurasia Italian wolf The Italian wolf (or Apennine wolf) was first recognised as a distinct subspecies Canis lupus italicus in 1921 by zoologist Giuseppe Altobello. Altobello's classification was later rejected by several authors, including Reginald Innes Pocock, who synonymised C. l. italicus with C. l. lupus. In 2002, the noted paleontologist R.M. Nowak reaffirmed the morphological distinctiveness of the Italian wolf and recommended the recognition of Canis lupus italicus. A number of DNA studies have found the Italian wolf to be genetically distinct. In 2004, the genetic distinction of the Italian wolf subspecies was supported by analysis which consistently assigned all the wolf genotypes of a sample in Italy to a single group. This population also showed a unique mitochondrial DNA control-region haplotype, the absence of private alleles and lower heterozygosity at microsatellite loci, as compared to other wolf populations. In 2010, a genetic analysis indicated that a single wolf haplotype (w22) unique to the Apennine Peninsula and one of the two haplotypes (w24, w25), unique to the Iberian Peninsula, belonged to the same haplogroup as the prehistoric wolves of Europe. Another haplotype (w10) was found to be common to the Iberian peninsula and the Balkans. These three populations with geographic isolation exhibited a near lack of gene flow and spatially correspond to three glacial refugia. The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis lupus italicus; however, NCBI/Genbank
publishes research papers under that name. Iberian wolf The Iberian wolf was first recognised as a distinct subspecies (Canis lupus signatus) in 1907 by zoologist Ángel Cabrera. The wolves of the Iberian peninsula have morphologically distinct features from other Eurasian wolves and each are considered by their researchers to represent their own subspecies. The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis lupus signatus; however, NCBI/Genbank does list it. Himalayan wolf The Himalayan wolf is distinguished by its mitochondrial DNA, which is basal to all other wolves. The taxonomic name of this wolf is disputed, with the species Canis himalayensis being proposed based on two limited DNA studies. In 2017, a study of mitochondrial DNA, X-chromosome (maternal lineage) markers and Y-chromosome (male lineage) markers found that the Himalayan wolf was genetically basal to the Holarctic grey wolf and has an association with the African golden wolf. In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group noted that the Himalayan wolf's distribution included the Himalayan range and the Tibetan Plateau. The group recommends that this wolf lineage be known as the "Himalayan wolf" and classified as Canis lupus chanco until a genetic analysis of the holotypes is available. In 2020, further research on the Himalayan wolf found that it warranted species-level recognition under the Unified Species Concept, the Differential Fitness Species Concept, and the Biological Species Concept. It was identified as an Evolutionary Significant Unit that warranted assignment onto the IUCN Red List for its protection.
Indian plains wolf The Indian plains wolf is a proposed clade within the Indian wolf (Canis lupus pallipes) that is distinguished by its mitochondrial DNA, which is basal to all other wolves except for the Himalayan wolf. The taxonomic status of this wolf clade is disputed, with the separate species Canis indica being proposed based on two limited DNA studies. The proposal has not been endorsed because they relied on a limited number of museum and zoo samples that may not have been representative of the wild population and a call for further fieldwork has been made. The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis indica; however, NCBI/Genbank lists it as a new subspecies Canis lupus indica. Southern Chinese wolf In 2017, a comprehensive study found that the gray wolf was present across all of mainland China, both in the past and today. It exists in southern China, which refutes claims made by some researchers in the western world that the wolf had never existed in southern China. This wolf has not been taxonomically classified. In 2019, a genomic study on the wolves of China included museum specimens of wolves from southern China that were collected between 1963 and 1988. The wolves in the study formed three clades: northern Asian wolves that included those from northern China and eastern Russia, Himalayan wolves from the Tibetan Plateau, and a unique population from southern China. One specimen from Zhejiang Province in eastern China shared gene flow with the wolves from southern China; however, its genome was 12-14 percent admixed with a canid that may be the dhole or
throughout the Central Asian region, the countries sometimes organise Buzkashi competitions amongst each other. The first regional competition among the Central Asian countries, Russia, Chinese Xinjiang and Turkey was held in 2013. The first world title competition was played in 2017 and won by Kazakhstan. Association football is popular across Central Asia. Most countries are members of the Central Asian Football Association, a region of the Asian Football Confederation. However, Kazakhstan is a member of UEFA. Wrestling is popular across Central Asia, with Kazakhstan having claimed 14 Olympic medals, Uzbekistan seven, and Kyrgyzstan three. As former Soviet states, Central Asian countries have been successful in gymnastics. Mixed martial arts is one of the more common sports in Central Asia, with Kyrgyz athlete Valentina Shevchenko holding the UFC Flyweight Championship title. Cricket is the most popular sport in Afghanistan. The Afghanistan national cricket team, first formed in 2001, has claimed wins over Bangladesh, the West Indies and Zimbabwe. Notable Kazakh competitors include cyclists Alexander Vinokourov and Andrey Kashechkin, boxers Vassiliy Jirov and Gennady Golovkin, runner Olga Shishigina, decathlete Dmitriy Karpov, gymnast Aliya Yussupova, judokas Askhat Zhitkeyev and Maxim Rakov, skier Vladimir Smirnov, weightlifter Ilya Ilyin, and figure skaters Denis Ten and Elizabet Tursynbaeva. Notable Uzbekistani competitors include cyclist Djamolidine Abdoujaparov, boxer Ruslan Chagaev, canoer Michael Kolganov, gymnast Oksana Chusovitina, tennis player Denis Istomin, chess player Rustam Kasimdzhanov, and figure skater Misha Ge. Economy Since gaining independence in the early 1990s, the Central Asian republics have gradually been moving from a state-controlled economy to a market economy. The ultimate aim is to emulate the Asian Tigers by becoming the local equivalent, Central Asian snow leopards. However, reform has been deliberately gradual and selective, as governments strive to limit the social cost and ameliorate living standards. All five countries are implementing structural reforms to improve competitiveness. Kazakhstan is the only CIS country to be included in the 2019 and 2020 IWB World Competitiveness rankings. In particular, they have been modernizing the industrial sector and fostering the development of service industries through business-friendly fiscal policies and other measures, to reduce the share of agriculture in GDP. Between 2005 and 2013, the share of agriculture dropped in all but Tajikistan, where it increased while industry decreased. The fastest growth in industry was observed in Turkmenistan, whereas the services sector progressed most in the other four countries. Public policies pursued by Central Asian governments focus on buffering the political and economic spheres from external shocks. This includes maintaining a trade balance, minimizing public debt and accumulating national reserves. They cannot totally insulate themselves from negative exterior forces, however, such as the persistently weak recovery of global industrial production and international trade since 2008. Notwithstanding this, they have emerged relatively unscathed from the global financial crisis of 2008–2009. Growth faltered only briefly in Kazakhstan, Tajikistan and Turkmenistan, and not at all in Uzbekistan, where the economy grew by more than 7% per year on average between 2008 and 2013. Turkmenistan achieved an unusually high 14.7% growth rate in 2011.
Kyrgyzstan's performance has been more erratic, but this phenomenon was visible well before 2008. The republics which have fared best benefitted from the commodities boom during the first decade of the 2000s. Kazakhstan and Turkmenistan have abundant oil and natural gas reserves, and Uzbekistan's own reserves make it more or less self-sufficient. Kyrgyzstan, Tajikistan and Uzbekistan all have gold reserves, and Kazakhstan has the world's largest uranium reserves. Fluctuating global demand for cotton, aluminium and other metals (except gold) in recent years has hit Tajikistan hardest, since aluminium and raw cotton are its chief exports; the Tajik Aluminium Company is the country's primary industrial asset. In January 2014, the Minister of Agriculture announced the government's intention to reduce the acreage of land cultivated with cotton to make way for other crops. Uzbekistan and Turkmenistan are major cotton exporters themselves, ranking fifth and ninth respectively worldwide for volume in 2014. Although both exports and imports have grown significantly over the past decade, the Central Asian republics remain vulnerable to economic shocks, owing to their reliance on exports of raw materials, a restricted circle of trading partners and a negligible manufacturing capacity. Kyrgyzstan has the added disadvantage of being considered resource-poor, although it does have ample water. Most of its electricity is generated by hydropower. The Kyrgyz economy was shaken by a series of shocks between 2010 and 2012. In April 2010, President Kurmanbek Bakiyev was deposed by a popular uprising, with former minister of foreign affairs Roza Otunbayeva assuming the interim presidency until the election of Almazbek Atambayev in November 2011. Food prices rose two years in a row and, in 2012, production at the major Kumtor gold mine fell by 60% after the site was perturbed by geological movements. According to the World Bank, 33.7% of the population was living in absolute poverty in 2010 and 36.8% a year later. Despite high rates of economic growth in recent years, GDP per capita in Central Asia was higher than the average for developing countries only in Kazakhstan in 2013 (PPP$23,206) and Turkmenistan (PPP$14,201). It dropped to PPP$5,167 for Uzbekistan, home to 45% of the region's population, and was even lower for Kyrgyzstan and Tajikistan. Kazakhstan leads the Central Asian region in terms of foreign direct investment. The Kazakh economy accounts for more than 70% of all the investment attracted in Central Asia. In terms of the economic influence of big powers, China is viewed as one of the key economic players in Central Asia, especially after Beijing launched its grand development strategy known as the Belt and Road Initiative (BRI) in 2013. The Central Asian countries attracted $378.2 billion of foreign direct investment (FDI) between 2007 and 2019. Kazakhstan accounted for 77.7% of the total FDI directed to the region. Kazakhstan is also the largest country in Central Asia, accounting for more than 60 percent of the region's gross domestic product (GDP). Education, science and technology Modernisation of research infrastructure Bolstered by strong economic growth in all but Kyrgyzstan, national development strategies are fostering new high-tech industries, pooling resources and orienting the economy towards export markets. Many national research institutions established during the Soviet era have since become obsolete with the development of new technologies and changing national priorities.
This has led countries to reduce the number of national research institutions since 2009 by grouping existing institutions to create research hubs. Several of the Turkmen Academy of Science's institutes were merged in 2014: the Institute of Botany was merged with the Institute of Medicinal Plants to become the Institute of Biology and Medicinal Plants; the Sun Institute was merged with the Institute of Physics and Mathematics to become the Institute of Solar Energy; and the Institute of Seismology merged with the State Service for Seismology to become the Institute of Seismology and Atmospheric Physics. In Uzbekistan, more than 10 institutions of the Academy of Sciences have been reorganised, following the issuance of a decree by the Cabinet of Ministers in February 2012. The aim is to orient academic research towards problem-solving and ensure continuity between basic and applied research. For example, the Mathematics and Information Technology Research Institute has been subsumed under the National University of Uzbekistan and the Institute for Comprehensive Research on Regional Problems of Samarkand has been transformed into a problem-solving laboratory on environmental issues within Samarkand State University. Other research institutions have remained attached to the Uzbek Academy of Sciences, such as the Centre of Genomics and Bioinformatics. Kazakhstan and Turkmenistan are also building technology parks as part of their drive to modernise infrastructure. In 2011, construction began of a technopark in the village of Bikrova near Ashgabat, the Turkmen capital. It will combine research, education, industrial facilities, business incubators and exhibition centres. The technopark will house research on alternative energy sources (sun, wind) and the assimilation of nanotechnologies. Between 2010 and 2012, technological parks were set up in the east, south and north Kazakhstan oblasts (administrative units) and in the capital, Nur-Sultan. A Centre for Metallurgy was also established in the east Kazakhstan oblast, as well as a Centre for Oil and Gas Technologies which will be part of the planned Caspian Energy Hub. In addition, the Centre for Technology Commercialisation has been set up in Kazakhstan as part of the Parasat National Scientific and Technological Holding, a joint stock company established in 2008 that is 100% state-owned. The centre supports research projects in technology marketing, intellectual property protection, technology licensing contracts and start-ups. The centre plans to conduct a technology audit in Kazakhstan and to review the legal framework regulating the commercialisation of research results and technology. Countries are seeking to augment the efficiency of traditional extractive sectors but also to make greater use of information and communication technologies and other modern technologies, such as solar energy, to develop the business sector, education and research. In March 2013, two research institutes were created by presidential decree to foster the development of alternative energy sources in Uzbekistan, with funding from the Asian Development Bank and other institutions: the SPU Physical−Technical Institute (Physics Sun Institute) and the International Solar Energy Institute. 
Three universities have been set up since 2011 to foster competence in strategic economic areas: Nazarbayev University in Kazakhstan (first intake in 2011), an international research university, Inha University in Uzbekistan (first intake in 2014), specializing in information and communication technologies, and the International Oil and Gas University in Turkmenistan (founded in 2013). Kazakhstan and Uzbekistan are both generalizing the teaching of foreign languages at school, in order to facilitate international ties. Kazakhstan and Uzbekistan have both adopted the three-tier bachelor's, master's and PhD degree system, in 2007 and 2012 respectively, which is gradually replacing the Soviet system of Candidates and Doctors of Science. In 2010, Kazakhstan became the only Central Asian member of the Bologna Process, which seeks to harmonise higher education systems in order to create a European Higher Education Area. Financial investment in research The Central Asian republics' ambition of developing the business sector, education and research is being hampered by chronic low investment in research and development. Over the decade to 2013, the region's investment in research and development hovered around 0.2–0.3% of GDP. Uzbekistan broke with this trend in 2013 by raising its own research intensity to 0.41% of GDP. Kazakhstan is the only country where the business enterprise and private non-profit sectors make any significant contribution to research and development – but research intensity overall is low in Kazakhstan: just 0.18% of GDP in 2013. Moreover, few industrial enterprises conduct research in Kazakhstan. Only one in eight (12.5%) of the country's manufacturing firms were active in innovation in 2012, according to a survey by the UNESCO Institute for Statistics. Enterprises prefer to purchase technological solutions that are already embodied in imported machinery and equipment. Just 4% of firms purchase the license and patents that come with this technology. Nevertheless, there appears to be a growing demand for the products of research, since enterprises spent 4.5 times more on scientific and technological services in 2008 than in 1997. Trends in researchers Kazakhstan and Uzbekistan count the highest researcher density in Central Asia. The number of researchers per million population is close to the world average (1,083 in 2013) in Kazakhstan (1,046) and higher than the world average in Uzbekistan (1,097). Kazakhstan is the only Central Asian country where the business enterprise and private non-profit sectors make any significant contribution to research and development. Uzbekistan is in a particularly vulnerable position, with its heavy reliance on higher education: three-quarters of researchers were employed by the university sector in 2013 and just 6% in the business enterprise sector. With most Uzbek university researchers nearing retirement, this imbalance imperils Uzbekistan's research future. Almost all holders of a Candidate of Science, Doctor of Science or PhD are more than 40 years old and half are aged over 60; more than one in three researchers (38.4%) holds a PhD degree, or its equivalent, the remainder holding a bachelor's or master's degree. Kazakhstan, Kyrgyzstan and Uzbekistan have all maintained a share of women researchers above 40% since the fall of the Soviet Union. Kazakhstan has even achieved gender parity, with Kazakh women dominating medical and health research and representing some 45–55% of engineering and technology researchers in 2013. 
In Tajikistan, however, only one in three scientists (34%) was a woman in 2013, down from 40% in 2002. Although policies are in place to give Tajik women equal rights and opportunities, these are underfunded and poorly understood. Turkmenistan has offered a state guarantee of equality for women since a law adopted in 2007, but the lack of available data makes it impossible to draw any conclusions as to the law's impact on research. More generally, Turkmenistan does not make data available on higher education, research expenditure or researchers. Table: PhDs obtained in science and engineering in Central Asia, 2013 or closest year Source: UNESCO Science Report: towards 2030 (2015), Table 14.1 Note: PhD graduates in science cover life sciences, physical sciences, mathematics and statistics, and computing; PhDs in engineering also cover manufacturing and construction. For Central Asia, the generic term of PhD also encompasses Candidate of Science and Doctor of Science degrees. Data are unavailable for Turkmenistan. Table: Central Asian researchers by field of science and gender, 2013 or closest year Source: UNESCO Science Report: towards 2030 (2015), Table 14.1 Research output The number of scientific papers published in Central Asia grew by almost 50% between 2005 and 2014, driven by Kazakhstan, which overtook Uzbekistan over this period to become the region's most prolific scientific publisher, according to Thomson Reuters' Web of Science (Science Citation Index Expanded). Between 2005 and 2014, Kazakhstan's share of scientific papers from the region grew from 35% to 56%. Although two-thirds of papers from the region have a foreign co-author, the main partners tend to come from beyond Central Asia, namely the Russian Federation, the USA, Germany, the United Kingdom and Japan. Five Kazakh patents were registered at the US Patent and Trademark Office between 2008 and 2013, compared to three for Uzbek inventors and none at all for the other three Central Asian republics, Kyrgyzstan, Tajikistan and Turkmenistan. Kazakhstan is Central Asia's main trader in high-tech products. Kazakh imports nearly doubled between 2008 and 2013, from US$2.7 billion to US$5.1 billion. There has been a surge in imports of computers, electronics and telecommunications; these products represented an investment of US$744 million in 2008 and US$2.6 billion five years later. The growth in exports was more gradual – from US$2.3 billion to US$3.1 billion – and was dominated by chemical products (other than pharmaceuticals), which represented two-thirds of exports in 2008 (US$1.5 billion) and 83% (US$2.6 billion) in 2013. International cooperation The five Central Asian republics belong to several international bodies, including the Organization for Security and Co-operation in Europe, the Economic Cooperation Organization and the Shanghai Cooperation Organisation. They are also members of the Central Asia Regional Economic Cooperation (CAREC) Programme, which also includes Afghanistan, Azerbaijan, China, Mongolia and Pakistan. In November 2011, the 10 member countries adopted the CAREC 2020 Strategy, a blueprint for furthering regional co-operation.
Only Kazakhstan, Azerbaijan, and Turkmenistan border the Caspian Sea, and none of the republics has direct access to an ocean, complicating the transportation of hydrocarbons, in particular, to world markets. Kazakhstan was also one of the three founding members of the Eurasian Economic Union in 2014, along with Belarus and the Russian Federation. Armenia and Kyrgyzstan have since joined this body. As co-operation among the member states in science and technology is already considerable and well-codified in legal texts, the Eurasian Economic Union is expected to have a limited additional impact on co-operation among public laboratories or academia, but it should encourage business ties and scientific mobility, since it includes provision for the free circulation of labour and unified patent regulations. Kazakhstan and Tajikistan participated in the Innovative Biotechnologies Programme (2011–2015) launched by the Eurasian Economic Community, the predecessor of the Eurasian Economic Union. The programme also involved Belarus and the Russian Federation. Within this programme, prizes were awarded at an annual bio-industry exhibition and conference. In 2012, 86 Russian organisations participated, plus three from Belarus, one from Kazakhstan and three from Tajikistan, as well as two scientific research groups from Germany. At the time, Vladimir Debabov, scientific director of the Genetika State Research Institute for Genetics and the Selection of Industrial Micro-organisms in the Russian Federation, stressed the paramount importance of developing bio-industry. "In the world today, there is a strong tendency to switch from petrochemicals to renewable biological sources", he said. "Biotechnology is developing two to three times faster than chemicals." Kazakhstan also participated in a second project of the Eurasian Economic Community, the establishment of the Centre for
Innovative Technologies on 4 April 2013, with the signing of an agreement between the Russian Venture Company (a government fund of funds), the Kazakh JSC National Agency and the Belarusian Innovative Foundation. Each of the selected projects is entitled to funding of US$3–90 million and is implemented within a public–private partnership. The first few approved projects focused on supercomputers, space technologies, medicine, petroleum recycling, nanotechnologies and the ecological use of natural resources. Once these initial projects have spawned viable commercial products, the venture company plans to reinvest the profits in new projects. This venture company is not a purely economic structure; it has also been designed to promote a common economic space among the three participating countries. Kazakhstan recognises the role civil society initiatives have to address the consequences of the COVID-19 crisis. Four of the five Central Asian republics have also been involved in a project launched by the European Union in September 2013, IncoNet CA. The aim of this project is to encourage Central Asian countries to participate in research projects within Horizon 2020, the European Union's eighth research and innovation funding programme.
These research projects focus on three societal challenges considered to be of mutual interest to both the European Union and Central Asia, namely climate change, energy and health. IncoNet CA builds on the experience of earlier projects which involved other regions, such as Eastern Europe, the South Caucasus and the Western Balkans. IncoNet CA focuses on twinning research facilities in Central Asia and Europe. It involves a consortium of partner institutions from Austria, the Czech Republic, Estonia, Germany, Hungary, Kazakhstan, Kyrgyzstan, Poland, Portugal, Tajikistan, Turkey and Uzbekistan. In May 2014, the European Union launched a 24-month call for project applications from twinned institutions – universities, companies
(died 768), antipope from 767 to 768
Constantine II of Scotland (c. 878 – 952), King of Scotland 900–942 or 943
Constantine II, Prince of Armenia (died 1129)
Constantine II of Cagliari (c. 1100 – 1163)
Constantine II of Torres (died 1198), called de Martis, giudice of Logudoro
Constantine II the Woolmaker (died 1322), Catholicos of the Armenian Apostolic Church
Constantine II, King of Armenia (died 1344), first Latin King of Armenian Cilicia of the Lusignan dynasty
Constantine II of Bulgaria (early 1370s–1422), last emperor of Bulgaria 1396–1422
Eskender (1471–1494), Emperor of Ethiopia sometimes known
Trapani, Sicily, the dish is still made to the medieval recipe of Andalusian author Ibn Razin al-Tujibi. Ligurian families that moved from Tabarka to Sardinia brought the dish with them to Carloforte in the 18th century. Known in France since the 16th century, it was brought into French cuisine at the beginning of the 20th century, via the French colonial empire and the Pieds-Noirs of Algeria. Preparation Couscous is traditionally made from the hard part of the durum, the part of the grain that resisted the grinding of the millstone. The semolina is sprinkled with water and rolled with the hands to form small pellets, sprinkled with dry flour to keep them separate, and then sieved. Any pellets that are too small to be finished granules of couscous fall through the sieve and are again rolled and sprinkled with dry semolina and rolled into pellets. This labor-intensive process continues until all the semolina has been formed into tiny granules of couscous. In the traditional method of preparing couscous, groups of people come together to make large batches over several days, which are then dried in the sun and used for several months. Handmade couscous may need to be re-hydrated as it is prepared; this is achieved by a process of moistening and steaming over stew until the couscous reaches the desired light and fluffy consistency. In some regions couscous is made from farina or coarsely ground barley or pearl millet. In modern times, couscous production is largely mechanized, and the product is sold in markets around the world. This couscous can be sautéed before it is cooked in water or another liquid. Properly cooked couscous is light and fluffy, not gummy or gritty. Traditionally, North Africans use a food steamer (called ataseksut in Berber, a kiskas in Arabic or a couscoussier in French). The base is a tall metal pot shaped rather like an oil jar in which the meat and vegetables are cooked as a stew. On top of the base, a steamer sits where the couscous is cooked, absorbing the flavours from the stew. The lid to the steamer has holes around its edge so steam can escape. It is also possible to use a pot with a steamer insert. If the holes are too big, the steamer can be lined with damp cheesecloth. There is little archaeological evidence of early diets including couscous, possibly because the original couscoussier was probably made from organic materials that could not survive extended exposure to the elements. The couscous that is sold in most Western supermarkets has been pre-steamed and dried. It is typically prepared by adding 1.5 measures of boiling water or stock to each measure of couscous, then leaving it covered tightly for about five minutes (a simple worked example of this ratio is given below). Pre-steamed couscous takes less time to prepare than regular couscous, most dried pasta, or dried grains (such as rice). Packaged sets of quick-preparation couscous and canned vegetables, and generally meat, are routinely sold in European grocery stores and supermarkets. Couscous is widely consumed in France, where it was introduced by Maghreb immigrants and voted the third most popular dish in a 2011 survey. Recognition In December 2020, Algeria, Mauritania, Morocco and Tunisia obtained official recognition for the knowledge, know-how and practices pertaining to the production and consumption of couscous on the Representative List of the Intangible Cultural Heritage of Humanity by UNESCO. The joint submission by the four countries was hailed as an "example of international cooperation". 
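As a minimal illustration of the 1.5-to-1 liquid-to-couscous ratio described in the preparation notes above, the following Python sketch scales the quantities for an arbitrary amount of pre-steamed couscous. The ratio and the roughly five-minute covered rest come from the text; the function name, the millilitre units and the example figures are assumptions made purely for illustration.

def liquid_for_couscous(couscous_volume_ml: float, ratio: float = 1.5) -> float:
    """Return the volume of boiling water or stock (in ml) for a given
    volume of pre-steamed couscous, using the 1.5:1 ratio noted above.
    The millilitre units are an illustrative assumption, not from the source."""
    return ratio * couscous_volume_ml

# Example: 200 ml of dry pre-steamed couscous calls for about 300 ml of
# boiling liquid, followed by roughly five minutes covered off the heat.
print(liquid_for_couscous(200))  # -> 300.0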
Local variations Couscous proper is about 2 mm in diameter, but there also exists a larger variety (3 mm or more) that is known as Berkoukes, as well as an ultra-fine version (around 1 mm). In Morocco, Algeria, Tunisia, and Libya, it is generally served with vegetables (carrots, potatoes, and turnips) cooked in a spicy or mild broth or stew, and some meat (generally, chicken, lamb or mutton). Algeria and Morocco Algerian couscous can also include tomatoes and legumes. Moroccan couscous uses saffron. In both Algeria and Morocco it may be served at the end of a meal or by itself in a dish called "sfouff". Along the Mediterranean coast of Algeria and Morocco, an ultra-fine ( in diameter) grade of couscous, known as seffa or mesfuf, is also produced. It can also be served as a dessert, for which the couscous is usually steamed several times until it is fluffy and pale in color. It is then sprinkled with almonds, cinnamon and sugar. Traditionally, this dessert is served with milk perfumed with orange flower water, or it can be served plain with buttermilk in a bowl as a cold light soup for supper. Tunisia In Tunisia, couscous is mostly made spicy with harissa sauce and is commonly served with any dish, including lamb, fish, seafood, beef and sometimes, in southern regions, camel. Fish couscous is a Tunisian specialty and can also be made with octopus, squid or other seafood in hot, red, spicy sauce. Libya In Libya, it is mostly served with lamb, but also camel, and rarely beef, in Tripoli and the western parts of Libya, but not during official ceremonies or weddings. Another way to eat couscous is as a dessert; it is prepared with dates, sesame, and pure honey, and locally referred to as maghrood. Mauritania In Mauritania, the couscous uses large wheat grains (mabroum) and is darker than the yellow couscous of Morocco. It is cooked with lamb, beef, or camel meat together with vegetables, primarily onion, tomato and carrots, then mixed with a sauce and served with ghee, locally known as dhen. Similar foods Couscous is made from crushed wheat flour rolled into its constituent granules or pearls, making it distinct from pasta, even pasta such as orzo and risoni of similar size, which is made from ground
meantime, Constantius had been receiving disturbing reports regarding the actions of his cousin Gallus. Possibly as a result of these reports, Constantius concluded a peace with the Alamanni and traveled to Mediolanum (Milan). In Mediolanum, Constantius first summoned Ursicinus, Gallus’ magister equitum, for reasons that remain unclear. Constantius then summoned Gallus and Constantina. Although Gallus and Constantina complied with the order at first, when Constantina died in Bithynia, Gallus began to hesitate. However, after some convincing by one of Constantius’ agents, Gallus continued his journey west, passing through Constantinople and Thrace to Poetovio (Ptuj) in Pannonia. In Poetovio, Gallus was arrested by the soldiers of Constantius under the command of Barbatio. Gallus was then moved to Pola and interrogated. Gallus claimed that it was Constantina who was to blame for all the trouble while he was in charge of the eastern provinces. This angered Constantius so greatly that he immediately ordered Gallus' execution. He soon changed his mind, however, and rescinded the order. Unfortunately for Gallus, this second order was delayed by Eusebius, one of Constantius' eunuchs, and Gallus was executed. Religious issues Paganism Laws dating from the 350s prescribed the death penalty for those who performed or attended pagan sacrifices, and for the worshipping of idols. Pagan temples were shut down, and the Altar of Victory was removed from the Senate meeting house. There were also frequent episodes of ordinary Christians destroying, pillaging and desecrating many ancient pagan temples, tombs and monuments. Paganism was still popular among the population at the time. The emperor's policies were passively resisted by many governors and magistrates. In spite of this, Constantius never made any attempt to disband the various Roman priestly colleges or the Vestal Virgins. He never acted against the various pagan schools. At times, he actually made some effort to protect paganism. In fact, he even ordered the election of a priest for Africa. Also, he remained pontifex maximus and was deified by the Roman Senate after his death. His relative moderation toward paganism is reflected by the fact that it was over twenty years after his death, during the reign of Gratian, that any pagan senator protested his treatment of their religion. Christianity Although often considered an Arian, Constantius ultimately preferred a third, compromise version that lay somewhere in between Arianism and the Nicene Creed, retrospectively called Semi-Arianism. During his reign he attempted to mold the Christian church to follow this compromise position, convening several Christian councils. "Unfortunately for his memory the theologians whose advice he took were ultimately discredited and the malcontents whom he pressed to conform emerged victorious," writes the historian A.H.M. Jones. "The great councils of 359–60 are therefore not reckoned ecumenical in the tradition of the church, and Constantius II is not remembered as a restorer of unity, but as a heretic who arbitrarily imposed his will on the church." Judaism Judaism faced some severe restrictions under Constantius, who seems to have followed an anti-Jewish policy in line with that of his father. This included edicts limiting the ownership of slaves by Jewish people and banning marriages between Jews and Christian women. Later edicts sought to discourage conversions from Christianity to Judaism by confiscating the apostate's property. 
However, Constantius' actions in this regard may not have been so much to do with Jewish religion as with Jewish business—apparently, privately owned Jewish businesses were often in competition with state-owned businesses. As a result, Constantius may have sought to provide an advantage to state-owned businesses by limiting the skilled workers and slaves available to Jewish businesses. Further crises On 11 August 355, the magister militum Claudius Silvanus revolted in Gaul. Silvanus had surrendered to Constantius after the Battle of Mursa Major. Constantius had made him magister militum in 353 with the purpose of blocking the German threats, a feat that Silvanus achieved by bribing the German tribes with the money he had collected. A plot organized by members of Constantius' court led the emperor to recall Silvanus. After Silvanus revolted, he received a letter from Constantius recalling him to Milan, but which made no reference to the revolt. Ursicinus, who was meant to replace Silvanus, bribed some troops, and Silvanus was killed. Constantius realised that too many threats still faced the Empire, however, and he could not possibly handle all of them by himself. So on 6 November 355, he elevated his last remaining male relative, Julian, to the rank of caesar. A few days later, Julian was married to Helena, the last surviving sister of Constantius. Constantius soon sent Julian off to Gaul. Constantius spent the next few years overseeing affairs in the western part of the empire primarily from his base at Mediolanum. In 357 he visited Rome for the only time in his life. The same year, he forced Sarmatian and Quadi invaders out of Pannonia and Moesia Inferior, then led a successful counter-attack across the Danube. In the winter of 357–58, Constantius received ambassadors from Shapur II who demanded that Rome restore the lands surrendered by Narseh. Despite rejecting these terms, Constantius tried to avert war with the Sassanid Empire by sending two embassies to Shapur II. Shapur II nevertheless launched another invasion of Roman Mesopotamia. In 360, when news reached Constantius that Shapur II had destroyed Singara (Sinjar), and taken Kiphas (Hasankeyf), Amida (Diyarbakır), and Ad Tigris (Cizre), he decided to travel east to face the re-emergent threat. Usurpation of Julian and crises in the east In the meantime, Julian had won some victories against the Alamanni, who had once again invaded Roman Gaul. However, when Constantius requested reinforcements from Julian's army for the eastern campaign, the Gallic legions revolted and proclaimed Julian augustus. On account of the immediate Sassanid threat, Constantius was unable to directly respond to his cousin's usurpation, other than by sending missives in which he tried to convince Julian to resign the title of augustus and be satisfied with that of caesar. By 361, Constantius saw no alternative but to face the usurper with force, and yet the threat of the Sassanids remained. Constantius had already spent part of early 361 unsuccessfully attempting to re-take the fortress of Ad Tigris. After a time he had withdrawn to Antioch to regroup and prepare for a confrontation with Shapur II. The campaigns of the previous year had inflicted heavy losses on the Sassanids, however, and they did not attempt another round of campaigns that year. This temporary respite in hostilities allowed Constantius to turn his full attention to facing Julian. Death Constantius immediately gathered his forces and set off west. 
However, by the time he reached Mopsuestia in Cilicia, it was clear that he was fatally ill and would not survive to face Julian. The sources claim that realising his death was near, Constantius had himself baptised by Euzoius, the Semi-Arian bishop of Antioch, and then declared that Julian was his rightful successor. Constantius II died of fever on 3 November 361. Like Constantine the Great, he was buried in the Church of the Holy Apostles, in a porphyry sarcophagus that was described in the 10th century by Constantine VII Porphyrogenitus in the De Ceremoniis.
Amida. Constantius promptly attacked Narses, and after suffering minor setbacks defeated and killed Narses at the Battle of Narasara. Constantius captured Amida and initiated a major refortification of the city, enhancing the city's circuit walls and constructing large towers. He also built a new stronghold in the hinterland nearby, naming it Antinopolis. Augustus in the East In early 337, Constantius hurried to Constantinople after receiving news that his father was near death. After Constantine died, Constantius buried him with lavish ceremony in the Church of the Holy Apostles. Soon after his father's death Constantius supposedly ordered a massacre of his relatives descended from the second marriage of his paternal grandfather Constantius Chlorus (also known as Constantius I), though the details are unclear. Eutropius, writing between 350 and 370, states that Constantius merely sanctioned “the act, rather than commanding it”. The massacre killed two of Constantius' uncles and six of his cousins, including Hannibalianus and Dalmatius, rulers of Pontus and Moesia respectively. The massacre left Constantius, his older brother Constantine II, his younger brother Constans, and three cousins Gallus, Julian and Nepotianus as the only surviving male relatives of Constantine the Great. Soon after, Constantius met his brothers in Pannonia at Sirmium to formalize the partition of the empire. Constantius received the eastern provinces, including Constantinople, Thrace, Asia Minor, Syria, Egypt, and Cyrenaica; Constantine received Britannia, Gaul, Hispania, and Mauretania; and Constans, initially under the supervision of Constantine II, received Italy, Africa, Illyricum, Pannonia, Macedonia, and Achaea. Constantius then hurried east to Antioch to resume the war with Persia. While Constantius was away from the eastern frontier in early 337, King Shapur II assembled a large army, which included war elephants, and launched an attack on Roman territory, laying waste to Mesopotamia and putting the city of Nisibis under siege. Despite initial success, Shapur lifted his siege after his army missed an opportunity to exploit a collapsed wall. When Constantius learned of Shapur's withdrawal from Roman territory, he prepared his army for a counter-attack. Constantius repeatedly defended the eastern border against invasions by the aggressive Sassanid Empire under Shapur. These conflicts were mainly limited to Sassanid sieges of the major fortresses of Roman Mesopotamia, including Nisibis (Nusaybin), Singara, and Amida (Diyarbakir). Although Shapur seems to have been victorious in most of these confrontations, the Sassanids were able to achieve little. However, the Romans won a decisive victory at the Battle of Narasara, killing Shapur's brother, Narses. Ultimately, Constantius was able to push back the invasion, and Shapur failed to make any significant gains. Meanwhile, Constantine II desired to retain control of Constans' realm, leading the brothers into open conflict. Constantine was killed in 340 near Aquileia during an ambush. As a result, Constans took control of his deceased brother's realms and became sole ruler of the Western two-thirds of the empire. This division lasted until 350, when Constans was assassinated by forces loyal to the usurper Magnentius. War against Magnentius As the only surviving son of Constantine the Great, Constantius felt that the position of emperor was his alone, and he determined to march west to fight the usurper, Magnentius. 
However, feeling that the east still required some sort of imperial presence, he elevated his cousin Constantius Gallus to caesar of the eastern provinces. As an extra measure to ensure the loyalty of his cousin, he married the elder of his two sisters, Constantina, to him. Before facing Magnentius, Constantius first came to terms with Vetranio, a loyal general in Illyricum who had recently been acclaimed emperor by his soldiers. Vetranio immediately sent letters to Constantius pledging his loyalty, which Constantius may have accepted simply in order to stop Magnentius from gaining more support. These events may have been spurred by the action of Constantina, who had since traveled east to marry Gallus. Constantius subsequently sent Vetranio the imperial diadem and acknowledged the general's new position as augustus. However, when Constantius arrived, Vetranio willingly resigned his position and accepted Constantius’ offer of a comfortable retirement in Bithynia. In 351, Constantius clashed with Magnentius in Pannonia with a large army. The ensuing Battle of Mursa Major was one of the largest and bloodiest battles ever between two Roman armies. The result was a victory for Constantius, but a costly one. Magnentius survived the battle and, determined to fight on, withdrew into northern Italy. Rather than pursuing his opponent, however, Constantius turned his attention to securing the Danubian border, where he spent the early months of 352 campaigning against the Sarmatians along the middle Danube. After achieving his aims, Constantius advanced on Magnentius in Italy. This action led the cities of Italy to switch their allegiance to him and eject the usurper's garrisons. Again, Magnentius withdrew, this time to southern Gaul. In 353, Constantius and Magnentius met for the final time at the Battle of Mons Seleucus in southern Gaul, and again Constantius emerged the victor. Magnentius, realizing the futility of continuing his position, committed suicide on 10 August 353. Sole ruler of the empire Constantius spent much of the rest of 353 and early 354 on campaign against the Alamanni on the Danube frontier. The campaign was successful and raiding by the Alamanni ceased temporarily. 
Dalmatius, Constantine demanded that Constans hand over the African provinces, which he agreed to do in order to maintain a fragile peace. Soon, however, they began quarreling over which parts of the African provinces belonged to Carthage and Constantine, and which parts belonged to Italy and Constans. This led to growing tensions between the two brothers, which were only heightened by Constans finally coming of age and Constantine refusing to give up his guardianship. In 340 Constantine II invaded Italy. Constans, at that time in Dacia, detached and sent a select and disciplined body of his Illyrian troops, stating that he would follow them in person with the remainder of his forces. Constantine was eventually trapped at Aquileia, where he died, leaving Constans to inherit all of his brother's former territories – Hispania, Britannia and Gaul. Constans began his reign in an energetic fashion. In 341–342, he led a successful campaign against the Franks, and in the early months of 343 he visited Britain. The source for this visit, Julius Firmicus Maternus, does not provide a reason, but the quick movement and the danger involved in crossing the English Channel in the winter months suggests it was in response to a military emergency, possibly to repel the Picts and Scots. Regarding religion, Constans was tolerant of Judaism and promulgated an edict banning pagan sacrifices in 341. He suppressed Donatism in Africa and supported Nicene orthodoxy against Arianism, which was championed by his brother Constantius. Although Constans called the Council of Serdica in 343 to settle the conflict, it was a complete failure, and by 346 the two emperors were on the point of open warfare over the dispute. The conflict was only resolved by an interim agreement which allowed each emperor to support their preferred clergy within their own spheres of influence. Homosexuality Surviving sources, possibly influenced by the propaganda of Magnentius's faction, accuse Constans of misrule and of homosexuality. The Roman historian Eutropius says Constans "indulged in great vices," in reference to his homosexuality, and Aurelius Victor stated that Constans had a reputation for scandalous behaviour with "handsome barbarian hostages." Nevertheless, Constans did sponsor a decree alongside Constantius II that ruled that marriage based on "unnatural" sex should be punished meticulously. However, according to John Boswell, it was likely that Constans promulgated the legislation under pressure from the growing band of Christian leaders, in an attempt to placate public outrage at his own perceived indecencies. Death In the final years of his reign, Constans developed a reputation for cruelty and misrule. Dominated by favourites and openly preferring his select bodyguard, he lost the support of the legions. In 350, the general Magnentius declared himself emperor at Augustodunum (Autun) with the support of the troops on the Rhine frontier and, later, the western provinces
Saecular Games (), in the reign of Philip the Arab (). Philip may also have raised his son to co-augustus at the start of the anniversary year. Rome had been calculated by the 1st-century BC Latin author Marcus Terentius Varro to have been founded by Romulus in 753 BC. Byzantium was thought to have been founded in 667 BC by Byzas, according to the reckoning derived from the History of the Peloponnesian War by Thucydides, the 5th-century BC Greek historian and used by Constantine's court historian Eusebius of Caesarea in his Chronicon. Augustus With Constantine's death in 337, Constans and his two brothers, Constantine II and Constantius II, divided the Roman world among themselves and disposed of virtually all relatives who could possibly have a claim to the throne. The army proclaimed them Augusti on 9 September 337. Almost immediately, Constans was required to deal with a Sarmatian invasion in late 337, in which he won a resounding victory. Constans was initially under the guardianship of Constantine II. The original settlement assigned Constans the praetorian prefecture of Italy, which included Northern Africa. Constans was unhappy with this division, so the brothers met at Viminacium in 338 to revise the boundaries. Constans managed to extract the prefecture of Illyricum and the diocese of Thrace, provinces that were originally to be ruled by his cousin Dalmatius, as per Constantine I's proposed division after his death. Constantine II soon complained that he had not received the amount of territory that was his due as the eldest son, and was annoyed that Constans had received Thrace and Macedonia after the death of Dalmatius. 
conference tournament, and also resulted in a recommendation by the NCAA that conferences and tournaments not allow pyramids two and one half levels high or higher, or the stunt known as the basket toss, during the rest of the men's and women's basketball season. On July 11, 2006, the bans were made permanent by the AACCA rules committee: The committee unanimously voted for sweeping revisions to cheerleading safety rules, the most significant of which restricts specific upper-level skills during basketball games. Basket tosses, 2½ high pyramids, one-arm stunts, stunts that involve twisting or flipping, and twisting tumbling skills may be performed only during halftime and post-game on a matted surface and are prohibited during game play or time-outs. Types of teams in the United States today School-sponsored Most American elementary schools, middle schools, high schools, and colleges have organized cheerleading squads. Some colleges even offer cheerleading scholarships for students. A school cheerleading team may compete locally, regionally, or nationally, but their main purpose is typically to cheer for sporting events and encourage audience participation. Cheerleading is quickly becoming a year-round activity, starting with tryouts during the spring semester of the preceding school year. Teams may attend organized summer cheerleading camps and practices to improve skills and create routines for competition. In addition to supporting their schools’ football or other sports teams, student cheerleaders may compete with a recreational-style routine at competitions year-round. Elementary school In more recent years, it has become increasingly common for elementary schools to have an organized cheerleading team. These teams introduce younger children to the sport and to leading a crowd, and because young children learn quickly, tumbling skills often come easily at this age. Middle school Middle school cheerleading evolved shortly after high school squads were created and is set at the district level. In middle school, cheerleading squads serve the same purpose, but often follow a modified set of rules from high school squads with possible additional rules. Squads can cheer for basketball teams, football teams, and other sports teams in their school. Squads may also perform at pep rallies and compete against other local schools from the area. Cheerleading in middle school sometimes can be a two-season activity: fall and winter. However, many middle school cheer squads will go year-round like high school squads. Middle school cheerleaders use the same cheerleading movements as their older counterparts, yet may perform less extreme stunts and tumbling elements, depending on the rules in their area. High school In high school, there are usually two squads per school: a varsity squad and a junior varsity squad. High school cheerleading contains aspects of school spirit as well as competition. These squads have become part of a year-round cycle, starting with tryouts in the spring and continuing with practice, cheering on teams in the fall and winter, and participation in cheerleading competitions. Most squads practice at least three days a week for about two hours each practice during the summer. Many teams also attend separate tumbling sessions outside of practice. During the school year, cheerleading is usually practiced five to six days a week. During competition season, practice often expands to seven days a week, sometimes twice a day. 
The school spirit aspect of cheerleading involves cheering, supporting, and "hyping up" the crowd at football games, basketball games, and even at wrestling meets. Along with this, cheerleaders usually perform at pep rallies, and bring school spirit to other students. In May 2009, the National Federation of State High School Associations released the results of their first true high school participation study. They estimated that the number of high school cheerleaders from public high schools is around 394,700. Various cheerleading organizations put on competitions, including major state and regional competitions. Many high schools will often host cheerleading competitions, bringing in IHSA judges. The regional competitions are qualifiers for national competitions, such as the UCA (Universal Cheerleaders Association) nationals, held in Orlando, Florida every year. Many teams have a professional choreographer who choreographs their routine in order to ensure they are not breaking rules or regulations and to give the squad creative elements. College Most American universities have a cheerleading squad to cheer for football, basketball, volleyball, wrestling, and soccer. Most college squads tend to be larger coed teams, although in recent years, all-girl squads and smaller college squads have increased rapidly. Cheerleading is not recognized by the NCAA, NAIA, and NJCAA as athletics; therefore, there are few to no scholarships offered to athletes wanting to pursue cheerleading at the collegiate level. However, some community colleges and universities offer scholarships directly from the program or sponsorship funds. Some colleges offer scholarships for an athlete's talents, academic excellence, and/or involvement in community events. College squads perform more difficult stunts which include multi-level pyramids, as well as flipping and twisting basket tosses. Not only do college cheerleaders cheer on the other sports at their university, but many teams also compete with other schools at either UCA College Nationals or NCA College Nationals. This requires the teams to choreograph a 2-minute-and-30-second routine that includes elements of jumps, tumbling, stunting, basket tosses, pyramids, and a crowd involvement section. Winning one of these competitions is a very prestigious accomplishment, and is seen as another national title for most schools. Youth leagues and athletic associations Organizations that sponsor youth cheer teams usually sponsor either youth league football or basketball teams as well. This allows for the two, under the same sponsor, to be intermingled. Both teams have the same mascot name and the cheerleaders will perform at their football or basketball games. Examples of such sponsors include Pop Warner, American Youth Football, and the YMCA. The purpose of these squads is primarily to support their associated football or basketball players, but some teams do compete at local or regional competitions. The Pop Warner Association even hosts a national championship each December for teams in their program who qualify. All-star or club cheerleading "All-star" or club cheerleading differs from school or sideline cheerleading because all-star teams focus solely on performing a competition routine and not on leading cheers for other sports teams. All-star cheerleaders are members of a privately owned gym or club which they typically pay dues or tuition to, similar to a gymnastics gym. 
During the early 1980s, cheerleading squads not associated with a school or sports league, whose main objective was competition, began to emerge. The first organization to call itself all-star was the Q94 Rockers from Richmond, Virginia, founded in 1982. All-star teams competing prior to 1987 were placed into the same divisions as teams that represented schools and sports leagues. In 1986, the National Cheerleaders Association (NCA) addressed this situation by creating a separate division for teams lacking a sponsoring school or athletic association, calling it the All-Star Division and debuting it at their 1987 competitions. As the popularity of this type of team grew, more and more of them were formed, attending competitions sponsored by many different organizations and companies, each using its own set of rules, regulations, and divisions. This situation became a concern to coaches and gym owners, as the inconsistencies caused coaches to keep their routines in a constant state of flux, detracting from time that could be better utilized for developing skills and providing personal attention to their athletes. More importantly, because the various companies were constantly vying for a competitive edge, safety standards had become more and more lax. In some cases, unqualified coaches and inexperienced squads were attempting dangerous stunts as a result of these expanded sets of rules. The United States All Star Federation (USASF) was formed in 2003 by the competition companies to act as the national governing body for all-star cheerleading and to create a standard set of rules and judging criteria to be followed by all competitions sanctioned by the Federation. Eager to grow the sport and create more opportunities for high-level teams, the USASF hosted the first Cheerleading Worlds on April 24, 2004. At the same time, cheerleading coaches from all over the country organized themselves for the same rule-making purpose, calling themselves the National All Star Cheerleading Coaches Congress (NACCC). In 2005, the NACCC was absorbed by the USASF to become their rule-making body. In late 2006, the USASF facilitated the creation of the International All-Star Federation (IASF), which now governs club cheerleading worldwide. All-star cheerleading, as sanctioned by the USASF, involves a squad of 5–36 females and males. All-star cheerleaders are placed into divisions, which are grouped based upon age, size of the team, gender of participants, and ability level. The age groups vary from under 4 years of age to 18 years and over. The squad prepares year-round for many different competition appearances, but they actually perform only for up to 2½ minutes during their team's routine. The number of competitions a team participates in varies from team to team, but generally, most teams tend to participate in six to ten competitions a year. These competitions include locals or regionals, which normally take place in school gymnasiums or local venues; nationals, hosted in large venues all around the U.S.; and the Cheerleading Worlds, which takes place at Walt Disney World in Orlando, Florida. During a competition routine, a squad performs carefully choreographed stunting, tumbling, jumping, and dancing to their own custom music. Teams create their routines to an eight-count system and apply that to the music so that the team members execute the elements with precise timing and synchronization. 
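To make the eight-count system more concrete, the short Python sketch below estimates how many eight-counts a choreographer has to fill in a routine of a given length at a given music tempo. The 2½-minute routine length comes from the passage above; the 144 beats-per-minute tempo and the function itself are hypothetical, included only to illustrate the counting scheme.

def eight_counts(routine_seconds: float, beats_per_minute: float) -> float:
    """Estimate the number of eight-counts (blocks of eight beats) in a routine.
    The tempo passed in is an assumption for illustration, not a standard."""
    total_beats = beats_per_minute * (routine_seconds / 60.0)
    return total_beats / 8.0

# Example: a 2.5-minute (150-second) routine at a hypothetical 144 BPM
# contains 360 beats, i.e. 45 eight-counts to choreograph.
print(eight_counts(150, 144))  # -> 45.0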
All-star cheerleaders compete at competitions hosted by private event production companies, the foremost of these being Varsity Spirit. Varsity Spirit is the parent company for many subsidiaries including the National Cheerleaders Association, the Universal Cheerleaders Association, AmeriCheer, Allstar Challenge, and JamFest, among others. Each separate company or subsidiary typically hosts its own local and national-level competitions. This means that many gyms within the same area could be state and national champions for the same year and never have competed against each other. Currently, there is no system in place that awards only one state or national title. Judges at a competition watch closely for illegal skills from the group or any individual member. Here, an illegal skill is something that is not allowed in that division due to difficulty or safety restrictions. They look out for deductions, or things that go wrong, such as a dropped stunt or a tumbler who doesn’t stick a landing. More generally, judges look at the difficulty and execution of jumps, stunts and tumbling, synchronization, creativity, the sharpness of the motions, showmanship, and overall routine execution. If a level 6 or 7 team places high enough at selected USASF/IASF-sanctioned national competitions, they could earn a place at the Cheerleading Worlds and compete against teams from all over the world, as well as receive money for placing. For elite-level cheerleaders, the Cheerleading Worlds is the highest level of competition to which they can aspire, and winning a world championship title is an incredible honor. Professional Professional cheerleaders and dancers cheer for sports such as football, basketball, baseball, wrestling, or hockey. There are only a small handful of professional cheerleading leagues around the world; some professional leagues include the NBA Cheerleading League, the NFL Cheerleading League, the CFL Cheerleading League, the MLS Cheerleading League, the MLB Cheerleading League, and the NHL Ice Girls. Although professional cheerleading leagues exist in multiple countries, there are no Olympic teams. In addition to cheering at games and competing, professional cheerleaders, as teams, often take part in philanthropy and charity work, modeling, motivational speaking, television performances, and advertising. Injuries and accidents Cheerleading carries the highest rate of catastrophic injuries to female athletes in sports. Of the United States' 2.9 million female high school athletes, only 3% are cheerleaders, yet cheerleading accounts for nearly 65% of all catastrophic injuries in girls' high school athletics. In data covering the 1982–83 academic year through the 2018–19 academic year in the US, the rate of serious, direct traumatic injury per 100,000 participants was 1.68 for female cheerleaders at the high school level, the highest for all high school sports surveyed. (table 9a) The college rate could not be determined, as the total number of collegiate cheerleaders was unknown, but the total number of traumatic, direct catastrophic injuries over this period was 33 (28 female, 5 male), higher than all sports at this level aside from football. (table 5a) Another study found that between 1982 and 2007, there were 103 fatal, disabling, or serious injuries recorded among female high school athletes, with the vast majority (67) occurring in cheerleading. The main source of injuries comes from stunting, also known as pyramids. 
These stunts are performed at games and pep rallies, as well as competitions. Sometimes competition routines are focused solely around the use of difficult and risky stunts. These stunts usually include a flyer (the person on top), along with one or two bases (the people on the bottom), and one or two spotters in the front and back on the bottom. The most common cheerleading-related injury is a concussion, and 96% of those concussions are stunt-related. Other injuries include sprained ankles, sprained wrists, back injuries, head injuries (sometimes concussions), broken arms, elbow injuries, knee injuries, broken noses, and broken collarbones. Sometimes, however, injuries can be as serious as whiplash, broken necks, broken vertebrae, and death. The journal Pediatrics has reportedly said that the number of cheerleaders suffering from broken bones, concussions, and sprains has increased by over 100 percent between the years of 1990 and 2002, and that in 2001, there were 25,000 hospital visits reported for cheerleading injuries dealing with the shoulder, ankle, head, and neck. Meanwhile, in the US, cheerleading accounted for 65.1% of all major physical injuries to high school females, and for 66.7% of major injuries to college students due to physical activity from 1982 to 2007, with 22,900 minors being admitted to hospital with cheerleading-related injuries in 2002. The risks of cheerleading were highlighted by the death of Lauren Chang. Chang died on April 14, 2008, after a competition in which her teammate had kicked her so hard in the chest that her lungs collapsed. Cheerleading (for both girls and boys) was one of the sports studied in the Pediatric Injury Prevention, Education and Research Program of the Colorado School of Public Health in 2009/10–2012/13. Data on cheerleading injuries is included in the report for 2012–13. Associations, federations, and organizations International Cheer Union (ICU): Established on April 26, 2004, the ICU is recognized by SportAccord as the world governing body of cheerleading and the authority on all matters relating to it. With participation from its 105 member national federations, reaching 3.5 million athletes globally, the ICU continues to serve as the unified voice for those dedicated to cheerleading's positive development around the world. Following a positive vote by the SportAccord General Assembly on May 31, 2013, in Saint Petersburg, the International Cheer Union (ICU) became SportAccord's 109th member, and SportAccord's 93rd international sports federation to join the international sports family. In accordance with the SportAccord statutes, the ICU is recognized as the world governing body of cheerleading and the authority
or time-outs. Types of teams in the United States today School-sponsored Most American elementary schools, middle schools, high schools, and colleges have organized cheerleading squads. Some colleges even offer cheerleading scholarships for students. A school cheerleading team may compete locally, regionally, or nationally, but their main purpose is typically to cheer for sporting events and encourage audience participation. Cheerleading is quickly becoming a year-round activity, starting with tryouts during the spring semester of the preceding school year. Teams may attend organized summer cheerleading camps and practices to improve skills and create routines for competition. In addition to supporting their schools’ football or other sports teams, student cheerleaders may compete with recreational-style routine at competitions year-round. Elementary school In far more recent years, it has become more common for elementary schools to have an organized cheerleading team. This is a great way to get younger children introduced to the sport and used to being crowd leaders. Also, with young children learning so much so quickly, tumbling can come very easy to a child in elementary school. Middle school Middle school cheerleading evolved shortly after high school squads were created and is set at the district level. In middle school, cheerleading squads serve the same purpose, but often follow a modified set of rules from high school squads with possible additional rules. Squads can cheer for basketball teams, football teams, and other sports teams in their school. Squads may also perform at pep rallies and compete against other local schools from the area. Cheerleading in middle school sometimes can be a two-season activity: fall and winter. However, many middle school cheer squads will go year-round like high school squads. Middle school cheerleaders use the same cheerleading movements as their older counterparts, yet may perform less extreme stunts and tumbling elements, depending on the rules in their area.. High school In high school, there are usually two squads per school: varsity and a junior varsity. High school cheerleading contains aspects of school spirit as well as competition. These squads have become part of a year-round cycle. Starting with tryouts in the spring, year-round practice, cheering on teams in the fall and winter, and participating in cheerleading competitions. Most squads practice at least three days a week for about two hours each practice during the summer. Many teams also attend separate tumbling sessions outside of practice. During the school year, cheerleading is usually practiced five- to six-days-a-week. During competition season, it often becomes seven days with practice twice a day sometimes. The school spirit aspect of cheerleading involves cheering, supporting, and "hyping up" the crowd at football games, basketball games, and even at wrestling meets. Along with this, cheerleaders usually perform at pep rallies, and bring school spirit to other students. In May 2009, the National Federation of State High School Associations released the results of their first true high school participation study. They estimated that the number of high school cheerleaders from public high schools is around 394,700. There are different cheerleading organizations that put on competitions; some of the major ones include state and regional competitions. Many high schools will often host cheerleading competitions, bringing in IHSA judges. 
The regional competitions are qualifiers for national competitions, such as the UCA (Universal Cheerleaders Association) in Orlando, Florida every year. Many teams have a professional choreographer that choreographs their routine in order to ensure they are not breaking rules or regulations and to give the squad creative elements. College Most American universities have a cheerleading squad to cheer for football, basketball, volleyball, wrestling, and soccer. Most college squads tend to be larger coed teams, although in recent years; all-girl squads and smaller college squads have increased rapidly. Cheerleading is not recognized by NCAA, NAIA, and NJCAA as athletics; therefore, there are few to no scholarships offered to athletes wanting to pursue cheerleading at the collegiate level. However, some community colleges and universities offer scholarships directly from the program or sponsorship funds. Some colleges offer scholarships for an athlete's talents, academic excellence, and/or involvement in community events. College squads perform more difficult stunts which include multi-level pyramids, as well as flipping and twisting basket tosses. Not only do college cheerleaders cheer on the other sports at their university, many teams at universities compete with other schools at either UCA College Nationals or NCA College Nationals. This requires the teams to choreograph a 2-minute and 30 second routine that includes elements of jumps, tumbling, stunting, basket tosses, pyramids, and a crowd involvement section. Winning one of these competitions is a very prestigious accomplishment, and is seen as another national title for most schools. Youth leagues and athletic associations Organizations that sponsor youth cheer teams usually sponsor either youth league football or basketball teams as well. This allows for the two, under the same sponsor, to be intermingled. Both teams have the same mascot name and the cheerleaders will perform at their football or basketball games. Examples of such sponsors include Pop Warner, American Youth Football, and the YMCA. The purpose of these squads is primarily to support their associated football or basketball players, but some teams do compete at local or regional competitions. The Pop Warner Association even hosts a national championship each December for teams in their program who qualify. All-star or club cheerleading ”All-star” or club cheerleading differs from school or sideline cheerleading because all-star teams focus solely on performing a competition routine and not on leading cheers for other sports teams. All-star cheerleaders are members of a privately owned gym or club which they typically pay dues or tuition to, similar to a gymnastics gym. During the early 1980s, cheerleading squads not associated with a school or sports league, whose main objective was competition, began to emerge. The first organization to call themselves all-stars were the Q94 Rockers from Richmond, Virginia, founded in 1982. All-star teams competing prior to 1987 were placed into the same divisions as teams that represented schools and sports leagues. In 1986, the National Cheerleaders Association (NCA) addressed this situation by creating a separate division for teams lacking a sponsoring school or athletic association, calling it the All-Star Division and debuting it at their 1987 competitions. 
As the popularity of this type of team grew, more and more of them were formed, attending competitions sponsored by many different organizations and companies, each using its own set of rules, regulations, and divisions. This situation became a concern to coaches and gym owners, as the inconsistencies caused coaches to keep their routines in a constant state of flux, detracting from time that could be better utilized for developing skills and providing personal attention to their athletes. More importantly, because the various companies were constantly vying for a competitive edge, safety standards had become more and more lax. In some cases, unqualified coaches and inexperienced squads were attempting dangerous stunts as a result of these expanded sets of rules. The United States All Star Federation (USASF) was formed in 2003 by the competition companies to act as the national governing body for all-star cheerleading and to create a standard set of rules and judging criteria to be followed by all competitions sanctioned by the Federation. Eager to grow the sport and create more opportunities for high-level teams, the USASF hosted the first Cheerleading Worlds on April 24, 2004. At the same time, cheerleading coaches from all over the country organized themselves for the same rule-making purpose, calling themselves the National All Star Cheerleading Coaches Congress (NACCC). In 2005, the NACCC was absorbed by the USASF to become its rule-making body. In late 2006, the USASF facilitated the creation of the International All-Star Federation (IASF), which now governs club cheerleading worldwide. Today, all-star cheerleading, as sanctioned by the USASF, involves a squad of 5–36 females and males. All-star cheerleaders are placed into divisions, which are grouped based upon age, size of the team, gender of participants, and ability level. The age groups vary from under 4 years of age to 18 years and over. The squad prepares year-round for many different competition appearances, but it actually performs for only up to 2½ minutes during its routine. The number of competitions a team participates in varies from team to team, but generally, most teams tend to participate in six to ten competitions a year. These competitions include locals or regionals, which normally take place in school gymnasiums or local venues; nationals, hosted in large venues all around the U.S.; and the Cheerleading Worlds, which takes place at Walt Disney World in Orlando, Florida. During a competition routine, a squad performs carefully choreographed stunting, tumbling, jumping, and dancing to its own custom music. Teams create their routines to an eight-count system and apply that to the music so that the team members execute the elements with precise timing and synchronization. All-star cheerleaders compete at competitions hosted by private event production companies, the foremost of these being Varsity Spirit. Varsity Spirit is the parent company for many subsidiaries, including the National Cheerleaders Association, the Universal Cheerleaders Association, AmeriCheer, Allstar Challenge, and JamFest, among others. Each separate company or subsidiary typically hosts its own local and national-level competitions. This means that many gyms within the same area could be state and national champions for the same year and never have competed against each other. Currently, there is no system in place that awards only one state or national title.
Judges at a competition watch closely for illegal skills from the group or any individual member. An illegal skill is one that is not allowed in that division due to difficulty or safety restrictions. Judges also look out for deductions, or things that go wrong, such as a dropped stunt or a tumbler who doesn't stick a landing. More generally, judges look at the difficulty and execution of jumps, stunts and tumbling, synchronization, creativity, the sharpness of the motions, showmanship, and overall routine execution. If a level 6 or 7 team places high enough at selected USASF/IASF-sanctioned national competitions, it can earn a place at the Cheerleading Worlds and compete against teams from all over the world, as well as receive money for placing. For elite-level cheerleaders, the Cheerleading Worlds is the highest level of competition to which they can aspire, and winning a world championship title is an incredible honor. Professional Professional cheerleaders and dancers cheer for sports such as football, basketball, baseball, wrestling, or hockey. There are only a small handful of professional cheerleading leagues around the world; some professional leagues include the NBA Cheerleading League, the NFL Cheerleading League, the CFL Cheerleading League, the MLS Cheerleading League, the MLB Cheerleading League, and the NHL Ice Girls. Although professional cheerleading leagues exist in multiple countries, there are no Olympic teams. In addition to cheering at games and competing, professional cheerleaders often also, as teams, do philanthropy and charity work, modeling, motivational speaking, television performances, and advertising. Injuries and accidents Cheerleading carries the highest rate of catastrophic injuries to girl athletes in sports. Of the United States' 2.9 million female high school athletes, only 3% are cheerleaders, yet cheerleading accounts for nearly 65% of all catastrophic injuries in girls' high school athletics. In data covering the 1982–83 academic year through the 2018–19 academic year in the US, the rate of serious, direct traumatic injury per 100,000 participants was 1.68 for female cheerleaders at the high school level, the highest for all high school sports surveyed (table 9a). The college rate could not be determined, as the total number of collegiate cheerleaders was unknown, but the total number of traumatic, direct catastrophic injuries over this period was 33 (28 female, 5 male), higher than for all sports at this level aside from football (table 5a). Another study found that between 1982 and 2007, there were 103 fatal, disabling, or serious injuries recorded among female high school athletes, with the vast majority (67) occurring in cheerleading. The main source of injuries is stunting, also known as pyramids. These stunts are performed at games and pep rallies, as well as competitions. Sometimes competition routines are focused solely around the use of difficult and risky stunts. These stunts usually include a flyer (the person on top), along with one or two bases (the people on the bottom), and one or two spotters in the front and back on the bottom. The most common cheerleading-related injury is a concussion, and 96% of those concussions are stunt-related. Other injuries include sprained ankles, sprained wrists, back injuries, head injuries (sometimes concussions), broken arms, elbow injuries, knee injuries, broken noses, and broken collarbones.
Sometimes, however, injuries can be as serious as whiplash, broken necks, broken vertebrae, and death. The journal Pediatrics has reportedly said that the number of cheerleaders suffering from broken bones, concussions, and sprains increased by over 100 percent between 1990 and 2002, and that in 2001, there were 25,000 hospital visits reported for cheerleading injuries dealing with the shoulder, ankle, head, and neck. Meanwhile, in the US, cheerleading accounted for 65.1% of all major physical injuries to high school females, and for 66.7% of major injuries to college students due to physical activity, from 1982 to 2007, with 22,900 minors being admitted to hospital with cheerleading-related injuries in 2002. The risks of cheerleading were highlighted by the death of Lauren Chang. Chang died on April 14, 2008, after competing in a competition where her teammate had kicked her so hard in the chest that her lungs collapsed. Cheerleading (for both girls and boys) was one of the sports studied in the Pediatric Injury Prevention, Education and Research Program of the Colorado School of Public Health in 2009/10–2012/13. Data on cheerleading injuries is included in the report for 2012–13. Associations, federations, and organizations International Cheer Union (ICU): Established on April 26, 2004, the ICU is recognized by SportAccord as the world governing body of cheerleading and the authority on all matters relating to it. With participation from its 105 member national federations, reaching 3.5 million athletes globally, the ICU continues to serve as the unified voice for those dedicated to cheerleading's positive development around the world. Following a positive vote by the SportAccord General Assembly on May 31, 2013, in Saint Petersburg, the International Cheer Union (ICU) became SportAccord's 109th member, and SportAccord's 93rd international sports federation to join the international sports family. In accordance with the SportAccord statutes, the ICU is recognized as the world governing body of cheerleading and the authority on all matters related to it. As of the 2016–17 season, the ICU has introduced a junior-age division (ages 12–16) to compete at the Cheerleading Worlds, because cheerleading now holds provisional status to become an Olympic sport. For cheerleading to one day be in the Olympics, there must be both a junior and a senior team that competes at the world championships. The first junior cheerleading team selected to become the junior national team was Eastside Middle School, located in Mount Washington, Kentucky; it will represent the United States in the inaugural junior division at the world championships. The ICU holds training seminars for judges and coaches, global events, and the World Cheerleading Championships. The ICU has also fully applied to the International Olympic Committee (IOC) and is compliant with the code set by the World Anti-Doping Agency (WADA). International Federation of Cheerleading (IFC): Established on July 5, 1998, the International Federation of Cheerleading (IFC) is a non-profit federation based in Tokyo, Japan, and is a world governing body of cheerleading, primarily in Asia. The IFC's objectives are to promote cheerleading worldwide, to spread knowledge of cheerleading, and to develop friendly relations among the member associations and federations.
USA Cheer The USA Federation for Sport Cheering (USA Cheer) was established in 2007 to serve as the national governing body for all types of cheerleading in the United States and is recognized by the ICU. "The USA Federation for Sport Cheering is a not-for-profit 501(c)(6) organization that was established in 2007 to serve as the National Governing Body for Sport Cheering in the United States. USA Cheer exists to serve the cheer community, including club cheering (all star) and traditional school based cheer programs, and the growing sport of STUNT. USA Cheer has three primary objectives: help grow and develop interest and participation in cheer throughout the United States; promote safety and safety education for cheer in the United States; and represent the United States of America in international cheer competitions." In March 2018, USA Cheer absorbed the American Association of Cheerleading Coaches and Advisors (AACCA) and now provides safety guidelines and training for all levels of cheerleading. Additionally, it organizes the USA National Team. Universal Cheerleaders Association: UCA is an association owned by the company brand Varsity. "Universal Cheerleaders Association was founded in 1974 by Jeff Webb to provide the best educational training for cheerleaders with the goal of incorporating high-level skills with traditional crowd leading. It was Jeff's vision that would transform cheerleading into the dynamic, athletic combination of high energy entertainment and school leadership that is loved by so many." "Today, UCA is the largest cheerleading camp company in the world, offering the widest array of dates and locations of any camp company. We also celebrate cheerleaders' incredible hard work and athleticism through the glory of competition at over 50 regional events across the country and our Championships at the Walt Disney World Resort® every year." "UCA has instilled leadership skills and personal confidence in more than 4.5 million athletes on and off the field while continuing to be the industry's leader for more than forty-five years." UCA has helped many cheerleaders get the training they need to succeed. Competitions and companies Asian Thailand Cheerleading Invitational (ATCI): Organised by the Cheerleading Association of Thailand (CAT) in accordance with the rules and regulations of the International Federation of Cheerleading (IFC). The ATCI has been held every year since 2009. At the ATCI, many teams from all over Thailand compete; they are joined by cheer squads from many invited neighbouring nations. Cheerleading Asia International Open Championships (CAIOC): Hosted by the Foundation of Japan Cheerleading Association (FJCA) in accordance with the rules and regulations of the IFC. The CAIOC has been a yearly event since 2007. Every year, many teams from all over Asia converge in Tokyo to compete. Cheerleading World Championships (CWC): Organised by the IFC. The IFC is a non-profit organisation founded in 1998 and based in Tokyo, Japan. The CWC has been held every two years since 2001, and to date, the competition has been held in Japan, the United Kingdom, Finland, Germany, and Hong Kong. The 6th CWC was held at the Hong Kong Coliseum on November 26–27, 2011. ICU World Championships: The International Cheer Union currently encompasses 105 national federations from countries across the globe. Every year, the ICU hosts the World Cheerleading Championship. This competition uses a more collegiate-style performance and rulebook.
Countries assemble and send only one team to represent them. National Cheerleading Championships (NCC): The NCC is the annual IFC-sanctioned national cheerleading competition in Indonesia organised by the Indonesian Cheerleading Community (ICC). Since NCC 2010, the event has been open to international competition, representing a significant step forward for the ICC. Teams from many countries such as Japan, Thailand, the Philippines, and Singapore participated in the groundbreaking event. Pan-American Cheerleading Championships (PCC): The PCC was held for the first time in 2009 in the city of Latacunga, Ecuador, and is the continental championship organised by the Pan-American Federation of Cheerleading (PFC). The PFC, operating under the umbrella of the IFC, is the non-profit continental body of cheerleading whose aim is to promote and develop cheerleading in the Americas. The PCC is a biennial event, and was held for the second time in Lima, Peru, in November 2010. USASF/IASF Worlds: Many United States cheerleading organizations formed and registered the not-for-profit entity the United States All Star Federation (USASF), along with the International All Star Federation (IASF), to support international club cheerleading and the World Cheerleading Club Championships. The first World Cheerleading Championships, or Cheerleading Worlds, were hosted by the USASF/IASF at the Walt Disney World Resort and taped for an ESPN global broadcast in 2004. This competition is only for All-Star/Club cheer. Only level 6 and 7 teams may attend, and they must receive a bid from a partner company. Varsity: Varsity Spirit, a branch of Varsity Brands, is a parent company which, over the past 10 years, has absorbed or bought most other cheerleading event production companies. The following is a list of subsidiary competition companies owned by Varsity Spirit:
All Star Challenge
All Star Championships
All Things Cheer
Aloha Spirit Championships
America's Best Championships
American Cheer and Dance
American Cheer Power
American Cheerleaders Association
AmeriCheer: AmeriCheer was founded in 1987 by Elizabeth Rossetti. It is the parent company to Ameridance and Eastern Cheer and Dance Association. In 2005, AmeriCheer became one of the founding members of the NLCC. This means that AmeriCheer events offer bids to The U.S. Finals: The Final Destination. The AmeriCheer InterNational Championship competition is held every March at the Walt Disney World Resort in Orlando, Florida.
Athletic Championships
Champion Cheer and Dance
Champion Spirit Group
Cheer LTD
CHEERSPORT: CHEERSPORT was founded in 1993 by all-star coaches who believed they could conduct competitions that would be better for the athletes, coaches, and spectators. Their main event is CHEERSPORT Nationals, held each February at the Georgia World Congress Center in Atlanta, Georgia.
CheerStarz
COA Cheer and Dance
Coastal Cheer and Dance
Encore Championships
GLCC Events
Golden State Spirit Association
The JAM Brands: The JAM Brands, headquartered in Louisville, Kentucky, provides products and services for the cheerleading and
friend in Cape Town, South Africa, where Frances had lived for most of her life, enclosing the photograph of herself with the fairies. On the back she wrote "It is funny, I never used to see them in Africa. It must be too hot for them there." The photographs became public in mid-1919, after Elsie's mother attended a meeting of the Theosophical Society in Bradford. The lecture that evening was on "fairy life", and at the end of the meeting Polly Wright showed the two fairy photographs taken by her daughter and niece to the speaker. As a result, the photographs were displayed at the society's annual conference in Harrogate, held a few months later. There they came to the attention of a leading member of the society, Edward Gardner. One of the central beliefs of theosophy is that humanity is undergoing a cycle of evolution, towards increasing "perfection", and Gardner recognised the potential significance of the photographs for the movement: Initial examinations Gardner sent the prints along with the original glass-plate negatives to Harold Snelling, a photography expert. Snelling's opinion was that "the two negatives are entirely genuine, unfaked photographs ... [with] no trace whatsoever of studio work involving card or paper models". He did not go so far as to say that the photographs showed fairies, stating only that "these are straight forward photographs of whatever was in front of the camera at the time". Gardner had the prints "clarified" by Snelling, and new negatives produced, "more conducive to printing", for use in the illustrated lectures he gave around the UK. Snelling supplied the photographic prints which were available for sale at Gardner's lectures. Author and prominent spiritualist Sir Arthur Conan Doyle learned of the photographs from the editor of the spiritualist publication Light. Doyle had been commissioned by The Strand Magazine to write an article on fairies for their Christmas issue, and the fairy photographs "must have seemed like a godsend" according to broadcaster and historian Magnus Magnusson. Doyle contacted Gardner in June 1920 to determine the background to the photographs, and wrote to Elsie and her father to request permission from the latter to use the prints in his article. Arthur Wright was "obviously impressed" that Doyle was involved, and gave his permission for publication, but he refused payment on the grounds that, if genuine, the images should not be "soiled" by money. Gardner and Doyle sought a second expert opinion from the photographic company Kodak. Several of the company's technicians examined the enhanced prints, and although they agreed with Snelling that the pictures "showed no signs of being faked", they concluded that "this could not be taken as conclusive evidence ... that they were authentic photographs of fairies". Kodak declined to issue a certificate of authenticity. Gardner believed that the Kodak technicians might not have examined the photographs entirely objectively, observing that one had commented "after all, as fairies couldn't be true, the photographs must have been faked somehow". The prints were also examined by another photographic company, Ilford, who reported unequivocally that there was "some evidence of faking". Gardner and Doyle, perhaps rather optimistically, interpreted the results of the three expert evaluations as two in favour of the photographs' authenticity and one against. 
Doyle also showed the photographs to the physicist and pioneering psychical researcher Sir Oliver Lodge, who believed the photographs to be fake. He suggested that a troupe of dancers had masqueraded as fairies, and expressed doubt as to their "distinctly 'Parisienne'" hairstyles. On October 4, 2018, the first two of the photographs, Alice and the Fairies and Iris and the Gnome, were to be sold by Dominic Winter Auctioneers, in Gloucestershire. The prints, suspected to have been made in 1920 to sell at theosophical lectures, were expected to bring £700–£1000 each. As it turned out, 'Iris with the Gnome' sold for a hammer price of £5,400 (plus 24% buyer's premium incl. VAT), while 'Alice and the Fairies' sold for a hammer price of £15,000 (plus 24% buyer's premium incl. VAT). 1920 photographs Doyle was preoccupied with organising an imminent lecture tour of Australia, and in July 1920, sent Gardner to meet the Wright family. By this point, Frances was living with her parents in Scarborough, but Elsie's father told Gardner that he had been so certain the photographs were fakes that while the girls were away he searched their bedroom and the area around the beck (stream), looking for scraps of pictures or cutouts, but found nothing "incriminating". Gardner believed the Wright family to be honest and respectable. To place the matter of the photographs' authenticity beyond doubt, he returned to Cottingley at the end of July with two W. Butcher & Sons Cameo folding plate cameras and 24 secretly marked photographic plates. Frances was invited to stay with the Wright family during the school summer holiday so that she and Elsie could take more pictures of the fairies.
packed in cotton wool and returned to Gardner in London, who sent an "ecstatic" telegram to Doyle, by then in Melbourne. Doyle wrote back: Publication and reaction Doyle's article in the December 1920 issue of The Strand contained two higher-resolution prints of the 1917 photographs, and sold out within days of publication. To protect the girls' anonymity, Frances and Elsie were called Alice and Iris respectively, and the Wright family was referred to as the "Carpenters". An enthusiastic and committed spiritualist, Doyle hoped that if the photographs convinced the public of the existence of fairies then they might more readily accept other psychic phenomena. He ended his article with the words: Early press coverage was "mixed", generally a combination of "embarrassment and puzzlement". The historical novelist and poet Maurice Hewlett published a series of articles in the literary journal John O' London's Weekly, in which he concluded: "And knowing children, and knowing that Sir Arthur Conan Doyle has legs, I decide that the Miss Carpenters have pulled one of them." The Sydney newspaper Truth on 5 January 1921 expressed a similar view; "For the true explanation of these fairy photographs what is wanted is not a knowledge of occult phenomena but a knowledge of children." Some public figures were more sympathetic. Margaret McMillan, the educational and social reformer, wrote: "How wonderful that to these dear children such a wonderful gift has been vouchsafed." The novelist Henry De Vere Stacpoole decided to take the fairy photographs and the girls at face value. In a letter to Gardner he wrote: "Look at Alice's [Frances'] face. Look at Iris's [Elsie's] face. There is an extraordinary thing called Truth which has 10 million faces and forms – it is God's currency and the cleverest coiner or forger can't imitate it." Major John Hall-Edwards, a keen photographer and pioneer of medical X-ray treatments in Britain, was a particularly vigorous critic: Doyle used the later photographs in 1921 to illustrate a second article in The Strand, in which he described other accounts of fairy sightings. The article formed the foundation for his 1922 book The Coming of the Fairies. As before, the photographs were received with mixed credulity. Sceptics noted that the fairies "looked suspiciously like the traditional fairies of nursery tales" and that they had "very fashionable hairstyles". Gardner's final visit Gardner made a final visit to Cottingley in August 1921. He again brought cameras and photographic plates for Frances and Elsie, but was accompanied by the occultist Geoffrey Hodson. Although neither of the girls claimed to see any fairies, and there were no more photographs, "on the contrary, he [Hodson] saw them [fairies] everywhere" and wrote voluminous notes on his observations. By now Elsie and Frances were tired of the whole fairy business. Years later Elsie looked at a photograph of herself and Frances taken with Hodson and said: "Look at that, fed up with fairies." Both Elsie and Frances later admitted that they "played along" with Hodson "out of mischief", and that they considered him "a fake". Later investigations Public interest in the Cottingley Fairies gradually subsided after 1921. Elsie and Frances eventually married and lived abroad for many years. In 1966, a reporter from the Daily Express newspaper traced Elsie, who was by then back in England. 
She admitted in an interview given that year that the fairies might have been "figments of my imagination", but left open the possibility she believed that she had somehow managed to photograph her thoughts. The media subsequently became interested in Frances and Elsie's photographs once again. BBC television's Nationwide programme investigated the case in 1971, but Elsie stuck to her story: "I've told you that they're photographs of figments of our imagination, and that's what I'm sticking to". Elsie and Frances were interviewed by journalist Austin Mitchell in September 1976, for a programme broadcast on Yorkshire Television. When pressed, both women agreed that "a rational person doesn't see fairies", but they denied having fabricated the photographs. In 1978 the magician and scientific sceptic James Randi and a team from the Committee for the Scientific Investigation of Claims of the Paranormal examined the photographs, using a "computer enhancement process". They concluded that the photographs were fakes, and that strings could be seen supporting the fairies. Geoffrey Crawley, editor of the British Journal of Photography, undertook a "major scientific investigation of the photographs and the events surrounding them", published between 1982 and 1983, "the first major postwar analysis of the affair". He also concluded that the pictures were fakes. Confession In 1983, the cousins admitted in an article published in the magazine The Unexplained that the photographs had been faked, although both maintained that they really had seen fairies. Elsie had copied illustrations of dancing girls from a popular children's book of the time, Princess Mary's Gift Book, published in 1914, and drew wings on them. They said they had then cut out the cardboard figures and supported them with hatpins, disposing of their props in the beck once the photograph had been taken. But the cousins disagreed about the fifth and final photograph, which Doyle in his The Coming of the Fairies described in this way: Elsie maintained it was a fake, just like all the others, but Frances insisted that it was genuine. In an interview given in the early 1980s Frances said: Both Frances and Elsie claimed to have taken the fifth photograph. In a letter published in The Times newspaper on 9 April 1983, Geoffrey Crawley explained the discrepancy by suggesting that the photograph was "an unintended double exposure of fairy cutouts in the grass", and thus "both ladies can be quite sincere in believing that they each took it". In a 1985 interview on Yorkshire Television's Arthur C. Clarke's World of Strange Powers, Elsie said that she and Frances were too embarrassed to admit the truth after fooling Doyle, the author of Sherlock Holmes: "Two village kids and a brilliant man like Conan Doyle – well, we could only keep quiet." In the same interview Frances said: "I never even thought of it as being a fraud – it was just Elsie and I having a bit of fun and I can't understand to this day why they were taken in – they wanted to be taken in." Subsequent history Frances died in 1986, and Elsie in 1988. Prints of their photographs of the fairies, along with a few other items including a first edition of Doyle's book The Coming of the Fairies, were sold at auction in London for £21,620 in 1998. That same year, Geoffrey Crawley sold his Cottingley Fairy material to the National Museum of Film, Photography and Television in Bradford (now the National Science and Media Museum), where it is on display. 
The collection included prints of the photographs, two of the cameras used by the girls, watercolours of fairies painted by Elsie, and a nine-page letter from Elsie admitting to the hoax. The glass photographic plates were bought for £6,000 by an unnamed buyer at a London auction held in 2001. Frances's daughter, Christine Lynch, appeared in an episode of the television programme Antiques Roadshow in Belfast, broadcast on BBC One in January 2009, with the photographs and one of the cameras given to the girls by Doyle. Christine told the expert, Paul Atterbury, that she believed, as her mother had done, that the fairies in the fifth photograph were genuine. Atterbury estimated the value of the items at between £25,000 and £30,000. The first edition of Frances's memoirs was published a few months later, under the title Reflections on the Cottingley Fairies. The book contains correspondence, sometimes "bitter", between Elsie and Frances. In one letter, dated 1983, Frances wrote: The 1997 films FairyTale: A True Story and Photographing Fairies were inspired by the events surrounding the Cottingley Fairies. The photographs were parodied in a 1994 book written by Terry Jones and Brian Froud, Lady Cottington's Pressed Fairy Book. In 2017 a further two fairy photographs were presented as evidence that the girls' parents were part of the conspiracy. Dating from 1917 and 1918, both photographs are poorly executed copies of two of the original fairy photographs. One was published in 1918 in The Sphere newspaper, which was before the originals had been seen by anyone outside the girls' immediate family. In 2019, a print of the first of the five photographs,
held in readiness for summary execution in reprisal for any alleged counter-revolutionary act. Wholesale, indiscriminate arrests became an integral part of the system. The Cheka used trucks disguised as delivery trucks, called "Black Marias", for the secret arrest and transport of prisoners. It was during the Red Terror that the Cheka, hoping to avoid the bloody aftermath of having half-dead victims writhing on the floor, developed a technique for execution known later by the German words "Nackenschuss" or "Genickschuss", a shot to the nape of the neck, which caused minimal blood loss and instant death. The victim's head was bent forward, and the executioner fired slightly downward at point-blank range. This became the standard method used later by the NKVD to liquidate Joseph Stalin's purge victims and others. Persecution of deserters It is believed that there were more than three million deserters from the Red Army in 1919 and 1920. Approximately 500,000 deserters were arrested in 1919 and close to 800,000 in 1920 by troops of the 'Special Punitive Department' of the Cheka, created to punish desertions. These troops were used to forcibly repatriate deserters, taking and shooting hostages to force compliance or to set an example. Throughout the course of the civil war, several thousand deserters were shot – a number comparable to that of belligerents during World War I. In September 1918, according to The Black Book of Communism, in only twelve provinces of Russia, 48,735 deserters and 7,325 "bandits" were arrested, 1,826 were killed and 2,230 were executed. The exact identity of these individuals is confused by the fact that the Soviet Bolshevik government used the term 'bandit' to cover ordinary criminals as well as armed and unarmed political opponents, such as the anarchists. Repression Number of victims Estimates of Cheka executions vary widely. The lowest figures (disputed below) are provided by Dzerzhinsky's lieutenant Martyn Latsis, limited to the RSFSR over the period 1918–1920. For the period 1918 – July 1919, covering only twenty provinces of central Russia: in 1918: 6,300; in 1919 (up to July): 2,089; total: 8,389. For the whole period 1918–19: in 1918: 6,185; in 1919: 3,456; total: 9,641. For the whole period 1918–20: in January–June 1918: 22; in July–December 1918: more than 6,000; in 1918–20: 12,733. Experts generally agree these semi-official figures are vastly understated. Pioneering historian of the Red Terror Sergei Melgunov claims that this was done deliberately in an attempt to demonstrate the government's humanity. For example, he refutes the claim made by Latsis that only 22 executions were carried out in the first six months of the Cheka's existence by providing evidence that the true number was 884 executions. W. H. Chamberlin claims, "It is simply impossible to believe that the Cheka only put to death 12,733 people in all of Russia up to the end of the civil war." Donald Rayfield concurs, noting that "Plausible evidence reveals that the actual numbers ... vastly exceeded the official figures." Chamberlin provides the "reasonable and probably moderate" estimate of 50,000, while others provide estimates ranging up to 500,000. Several scholars put the number of executions at about 250,000. Some believe it is possible more people were murdered by the Cheka than died in battle. Historian James Ryan gives a modest estimate of 28,000 executions per year from December 1917 to February 1922. Lenin himself seemed unfazed by the killings.
On 12 January 1920, while addressing trade union leaders, he said: "We did not hesitate to shoot thousands of people, and we shall not hesitate, and we shall save the ..." On 14 May 1921, the Politburo, chaired by Lenin, passed a motion "broadening the rights of the [Cheka] in relation to the use of the [death penalty]." Atrocities The Cheka engaged in the widespread practice of torture. Depending on Cheka committees in various cities, the methods included: being skinned alive, scalped, "crowned" with barbed wire, impaled, crucified, hanged, stoned to death, tied to planks and pushed slowly into furnaces or tanks of boiling water, or rolled around naked in internally nail-studded barrels. Chekists reportedly poured water on naked prisoners in the winter-bound streets until they became living ice statues. Others reportedly beheaded their victims by twisting their necks until their heads could be torn off. The Cheka detachments stationed in Kyiv reportedly would attach an iron tube to the torso of a bound victim and insert a rat into the tube, which was closed off with wire netting; the tube was then held over a flame until the rat began gnawing through the victim's guts in an effort to escape. Women and children were also victims of Cheka terror. Women would sometimes be tortured and raped before being shot. Children between the ages of 8 and 13 were imprisoned and occasionally executed. All of these atrocities were published on numerous occasions in Pravda and Izvestiya: on January 26, 1919, Izvestiya #18 carried the article "Is it really a medieval imprisonment?" («Неужели средневековый застенок?»); on February 22, 1919, Pravda #12 published details of the Vladimir Cheka's tortures; and on September 21, 1922, the Socialist Herald published details of a series of tortures conducted by the Stavropol Cheka (hot basement, cold basement, skull measuring, etc.). The Chekists were also supplemented by the militarized Units of Special Purpose (the Party's Spetsnaz). The Cheka actively and openly used kidnapping, which allowed it to extinguish numerous cases of discontent, especially among the rural population. Among the most notorious cases was the Tambov rebellion. Villages were bombarded to complete annihilation, as in the case of Tretyaki, Novokhopersk uyezd, Voronezh Governorate. As a result of this relentless violence, more than a few Chekists ended up with psychopathic disorders, which Nikolai Bukharin said were "an occupational hazard of the Chekist profession." Many hardened themselves to the executions by heavy drinking and drug use. Some developed a gangster-like slang for the verb to kill in an attempt to distance themselves from the killings, such as 'shooting partridges', or 'sealing' a victim, or giving him a natsokal (onomatopoeia of the trigger action). On November 30, 1992, at the initiative of the President of the Russian Federation, the Constitutional Court of the Russian Federation recognized the Red Terror as unlawful, which in turn led to the suspension of the Communist Party of the RSFSR. Regional Chekas Cheka departments were organized not only in big cities and guberniya seats, but also in each uyezd, at the front lines, and in military formations. Nothing is known about the resources from which they were created. Many who were hired to head those departments were so-called "nestlings of Alexander Kerensky".
Moscow Cheka (1918–1919) Chairman – Felix Dzerzhinsky, Deputy – Yakov Peters (initially heading the Petrograd Department), other members – Shklovsky, Kneyfis, Tseystin, Razmirovich, Kronberg, Khaikina, Karlson, Shauman, Lentovich, Rivkin, Antonov, Delafabr, Tsytkin, G. Sverdlov, Bizensky, Yakov Blumkin, Aleksandrovich, Fines, Zaks, Yakov Goldin, Galpershtein, Kniggisen, Martin Latsis (later transferred, chief of jail), Fogel, Zakis, Shillenkus, Yanson. Petrograd Cheka (1918–1919) Chairman – Meinkman, Moisei Uritsky, Kozlovsky, Model, Rozmirovich, I. Diesporov, Iselevich, Krassikov, Bukhan, Merbis, Paykis, Anvelt. Kharkov Cheka Deych, Vikhman, Timofey, Vera (Dora) Grebenshchikova, Aleksandra Ashykin. Popular culture The Cheka were popular staples in Soviet film and literature. This was partly due to a romanticization of the organisation in the post-Stalin period, and also because they provided a useful action/detection template. Films featuring the Cheka include the Ostern Miles of Fire, Nikita Mikhalkov's At Home among Strangers, the miniseries The Adjutant of His Excellency, as well as Dead Season (starring Donatas Banionis) and the 1992 Russian drama film The Chekist. In Spain, during the Spanish Civil War, the detention and torture centers operated by the Republicans were named "checas" after the Soviet organization. Alfonso Laurencic was their promoter, ideologist and builder. Dzerzhinsky, who rarely drank, is said to have told Lenin – on an occasion in which he did so excessively – that secret police work could be done by "only saints or scoundrels ... but now the saints are running away from me and I am left with the scoundrels". Legacy Konstantin Preobrazhenskiy criticised the continuing celebration of the professional holiday of the old and the modern Russian security services on the anniversary of the creation of the Cheka, with the assent of the presidents of Russia (Vladimir Putin, a former KGB officer, chose not to change the date): "The successors of the KGB still haven't renounced anything; they even celebrate their professional holiday the same day, as during repression, on the 20th of December. It is as if the present intelligence and counterespionage services of Germany celebrated Gestapo Day. I can imagine how indignant our press would be!"
Council of People's Commissars of the RSFSR (Vserossiyskaya chrezvychaynaya komissiya po borbe s kontrrevolyutsiyey i sabotazhem pri Sovete narodnykh komisarov RSFSR). In 1918 its name was changed, becoming the All-Russian Extraordinary Commission for Combating Counter-Revolution, Profiteering and Corruption. A member of the Cheka was called a chekist. Also, the term chekist often referred to the Soviet secret police throughout the Soviet period, despite official name changes over time. In The Gulag Archipelago, Alexander Solzhenitsyn recalls that zeks in the labor camps used old chekist as a mark of special esteem for particularly experienced camp administrators. The term is still found in use in Russia today (for example, President Vladimir Putin has been referred to in the Russian media as a chekist due to his career in the KGB and as head of the KGB's successor, the FSB). The chekists commonly dressed in black leather, including long flowing coats, reportedly after being issued such distinctive coats early in their existence. Western communists adopted this clothing fashion. The Chekists also often carried with them Greek-style worry beads made of amber, which had become "fashionable among high officials during the time of the 'cleansing'". History In 1921, the Troops for the Internal Defense of the Republic (a branch of the Cheka) numbered at least 200,000. These troops policed labor camps, ran the Gulag system, conducted requisitions of food, and subjected political opponents to secret arrest, detention, torture and summary execution. They also put down rebellions and riots by workers or peasants, and mutinies in the desertion-plagued Red Army. After 1922, Cheka groups underwent the first of a series of reorganizations; however, the theme of a government dominated by "the organs" persisted indefinitely afterward, and Soviet citizens continued to refer to members of the various organs as Chekists. Creation In the first month and a half after the October Revolution (1917), the duty of "extinguishing the resistance of exploiters" was assigned to the Petrograd Military Revolutionary Committee (or PVRK). It represented a temporary body working under directives of the Council of People's Commissars (Sovnarkom) and the Central Committee of the RSDLP(b). The VRK created new bodies of government, organized food delivery to cities and the Army, requisitioned products from the bourgeoisie, and sent its emissaries and agitators into the provinces. One of its most important functions was the security of revolutionary order, and the fight against counterrevolutionary activity (see: Anti-Soviet agitation). On December 1, 1917, the All-Russian Central Executive Committee (VTsIK or TsIK) reviewed a proposed reorganization of the VRK, and possible replacement of it. On December 5, the Petrograd VRK published an announcement of dissolution and transferred its functions to the department of TsIK for the fight against "counterrevolutionaries". On December 6, the Council of People's Commissars (Sovnarkom) discussed how to respond to the strike of government workers across Russia. They decided that a special commission was needed to implement the "most energetically revolutionary" measures. Felix Dzerzhinsky ("Iron Felix") was appointed as director and invited the participation of the following individuals: V. K. Averin, V. V. Yakovlev, D. G. Yevseyev, N. A. Zhydelev, I. K. Ksenofontov, G. K. Ordjonikidze, Ya. Kh. Peters, K. A. Peterson, V. A. Trifonov.
On December 7, 1917, all invited except Zhydelev and Vasilevsky gathered in the Smolny Institute to discuss the competence and structure of the commission to combat counterrevolution and sabotage. The obligations of the commission were: "to liquidate to the root all of the counterrevolutionary and sabotage activities and all attempts to them in all of Russia, to hand over counter-revolutionaries and saboteurs to the revolutionary tribunals, develop measures to combat them and relentlessly apply them in real-world applications. The commission should only conduct a preliminary investigation". The commission was also to monitor the press, counterrevolutionary parties, sabotaging officials, and other criminals. Three sections were created: informational, organizational, and a unit to combat counter-revolution and sabotage. At the end of the meeting, Dzerzhinsky reported to the Sovnarkom with the requested information. The commission was allowed to apply such measures of repression as 'confiscation, deprivation of ration cards, publication of lists of enemies of the people etc.'. That day, Sovnarkom officially confirmed the creation of the VCheKa. The commission was created not under the VTsIK as was previously anticipated, but rather under the Council of the People's Commissars. On December 8, 1917, some of the original members of the VCheka were replaced. Averin, Ordzhonikidze, and Trifonov were replaced by V. V. Fomin, S. E. Shchukin, Ilyin, and Chernov. At the meeting of December 8, a five-member presidium of the VChK was elected, chaired by Dzerzhinsky. The issue of "speculation" was raised at the same meeting; Peters was assigned to address it and to report the results at one of the commission's next meetings. A circular, published on , gave the address of VCheka's first headquarters as "Petrograd, Gorokhovaya 2, 4th floor". On December 11, Fomin was ordered to organize a section to suppress "speculation." On the same day, the VCheKa assigned Shchukin to conduct arrests of counterfeiters. In January 1918, a subsection of the anti-counterrevolutionary effort was created to police bank officials. The structure of the VCheKa changed repeatedly. By March 1918, when the organization came to Moscow, it contained the following sections: counterrevolution, speculation, non-residents, and information gathering. Between the end of 1918 and 1919, new units were created: secret operations, investigations, transportation, military (special), operations, and instruction. By 1921, it changed once again, forming the following sections: directorate of affairs, administrative-organizational, secret operations, economic, and foreign affairs. First months In the first months of its existence, the VCheKa consisted of only 40 officials. It commanded a team of soldiers, the Sveaborgesky regiment, as well as a group of Red Guardsmen. On January 14, 1918, Sovnarkom ordered Dzerzhinsky to organize teams of "energetic and ideological" sailors to combat speculation. By the spring of 1918, the commission had several teams: in addition to the Sveaborge team, it had an intelligence team, a team of sailors, and a strike team. Through the winter of 1917–1918, all activities of the VCheKa were centralized mainly in the city of Petrograd. It was one of several commissions in the country that fought against counterrevolution, speculation, banditry, and other activities perceived as crimes.
Other organizations included the Bureau of Military Commissars and an Army-Navy investigatory commission to attack the counterrevolutionary element in the Red Army, as well as the Central Requisite and Unloading Commission to fight speculation. The investigation of counterrevolutionary or major criminal offenses was conducted by the Investigatory Commission of the Revtribunal. The functions of the VCheKa were closely intertwined with the Commission of V. D. Bonch-Bruyevich, which, besides the fight against wine pogroms, was engaged in the investigation of most major political offenses (see: Bonch-Bruyevich Commission). The VCheKa had either to transfer all results of its activities to the Investigatory Commission of the Revtribunal or to dismiss them. Control of the commission's activity was provided by the People's Commissariat for Justice (Narkomjust, at that time headed by Isidor Steinberg) and the People's Commissariat for Internal Affairs (NKVD, at that time headed by Grigory Petrovsky). Although the VCheKa was officially independent from the NKVD, its chief members, such as Dzerzhinsky, Latsis, Unszlicht, and Uritsky (all leading chekists), had since November 1917 made up the collegium of the NKVD, headed by Petrovsky. In November 1918, Petrovsky was appointed as head of the All-Ukrainian Central Military Revolutionary Committee during the VCheKa's expansion to the provinces and front lines. At the time of political competition between the Bolsheviks and the SRs (January 1918), the Left SRs attempted to curb the rights of the VCheKa and to establish, through the Narkomjust, their control over its work. Having failed in attempts to subordinate the VCheKa to the Narkomjust, the Left SRs tried to gain control of the Extraordinary Commission in a different way: they requested that the Central Committee of the party be granted the right to enter their representatives directly into the VCheKa. Sovnarkom recognized the desirability of including five representatives of the Left Socialist-Revolutionary faction of the VTsIK. The Left SRs were granted the post of deputy chairman of the VCheKa. However, Sovnarkom, in which the majority belonged to representatives of the RSDLP(b), retained the right to approve members of the collegium of the VCheKa. Originally, members of the Cheka were exclusively Bolshevik; however, in January 1918, Left SRs also joined the organization. The Left SRs were expelled or arrested later in 1918, following the attempted assassination of Lenin by an SR, Fanni Kaplan. Consolidation of VCheKa and National Establishment By the end of January 1918, the Investigatory Commission of the Petrograd Soviet (probably the same as that of the Revtribunal) petitioned Sovnarkom to delineate the roles of the detection and judicial-investigatory organs. It offered to leave the VCheKa and the Commission of Bonch-Bruyevich only the functions of detection and suppression, while investigative functions would be transferred entirely to itself. The Investigatory Commission prevailed. On January 31, 1918, Sovnarkom ordered that the VCheKa be relieved of investigative functions, leaving the commission only the functions of detection, suppression, and prevention of antirevolutionary crimes. At the meeting of the Council of People's Commissars on January 31, 1918, a merger of the VCheKa and the Commission of Bonch-Bruyevich was proposed.
word they connect to. Proclitic A proclitic appears before its host. It is common in Romance languages. For example, in French, there is il s'est réveillé ("he woke up") or je t'aime ("I love you"), while the equivalents in Italian are both (lui) si è svegliato, (io) ti amo and s'è svegliato, t'amo. One proclitic in American English is the informal second-person plural pronoun occurring in y'all ("you all"). Enclitic An enclitic appears after its host. Latin: Senatus Populus-que Romanus "Senate people-and Roman" = "The Senate and people of Rome" Ancient Greek: ánthrōpoí (-te) theoí -te "people (and) gods and" = "(both) men and gods" Sanskrit: naro gajaś -ca 'नरो गजश्च' i.e. "naraḥ gajaḥ ca" "नरस् गजस् -च" with sandhi "the man the elephant and" = "the man and the elephant" Sanskrit: Namaste < namaḥ + te, (Devanagari: नमः + -ते = नमस्ते), with sandhi change namaḥ > namas. "bowing to you" Czech: Nevím, chtělo-li by se mi si to tam však také vyzkoušet. "However (však), I do not know (nevím), if (-li) it would (by) want (chtělo se) to try (vyzkoušet si) it (to) to me (mi) there (tam) as well (také)." (= However, I'm not sure if I would like to try it there as well.) Tamil: idhu en poo = இது என் பூ (This is my flower). With enclitic -vē, which indicates certainty, this sentence becomes idhu en poovē = இது என் பூவே (This is certainly my flower) Telugu: idi nā puvvu = ఇది నా పువ్వు (This is my flower). With enclitic -ē, which indicates certainty, this sentence becomes Idi nā puvvē = ఇది నా పువ్వే (This is certainly my flower) Estonian: Rahagagi vaene means "Poor even having money". Enclitic -gi with the comitative case turns "with/having something" into "even with/having something". Without the enclitic, the saying would be "rahaga vaene", which would mean that the subject is "poor, but has money" (compared to "poor even having money", where having money makes no difference to whether the subject is poor or not). It is considered a grammatical mistake to turn the enclitic into a mesoclitic. Portuguese: Deram-te dinheiro, with enclitic -te meaning "you"; the sentence means "they gave you money". Portuguese possesses an extensive set of rules regarding pronoun placement that allows for proclitics, enclitics and mesoclitics. However, the actual observance of said rules varies by dialect, with a shift towards the generalization of proclitics being observable in spoken Brazilian Portuguese. Mesoclitic A mesoclitic appears between the stem of the host and other affixes. For example, in Portuguese, conquistar-se-á ("it will be conquered"), dá-lo-ei ("I will give it"), matá-la-ia ("he/she/it would kill her"). These are found much more often in writing than in speech. It is even possible to use two pronouns inside the verb, as in dar-no-lo-á ("he/she/it will give it to us"), or dar-ta-ei (ta = te + a, "I will give it/her to you"). As in other Western Romance languages, the Portuguese synthetic future tense comes from the merging of the infinitive and the corresponding finite forms of the verb haver (from Latin habēre), which explains the possibility of separating it from the infinitive. Endoclitic An endoclitic splits apart the root and is inserted between the two pieces. Endoclitics defy the Lexical Integrity Hypothesis (or Lexicalist hypothesis) and so were long thought impossible. However, evidence from the Udi language suggests that they exist. Endoclitics are also found in Pashto and are reported to exist in Degema.
Distinction One distinction drawn by some scholars divides the broad term "clitics" into two categories, simple clitics and special clitics. This distinction is, however, disputed. Simple clitics Simple clitics are free morphemes: they can stand alone in a phrase or sentence. They are unaccented and thus phonologically dependent upon a nearby word. They derive meaning only from that "host". Special clitics Special clitics are morphemes that are bound to the word upon which they depend: they exist as a part of their host. That form, which is unaccented, represents a variant of a free form that carries stress. Both variants carry similar meaning and phonological makeup, but the special clitic is bound to a host word and is unaccented. Properties Some clitics can be understood as elements undergoing a historical process of grammaticalization: lexical item → clitic → affix According to this model from Judith Klavans, an autonomous lexical item in a particular context loses the properties of a fully independent word over time and acquires the properties of a morphological affix (prefix, suffix, infix, etc.). At any intermediate stage of this evolutionary process, the element in question can be described as a "clitic". As a result, this term ends up being applied to a highly heterogeneous class of elements, presenting different combinations of word-like and affix-like properties. Prosody One characteristic shared by many clitics is a lack of prosodic independence. A clitic attaches to an adjacent word, known as its host. Orthographic conventions treat clitics in different ways: some are written as separate words, some are written as one word with their hosts, and some are attached to their hosts, but set off by punctuation (a hyphen or an apostrophe, for example). Comparison with affixes Although the term "clitic" can be used descriptively to refer to any element whose grammatical status is somewhere in between a typical word and a typical affix, linguists have proposed various definitions of "clitic" as a technical term. One common approach is to treat clitics as words that are prosodically deficient: they cannot appear without a host, and they can only form an accentual unit in combination with their host. The term "postlexical clitic" is used for this narrower sense of the term. Given this basic definition, further criteria are needed to establish a dividing line between postlexical clitics and morphological affixes, since both are characterized by a lack of prosodic autonomy. There is no natural, clear-cut boundary between the two categories (since from a diachronic point of view, a given form can move gradually from one to the other by morphologization). However, by identifying clusters of observable properties that are associated with core examples of clitics on the one hand, and core examples of affixes on the other, one can pick out a battery of tests that provide an empirical foundation for a clitic/affix distinction. An affix syntactically and phonologically attaches to a base morpheme of a limited part of speech, such as a verb, to form a new word. A clitic syntactically functions above the word level, on the phrase or clause level, and attaches only phonetically to the first, last, or only word in the phrase or clause, whichever part of speech the word belongs to. The results of applying these criteria sometimes reveal that elements that have traditionally been called "clitics" actually have the status of affixes (e.g., the Romance pronominal clitics discussed below).
Zwicky and Pullum postulated five characteristics that distinguish clitics from affixes: Clitics do not select their hosts. That is, they are "promiscuous", attaching to whichever word happens to be in the right place. Affixes do select their host: They only attach to the word they are connected to semantically, and
generally attach to a particular part of speech. Clitics do not exhibit arbitrary gaps. Affixes, on the other hand, are often lexicalized and may simply not occur with certain words. (English plural -s, for example, does not occur with "child".) Clitics do not exhibit morphophonological idiosyncrasies. That is, they follow the morphophonological rules of the rest of the language. Affixes may be irregular in this regard. Clitics do not exhibit semantic idiosyncrasies. That is, the meaning of the phrase-plus-clitic is predictable from the meanings of the phrase and the clitic. Affixes may have irregular meanings. Clitics can attach to material already containing clitics (and affixes). Affixes can attach to other affixes, but not to material containing clitics. An example of differing analyses by different linguists is the discussion of the non-pronominal possessive marker ('s) in English. Some linguists treat it as an affix, while others treat it as a special clitic. Comparison with words Similar to the discussion above, clitics must be distinguishable from words. Linguists have proposed a number of tests to differentiate between the two categories. Some tests, specifically, are based upon the understanding that when comparing the two, clitics resemble affixes, while words resemble syntactic phrases. Clitics and words resemble different categories, in the sense that they share certain properties. Six such tests are described below. These, of course, are not the only ways to differentiate between words and clitics. If a morpheme is bound to a word and can never occur in complete isolation, then it is likely a clitic. In contrast, a word is not bound and can appear on its own. If the addition of a morpheme to a word prevents further affixation, then it is likely a clitic. If a morpheme combines with single words to convey a further degree of meaning, then it is likely a clitic. A word combines with a group of words or phrases to denote further meaning. If a morpheme must be in a certain order with respect to other morphemes within the construction, then it is likely a clitic. Independent words enjoy free ordering with respect to other words, within the confines of the word order of the language. If a morpheme's allowable behavior is determined by one principle, it is likely a clitic. For example, "a" precedes indefinite nouns in English. Words can rarely be described with one such description. In general, words are more morphologically complex than clitics. Clitics are rarely composed of more than one morpheme. Word order Clitics do not always appear next to the word or phrase that they are associated with grammatically. They may be subject to global word order constraints that act on the entire sentence. Many Indo-European languages, for example, obey Wackernagel's law (named after Jacob Wackernagel), which requires sentential clitics to appear in "second position", after the first syntactic phrase or the first stressed word in a clause: Latin had three enclitics that appeared in second or third position of a clause: -enim 'indeed, for', -autem 'but, moreover', -vero 'however'. For example, quis enim (quisenim) potest negare? (from Martial's epigram LXIV, literally "who indeed can deny [her riches]?"). Spevak (2010) reports that in her corpus of Caesar, Cicero and Sallust, these three words appear in such position in 100% of the cases. Indo-European languages Germanic languages English English enclitics include the contracted versions of auxiliary verbs, as in I'm and we've. 
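Because English enclitics such as 'm and 've attach to whatever word precedes them, a tokenizer can peel them off with simple suffix matching; the possessive 's and n't discussed next can be handled the same way. The Python sketch below is a hypothetical illustration, and its enclitic inventory is an assumption rather than an exhaustive list.

```python
# Split common English enclitics from their hosts by suffix matching.
# The inventory below is illustrative, not exhaustive.
ENCLITICS = ("n't", "'m", "'ve", "'re", "'ll", "'d", "'s")

def split_enclitic(token: str):
    """Return (host, enclitic) if the token ends in a known enclitic, else (token, None)."""
    for suffix in ENCLITICS:
        if token.lower().endswith(suffix) and len(token) > len(suffix):
            return token[:-len(suffix)], token[-len(suffix):]
    return token, None

for word in ["I'm", "we've", "couldn't", "England's", "cat"]:
    print(word, "->", split_enclitic(word))
# I'm -> ('I', "'m"); we've -> ('we', "'ve"); couldn't -> ('could', "n't");
# England's -> ('England', "'s"); cat -> ('cat', None)
```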
Some also regard the possessive marker, as in The Queen of England's crown as an enclitic, rather than a (phrasal) genitival inflection. Some consider the infinitive marker to and the English articles a, an, the to be proclitics. The negative marker -n't as in couldn't etc. is typically considered a clitic that developed from the lexical item not. Linguists Arnold Zwicky and Geoffrey Pullum argue, however, that the form has the properties of an affix rather than a syntactically independent clitic. Other Germanic languages Old Norse: The definite article was the enclitic -inn, -in, -itt (masculine, feminine and neuter nominative singular), as in álfrinn "the elf", gjǫfin "the gift", and tréit "the tree", an abbreviated form of the independent pronoun hinn, cognate of the German pronoun jener. It was fully declined for gender, case and number. Since both the noun and enclitic were declined, this led to "double declension". The situation remains similar in modern Faroese and Icelandic, but in Danish, Norwegian and Swedish, the enclitics have become endings. Old Norse had also some enclitics of personal pronouns that were attached to verbs. These were -sk (from sik), -mk (from mik), -k (from ek), and -ðu / -du / -tu (from þú). These could even be stacked up, e.g. "fásktu" (from Hávamál, stanza 116). Dutch: t definite article of neuter nouns and third person singular neuter pronoun, k first person pronoun, je second person singular pronoun, ie third person masculine singular pronoun, ze third person plural pronoun Plautdietsch: "Deit'a't vondoag?": "Will he do it today?" Gothic: Sentence clitics appear in second position in accordance with Wackernagel's Law, including -u (yes-no question), -uh "and", þan "then", ƕa "anything", for example ab-u þus silbin "of thyself?". Multiple clitics can be stacked up, and split a preverb from the rest of the verb if the preverb comes at the beginning of the clause, e.g. diz-uh-þan-sat ijōs "and then he seized them (fem.)", ga-u-ƕa-sēƕi "whether he saw anything". Yiddish: The unspecified pronoun מען can be contracted to מ'. Celtic languages In Cornish, the clitics ma and na are used after a noun and definite article to express "this/these" and "that/those", respectively. For example: an lyver "the book", an lyver ma "this book", an lyver na "that book" an lyvrow "the books", an lyvrow ma "these books", an lyvrow na "those books" Irish Gaelic uses seo and sin as clitics in a similar way, also to express "this/these" and "that/those". For example: an leabhar "the book", an leabhar seo "this book", an leabhar sin "that book" na leabhair "the books", na leabhair seo "these books", na leabhair sin "those books" Romance languages In Romance languages, some feel the object personal pronoun forms are clitics. Others consider them affixes, as they only attach to the verb they are the object of. There is no general agreement on the issue. For the Spanish object pronouns, for example:lo atamos ("it tied-1PL" = "we tied it" or "we tied him"; can only occur with the verb it is the object of)dámelo ("give me it") Portuguese allows object suffixes before the conditional and future suffixes of the verbs: Ela levá-lo-ia ("She take-it-would" – "She would take it"). Eles dar-no-lo-ão ("They give-us-it-will" – "They will give it to us"). 
Colloquial Portuguese and Spanish of the former Gran Colombia allow ser to be conjugated as a verbal clitic adverbial adjunct to emphasize the importance of the phrase compared to its context, or with the meaning of "really" or "in truth": Ele estava era gordo ("He was fat" – "He was very fat"). Ele ligou é para Paula ("He phoned is Paula" – "He phoned Paula (with emphasis)"). Note that this clitic form is only for the verb ser and is restricted to only third-person singular conjugations. It is not used as a verb in the grammar of the sentence but introduces prepositional phrases and adds emphasis. It does not need to concord with the tense of the main verb, as in the second example, and can be usually removed from the sentence without affecting the simple meaning. Proto-Indo-European In the Indo-European languages, some clitics can be traced back to Proto-Indo-European: for example, * is the original form of Sanskrit च (-ca), Greek τε (-te), and Latin -que. Latin: -que "and", -ve "or", -ne (yes-no question) Greek: τε "and", δέ "but", γάρ "for" (in a logical argument), οὖν "therefore" Slavic languages Russian: ли (yes-no question), же (emphasis), то (emphasis), не "not" (proclitic), бы (subjunctive) Czech: special clitics: weak personal and reflexive pronouns (mu, "him"), certain auxiliary verbs (by, "would"), and various short particles and adverbs (tu, "here"; ale, "though"). "Nepodařilo by se mi mu to dát" "I would not succeed in giving it to him". In addition there are various simple clitics including short prepositions. Polish: -by (conditional mood particle), się (reflexive, also modifies meaning of certain verbs), no and -że (emphasis), -m, -ś, -śmy, -ście (personal auxiliary), mi, ci, cię, go, mu &c. (unstressed personal pronouns in oblique cases) Croatian: the reflexive pronoun forms si and se, li (yes-no question), unstressed present and aorist tense forms of biti ("to be"; sam, si, je, smo, ste, su; and bih, bi, bi, bismo, biste, bi, for the respective tense), unstressed personal pronouns in genitive (me, te, ga, je, nas, vas, ih), dative (mi, ti, mu, joj, nam, vam, im) and accusative (me, te, ga (nj), je (ju), nas, vas, ih), and unstressed present tense of htjeti ("want/will"; ću, ćeš, će, ćemo, ćete, će) In Croatian these clitics follow the first stressed word in the sentence or clause in most cases, which may have been inherited from Proto-Indo-European (see Wackernagel's Law), even though many of the modern clitics became cliticised much more recently in the language (e.g. auxiliary verbs or the accusative forms of pronouns). In subordinate clauses and questions, they follow the connector and/or the question word respectively. Examples (clitics – sam "I am", biste "you would (pl.)", mi "to me", vam "to you (pl.)", ih "them"): Pokažite mi ih. "Show (pl.) them to me." Pokazao sam vam ih jučer. "I showed them to you (pl.) yesterday." Sve sam vam ih (jučer) pokazao. / Sve sam vam ih pokazao (jučer). "I showed all of them to you (yesterday)." (focus on "all") Jučer sam vam ih (sve) pokazao. "I showed (all of) them to you yesterday." (focus on "yesterday") Znam da sam vam ih već pokazao. "I know that I have already shown them to you." Zašto sam vam ih jučer pokazao? "Why did I show them to you yesterday?" Zar sam vam ih jučer pokazao? "Did I (really) show them to you yesterday?" Kad biste mi ih sada dali... "If you (pl.) gave them to me now..." (lit. If you-would to-me them now give-PARTICIPLE...) Što sam god vidio... "Whatever I saw..." (lit. What I-am ever see-PARTICIPLE...) 
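The second-position behaviour illustrated by these examples can be sketched as a placement rule: keep the clitic cluster in its fixed internal order and insert it after the first word of the clause. The Python toy below assumes exactly that simplified rule; as the next paragraph explains, real placement also interacts with phrase boundaries and dialectal variation.

```python
# Toy Wackernagel-style placement: put the clitic cluster in second position,
# i.e. immediately after the first word of the clause. This deliberately ignores
# the phrase-level exceptions and the cluster-internal ordering rules.
def second_position(words: list[str], clitic_cluster: list[str]) -> str:
    if not words:
        return " ".join(clitic_cluster)
    return " ".join(words[:1] + clitic_cluster + words[1:])

# "Pokazao sam vam ih jučer." -- "I showed them to you (pl.) yesterday."
print(second_position(["Pokazao", "jučer"], ["sam", "vam", "ih"]))
```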
In certain rural dialects this rule is (or was until recently) very strict, whereas elsewhere various exceptions occur. These include phrases containing conjunctions (e. g. Ivan i Ana "Ivan and Ana"), nouns with a genitival attribute (e. g. vrh brda "the top of the hill"), proper names and titles and the like (e. g. (gospođa) Ivana Marić "(Mrs) Ivana Marić", grad Zagreb "the city (of) Zagreb"), and in many local varieties clitics are hardly ever inserted into any phrases (e. g. moj najbolji prijatelj "my best friend", sutra ujutro "tomorrow morning"). In cases like these, clitics normally follow the initial phrase, although some Standard grammar handbooks recommend that they should be placed immediately after the verb (many native speakers find this unnatural). Examples: Ja smo i on otišli u grad. "He and I went to town." (lit. I are and him gone to town) – this is dialectal. Ja i on smo otišli u grad. – commonly heard Ja i on otišli
doesn't restrict the grammar's language. Second block of b's of double size Another example of a non-regular language is . It is context-free as it can be generated by the following context-free grammar: First-order logic formulas The formation rules for the terms and formulas of formal logic fit the definition of context-free grammar, except that the set of symbols may be infinite and there may be more than one start symbol. Examples of languages that are not context free In contrast to well-formed nested parentheses and square brackets in the previous section, there is no context-free grammar for generating all sequences of two different types of parentheses, each separately balanced disregarding the other, where the two types need not nest inside one another, for example: or The fact that this language is not context free can be proven using Pumping lemma for context-free languages and a proof by contradiction, observing that all words of the form should belong to the language. This language belongs instead to a more general class and can be described by a conjunctive grammar, which in turn also includes other non-context-free languages, such as the language of all words of the form . Regular
grammars Every regular grammar is context-free, but not all context-free grammars are regular. The following context-free grammar, for example, is also regular. The terminals here are and , while the only nonterminal is . The language described is all nonempty strings of s and s that end in . This grammar is regular: no rule has more than one nonterminal in its right-hand side, and each of these nonterminals is at the same end of the right-hand side. Every regular grammar corresponds directly to a nondeterministic finite automaton, so we know that this is a regular language. Using pipe symbols, the grammar above can be described more tersely as follows: Derivations and syntax trees A derivation of a string for a grammar is a sequence of grammar rule applications that transform the start symbol into the string. A derivation proves that the string belongs to the grammar's language. A derivation is fully determined by giving, for each step: the rule applied in that step the occurrence of its left-hand side to which it is applied For clarity, the intermediate string is usually given as well. For instance, with the grammar: the string can be derived from the start symbol with the following derivation: (by rule 1. on ) (by rule 1. on the second ) (by rule 2. on the first ) (by rule 2. on the second ) (by rule 3. on the third ) Often, a strategy is followed that deterministically chooses the next nonterminal to rewrite: in a leftmost derivation, it is always the leftmost nonterminal; in a rightmost derivation, it is always the rightmost nonterminal. Given such a strategy, a derivation is completely determined by the sequence of rules applied. For instance, one leftmost derivation of the same string is (by rule 1 on the leftmost ) (by rule 2 on the leftmost ) (by rule 1 on the leftmost ) (by rule 2 on the leftmost ) (by rule 3 on the leftmost ), which can be summarized as rule 1 rule 2 rule 1 rule 2 rule 3. One rightmost derivation is: (by rule 1 on the rightmost ) (by rule 1 on the rightmost ) (by rule 3 on the rightmost ) (by rule 2 on the rightmost ) (by rule 2 on the rightmost ), which can be summarized as rule 1 rule 1 rule 3 rule 2 rule 2. The distinction between leftmost derivation and rightmost derivation is important because in most parsers the transformation of the input is defined by giving a piece of code for every grammar rule that is executed whenever the rule is applied. Therefore, it is important to know whether the parser determines a leftmost or a rightmost derivation because this determines the order in which the pieces of code will be executed. See for an example LL parsers and LR parsers. A derivation also imposes in some sense a hierarchical structure on the string that is derived. For example, if the string "1 + 1 + a" is derived according to the leftmost derivation outlined above, the structure of the string would be: where indicates a substring recognized as belonging to . This hierarchy can also be seen as a tree: This tree is called a parse tree or "concrete syntax tree" of the string, by contrast with the abstract syntax tree. 
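A derivation like the ones above is easy to mechanize once the grammar is stored as numbered rules. The Python sketch below is a minimal illustration; the concrete rules 1: S → S + S, 2: S → 1, 3: S → a are an assumption chosen to be consistent with the string "1 + 1 + a" and the rule numbers used in the text, since the article's own rule listing is not reproduced here.

```python
# A context-free grammar stored as numbered rules (an assumed toy grammar:
# 1: S -> S + S, 2: S -> 1, 3: S -> a).
RULES = {1: ("S", ["S", "+", "S"]), 2: ("S", ["1"]), 3: ("S", ["a"])}

def leftmost_derivation(rule_sequence, start="S"):
    """Apply each rule to the leftmost occurrence of its left-hand side."""
    sentential_form = [start]
    steps = [" ".join(sentential_form)]
    for number in rule_sequence:
        lhs, rhs = RULES[number]
        i = sentential_form.index(lhs)       # leftmost occurrence of the nonterminal
        sentential_form[i:i + 1] = rhs       # replace it by the right-hand side
        steps.append(" ".join(sentential_form))
    return steps

# The leftmost derivation summarized as "rule 1 rule 2 rule 1 rule 2 rule 3":
for step in leftmost_derivation([1, 2, 1, 2, 3]):
    print(step)
# S / S + S / 1 + S / 1 + S + S / 1 + 1 + S / 1 + 1 + a
```

Searching for the rightmost occurrence instead of the leftmost one would produce rightmost derivations in the same way.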
In this case the presented leftmost and the rightmost derivations define the same parse tree; however, there is another rightmost derivation of the same string (by rule 1 on the rightmost ) (by rule 3 on the rightmost ) (by rule 1 on the rightmost ) (by rule 2 on the rightmost ) (by rule 2 on the rightmost ), which defines a string with a different structure and a different parse tree: Note however that both parse trees can be obtained by both leftmost and rightmost derivations. For example, the last tree can be obtained with the leftmost derivation as follows: (by rule 1 on the leftmost ) (by rule 1 on the leftmost ) (by rule 2 on the leftmost ) (by rule 2 on the leftmost ) (by rule 3 on the leftmost ), If a string in the language of the grammar has more than one parsing tree, then the grammar is said to be an ambiguous grammar. Such grammars are usually hard to parse because the parser cannot always decide which grammar rule it has to apply. Usually, ambiguity is a feature of the grammar, not the language, and an unambiguous grammar can be found that generates the same context-free language. However, there are certain languages that can only be generated by ambiguous grammars; such languages are called inherently ambiguous languages. Example: Algebraic expressions Here is a context-free grammar for syntactically correct infix algebraic expressions in the variables x, y and z: This grammar can, for example, generate the string as follows: (by rule 5) (by rule 6, applied to the leftmost ) (by rule 7, applied to the rightmost ) (by rule 8, applied to the leftmost ) (by rule 8, applied to the rightmost ) (by rule 4, applied to the leftmost ) (by rule 6, applied to the fourth ) (by rule 4, applied to the rightmost ) (etc.) Note that many choices were made underway as to which rewrite was going to be performed next. These choices look quite arbitrary. As a matter of fact, they are, in the sense that the string finally generated is always the same. For example, the second and third rewrites (by rule 6, applied to the leftmost ) (by rule 7, applied to the rightmost ) could be done in the opposite order: (by rule 7, applied to the rightmost ) (by rule 6, applied to the leftmost ) Also, many choices were made on which rule to apply to each selected . Changing the choices made and not only the order they were made in usually affects which terminal string comes out at the end. Let's look at this in more detail. Consider the parse tree of this derivation: Starting at the top, step by step, an S in the tree is expanded, until no more unexpanded es (nonterminals) remain. Picking a different order of expansion will produce a different derivation, but the same parse tree. The parse tree will only change if we pick a different rule to apply at some position in the tree. But can a different parse tree still produce the same terminal string, which is in this case? Yes, for this particular grammar, this is possible. Grammars with this property are called ambiguous. For example, can be produced with these two different parse trees: However, the language described by this grammar is not inherently ambiguous: an alternative, unambiguous grammar can be given for the language, for example: , once again picking as the start symbol. This alternative grammar will produce with a parse tree similar to the left one above, i.e. implicitly assuming the association , which does not follow standard order of operations. 
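One way to see how an unambiguous grammar pins down a single parse tree is a recursive-descent parser whose procedures mirror the grammar's levels. The sketch below is a hypothetical Python example built on an assumed grammar that, unlike the alternative just shown, also encodes the usual precedence and left associativity (the kind of more elaborate grammar the next paragraph refers to): Expr → Term (("+" | "-") Term)*, Term → Factor (("*" | "/") Factor)*, Factor → "(" Expr ")" | NUMBER | NAME.

```python
# Recursive-descent parser for an assumed unambiguous expression grammar with the
# usual precedence and left associativity. Parse trees are returned as nested tuples.
import re

def tokenize(text):
    return re.findall(r"\d+|[A-Za-z]\w*|[()+\-*/]", text)

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected=None):
        nonlocal pos
        token = tokens[pos]
        if expected is not None and token != expected:
            raise SyntaxError(f"expected {expected!r}, got {token!r}")
        pos += 1
        return token

    def factor():
        if peek() == "(":
            eat("(")
            node = expr()
            eat(")")
            return node
        return eat()                        # number or variable name

    def term():
        node = factor()
        while peek() in ("*", "/"):
            node = (eat(), node, factor())  # left-associative
        return node

    def expr():
        node = term()
        while peek() in ("+", "-"):
            node = (eat(), node, term())    # left-associative
        return node

    tree = expr()
    if peek() is not None:
        raise SyntaxError(f"unexpected token {peek()!r}")
    return tree

# Each input now receives exactly one tree; "1 + 1 + a" is grouped as ((1 + 1) + a).
print(parse(tokenize("1 + 1 + a")))    # ('+', ('+', '1', '1'), 'a')
print(parse(tokenize("x + y * (z)")))  # ('+', 'x', ('*', 'y', 'z'))
```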
More elaborate, unambiguous and context-free grammars can be constructed that produce parse trees that obey all desired operator precedence and associativity rules. Normal forms Every context-free grammar with no ε-production has an equivalent grammar in Chomsky normal form, and a grammar in Greibach normal form. "Equivalent" here means that the two grammars generate the same language. The especially simple form of production rules in Chomsky normal form grammars has both theoretical and practical implications. For instance, given a context-free grammar, one can use the Chomsky normal form to construct a polynomial-time algorithm that decides whether a given string is in the language represented by that grammar or not (the CYK algorithm). Closure properties Context-free languages are closed under the various operations, that is, if the languages K and L are context-free, so is the result of the following operations: union K ∪ L; concatenation K ∘ L; Kleene star L*; substitution (in particular homomorphism); inverse homomorphism; and intersection with a regular language. They are not closed under general intersection (hence neither under complementation) and set difference. Decidable problems The following are some decidable problems about context-free grammars. Parsing The parsing problem, checking whether a given word belongs to the language given by a context-free grammar, is decidable, using one of the general-purpose parsing algorithms: CYK algorithm (for grammars in Chomsky normal form) Earley parser GLR parser LL parser (only for the proper subclass of LL(k) grammars) Context-free parsing for Chomsky normal form grammars was shown by Leslie G. Valiant to be reducible to boolean matrix multiplication, thus inheriting its complexity upper bound of O(n^2.3728639). Conversely, Lillian Lee has shown O(n^(3−ε)) boolean matrix multiplication to be reducible to O(n^(3−3ε)) CFG parsing, thus establishing some kind of lower bound for the latter. Reachability, productiveness, nullability A nonterminal symbol is called productive, or generating, if there is a derivation for some string of terminal symbols. It is called reachable if there is a derivation for some strings of nonterminal and terminal symbols from the start symbol. It is called useless if it is unreachable or unproductive. It is called nullable if there is a derivation . A rule is called an ε-production. A derivation is called a cycle. Algorithms are known to eliminate from a given grammar, without changing its generated language, unproductive symbols, unreachable symbols, ε-productions, with one possible exception, and cycles. In particular, an alternative containing a useless nonterminal symbol can be deleted from the right-hand side of a rule. Such rules and alternatives are called useless. In the depicted example grammar, the nonterminal D is unreachable, and E is unproductive, while C → C causes a cycle. Hence, omitting the last three rules doesn't change the language generated by the grammar, nor does omitting the alternatives "| Cc | Ee" from the right-hand side of the rule for S. A context-free grammar is said to be proper if it has neither useless symbols nor ε-productions nor cycles. Combining the above algorithms, every context-free grammar not generating ε can be transformed into a weakly equivalent proper one. Regularity and LL(k) checks It is decidable whether a given grammar is a regular grammar, as well as whether it is an LL(k) grammar for a given k≥0. If k is not given, the latter problem is undecidable.
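The CYK algorithm mentioned above under Parsing can be sketched compactly: for a grammar in Chomsky normal form, fill a table recording which nonterminals derive each substring, combining shorter spans into longer ones. The Python below is a minimal sketch; the toy grammar for { a^n b^n : n ≥ 1 } (S → AT | AB, T → SB, A → a, B → b) is an assumption for illustration, not a grammar taken from the article.

```python
from itertools import product

# CYK membership test for a grammar in Chomsky normal form.
# Assumed toy grammar for { a^n b^n : n >= 1 }: S -> A T | A B,  T -> S B,  A -> a,  B -> b
BINARY = {("A", "T"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"T"}}
TERMINAL = {"a": {"A"}, "b": {"B"}}

def cyk(word: str, start: str = "S") -> bool:
    n = len(word)
    if n == 0:
        return False  # the empty word would need separate handling
    # table[i][l - 1] = set of nonterminals deriving word[i : i + l]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = set(TERMINAL.get(ch, set()))
    for length in range(2, n + 1):              # span length
        for i in range(n - length + 1):         # span start
            for split in range(1, length):      # split point within the span
                left = table[i][split - 1]
                right = table[i + split][length - split - 1]
                for b, c in product(left, right):
                    table[i][length - 1] |= BINARY.get((b, c), set())
    return start in table[0][n - 1]

print(cyk("aabb"))  # True
print(cyk("aab"))   # False
```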
Given a context-free language, it is neither decidable whether it is regular, nor whether it is an LL(k) language for a given k. Emptiness and finiteness There are algorithms to decide whether the language of a given context-free grammar is empty, as well as whether it is finite. Undecidable problems Some questions that are undecidable for wider classes of grammars become decidable for context-free grammars; e.g. the emptiness problem (whether the grammar generates any terminal strings at all) is undecidable for context-sensitive grammars, but decidable for context-free grammars. However, many problems are undecidable even for context-free grammars. Examples are: Universality Given a CFG, does it generate the language of all strings over the alphabet of terminal symbols used in its rules? A reduction can be demonstrated to this problem from the well-known undecidable problem of determining whether a Turing machine accepts a particular input (the halting problem). The reduction uses the concept of a computation history, a string describing an entire computation of a Turing machine. A CFG can be constructed that generates all strings that are not accepting computation histories for a particular Turing machine on a particular input, and thus
time to take advantage even of the supposed benefits offered: historically, even the most robust corporations have only a one-in-a-thousand chance of surviving even one hundred years. Many cryonics companies have failed; all but one of the pre-1973 batch had gone out of business, and their stored corpses have been defrosted and disposed of. Obstacles to success Preservation damage Cryopreservation has long been used by medical laboratories to maintain animal cells, human embryos, and even some organized tissues, for periods as long as three decades. Recovering large animals and organs from a frozen state is, however, not considered possible at the current level of scientific knowledge. Large vitrified organs tend to develop fractures during cooling, a problem worsened by the large tissue masses and very low temperatures of cryonics. Without cryoprotectants, cell shrinkage and high salt concentrations during freezing usually prevent frozen cells from functioning again after thawing. Ice crystals can also disrupt connections between cells that are necessary for organs to function. In 2016, Robert L. McIntyre and Gregory Fahy at the cryobiology research company 21st Century Medicine, Inc. won the Small Animal Brain Preservation Prize of the Brain Preservation Foundation by demonstrating to the satisfaction of neuroscientist judges that a particular implementation of fixation and vitrification called aldehyde-stabilized cryopreservation could preserve a rabbit brain in "near perfect" condition at −135 °C, with the cell membranes, synapses, and intracellular structures intact in electron micrographs. Brain Preservation Foundation President, Ken Hayworth, said, "This result directly answers a main skeptical and scientific criticism against cryonics—that it does not provably preserve the delicate synaptic circuitry of the brain." However, the price paid for perfect preservation, as seen by microscopy, was tying up all protein molecules with chemical crosslinks, completely eliminating biological viability. Actual cryonics organizations use vitrification without a chemical fixation step, sacrificing some structural preservation quality for less damage at the molecular level. Some scientists, like Joao Pedro Magalhaes, have questioned whether using a deadly chemical for fixation eliminates the possibility of biological revival, making chemical fixation unsuitable for cryonics. Outside of cryonics firms and cryonics-linked interest groups, many scientists show strong skepticism toward cryonics methods. Cryobiologist Dayong Gao states that "we simply don't know if (subjects have) been damaged to the point where they've 'died' during vitrification because the subjects are now inside liquid nitrogen canisters." Biochemist Ken Storey argues (based on experience with organ transplants) that "even if you only wanted to preserve the brain, it has dozens of different areas, which would need to be cryopreserved using different protocols." Revival Revival would require repairing damage from lack of oxygen, cryoprotectant toxicity, thermal stress (fracturing) and freezing in tissues that do not successfully vitrify, finally followed by reversing the cause of death. In many cases, extensive tissue regeneration would be necessary. This revival technology remains speculative and does not currently exist. Legal issues Historically, a person had little control regarding how their body was treated after death, as religion held jurisdiction over the ultimate fate of their body.
However, secular courts began to exercise jurisdiction over the body and use discretion in carrying out of the wishes of the deceased person. Most countries legally treat preserved individuals as deceased persons because of laws that forbid vitrifying someone who is medically alive. In France, cryonics is not considered a legal mode of body disposal; only burial, cremation, and formal donation to science are allowed. However, bodies may legally be shipped to other countries for cryonic freezing. As of 2015, the Canadian province of British Columbia prohibits the sale of arrangements for body preservation based on cryonics. In Russia, cryonics falls outside both the medical industry and the funeral services industry, making it easier in Russia than in the U.S. to get hospitals and morgues to release cryonics candidates. In London in 2016, the English High Court ruled in favor of a mother's right to seek cryopreservation of her terminally ill 14-year-old daughter, as the girl wanted, contrary to the father's wishes. The decision was made on the basis that the case represented a conventional dispute over the disposal of the girl's body, although the judge urged ministers to seek "proper regulation" for the future of cryonic preservation following concerns raised by the hospital about the competence and professionalism of the team that conducted the preservation procedures. In Alcor Life Extension Foundation v. Richardson, the Iowa Court of Appeals ordered for the disinterment of Richardson, who was buried against his wishes for cryopreservation. A detailed legal examination by Jochen Taupitz concludes that cryonic storage is legal in Germany for an indefinite period of time. Ethics In 2009, writing in Bioethics, David Shaw examines the ethical status of cryonics. The arguments against it include changing the concept of death, the expense of preservation and revival, lack of scientific advancement to permit revival, temptation to use premature euthanasia, and failure due to catastrophe. Arguments in favor of cryonics include the potential benefit to society, the prospect of immortality, and the benefits associated with avoiding death. Shaw explores the expense and the potential payoff, and applies an adapted version of Pascal's Wager to the question. In 2016, Charles Tandy wrote in favor of cryonics, arguing that honoring someone's last wishes is seen as a benevolent duty in American and many other cultures. History Cryopreservation was applied to human cells beginning in 1954 with frozen sperm, which was thawed and used to inseminate three women. The freezing of humans was first scientifically proposed by Michigan professor Robert Ettinger when he wrote The Prospect of Immortality (1962). In April 1966, the first human body was frozen—though it had been embalmed for two months—by being placed in liquid nitrogen and stored at just above freezing. The middle-aged woman from Los Angeles, whose name is unknown, was soon thawed out and buried by relatives. The first body to be cryopreserved and then frozen with the hope of future revival was that of James Bedford, claimed by Alcor's Mike Darwin to have occurred within around two hours of his death from cardiorespiratory arrest (secondary to metastasized kidney cancer) on January 12, 1967. Bedford's corpse is the only one frozen before 1974 still preserved today. In 1976, Ettinger founded the Cryonics Institute; his corpse was cryopreserved in 2011. 
Robert Nelson, "a former TV repairman with no scientific background" who led the Cryonics Society of California, was sued in 1981 for allowing nine bodies to thaw and decompose in the 1970s; in his defense, he claimed that the Cryonics Society had run out of money. This led to the lowered reputation of cryonics in the U.S. In 2018, a Y-Combinator startup called Nectome was recognized for developing a method of preserving brains with chemicals rather than by freezing. The method is fatal, performed as euthanasia under general anethesia, but the hope is that future technology would allow the brain to be physically scanned into a computer simulation, neuron by neuron. Demographics According to The New York Times, cryonicists are predominantly non-religious white males, outnumbering women by about three to one. According to The Guardian, as of 2008, while most cryonicists used to be young, male, and "geeky", recent demographics have shifted slightly towards whole families. In 2015, Du Hong, a 61-year-old female writer of children's literature, became the first known Chinese national to have their head cryopreserved. Reception Cryonics is
in the words of the regulation, "for information purposes only and should not have any legal effect". The maintenance fees, with a single fee for the whole area, are also expected to be lower compared to the sum of the renewal fees for national patents of the corresponding area, but the fees have yet to be announced. The negotiations which resulted in the unitary patent can be traced back to various initiatives dating to the 1970s. At different times, the project, or very similar projects, have been referred to as the "European Union patent" (the name used in the EU treaties, which serve as the legal basis for EU competency), "EU patent", "Community patent", "European Community Patent", "EC patent" and "COMPAT". On 17 December 2012, agreement was reached between the European Council and European Parliament on the two EU regulations that made the unitary patent possible through enhanced cooperation at EU level. The legality of the two regulations was challenged by Spain and Italy, but all their claims were rejected by the European Court of Justice. Italy subsequently joined the unitary patent regulation in September 2015, so that all EU member states except Spain and Croatia now participate in the enhanced cooperation for a unitary patent. Unitary effect of newly granted European patents will be available from the date when the related Unified Patent Court Agreement enters into force for the first group of ratifiers, and will extend to those participating member states for which the UPC Agreement enters into force at the time of registration of the unitary patent. Previously granted unitary patents will not automatically get their unitary effect extended to the territory of participating states which ratify the UPC agreement at a later date. The unitary patent system will apply as from the date of entry into force of the UPC Agreement. The Austrian ratification recently triggered the entry force clause of the Protocol on Provisional Application of the UPC Agreement on 19 January 2022. The start of the new system is currently expected for the second half of 2022, following the expected final ratification step by Germany. Background Legislative history In 2009, three draft documents were published regarding a community patent: a European patent in which the European Community was designated: Council regulation on the community patent, Agreement on the European and Community Patents Court (open to the European Community and all states of the European Patent Convention) Decision to open negotiations regarding this Agreement Based on those documents, the European Council requested on 6 July 2009 an opinion from the Court of Justice of the European Union, regarding the compatibility of the envisioned Agreement with EU law: "'Is the envisaged agreement creating a Unified Patent Litigation System (currently named European and Community Patents Court) compatible with the provisions of the Treaty establishing the European Community?’" In December 2010, the use of the enhanced co-operation procedure, under which Articles 326–334 of the Treaty on the Functioning of the European Union provides that a group of member states of the European Union can choose to co-operate on a specific topic, was proposed by twelve Member States to set up a unitary patent applicable in all participating European Union Member States. The use of this procedure had only been used once in the past, for harmonising rules regarding the applicable law in divorce across several EU Member States. 
In early 2011, the procedure leading to the enhanced co-operation was reported to be progressing. Twenty-five Member States had written to the European Commission requesting to participate, with Spain and Italy remaining outside, primarily on the basis of ongoing concerns over translation issues. On 15 February, the European Parliament approved the use of the enhanced co-operation procedure for unitary patent protection by a vote of 471 to 160. and on 10 March 2011 the Council gave their authorisation. Two days earlier, on 8 March 2011, the Court of Justice of the European Union had issued its opinion, stating that the draft Agreement creating the European and Community Patent Court would be incompatible with EU law. The same day, the Hungarian Presidency of the Council insisted that this opinion would not affect the enhanced co-operation procedure. In November 2011, negotiations on the enhanced co-operation system were reportedly advancing rapidly—too fast, in some views. It was announced that implementation required an enabling European Regulation, and a Court agreement between the states that elect to take part. The European Parliament approved the continuation of negotiations in September. A draft of the agreement was issued on 11 November 2011 and was open to all member states of the European Union, but not to other European Patent Convention states. However, serious criticisms of the proposal remained mostly unresolved. A meeting of the Competitiveness Council on 5 December failed to agree on the final text. In particular, there was no agreement on where the Central Division of a Unified Patent Court should be located, "with London, Munich and Paris the candidate cities." The Polish Presidency acknowledged on 16 December 2011 the failure to reach an agreement "on the question of the location of the seat of the central division." The Danish Presidency therefore inherited the issue. According to the President of the European Commission in January 2012, the only question remaining to be settled was the location of the Central Division of the Court. However, evidence presented to the UK House of Commons European Scrutiny Committee in February suggested that the position was more complicated. At an EU summit at the end of January 2012, participants agreed to press on and finalise the system by June. On 26 April, Herman Van Rompuy, President of the European Council, wrote to members of the council, saying "This important file has been discussed for many years and we are now very close to a final deal,.... This deal is needed now, because this is an issue of crucial importance for innovation and growth. I very much hope that the last outstanding issue will be sorted out at the May Competitiveness Council. If not, I will take it up at the June European Council." The Competitiveness Council met on 30 May and failed to reach agreement. A compromise agreement on the seat(s) of the unified court was eventually reached at the June European Council (28–29 June 2012), splitting the central division according to technology between Paris (the main seat), London and Munich. However, on 2 July 2012, the European Parliament decided to postpone the vote following a move by the European Council to modify the arrangements previously approved by MEPs in negotiations with the European Council. 
The modification was considered controversial and included the deletion of three key articles (6–8) of the legislation, which sought to reduce the competence of the European Union Court of Justice in unitary patent litigation. On 9 July 2012, the Committee on Legal Affairs of the European Parliament debated the patent package following the decisions adopted by the General Council on 28–29 June 2012 in camera in the presence of MEP Bernhard Rapkay. A later press release by Rapkay quoted from a legal opinion submitted by the Legal Service of the European Parliament, which affirmed the concerns of MEPs about approving the decision of a recent EU summit to delete said articles, as it "nullifies central aspects of a substantive patent protection". A Europe-wide uniform protection of intellectual property would thus not exist, with the consequence that the requirements of the corresponding EU treaty would not be met and that the European Court of Justice could therefore invalidate the legislation. By the end of 2012 a new compromise was reached between the European Parliament and the European Council, including a limited role for the European Court of Justice. The Unified Court will apply the Unified Patent Court Agreement, which is considered national patent law from an EU law point of view, but is identical for each participating state. [However, the draft statutory instrument aimed at implementation of the Unified Court and UPC in the UK provides for different infringement laws for: European patents (unitary or not) litigated through the Unified Court; European patents (UK) litigated before UK courts; and national patents]. The legislation for the enhanced co-operation mechanism was approved by the European Parliament on 11 December 2012 and the regulations were signed by the European Council and European Parliament officials on 17 December 2012. On 30 May 2011, Italy and Spain challenged before the CJEU the council's authorisation of the use of enhanced co-operation to introduce the trilingual (English, French, German) system for the unitary patent, which they viewed as discriminatory to their languages, on the grounds that it did not comply with the EU treaties. In January 2013, Advocate General Yves Bot delivered his recommendation that the court reject the complaint. Suggestions by the Advocate General are advisory only, but are generally followed by the court. The case was dismissed by the court in April 2013; however, Spain launched two new challenges with the CJEU in March 2013 against the regulations implementing the unitary patent package. The court hearing for both cases was scheduled for 1 July 2014. Advocate-General Yves Bot published his opinion on 18 November 2014, suggesting that both actions be dismissed ( and ). The court handed down its decisions on 5 May 2015 as and fully dismissing the Spanish claims. Following a request by its government, Italy became a participant in the unitary patent regulations in September 2015. European patents European patents are granted in accordance with the provisions of the European Patent Convention (EPC), via a unified procedure before the European Patent Office. While upon filing of a European patent application all 38 Contracting States are automatically designated, a European patent automatically becomes a bundle of "national" European patents upon grant.
In contrast to the unified character of a European patent application, a granted European patent has, in effect, no unitary character, except for the centralized opposition procedure (which can be initiated within nine months of grant by someone other than the patent proprietor) and the centralized limitation and revocation procedures (which can only be instituted by the patent proprietor). In other words, a European patent in one Contracting State, i.e. a "national" European patent, is effectively independent of the same European patent in each other Contracting State, except for the opposition, limitation and revocation procedures. The enforcement of a European patent is dealt with by national law. The abandonment, revocation or limitation of the European patent in one state does not affect the European patent in other states. While the EPC already provided the possibility for a group of member states to allow European patents to have a unitary character also after grant, until now only Liechtenstein and Switzerland have opted to create a unified protection area (see Unitary patent (Switzerland and Liechtenstein)). By requesting unitary effect upon grant, the patent proprietor will now be able to obtain uniform protection in the participating member states of the European Union in a single step, considerably simplifying obtaining patent protection in a large part of the EU. The unitary patent system will co-exist with national patent systems and with European patents without unitary effect. In particular, the unitary patent will not cover EPC countries that are not members of the European Union, such as the UK or Turkey. Legal basis and implementation Three instruments were proposed for the implementation of the unitary patent:
Regulation of the European Parliament and of the Council implementing enhanced co-operation in the area of the creation of unitary patent protection
Council Regulation implementing enhanced co-operation in the area of the creation of unitary patent protection with regard to the applicable translation arrangements
Agreement on a Unified Patent Court
The system is based on EU law as well as the European Patent Convention (EPC), which provides the legal basis for establishing a common system of patents for parties to the EPC. Previously, only Liechtenstein and Switzerland had used this possibility to create a unified protection area (see Unitary patent (Switzerland and Liechtenstein)). Regulations regarding the unitary patent The first two regulations were approved by the European Parliament on 11 December 2012, with future application set for the 25 member states then participating in the enhanced cooperation for a unitary patent (all current EU member states except Croatia, Italy and Spain). The instruments were adopted as regulations EU 1257/2012 and 1260/2012 on 17 December 2012 and entered into force in January 2013. Following a request by its government, Italy became a participant in the unitary patent regulations in September 2015. As of March 2017, neither of the two remaining non-participants in the unitary patent (Spain and Croatia) had requested the European Commission to participate. 
Although formally the Regulations will apply to all 25 participating states from the moment the UPC Agreement enters into force for the first group of ratifiers, the unitary effect of newly granted unitary patents will only extend to those of the 25 states where the UPC Agreement has entered into force; coverage in participating states that have not ratified the UPC Agreement will instead be provided by a coexisting ordinary European patent in each of those states. The unitary effect of unitary patents means a single renewal fee, a single ownership, a single object of property, a single court (the Unified Patent Court) and uniform protection, which means that revocation as well as infringement proceedings are to be decided for the unitary patent as a whole rather than for each country individually. Licensing is, however, to remain possible for part of the unitary territory. Role of the European Patent Office Some administrative tasks relating to European patents with unitary effect will be performed by the European Patent Office. These tasks include the collection of renewal fees, registration of unitary effect upon grant, and recording of licenses and of statements that licenses are available to any person. Decisions of the European Patent Office regarding the unitary patent are open to appeal to the Unified Patent Court, rather than to the EPO Boards of Appeal. Translation requirements Ultimately, no translation will be required for a unitary patent, which significantly reduces the cost of protection in the whole area. However, Article 6 of EU Regulation 1260/2012 provides that, during a transitional period of no more than twelve years, one translation needs to be provided: into English if the application is in French or German, or into any EU official language if the application is in English. In addition, machine translations will be provided, which will be, in the words
of the regulation, "for information purposes only and should not have any legal effect". In several contracting states, a translation of a "national" European patent has to be filed within a three-month time limit after the publication of grant in the European Patent Bulletin, otherwise the patent is considered never to have existed (void ab initio) in that state. For the 21 parties to the London Agreement, this requirement has already been abolished or reduced (e.g. by dispensing with the requirement if the patent is available in English, and/or by only requiring translation of the claims). Unitary patent as an object of property Article 7 of Regulation 1257/2012 provides that, as an object of property, a European patent with unitary effect will be treated "in its entirety and in all participating Member States as a national patent of the participating Member State in which that patent has unitary effect and in which the applicant had her/his residence or principal place of business or, by default, had a place of business on the date of filing the application for the European patent." Where the applicant has no domicile in a participating Member State, German law will apply. Ullrich has criticized the system, which is similar to the Community Trademark and the Community Design, as being "in conflict with both the purpose of the creation of unitary patent protection and with primary EU law." Agreement on a Unified Patent Court The Agreement on a Unified Patent Court provides the legal basis for the Unified Patent Court (UPC): a patent court for European patents (with and without unitary effect), with jurisdiction in those countries where the Agreement is in effect. In addition to provisions on the court's structure, it also contains substantive provisions relating to the right to prevent use of an invention, to use permitted to non-proprietors (e.g. private non-commercial use), and to preliminary and permanent injunctions. Entry into force for the UPC will take place after Germany deposits its instrument of ratification of the UPC Agreement, which will trigger the countdown until the Agreement's entry into force and set the date for the start of the UPC's operations. Parties The UPC Agreement was signed on 19 February 2013 by 24 EU member states, including all states then participating in the enhanced co-operation measures except Bulgaria and Poland. Bulgaria signed the agreement on 5 March 2013 following internal administrative procedures. 
Italy, which did not originally join the enhanced co-operation measures but subsequently signed up, did sign the UPC agreement. The agreement remains open to accession for all remaining EU member states, and all European Union member states except Spain and Poland have signed it. States which do not participate in the unitary patent regulations can still become parties to the UPC agreement, which would allow the new court to handle European patents validated in those countries. On 18 January 2019, Kluwer Patent Blog wrote, "a recurring theme for some years has been that 'the UPC will start next year'". At that time, Brexit and a German constitutional court complaint were considered the main obstacles. In a decision of 13 February 2020, the German constitutional court ruled against the German ratification of the Agreement on the ground that the German Parliament had not voted with the required majority (two-thirds, according to the judgement). After a second vote and further, this time unsuccessful, constitutional complaints, Germany formally ratified the UPC Agreement on 7 August 2021. Although the UK ratified the agreement in April 2018, it later withdrew from the Agreement following Brexit. As of 21 February 2022, 16 countries had ratified the Agreement. Jurisdiction The Unified Patent Court will have exclusive jurisdiction in infringement and revocation proceedings involving European patents with unitary effect, and, during a transition period, non-exclusive jurisdiction regarding European patents without unitary effect in the states where the Agreement applies, unless the patent proprietor decides to opt out. It furthermore has jurisdiction to hear cases against decisions of the European Patent Office regarding unitary patents. As a court of several member states of the European Union, it may (Court of First Instance) or must (Court of Appeal) refer questions on the interpretation of EU law (including the two unitary patent regulations, but excluding the UPC Agreement) to the European Court of Justice for a preliminary ruling when the answer is not obvious. Organization The court will have two instances: a court of first instance and a court of appeal. The court of appeal and the registry will have their seats in Luxembourg, while the central division of the court of first instance will have its seat in Paris. The central division will have a thematic branch in Munich (the London seat has yet to be replaced by a new location within the EU). The court of first instance may further have local and regional divisions in the member states that wish to set up such divisions. Geographical scope of and request for unitary effect While the regulations formally apply to all 25 member states participating in the enhanced cooperation for a unitary patent from the date the UPC Agreement enters into force for the first group of ratifiers, unitary patents will only extend to the territory of those participating member states where the UPC Agreement had entered into force when the unitary effect was registered. If the unitary effect territory subsequently expands to additional participating member states for which the UPC Agreement later enters into force, this will be reflected for all subsequently registered unitary patents, but the territorial scope of the unitary effect of existing unitary patents will not be extended to these states. Unitary effect can be requested up to one month after grant of the European patent, directly at the EPO, with retroactive effect from the date of grant. 
However, according to the Draft Rules Relating to Unitary Patent Protection, unitary effect would be registered only if the European patent has been granted with the same set of claims for all 25 participating member states in the regulations, whether the unitary effect applies to them or not. European patents automatically become a bundle of "national" European patents upon grant. Upon the grant of unitary effect, the "national" European patents are retroactively considered never to have existed in the territories where the unitary patent has effect. The unitary effect does not affect "national" European patents in states where the unitary patent does not apply, and any "national" European patents applying outside the "unitary effect" zone will co-exist with the unitary patent. Special territories of participating member states As the unitary patent is introduced by an EU regulation, it is expected to be valid not only in the mainland territory of the participating member states that are party to the UPC, but also in those of their special territories that are part of the European Union. As of April 2014, this includes the following fourteen territories:
Cyprus: UN Buffer Zone
Finland: Åland
France: French Guiana, Guadeloupe, Martinique, Mayotte, Réunion, Saint Martin
Germany: Büsingen am Hochrhein, Helgoland
Greece: Mount Athos
Portugal: Azores, Madeira
In addition to the territories above, the European Patent Convention has been extended by two member states participating in the enhanced cooperation for a unitary patent to cover some of their dependent territories outside the European Union:
France: French Polynesia, French Southern and Antarctic Lands, New Caledonia, Saint Barthélemy, Saint-Pierre and Miquelon, Wallis and Futuna
Netherlands: Caribbean Netherlands, Curaçao, Sint Maarten
Among the dependencies in the second list, the Caribbean Netherlands, Curaçao and Sint Maarten intend to apply the unitary patent. The 2019 amendment of the Dutch Patents Act extends the unitary patent regulation, as well as the Unified Patent Court agreement, to these territories once it has entered into force; entry into force is conditional on extension of the Unified Patent Court agreement to these territories. Costs The renewal fees were originally planned to be based on the cumulative renewal fees of Germany, France, the UK and the Netherlands, the four states in which most European patents are in force. The renewal fees of the unitary patent would thus range from 32 euros in the second year to 4,855 euros in the 20th year. It is, however, not yet clear whether the UK leaving the UPC system following Brexit will lead to a reduction of the renewal fees. The renewal fees will be collected by the EPO, with the EPO keeping 50% of the fees and the other 50% being redistributed to the participating member states. Translation requirements, as well as the requirement to pay yearly patent fees in all countries in which a European patent is designated, presently render the European patent system costly in the European Union. In an impact assessment, the European Commission estimated that the costs of obtaining a patent in all 27 EU countries would drop from over 32,000 euros (mainly due to translation costs) to 6,500 euros (for the combination of an EU, Spanish and Italian patent) due to the introduction of the EU unitary patent. 
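As a small worked example of the renewal-fee arrangement just described, the sketch below applies the 50/50 split between the EPO and the participating member states to the two fee levels quoted above (only those two endpoint figures and the split come from the text; the function and variable names are illustrative):
# Worked example of the quoted unitary-patent renewal-fee split.
# Only the endpoint fees (32 EUR in year 2, 4,855 EUR in year 20) and the
# 50/50 EPO / member-state split are taken from the text above.
quoted_fees_eur = {2: 32, 20: 4855}

def split_renewal_fee(fee_eur: float) -> tuple[float, float]:
    """Return (EPO share, share redistributed to participating member states)."""
    epo_share = fee_eur * 0.50
    return epo_share, fee_eur - epo_share

for year, fee in quoted_fees_eur.items():
    epo, states = split_renewal_fee(fee)
    print(f"year {year}: {fee} EUR -> EPO keeps {epo:.2f} EUR, member states share {states:.2f} EUR")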
test; distinct positions (or loci) within a genome are cistronic. History The words cistron and gene were coined before the advancing state of biology made it clear that the concepts they refer to are
practically equivalent. The same historical naming practices are responsible for many of the synonyms in the life sciences. The term cistron was coined by Seymour Benzer in an article entitled "The elementary units of heredity". The cistron was defined by an operational test applicable to most organisms that is sometimes referred to as a cis-trans test, but more often as a complementation test. Definition For example, suppose a mutation at a chromosome position x is responsible for a change in a recessive trait in a diploid organism (where chromosomes come in pairs). We say that the mutation is recessive because the organism will exhibit the wild-type phenotype (ordinary trait) unless both chromosomes of a pair have the mutation (homozygous mutation). Similarly, suppose a mutation at another position, y, is responsible for the same recessive trait. The positions x and y are said to be within the same cistron when an organism that has the mutation at x on one chromosome and the mutation at y on the paired chromosome exhibits the recessive trait even though it is not homozygous for either mutation; if the wild-type phenotype appears instead, the two mutations complement each other and lie in different cistrons. 
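The cis-trans reasoning above can be summarized in a short sketch (a deliberately simplified model; the representation of each chromosome as a set of mutated cistrons, and the names used, are illustrative assumptions rather than anything from Benzer's paper):
# Simplified model of the complementation (cis-trans) test described above.
# Each chromosome of the pair is represented as the set of cistrons carrying
# a loss-of-function mutation; a cistron works if at least one chromosome
# provides an unmutated copy of it.
def shows_recessive_trait(chromosome_a: set, chromosome_b: set) -> bool:
    """The recessive trait appears only if some cistron is mutated on both chromosomes."""
    return bool(chromosome_a & chromosome_b)

# Mutations x and y in the same cistron "A": the trans-heterozygote shows the trait,
# so x and y fail to complement and are assigned to the same cistron.
print(shows_recessive_trait({"A"}, {"A"}))   # True

# Mutations in different cistrons "A" and "B": each chromosome supplies a working
# copy of the other cistron, the wild-type phenotype is restored, and the
# mutations are judged to lie in different cistrons.
print(shows_recessive_trait({"A"}, {"B"}))   # False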
The term evolved to become a title to a number of political entities. Three countries – Australia, The Bahamas, and Dominica – have the official title "Commonwealth", as do four U.S. states and two U.S. territories. Since the early 20th century, the term has been used to name some fraternal associations of states, most notably the Commonwealth of Nations, an organisation primarily of former territories of the British Empire. The organisation is not to be confused with the realms of the Commonwealth. Historical use Rome Translations of Ancient Roman writers' works to English have on occasion translated "Res publica", and variants thereof, to "the commonwealth", a term referring to the Roman state as a whole. England The Commonwealth of England was the official name of the political unit (de facto military rule in the name of parliamentary supremacy) that replaced the Kingdom of England (after the English Civil War) from 1649–53 and 1659–60, under the rule of Oliver Cromwell and his son and successor Richard. From 1653 to 1659, although still legally known as a Commonwealth, the republic, united with the former Kingdom of Scotland, operated under different institutions (at times as a de facto monarchy) and is known by historians as the Protectorate. In a British context, it is sometimes referred to as the "Old Commonwealth". Iceland The Icelandic Commonwealth or the Icelandic Free State () was the state existing in Iceland between the establishment of the Althing in 930 and the pledge of fealty to the Norwegian king in 1262. It was initially established by a public consisting largely of recent immigrants from Norway who had fled the unification of that country under King Harald Fairhair. Philippines The Commonwealth of the Philippines was the administrative body that governed the Philippines from 1935 to 1946, aside from a period of exile in the Second World War from 1942 to 1945 when Japan occupied the country. It replaced the Insular Government, a United States territorial government, and was established by the Tydings–McDuffie Act. The Commonwealth was designed as a transitional administration in preparation for the country's full achievement of independence, which was achieved in 1946. The Commonwealth of the Philippines was a founding member of the United Nations. Poland–Lithuania Republic is still an alternative translation of the traditional name Rzeczpospolita of the Polish–Lithuanian Commonwealth. Wincenty Kadłubek (Vincent Kadlubo, 1160–1223) used for the first time the original Latin term res publica in the context of Poland in his "Chronicles of the Kings and Princes of Poland". The name
was used officially for the confederal union formed by Poland and Lithuania from 1569 to 1795. It is also often referred to as the "Nobles' Commonwealth" (1505–1795, i.e., before the union). In the contemporary political doctrine of the Polish–Lithuanian Commonwealth, "our state is a Republic (or Commonwealth) under the presidency of the King". The Commonwealth introduced a doctrine of religious tolerance, the Warsaw Confederation, and had its own parliament, the Sejm (although elections were restricted to the nobility), as well as elected kings, who were bound to certain contracts, the pacta conventa, from the beginning of their reign. "A commonwealth of good counsaile" was the title of the 1607 English translation of the work of Wawrzyniec Grzymała Goślicki, "De optimo senatore", which presented to English readers many of the ideas present in the political system of the Polish–Lithuanian Commonwealth. Catalonia Between 1914 and 1925, Catalonia was an autonomous region of Spain. Its government during that time was given the title mancomunidad (Catalan: mancomunitat), which is translated into English as "commonwealth". The Commonwealth of Catalonia had limited powers and was formed as a federation of the four Catalan provinces. A number of Catalan-language institutions were created during its existence. Liberia Between 1838 and 1847, Liberia
128, available in Europe. It offers MFM capability for accessing CP/M disks, improved speed, and somewhat quieter operation, but was only manufactured until Commodore got its production lines going with the 1571, the double-sided drive. Finally, the small, external-power-supply-based, MFM-based Commodore 1581 3½-inch drive was made, giving 800 KB access to the C128 and C64. Design Hardware The 1541 does not have DIP switches to change the device number. If a user added more than one drive to a system the user had to open the case and cut a trace in the circuit board to permanently change the drive's device number, or hand-wire an external switch to allow it to be changed externally. It was also possible to change the drive number via a software command, which was temporary and would be erased as soon as the drive was powered off. 1541 drives at power up always default to device #8. If multiple drives in a chain are used, then the startup procedure is to power on the first drive in the chain, alter its device number via a software command to the highest number in the chain (if three drives were used, then the first drive in the chain would be set to device #10), then power on the next drive, alter its device number to the next lowest, and repeat the procedure until the final drive at the end of the chain was powered on and left as device #8. Unlike the Apple II, where support for two drives was normal, it was relatively uncommon for Commodore software to support this setup, and the CBM DOS copy file command was not able to copy files between drives – a third party copy utility needed to be used instead. The pre-II 1541s also have an internal power source, which generated a lot of heat. The heat generation was a frequent source of humour. For example, Compute! stated in 1988 that "Commodore 64s used to be a favorite with amateur and professional chefs since they could compute and cook on top of their 1500-series disk drives at the same time". A series of humorous tips in MikroBitti in 1989 said "When programming late, coffee and kebab keep nicely warm on top of the 1541." The MikroBitti review of the 1541-II said that its external power source "should end the jokes about toasters". The drive-head mechanism installed in the early production years is notoriously easy to misalign. The most common cause of the 1541's drive head knocking and subsequent misalignment is copy-protection schemes on commercial software. The main cause of the problem is that the disk drive itself does not feature any means of detecting when the read/write head reaches track zero. Accordingly, when a disk is not formatted or a disk error occurs, the unit tries to move the head 40 times in the direction of track zero (although the 1541 DOS only uses 35 tracks, the drive mechanism itself is a 40-track unit, so this ensured track zero would be reached no matter where the head was before). Once track zero is reached, every further attempt to move the head in that direction would cause it to be rammed against a solid stop: for example, if the head happened to be on track 18 (where the directory is located) before this procedure, the head would be actually moved 18 times, and then rammed against the stop 22 times. This ramming gives the characteristic "machine gun" noise and sooner or later throws the head out of alignment. 
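The blind seek described above can be mirrored in a tiny sketch (a toy model only; the step accounting simply follows the track-18 example given in the text, and the function name is illustrative):
# Toy model of the 1541's blind "bump to track zero" seek described above.
# The mechanism cannot sense track zero, so on an error it always steps the
# head 40 times toward track zero; steps left over after the head arrives are
# the ones that ram it against the solid stop.
def bump_to_track_zero(current_track: int, total_steps: int = 40) -> tuple[int, int]:
    """Return (useful_steps, rammed_steps) for a head starting on current_track."""
    useful = min(current_track, total_steps)   # steps that actually move the head
    rammed = total_steps - useful              # remaining steps hammer the end stop
    return useful, rammed

# Head parked on track 18 (the directory track): 18 useful steps, then 22 rams,
# matching the example in the text.
print(bump_to_track_zero(18))   # (18, 22)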
A defective head-alignment part likely caused many of the reliability issues in early 1541 drives; one dealer told Compute!s Gazette in 1983 that the part had caused all but three of several hundred drive failures that he had repaired. The drives were so unreliable that Info magazine joked, "Sometimes it seems as if one of the original design specs ... must have said 'Mean time between failure: 10 accesses.'" Users can realign the drive themselves with a software program and a calibration disk. What the user would do is remove the drive from its case and then loosen the screws holding the stepper motor that moved the head, then with the calibration disk in the drive gently turn the stepper motor back and forth until the program shows a good alignment. The screws are then tightened and the drive is put back into its case. A third-party fix for the 1541 appeared in which the solid head stop was replaced by a sprung stop, giving the head a much easier life. The later 1571 drive (which is 1541-compatible) incorporates track-zero detection by photo-interrupter and is thus immune to the problem. Also, a software solution, which resides in the drive controller's ROM, prevents the rereads from occurring, though this could cause problems when genuine errors did occur. Due to the alignment issues on the Alps drive mechanisms, Commodore switched suppliers to Newtronics in 1984. The Newtronics mechanism drives have a lever rather than a pull-down tab to close the drive door. Although the alignment issues were resolved after the switch, the Newtronics drives added a new reliability problem in that many of the read/write heads were improperly sealed, causing moisture to penetrate the head and short it out. The 1541's PCB consists mainly of a 6502 CPU, two 6522 VIA chips, and 2k of work RAM. Up to 48k of RAM can be added; this was mainly useful for defeating copy protection schemes since an entire disk track could be loaded into drive RAM, while the standard 2k only accommodated a few sectors (theoretically eight, but some of the RAM was used by CBM DOS as work space). Some Commodore users used 1541s as an impromptu math coprocessor by uploading math-intensive code to the drive for background processing. Interface The 1541 uses a proprietary serialized derivative of the IEEE-488 parallel interface, which Commodore used on their previous disk drives for the PET/CBM range of personal and business computers, but when the VIC-20 was in development, a cheaper alternative to the expensive IEEE-488 cables was sought. To ensure a ready supply of inexpensive cabling for its home computer peripherals, Commodore chose standard DIN connectors for the serial interface. Disk drives and other peripherals such as printers connected to the computer via a daisy chain setup, necessitating only a single connector on the computer itself. Control Throughput and software IEEE Spectrum in 1985 stated that: The C-64's designers blamed the 1541's slow speed on the marketing department's insistence that the computer be compatible with the 1540, which was slow because of a flaw in the 6522 VIA interface controller. Initially, Commodore intended to use a hardware shift register (one component of the 6522) to maintain fast drive speeds with the new serial interface. However, a hardware bug with this chip prevented the initial design from working as anticipated, and the ROM code was hastily rewritten to handle the entire operation in software. 
According to Jim Butterfield, this causes a speed reduction by a factor of five; had 1540 compatibility not been a requirement, the disk interface would have been much faster. In any case, the C64 normally could not work with a 1540 unless the VIC-II video output was disabled via a register write, which stopped it from halting the CPU during certain video lines and thus ensured correct serial timing. As implemented on the VIC-20 and C64, Commodore DOS transfers 300 bytes per second, compared to the Atari 810's 2,400 bytes per second, the Apple Disk II's 15,000 bytes per second, and the 300-baud data rate of the Commodore Datasette storage system. About 20 minutes are needed to copy one disk: 10 minutes of reading time and 10 minutes of writing time. However, since both the computer and the drive can easily be reprogrammed, third parties quickly wrote more efficient firmware that would speed up drive operations drastically. Without hardware modifications, some "fast loader" utilities (which bypassed routines in the 1541's onboard ROM) managed to achieve speeds of up to 4 KB/s. The most common of these products are the Epyx Fast Load, the Final Cartridge, and the Action Replay plug-in ROM cartridges, which all have machine code monitor and disk editor software on board as well. The popular Commodore computer magazines of the era also entered the arena with type-in fast-load utilities, with Compute!'s Gazette publishing TurboDisk in 1985 and RUN publishing Sizzle in 1987. Even though each 1541 has its own on-board disk controller and disk operating system, it is not possible for a user to command two 1541 drives to copy a disk (one drive reading and the other writing) as with older dual drives like the 4040, which was often found with the PET computer and with which the 1541 is backward-compatible (it can read 4040 disks but not write to them, as a minor difference in the number of header bytes makes the 4040 and 1541 only read-compatible). Originally, to copy from drive to drive, software running on the C64 was needed; it would first read from one drive into computer memory and then write out to the other. Only when Fast Hack'em and, later, other disk backup programs were released did true drive-to-drive copying become possible for a pair of 1541s. The user could, if they wished, unplug the C64 from the drives (i.e., from the first drive in the daisy chain) and do something else with the computer as the drives proceeded to copy the entire disk. 
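To put the quoted 300-byte-per-second figure in perspective, the short calculation below reproduces the stated copy times (only the 300 B/s and roughly 4 KB/s rates come from the text; the full-disk capacity of about 170 KB for a 35-track 1541 disk is an assumed, commonly cited figure, not stated above):
# Rough transfer-time estimate for a full 1541 disk over the stock serial bus.
# Assumption (not stated in the text): a 35-track 1541 disk holds 683 sectors
# of 256 bytes, i.e. about 170 KB.
SECTORS = 683
SECTOR_BYTES = 256
STOCK_RATE = 300          # bytes per second, as quoted for stock Commodore DOS
FAST_LOADER_RATE = 4096   # about 4 KB/s, the upper end quoted for fast loaders

disk_bytes = SECTORS * SECTOR_BYTES                      # 174,848 bytes
read_minutes = disk_bytes / STOCK_RATE / 60              # ~9.7 min, matching "10 minutes of reading"
copy_minutes = 2 * read_minutes                          # read + write, ~20 min as quoted
fast_read_minutes = disk_bytes / FAST_LOADER_RATE / 60   # ~0.7 min with a fast loader

print(f"{disk_bytes} bytes; read {read_minutes:.1f} min; copy {copy_minutes:.1f} min; "
      f"fast-loader read {fast_read_minutes:.1f} min")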
size format. This capability was most frequently used to read MS-DOS disks. The drive was released in the summer of 1987 and quickly became popular with bulletin board system (BBS) operators and other users. Like the 1541 and 1571, the 1581 has an onboard MOS Technology 6502 CPU with its own ROM and RAM, and uses a serial version of the IEEE-488 interface. Inexplicably, the drive's ROM contains commands for parallel use, although no parallel interface was available. Unlike the 1571, which is nearly 100% backward-compatible with the 1541, the 1581 is only compatible with previous Commodore drives at the DOS level and cannot utilize software that performs low-level disk access (as the vast majority of Commodore 64 games do). The version of Commodore DOS built into the 1581 added support for partitions, which could also function as fixed-allocation subdirectories. PC-style subdirectories were rejected as being too difficult to reconcile with block availability maps, which were then still much in vogue and had for some time been the traditional way of inquiring into block availability. The 1581 supports the C128's burst mode for fast disk access, but not when connected to an older Commodore machine like the Commodore 64. The 1581 provides a total of 3160 blocks free when formatted (a block being equal to 256 bytes). The number of permitted directory entries was also increased, to 296 entries. With a storage capacity of 800 kB, the 1581 is the highest-capacity serial-bus drive ever made by Commodore (the 1-MB SFD-1001 uses the parallel IEEE-488), and the only 3½" one. However, starting in 1991, Creative Micro Designs (CMD) made the FD-2000 high-density (1.6 MB) and FD-4000 extra-high-density (3.2 MB) 3½" drives, both of which offered not only a 1581-emulation mode but also 1541- and 1571-compatibility modes. A job queue nearly identical to that of the 1541 and 1571 is available to the user in zero page (except for job 0), providing for a high degree of compatibility. Unlike the 1541 and 1571, the 1581 uses a low-level disk format similar to that of MS-DOS, as it is built around a WD1770 FM/MFM floppy controller chip. The 1581 disk format consists of 80 tracks with ten 512-byte sectors per track on each side, used as 20 logical sectors of 256 bytes each per side. Special software is nevertheless required to read 1581 disks on a PC, due to the different file system. An internal floppy drive and
power supply provided with them. Specifications 1581 Image Layout The 1581 disk has 80 logical tracks, each with 40 logical sectors (the actual physical layout of the diskette is abstracted and managed by a hardware translation layer). The directory starts on 40/3 (track 40, sector 3). The disk header is on 40/0, and the BAM (block availability map) resides on 40/1 and 40/2.
Header contents (40/0):
$00–01  Track/sector reference to the first directory sector (40/3)
$02     DOS version ('D')
$04–13  Disk label, $A0 padded
$16–17  Disk ID
$19–1A  DOS type ('3D')
BAM contents (40/1):
$00–01  Track/sector of the next BAM sector (40/2)
$02     DOS version ('D')
$04–05  Disk ID
$06     I/O byte
$07     Autoboot flag
$10–FF  BAM entries for tracks 1–40
BAM contents (40/2):
$00–01  00/FF
$02     DOS version ('D')
$04–05  Disk ID
$06     I/O byte
$07     Autoboot flag
$10–FF  BAM entries for tracks 41–80
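The header layout above maps directly onto a raw .d81 image. The sketch below reads the header sector from such an image file; it assumes (as is usual for .d81 images, though not stated here) that the 40 logical 256-byte sectors of each of the 80 logical tracks are stored sequentially with no interleave, and the file name in the usage comment is hypothetical:
# Sketch: read the 1581 header sector (track 40, sector 0) from a raw .d81 image,
# using the offsets listed above. Sector layout is assumed to be sequential.
SECTOR_SIZE = 256
SECTORS_PER_TRACK = 40   # logical sectors per logical track, as described above

def sector_offset(track: int, sector: int) -> int:
    """Byte offset of a logical sector inside the image (tracks are numbered from 1)."""
    return ((track - 1) * SECTORS_PER_TRACK + sector) * SECTOR_SIZE

def read_d81_header(path: str) -> dict:
    with open(path, "rb") as f:
        f.seek(sector_offset(40, 0))                 # disk header lives at 40/0
        hdr = f.read(SECTOR_SIZE)
    return {
        "first_dir_sector": (hdr[0x00], hdr[0x01]),  # normally (40, 3)
        "dos_version": chr(hdr[0x02]),               # 'D'
        "disk_label": hdr[0x04:0x14].rstrip(b"\xA0").decode("ascii", "replace"),
        "disk_id": hdr[0x16:0x18].decode("ascii", "replace"),
        "dos_type": hdr[0x19:0x1B].decode("ascii", "replace"),   # '3D'
    }

# Hypothetical usage:
# print(read_d81_header("example.d81"))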
Virginia and North Carolina and the "Deep South's Oldest Rivalry", between Georgia and Auburn. Although before the mid-1920s most national powers came from the Northeast or the Midwest, the trend changed when several teams from the South and the West Coast achieved national success. Wallace William Wade's 1925 Alabama team won the 1926 Rose Bowl after receiving its first national title and William Alexander's 1928 Georgia Tech team defeated California in the 1929 Rose Bowl. College football quickly became the most popular spectator sport in the South. Several major modern college football conferences rose to prominence during this time period. The Southwest Athletic Conference had been founded in 1915. Consisting mostly of schools from Texas, the conference saw back-to-back national champions with Texas Christian University (TCU) in 1938 and Texas A&M in 1939. The Pacific Coast Conference (PCC), a precursor to the Pac-12 Conference (Pac-12), had its own back-to-back champion in the University of Southern California which was awarded the title in 1931 and 1932. The Southeastern Conference (SEC) formed in 1932 and consisted mostly of schools in the Deep South. As in previous decades, the Big Ten continued to dominate in the 1930s and 1940s, with Minnesota winning 5 titles between 1934 and 1941, and Michigan (1933, 1947, and 1948) and Ohio State (1942) also winning titles. As it grew beyond its regional affiliations in the 1930s, college football garnered increased national attention. Four new bowl games were created: the Orange Bowl, Sugar Bowl, the Sun Bowl in 1935, and the Cotton Bowl in 1937. In lieu of an actual national championship, these bowl games, along with the earlier Rose Bowl, provided a way to match up teams from distant regions of the country that did not otherwise play. In 1936, the Associated Press began its weekly poll of prominent sports writers, ranking all of the nation's college football teams. Since there was no national championship game, the final version of the AP poll was used to determine who was crowned the National Champion of college football. The 1930s saw growth in the passing game. Though some coaches, such as General Robert Neyland at Tennessee, continued to eschew its use, several rules changes to the game had a profound effect on teams' ability to throw the ball. In 1934, the rules committee removed two major penalties—a loss of five yards for a second incomplete pass in any series of downs and a loss of possession for an incomplete pass in the end zone—and shrunk the circumference of the ball, making it easier to grip and throw. Players who became famous for taking advantage of the easier passing game included Alabama end Don Hutson and TCU passer "Slingin" Sammy Baugh. In 1935, New York City's Downtown Athletic Club awarded the first Heisman Trophy to University of Chicago halfback Jay Berwanger, who was also the first ever NFL Draft pick in 1936. The trophy was designed by sculptor Frank Eliscu and modeled after New York University player Ed Smith. The trophy recognizes the nation's "most outstanding" college football player and has become one of the most coveted awards in all of American sports. During World War II, college football players enlisted in the armed forces, some playing in Europe during the war. As most of these players had eligibility left on their college careers, some of them returned to college at West Point, bringing Army back-to-back national titles in 1944 and 1945 under coach Red Blaik. Doc Blanchard (known as "Mr. 
Inside") and Glenn Davis (known as "Mr. Outside") both won the Heisman Trophy, in 1945 and 1946. On the coaching staff of those 1944–1946 Army teams was future Pro Football Hall of Fame coach Vince Lombardi. The 1950s saw the rise of yet more dynasties and power programs. Oklahoma, under coach Bud Wilkinson, won three national titles (1950, 1955, 1956) and all ten Big Eight Conference championships in the decade while building a record 47-game winning streak. Woody Hayes led Ohio State to two national titles, in 1954 and 1957, and won three Big Ten titles. The Michigan State Spartans were known as the "football factory" during the 1950s, where coaches Clarence Munn and Duffy Daugherty led the Spartans to two national titles and two Big Ten titles after joining the Big Ten athletically in 1953. Wilkinson and Hayes, along with Robert Neyland of Tennessee, oversaw a revival of the running game in the 1950s. Passing numbers dropped from an average of 18.9 attempts in 1951 to 13.6 attempts in 1955, while teams averaged just shy of 50 running plays per game. Nine out of ten Heisman Trophy winners in the 1950s were runners. Notre Dame, one of the biggest passing teams of the decade, saw a substantial decline in success; the 1950s were the only decade between 1920 and 1990 when the team did not win at least a share of the national title. Paul Hornung, Notre Dame quarterback, did, however, win the Heisman in 1956, becoming the only player from a losing team ever to do so. Modern college football (since 1958) Following the enormous success of the 1958 NFL Championship Game, college football no longer enjoyed the same popularity as the NFL, at least on a national level. While both games benefited from the advent of television, since the late 1950s, the NFL has become a nationally popular sport while college football has maintained strong regional ties. As professional football became a national television phenomenon, college football did as well. In the 1950s, Notre Dame, which had a large national following, formed its own network to broadcast its games, but by and large the sport still retained a mostly regional following. In 1952, the NCAA claimed all television broadcasting rights for the games of its member institutions, and it alone negotiated television rights. This situation continued until 1984, when several schools brought a suit under the Sherman Antitrust Act; the Supreme Court ruled against the NCAA and schools are now free to negotiate their own television deals. ABC Sports began broadcasting a national Game of the Week in 1966, bringing key matchups and rivalries to a national audience for the first time. New formations and play sets continued to be developed. Emory Bellard, an assistant coach under Darrell Royal at the University of Texas, developed a three-back option style offense known as the wishbone. The wishbone is a run-heavy offense that depends on the quarterback making last second decisions on when and to whom to hand or pitch the ball to. Royal went on to teach the offense to other coaches, including Bear Bryant at Alabama, Chuck Fairbanks at Oklahoma and Pepper Rodgers at UCLA; who all adapted and developed it to their own tastes. The strategic opposite of the wishbone is the spread offense, developed by professional and college coaches throughout the 1960s and 1970s. Though some schools play a run-based version of the spread, its most common use is as a passing offense designed to "spread" the field both horizontally and vertically. 
Some teams have managed to adapt with the times to keep winning consistently. In the rankings of the most victorious programs, Michigan, Ohio State, and Alabama ranked first, second, and third in total wins. Growth of bowl games In 1940, for the highest level of college football, there were only five bowl games (Rose, Orange, Sugar, Sun, and Cotton). By 1950, three more had joined that number and in 1970, there were still only eight major college bowl games. The number grew to eleven in 1976. At the birth of cable television and cable sports networks like ESPN, there were fifteen bowls in 1980. With more national venues and increased available revenue, the bowls saw an explosive growth throughout the 1980s and 1990s. In the thirty years from 1950 to 1980, seven bowl games were added to the schedule. From 1980 to 2008, an additional 20 bowl games were added to the schedule. Some have criticized this growth, claiming that the increased number of games has diluted the significance of playing in a bowl game. Yet others have countered that the increased number of games has increased exposure and revenue for a greater number of schools, and see it as a positive development. With the growth of bowl games, it became difficult to determine a national champion in a fair and equitable manner. As conferences became contractually bound to certain bowl games (a situation known as a tie-in), match-ups that guaranteed a consensus national champion became increasingly rare. In 1992, seven conferences and independent Notre Dame formed the Bowl Coalition, which attempted to arrange an annual No.1 versus No.2 matchup based on the final AP poll standings. The Coalition lasted for three years; however, several scheduling issues prevented much success; tie-ins still took precedence in several cases. For example, the Big Eight and SEC champions could never meet, since they were contractually bound to different bowl games. The coalition also excluded the Rose Bowl, arguably the most prestigious game in the nation, and two major conferences—the Pac-10 and Big Ten—meaning that it had limited success. In 1995, the Coalition was replaced by the Bowl Alliance, which reduced the number of bowl games to host a national championship game to three—the Fiesta, Sugar, and Orange Bowls—and the participating conferences to five—the ACC, SEC, Southwest, Big Eight, and Big East. It was agreed that the No.1 and No.2 ranked teams gave up their prior bowl tie-ins and were guaranteed to meet in the national championship game, which rotated between the three participating bowls. The system still did not include the Big Ten, Pac-10, or the Rose Bowl, and thus still lacked the legitimacy of a true national championship. However, one positive side effect is that if there were three teams at the end of the season vying for a national title, but one of them was a Pac-10/Big Ten team bound to the Rose Bowl, then there would be no difficulty in deciding which teams to place in the Bowl Alliance "national championship" bowl; if the Pac-10 / Big Ten team won the Rose Bowl and finished with the same record as whichever team won the other bowl game, they could have a share of the national title. This happened in the final year of the Bowl Alliance, with Michigan winning the 1998 Rose Bowl and Nebraska winning the 1998 Orange Bowl. Without the Pac-10/Big Ten team bound to a bowl game, it would be difficult to decide which two teams should play for the national title. 
Bowl Championship Series In 1998, a new system was put into place called the Bowl Championship Series. For the first time, it included all major conferences (ACC, Big East, Big 12, Big Ten, Pac-10, and SEC) and four major bowl games (Rose, Orange, Sugar and Fiesta). The champions of these six conferences, along with two "at-large" selections, were invited to play in the four bowl games. Each year, one of the four bowl games served as a national championship game. Also, a complex system of human polls, computer rankings, and strength of schedule calculations was instituted to rank schools. Based on this ranking system, the No.1 and No.2 teams met each year in the national championship game. Traditional tie-ins were maintained for schools and bowls not part of the national championship. For example, in years when not a part of the national championship, the Rose Bowl still hosted the Big Ten and Pac-10 champions. The system continued to change, as the formula for ranking teams was tweaked from year to year. At-large teams could be chosen from any of the Division I-A conferences, though only one selection—Utah in 2005—came from a BCS non-AQ conference. Starting with the 2006 season, a fifth game—simply called the BCS National Championship Game—was added to the schedule, to be played at the site of one of the four BCS bowl games on a rotating basis, one week after the regular bowl game. This opened up the BCS to two additional at-large teams. Also, rules were changed to add the champions of five additional conferences (Conference USA [C-USA], the Mid-American Conference [MAC], the Mountain West Conference [MW], the Sun Belt Conference and the Western Athletic Conference [WAC]), provided that said champion ranked in the top twelve in the final BCS rankings, or was within the top 16 of the BCS rankings and ranked higher than the champion of at least one of the BCS Automatic Qualifying (AQ) conferences. Several times since this rule change was implemented, schools from non-AQ conferences have played in BCS bowl games. In 2009, Boise State played TCU in the Fiesta Bowl, the first time two schools from non-AQ conferences played each other in a BCS bowl game. The last team from the non-AQ ranks to reach a BCS bowl game in the BCS era was Northern Illinois in 2012, which played in (and lost) the 2013 Orange Bowl. College Football Playoff The longtime resistance to a playoff system at the FBS level finally ended with the creation of the College Football Playoff (CFP) beginning with the 2014 season. The CFP is a Plus-One system, a concept that became popular as a BCS alternative following controversies in 2003 and 2004. The CFP is a four-team tournament whose participants are chosen and seeded by a 13-member selection committee. The semifinals are hosted by two of a group of traditional bowl games known as the New Year's Six, with semifinal hosting rotating annually among three pairs of games in the following order: Rose/Sugar, Orange/Cotton, and Fiesta/Peach. The two semifinal winners then advance to the College Football Playoff National Championship, whose host is determined by open bidding several years in advance. The establishment of the CFP followed a tumultuous period of conference realignment in Division I. The WAC, after seeing all but two of its football members leave, dropped football after the 2012 season. 
The Big East split into two leagues in 2013; the schools that did not play FBS football reorganized as a new non-football Big East Conference, while the FBS member schools that remained in the original structure joined with several new members and became the American Athletic Conference. The American retained the Big East's automatic BCS bowl bid for the 2013 season, but lost this status in the CFP era. The Alabama Crimson Tide have been the sports dominant power in recent years, qualifying for all but one College Football Playoff. The 10 FBS conferences are formally and popularly divided into two groups: Power Five – Five of the six AQ conferences of the BCS era, specifically the ACC, Big Ten, Big 12, Pac-12, and SEC. Each champion of these conferences is assured of a spot in a New Year's Six bowl, though not necessarily in a semifinal game. Notre Dame remains a football independent, but is counted among the Power Five because of its full but non-football ACC membership, including a football scheduling alliance with that conference. In the 2020 season, Notre Dame played as a full-time member of the conference due to the effects that COVID-19 had on the college football season, causing many conferences to play conference-only regular seasons. It has its own arrangement for access to the New Year's Six games should it meet certain standards. Group of Five – The remaining five FBS conferences – American, C-USA, MAC, MW, and Sun Belt. The other six current FBS independents, Army, BYU, Liberty, New Mexico State, UConn, and UMass are also considered to be part of this group. One conference champion from this group receives a spot in a New Year's Six game. In the first seven seasons of the CFP, the Group of Five did not place a team in a semifinal. In 2021, Cincinnati, a member of the American, qualified for the Playoff, becoming the first Group of 5 team to qualify. Of the seven Group of Five teams selected for New Year's Six bowls, three have won their games. Official rules and notable rule distinctions Although rules for the high school, college, and NFL games are generally consistent, there are several minor differences. The NCAA Football Rules Committee determines the playing rules for Division I (both Bowl and Championship Subdivisions), II, and III games (the National Association of Intercollegiate Athletics (NAIA) is a separate organization, but uses the NCAA rules). A pass is ruled complete if one of the receiver's feet is inbounds at the time of the catch. In the NFL both feet must be inbounds. A player is considered down when any part of his body other than the feet or hands touches the ground or when the ball carrier is tackled or otherwise falls and loses possession of the ball as he contacts the ground with any part of his body, with the sole exception of the holder for field goal and extra point attempts. In the NFL a player is active until he is tackled or forced down by a member of the opposing team (down by contact). The clock stops after the offense completes a first down and begins again—assuming it is following a play in which the clock would not normally stop—once the referee says the ball is ready for play. In the NFL the clock does not explicitly stop for a first down. Overtime was introduced in 1996, eliminating most ties except in the regular season. 
Since 2021, during overtime, each team is given one possession from its opponent's twenty-five yard line with no game clock, despite the one timeout per period and use of play clock; the procedure repeats for next possession if needed; all possessions thereafter will be from the opponent's 3-yard line. The team leading after both possessions is declared the winner. If the teams remain tied, overtime periods continue, with a coin flip determining the first possession. Possessions alternate with each overtime, until one team leads the other at the end of the overtime. A two-point conversion is required if a touchdown is scored in double overtime. From triple overtime, only two-point conversion attempts will be conducted hereafter. [In the NFL overtime is decided by a modified sudden-death period of 10 minutes in regular-season games (no overtime in preseason up to & since ) and 15 minutes in playoff games, and regular-season games can still end in a tie if neither team scores. Overtime for regular-season games in the NFL began with the 1974 season; the overtime period for all games was 15 minutes until it was shortened for non-playoff games effective in . In the postseason, if the teams are still tied, teams will play multiple overtime periods until either team scores.] A tie game is still possible, per NCAA Rule 3-3-3 (c) and (d). If a game is suspended because of inclement weather while tied, typically in the second half or at the end of regulation, and the game is unable to be continued, the game ends in a tie. Similar to baseball, if one team has scored in its possession and the other team has not completed its possession, the score during the overtime can be wiped out and the game ruled a tie. Some conferences may enforce a curfew for the safety of the players. If, because of numerous overtimes or weather, the game reaches the time-certain finish imposed by the curfew tied, the game is ruled a tie. Extra point tries are attempted from the three-yard line. Kicked tries count as one point. Teams can also go for "the two-point conversion" which is when a team will line up at the three-yard line and try to score. If they are successful, they receive two points, if they are not, then they receive zero points. Starting with the 2015 season, the NFL uses the 15-yard line as the line of scrimmage for placekick attempts, but the two-yard line for two-point attempts. The two-point conversion was not implemented in the NFL until 1994, but it had been previously used in the old American Football League (AFL) before it merged with the NFL in 1970. The defensive team may score two points on a point-after touchdown attempt by returning a blocked kick, fumble, or interception into the opposition's end zone. In addition, if the defensive team gains possession, but then moves backwards into the end zone and is stopped, a one-point safety will be awarded to the offense, although, unlike a real safety, the offense kicks off, opposed to the team charged with the safety. This college rule was added in 1988. The NFL, which previously treated the ball as dead during a conversion attempt—meaning that the attempt ended when the defending team gained possession of the football—adopted the college rule in 2015. The two-minute warning is not used in college football, except in rare cases where the scoreboard clock has malfunctioned and is not being used. There is an option to use instant replay review of officiating decisions. 
Division I FBS schools use replay in virtually all games; replay is rarely used in lower division games. Every play is subject to booth review with coaches only having one challenge. In the NFL, only scoring plays, turnovers, the final 2:00 of each half and all overtime periods are reviewed, and coaches are issued two challenges (with the option for a 3rd if the first two are successful). Since the 2012 season, the ball is placed on the 25-yard line following a touchback on either a kickoff or a free kick following a safety. The NFL adopted this rule in 2018. In all other touchback situations at all levels of the game, the ball is placed on the 20. Among other rule changes in 2007, kickoffs were moved from the 35-yard line back five yards to the 30-yard line, matching a change that the NFL had made in 1994. Some coaches and officials questioned this rule change as it could lead to more injuries to the players as there will likely be more kickoff returns. The rationale for the rule change was to help reduce dead time in the game. The NFL returned its kickoff location to the 35-yard line effective in 2011; college football did not do so until 2012. Several changes were made to college rules in 2011, all of which differ from NFL practice: If a player is penalized for unsportsmanlike conduct for actions that occurred during a play ending in a touchdown by that team, but before the goal line was crossed, the touchdown will be nullified. In the NFL, the same foul would result in a penalty on the conversion attempt or ensuing kickoff, at the option of the non-penalized team. If a team is penalized in the final minute of a half and the penalty causes the clock to stop, the opposing team now has the right to have 10 seconds run off the clock in addition to the yardage penalty. The NFL has a similar rule in the final minute of the half, but it applies only to specified violations against the offensive team. The new NCAA rule applies to penalties on both sides of the ball. Players lined up outside the tackle box—more specifically, those lined up more than 7 yards from the center—will now be allowed to block below the waist only if they are blocking straight ahead or toward the nearest sideline. On placekicks, offensive linemen now can't be engaged by at least three defensive players. They risk a 5-yard penalty upon violation. In 2018, the NCAA made a further change to touchback rules that the NFL has yet to duplicate; a fair catch on a kickoff or a free kick following a safety that takes place between the receiving team's goal line and 25-yard lines is treated as a touchback, with the ball placed at the 25. Yards lost on quarterback sacks are included in individual rushing yardage under NCAA rules. In the NFL, yards lost on sacks are included in team passing yardage, but are not included in individual passing statistics. Organization College teams mostly play other similarly sized schools through the NCAA's divisional system. Division I generally consists of the major collegiate athletic powers with larger budgets, more elaborate facilities, and (with the exception of a few conferences such as the Pioneer Football League) more athletic scholarships. Division II primarily consists of smaller public and private institutions that offer fewer scholarships than those in Division I. Division III institutions also field teams, but do not offer any scholarships. Football teams in Division I are further divided into the Bowl Subdivision (consisting of the largest programs) and the Championship Subdivision. 
The Bowl Subdivision has historically not used an organized tournament to determine its champion, and instead teams compete in post-season bowl games. That changed with the debut of the four-team College Football Playoff at the end of the 2014 season. Teams in each of these four divisions are further divided into various regional conferences. Several organizations operate college football programs outside the jurisdiction of the NCAA: The National Association of Intercollegiate Athletics has jurisdiction over more than 80 college football teams, mostly in the Midwest. The National Junior College Athletic Association has jurisdiction over two-year institutions, except in California. The California Community College Athletic Association governs sports, including football, at that state's two-year institutions. CCCAA members compete for their own championships and do not participate in the NJCAA. Club football, a sport in which student clubs run the teams instead of the colleges themselves, is overseen by two organizations: the National Club Football Association and the Intercollegiate Club Football Federation. The two competing sanctioning bodies have some overlap, and several clubs are members of both organizations. The Collegiate Sprint Football League governs 9 teams, all in the northeast. Its primary restriction is that all players must weigh less than the average college student (that threshold is set, , at ). A college that fields a team in the NCAA is not restricted from fielding teams in club or sprint football, and several colleges field two
the South was in 1905, when Dan McGugin and Captain Innis Brown, of Vanderbilt went to Atlanta to see Sewanee play Georgia Tech." Fuzzy Woodruff claims Davidson was the first in the south to throw a legal forward pass in 1906. The following season saw Vanderbilt execute a double pass play to set up the touchdown that beat Sewanee in a meeting of the unbeaten for the SIAA championship. Grantland Rice cited this event as the greatest thrill he ever witnessed in his years of watching sports. Vanderbilt coach Dan McGugin in Spalding's Football Guide's summation of the season in the SIAA wrote "The standing. First, Vanderbilt; second, Sewanee, a mighty good second;" and that Aubrey Lanier "came near winning the Vanderbilt game by his brilliant dashes after receiving punts." Bob Blake threw the final pass to center Stein Stone, catching it near the goal amongst defenders. Honus Craig then ran in the winning touchdown. Heisman shift Utilizing the "jump shift" offense, John Heisman's Georgia Tech Golden Tornado won 222 to 0 over Cumberland on October 7, 1916, at Grant Field in the most lopsided victory in college football history. Tech went on a 33-game winning streak during this period. The 1917 team was the first national champion from the South, led by a powerful backfield. It also had the first two players from the Deep South selected first-team All-American in Walker Carpenter and Everett Strupper. Pop Warner's Pittsburgh Panthers were also undefeated, but declined a challenge by Heisman to a game. When Heisman left Tech after 1919, his shift was still employed by protégé William Alexander. Notable intersectional games In 1906, Vanderbilt defeated Carlisle 4 to 0, the result of a Bob Blake field goal. In 1907 Vanderbilt fought Navy to a 6 to 6 tie. In 1910 Vanderbilt held defending national champion Yale to a scoreless tie. Helping Georgia Tech's claim to a title in 1917, the Auburn Tigers held undefeated, Chic Harley-led Big Ten champion Ohio State to a scoreless tie the week before Georgia Tech beat the Tigers 68 to 7. The next season, with many players gone due to World War I, a game was finally scheduled at Forbes Field with Pittsburgh. The Panthers, led by freshman Tom Davies, defeated Georgia Tech 32 to 0. Tech center Bum Day was the first player on a Southern team ever selected first-team All-American by Walter Camp. 1917 saw the rise of another Southern team in Centre of Danville, Kentucky. In 1921 Bo McMillin-led Centre upset defending national champion Harvard 6 to 0 in what is widely considered one of the greatest upsets in college football history. The next year Vanderbilt fought Michigan to a scoreless tie at the inaugural game at Dudley Field (now Vanderbilt Stadium), the first stadium in the South made exclusively for college football. Michigan coach Fielding Yost and Vanderbilt coach Dan McGugin were brothers-in-law, and the latter was the protégé of the former. The game featured the season's two best defenses and included a goal line stand by Vanderbilt to preserve the tie. Its result was "a great surprise to the sporting world." Commodore fans celebrated by throwing some 3,000 seat cushions onto the field. The game features prominently in Vanderbilt's history. That same year, Alabama upset Penn 9 to 7. Vanderbilt's line coach then was Wallace Wade, who coached Alabama to the South's first Rose Bowl victory in 1925. This game is commonly referred to as "the game that changed the south." Wade followed up the next season with an undefeated record and Rose Bowl tie.
Georgia's 1927 "dream and wonder team" defeated Yale for the first time. Georgia Tech, led by Heisman protégé William Alexander, gave the dream and wonder team its only loss, and the next year were national and Rose Bowl champions. The Rose Bowl included Roy Riegels' wrong-way run. On October 12, 1929, Yale lost to Georgia in Sanford Stadium in its first trip to the south. Wade's Alabama again won a national championship and Rose Bowl in 1930. Coaches of the era Glenn "Pop" Warner Glenn "Pop" Warner coached at several schools throughout his career, including the University of Georgia, Cornell University, University of Pittsburgh, Stanford University, Iowa State University, and Temple University. One of his most famous stints was at the Carlisle Indian Industrial School, where he coached Jim Thorpe, who went on to become the first president of the National Football League, an Olympic Gold Medalist, and is widely considered one of the best overall athletes in history. Warner wrote one of the first important books of football strategy, Football for Coaches and Players, published in 1927. Though the shift was invented by Stagg, Warner's single wing and double wing formations greatly improved upon it; for almost 40 years, these were among the most important formations in football. As part of his single and double wing formations, Warner was one of the first coaches to effectively utilize the forward pass. Among his other innovations are modern blocking schemes, the three-point stance, and the reverse play. The youth football league, Pop Warner Little Scholars, was named in his honor. Knute Rockne Knute Rockne rose to prominence in 1913 as an end for the University of Notre Dame, then a largely unknown Midwestern Catholic school. When Army scheduled Notre Dame as a warm-up game, they thought little of the small school. Rockne and quarterback Gus Dorais made innovative use of the forward pass, still at that point a relatively unused weapon, to defeat Army 35–13 and helped establish the school as a national power. Rockne returned to coach the team in 1918, and devised the powerful Notre Dame Box offense, based on Warner's single wing. He is credited with being the first major coach to emphasize offense over defense. Rockne is also credited with popularizing and perfecting the forward pass, a seldom used play at the time. The 1924 team featured the Four Horsemen backfield. In 1927, his complex shifts led directly to a rule change whereby all offensive players had to stop for a full second before the ball could be snapped. Rather than simply a regional team, Rockne's "Fighting Irish" became famous for barnstorming and played any team at any location. It was during Rockne's tenure that the annual Notre Dame-University of Southern California rivalry began. He led his team to an impressive 105–12–5 record before his premature death in a plane crash in 1931. He was so famous at that point that his funeral was broadcast nationally on radio. From a regional to a national sport (1930–1958) In the early 1930s, the college game continued to grow, particularly in the South, bolstered by fierce rivalries such as the "South's Oldest Rivalry", between Virginia and North Carolina and the "Deep South's Oldest Rivalry", between Georgia and Auburn. Although before the mid-1920s most national powers came from the Northeast or the Midwest, the trend changed when several teams from the South and the West Coast achieved national success. 
Wallace William Wade's 1925 Alabama team won the 1926 Rose Bowl after receiving its first national title and William Alexander's 1928 Georgia Tech team defeated California in the 1929 Rose Bowl. College football quickly became the most popular spectator sport in the South. Several major modern college football conferences rose to prominence during this time period. The Southwest Athletic Conference had been founded in 1915. Consisting mostly of schools from Texas, the conference saw back-to-back national champions with Texas Christian University (TCU) in 1938 and Texas A&M in 1939. The Pacific Coast Conference (PCC), a precursor to the Pac-12 Conference (Pac-12), had its own back-to-back champion in the University of Southern California which was awarded the title in 1931 and 1932. The Southeastern Conference (SEC) formed in 1932 and consisted mostly of schools in the Deep South. As in previous decades, the Big Ten continued to dominate in the 1930s and 1940s, with Minnesota winning 5 titles between 1934 and 1941, and Michigan (1933, 1947, and 1948) and Ohio State (1942) also winning titles. As it grew beyond its regional affiliations in the 1930s, college football garnered increased national attention. Four new bowl games were created: the Orange Bowl, Sugar Bowl, the Sun Bowl in 1935, and the Cotton Bowl in 1937. In lieu of an actual national championship, these bowl games, along with the earlier Rose Bowl, provided a way to match up teams from distant regions of the country that did not otherwise play. In 1936, the Associated Press began its weekly poll of prominent sports writers, ranking all of the nation's college football teams. Since there was no national championship game, the final version of the AP poll was used to determine who was crowned the National Champion of college football. The 1930s saw growth in the passing game. Though some coaches, such as General Robert Neyland at Tennessee, continued to eschew its use, several rules changes to the game had a profound effect on teams' ability to throw the ball. In 1934, the rules committee removed two major penalties—a loss of five yards for a second incomplete pass in any series of downs and a loss of possession for an incomplete pass in the end zone—and shrunk the circumference of the ball, making it easier to grip and throw. Players who became famous for taking advantage of the easier passing game included Alabama end Don Hutson and TCU passer "Slingin" Sammy Baugh. In 1935, New York City's Downtown Athletic Club awarded the first Heisman Trophy to University of Chicago halfback Jay Berwanger, who was also the first ever NFL Draft pick in 1936. The trophy was designed by sculptor Frank Eliscu and modeled after New York University player Ed Smith. The trophy recognizes the nation's "most outstanding" college football player and has become one of the most coveted awards in all of American sports. During World War II, college football players enlisted in the armed forces, some playing in Europe during the war. As most of these players had eligibility left on their college careers, some of them returned to college at West Point, bringing Army back-to-back national titles in 1944 and 1945 under coach Red Blaik. Doc Blanchard (known as "Mr. Inside") and Glenn Davis (known as "Mr. Outside") both won the Heisman Trophy, in 1945 and 1946. On the coaching staff of those 1944–1946 Army teams was future Pro Football Hall of Fame coach Vince Lombardi. The 1950s saw the rise of yet more dynasties and power programs. 
Oklahoma, under coach Bud Wilkinson, won three national titles (1950, 1955, 1956) and all ten Big Eight Conference championships in the decade while building a record 47-game winning streak. Woody Hayes led Ohio State to two national titles, in 1954 and 1957, and won three Big Ten titles. The Michigan State Spartans were known as the "football factory" during the 1950s, when coaches Clarence Munn and Duffy Daugherty led the Spartans to two national titles and two Big Ten titles after joining the Big Ten athletically in 1953. Wilkinson and Hayes, along with Robert Neyland of Tennessee, oversaw a revival of the running game in the 1950s. Passing numbers dropped from an average of 18.9 attempts in 1951 to 13.6 attempts in 1955, while teams averaged just shy of 50 running plays per game. Nine out of ten Heisman Trophy winners in the 1950s were runners. Notre Dame, one of the biggest passing teams of the decade, saw a substantial decline in success; the 1950s were the only decade between 1920 and 1990 when the team did not win at least a share of the national title. Paul Hornung, Notre Dame quarterback, did, however, win the Heisman in 1956, becoming the only player from a losing team ever to do so. Modern college football (since 1958) Following the enormous success of the 1958 NFL Championship Game, college football no longer enjoyed the same popularity as the NFL, at least on a national level. While both games benefited from the advent of television, since the late 1950s, the NFL has become a nationally popular sport while college football has maintained strong regional ties. As professional football became a national television phenomenon, college football did as well. In the 1950s, Notre Dame, which had a large national following, formed its own network to broadcast its games, but by and large the sport still retained a mostly regional following. In 1952, the NCAA claimed all television broadcasting rights for the games of its member institutions, and it alone negotiated television rights. This situation continued until 1984, when several schools brought a suit under the Sherman Antitrust Act; the Supreme Court ruled against the NCAA and schools are now free to negotiate their own television deals. ABC Sports began broadcasting a national Game of the Week in 1966, bringing key matchups and rivalries to a national audience for the first time. New formations and play sets continued to be developed. Emory Bellard, an assistant coach under Darrell Royal at the University of Texas, developed a three-back option style offense known as the wishbone. The wishbone is a run-heavy offense that depends on the quarterback making last-second decisions about when and to whom to hand off or pitch the ball. Royal went on to teach the offense to other coaches, including Bear Bryant at Alabama, Chuck Fairbanks at Oklahoma and Pepper Rodgers at UCLA, all of whom adapted and developed it to their own tastes. The strategic opposite of the wishbone is the spread offense, developed by professional and college coaches throughout the 1960s and 1970s. Though some schools play a run-based version of the spread, its most common use is as a passing offense designed to "spread" the field both horizontally and vertically. Some teams have managed to adapt with the times to keep winning consistently. In the rankings of the most victorious programs, Michigan, Ohio State, and Alabama ranked first, second, and third in total wins.
Growth of bowl games In 1940, for the highest level of college football, there were only five bowl games (Rose, Orange, Sugar, Sun, and Cotton). By 1950, three more had joined that number and in 1970, there were still only eight major college bowl games. The number grew to eleven in 1976. At the birth of cable television and cable sports networks like ESPN, there were fifteen bowls in 1980. With more national venues and increased available revenue, the bowls saw an explosive growth throughout the 1980s and 1990s. In the thirty years from 1950 to 1980, seven bowl games were added to the schedule. From 1980 to 2008, an additional 20 bowl games were added to the schedule. Some have criticized this growth, claiming that the increased number of games has diluted the significance of playing in a bowl game. Yet others have countered that the increased number of games has increased exposure and revenue for a greater number of schools, and see it as a positive development. With the growth of bowl games, it became difficult to determine a national champion in a fair and equitable manner. As conferences became contractually bound to certain bowl games (a situation known as a tie-in), match-ups that guaranteed a consensus national champion became increasingly rare. In 1992, seven conferences and independent Notre Dame formed the Bowl Coalition, which attempted to arrange an annual No.1 versus No.2 matchup based on the final AP poll standings. The Coalition lasted for three years; however, several scheduling issues prevented much success; tie-ins still took precedence in several cases. For example, the Big Eight and SEC champions could never meet, since they were contractually bound to different bowl games. The coalition also excluded the Rose Bowl, arguably the most prestigious game in the nation, and two major conferences—the Pac-10 and Big Ten—meaning that it had limited success. In 1995, the Coalition was replaced by the Bowl Alliance, which reduced the number of bowl games to host a national championship game to three—the Fiesta, Sugar, and Orange Bowls—and the participating conferences to five—the ACC, SEC, Southwest, Big Eight, and Big East. It was agreed that the No.1 and No.2 ranked teams gave up their prior bowl tie-ins and were guaranteed to meet in the national championship game, which rotated between the three participating bowls. The system still did not include the Big Ten, Pac-10, or the Rose Bowl, and thus still lacked the legitimacy of a true national championship. However, one positive side effect is that if there were three teams at the end of the season vying for a national title, but one of them was a Pac-10/Big Ten team bound to the Rose Bowl, then there would be no difficulty in deciding which teams to place in the Bowl Alliance "national championship" bowl; if the Pac-10 / Big Ten team won the Rose Bowl and finished with the same record as whichever team won the other bowl game, they could have a share of the national title. This happened in the final year of the Bowl Alliance, with Michigan winning the 1998 Rose Bowl and Nebraska winning the 1998 Orange Bowl. Without the Pac-10/Big Ten team bound to a bowl game, it would be difficult to decide which two teams should play for the national title. Bowl Championship Series In 1998, a new system was put into place called the Bowl Championship Series. For the first time, it included all major conferences (ACC, Big East, Big 12, Big Ten, Pac-10, and SEC) and four major bowl games (Rose, Orange, Sugar and Fiesta). 
The champions of these six conferences, along with two "at-large" selections, were invited to play in the four bowl games. Each year, one of the four bowl games served as a national championship game. Also, a complex system of human polls, computer rankings, and strength of schedule calculations was instituted to rank schools. Based on this ranking system, the No.1 and No.2 teams met each year in the national championship game. Traditional tie-ins were maintained for schools and bowls not part of the national championship. For example, in years when not a part of the national championship, the Rose Bowl still hosted the Big Ten and Pac-10 champions. The system continued to change, as the formula for ranking teams was tweaked from year to year. At-large teams could be chosen from any of the Division I-A conferences, though only one selection—Utah in 2005—came from a BCS non-AQ conference. Starting with the 2006 season, a fifth game—simply called the BCS National Championship Game—was added to the schedule, to be played at the site of one of the four BCS bowl games on a rotating basis, one week after the regular bowl game. This opened up the BCS to two additional at-large teams. Also, rules were changed to add the champions of five additional conferences (Conference USA [C-USA], the Mid-American Conference [MAC], the Mountain West Conference [MW], the Sun Belt Conference and the Western Athletic Conference [WAC]), provided that said champion ranked in the top twelve in the final BCS rankings, or was within the top 16 of the BCS rankings and ranked higher than the champion of at least one of the BCS Automatic Qualifying (AQ) conferences. Several times since this rule change was implemented, schools from non-AQ conferences have played in BCS bowl games. In 2009, Boise State played TCU in the Fiesta Bowl, the first time two schools from non-AQ conferences played each other in a BCS bowl game. The last team from the non-AQ ranks to reach a BCS bowl game in the BCS era was Northern Illinois in 2012, which played in (and lost) the 2013 Orange Bowl. College Football Playoff The longtime resistance to a playoff system at the FBS level finally ended with the creation of the College Football Playoff (CFP) beginning with the 2014 season. The CFP is a Plus-One system, a concept that became popular as a BCS alternative following controversies in 2003 and 2004. The CFP is a four-team tournament whose participants are chosen and seeded by a 13-member selection committee. The semifinals are hosted by two of a group of traditional bowl games known as the New Year's Six, with semifinal hosting rotating annually among three pairs of games in the following order: Rose/Sugar, Orange/Cotton, and Fiesta/Peach. The two semifinal winners then advance to the College Football Playoff National Championship, whose host is determined by open bidding several years in advance. The establishment of the CFP followed a tumultuous period of conference realignment in Division I. The WAC, after seeing all but two of its football members leave, dropped football after the 2012 season. The Big East split into two leagues in 2013; the schools that did not play FBS football reorganized as a new non-football Big East Conference, while the FBS member schools that remained in the original structure joined with several new members and became the American Athletic Conference. The American retained the Big East's automatic BCS bowl bid for the 2013 season, but lost this status in the CFP era. 
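The non-AQ access condition described above reduces to a small boolean test. The following Python sketch is purely illustrative (the function name and data shapes are my own, not anything from the BCS itself); it assumes rankings are given as integers with 1 being best and unranked champions represented by a large sentinel value.

```python
def non_aq_champion_earns_bcs_berth(champ_rank: int, aq_champ_ranks: list[int]) -> bool:
    """Apply the non-AQ access rule as described above (illustrative only).

    champ_rank: final BCS ranking of the non-AQ conference champion (1 = best).
    aq_champ_ranks: final BCS rankings of the six AQ conference champions;
    an unranked champion can be passed as a large number such as 999.
    """
    if champ_rank <= 12:
        # Top twelve in the final BCS rankings qualifies outright.
        return True
    # Otherwise, top sixteen is enough only if at least one AQ champion ranks lower.
    return champ_rank <= 16 and any(champ_rank < rank for rank in aq_champ_ranks)


# A champion ranked 14th qualifies when some AQ champion sits 18th, but not otherwise.
print(non_aq_champion_earns_bcs_berth(14, [2, 5, 7, 9, 11, 18]))   # True
print(non_aq_champion_earns_bcs_berth(14, [2, 5, 7, 9, 11, 13]))   # False
```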
The Alabama Crimson Tide have been the sport's dominant power in recent years, qualifying for all but one College Football Playoff. The 10 FBS conferences are formally and popularly divided into two groups: Power Five – Five of the six AQ conferences of the BCS era, specifically the ACC, Big Ten, Big 12, Pac-12, and SEC. Each champion of these conferences is assured of a spot in a New Year's Six bowl, though not necessarily in a semifinal game. Notre Dame remains a football independent, but is counted among the Power Five because of its full but non-football ACC membership, including a football scheduling alliance with that conference. In the 2020 season, Notre Dame played as a full-time member of the conference due to the effects that COVID-19 had on the college football season, causing many conferences to play conference-only regular seasons. It has its own arrangement for access to the New Year's Six games should it meet certain standards. Group of Five – The remaining five FBS conferences – American, C-USA, MAC, MW, and Sun Belt. The other six current FBS independents, Army, BYU, Liberty, New Mexico State, UConn, and UMass, are also considered to be part of this group. One conference champion from this group receives a spot in a New Year's Six game. In the first seven seasons of the CFP, the Group of Five did not place a team in a semifinal. In 2021, Cincinnati, a member of the American, qualified for the Playoff, becoming the first Group of Five team to do so. Of the seven Group of Five teams selected for New Year's Six bowls, three have won their games. Official rules and notable rule distinctions Although rules for the high school, college, and NFL games are generally consistent, there are several minor differences. The NCAA Football Rules Committee determines the playing rules for Division I (both Bowl and Championship Subdivisions), II, and III games (the National Association of Intercollegiate Athletics (NAIA) is a separate organization, but uses the NCAA rules). A pass is ruled complete if one of the receiver's feet is inbounds at the time of the catch. In the NFL both feet must be inbounds. A player is considered down when any part of his body other than the feet or hands touches the ground or when the ball carrier is tackled or otherwise falls and loses possession of the ball as he contacts the ground with any part of his body, with the sole exception of the holder for field goal and extra point attempts. In the NFL a player is active until he is tackled or forced down by a member of the opposing team (down by contact). The clock stops after the offense completes a first down and begins again—assuming it is following a play in which the clock would not normally stop—once the referee says the ball is ready for play. In the NFL the clock does not explicitly stop for a first down. Overtime was introduced in 1996, all but eliminating tie games. Since 2021, each team is given one overtime possession starting at its opponent's twenty-five-yard line with no game clock, although each team retains one timeout per period and the play clock is used; if further periods are needed the procedure repeats, and from the third overtime onward possessions are taken from the opponent's 3-yard line. The team leading after both possessions is declared the winner. If the teams remain tied, overtime periods continue, with a coin flip determining the first possession. Possessions alternate with each overtime, until one team leads the other at the end of an overtime period.
A two-point conversion is required if a touchdown is scored in double overtime; from the third overtime onward, only two-point conversion attempts are conducted. [In the NFL overtime is decided by a modified sudden-death period of 10 minutes in regular-season games (no overtime in preseason up to & since ) and 15 minutes in playoff games, and regular-season games can still end in a tie if neither team scores. Overtime for regular-season games in the NFL began with the 1974 season; the overtime period for all games was 15 minutes until it was shortened for non-playoff games effective in . In the postseason, if the teams are still tied, teams will play multiple overtime periods until either team scores.] A tie game is still possible, per NCAA Rule 3-3-3 (c) and (d). If a game is suspended because of inclement weather while tied, typically in the second half or at the end of regulation, and the game is unable to be continued, the game ends in a tie. Similar to baseball, if one team has scored in its possession and the other team has not completed its possession, the score during the overtime can be wiped out and the game ruled a tie. Some conferences may enforce a curfew for the safety of the players. If, because of numerous overtimes or weather, the game reaches the time-certain finish imposed by the curfew tied, the game is ruled a tie. Extra point tries are attempted from the three-yard line. Kicked tries count as one point. Teams can also attempt a two-point conversion, lining up at the three-yard line and trying to score; a successful attempt is worth two points, an unsuccessful one none. Starting with the 2015 season, the NFL uses the 15-yard line as the line of scrimmage for placekick attempts, but the two-yard line for two-point attempts. The two-point conversion was not implemented in the NFL until 1994, but it had been previously used in the old American Football League (AFL) before it merged with the NFL in 1970. The defensive team may score two points on a point-after touchdown attempt by returning a blocked kick, fumble, or interception into the opposition's end zone. In addition, if the defensive team gains possession, but then moves backwards into the end zone and is stopped, a one-point safety will be awarded to the offense, although, unlike a real safety, the offense kicks off, as opposed to the team charged with the safety. This college rule was added in 1988. The NFL, which previously treated the ball as dead during a conversion attempt—meaning that the attempt ended when the defending team gained possession of the football—adopted the college rule in 2015. The two-minute warning is not used in college football, except in rare cases where the scoreboard clock has malfunctioned and is not being used. There is an option to use instant replay review of officiating decisions. Division I FBS schools use replay in virtually all games; replay is rarely used in lower division games. Every play is subject to booth review with coaches only having one challenge. In the NFL, only scoring plays, turnovers, the final 2:00 of each half and all overtime periods are reviewed, and coaches are issued two challenges (with the option for a third if the first two are successful). Since the 2012 season, the ball is placed on the 25-yard line following a touchback on either a kickoff or a free kick following a safety. The NFL adopted this rule in 2018. In all other touchback situations at all levels of the game, the ball is placed on the 20.
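As a compact illustration of the post-2021 college overtime structure described above, the Python sketch below maps an overtime period number to the kind of possession each team receives. It is an illustrative model only; the function name and wording are my own, not NCAA material.

```python
def college_ot_possession(ot_period: int) -> str:
    """Summarize each team's possession in a given overtime period under the
    post-2021 NCAA format described above (illustrative model only)."""
    if ot_period == 1:
        # Full possession from the opponent's 25; a kicked PAT is allowed after a touchdown.
        return "possession from the 25; kicked PAT allowed after a touchdown"
    if ot_period == 2:
        # Full possession from the opponent's 25, but a touchdown must be followed by a two-point try.
        return "possession from the 25; two-point try required after a touchdown"
    # Third overtime and beyond: alternating single two-point attempts from the 3-yard line.
    return "single two-point conversion attempt from the 3-yard line"


for period in (1, 2, 3, 4):
    print(period, college_ot_possession(period))
```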
Among other rule changes in 2007, kickoffs were moved from the 35-yard line back five yards to the 30-yard line, matching a change that the NFL had made in 1994. Some coaches and officials questioned this rule change, as it could lead to more injuries to players because there would likely be more kickoff returns. The rationale for the rule change was to help reduce dead time in the game. The NFL returned its kickoff location to the 35-yard line effective in 2011; college football did not do so until 2012. Several changes were made to college rules in 2011, all of which differ from NFL practice: If a player is penalized for unsportsmanlike conduct for actions that occurred during a play ending in a touchdown by that team, but before the goal line was crossed, the touchdown will be nullified. In the NFL, the same foul would result in a penalty on the conversion attempt or ensuing kickoff, at the option of the non-penalized team. If a team is penalized in the final minute of a half and the penalty causes the clock to stop, the opposing team now has the right to have 10 seconds run off the clock in addition to the yardage penalty. The NFL has a similar rule in the final minute of the half, but it applies only to specified violations against the offensive team. The new NCAA rule applies to penalties on both sides of the ball. Players lined up outside the tackle box—more specifically, those lined up more than 7 yards from the center—will now be allowed to block below the waist only if they are blocking straight ahead or toward the nearest sideline. On placekicks, offensive linemen may no longer be engaged by three or more defensive players; violations draw a 5-yard penalty. In 2018, the NCAA made a further change to touchback rules that the NFL has yet to duplicate; a fair catch on a kickoff or a free kick following a safety that takes place between the receiving team's goal line and 25-yard lines is treated as a touchback, with the ball placed at the 25. Yards lost on quarterback sacks are included in individual rushing yardage under NCAA rules. In the NFL, yards lost on sacks are included in team passing yardage, but are not included in individual passing statistics. Organization College teams mostly play other similarly sized schools through the NCAA's divisional system. Division I generally consists of the major collegiate athletic powers with larger budgets, more elaborate facilities, and (with the exception of a few conferences such as the Pioneer Football League) more athletic scholarships. Division II primarily consists of smaller public and private institutions that offer fewer scholarships than those in Division I. Division III institutions also field teams, but do not offer any scholarships. Football teams in Division I are further divided into the Bowl Subdivision (consisting of the largest programs) and the Championship Subdivision. The Bowl Subdivision has historically not used an organized tournament to determine its champion, and instead teams compete in post-season bowl games. That changed with the debut of the four-team College Football Playoff at the end of the 2014 season. Teams in each of these four divisions are further divided into various regional conferences. Several organizations operate college football programs outside the jurisdiction of the NCAA: The National Association of Intercollegiate Athletics has jurisdiction over more than 80 college football teams, mostly in the Midwest.
The National Junior College Athletic Association has jurisdiction over two-year institutions, except in California. The California Community College Athletic Association governs sports, including football, at that state's two-year institutions. CCCAA members compete for their own championships and do not participate in the NJCAA. Club football, a sport in which student clubs run the teams instead of the colleges themselves, is overseen by two organizations: the National Club Football Association and the Intercollegiate Club Football Federation. The two competing sanctioning bodies have some overlap, and several clubs are members of both organizations. The Collegiate Sprint Football League governs 9 teams, all in the northeast. Its primary restriction is that all players must weigh less than the average college student (that threshold is set, , at ). A college that fields a team in the NCAA is not restricted from fielding teams in club or sprint football, and several colleges field two teams, a varsity (NCAA) squad and a club or sprint squad (no schools, , field both club and sprint teams at the same time). Coaching National championships College football national championships in NCAA Division I FBS – Overview of systems for determining national champions at the highest level of college football from 1869 to present. College Football Playoff – Four-team playoff for determining national champions at the highest level of college football beginning in 2014. Bowl Championship Series – The primary method of determining the national champion at the highest level of college football from 1998 to 2013; preceded by the Bowl Alliance (1995–1997) and the Bowl Coalition (1992–1994). NCAA Division I Football Championship – Playoff for determining the national champion at the second highest level of college football, Division I FCS, from 1978 to present. NCAA Division I FCS Consensus Mid-Major Football National Championship – Awarded by poll from 2001 to 2007 for a subset of the second-highest level of play in college football, FCS. NCAA Division II Football Championship – Playoff for determining the national champion at the third highest level of college football from 1973 to present. NCAA Division III Football Championship – Playoff for determining the national champion at the fourth highest level of college football from 1973 to present. NAIA National Football Championship – Playoff for determining the national champions of college football governed by the National Association of Intercollegiate Athletics. NJCAA National Football Championship – Playoff for determining the national champions of college football governed by the National Junior College Athletic Association. CSFL Championship – Champions of the Collegiate Sprint Football League, a weight restricted football sport. Team maps Playoff games Starting with the 2014 season, four Division I FBS teams have been selected at the end of the regular season to compete in a playoff for the FBS national championship. The inaugural champion was Ohio State University. The College Football Playoff replaced the Bowl Championship Series, which had been used as the selection method to determine the national championship game participants since the 1998 season. The Georgia Bulldogs won the most recent playoff 33-18 over the Alabama Crimson Tide in the 2022 College Football Playoff. At the Division I FCS level, the teams participate in a 24-team playoff (most recently expanded from 20 teams in 2013) to determine the national championship.
Under the current playoff structure, the top eight teams are all seeded, and receive a bye week in the first round. The highest seed receives automatic home field advantage. Starting in 2013, non-seeded teams can only host a playoff game if both teams involved are unseeded; in such a matchup, the schools must bid for the right to host the game. Selection for the playoffs is determined by a selection committee, although usually a team must have an 8–4 record to even be considered. Losses to an FBS team count against their playoff eligibility, while wins against a Division II opponent do not count towards playoff consideration. Thus, only Division I wins (whether FBS, FCS, or FCS non-scholarship) are considered for playoff selection. The Division I National Championship game is held in Frisco, Texas. Division II and Division III of the NCAA also participate in their own respective playoffs, crowning national champions at the end of the season. The National Association of Intercollegiate Athletics also holds a playoff. Bowl games Unlike other college football divisions and most other sports—collegiate or professional—the Football Bowl Subdivision, formerly known as Division I-A college football, has historically not employed a playoff system to determine a champion. Instead, it has a series of postseason "bowl games". The annual National Champion in the Football Bowl Subdivision is then instead traditionally determined by a vote of sports writers and other non-players. This system has been challenged often, beginning with an NCAA committee proposal in 1979 to have a four-team playoff following the bowl games. However, little headway was made in instituting a playoff tournament until 2014, given the entrenched vested economic interests in the various bowls. Although the NCAA publishes lists of claimed FBS-level national champions in its official publications, it has never recognized an official FBS national championship; this policy continues even after the establishment of the College Football Playoff (which is not directly run by the NCAA) in 2014. As a result, the official Division I National Champion is the winner of the Football Championship Subdivision, as it is the highest level of football with an NCAA-administered championship tournament. (This also means that FBS student-athletes are the only NCAA athletes who are ineligible for the Elite 90 Award, an academic award presented to the upper class player with the highest grade-point average among the teams that advance to the championship final site.) The first bowl game was the 1902 Rose Bowl, played between Michigan and Stanford; Michigan won 49–0. It ended when Stanford requested and Michigan agreed to end it with 8 minutes on the clock. That game was so lopsided that the game was not played annually until 1916, when the Tournament of Roses decided to reattempt the postseason game. The term "bowl" originates from the shape of the Rose Bowl stadium in Pasadena, California, which was built in 1923 and resembled the Yale Bowl, built in 1915. This is where the name came into use, as it became known as the Rose Bowl Game. Other games came along and used the term "bowl", whether the stadium was shaped like a bowl or not. At the Division I FBS level, teams must earn the right to be bowl eligible by winning at least 6 games during the season (teams that play 13 games in a season, which is allowed for Hawaii and any of its home opponents, must win 7 games). 
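The six-win threshold just described (seven wins for a 13-game schedule) reduces to a one-line check. The sketch below is a hypothetical Python helper reflecting the rule as stated for that era, not an NCAA API, and it ignores finer points such as which wins count toward eligibility for bowl-eligible teams.

```python
def is_bowl_eligible(wins: int, regular_season_games: int) -> bool:
    """Bowl eligibility as described above: six wins normally, seven when a
    13-game regular season is played (Hawaii and its home opponents)."""
    required_wins = 7 if regular_season_games >= 13 else 6
    return wins >= required_wins


print(is_bowl_eligible(6, 12))  # True
print(is_bowl_eligible(6, 13))  # False: a 13-game schedule requires seven wins
```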
They are then invited to a bowl game based on their conference ranking and the tie-ins that the conference has to each bowl game. For the 2009 season, there were 34 bowl games, so 68 of the 120 Division I FBS teams were invited to play at a bowl. These games are played from mid-December to early January and most of the later bowl games are typically considered more prestigious. After the Bowl Championship Series, additional all-star bowl games round out the post-season schedule through the beginning of February. Division I FBS National Championship Games Partly as a compromise between bowl game and playoff supporters, the NCAA created the Bowl Championship Series (BCS) in 1998 in order to create a definitive national championship game for college football. The series included the four most prominent bowl games (Rose Bowl, Orange Bowl, Sugar Bowl, Fiesta Bowl), while the national championship game rotated each year among these venues. The BCS system was slightly adjusted in 2006, as the NCAA added a fifth game to the series, called the National Championship Game. This allowed the four other BCS bowls to use their normal selection process to select the teams in their games while the top two teams in the BCS rankings would play in the new National Championship Game. The BCS selection committee used a complicated, and often controversial, computer system to rank all Division I-FBS teams and the top two teams at the end of the season played for the national championship. This computer system, which factored in newspaper polls, online polls, coaches' polls, strength of schedule, and various other factors of a team's season, led to much dispute over whether the two best teams in the country were being selected to play in the National Championship Game. The BCS ended after the 2013 season and, since the 2014 season, the FBS national champion has been determined by a four-team tournament known as the College Football Playoff (CFP). A selection committee of college football experts decides the participating teams. Six major bowl games (the Rose, Sugar, Cotton, Orange, Peach, and Fiesta) rotate on a three-year cycle as semifinal games, with the winners advancing to the College Football Playoff National Championship. This arrangement is contractually locked in until the 2026 season. Controversy College football is a controversial institution within American higher education, where the amount of money involved—what people will pay for the entertainment provided—is a corrupting factor within universities that they are usually ill-equipped to deal with (Jay Schalin, "Time for universities to punt football", Washington Times, September 1, 2011, http://www.washingtontimes.com/news/2011/sep/1/time-for-universities-to-punt-football/?page=all). According to William E. Kirwan, chancellor of the University of Maryland System and co-director of the Knight Commission on Intercollegiate Athletics, "We've reached a point where big-time intercollegiate athletics is undermining the integrity of our
and joints, endocarditis, gastroenteritis, malignant otitis externa, respiratory tract infections, cellulitis, urinary tract infections, prostatitis, anthrax, and chancroid. Ciprofloxacin only treats bacterial infections; it does not treat viral infections such as the common cold. For certain uses including acute sinusitis, lower respiratory tract infections and uncomplicated gonorrhea, ciprofloxacin is not considered a first-line agent. Ciprofloxacin occupies an important role in treatment guidelines issued by major medical societies for the treatment of serious infections, especially those likely to be caused by Gram-negative bacteria, including Pseudomonas aeruginosa. For example, ciprofloxacin in combination with metronidazole is one of several first-line antibiotic regimens recommended by the Infectious Diseases Society of America for the treatment of community-acquired abdominal infections in adults. It also features prominently in treatment guidelines for acute pyelonephritis, complicated or hospital-acquired urinary tract infection, acute or chronic prostatitis, certain types of endocarditis, certain skin infections, and prosthetic joint infections. In other cases, treatment guidelines are more restrictive, recommending in most cases that older, narrower-spectrum drugs be used as first-line therapy for less severe infections to minimize fluoroquinolone-resistance development. For example, the Infectious Diseases Society of America recommends the use of ciprofloxacin and other fluoroquinolones in urinary tract infections be reserved to cases of proven or expected resistance to narrower-spectrum drugs such as nitrofurantoin or trimethoprim/sulfamethoxazole. The European Association of Urology recommends ciprofloxacin as an alternative regimen for the treatment of uncomplicated urinary tract infections, but cautions that the potential for "adverse events have to be considered". Although approved by regulatory authorities for the treatment of respiratory infections, ciprofloxacin is not recommended for respiratory infections by most treatment guidelines due in part to its modest activity against the common respiratory pathogen Streptococcus pneumoniae. "Respiratory quinolones" such as levofloxacin, having greater activity against this pathogen, are recommended as first line agents for the treatment of community-acquired pneumonia in patients with important co-morbidities and in patients requiring hospitalization (Infectious Diseases Society of America 2007). Similarly, ciprofloxacin is not recommended as a first-line treatment for acute sinusitis. Ciprofloxacin is approved for the treatment of gonorrhea in many countries, but this recommendation is widely regarded as obsolete due to resistance development. Pregnancy In the United States ciprofloxacin is pregnancy category C. This category includes drugs for which no adequate and well-controlled studies in human pregnancy exist, and for which animal studies have suggested the potential for harm to the fetus, but potential benefits may warrant use of the drug in pregnant women despite potential risks. An expert review of published data on experiences with ciprofloxacin use during pregnancy by the Teratogen Information System concluded therapeutic doses during pregnancy are unlikely to pose a substantial teratogenic risk (quantity and quality of data=fair), but the data are insufficient to state no risk exists. 
Exposure to quinolones, including levofloxacin, during the first-trimester is not associated with an increased risk of stillbirths, premature births, birth defects, or low birth weight. Two small post-marketing epidemiology studies of mostly short-term, first-trimester exposure found that fluoroquinolones did not increase risk of major malformations, spontaneous abortions, premature birth, or low birth weight. The label notes, however, that these studies are insufficient to reliably evaluate the definitive safety or risk of less common defects by ciprofloxacin in pregnant women and their developing fetuses. Breastfeeding Fluoroquinolones have been reported as present in a mother's milk and thus passed on to the nursing child. The U.S. Food and Drug Administration (FDA) recommends that because of the risk of serious adverse reactions (including articular damage) in infants nursing from mothers taking ciprofloxacin, a decision should be made whether to discontinue nursing or discontinue the drug, taking into account the importance of the drug to the mother. Children Oral and intravenous ciprofloxacin are approved by the FDA for use in children for only two indications due to the risk of permanent injury to the musculoskeletal system: Inhalational anthrax (postexposure) Complicated urinary tract infections and pyelonephritis due to Escherichia coli, but never as first-line agents. Current recommendations by the American Academy of Pediatrics note the systemic use of ciprofloxacin in children should be restricted to infections caused by multidrug-resistant pathogens or when no safe or effective alternatives are available. Spectrum of activity Its spectrum of activity includes most strains of bacterial pathogens responsible for community-acquired pneumonias, bronchitis, urinary tract infections, and gastroenteritis. Ciprofloxacin is particularly effective against Gram-negative bacteria (such as Escherichia coli, Haemophilus influenzae, Klebsiella pneumoniae, Legionella pneumophila, Moraxella catarrhalis, Proteus mirabilis, and Pseudomonas aeruginosa), but is less effective against Gram-positive bacteria (such as methicillin-sensitive Staphylococcus aureus, Streptococcus pneumoniae, and Enterococcus faecalis) than newer fluoroquinolones. Bacterial resistance As a result of its widespread use to treat minor infections readily treatable with older, narrower spectrum antibiotics, many bacteria have developed resistance to this drug in recent years, leaving it significantly less effective than it would have been otherwise. Resistance to ciprofloxacin and other fluoroquinolones may evolve rapidly, even during a course of treatment. Numerous pathogens, including enterococci, Streptococcus pyogenes and Klebsiella pneumoniae (quinolone-resistant) now exhibit resistance. Widespread veterinary usage of the fluoroquinolones, particularly in Europe, has been implicated. Meanwhile, some Burkholderia cepacia, Clostridium innocuum and Enterococcus faecium strains have developed resistance to ciprofloxacin to varying degrees. Fluoroquinolones had become the class of antibiotics most commonly prescribed to adults in 2002. Nearly half (42%) of those prescriptions in the U.S. were for conditions not approved by the FDA, such as acute bronchitis, otitis media, and acute upper respiratory tract infection, according to a study supported in part by the Agency for Healthcare Research and Quality. 
Additionally, they were commonly prescribed for medical conditions that were not even bacterial to begin with, such as viral infections, or for which no proven benefit existed. Contraindications Contraindications include: concurrent use of tizanidine; hypersensitivity to any member of the quinolone class of antimicrobial agents; and myasthenia gravis, as muscle weakness may be exacerbated. Ciprofloxacin is also considered to be contraindicated in children (except for the indications outlined above), in pregnancy, in nursing mothers, and in people with epilepsy or other seizure disorders. Caution may be required in people with Marfan syndrome or Ehlers-Danlos syndrome. Adverse effects Adverse effects can involve the tendons, muscles, joints, nerves, and the central nervous system. Rates of adverse effects appear to be higher than with some groups of antibiotics, such as cephalosporins, but lower than with others, such as clindamycin. Compared to other antibiotics, some studies find a higher rate of adverse effects, while others find no difference. In clinical trials, most of the adverse events were described as mild or moderate in severity, abated soon after the drug was discontinued, and required no treatment. Some adverse effects may be permanent. Ciprofloxacin was stopped because of an adverse event in 1% of people treated with the medication by mouth. The most frequently reported drug-related events, from trials of all formulations, all dosages, all drug-therapy durations, and for all indications, were nausea (2.5%), diarrhea (1.6%), abnormal liver function tests (1.3%), vomiting (1%), and rash (1%). Other adverse events occurred at rates of <1%. Tendon problems Ciprofloxacin includes a boxed warning in the United States due to an increased risk of tendinitis and tendon rupture, especially in people who are older than 60 years, people who also use corticosteroids, and people with kidney, lung, or heart transplants. Tendon rupture can occur during therapy or even months after discontinuation of the medication. One study found that fluoroquinolone use was associated with a 1.9-fold increase in tendon problems. The risk increased to 3.2 in those over 60 years of age and to 6.2 in those over the age of 60 who were also taking corticosteroids. Among the 46,766 quinolone users in the study, 38 (0.08%) cases of Achilles tendon rupture were identified. Cardiac arrhythmia The fluoroquinolones, including ciprofloxacin, are associated with an increased risk of cardiac toxicity, including QT interval prolongation, torsades de pointes, ventricular arrhythmia, and sudden death. Nervous system Because ciprofloxacin is lipophilic, it can cross the blood-brain barrier. The 2013 FDA label warns of nervous system effects. Ciprofloxacin, like other fluoroquinolones, is known to trigger seizures or lower the seizure threshold, and may cause other central nervous system adverse effects. Headache, dizziness, and insomnia have been reported as occurring fairly commonly in postapproval review articles, along with a much lower incidence of serious CNS adverse effects such as tremors, psychosis, anxiety, hallucinations, paranoia, and suicide attempts, especially at higher doses. Like other fluoroquinolones, it is also known to cause peripheral neuropathy that may be irreversible, with symptoms such as weakness, burning pain, tingling, or numbness. 
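As a quick check of the tendon-rupture figures quoted above, the reported 0.08% incidence follows directly from the raw counts; a minimal, illustrative sketch using only the numbers given in the text:

    # Achilles tendon ruptures reported among quinolone users in the cited study.
    cases = 38
    users = 46_766

    incidence_pct = 100.0 * cases / users
    print(f"Incidence: {incidence_pct:.2f}%")   # ~0.08%, matching the figure in the text

    # Relative risks quoted above (fluoroquinolone users vs. non-users); these are
    # multipliers on baseline risk, not absolute rates.
    relative_risk = {"all users": 1.9, "over 60": 3.2, "over 60 + corticosteroids": 6.2}
    for group, rr in relative_risk.items():
        print(f"Relative risk ({group}): {rr}x")
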
Cancer Ciprofloxacin is active in six of eight in vitro assays used as rapid screens for the detection of genotoxic effects, but is not active in in vivo assays of genotoxicity. Long-term carcinogenicity studies in rats and mice resulted in no carcinogenic or tumorigenic effects due to ciprofloxacin at daily oral dose levels up to 250 and 750 mg/kg to rats and mice, respectively (about 1.7 and 2.5 times the highest recommended therapeutic dose based upon mg/m2). Results from photo co-carcinogenicity testing indicate ciprofloxacin does not reduce the time to appearance of UV-induced skin tumors as compared to vehicle control. Other The other black box warning is that ciprofloxacin should not be used in people with myasthenia gravis due to possible exacerbation of muscle weakness which may lead to breathing problems resulting in death or ventilator support. Fluoroquinolones are known to block neuromuscular transmission. There are concerns that fluoroquinolones including ciprofloxacin can affect cartilage in young children. Clostridium difficile-associated diarrhea is a serious adverse effect of ciprofloxacin and other fluoroquinolones; it is unclear whether the risk is higher than with other broad-spectrum antibiotics. A wide range of rare but potentially fatal adverse effects reported to the U.S. FDA or the subject of case reports includes aortic dissection, toxic epidermal necrolysis, Stevens–Johnson syndrome, low blood pressure, allergic pneumonitis, bone marrow suppression, hepatitis or liver failure, and sensitivity to light. The medication should be discontinued if a rash, jaundice, or other sign of hypersensitivity occurs. Children and the elderly are at a much greater risk of experiencing adverse reactions. Overdose Overdose of ciprofloxacin may result in reversible renal toxicity. Treatment of overdose includes emptying of the stomach by induced vomiting or gastric lavage, as well as administration of antacids containing magnesium, aluminium, or calcium to reduce drug absorption. Renal function and urinary pH should be monitored. Important support includes adequate hydration and urine acidification if necessary to prevent crystalluria. Hemodialysis or peritoneal dialysis can only remove less than 10% of ciprofloxacin. Ciprofloxacin may be quantified in plasma or serum to monitor for drug accumulation in patients with
sacrament, the substance of the body and blood of Christ are present alongside the substance of the bread and wine, which remain present. It was part of the doctrines of Lollardy, and considered a heresy by the Roman Catholic Church. It was later championed by Edward Pusey of the Oxford Movement, and is therefore held by many high church Anglicans. Development In England in the late 14th century, there was a political and religious movement known as Lollardy. Among much broader goals, the Lollards affirmed a form of consubstantiation—that the Eucharist remained physically bread and wine, while becoming spiritually the body and blood
dos Santos et al. 2017 for the green algal clades and Novíkov & Barabaš-Krasni 2015 for the land plants clade. Sánchez-Baracaldo et al. is followed for the basal clades. A 2020 paper places the "Prasinodermophyta" (i.e. Prasinodermophyceae + Palmophyllophyceae) as the basal Viridiplantae clade. Leliaert et al. 2012 Simplified phylogeny of the Chlorophyta, according to Leliaert et al. 2012. Note that many algae previously classified in Chlorophyta are placed here in Streptophyta. Viridiplantae Chlorophyta core chlorophytes Ulvophyceae Cladophorales Dasycladales Bryopsidales Trentepohliales Ulvales-Ulotrichales Oltmannsiellopsidales Chlorophyceae Oedogoniales Chaetophorales Chaetopeltidales Chlamydomonadales Sphaeropleales Trebouxiophyceae Chlorellales Oocystaceae Microthamniales Trebouxiales Prasiola clade Chlorodendrophyceae prasinophytes (paraphyletic) Pyramimonadales Mamiellophyceae Pycnococcaceae Nephroselmidophyceae Prasinococcales Palmophyllales Streptophyta charophytes Mesostigmatophyceae Chlorokybophyceae Klebsormidiophyceae Charophyceae Zygnematophyceae Coleochaetophyceae Embryophyta (land plants) Pombert et al. 2005 A possible classification when Chlorophyta refers to one of the two clades of the Viridiplantae is shown below. Class Prasinophyceae T. A. Chr. ex Ø. Moestrup & J. Throndsen Class Chlorophyceae Wille Class Trebouxiophyceae T. Friedl Class Ulvophyceae Lewis & McCourt 2004 Division Chlorophyta (green algae sensu stricto) Subdivision Chlorophytina Class Chlorophyceae (chlorophytes) Order Chlamydomonadales (+ some Chlorococcales + some Tetrasporales + some Chlorosarcinales) Order Sphaeropleales (sensu Deason, plus Bracteacoccus, Schroederia, Scenedesmaceae, Selanastraceae) Order Oedogoniales Order Chaetopeltidales Order Chaetophorales Incertae Sedis (Cylindrocapsa clade, Mychonastes clade) Class Ulvophyceae (ulvophytes) Order Ulotrichales Order Ulvales Order Siphoncladales/Cladophorales Order Caulerpales Order Dasycladales Class Trebouxiophyceae (trebouxiophytes) Order Trebouxiales Order Microthamniales Order Prasiolales Order Chlorellales Class Prasinophyceae (prasinophytes) Order Pyramimonadales Order Mamiellales Order Pseudoscourfieldiales Order Chlorodendrales Incertae sedis (Unnamed clade of coccoid taxa) Division Charophyta (charophyte algae and embryophytes) Class Mesostigmatophyceae (mesostigmatophytes) Class Chlorokybophyceae (chlorokybophytes) Class Klebsormidiophyceae (klebsormidiophytes) Class Zygnemophyceae (conjugates) Order Zygnematales (filamentous conjugates and saccoderm desmids) Order Desmidiales (placoderm desmids) Class Coleochaetophyceae (coleochaetophytes) Order Coleochaetales Subdivision Streptophytina Class Charophyceae (reverts to use of GM Smith) Order Charales (charophytes sensu stricto) Class Embryophyceae (embryophytes) Hoek, Mann and Jahns 1995 Classification of the Chlorophyta, treated as all green algae, according to Hoek, Mann and Jahns 1995. Class Prasinophyceae (orders Mamiellales, Pseudocourfeldiales, Pyramimonadales, Chlorodendrales) Class Chlorophyceae (orders Volvocales [including the Tetrasporales], Chlorococcales, Chaetophorales, Oedogoniales) Class Ulvophyceae (orders Codiolales, Ulvales) Class
The clade Streptophyta consists of the Charophyta in which the Embryophyta (land plants) emerged. In this latter sense the Chlorophyta includes only about 4,300 species. About 90% of all known species live in freshwater. Like the land plants (embryophytes: bryophytes and tracheophytes), green algae (chlorophytes and charophytes besides embryophytes) contain chlorophyll a and chlorophyll b and store food as starch in their plastids. With the exception of Palmophyllophyceae, Trebouxiophyceae, Ulvophyceae and Chlorophyceae, which show various degrees of multicellularity, all the Chlorophyta lineages are unicellular. Some members of the group form symbiotic relationships with protozoa, sponges, and cnidarians. Others form symbiotic relationships with fungi to form lichens, but the majority of species are free-living. Some conduct sexual reproduction, which is oogamous or isogamous. All members of the clade have motile flagellated swimming cells. While most species live in freshwater habitats and a large number in marine habitats, other species are adapted to a wide range of land environments. For example, Chlamydomonas nivalis, which causes Watermelon snow, lives on summer alpine snowfields. Others, such as Trentepohlia species, live attached to rocks or woody parts of trees. Monostroma kuroshiense, an edible green alga cultivated worldwide and most expensive among green algae, belongs to this group. Ecology Species of Chlorophyta (treated as what is now considered one of the two main clades of Viridiplantae) are common inhabitants of marine, freshwater and terrestrial environments. Several species have adapted to specialised and extreme environments, such as deserts, arctic environments, hypersaline habitats, marine deep waters, deep-sea hydrothermal vents and habitats that experience extreme changes in temperature, light and salinity. Some groups, such as the Trentepohliales, are exclusively found on land. Several species of Chlorophyta live in symbiosis with a diverse range of eukaryotes, including fungi (to form lichens), ciliates, forams, cnidarians and molluscs. Some species of Chlorophyta are heterotrophic, either free-living or parasitic. Others are mixotrophic bacterivores through phagocytosis. Two common species of the heterotrophic green alga Prototheca are pathogenic and can cause the disease protothecosis in humans and animals. Classifications Characteristics used for the classification of Chlorophyta are: type of zoid, mitosis (karyokinesis), cytokinesis, organization level, life cycle, type of gametes, cell wall polysaccharides and more recently genetic data. 
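Because the classification listings above are flattened into running text, the nesting can be hard to follow; the sketch below shows one possible way to hold part of the Leliaert et al. 2012 simplified phylogeny listed earlier as a nested data structure (a minimal, illustrative sketch only: the grouping follows that listing and is not an authoritative classification):

    # Part of the Leliaert et al. 2012 simplified phylogeny, nested as plain dicts.
    # Keys are clade names; values map child clades (an empty dict marks a leaf here).
    viridiplantae = {
        "Chlorophyta": {
            "core chlorophytes": {
                "Ulvophyceae": {"Cladophorales": {}, "Dasycladales": {}, "Bryopsidales": {},
                                "Trentepohliales": {}, "Ulvales-Ulotrichales": {},
                                "Oltmannsiellopsidales": {}},
                "Chlorophyceae": {"Oedogoniales": {}, "Chaetophorales": {}, "Chaetopeltidales": {},
                                  "Chlamydomonadales": {}, "Sphaeropleales": {}},
                "Trebouxiophyceae": {"Chlorellales": {}, "Oocystaceae": {}, "Microthamniales": {},
                                     "Trebouxiales": {}, "Prasiola clade": {}},
                "Chlorodendrophyceae": {},
            },
            "prasinophytes (paraphyletic)": {"Pyramimonadales": {}, "Mamiellophyceae": {},
                                             "Pycnococcaceae": {}, "Nephroselmidophyceae": {},
                                             "Prasinococcales": {}, "Palmophyllales": {}},
        },
        "Streptophyta": {
            "charophytes": {"Mesostigmatophyceae": {}, "Chlorokybophyceae": {},
                            "Klebsormidiophyceae": {}, "Charophyceae": {},
                            "Zygnematophyceae": {}, "Coleochaetophyceae": {}},
            "Embryophyta (land plants)": {},
        },
    }

    def print_tree(node, depth=0):
        """Print each clade indented by its nesting depth."""
        for name, children in node.items():
            print("  " * depth + name)
            print_tree(children, depth + 1)

    print_tree(viridiplantae)

Printing it with the small recursive helper reproduces the indented tree that the flattened listing above originally represented.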
phylogeny The capybara and the lesser capybara belong to the subfamily Hydrochoerinae along with the rock cavies. The living capybaras and their extinct relatives were previously classified in their own family Hydrochoeridae. Since 2002, molecular phylogenetic studies have recognized a close relationship between Hydrochoerus and Kerodon, the rock cavies, supporting placement of both genera in a subfamily of Caviidae. Paleontological classifications previously used Hydrochoeridae for all capybaras, while using Hydrochoerinae for the living genus and its closest fossil relatives, such as Neochoerus, but more recently have adopted the classification of Hydrochoerinae within Caviidae. The taxonomy of fossil hydrochoerines is also in a state of flux. In recent years, the diversity of fossil hydrochoerines has been substantially reduced. This is largely due to the recognition that capybara molar teeth show strong variation in shape over the life of an individual. In one instance, material once referred to four genera and seven species on the basis of differences in molar shape is now thought to represent differently aged individuals of a single species, Cardiatherium paranense. Among fossil species, the name "capybara" can refer to the many species of Hydrochoerinae that are more closely related to the modern Hydrochoerus than to the "cardiomyine" rodents like Cardiomys. The fossil genera Cardiatherium, Phugatherium, Hydrochoeropsis, and Neochoerus are all capybaras under that concept. Description The capybara has a heavy, barrel-shaped body and short head, with reddish-brown fur on the upper part of its body that turns yellowish-brown underneath. Its sweat glands can be found in the surface of the hairy portions of its skin, an unusual trait among rodents. The animal lacks down hair, and its guard hair differs little from over hair.Adult capybaras grow to in length, stand tall at the withers, and typically weigh , with an average in the Venezuelan llanos of . Females are slightly heavier than males. The top recorded weights are for a wild female from Brazil and for a wild male from Uruguay. Also, an 81 kg individual was reported in São Paulo in 2001 or 2002. The dental formula is . Capybaras have slightly webbed feet and vestigial tails. Their hind legs are slightly longer than their forelegs; they have three toes on their rear feet and four toes on their front feet. Their muzzles are blunt, with nostrils, and the eyes and ears are near the top of their heads. Its karyotype has 2n = 66 and FN = 102, meaning it has 66 chromosomes with a total of 102 arms Ecology Capybaras are semiaquatic mammals found throughout almost all countries of South America except Chile. They live in densely forested areas near bodies of water, such as lakes, rivers, swamps, ponds, and marshes, as well as flooded savannah and along rivers in the tropical rainforest. They are superb swimmers and can hold their breath underwater for up to five minutes at a time. Capybara have flourished in cattle ranches. They roam in home ranges averaging 10 hectares (25 acres) in high-density populations. Many escapees from captivity can also be found in similar watery habitats around the world. Sightings are fairly common in Florida, although a breeding population has not yet been confirmed. 
These escaped populations occur in areas once inhabited by prehistoric capybaras; late Pleistocene capybaras lived in Florida and Hydrochoerus gaylordi in Grenada, and feral capybaras in North America may actually fill the ecological niche of the Pleistocene species. In 2011, one specimen was spotted on the Central Coast of California. Diet and predation Capybaras are herbivores, grazing mainly on grasses and aquatic plants, as well as fruit and tree bark. They are very selective feeders, eating the leaves of one species while disregarding other species surrounding it. They eat a greater variety of plants during the dry season, as fewer plants are available. While they eat grass during the wet season, they have to switch to more abundant reeds during the dry season. Plants that capybaras eat during the summer lose their nutritional value in the winter, so they are not consumed at that time. The capybara's jaw hinge is not perpendicular, so they chew food by grinding back-and-forth rather than
the cellulose in the grass that forms their normal diet, and to extract the maximum protein and vitamins from their food. They also regurgitate food to masticate again, similar to cud-chewing by cattle. As is the case with other rodents, the front teeth of capybaras grow continually to compensate for the constant wear from eating grasses; their cheek teeth also grow continuously. Like its relative the guinea pig, the capybara does not have the capacity to synthesize vitamin C, and capybaras not supplemented with vitamin C in captivity have been reported to develop gum disease as a sign of scurvy. They can have a lifespan of 8–10 years, but tend to live less than four years in the wild due to predation from jaguars, pumas, ocelots, eagles, and caimans. The capybara is also the preferred prey of the green anaconda. Social organization Capybaras are known to be gregarious. While they sometimes live solitarily, they are more commonly found in groups of around 10–20 individuals, with two to four adult males, four to seven adult females, and the remainder juveniles. Capybara groups can consist of as many as 50 or 100 individuals during the dry season when the animals gather around available water sources. Males establish social bonds, dominance, or general group consensus. They can make dog-like barks when threatened or when females are herding young. Capybaras have two types of scent glands; a morrillo, located on the snout, and anal glands. Both sexes have these glands, but males have much larger morrillos and use their anal glands more frequently. The anal glands of males are also lined with detachable hairs. A crystalline form of scent secretion is coated on these hairs and is released when in contact with objects such as plants. These hairs have a longer-lasting scent mark and are tasted by other capybaras. Capybaras scent-mark by rubbing their morrillos on objects, or by walking over scrub and marking it with their anal glands. Capybaras can spread their scent further by urinating; however, females usually mark without urinating and scent-mark less frequently than males overall. Females mark more often during the wet season when they are in estrus. In addition to objects, males also scent-mark females. Reproduction When in estrus, the female's scent changes subtly and nearby males begin pursuit. In addition, a female alerts males she is in estrus by whistling through her nose. During mating, the female has the advantage and mating choice. Capybaras mate only in water, and if a female does not want to mate with a certain male, she either submerges or leaves the water. Dominant males are highly protective of the females, but they usually cannot prevent some of the subordinates from copulating. The larger the group, the harder it is for the male to watch all the females. Dominant males secure significantly more matings than each subordinate, but subordinate males, as a class, are responsible for more matings than each dominant male. The lifespan of the capybara's sperm is longer than that of other rodents. Capybara gestation is 130–150 days, and produces a litter of four young on average, but may produce between one and eight in a single litter. Birth is on land and the female rejoins the group within a few hours of delivering the newborn capybaras, which join the group as soon as they are mobile. Within a week, the young can eat grass, but continue to suckle—from any female in the group—until weaned around 16 weeks. The young form a group
or T-Pose. The position of each segment of the skeletal model is defined by animation variables, or Avars for short. In human and animal characters, many parts of the skeletal model correspond to the actual bones, but skeletal animation is also used to animate other things, with facial features (though other methods for facial animation exist). The character "Woody" in Toy Story, for example, uses 700 Avars (100 in the face alone). The computer doesn't usually render the skeletal model directly (it is invisible), but it does use the skeletal model to compute the exact position and orientation of that certain character, which is eventually rendered into an image. Thus by changing the values of Avars over time, the animator creates motion by making the character move from frame to frame. There are several methods for generating the Avar values to obtain realistic motion. Traditionally, animators manipulate the Avars directly. Rather than set Avars for every frame, they usually set Avars at strategic points (frames) in time and let the computer interpolate or tween between them in a process called keyframing. Keyframing puts control in the hands of the animator and has roots in hand-drawn traditional animation. In contrast, a newer method called motion capture makes use of live action footage. When computer animation is driven by motion capture, a real performer acts out the scene as if they were the character to be animated. Their motion is recorded to a computer using video cameras and markers and that performance is then applied to the animated character. Each method has its advantages and as of 2007, games and films are using either or both of these methods in productions. Keyframe animation can produce motions that would be difficult or impossible to act out, while motion capture can reproduce the subtleties of a particular actor. For example, in the 2006 film Pirates of the Caribbean: Dead Man's Chest, Bill Nighy provided the performance for the character Davy Jones. Even though Nighy doesn't appear in the movie himself, the movie benefited from his performance by recording the nuances of his body language, posture, facial expressions, etc. Thus motion capture is appropriate in situations where believable, realistic behavior and action is required, but the types of characters required exceed what can be done throughout the conventional costuming. Modeling 3D computer animation combines 3D models of objects and programmed or hand "keyframed" movement. These models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects are sculpted much like real clay or plaster, working from general forms to specific details with various sculpting tools. Unless a 3D model is intended to be a solid color, it must be painted with "textures" for realism. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model walk). In a process known as rigging, the virtual marionette is given various controllers and handles for controlling movement. Animation data can be created using motion capture, or keyframing by a human animator, or a combination of the two. 3D models rigged for animation may contain thousands of control points — for example, "Woody" from Toy Story uses 700 specialized animation controllers. Rhythm and Hues Studios labored for two years to create Aslan in the movie The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, which had about 1,851 controllers (742 in the face alone). 
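To make the keyframing and tweening described above concrete, the sketch below linearly interpolates a single Avar between keyframes (a minimal, illustrative sketch: the Avar name, frame numbers, and values are hypothetical, and production systems typically use spline rather than linear interpolation):

    # Keyframe "tweening": the animator sets an Avar at a few key frames and the
    # computer interpolates every in-between frame (linearly, in this sketch).
    def tween(keyframes, frame):
        """Return the Avar value at `frame`, interpolated from the surrounding keyframes."""
        frames = sorted(keyframes)
        if frame <= frames[0]:
            return keyframes[frames[0]]
        if frame >= frames[-1]:
            return keyframes[frames[-1]]
        for f0, f1 in zip(frames, frames[1:]):
            if f0 <= frame <= f1:
                t = (frame - f0) / (f1 - f0)      # 0.0 at f0, 1.0 at f1
                return (1 - t) * keyframes[f0] + t * keyframes[f1]

    # Hypothetical Avar: an elbow-joint angle in degrees, keyed at frames 0, 12 and 24.
    elbow_angle_keys = {0: 10.0, 12: 95.0, 24: 30.0}
    for frame in range(0, 25, 6):
        print(frame, round(tween(elbow_angle_keys, frame), 1))

The printed in-between values are the frames the animator never set by hand; that interpolation work is exactly what keyframing delegates to the computer.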
In the 2004 film The Day After Tomorrow, designers had to design forces of extreme weather with the help of video references and accurate meteorological facts. For the 2005 remake of King Kong, actor Andy Serkis was used to help designers pinpoint the gorilla's prime location in the shots and used his expressions to model "human" characteristics onto the creature. Serkis had earlier provided the voice and performance for Gollum in J. R. R. Tolkien's The Lord of the Rings trilogy. Equipment Computer animation can be created with a computer and animation software. Some impressive animation can be achieved even with basic programs; however, the rendering can require a great deal of time on an ordinary home computer. Professional animators of movies, television and video games can produce photorealistic animation with high detail. This level of quality for movie animation would take hundreds of years to create on a home computer. Instead, many powerful workstation computers are used. Graphics workstations use two to four processors, are far more powerful than a typical home computer, and are specialized for rendering. Many workstations (known as a "render farm") are networked together to effectively act as a giant computer, so that a computer-animated movie can be completed in about one to five years (however, this process is not composed solely of rendering). A workstation typically costs $2,000-16,000, with the more expensive stations able to render much faster because of the more technologically advanced hardware they contain. Professionals also use digital movie cameras, motion/performance capture, bluescreens, film editing software, props, and other tools for movie animation. Programs such as Blender allow people who cannot afford expensive animation and rendering software to work in a similar manner to those who use commercial-grade equipment. Facial animation The realistic modeling of human facial features is both one of the most challenging and sought-after elements in computer-generated imagery. Computer facial animation is a highly complex field where models typically include a very large number of animation variables. Historically speaking, the first SIGGRAPH tutorials on State of the art in Facial Animation in 1989 and 1990 proved to be a turning point in the field by bringing together and consolidating multiple research elements and sparked interest among a number of researchers. The Facial Action Coding System (with 46 "action units" such as "lip bite" or "squint"), which had been developed in 1976, became a popular basis for many systems. As early as 2001, MPEG-4 included 68 Face Animation Parameters (FAPs) for lips, jaws, etc., and the field has made significant progress since then, and the use of facial microexpressions has increased. In some cases, an affective space such as the PAD emotional state model can be used to assign specific emotions to the faces of avatars. In this approach, the PAD model is used as a high-level emotional space, and the lower-level space is the MPEG-4 Facial Animation Parameters (FAPs). A mid-level Partial Expression Parameters (PEP) space is then used in a two-level structure – the PAD-PEP mapping and the PEP-FAP translation model. Realism Realism in computer animation can mean making each frame look photorealistic, in the sense that the scene is rendered to resemble a photograph, or making the characters' animation believable and lifelike. 
Computer animation can also be realistic with or without photorealistic rendering. One of the greatest challenges in computer animation has been creating human characters that look and move with the highest degree of realism. Part of the difficulty in making pleasing, realistic human characters is the uncanny valley, the concept that the human audience (up to a point) tends to have an increasingly negative emotional response as a human replica looks and acts more and more human. Films
75-120 frames per second, no improvement in realism or smoothness is perceivable due to the way the eye and the brain both process images. At rates below 12 frames per second, most people can detect jerkiness associated with the drawing of new images that detracts from the illusion of realistic movement. Conventional hand-drawn cartoon animation often uses 15 frames per second in order to save on the number of drawings needed, but this is usually accepted because of the stylized nature of cartoons. To produce more realistic imagery, computer animation demands higher frame rates. Films seen in theaters in the United States run at 24 frames per second, which is sufficient to create the illusion of continuous movement. For high resolution, adapters are used. History Early digital computer animation was developed at Bell Telephone Laboratories in the 1960s by Edward E. Zajac, Frank W. Sinden, Kenneth C. Knowlton, and A. Michael Noll. Other digital animation was also practiced at the Lawrence Livermore National Laboratory. In 1967, a computer animation named "Hummingbird" was created by Charles Csuri and James Shaffer. In 1968, a computer animation called "Kitty" was created with BESM-4 by Nikolai Konstantinov, depicting a cat moving around. In 1971, a computer animation called "Metadata" was created, showing various shapes. An early step in the history of computer animation was the sequel to the 1973 film Westworld, a science-fiction film about a society in which robots live and work among humans. The sequel, Futureworld (1976), used the 3D wire-frame imagery, which featured a computer-animated hand and face both created by University of Utah graduates Edwin Catmull and Fred Parke. This imagery originally appeared in their student film A Computer Animated Hand, which they completed in 1972. Developments in CGI technologies are reported each year at SIGGRAPH, an annual conference on computer graphics and interactive techniques that is attended by thousands of computer professionals each year. Developers of computer games and 3D video cards strive to achieve the same visual quality on personal computers in real-time as is possible for CGI films and animation. With the rapid advancement of real-time rendering quality, artists began to use game engines to render non-interactive movies, which led to the art form Machinima. Film and television CGI short films have been produced as independent animation since 1976. Early examples of feature films incorporating CGI animation include the live-action films Star Trek II: The Wrath of Khan and Tron (both 1982), and the Japanese anime film Golgo 13: The Professional (1983). VeggieTales is the first American fully 3D computer animated series sold directly (made in 1993); its success inspired other animation series, such as ReBoot (1994) and Transformers: Beast Wars (1996) to adopt a fully computer-generated style. The first full length computer animated television series was ReBoot, which debuted in September 1994; the series followed the adventures of characters who lived inside a computer. The first feature-length computer animated film is Toy Story (1995), which was made by Disney and Pixar: following an adventure centered around anthropomorphic toys and their owners, this groundbreaking film was also the first of many fully computer-animated movies. The popularity of computer animation (especially in the field of special effects) skyrocketed during the modern era of U.S. animation. 
Films like Avatar (2009) and The Jungle Book (2016) use CGI for the majority of the movie runtime, but still incorporate human actors into the mix. Computer animation in this era has achieved photorealism, to the point that computer animated films such as The Lion King (2019) are able to be marketed as if they were live-action. Animation methods In most 3D computer animation systems, an animator creates a simplified representation of a character's anatomy, which is analogous to a skeleton or stick figure. They are arranged into a default position known as a bind pose, or T-Pose. 
Wessex genealogies may have come about because of efforts to integrate Ceawlin's line with the other lineages: it became very important to the West Saxons to be able to trace the ancestors of their rulers back to Cerdic. Another reason for doubting the literal nature of these early genealogies is that the etymology of the names of several early members of the dynasty do not appear to be Germanic, as would be expected in the names of leaders of an apparently Anglo-Saxon dynasty. The name Ceawlin is one of the names that do not have convincing Anglo-Saxon etymologies; it seems more likely to be of native British origin. The earliest sources do not use the term "West Saxon". According to Bede's Ecclesiastical History of the English People, the term is interchangeable with the Gewisse. The term "West Saxon" appears only in the late seventh century, after the reign of Cædwalla. West Saxon expansion Ultimately, the kingdom of Wessex occupied the southwest of England, but the initial stages in this expansion are not apparent from the sources. Cerdic's landing, whenever it is to be dated, seems to have been near the Isle of Wight, and the annals record the conquest of the island in 530. In 534, according to the Chronicle, Cerdic died and his son Cynric took the throne; the Chronicle adds that "they gave the Isle of Wight to their nephews, Stuf and Wihtgar". These records are in direct conflict with Bede, who states that the Isle of Wight was settled by Jutes, not Saxons; the archaeological record is somewhat in favour of Bede on this. Subsequent entries in the Chronicle give details of some of the battles by which the West Saxons won their kingdom. Ceawlin's campaigns are not given as near the coast. They range along the Thames Valley and beyond, as far as Surrey in the east and the mouth of the Severn in the west. Ceawlin clearly is part of the West Saxon expansion, but the military history of the period is difficult to understand. In what follows the dates are as given in the Chronicle, although, as noted above, these are earlier than now thought accurate. 556: The first record of a battle fought by Ceawlin is in 556, when he and his father, Cynric, fought the native Britons at "", or Bera's Stronghold. This now is identified as Barbury Castle, an Iron Age hill fort in Wiltshire, near Swindon. Cynric would have been king of Wessex at this time. 568: Wibbandun The first battle Ceawlin fought as king is dated by the Chronicle to 568, when he and Cutha fought with Æthelberht, the king of Kent. The entry says "Here Ceawlin and Cutha fought against Aethelberht and drove him into Kent; and they killed two ealdormen, Oslaf and Cnebba, on Wibbandun." The location of "Wibbandun", which can be translated as "Wibba's Mount", has not been identified definitely; it was at one time thought to be Wimbledon, but this now is known to be incorrect. David Cooper proposes Wyboston, a small village 8 miles north-east of Bedford on the west bank of the Great Ouse. Wibbandun is often written as Wibba's Dun, which is close phonetically to Wyboston and Æthelberht's dominance, from Kent to the Humber according to Bede, extended across those Anglian territories south of the Wash. It was this region that came under threat from Ceawlin as he looked to establish a defensible boundary on the Great Ouse River in the easternmost part of his territory. In addition, Cnebba, named as slain in this battle, has been associated with Knebworth, which lies 20 miles to the south of Wyboston. 
Half-a-mile south of Wyboston is a village called Chawston. The origin of the place-name is unknown but might be derived from the Old English Ceawston or Ceawlinston. A defeat at Wyboston for Æthelberht would have damaged his overlord status and diminished his influence over the Anglians. The idea that he was driven or 'pursued' into Kent (depending on which Anglo-Saxon Chronicle translation is preferred) should not be taken literally. Similar phraseology is often found in the Chronicle when one king bests another. A defeat suffered as part of an expedition to help his Anglian clients would have caused Æthelberht to withdraw into Kent to recover. This battle is notable as the first recorded conflict between the invading peoples: previous battles recorded in the Chronicle are between the Anglo-Saxons and the native Britons. There are multiple examples of joint kingship in Anglo-Saxon history, and this may be another: it is not clear what Cutha's relationship to Ceawlin is, but it certainly is possible he was also a king. The annal for 577, below, is another possible example. 571: Bedcanford The annal for 571 reads: "Here Cuthwulf fought against the Britons at Bedcanford, and took four settlements: Limbury and Aylesbury, Benson and Eynsham; and in the same year he passed away." Cuthwulf's relationship with Ceawlin is unknown, but the alliteration common to Anglo-Saxon royal families suggests Cuthwulf may be part of the West Saxon royal line. The location of the battle itself is unidentified. It has been suggested that it was Bedford, but what is known of the early history of Bedford's names does not support this. This battle is of interest because it is surprising that an area so far east should still be in Briton hands this late: there is ample archaeological evidence of early Saxon and Anglian presence in the Midlands, and historians generally have interpreted Gildas's De Excidio as implying that the Britons had lost control of this area by the mid-sixth century. One possible explanation is that this annal records a reconquest of land that was lost to the Britons in the campaigns ending in the battle of Mons Badonicus. 577: Lower Severn The annal for 577 reads "Here Cuthwine and Ceawlin fought against the Britons, and they killed three kings, Coinmail and Condidan and Farinmail, in the place which is called Dyrham, and took three cities: Gloucester and Cirencester and Bath." This entry is all that is known of these Briton kings; their names are in an archaic form that makes it very likely that this annal derives from a much older written source. The battle itself has long been regarded as a key moment in the Saxon advance, since in reaching the Bristol Channel, the West Saxons divided the Britons west of the Severn from land communication with those in the peninsula to the south of the Channel. Wessex almost certainly lost this territory to Penda of Mercia in 628, when the Chronicle records that "Cynegils and Cwichelm fought against Penda at Cirencester and then came to an agreement." It is possible that when Ceawlin and Cuthwine took Bath, they found the Roman baths still operating to some extent. Nennius, a ninth-century historian, mentions a "Hot Lake" in the land of the Hwicce, which was along the Severn, and adds "It is surrounded by a wall, made of brick and stone, and men may go there to bathe at any time, and every man can have the kind of bath he likes. If he wants, it will be a cold bath; and if he wants a hot bath, it will be hot". 
Bede also describes hot baths in the geographical introduction to the Ecclesiastical History in terms very similar to those of Nennius. Wansdyke, an early-medieval defensive linear earthwork, runs from south of Bristol to near Marlborough, Wiltshire, passing not far from Bath. It probably was built in the fifth or sixth centuries, perhaps by Ceawlin. 584: Fethan leag Ceawlin's last recorded victory is in 584. The entry reads "Here Ceawlin and Cutha fought against the Britons at the place which is named Fethan leag, and Cutha was killed; and Ceawlin took many towns and countless war-loot, and in anger he turned back to his own [territory]." There is a wood named "Fethelée" mentioned in a twelfth-century document that relates to Stoke Lyne, in Oxfordshire, and it now is thought that the battle of Fethan leag must have been fought in this area. The phrase "in anger he turned back to his own" probably indicates that this annal is drawn from saga material, as perhaps are all of the early Wessex annals. It also has been used to argue that perhaps, Ceawlin did not win the battle and that the chronicler chose not to record the outcome fully—a king does not usually come home "in anger" after taking "many towns and countless war-loot". It may be that Ceawlin's overlordship of the southern Britons came to an end with this battle. Bretwaldaship About 731, Bede, a Northumbrian monk and chronicler, wrote a work called the Ecclesiastical History of the English People. The work was not primarily a secular history, but Bede provides much information about the history of the Anglo-Saxons, including a list early in the history of seven kings who, he said, held "imperium" over the other kingdoms south of the Humber. The usual translation for "imperium" is "overlordship". Bede names Ceawlin as the second on the list, although he spells it "Caelin", and adds that he was "known in the speech of his own people as Ceaulin". Bede also makes it clear that Ceawlin was not a Christian—Bede mentions a later king, Æthelberht of Kent, as "the first to enter the kingdom of heaven". The Anglo-Saxon Chronicle, in an entry for the year 827, repeats Bede's list, adds Egbert of Wessex, and also mentions that they were known as "bretwalda", or "Britain-ruler". A great deal of scholarly attention has been given to the meaning of this word. It has been described as a term "of encomiastic poetry", but there also is evidence that it implied a definite role of military leadership. Bede says that these kings had authority "south of the Humber", but the span of control, at least of the earlier bretwaldas, likely was less than this. In Ceawlin's case the range of control is hard to determine accurately, but Bede's inclusion of Ceawlin in the list of kings who held imperium, and the list of battles he is recorded as having won, indicate an energetic and successful leader who, from a base in the upper Thames valley, dominated much of the surrounding area and held overlordship over the southern Britons for some period. Despite Ceawlin's military successes, the northern conquests he made could not always be retained: Mercia took much of the upper Thames valley, and the north-eastern towns won in 571 were among territory subsequently under the control of Kent and Mercia at different times. Bede's concept of the power of these overlords also must be regarded as the product of his eighth-century viewpoint. 
When the Ecclesiastical History was written, Æthelbald of Mercia dominated the English south of the Humber, and Bede's view of the earlier kings was doubtless strongly coloured by the state of England at that time.
For the earlier bretwaldas, such as Ælle and Ceawlin, there must be some element of anachronism in Bede's description. It also is possible that Bede only meant to refer to power over Anglo-Saxon kingdoms, not the native Britons. Ceawlin is the second king in Bede's list. All the subsequent bretwaldas followed more or less consecutively, but there is a long gap, perhaps fifty years, between Ælle of Sussex, the first bretwalda, and Ceawlin. The lack of gaps between the overlordships of the later bretwaldas has been used to make an argument for Ceawlin's dates matching the later entries in the Chronicle with reasonable accuracy. According to this analysis, the next bretwalda, Æthelberht of Kent, must have been already a dominant king by the time Pope Gregory the Great wrote to him in 601, since Gregory would have not written to an underking. Ceawlin defeated Æthelberht in 568 according to the Chronicle. Æthelberht's dates are a matter of debate, but recent scholarly consensus has his reign starting no earlier than 580. The 568 date for the battle at Wibbandun is thought to be unlikely because of the assertion in various versions of the West Saxon Genealogical Regnal
may also refer to:
Places
Christchurch, New Zealand
Christchurch (New Zealand electorate), a former electorate in New Zealand, also called Town (or City) of Christchurch
Christchurch Central, the current electorate of Christchurch in New Zealand
Christchurch mosque shootings, a 2019 terrorist attack in Christchurch, New Zealand
Christchurch, Cambridgeshire, in England
Christchurch, Dorset, town on the south coast of England
RAF Christchurch, a WW II airfield near the town
Christchurch (UK Parliament constituency), England, centred on the town
Christchurch (Dorset) railway station, a railway station serving the town
Christchurch, Gloucestershire, hamlet in the west of the Forest of Dean, Gloucestershire, England
Christchurch, Newport, in Wales
Christchurch, Virginia, United States
Christchurch Mansion, a stately home in Ipswich, Suffolk
Christchurch Park, a park surrounding Christchurch Mansion
Christ Church, Barbados, Barbados
Southwark Christchurch, England
Educational institutions
Christchurch School, Christchurch, Virginia, U.S.
Christchurch Boys' High School, Christchurch, New Zealand
Christchurch Girls' High School, Christchurch, New Zealand
Christ Church, Oxford
University of Otago Christchurch School of Medicine, one of three medical schools of University of Otago, New Zealand
Christchurch Anglo-Indian Higher Secondary School, Christchurch, Chennai, India
Sports teams
Christchurch F.C., England
Christchurch United, New Zealand
Christchurch Technical, New Zealand
Christchurch High School Old Boys, New Zealand
Other uses
Christchurch-Campbell, an automobile made in 1922
ChristChurch London, an evangelic church in London, UK
Christchurch the
The pregroove is molded into the top side of the polycarbonate disc, where the pits and lands would be molded if it were a pressed, nonrecordable Red Book CD. The bottom side, which faces the laser beam in the player or drive, is flat and smooth. The polycarbonate disc is coated on the pregroove side with a very thin layer of organic dye. Then, on top of the dye is coated a thin, reflecting layer of silver, a silver alloy, or gold. Finally, a protective coating of a photo-polymerizable lacquer is applied on top of the metal reflector and cured with UV light. A blank CD-R is not "empty"; the pregroove has a wobble (the ATIP), which helps the writing laser to stay on track and to write the data to the disc at a constant rate. Maintaining a constant rate is essential to ensure the proper size and spacing of the pits and lands burned into the dye layer. As well as providing timing information, the ATIP (absolute time in pregroove) is also a data track containing information about the CD-R manufacturer, the dye used, and media information (disc length and so on). The pregroove is not destroyed when the data are written to the CD-R, a point which some copy protection schemes use to distinguish copies from an original CD. There are three basic formulations of dye used in CD-Rs: Cyanine dye CD-Rs were the earliest ones developed, and their formulation is patented by Taiyo Yuden. CD-Rs based on this dye are mostly green in color. The earlier models were very chemically unstable and this made cyanine-based discs unsuitable for archival use; they could fade and become unreadable in a few years. Many manufacturers like Taiyo Yuden use proprietary chemical additives to make more stable cyanine discs ("metal-stabilized Cyanine", "Super Cyanine"). Older cyanine dye-based CD-Rs, as well as all the hybrid dyes based on cyanine, are very sensitive to UV-rays and can become unreadable after only a few days if they were exposed to direct sunlight. Although the additives used have made cyanine more stable, it is still the most sensitive of the dyes in UV rays (showing signs of degradation within a week of direct sunlight exposure). Common mistake users make is to leave the CD-Rs with the "clear" (recording) surface upwards, in order to protect it from scratches, as this lets the sun hit the recording surface directly. Phthalocyanine dye CD-Rs are usually silver, gold, or light green. The patents on phthalocyanine CD-Rs are held by Mitsui and Ciba Specialty Chemicals. Phthalocyanine is a natively stable dye (has no need for stabilizers) and CD-Rs based on this are often given a rated lifetime of hundreds of years. Unlike cyanine, phthalocyanine is more resistant to UV rays, and CD-Rs based on this dye show signs of degradation only after two weeks of direct sunlight exposure. However, phthalocyanine is more sensitive than cyanine to writing laser power calibration, meaning that the power level used by the writing laser has to be more accurately adjusted for the disc in order to get a good recording; this may erode the benefits of dye stability, as marginally written discs (with higher correctable error rates) will lose data (i.e. have uncorrectable errors) after less dye degradation than well-written discs (with lower correctable error rates). Azo dye CD-Rs are dark blue in color, and their formulation is patented by Mitsubishi Chemical Corporation. Azo dyes are also chemically stable, and Azo CD-Rs are typically rated with a lifetime of decades. 
Azo is the most resistant dye against UV light and begins to degrade only after the third or fourth week of direct sunlight exposure. More modern implementations of this kind of dye include Super Azo which is not as deep blue as the earlier Metal Azo. This change of composition was necessary in order to achieve faster writing speeds. There are many hybrid variations of the dye formulations, such as Formazan by Kodak (a hybrid of cyanine and phthalocyanine). Unfortunately, many manufacturers have added additional coloring to disguise their unstable cyanine CD-Rs in the past, so the formulation of a disc cannot be determined based purely on its color. Similarly, a gold reflective layer does not guarantee the use of phthalocyanine dye. The quality of the disc is also not only dependent on the dye used, it is also influenced by sealing, the top layer, the reflective layer, and the polycarbonate. Simply choosing a disc based on its dye type may be problematic. Furthermore, correct power calibration of the laser in the writer, as well as correct timing of the laser pulses, stable disc speed, and so on, is critical to not only the immediate readability but the longevity of the recorded disc, so for archiving it is important to have not only a high-quality disc but a high-quality writer. In fact, a high-quality writer may produce adequate results with medium-quality media, but high-quality media cannot compensate for a mediocre writer, and discs written by such a writer cannot achieve their maximum potential archival lifetime. Speed These times only include the actual optical writing pass over the disc. For most disc recording operations, additional time is used for overhead processes, such as organizing the files and tracks, which adds to the theoretical minimum total time required to produce a disc. (An exception might be making a disc from a prepared ISO image, for which the overhead would likely be trivial.) At the lowest write speeds, this overhead takes so much less time than the actual disc writing pass that it may be negligible, but at higher write speeds, the overhead time becomes a larger proportion of the overall time taken to produce a finished disc and may add significantly to it. Also, above 20× speed, drives use a Zoned-CLV or CAV strategy, where the advertised maximum speed is only reached near the outer rim of the disc. This is not taken into account by the above table. (If this were not done, the faster rotation that would be required at the inner tracks could cause the disc to fracture and/or could cause excessive vibration which would make accurate and successful writing impossible.) Writing methods The blank disc has a pre-groove track onto which the data are written. The pre-groove track, which also contains timing information, ensures that the recorder follows the same spiral path as a conventional CD. A CD recorder writes data to a CD-R disc by pulsing its laser to heat areas of the organic dye layer. The writing process does not produce indentations (pits); instead, the heat permanently changes the optical properties of the dye, changing the reflectivity of those areas. Using a low laser power, so as not to further alter the dye, the disc is read back in the same way as a CD-ROM. However, the reflected light is modulated not by pits, but by the alternating regions of heated and unaltered dye. The change of the intensity of the reflected laser radiation is transformed into an electrical signal, from which the digital information is recovered ("decoded"). 
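As a rough sense of what the rated speeds discussed above mean in practice, the short sketch below estimates idealised write times from the standard Red Book figures (75 sectors per second at 1×, 2048 bytes of user data per Mode 1 sector) for a nominal 80-minute disc. The disc size and the assumption that the drive sustains its rated speed over the whole disc are illustrative simplifications, not figures from this text; as noted above, session overhead and Zoned-CLV/CAV behaviour make real burns take longer.

```python
# Illustrative lower-bound estimate of CD-R write times at various rated speeds.
# Assumptions (for illustration, not from the text above): an 80-minute disc,
# 75 sectors per second at 1x, and 2048 bytes of user data per Mode 1 sector.
# Real burns take longer because of lead-in/lead-out and session overhead, and
# because Zoned-CLV/CAV drives only reach the rated speed near the outer rim.

SECTORS_PER_SECOND_1X = 75   # Red Book playback rate
BYTES_PER_SECTOR = 2048      # user data per Mode 1 data sector
DISC_MINUTES = 80            # nominal "700 MB" disc

capacity_bytes = DISC_MINUTES * 60 * SECTORS_PER_SECOND_1X * BYTES_PER_SECTOR

def nominal_write_minutes(rated_speed: int) -> float:
    """Write time if the drive sustained its rated speed over the whole disc."""
    bytes_per_second = rated_speed * SECTORS_PER_SECOND_1X * BYTES_PER_SECTOR
    return capacity_bytes / bytes_per_second / 60

print(f"Capacity: {capacity_bytes / 2**20:.0f} MiB of user data")
for speed in (1, 4, 16, 52):
    print(f"{speed:>2}x: at least {nominal_write_minutes(speed):.1f} minutes")
```

At 1× this gives the expected 80 minutes, and at 52× a theoretical floor of about 1.5 minutes, which is why overhead dominates at high rated speeds.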
Once a section of a CD-R is written, it cannot be erased or rewritten, unlike a CD-RW. A CD-R can be recorded in multiple sessions. A CD recorder can write to a CD-R using several methods including: Disc At Once – the whole CD-R is written in one session with no gaps and the disc is "closed" meaning no more data can be added and the CD-R effectively becomes a standard read-only CD. With no gaps between the tracks, the Disc At Once format is useful for "live" audio recordings. Track At Once – data are written to the CD-R one track at a time but the CD is left "open" for further recording at a later stage. It also allows data and audio to reside on the same CD-R. Packet Writing – used to record data to a CD-R in "packets", allowing extra information to be appended to a disc at a later time, or for information on the disc to be made "invisible". In this way, CD-R can emulate CD-RW; however, each time information on the disc is altered, more data has to be written to the disc. There can be compatibility issues with this format and some CD drives. With careful examination, the written and unwritten areas can be distinguished by the naked eye. CD-Rs are written from the center outwards, so the written area appears as an inner band with slightly different shading. CDs have a Power Calibration Area, used to calibrate the writing laser before and during recording. CDs contain two such areas: one close to the inner edge of the disc, for low-speed calibration, and another on the outer edge on the disc, for high-speed calibration. The calibration results are recorded on a Recording Management Area (RMA) that can hold up to 99 calibrations. The disc cannot be written after the RMA is full, however, the RMA may be emptied in CD-RW discs. Lifespan Real-life (not accelerated aging) tests have revealed that some CD-Rs degrade quickly even if stored normally. The quality of a CD-R disc has a large and direct influence on longevity—low-quality discs should not be expected to last very long. According to research conducted by J. Perdereau, CD-Rs are expected to have an average life expectancy of 10 years. Branding isn't a reliable guide to quality, because many brands (major as well as no name) do not manufacture their own discs. Instead, they are sourced from different manufacturers of varying quality. For best results, the actual manufacturer and material components of each batch of discs should be verified. Burned CD-Rs suffer from material degradation, just like most writable media. CD-R media have an internal layer of dye used to store data. In a CD-RW disc, the recording layer is made of an alloy of silver and other metals—indium, antimony, and tellurium. In CD-R media, the dye itself can degrade, causing data to become unreadable. As well as degradation of the dye, failure of a CD-R can be due to the reflective surface. While silver is less expensive and more widely used, it is more prone to oxidation resulting in
molecules behave, through macromolecular crowding. Although it was once thought to be a simple solution of molecules, the cytosol has multiple levels of organization. These include concentration gradients of small molecules such as calcium, large complexes of enzymes that act together and take part in metabolic pathways, and protein complexes such as proteasomes and carboxysomes that enclose and separate parts of the cytosol. Definition The term "cytosol" was first introduced in 1965 by H. A. Lardy, and initially referred to the liquid that was produced by breaking cells apart and pelleting all the insoluble components by ultracentrifugation. Such a soluble cell extract is not identical to the soluble part of the cell cytoplasm and is usually called a cytoplasmic fraction. The term cytosol is now used to refer to the liquid phase of the cytoplasm in an intact cell. This excludes any part of the cytoplasm that is contained within organelles. Due to the possibility of confusion between the use of the word "cytosol" to refer to both extracts of cells and the soluble part of the cytoplasm in intact cells, the phrase "aqueous cytoplasm" has been used to describe the liquid contents of the cytoplasm of living cells. Prior to this, other terms, including hyaloplasm, were used for the cell fluid, not always synonymously, as its nature was not very clear (see protoplasm). Properties and composition The proportion of cell volume that is cytosol varies: for example while this compartment forms the bulk of cell structure in bacteria, in plant cells the main compartment is the large central vacuole. The cytosol consists mostly of water, dissolved ions, small molecules, and large water-soluble molecules (such as proteins). The majority of these non-protein molecules have a molecular mass of less than 300 Da. This mixture of small molecules is extraordinarily complex, as the variety of molecules that are involved in metabolism (the metabolites) is immense. For example, up to 200,000 different small molecules might be made in plants, although not all these will be present in the same species, or in a single cell. Estimates of the number of metabolites in single cells such as E. coli and baker's yeast predict that under 1,000 are made. Water Most of the cytosol is water, which makes up about 70% of the total volume of a typical cell. The pH of the intracellular fluid is 7.4. while human cytosolic pH ranges between 7.0–7.4, and is usually higher if a cell is growing. The viscosity of cytoplasm is roughly the same as pure water, although diffusion of small molecules through this liquid is about fourfold slower than in pure water, due mostly to collisions with the large numbers of macromolecules in the cytosol. Studies in the brine shrimp have examined how water affects cell functions; these saw that a 20% reduction in the amount of water in a cell inhibits metabolism, with metabolism decreasing progressively as the cell dries out and all metabolic activity halting when the water level reaches 70% below normal. Although water is vital for life, the structure of this water in the cytosol is not well understood, mostly because methods such as nuclear magnetic resonance spectroscopy only give information on the average structure of water, and cannot measure local variations at the microscopic scale. Even the structure of pure water is poorly understood, due to the ability of water to form structures such as water clusters through hydrogen bonds. 
The classic view of water in cells is that about 5% of this water is strongly bound by solutes or macromolecules as water of solvation, while the majority has the same structure as pure water. This water of solvation is not active in osmosis and may have different solvent properties, so that some dissolved molecules are excluded, while others become concentrated. However, others argue that the effects of the high concentrations of macromolecules in cells extend throughout the cytosol and that water in cells behaves very differently from the water in dilute solutions. These ideas include the proposal that cells contain zones of low and high-density water, which could have widespread effects on the structures and functions of the other parts of the cell. However, the use of advanced nuclear magnetic resonance methods to directly measure the mobility of water in living cells contradicts this idea, as it suggests that 85% of cell water acts like pure water, while the remainder is less mobile and probably bound to macromolecules. Ions The concentrations of the other ions in cytosol are quite different from those in extracellular fluid, and the cytosol also contains much higher amounts of charged macromolecules such as proteins and nucleic acids than the outside of the cell structure. In contrast to extracellular fluid, cytosol has a high concentration of potassium ions and a low concentration of sodium ions. This difference in ion concentrations is critical for osmoregulation, since if the ion levels were the same inside a cell as outside, water would enter constantly by osmosis, because the levels of macromolecules inside cells are higher than their levels outside. Instead, sodium ions are expelled and potassium ions taken up by the Na⁺/K⁺-ATPase; potassium ions then flow down their concentration gradient through potassium-selective ion channels, and this loss of positive charge creates a negative membrane potential. To balance this potential difference, negative chloride ions also exit the cell through selective chloride channels. The loss of sodium and chloride ions compensates for the osmotic effect of the higher concentration of organic molecules inside the cell. Cells can deal with even larger osmotic changes by accumulating osmoprotectants such as betaines or trehalose in their cytosol. Some of these molecules can allow cells to survive being completely dried out and allow an organism to enter a state of suspended animation called cryptobiosis. In this state the cytosol and osmoprotectants become a glass-like solid that helps stabilize proteins and cell membranes from the damaging effects of desiccation. The low concentration of calcium in the cytosol allows calcium ions to function as a second messenger in calcium signaling. Here, a signal such as a hormone or an action potential opens calcium channels so that calcium floods into the cytosol. This sudden increase in cytosolic calcium activates other signalling molecules, such as calmodulin and protein kinase C. Other ions such as chloride and potassium may also have signaling functions in the cytosol, but these are not well understood. Macromolecules Protein molecules that do not bind to cell membranes or the cytoskeleton are dissolved in the cytosol. The amount of protein in cells is extremely high, and approaches 200 mg/ml, occupying about 20–30% of the volume of the cytosol. However, measuring precisely how much protein is dissolved in cytosol in intact cells is difficult, since some proteins appear to be weakly associated with membranes or organelles in whole cells and are released into solution
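To make the ion-gradient picture above more concrete, the sketch below evaluates the Nernst equation, E = (RT/zF)·ln([ion]out/[ion]in), for potassium and sodium. The concentrations used are typical textbook values for a mammalian cell, assumed purely for illustration rather than taken from this text.

```python
# Illustrative Nernst-potential calculation for the ion gradients described
# above. The concentrations are typical textbook values for a mammalian cell
# (assumed for illustration; they are not figures from this text).
import math

R = 8.314    # gas constant, J/(mol*K)
T = 310.0    # absolute temperature, K (about 37 degrees C)
F = 96485.0  # Faraday constant, C/mol

def nernst_mv(conc_out_mm: float, conc_in_mm: float, z: int = 1) -> float:
    """Equilibrium potential E = (RT/zF) * ln([out]/[in]), in millivolts."""
    return (R * T) / (z * F) * math.log(conc_out_mm / conc_in_mm) * 1000.0

# High cytosolic K+ against low extracellular K+ gives a strongly negative
# potential, consistent with the negative membrane potential described above.
print(f"E_K  = {nernst_mv(5.0, 140.0):+.0f} mV")   # potassium, about -89 mV
print(f"E_Na = {nernst_mv(145.0, 12.0):+.0f} mV")  # sodium, about +67 mV
```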
mines in South Africa
The Compound, an area of Palm Bay, Florida, US
Komboni or compound, a type of slum in Zambia
Government and law
Composition (fine), a legal procedure in use after the English Civil War
Committee for Compounding with Delinquents, an English Civil War institution that allowed Parliament to compound the estates of Royalists
Compounding treason, an offence under the common law of England
Compounding a felony, a previous offense under the common law of England
Linguistics
Compound (linguistics), a word that consists of more than one radical element
Compound sentence (linguistics), a type of sentence made up of two or more independent clauses and no subordinate (dependent) clauses
Science, technology, and mathematics
Biology and medicine
Compounding, the mixing of drugs in pharmacy
Compound fracture, a complete fracture of bone where at least one fragment has damaged the skin, soft tissue or surrounding body cavity
Compound leaf, a type of leaf being divided into smaller leaflets
Chemistry and materials science
Chemical compound, combination of two or more elements
Plastic compounding, a method of preparing plastic
It is often based on, or was a result of, some form of military service or expectation of future service. It usually involves some form of political participation, but this can vary from token acts to active service in government. Citizenship is a status in society. It is an ideal state as well. It generally describes a person with legal rights within a given political order. It almost always has an element of exclusion, meaning that some people are not citizens and that this distinction can sometimes be very important, or not important, depending on a particular society. Citizenship as a concept is generally hard to isolate intellectually and compare with related political notions since it relates to many other aspects of society such as the family, military service, the individual, freedom, religion, ideas of right and wrong, ethnicity, and patterns for how a person should behave in society. When there are many different groups within a nation, citizenship may be the only real bond that unites everybody as equals without discrimination—it is a "broad bond" linking "a person with the state" and gives people a universal identity as a legal member of a specific nation. Modern citizenship has often been looked at as two competing underlying ideas: The liberal-individualist or sometimes liberal conception of citizenship suggests that citizens should have entitlements necessary for human dignity. It assumes people act for the purpose of enlightened self-interest. According to this viewpoint, citizens are sovereign, morally autonomous beings with duties to pay taxes, obey the law, engage in business transactions, and defend the nation if it comes under attack, but are essentially passive politically, and their primary focus is on economic betterment. This idea began to appear around the seventeenth and eighteenth centuries and became stronger over time, according to one view.
According to this formulation, the state exists for the benefit of citizens and has an obligation to respect and protect the rights of citizens, including civil rights and political rights. It was later that so-called social rights became part of the obligation for the state. The civic-republican or sometimes classical or civic humanist conception of citizenship emphasizes man's political nature and sees citizenship as an active process, not a passive state or legal marker. It is relatively more concerned that government will interfere with popular places to practice citizenship in the public sphere. Citizenship means being active in government affairs. According to one view, most people today live as citizens according to the liberal-individualist conception but wished they lived more according to the civic-republican ideal. An ideal citizen is one who exhibits "good civic behavior". Free citizens and a republic government are "mutually interrelated." Citizenship suggested a commitment to "duty and civic virtue". Scholars suggest that the concept of citizenship contains many unresolved issues, sometimes called tensions, existing within the relation, that continue to reflect uncertainty about what citizenship is supposed to mean. Some unresolved issues regarding citizenship include questions about what is the proper balance between duties and rights. Another is a question about what is the proper balance between political citizenship versus social citizenship. Some thinkers see benefits with people being absent from public affairs, since too much participation such as revolution can be destructive, yet too little participation such as total apathy can be problematic as well. Citizenship can be seen as a special elite status, and it can also be seen as a democratizing force and something that everybody has; the concept can include both senses. According to sociologist Arthur Stinchcombe, citizenship is based on the extent that a person can control one's own destiny within the group in the sense of being able to influence the government of the group. One last distinction within citizenship is the so-called consent descent
Roman sense increasingly reflected the fact that citizens could act upon material things as well as other citizens, in the sense of buying or selling property, possessions, titles, goods. One historian explained: Roman citizenship reflected a struggle between the upper-class patrician interests against the lower-order working groups known as the plebeian class. A citizen came to be understood as a person "free to act by law, free to ask and expect the law's protection, a citizen of such and such a legal community, of such and such a legal standing in that community". Citizenship meant having rights to have possessions, immunities, expectations, which were "available in many kinds and degrees, available or unavailable to many kinds of person for many kinds of reason". The law itself was a kind of bond uniting people. Roman citizenship was more impersonal, universal, multiform, having different degrees and applications. Middle Ages During the European Middle Ages, citizenship was usually associated with cities and towns (see medieval commune), and applied mainly to middle-class folk. Titles such as burgher, grand burgher (German Großbürger) and the bourgeoisie denoted political affiliation and identity in relation to a particular locality, as well as membership in a mercantile or trading class; thus, individuals of respectable means and socioeconomic status were interchangeable with citizens. During this era, members of the nobility had a range of privileges above commoners (see aristocracy), though political upheavals and reforms, beginning most prominently with the French Revolution, abolished privileges and created an egalitarian concept of citizenship. Renaissance During the Renaissance, people transitioned from being subjects of a king or queen to being citizens of a city and later to a nation. Each city had its own law, courts, and independent administration. And being a citizen often meant being subject to the city's law in addition to having power in some instances to help choose officials. City dwellers who had fought alongside nobles in battles to defend their cities were no longer content with having a subordinate social status but demanded a greater role in the form of citizenship. Membership in guilds was an indirect form of citizenship in that it helped their members succeed financially. The rise of citizenship was linked to the rise of republicanism, according to one account, since independent citizens meant that kings had less power. Citizenship became an idealized, almost abstract, concept, and did not signify a submissive relation with a lord or count, but rather indicated the bond between a person and the state in the rather abstract sense of having rights and duties. Modern times The modern idea of citizenship still respects the idea of political participation, but it is usually done through "elaborate systems of political representation at a distance" such as representative democracy. Modern citizenship is much more passive; action is delegated to others; citizenship is often a constraint on acting, not an impetus to act. Nevertheless, citizens are usually aware of their obligations to authorities and are aware that these bonds often limit what they can do. United States From 1790 until the mid-twentieth century, United States law used racial criteria to establish citizenship rights and regulate who was eligible to become a naturalized citizen. The Naturalization Act of 1790, the first law in U.S. 
history to establish rules for citizenship and naturalization, barred citizenship to all people who were not of European descent, stating that "any alien being a free white person, who shall have resided within the limits and under the jurisdiction of the United States for the term of two years, may be admitted to become a citizen thereof." Under early U.S. laws, African Americans were not eligible for citizenship. In 1857, these laws were upheld in the US Supreme Court case Dred Scott v. Sandford, which ruled that "a free negro of the African race, whose ancestors were brought to this country and sold as slaves, is not a 'citizen' within the meaning of the Constitution of the United States," and that "the special rights and immunities guaranteed to citizens do not apply to them." It was not until the abolition of slavery following the American Civil War that African Americans were granted citizenship rights. The 14th Amendment to the U.S. Constitution, ratified on July 9, 1868, stated that "all persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside." Two years later, the Naturalization Act of 1870 would extend the right to become a naturalized citizen to include "aliens of African nativity and to persons of African descent". Despite the gains made by African Americans after the Civil War, Native Americans, Asians, and others not considered "free white persons" were still denied the ability to become citizens. The 1882 Chinese Exclusion Act explicitly denied naturalization rights to all people of Chinese origin, while subsequent acts passed by the US Congress, such as laws in 1906, 1917, and 1924, would include clauses that denied immigration and naturalization rights to people based on broadly defined racial categories. Supreme Court cases such as Ozawa v. United States (1922) and United States v. Bhagat Singh Thind (1923) would later clarify the meaning of the phrase "free white persons," ruling that ethnically Japanese, Indian, and other non-European people were not "white persons" and were therefore ineligible for naturalization under U.S. law. Native Americans were not granted full US citizenship until the passage of the Indian Citizenship Act in 1924. However, even well into the 1960s, some state laws prevented Native Americans from exercising their full rights as citizens, such as the right to vote. In 1962, New Mexico became the last state to enfranchise Native Americans. It was not until the passage of the Immigration and Nationality Act of 1952 that the racial and gender restrictions for naturalization were explicitly abolished. However, the act still contained restrictions regarding who was eligible for US citizenship and retained a national quota system which limited the number of visas given to immigrants based on their national origin, to be fixed "at a rate of one-sixth of one percent of each nationality's population in the United States in 1920". It was not until the passage of the Immigration and Nationality Act of 1965 that these immigration quota systems were drastically altered in favor of a less discriminatory system. Union of Soviet Socialist Republics The 1918 constitution of revolutionary Russia granted citizenship to any foreigners who were living within the Russian Soviet Federative Socialist Republic, so long as they were "engaged in work and [belonged] to the working class." 
It recognized "the equal rights of all citizens, irrespective of their racial or national connections" and declared oppression of any minority group or race "to be contrary to the fundamental laws of the Republic." The 1918 constitution also established the right to vote and be elected to soviets for both men and women "irrespective of religion, nationality, domicile, etc. [...] who shall have completed their eighteenth year by the day of the election." The later constitutions of the USSR would grant universal Soviet citizenship to the citizens of all member republics in accordance with the principles of non-discrimination laid out in the original 1918 constitution of Russia. Nazi Germany Nazism, the German variant of twentieth-century fascism, classified inhabitants of the country into three main hierarchical categories, each of which would have different rights in relation to the state: citizens, subjects, and aliens. The first category, citizens, were to possess full civic rights and responsibilities. Citizenship was conferred only on males of German (or so-called "Aryan") heritage who had completed military service, and could be revoked at any time by the state. The Reich Citizenship Law of 1935 established racial criteria for citizenship in the German Reich, and because of this law Jews and others who could not "prove German racial heritage" were stripped of their citizenship. The second category, subjects, referred to all others who were born within the nation's boundaries but did not fit the racial criteria for citizenship. Subjects would have no voting rights, could not hold any position within the state, and possessed none of the other rights and civic responsibilities conferred on citizens. All women were to be conferred "subject" status upon birth, and could only obtain "citizen" status if they worked independently or if they married a German citizen (see women in Nazi Germany). The final category, aliens, referred to those who were citizens of another state, who also had no rights. Israel The primary principles of Israeli citizenship are jus sanguinis (citizenship by descent) for Jews and jus soli (citizenship by place of birth) for others. Different senses Many theorists suggest that there are two opposing conceptions of citizenship: an economic one, and a political one. For further information, see History of citizenship. Citizenship status, under social contract theory, carries with it both rights and duties. In this sense, citizenship was described as "a bundle of rights – primarily, political participation in the life of the community, the right to vote, and the right to receive certain protection from the community, as well as obligations." Citizenship is seen by most scholars as culture-specific, in the sense that the meaning of the term varies considerably from culture to culture, and over time. In China, for example, there is a cultural politics of citizenship which could be called "peopleship". How citizenship is understood depends on the person making the determination. The relation of citizenship has never been fixed or static, but constantly changes within each society. While citizenship has varied considerably throughout history and within societies over time, some common elements recur, though these too vary considerably from society to society. As a bond, citizenship extends beyond basic kinship ties to unite people of different genetic backgrounds. It usually signifies membership in a political body. 
It is often based on, or was a result of, some form of military service or expectation of future service. It usually involves some form of political participation, but this can vary from token acts to active service in government. Citizenship is a status in society. It is an ideal state as well. It generally describes a person with legal rights within a given political order. It almost always has an element of exclusion, meaning that some people are not citizens, and this distinction can sometimes be very important or unimportant, depending on the society. Citizenship as a concept is generally hard to isolate intellectually and compare with related political notions since it relates to many other aspects of society such as the family, military service, the individual, freedom, religion, ideas of right and wrong, ethnicity, and patterns for how a person should behave in society. When there are many different groups within a nation, citizenship may be the only real bond that unites everybody as equals without discrimination—it is a "broad bond" linking "a person with the state" and gives people a universal identity as a legal member of a specific nation. Modern citizenship has often been viewed in terms of two competing underlying ideas: The liberal-individualist or sometimes liberal conception of citizenship suggests that citizens should have entitlements necessary for human dignity. It assumes people act for the purpose of enlightened self-interest. According to this viewpoint, citizens are sovereign, morally autonomous beings with duties to pay taxes, obey the law, engage in business transactions, and defend the nation if it comes under attack, but are essentially passive politically, and their primary focus is on economic betterment. This idea began to appear around the seventeenth and eighteenth centuries and became stronger over time, according to one view. According to this formulation, the state exists for the benefit of citizens and has an obligation to respect and protect the rights of citizens, including civil rights and political rights. It was later that so-called social rights became part of the obligation for the state. The civic-republican or sometimes classical or civic humanist conception of citizenship emphasizes man's political nature and sees citizenship as an active process, not a passive state or legal marker. It is relatively more concerned that government will interfere with popular places to practice citizenship in the public sphere. Citizenship means being active in government affairs. According to one view, most people today live as citizens according to the liberal-individualist conception but wish they lived more according to the civic-republican ideal. An ideal citizen is one who exhibits "good civic behavior". Free citizens and a republican government are "mutually interrelated." Citizenship suggested a commitment to "duty and civic virtue". Scholars suggest that the concept of citizenship contains many unresolved issues, sometimes called tensions within the relation, that continue to reflect uncertainty about what citizenship is supposed to mean. Some unresolved issues regarding citizenship include the question of the proper balance between duties and rights. Another is the proper balance between political citizenship and social citizenship. 
Some thinkers see benefits in people being absent from public affairs, since too much participation such as revolution can be destructive, yet too little participation such as total apathy can be problematic as well. Citizenship can be seen as a special elite status, and it can also be seen as a democratizing force and something that everybody has; the concept can include both senses. According to sociologist Arthur Stinchcombe, citizenship is based on the extent to which a person can control their own destiny within the group, in the sense of being able to influence the government of the group. One last distinction within citizenship is the so-called consent versus descent distinction: whether citizenship is fundamentally a matter of a person choosing to belong to a particular nation (by their consent) or a matter of the circumstances of their birth (by their descent). International Some intergovernmental organizations have extended the concept and terminology associated with citizenship to the international level, where it is applied to the totality of the citizens of their constituent countries combined. Citizenship at this level is a secondary concept, with rights deriving from national citizenship. European Union The Maastricht Treaty introduced the concept of citizenship of the European Union. Article 17 (1) of the Treaty on European Union stated that: Citizenship of the Union is hereby established. Every person holding the nationality of a Member State shall be a citizen of the Union. Citizenship of the Union shall be additional to and not replace national citizenship. An agreement known as the amended EC Treaty established certain minimal rights for European Union citizens. Article 12 of the amended EC Treaty guaranteed a general right of non-discrimination within the scope of the Treaty. Article 18 provided a limited right to free movement and residence in the Member States other than that of which the European Union citizen is a national. Articles 18-21 and 225 provided certain political rights. Union citizens also have extensive rights to move in order to exercise economic activity in any of the Member States; these rights predate the introduction of Union citizenship. Mercosur Citizenship of the Mercosur is granted to eligible citizens of the Southern Common Market member states.
Casas, which had so limited a budget that it had to ally with San Juan Chamula, challenged Tuxtla Gutiérrez, which, with only a small ragtag army, overwhelmingly defeated the force aided by Chamulas from San Cristóbal. There were three years of peace after that until troops allied with the "First Chief" of the revolutionary Constitutionalist forces, Venustiano Carranza, entered in 1914, taking over the government with the aim of imposing the Ley de Obreros (Workers' Law) to address injustices against the state's mostly indigenous workers. Conservatives responded violently months later when they were certain the Carranza forces would take their lands. This was mostly by way of guerrilla actions headed by farm owners who called themselves the Mapaches. This action continued for six years, until President Carranza was assassinated in 1920 and revolutionary general Álvaro Obregón became president of Mexico. This allowed the Mapaches to gain political power in the state and effectively stop many of the social reforms occurring in other parts of Mexico. The Mapaches continued to fight against socialists and communists in Mexico from 1920 to 1936, to maintain their control over the state. In general, elite landowners also allied with the nationally dominant party founded by Plutarco Elías Calles following the assassination of president-elect Obregón in 1928; that party was renamed the Institutional Revolutionary Party in 1946. Through that alliance, they were also able to block land reform. The Mapaches were first defeated in 1925 when an alliance of socialists and former Carranza loyalists had Carlos A. Vidal selected as governor, although he was assassinated two years later. The last of the Mapache resistance was overcome in the early 1930s by Governor Victorico Grajales, who pursued President Lázaro Cárdenas' social and economic policies, including persecution of the Catholic Church. These policies would have some success in redistributing lands and organizing indigenous workers, but the state would remain relatively isolated for the rest of the 20th century. The territory was reorganized into municipalities in 1916. The current state constitution was written in 1921. There was political stability from the 1940s to the early 1970s; however, regionalism regained strength, with people identifying with their local city or municipality rather than with the state. This regionalism impeded the economy, as local authorities restricted the entry of outside goods. For this reason, the construction of highways and communications infrastructure was pushed to help with economic development. Most of the work was done around Tuxtla Gutiérrez and Tapachula. This included the Sureste railroad connecting northern municipalities such as Pichucalco, Salto de Agua, Palenque, Catazajá and La Libertad. The Cristóbal Colón highway linked Tuxtla to the Guatemalan border. Other highways included El Escopetazo to Pichucalco, and a highway between San Cristóbal and Palenque with branches to Cuxtepeques and La Frailesca. This helped to integrate the state's economy, but it also permitted the political rise of communal land owners called ejidatarios. Mid-20th century to 1990 In the mid-20th century, the state experienced a significant rise in population, which outstripped local resources, especially land in the highland areas. 
Since the 1930s, many indigenous people and mestizos have migrated from the highland areas into the Lacandon Jungle, with the populations of Altamirano, Las Margaritas, Ocosingo and Palenque rising from less than 11,000 in 1920 to over 376,000 in 2000. These migrants came to the jungle area to clear forest and grow crops and raise livestock, especially cattle. Economic development in general raised the output of the state, especially in agriculture, but it had the effect of deforesting many areas, especially the Lacandon. Added to this, serf-like conditions persisted for many workers, and educational infrastructure was insufficient. The population continued to increase faster than the economy could absorb it. There were some attempts to resettle peasant farmers onto uncultivated lands, but they were met with resistance. President Gustavo Díaz Ordaz awarded a land grant to the town of Venustiano Carranza in 1967, but that land was already being used by cattle-ranchers who refused to leave. The peasants tried to take over the land anyway, but when violence broke out, they were forcibly removed. In Chiapas, poor farmland and severe poverty afflicted the Mayan Indians, which led to unsuccessful nonviolent protests and eventually to the armed struggle started by the Zapatista National Liberation Army in January 1994. These events began to lead to political crises in the 1970s, with more frequent land invasions and takeovers of municipal halls. This was the beginning of a process that would lead to the emergence of the Zapatista movement in the 1990s. Another important factor in this movement was the role of the Catholic Church from the 1960s to the 1980s. In 1960, Samuel Ruiz became the bishop of the Diocese of Chiapas, centered in San Cristóbal. He supported and worked with Marist priests and nuns following an ideology called liberation theology. In 1974, he organized a statewide "Indian Congress" with representatives from the Tzeltal, Tzotzil, Tojolabal and Ch'ol peoples from 327 communities as well as Marists and the Maoist People's Union. This congress was the first of its kind, with the goal of uniting the indigenous peoples politically. These efforts were also supported by leftist organizations from outside Mexico, especially to form unions of ejido organizations. These unions would later form the base of the EZLN organization. One reason for the Church's efforts to reach out to the indigenous population was that, starting in the 1970s, a shift began from traditional Catholic affiliation to Protestant, Evangelical and other Christian sects. The 1980s saw a large wave of refugees coming into the state from Central America as a number of these countries, especially Guatemala, were in the midst of violent political turmoil. The Chiapas/Guatemala border had been relatively porous, with people traveling back and forth easily in the 19th and 20th centuries, much like the Mexico/U.S. border around the same time. This was in spite of tensions caused by Mexico's annexation of the Soconusco region in the 19th century. The border between Mexico and Guatemala had traditionally been poorly guarded, due to diplomatic considerations, lack of resources and pressure from landowners who needed cheap labor sources. The arrival of thousands of refugees from Central America strained Mexico's relationship with Guatemala, at one point coming close to war, and also politically destabilized Chiapas. 
Although Mexico is not a signatory to the UN Convention Relating to the Status of Refugees, international pressure forced the government to grant official protection to at least some of the refugees. Camps were established in Chiapas and other southern states, and mostly housed Mayan peoples. However, most Central American refugees from that time never received any official status, estimated by church and charity groups at about half a million from El Salvador alone. The Mexican government resisted direct international intervention in the camps, but eventually relented somewhat because of finances. By 1984, there were 92 camps with 46,000 refugees in Chiapas, concentrated in three areas, mostly near the Guatemalan border. To make matters worse, the Guatemalan army conducted raids into camps on Mexican territory, causing significant casualties and terrifying the refugees and local populations. From within Mexico, refugees faced threats from local governments, which threatened to deport them, legally or not, and from local paramilitary groups funded by those worried about the political situation in Central America spilling over into the state. The official government response was to militarize the areas around the camps, which limited international access, and migration into Mexico from Central America was restricted. By 1990, it was estimated that there were over 200,000 Guatemalans and half a million from El Salvador, almost all peasant farmers and most under age twenty. In the 1980s, the politicization of the indigenous and rural populations of the state that began in the 1960s and 1970s continued. In 1980, several ejidos (communal land organizations) joined to form the Union of Ejidal Unions and United Peasants of Chiapas, generally called the Union of Unions, or UU. It had a membership of 12,000 families from over 180 communities. By 1988, this organization joined with others to form the ARIC-Union of Unions (ARIC-UU) and took over much of the Lacandon Jungle portion of the state. Most of the members of these organizations were from Protestant and Evangelical sects as well as "Word of God" Catholics affiliated with the political movements of the Diocese of Chiapas. What they held in common was indigenous identity vis-à-vis the non-indigenous, using the old 19th-century "caste war" term "Ladino" for the latter. Economic liberalization and the EZLN The adoption of liberal economic reforms by the Mexican federal government clashed with the leftist political ideals of these groups, notably as the reforms were believed to have had negative economic effects on poor farmers, especially small-scale indigenous coffee-growers. Opposition would coalesce into the Zapatista movement in the 1990s. Although the Zapatista movement couched its demands and cast its role in response to contemporary issues, especially in its opposition to neoliberalism, it operates in the tradition of a long line of peasant and indigenous uprisings that have occurred in the state since the colonial era. This is reflected in its indigenous vs. mestizo character. However, the movement was an economic one as well. Although the area has extensive resources, much of the local population of the state, especially in rural areas, did not benefit from this bounty. In the 1990s, two-thirds of the state's residents did not have sewage service, only a third had electricity and half did not have potable water. Over half of the schools offered education only to the third grade and most pupils dropped out by the end of first grade. 
Grievances, strongest in the San Cristóbal and Lacandon Jungle areas, were taken up by a small leftist guerrilla band led by a man called only "Subcomandante Marcos." This small band, called the Zapatista Army of National Liberation (Ejército Zapatista de Liberación Nacional, EZLN), came to the world's attention when, on January 1, 1994 (the day the NAFTA treaty went into effect), EZLN forces occupied and took over the towns of San Cristóbal de las Casas, Las Margaritas, Altamirano, Ocosingo and three others. They read their proclamation of revolt to the world and then laid siege to a nearby military base, capturing weapons and releasing many prisoners from the jails. This action followed previous protests in the state in opposition to neoliberal economic policies. Although it was estimated to have no more than 300 armed guerrilla members, the EZLN paralyzed the Mexican government, which balked at the political risks of direct confrontation. The major reason for this was that the rebellion caught the attention of the national and world press, as Marcos made full use of the then-new Internet to get the group's message out, putting the spotlight on indigenous issues in Mexico in general. Furthermore, the opposition press in Mexico City, especially La Jornada, actively supported the rebels. These factors encouraged the rebellion to go national. Many blamed the unrest on infiltration of leftists among the large Central American refugee population in Chiapas, and the rebellion opened up splits in the countryside between those supporting and opposing the EZLN. Zapatista sympathizers have included mostly Protestants and Word of God Catholics, opposing those "traditionalist" Catholics who practiced a syncretic form of Catholicism and indigenous beliefs. This split had existed in Chiapas since the 1970s, with the latter group supported by the caciques and others in the traditional power-structure. Protestants and Word of God Catholics (allied directly with the bishopric in San Cristóbal) tended to oppose traditional power structures. The Bishop of Chiapas, Samuel Ruiz, and the Diocese of Chiapas reacted by offering to mediate between the rebels and authorities. However, because of this diocese's activism since the 1960s, authorities accused the clergy of being involved with the rebels. There was some ambiguity about the relationship between Ruiz and Marcos, and it was a constant feature of news coverage, with many in official circles using this to discredit Ruiz. Eventually, the activities of the Zapatistas began to worry the Roman Catholic Church in general and to upstage the diocese's attempts to re-establish itself among Chiapan indigenous communities against Protestant evangelization. This would lead to a breach between the Church and the Zapatistas. The Zapatista story remained in headlines for a number of years. One reason for this was the December 1997 massacre of forty-five unarmed Tzotzil peasants, mostly women and children, in the Zapatista-controlled village of Acteal in the Chenalhó municipality just north of San Cristóbal. This allowed many media outlets in Mexico to step up their criticisms of the government. Despite this, the armed conflict was brief, mostly because the Zapatistas, unlike many other guerrilla movements, did not try to gain traditional political power. The movement focused more on trying to manipulate public opinion in order to obtain concessions from the government. 
This has linked the Zapatistas to other indigenous and identity-politics movements that arose in the late 20th century. The main concession that the group received was the San Andrés Accords (1996), also known as the Law on Indian Rights and Culture. The Accords appear to grant certain indigenous zones autonomy, but this conflicts with the Mexican constitution, so their legitimacy has been questioned. Zapatista declarations since the mid-1990s have called for a new constitution; the government has not found a solution to this problem. The revolt also pressed the government to institute anti-poverty programs such as "Progresa" (later called "Oportunidades") and the "Puebla-Panama Plan" – aiming to increase trade between southern Mexico and Central America. As of the first decade of the 2000s, the Zapatista movement remained popular in many indigenous communities. The uprising gave indigenous peoples a more active role in the state's politics. However, it did not solve the economic issues that many peasant farmers face, especially the lack of land to cultivate. This problem has been at crisis proportions since the 1970s, and the government's reaction has been to encourage peasant farmers—mostly indigenous—to migrate into the sparsely populated Lacandon Jungle, a trend since earlier in the century. From the 1970s on, some 100,000 people set up homes in this rainforest area, with many being recognized as ejidos, or communal land-holding organizations. These migrants included Tzeltals, Tojolabals, Ch'ols and mestizos, mostly farming corn and beans and raising livestock. However, the government changed policies in the late 1980s with the establishment of the Montes Azules Biosphere Reserve, as much of the Lacandon Jungle had been destroyed or severely damaged. While armed resistance has wound down, the Zapatistas have remained a strong political force, especially around San Cristóbal and the Lacandon Jungle, their traditional bases. Since the Accords, they have shifted their focus to gaining autonomy for the communities they control. Since the 1994 uprising, migration into the Lacandon Jungle has significantly increased, involving illegal settlements and cutting in the protected biosphere reserve. The Zapatistas support these actions as part of indigenous rights, but that has put them in conflict with international environmental groups and with the indigenous inhabitants of the rainforest area, the Lacandons. Environmental groups state that the settlements pose grave risks to what remains of the Lacandon, while the Zapatistas accuse them of being fronts for the government, which wants to open the rainforest up to multinational corporations. Added to this is the possibility that significant oil and gas deposits exist under this area. The Zapatista movement has had some successes. The agricultural sector of the economy now favors ejidos and other commonly-owned land. There have been some other gains economically as well. In the last decades of the 20th century, Chiapas's traditional agricultural economy diversified somewhat with the construction of more roads and better infrastructure by the federal and state governments. Tourism has become important in some areas of the state, especially in San Cristóbal de las Casas and Palenque. Its economy is important to Mexico as a whole as well, producing coffee, corn, cacao, tobacco, sugar, fruit, vegetables and honey for export. It is also a key state for the nation's petrochemical and hydroelectric industries. 
A significant percentage of PEMEX's drilling and refining takes place in Chiapas and Tabasco, and Chiapas produces fifty-five percent of Mexico's hydroelectric energy. However, Chiapas remains one of the poorest states in Mexico. Ninety-four of its 111 municipalities have a large percentage of the population living in poverty. In areas such as Ocosingo, Altamirano and Las Margaritas, the towns where the Zapatistas first came into prominence in 1994, 48% of the adults were illiterate. Chiapas is still considered isolated and distant from the rest of Mexico, both culturally and geographically. It has significantly underdeveloped infrastructure compared to the rest of the country, and its significant indigenous population with isolationist tendencies keeps the state culturally distinct. Cultural stratification, neglect and lack of investment by the Mexican federal government have exacerbated this problem. Geography Political geography Chiapas is located in the southeast of Mexico, bordering the states of Tabasco, Veracruz and Oaxaca, with the Pacific Ocean to the south and Guatemala to the east. It has a territory of 74,415 km2, making it the eighth largest state in Mexico. The state consists of 118 municipalities organized into nine political regions called Center, Altos, Fronteriza, Frailesca, Norte, Selva, Sierra, Soconusco and Istmo-Costa. There are 18 cities, twelve towns (villas) and 111 pueblos (villages). Major cities include Tuxtla Gutiérrez, San Cristóbal de las Casas, Tapachula, Palenque, Comitán, and Chiapa de Corzo. Geographical regions The state has a complex geography with seven distinct regions according to the Mullerried classification system. These include the Pacific Coast Plains, the Sierra Madre de Chiapas, the Central Depression, the Central Highlands, the Eastern Mountains, the Northern Mountains and the Gulf Coast Plains. The Pacific Coast Plains is a strip of land parallel to the ocean. It is composed mostly of sediment from the mountains that border it on the northern side. It is uniformly flat, and stretches from the Bernal Mountain south to Tonalá. It has deep salty soils due to its proximity to the sea. It has mostly deciduous rainforest, although most has been converted to pasture for cattle and fields for crops. It has numerous estuaries with mangroves and other aquatic vegetation. The Sierra Madre de Chiapas runs parallel to the Pacific coastline of the state, northwest to southeast, as a continuation of the Sierra Madre del Sur. This area has the highest altitudes in Chiapas, including the Tacaná Volcano, which rises above sea level. Most of these mountains are volcanic in origin, although the nucleus is metamorphic rock. It has a wide range of climates but little arable land. It is mostly covered in middle altitude rainforest, high altitude rainforest, and forests of oaks and pines. The mountains partially block rain clouds from the Pacific, a process known as orographic lift, which creates a particularly rich coastal region called the Soconusco. The main commercial center of the sierra is the town of Motozintla, also near the Guatemalan border. The Central Depression is in the center of the state. It is an extensive semi-flat area bordered by the Sierra Madre de Chiapas, the Central Highlands and the Northern Mountains. Within the depression there are a number of distinct valleys. The climate here can be very hot and humid in the summer, especially due to the large volume of rain received in July and August. 
The original vegetation was lowland deciduous forest with some rainforest of middle altitudes and some oaks at higher elevations. The Central Highlands, also referred to as Los Altos, are mountains oriented from northwest to southeast with altitudes ranging from above sea level. The western highlands are displaced faults, while the eastern highlands are mainly folds of sedimentary formations, mainly limestone, shale, and sandstone. These mountains, along with the Sierra Madre de Chiapas, become the Cuchumatanes where they extend over the border into Guatemala. Its topography is mountainous with many narrow valleys and karst formations called uvalas or poljés, depending on the size. Most of the rock is limestone, allowing for a number of formations such as caves and sinkholes. There are also some isolated pockets of volcanic rock, with the tallest peaks being the Tzontehuitz and Huitepec volcanoes. There are no significant surface water systems, as they are almost all underground. The original vegetation was forest of oak and pine, but these have been heavily damaged. The highland climate in the modified Köppen classification system for Mexico is humid temperate C(m) and subhumid temperate C(w2)(w). This climate exhibits a summer rainy season and a dry winter, with possibilities of frost from December to March. The Central Highlands have been the population center of Chiapas since the Conquest. European epidemics were hindered by the tierra fría climate, allowing the indigenous peoples in the highlands to retain their large numbers. The Eastern Mountains (Montañas del Oriente) are in the east of the state, formed by various parallel mountain chains mostly made of limestone and sandstone. Its altitude varies from . This area receives moisture from the Gulf of Mexico with abundant rainfall and exuberant vegetation, which creates the Lacandon Jungle, one of the most important rainforests in Mexico. The Northern Mountains (Montañas del Norte) are in the north of the state. They separate the flatlands of the Gulf Coast Plains from the Central Depression. Its rock is mostly limestone. These mountains also receive large amounts of rainfall with moisture from the Gulf of Mexico, giving them a mostly hot and humid climate with rains year round. In the highest elevations around , temperatures are somewhat cooler and there is a winter season. The terrain is rugged with small valleys whose natural vegetation is high altitude rainforest. The Gulf Coast Plains (Llanura Costera del Golfo) stretch into Chiapas from the state of Tabasco, which gives them the alternate name of the Tabasqueña Plains. These plains are found only in the extreme north of the state. The terrain is flat and prone to flooding during the rainy season, as it was built by sediments deposited by rivers and streams heading to the Gulf. Lacandon Jungle The Lacandon Jungle is situated in northeastern Chiapas, centered on a series of canyonlike valleys called the Cañadas, between smaller mountain ridges oriented from northwest to southeast. The ecosystem covers an area of approximately extending from Chiapas into northern Guatemala and southern Yucatán Peninsula and into Belize. This area contains as much as 25% of Mexico's total species diversity, most of which has not been researched. It has a predominantly hot and humid climate (Am w" i g) with most rain falling from summer to part of fall, with an average of between 2300 and 2600 mm per year. There is a short dry season from March to May. 
The predominant wild vegetation is perennial high rainforest. The Lacandon comprises a biosphere reserve (Montes Azules); four natural protected areas (Bonampak, Yaxchilan, Chan Kin, and Lacantum); and the communal reserve (La Cojolita), which functions as a biological corridor with the area of Petén in Guatemala. Flowing within the rainforest is the Usumacinta River, considered to be one of the largest rivers in Mexico and seventh largest in the world based on volume of water. During the 20th century, the Lacandon has had a dramatic increase in population and, along with it, severe deforestation. The population of the municipalities in this area (Altamirano, Las Margaritas, Ocosingo and Palenque) has risen from 11,000 in 1920 to over 376,000 in 2000. Migrants include Ch'ol, Tzeltal, Tzotzil and Tojolabal indigenous peoples along with mestizos, Guatemalan refugees and others. Most of these migrants are peasant farmers, who cut forest to plant crops. However, the soil of this area cannot support annual crop farming for more than three or four harvests. The increase in population and the need to move on to new lands has pitted migrants against each other, the native Lacandon people, and the various ecological reserves for land. It is estimated that only ten percent of the original Lacandon rainforest in Mexico remains, with the rest strip-mined, logged and farmed. It once stretched over a large part of eastern Chiapas but all that remains is along the northern edge of the Guatemalan border. Of this remaining portion, Mexico is losing over five percent each year. The best preserved portion of the Lacandon is within the Montes Azules Biosphere Reserve. It is centered on what was a commercial logging grant by the Porfirio Díaz government, which the government later nationalized. However, this nationalization and conversion into a reserve has made it one of the most contested lands in Chiapas, with the already existing ejidos and other settlements within the park, along with new arrivals, squatting on the land. Soconusco The Soconusco region encompasses a coastal plain and a mountain range with elevations of up to above sea level paralleling the Pacific Coast. The highest peak in Chiapas is the Tacaná Volcano at above sea level. In accordance with an 1882 treaty, the dividing line between Mexico and Guatemala goes right over the summit of this volcano. The climate is tropical, with a number of rivers and evergreen forests in the mountains. This is Chiapas' major coffee-producing area, as it has the best soils and climate for coffee. Before the arrival of the Spanish, this area was the principal source of cocoa seeds in the Aztec empire, which they used as currency, and for the highly prized quetzal feathers used by the nobility. It would become the first area to produce coffee, introduced by an Italian entrepreneur on the La Chacara farm. Coffee is cultivated on the slopes of these mountains mostly between asl. Mexico produces about 4 million sacks of green coffee each year, making it fifth in the world behind Brazil, Colombia, Indonesia and Vietnam. Most producers are small with plots of land under . From November to January, the annual crop is harvested and processed, employing thousands of seasonal workers. Lately, a number of coffee haciendas have been developing tourism infrastructure as well. Environment and protected areas Chiapas is located in the tropical belt of the planet, but the climate is moderated in many areas by altitude. 
Some areas have abundant rainfall year-round and others receive most of their rain between May and October, with a dry season from November to April. The mountain areas affect wind and moisture flow over the state, concentrating moisture in certain areas of the state. They also are responsible for some cloud-covered rainforest areas in the Sierra Madre. Chiapas' rainforests are home to thousands of animals and plants, some of which cannot be found anywhere else in the world. Natural vegetation varies from lowland to highland tropical forest, pine and oak forests in the highest altitudes and plains areas with some grassland. Chiapas is ranked second in forest resources in Mexico with valued woods such as pine, cypress, Liquidambar, oak, cedar, mahogany and more. The Lacandon Jungle is one of the last major tropical rainforests in the northern hemisphere with an extension of . It contains about sixty percent of Mexico's tropical tree species, 3,500 species of plants, 1,157 species of invertebrates and over 500 species of vertebrates. Chiapas has one of the greatest diversities in wildlife in the Americas. There are more than 100 species of amphibians, 700 species of birds, fifty species of mammals and just over 200 species of reptiles. In the hot lowlands, there are armadillos, monkeys, pelicans, wild boar, jaguars, crocodiles, iguanas and many others. In the temperate regions there are species such as bobcats, salamanders, a large red lizard (Abronia lythrochila), weasels, opossums, deer, ocelots and bats. The coastal areas have large quantities of fish, turtles, and crustaceans, with many species endangered as they are endemic only to this area. The total biodiversity of the state is estimated at over 50,000 species of plants and animals. The diversity of species is not limited to the hot lowlands. The higher altitudes also have mesophile forests and oak/pine forests in Los Altos, the Northern Mountains and the Sierra Madre, and there are extensive estuaries and mangrove wetlands along the coast. Chiapas has about thirty percent of Mexico's fresh water resources. The Sierra Madre divides them into those that flow to the Pacific and those that flow to the Gulf of Mexico. Most of the first are short rivers and streams; most longer ones flow to the Gulf. Most Pacific side rivers do not drain directly into this ocean but into lagoons and estuaries. The two largest rivers are the Grijalva and the Usumacinta, both of which are part of the same system. The Grijalva has four dams built on it: the Belisario Domínguez (La Angostura), Manuel Moreno Torres (Chicoasén), Nezahualcóyotl (Malpaso), and Ángel Albino Corzo (Peñitas). The Usumacinta divides the state from Guatemala and is the longest river in Central America. In total, the state has of surface waters, of coastline, control of of ocean, of estuaries and ten lake systems. Laguna Miramar is a lake in the Montes Azules reserve and the largest in the Lacandon Jungle at 40 km in diameter. The color of its waters varies from indigo to emerald green, and in ancient times there were settlements on its islands and in the caves on its shoreline. The Catazajá Lake is 28 km north of the city of Palenque. It is formed by rainwater captured as it makes its way to the Usumacinta River. It contains wildlife such as manatees and iguanas and it is surrounded by rainforest. Fishing on this lake is an ancient tradition and the lake has an annual bass fishing tournament. The Welib Já Waterfall is located on the road between Palenque and Bonampak. 
The state has thirty-six protected areas at the state and federal levels, along with 67 areas protected by various municipalities. The Sumidero Canyon National Park was decreed in 1980 with an extension of . It extends over two of the regions of the state, the Central Depression and the Central Highlands, over the municipalities of Tuxtla Gutiérrez, Nuevo Usumacinta, Chiapa de Corzo and San Fernando. The canyon has steep, vertical sides that rise up to 1,000 meters above the river below, with mostly tropical rainforest, though some areas of xerophile vegetation such as cactus can be found. The river below, which has cut the canyon over the course of twelve million years, is called the Grijalva. The canyon is emblematic for the state, as it is featured in the state seal. The Sumidero Canyon was once the site of a battle between the Spaniards and Chiapanecan Indians. Many Chiapanecans chose to throw themselves from the high edges of the canyon rather than be defeated by Spanish forces. Today, the canyon is a popular destination for ecotourism. Visitors can take boat trips down the river that runs through the canyon and see the area's many birds and abundant vegetation. The Montes Azules Biosphere Reserve was decreed in 1978. It is located in the northeast of the state in the Lacandon Jungle. It covers in the municipalities of Maravilla Tenejapa, Ocosingo and Las Margaritas. It conserves highland perennial rainforest. The jungle is in the Usumacinta River basin east of the Chiapas Highlands. It is recognized by the United Nations Environment Programme for its global biological and cultural significance. In 1992, the Lacantun Reserve, which includes the Classic Maya archaeological sites of Yaxchilan and Bonampak, was added to the biosphere reserve. Agua Azul Waterfall Protection Area is in the Northern Mountains in the municipality of Tumbalá. It covers an area of of rainforest and pine-oak forest, centered on the waterfalls it is named after. It is located in an area locally called the "Mountains of Water", as many rivers flow through there on their way to the Gulf of Mexico. The rugged terrain encourages waterfalls with large pools at the bottom, which the falling water has carved into the sedimentary rock and limestone. Agua Azul is one of the best known in the state. In addition to Agua Azul, the area has other attractions—such as the Shumuljá River, which contains rapids and waterfalls, the Misol Há Waterfall with a thirty-meter drop, the Bolón Ajau Waterfall with a fourteen-meter drop, the Gallito Copetón rapids, the Blacquiazules Waterfalls, and a section of calm water called the Agua Clara. The El Ocote Biosphere Reserve was decreed in 1982 and is located in the Northern Mountains at the boundary with the Sierra Madre del Sur in the municipalities of Ocozocoautla, Cintalapa and Tecpatán. It has a surface area of and preserves a rainforest area with karst formations. The Lagunas de Montebello National Park was decreed in 1959 and consists of near the Guatemalan border in the municipalities of La Independencia and La Trinitaria. It contains two of the most threatened ecosystems in Mexico: the "cloud rainforest" and the Soconusco rainforest. 
The El Triunfo Biosphere Reserve, decreed in 1990, is located in the Sierra Madre de Chiapas in the municipalities of Acacoyagua, Ángel Albino Corzo, Montecristo de Guerrero, La Concordia, Mapastepec, Pijijiapan, Siltepec and Villa Corzo near the Pacific Ocean with . It conserves areas of tropical rainforest and many freshwater systems endemic to Central America. It is home to around 400 species of birds, including several rare species such as the horned guan, the quetzal and the azure-rumped tanager. The Palenque National Forest is centered on the archaeological site of the same name and was decreed in 1981. It is located in the municipality of Palenque where the Northern Mountains meet the Gulf Coast Plain. It extends over of tropical rainforest. The Laguna Bélgica Conservation Zone is located in the northwest of the state in the municipality of Ocozocoautla. It covers forty-two hectares centered on the Bélgica Lake. The El Zapotal Ecological Center was established in 1980. Nahá–Metzabok is an area in the Lacandon Forest whose name means "place of the black lord" in Nahuatl. It extends over and in 2010, it was included in the World Network of Biosphere Reserves. Two main communities in the area are called Nahá and Metzabok. They were established in the 1940s, but the oldest communities in the area belong to the Lacandon people. The area has large amounts of wildlife, including endangered species such as eagles, quetzals and jaguars. Demographics General statistics As of 2010, the population was 4,796,580, making Chiapas the eighth most populous state in Mexico. The 20th century saw large population growth in Chiapas. From fewer than one million inhabitants in 1940, the state had about two million in 1980, and over 4 million in 2005. Overcrowded land in the highlands was relieved when the rainforest to the east was subject to land reform. Cattle ranchers, loggers, and subsistence farmers migrated to the rain forest area. The population of the Lacandon was only one thousand people in 1950, but by the mid-1990s this had increased to 200,000. As of 2010, 78% of the population lived in urban communities and 22% in rural communities. While birthrates are still high in the state, they have come down in recent decades from 7.4 children per woman in 1950. However, these rates still mean significant population growth in raw numbers. About half of the state's population is under age 20, with an average age of 19. In 2005, there were 924,967 households, 81% headed by men and the rest by women. Most households were nuclear families (70.7%), with 22.1% consisting of extended families. More people migrate out of Chiapas than into it, with emigrants leaving primarily for Tabasco, Oaxaca, Veracruz, the State of Mexico and the Federal District. While Catholics remain the majority, their numbers have dropped as many have converted to Protestant denominations in recent decades. Islam is also a small but growing religion, as the numbers of indigenous Muslims and of Muslim immigrants from Africa continue to rise. The National Presbyterian Church in Mexico has a large following in Chiapas; some estimate that 40% of the population are followers of the Presbyterian church. There are a number of people in the state with African features. These are the descendants of slaves brought to the state in the 16th century. There are also those with predominantly European features who are the descendants of the original Spanish colonizers as well as later immigrants to Mexico. 
The latter mostly came at the end of the 19th and beginning of the 20th century under the Porfirio Díaz regime to start plantations. According to the 2020 Census, 1.02% of Chiapas' population identified as Black, Afro-Mexican, or of African descent. Indigenous population Numbers and influence Over the history of Chiapas, there have been three main indigenous groups: the Mixes-Zoques, the Mayas and the Chiapa. Today, there are an estimated fifty-six linguistic groups. As of the 2005 Census, there were 957,255 people who spoke an indigenous language, out of a total population of about 3.5 million. Of these roughly one million speakers, one third do not speak Spanish. Out of Chiapas' 111 municipios, 99 have majority indigenous populations. Twenty-two municipalities have indigenous populations over 90%, and 36 municipalities have native populations exceeding 50%. However, despite population growth in indigenous villages, the percentage of indigenous people relative to non-indigenous people continues to fall, with less than 35% of the population indigenous. Indian populations are concentrated in a few areas, with the largest concentration of indigenous-language speakers living in five of Chiapas's nine economic regions: Los Altos, Selva, Norte, Fronteriza, and Sierra. The remaining regions, Soconusco, Centro and Costa, have populations that are considered to be dominantly mestizo. The state has about 13.5% of all of Mexico's indigenous population, and it has been ranked among the ten "most indianized" states, with only Campeche, Oaxaca, Quintana Roo and Yucatán having been ranked above it between 1930 and the present. These indigenous peoples have been historically resistant to assimilation into the broader Mexican society, as best seen in the retention rates of indigenous languages and the historic demands for autonomy over geographic areas as well as cultural domains. Much of the latter has been prominent since the Zapatista uprising in 1994. Most of Chiapas' indigenous groups are descended from the Mayans, speaking languages that are closely related to one another, belonging to the Western Maya language group. The state was part of a large region dominated by the Mayans during the Classic period. The most numerous of these Mayan groups include the Tzeltal, Tzotzil, Ch'ol, Zoque, Tojolabal, Lacandon and Mam, which have traits in common such as syncretic religious practices and social structures based on kinship. The most common Western Maya languages are Tzeltal and Tzotzil, along with Chontal, Ch'ol, Tojolabal, Chuj, Kanjobal, Acatec, Jacaltec and Motozintlec. Twelve of Mexico's officially recognized native peoples living in the state have conserved their language, customs, history, dress and traditions to a significant degree. The primary groups include the Tzeltal, Tzotzil, Ch'ol, Tojolabal, Zoque, Chuj, Kanjobal, Mam, Jacalteco, Mochó, Cakchiquel and Lacandon. Most indigenous communities are found in the municipalities of the Centro, Altos, Norte and Selva regions, with many having indigenous populations of over fifty percent. These range from municipalities such as Bochil, Sitalá, Pantepec and Simojovel to those with over ninety percent indigenous populations, such as San Juan Cancuc, Huixtán, Tenejapa, Tila, Oxchuc, Tapalapa, Zinacantán, Mitontic, Ocotepec, Chamula, and Chalchihuitán. The most numerous indigenous communities are the Tzeltal and Tzotzil peoples, who number about 400,000 each, together accounting for about half of the state's indigenous population. The next most numerous are the Ch'ol with about 200,000 people and the Tojolabal and Zoques, who number about 50,000 each. 
The top three municipalities in Chiapas with indigenous language speakers three years of age and older are Ocosingo (133,811), Chilón (96,567), and San Juan Chamula (69,475). These three municipalities accounted for 24.8% (299,853) of all indigenous language speakers three years or older in the state of Chiapas, out of a total of 1,209,057 indigenous language speakers three years or older. Although most indigenous language speakers are bilingual, especially in the younger generations, many of these languages have shown resilience. Four of Chiapas' indigenous languages, Tzeltal, Tzotzil, Tojolabal and Ch'ol, are high-vitality languages, meaning that a
Guatemala. Elites in highland cities pushed for incorporation into Mexico. In 1822, then-Emperor Agustín de Iturbide decreed that Chiapas was part of Mexico. In 1823, the Junta General de Gobierno was held and Chiapas declared independence again. In July 1824, the Soconusco District of southwestern Chiapas split off from Chiapas, announcing that it would join the Central American Federation. In September of the same year, a referendum was held on whether the intendencia would join Central America or Mexico, with many of the elite endorsing union with Mexico. This referendum ended in favor of incorporation with Mexico (allegedly through manipulation by the elite in the highlands), but the Soconusco region maintained a neutral status until 1842, when Oaxacans under General Antonio López de Santa Anna occupied the area and declared it reincorporated into Mexico. Elites of the area would not accept this until 1844. Guatemala would not recognize Mexico's annexation of the Soconusco region until 1895, even though the final border between Chiapas and that country had been established in 1882. The State of Chiapas was officially declared in 1824, with its first constitution in 1826. Ciudad Real was renamed San Cristóbal de las Casas in 1828. In the decades after the official end of the war, the provinces of Chiapas and Soconusco unified, with power concentrated in San Cristóbal de las Casas. The state's society evolved into three distinct spheres: indigenous peoples, mestizos from the farms and haciendas, and the Spanish colonial cities. Most of the political struggles were between the last two groups, especially over who would control the indigenous labor force. Economically, the state lost one of its main crops, indigo, to synthetic dyes. There was a small experiment with democracy in the form of "open city councils", but it was short-lived because voting was heavily rigged. The Universidad Pontificia y Literaria de Chiapas was founded in 1826, with Mexico's second teacher's college founded in the state in 1828. Era of the Liberal Reform With the ouster of conservative Antonio López de Santa Anna, Mexican liberals came to power. The Reform War (1858–1861), fought between Liberals, who favored federalism, economic development and reduced power for the Roman Catholic Church and the Mexican army, and Conservatives, who favored centralized autocratic government and the retention of elite privileges, did not lead to any military battles in the state. Despite that, it strongly affected Chiapas politics. In Chiapas, the Liberal-Conservative division had its own twist. Much of the division between the highland and lowland ruling families was over whom the Indians should work for and for how long, as the main shortage was of labor. These families split into Liberals in the lowlands, who wanted further reform, and Conservatives in the highlands, who still wanted to keep some of the traditional colonial and church privileges. For most of the early and mid 19th century, Conservatives held most of the power and were concentrated in the larger cities of San Cristóbal de las Casas, Chiapa (de Corzo), Tuxtla and Comitán. As Liberals gained the upper hand nationally in the mid-19th century, one Liberal politician, Ángel Albino Corzo, gained control of the state. Corzo became the primary exponent of Liberal ideas in the southeast of Mexico and defended the Palenque and Pichucalco areas from annexation by Tabasco. However, Corzo's rule would end in 1875, when he opposed the regime of Porfirio Díaz. 
Liberal land reforms would have negative effects on the state's indigenous population, unlike in other areas of the country. Liberal governments expropriated lands that were previously held by the Spanish Crown and the Catholic Church in order to sell them into private hands. This was motivated not only by ideology, but also by the need to raise money. However, many of these lands had been held in a kind of "trust" with the local indigenous populations, who worked them. Liberal reforms took away this arrangement, and many of these lands fell into the hands of large landholders, who then made the local Indian population work for three to five days a week just for the right to continue to cultivate the lands. This requirement caused many to leave and look for employment elsewhere. Most became "free" workers on other farms, but they were often paid only with food and basic necessities from the farm shop. If this was not enough, these workers became indebted to these same shops and then unable to leave. The opening up of these lands also allowed many whites and mestizos (often called Ladinos in Chiapas) to encroach on what had been exclusively indigenous communities in the state. These communities had had almost no contact with the Ladino world, except for a priest. The new Ladino landowners occupied their acquired lands, while others, such as shopkeepers, opened up businesses in the centers of Indian communities. In 1848, a group of Tzeltals plotted to kill the new mestizos in their midst, but the plan was discovered and punished by the removal of a large number of the community's male members. The changing social order had severe negative effects on the indigenous population, with alcoholism spreading and leading to more debt, as alcohol was expensive. The struggles between Conservatives and Liberals nationally disrupted commerce and confused power relations between Indian communities and Ladino authorities. It also resulted in some brief respites for Indians during times when the instability led to uncollected taxes. One other effect of the Liberal land reforms was the start of coffee plantations, especially in the Soconusco region. One reason for this push was that Mexico was still working to strengthen its claim on the area against Guatemala's claims on the region. The land reforms brought colonists from other areas of the country as well as foreigners from England, the United States and France. These foreign immigrants would introduce coffee production to the area, as well as modern machinery and professional administration of coffee plantations. Eventually, coffee would become the state's most important crop. Although the Liberals had mostly triumphed in the state and the rest of the country by the 1860s, Conservatives still held considerable power in Chiapas. Liberal politicians sought to solidify their power among the indigenous groups by weakening the Roman Catholic Church. The more radical of these even allowed indigenous groups the religious freedom to return to a number of native rituals and beliefs, such as pilgrimages to natural shrines like mountains and waterfalls. This culminated in the Chiapas "caste war", an uprising of Tzotzils beginning in 1868. The basis of the uprising was the establishment of the "three stones cult" in Tzajahemel. Agustina Gómez Checheb was a girl tending her father's sheep when three stones fell from the sky. Collecting them, she put them on her father's altar and soon claimed that the stones communicated with her.
Word of this soon spread and the "talking stones" of Tzajahemel soon became a local indigenous pilgrimage site. The cult was taken over by one pilgrim, Pedro Díaz Cuzcat, who also claimed to be able to communicate with the stones and had knowledge of Catholic ritual, becoming a kind of priest. However, this challenged the traditional Catholic faith, and non-Indians began to denounce the cult. Stories about the cult include embellishments such as the crucifixion of a young Indian boy. This led to the arrest of Checheb and Cuzcat in December 1868, which caused resentment among the Tzotzils. Although the Liberals had earlier supported the cult, Liberal landowners had also lost control of much of their Indian labor, and Liberal politicians were having a harder time collecting taxes from indigenous communities. An Indian army gathered at Zontehuitz, then attacked various villages and haciendas. By the following June the city of San Cristóbal was surrounded by several thousand Indians, who offered to exchange several Ladino captives for their religious leaders and stones. Chiapas governor Domínguez came to San Cristóbal with about three hundred heavily armed men, who then attacked the Indian force, which was armed only with sticks and machetes. The indigenous force was quickly dispersed and routed, with government troops pursuing pockets of guerrilla resistance in the mountains until 1870. The event effectively returned control of the indigenous workforce to the highland elite. Porfiriato, 1876–1911 The Porfirio Díaz era at the end of the 19th century and beginning of the 20th was initially thwarted by regional bosses called caciques, bolstered by a wave of Spanish and mestizo farmers who migrated to the state and added to the elite group of wealthy landowning families. There was some technological progress, such as a highway from San Cristóbal to the Oaxaca border and the first telephone line in the 1880s, but Porfirian-era economic reforms would not begin until 1891 with Governor Emilio Rabasa. This governor took on the local and regional caciques and centralized power in the state capital, which he moved from San Cristóbal de las Casas to Tuxtla in 1892. He modernized public administration and transportation and promoted education. Rabasa also introduced the telegraph, limited public schooling, sanitation and road construction, including a route from San Cristóbal to Tuxtla and then Oaxaca, which signaled the beginning of favoritism toward development in the central valley over the highlands. He also changed state policies to favor foreign investment and the consolidation of large landholdings for the production of cash crops such as henequen, rubber, guayule, cochineal and coffee. Agricultural production boomed, especially coffee, which induced the construction of port facilities in Tonalá. The economic expansion and investment in roads also increased access to tropical commodities such as hardwoods, rubber and chicle. These still required cheap and steady labor, which was provided by the indigenous population. By the end of the 19th century, the four main indigenous groups, Tzeltals, Tzotzils, Tojolabals and Ch'ols, were living in "reducciones" or reservations, isolated from one another. Conditions on the farms of the Porfirian era amounted to serfdom, as bad as if not worse than conditions for other indigenous and mestizo populations in the period leading to the Mexican Revolution. While this coming event would affect the state, Chiapas did not follow the uprisings in other areas that would end the Porfirian era.
Japanese immigration to Mexico began in 1897, when the first thirty-five migrants arrived in Chiapas to work on coffee farms, making Mexico the first Latin American country to receive organized Japanese immigration. Although this colony ultimately failed, there remains a small Japanese community in Acacoyagua, Chiapas. Early 20th century to 1960 In the early 20th century and into the Mexican Revolution, the production of coffee was particularly important but labor-intensive. This would lead to a practice called enganche (hook), where recruiters would lure workers with advance pay and other incentives such as alcohol and then trap them with debts for travel and other items to be worked off. This practice would lead to a kind of indentured servitude and to uprisings in areas of the state, although they never led to large rebel armies as in other parts of Mexico. A small war broke out between Tuxtla Gutiérrez and San Cristóbal in 1911. San Cristóbal de las Casas, whose budget was so limited that it had to ally with San Juan Chamula, tried to regain the state capital, but Tuxtla Gutiérrez, despite having only a small, ragtag army, overwhelmingly defeated the San Cristóbal force and its Chamula allies. There were three years of peace after that until troops allied with the "First Chief" of the revolutionary Constitutionalist forces, Venustiano Carranza, entered in 1914, taking over the government with the aim of imposing the Ley de Obreros (Workers' Law) to address injustices against the state's mostly indigenous workers. Conservatives responded violently months later, when they were certain the Carranza forces would take their lands. This was mostly by way of guerrilla actions headed by farm owners who called themselves the Mapaches. This action continued for six years, until President Carranza was assassinated in 1920 and revolutionary general Álvaro Obregón became president of Mexico. This allowed the Mapaches to gain political power in the state and effectively stop many of the social reforms occurring in other parts of Mexico. The Mapaches continued to fight against socialists and communists in Mexico from 1920 to 1936 to maintain their control over the state. In general, elite landowners also allied with the nationally dominant party founded by Plutarco Elías Calles following the assassination of president-elect Obregón in 1928; that party was renamed the Institutional Revolutionary Party in 1946. Through that alliance, they could block land reform as well. The Mapaches were first defeated in 1925, when an alliance of socialists and former Carranza loyalists had Carlos A. Vidal selected as governor, although he was assassinated two years later. The last of the Mapache resistance was overcome in the early 1930s by Governor Victorico Grajales, who pursued President Lázaro Cárdenas' social and economic policies, including persecution of the Catholic Church. These policies would have some success in redistributing lands and organizing indigenous workers, but the state would remain relatively isolated for the rest of the 20th century. The territory was reorganized into municipalities in 1916. The current state constitution was written in 1921. There was political stability from the 1940s to the early 1970s; however, regionalism regained strength, with people thinking of themselves as belonging to their local city or municipality more than to the state.
This regionalism impeded the economy, as local authorities restricted the movement of outside goods. For this reason, the construction of highways and communications networks was pushed to help with economic development. Most of the work was done around Tuxtla Gutiérrez and Tapachula. This included the Sureste railroad connecting northern municipalities such as Pichucalco, Salto de Agua, Palenque, Catazajá and La Libertad. The Cristobal Colon highway linked Tuxtla to the Guatemalan border. Other highways included El Escopetazo to Pichucalco and a highway between San Cristóbal and Palenque with branches to Cuxtepeques and La Frailesca. This helped to integrate the state's economy, but it also permitted the political rise of communal land owners called ejidatarios. Mid-20th century to 1990 In the mid-20th century, the state experienced a significant rise in population, which outstripped local resources, especially land in the highland areas. Since the 1930s, many indigenous people and mestizos have migrated from the highland areas into the Lacandon Jungle, with the populations of Altamirano, Las Margaritas, Ocosingo and Palenque rising from less than 11,000 in 1920 to over 376,000 in 2000. These migrants came to the jungle area to clear forest, grow crops and raise livestock, especially cattle. Economic development in general raised the output of the state, especially in agriculture, but it had the effect of deforesting many areas, especially the Lacandon. Added to this, there were still serf-like conditions for many workers and insufficient educational infrastructure. The population continued to increase faster than the economy could absorb it. There were some attempts to resettle peasant farmers onto uncultivated lands, but they were met with resistance. President Gustavo Díaz Ordaz awarded a land grant to the town of Venustiano Carranza in 1967, but that land was already being used by cattle ranchers who refused to leave. The peasants tried to take over the land anyway, but when violence broke out, they were forcibly removed. In Chiapas, poor farmland and severe poverty afflicted the Mayan Indians, which led to unsuccessful nonviolent protests and eventually to the armed struggle started by the Zapatista National Liberation Army in January 1994. These events began to lead to political crises in the 1970s, with more frequent land invasions and takeovers of municipal halls. This was the beginning of a process that would lead to the emergence of the Zapatista movement in the 1990s. Another important factor in this movement would be the role of the Catholic Church from the 1960s to the 1980s. In 1960, Samuel Ruiz became the bishop of the Diocese of Chiapas, centered in San Cristóbal. He supported and worked with Marist priests and nuns following an ideology called liberation theology. In 1974, he organized a statewide "Indian Congress" with representatives from the Tzeltal, Tzotzil, Tojolabal and Ch'ol peoples from 327 communities as well as Marists and the Maoist People's Union. This congress was the first of its kind, with the goal of uniting the indigenous peoples politically. These efforts were also supported by leftist organizations from outside Mexico, especially to form unions of ejido organizations. These unions would later form the base of the EZLN organization. One reason for the Church's efforts to reach out to the indigenous population was that, starting in the 1970s, a shift began from traditional Catholic affiliation to Protestant, Evangelical and other Christian sects.
The 1980s saw a large wave of refugees coming into the state from Central America, as a number of these countries, especially Guatemala, were in the midst of violent political turmoil. The Chiapas/Guatemala border had been relatively porous, with people traveling back and forth easily in the 19th and 20th centuries, much like the Mexico/U.S. border around the same time. This was in spite of tensions caused by Mexico's annexation of the Soconusco region in the 19th century. The border between Mexico and Guatemala had traditionally been poorly guarded, due to diplomatic considerations, lack of resources and pressure from landowners who needed cheap labor. The arrival of thousands of refugees from Central America stressed Mexico's relationship with Guatemala, at one point coming close to war, and politically destabilized Chiapas. Although Mexico is not a signatory to the UN Convention Relating to the Status of Refugees, international pressure forced the government to grant official protection to at least some of the refugees. Camps were established in Chiapas and other southern states, and mostly housed Mayan peoples. However, most Central American refugees from that time never received any official status, estimated by church and charity groups at about half a million from El Salvador alone. The Mexican government resisted direct international intervention in the camps, but eventually relented somewhat because of finances. By 1984, there were 92 camps with 46,000 refugees in Chiapas, concentrated in three areas, mostly near the Guatemalan border. To make matters worse, the Guatemalan army conducted raids into camps on Mexican territory with significant casualties, terrifying the refugees and local populations. From within Mexico, refugees faced threats from local governments, which threatened to deport them, legally or not, and from local paramilitary groups funded by those worried about the political situation in Central America spilling over into the state. The official government response was to militarize the areas around the camps, which limited international access, and migration into Mexico from Central America was restricted. By 1990, it was estimated that there were over 200,000 Guatemalans and half a million refugees from El Salvador, almost all peasant farmers and most under age twenty. In the 1980s, the politicization of the indigenous and rural populations of the state that began in the 1960s and 1970s continued. In 1980, several ejidos (communal land organizations) joined to form the Union of Ejidal Unions and United Peasants of Chiapas, generally called the Union of Unions, or UU. It had a membership of 12,000 families from over 180 communities. By 1988, this organization joined with others to form the ARIC-Union of Unions (ARIC-UU) and took over much of the Lacandon Jungle portion of the state. Most of the members of these organizations were from Protestant and Evangelical sects as well as "Word of God" Catholics affiliated with the political movements of the Diocese of Chiapas. What they held in common was indigenous identity vis-à-vis the non-indigenous, using the old 19th-century "caste war" term "Ladino" for the latter. Economic liberalization and the EZLN The adoption of liberal economic reforms by the Mexican federal government clashed with the leftist political ideals of these groups, notably because the reforms were believed to have begun to have negative economic effects on poor farmers, especially small-scale indigenous coffee growers.
Opposition would coalesce into the Zapatista movement in the 1990s. Although the Zapatista movement couched its demands and cast its role in response to contemporary issues, especially in its opposition to neoliberalism, it operates in the tradition of a long line of peasant and indigenous uprisings that have occurred in the state since the colonial era. This is reflected in its indigenous versus mestizo character. However, the movement was an economic one as well. Although the area has extensive resources, much of the local population of the state, especially in rural areas, did not benefit from this bounty. In the 1990s, two-thirds of the state's residents did not have sewage service, only a third had electricity and half did not have potable water. Over half of the schools offered education only to the third grade, and most pupils dropped out by the end of first grade. Grievances, strongest in the San Cristóbal and Lacandon Jungle areas, were taken up by a small leftist guerrilla band led by a man known only as "Subcomandante Marcos". This small band, called the Zapatista Army of National Liberation (Ejército Zapatista de Liberación Nacional, EZLN), came to the world's attention when, on January 1, 1994 (the day the NAFTA treaty went into effect), EZLN forces occupied and took over the towns of San Cristóbal de las Casas, Las Margaritas, Altamirano, Ocosingo and three others. They read their proclamation of revolt to the world and then laid siege to a nearby military base, capturing weapons and releasing many prisoners from the jails. This action followed previous protests in the state in opposition to neoliberal economic policies. Although it has been estimated as having no more than 300 armed guerrilla members, the EZLN paralyzed the Mexican government, which balked at the political risks of direct confrontation. The major reason for this was that the rebellion caught the attention of the national and world press, as Marcos made full use of the then-new Internet to get the group's message out, putting the spotlight on indigenous issues in Mexico in general. Furthermore, the opposition press in Mexico City, especially La Jornada, actively supported the rebels. These factors encouraged the rebellion to become a national issue. Many blamed the unrest on infiltration of leftists among the large Central American refugee population in Chiapas, and the rebellion opened up splits in the countryside between those supporting and opposing the EZLN. Zapatista sympathizers have included mostly Protestants and Word of God Catholics, opposing those "traditionalist" Catholics who practiced a syncretic form of Catholicism and indigenous beliefs. This split had existed in Chiapas since the 1970s, with the latter group supported by the caciques and others in the traditional power structure. Protestants and Word of God Catholics (allied directly with the bishopric in San Cristóbal) tended to oppose traditional power structures. The Bishop of Chiapas, Samuel Ruiz, and the Diocese of Chiapas reacted by offering to mediate between the rebels and authorities. However, because of this diocese's activism since the 1960s, authorities accused the clergy of being involved with the rebels. There was some ambiguity about the relationship between Ruiz and Marcos, and it was a constant feature of news coverage, with many in official circles using it to discredit Ruiz.
Eventually, the activities of the Zapatistas began to worry the Roman Catholic Church in general and to upstage the diocese's attempts to re-establish itself among Chiapan indigenous communities against Protestant evangelization. This would lead to a breach between the Church and the Zapatistas. The Zapatista story remained in headlines for a number of years. One reason for this was the December 1997 massacre of forty-five unarmed Tzotzil peasants, mostly women and children, in the Zapatista-controlled village of Acteal in the Chenalhó municipality just north of San Cristóbal. This allowed many media outlets in Mexico to step up their criticisms of the government. Despite this, the armed conflict was brief, mostly because the Zapatistas, unlike many other guerrilla movements, did not try to gain traditional political power. The movement focused more on trying to sway public opinion in order to obtain concessions from the government. This has linked the Zapatistas to other indigenous and identity-politics movements that arose in the late 20th century. The main concession that the group received was the San Andrés Accords (1996), also known as the Law on Indian Rights and Culture. The Accords appear to grant certain indigenous zones autonomy, but this conflicts with the Mexican constitution, so their legitimacy has been questioned. Zapatista declarations since the mid-1990s have called for a new constitution. The government has not found a solution to this problem. The revolt also pressed the government to institute anti-poverty programs such as "Progresa" (later called "Oportunidades") and the "Puebla-Panama Plan", which aimed to increase trade between southern Mexico and Central America. As of the first decade of the 2000s, the Zapatista movement remained popular in many indigenous communities. The uprising gave indigenous peoples a more active role in the state's politics. However, it did not solve the economic issues that many peasant farmers face, especially the lack of land to cultivate. This problem has been at crisis proportions since the 1970s, and the government's reaction has been to encourage peasant farmers, mostly indigenous, to migrate into the sparsely populated Lacandon Jungle, a trend since earlier in the century. From the 1970s on, some 100,000 people set up homes in this rainforest area, with many being recognized as ejidos, or communal land-holding organizations. These migrants included Tzeltals, Tojolabals, Ch'ols and mestizos, mostly farming corn and beans and raising livestock. However, the government changed policies in the late 1980s with the establishment of the Montes Azules Biosphere Reserve, as much of the Lacandon Jungle had been destroyed or severely damaged. While armed resistance has wound down, the Zapatistas have remained a strong political force, especially around San Cristóbal and the Lacandon Jungle, their traditional bases. Since the Accords, they have shifted their focus to gaining autonomy for the communities they control. Since the 1994 uprising, migration into the Lacandon Jungle has significantly increased, involving illegal settlements and cutting in the protected biosphere reserve. The Zapatistas support these actions as part of indigenous rights, but that has put them in conflict with international environmental groups and with the indigenous inhabitants of the rainforest area, the Lacandons.
Environmental groups state that the settlements pose grave risks to what remains of the Lacandon, while the Zapatistas accuse them of being fronts for the government, which wants to open the rainforest up to multinational corporations. Added to this is the possibility that significant oil and gas deposits exist under this area. The Zapatista movement has had some successes. The agricultural sector of the economy now favors ejidos and other commonly owned land. There have been some other gains economically as well. In the last decades of the 20th century, Chiapas' traditional agricultural economy diversified somewhat with the construction of more roads and better infrastructure by the federal and state governments. Tourism has become important in some areas of the state, especially in San Cristóbal de las Casas and Palenque. Its economy is important to Mexico as a whole as well, producing coffee, corn, cacao, tobacco, sugar, fruit, vegetables and honey for export. It is also a key state for the nation's petrochemical and hydroelectric industries. A significant percentage of PEMEX's drilling and refining takes place in Chiapas and Tabasco, and Chiapas produces fifty-five percent of Mexico's hydroelectric energy. However, Chiapas remains one of the poorest states in Mexico. Ninety-four of its 111 municipalities have a large percentage of the population living in poverty. In areas such as Ocosingo, Altamirano and Las Margaritas, the towns where the Zapatistas first came into prominence in 1994, 48% of the adults were illiterate. Chiapas is still considered isolated and distant from the rest of Mexico, both culturally and geographically. It has significantly underdeveloped infrastructure compared to the rest of the country, and its significant indigenous population with isolationist tendencies keeps the state culturally distinct. Cultural stratification, neglect and lack of investment by the Mexican federal government have exacerbated this problem. Geography Political geography Chiapas is located in the southeast of Mexico, bordering the states of Tabasco, Veracruz and Oaxaca, with the Pacific Ocean to the south and Guatemala to the east. It has a territory of 74,415 km2, making it the eighth-largest state in Mexico. The state consists of 118 municipalities organized into nine political regions called Center, Altos, Fronteriza, Frailesca, Norte, Selva, Sierra, Soconusco and Istmo-Costa. There are 18 cities, twelve towns (villas) and 111 pueblos (villages). Major cities include Tuxtla Gutiérrez, San Cristóbal de las Casas, Tapachula, Palenque, Comitán, and Chiapa de Corzo. Geographical regions The state has a complex geography with seven distinct regions according to the Mullerried classification system. These include the Pacific Coast Plains, the Sierra Madre de Chiapas, the Central Depression, the Central Highlands, the Eastern Mountains, the Northern Mountains and the Gulf Coast Plains. The Pacific Coast Plains region is a strip of land parallel to the ocean. It is composed mostly of sediment from the mountains that border it on the northern side. It is uniformly flat and stretches from the Bernal Mountain south to Tonalá. It has deep salty soils due to its proximity to the sea. Its natural vegetation is mostly deciduous rainforest, although most of it has been converted to pasture for cattle and fields for crops. It has numerous estuaries with mangroves and other aquatic vegetation.
The Sierra Madre de Chiapas runs parallel to the Pacific coastline of the state, northwest to southeast, as a continuation of the Sierra Madre del Sur. This area has the highest altitudes in Chiapas, including the Tacaná Volcano, which rises above sea level. Most of these mountains are volcanic in origin, although the nucleus is metamorphic rock. It has a wide range of climates but little arable land. It is mostly covered in middle-altitude rainforest, high-altitude rainforest, and forests of oaks and pines. The mountains partially block rain clouds from the Pacific, a process known as orographic lift, which creates a particularly rich coastal region called the Soconusco. The main commercial center of the sierra is the town of Motozintla, also near the Guatemalan border. The Central Depression is in the center of the state. It is an extensive, semi-flat area bordered by the Sierra Madre de Chiapas, the Central Highlands and the Northern Mountains. Within the depression there are a number of distinct valleys. The climate here can be very hot and humid in the summer, especially due to the large volume of rain received in July and August. The original vegetation was lowland deciduous forest with some rainforest at middle altitudes and some oaks at higher elevations. The Central Highlands, also referred to as Los Altos, are mountains oriented from northwest to southeast with altitudes ranging from above sea level. The western highlands are displaced faults, while the eastern highlands are mainly folds of sedimentary formations, chiefly limestone, shale and sandstone. These mountains, along with the Sierra Madre de Chiapas, become the Cuchumatanes where they extend over the border into Guatemala. The topography is mountainous, with many narrow valleys and karst formations called uvalas or poljés, depending on the size. Most of the rock is limestone, allowing for a number of formations such as caves and sinkholes. There are also some isolated pockets of volcanic rock, with the tallest peaks being the Tzontehuitz and Huitepec volcanoes. There are no significant surface water systems, as they are almost all underground. The original vegetation was forest of oak and pine, but these have been heavily damaged. The highlands climate in the Koeppen modified classification system for Mexico is humid temperate C(m) and subhumid temperate C(w2)(w). This climate exhibits a summer rainy season and a dry winter, with possibilities of frost from December to March. The Central Highlands have been the population center of Chiapas since the Conquest. European epidemics were hindered by the tierra fría climate, allowing the indigenous peoples in the highlands to retain their large numbers. The Eastern Mountains (Montañas del Oriente) are in the east of the state, formed by various parallel mountain chains mostly made of limestone and sandstone. Their altitude varies from . This area receives moisture from the Gulf of Mexico with abundant rainfall and exuberant vegetation, which creates the Lacandon Jungle, one of the most important rainforests in Mexico. The Northern Mountains (Montañas del Norte) are in the north of the state. They separate the flatlands of the Gulf Coast Plains from the Central Depression. Their rock is mostly limestone. These mountains also receive large amounts of rainfall, with moisture from the Gulf of Mexico giving them a mostly hot and humid climate with rains year round. In the highest elevations, around , temperatures are somewhat cooler and there is a winter season.
The terrain is rugged, with small valleys whose natural vegetation is high-altitude rainforest. The Gulf Coast Plains (Llanura Costera del Golfo) stretch into Chiapas from the state of Tabasco, which gives the region its alternate name of the Tabasqueña Plains. These plains are found only in the extreme north of the state. The terrain is flat and prone to flooding during the rainy season, as it was built by sediments deposited by rivers and streams heading to the Gulf. Lacandon Jungle The Lacandon Jungle is situated in northeastern Chiapas, centered on a series of canyon-like valleys called the Cañadas, between smaller mountain ridges oriented from northwest to southeast. The ecosystem covers an area of approximately , extending from Chiapas into northern Guatemala and the southern Yucatán Peninsula and into Belize. This area contains as much as 25% of Mexico's total species diversity, most of which has not been researched. It has a predominantly hot and humid climate (Am w" i g), with most rain falling from summer to part of fall and an average of between 2,300 and 2,600 mm per year. There is a short dry season from March to May. The predominant wild vegetation is perennial high rainforest. The Lacandon comprises a biosphere reserve (Montes Azules); four natural protected areas (Bonampak, Yaxchilan, Chan Kin, and Lacantum); and the communal reserve (La Cojolita), which functions as a biological corridor with the Petén area of Guatemala. Flowing within the rainforest is the Usumacinta River, considered to be one of the largest rivers in Mexico and the seventh largest in the world based on volume of water. During the 20th century, the Lacandon has had a dramatic increase in population and, along with it, severe deforestation. The population of the municipalities in this area (Altamirano, Las Margaritas, Ocosingo and Palenque) has risen from 11,000 in 1920 to over 376,000 in 2000. Migrants include Ch'ol, Tzeltal, Tzotzil and Tojolabal indigenous peoples along with mestizos, Guatemalan refugees and others. Most of these migrants are peasant farmers who cut forest to plant crops. However, the soil of this area cannot support annual crop farming for more than three or four harvests. The increase in population and the need to move on to new lands have pitted migrants against each other, against the native Lacandon people, and against the various ecological reserves. It is estimated that only ten percent of the original Lacandon rainforest in Mexico remains, with the rest strip-mined, logged and farmed. It once stretched over a large part of eastern Chiapas, but all that remains is along the northern edge of the Guatemalan border. Of this remaining portion, Mexico is losing over five percent each year. The best preserved portion of the Lacandon is within the Montes Azules Biosphere Reserve. It is centered on what was a commercial logging grant by the Porfirio Díaz government, which the government later nationalized. However, this nationalization and conversion into a reserve have made it one of the most contested lands in Chiapas, with already existing ejidos and other settlements within the park, along with new arrivals, squatting on the land. Soconusco The Soconusco region encompasses a coastal plain and a mountain range with elevations of up to above sea level paralleling the Pacific Coast. The highest peak in Chiapas is the Tacaná Volcano at above sea level. In accordance with an 1882 treaty, the dividing line between Mexico and Guatemala runs right over the summit of this volcano.
The climate is tropical, with a number of rivers and evergreen forests in the mountains. This is Chiapas' major coffee-producing area, as it has the best soils and climate for coffee. Before the arrival of the Spanish, this area was the principal source of cocoa seeds for the Aztec empire, which used them as currency, and of the highly prized quetzal feathers used by the nobility. It would become the first area to produce coffee, introduced by an Italian entrepreneur on the La Chacara farm. Coffee is cultivated on the slopes of these mountains, mostly between asl. Mexico produces about 4 million sacks of green coffee each year, making it fifth in the world behind Brazil, Colombia, Indonesia and Vietnam. Most producers are small, with plots of land under . From November to January, the annual crop is harvested and processed, employing thousands of seasonal workers. Lately, a number of coffee haciendas have been developing tourism infrastructure as well. Environment and protected areas Chiapas is located in the tropical belt of the planet, but the climate is moderated in many areas by altitude. For this reason, there are hot, semi-hot, temperate and even cold climates. Some areas have abundant rainfall year-round, while others receive most of their rain between May and October, with a dry season from November to April. The mountain areas affect wind and moisture flow over the state, concentrating moisture in certain areas. They are also responsible for some cloud-covered rainforest areas in the Sierra Madre. Chiapas' rainforests are home to thousands of animals and plants, some of which cannot be found anywhere else in the world. Natural vegetation varies from lowland to highland tropical forest, with pine and oak forests in the highest altitudes and plains areas with some grassland. Chiapas is ranked second in forest resources in Mexico, with valued woods such as pine, cypress, Liquidambar, oak, cedar, mahogany and more. The Lacandon Jungle is one of the last major tropical rainforests in the northern hemisphere, with an extension of . It contains about sixty percent of Mexico's tropical tree species, 3,500 species of plants, 1,157 species of invertebrates and over 500 species of vertebrates. Chiapas has one of the greatest diversities of wildlife in the Americas. There are more than 100 species of amphibians, 700 species of birds, fifty species of mammals and just over 200 species of reptiles. In the hot lowlands, there are armadillos, monkeys, pelicans, wild boar, jaguars, crocodiles, iguanas and many others. In the temperate regions there are species such as bobcats, salamanders, a large red lizard (Abronia lythrochila), weasels, opossums, deer, ocelots and bats. The coastal areas have large quantities of fish, turtles and crustaceans, with many species in danger of extinction or endangered, as they are endemic only to this area. The total biodiversity of the state is estimated at over 50,000 species of plants and animals. The diversity of species is not limited to the hot lowlands. The higher altitudes also have mesophile forests and oak and pine forests in the Los Altos, Northern Mountains and Sierra Madre, and there are extensive estuaries and mangrove wetlands along the coast. Chiapas has about thirty percent of Mexico's fresh water resources. The Sierra Madre divides them into those that flow to the Pacific and those that flow to the Gulf of Mexico. Most of the former are short rivers and streams; most of the longer ones flow to the Gulf.
Most Pacific-side rivers do not drain directly into the ocean but into lagoons and estuaries. The two largest rivers are the Grijalva and the Usumacinta, both part of the same river system. The Grijalva has four dams built on it: the Belisario Domínguez (La Angostura), Manuel Moreno Torres (Chicoasén), Nezahualcóyotl (Malpaso), and Ángel Albino Corzo (Peñitas). The Usumacinta divides the state from Guatemala and is the longest river in Central America. In total, the state has of surface waters, of coastline, control of of ocean, of estuaries and ten lake systems. Laguna Miramar is a lake in the Montes Azules reserve and the largest in the Lacandon Jungle at 40 km in diameter. The color of its waters varies from indigo to emerald green, and in ancient times there were settlements on its islands and in its caves on the shoreline. The Catazajá Lake is 28 km north of the city of Palenque. It is formed by rainwater captured as it makes its way to the Usumacinta River. It contains wildlife such as manatees and iguanas and is surrounded by rainforest. Fishing on this lake is an ancient tradition, and the lake hosts an annual bass fishing tournament. The Welib Já Waterfall is located on the road between Palenque and Bonampak. The state has thirty-six protected areas at the state and federal levels, along with 67 areas protected by various municipalities. The Sumidero Canyon National Park was decreed in 1980 with an extension of . It extends over two of the regions of the state, the Central Depression and the Central Highlands, over the municipalities of Tuxtla Gutiérrez, Nuevo Usumacinta, Chiapa de Corzo and San Fernando. The canyon has steep, vertical sides that rise up to 1,000 meters above the river below, with mostly tropical
and Lexington Avenue in Midtown Manhattan. At , it is the tallest brick building in the world with a steel framework, and it was the world's tallest building for 11 months after its completion in 1930. The Chrysler is the 11th-tallest building in the city, tied with The New York Times Building. Originally a project of real estate developer and former New York State Senator William H. Reynolds, the building was constructed by Walter Chrysler, the head of the Chrysler Corporation. The construction of the Chrysler Building, an early skyscraper, was characterized by a competition with 40 Wall Street and the Empire State Building to become the world's tallest building. Although the Chrysler Building was built and designed specifically for the car manufacturer, the corporation did not pay for its construction and never owned it; Walter Chrysler decided to fund the entire cost personally so his children could inherit it. An annex was completed in 1952, and the building was sold by the Chrysler family the next year, with numerous subsequent owners. When the Chrysler Building opened, there were mixed reviews of the building's design, ranging from views of it as inane and unoriginal to the idea that it was modernist and iconic. Perceptions of the building have slowly evolved into its now being seen as a paragon of the Art Deco architectural style, and in 2007 it was ranked ninth on the List of America's Favorite Architecture by the American Institute of Architects. The building was designated a New York City landmark in 1978, and was added to the National Register of Historic Places as a National Historic Landmark in 1976. Site The Chrysler Building is on the eastern side of Lexington Avenue between 42nd and 43rd streets. The land was donated to The Cooper Union for the Advancement of Science and Art in 1902. The site is roughly a trapezoid with a frontage on Lexington Avenue; a frontage on 42nd Street; and a frontage on 43rd Street. The site bordered the old Boston Post Road, which predated, and ran aslant of, the Manhattan street grid established by the Commissioners' Plan of 1811. As a result, the east side of the building's base is similarly aslant. The Grand Hyatt New York hotel and the Graybar Building are across Lexington Avenue, while the Socony–Mobil Building is across 42nd Street. In addition, the Chanin Building is to the southwest, diagonally across Lexington Avenue and 42nd Street. History Context In the mid-1920s, New York's metropolitan area surpassed London's as the world's most populous metropolitan area, and its population exceeded ten million by the early 1930s. The era was characterized by profound social and technological changes. Consumer goods such as radio, cinema, and the automobile became widespread. In 1927, Walter Chrysler's automotive company, the Chrysler Corporation, became the third-largest car manufacturer in the United States, behind Ford and General Motors. The following year, Chrysler was named Time magazine's "Person of the Year". The economic boom of the 1920s and speculation in the real estate market fostered a wave of new skyscraper projects in New York City. The Chrysler Building was built as part of an ongoing building boom that resulted in the city having the world's tallest building from 1908 to 1974. Following the end of World War I, European and American architects came to see simplified design as the epitome of the modern era, and Art Deco skyscrapers came to symbolize progress, innovation, and modernity.
The 1916 Zoning Resolution restricted the height that street-side exterior walls of New York City buildings could rise before needing to be set back from the street. This led to the construction of Art Deco structures in New York City with significant setbacks, large volumes, and striking silhouettes that were often elaborately decorated. Art Deco buildings were constructed for only a short period of time, but because that period was during the city's late-1920s real estate boom, the numerous skyscrapers built in the Art Deco style predominated in the city skyline, giving it the romantic quality seen in films and plays. The Chrysler Building project was shaped by these circumstances. Development Planning Originally, the Chrysler Building was to be the Reynolds Building, a project of real estate developer and former New York State Senator William H. Reynolds. Prior to his involvement in planning the building, Reynolds was best known for developing Coney Island's Dreamland amusement park. When the amusement park was destroyed by fire in 1911, Reynolds turned his attention to Manhattan real estate, where he set out to build the tallest building in the world. In 1921, Reynolds rented a large plot of land at the corner of Lexington Avenue and 42nd Street with the intention of building a tall building on the site. In 1927, after several years of delays, Reynolds hired the architect William Van Alen to design a forty-story building there. Van Alen's original design featured many Modernist stylistic elements, with glazed, curved windows at the corners. Van Alen was respected in his field for his work on the Albemarle Building at Broadway and 24th Street, which he had designed in collaboration with his partner H. Craig Severance. Van Alen and Severance complemented each other, with Van Alen being an original, imaginative architect and Severance being a shrewd businessperson who handled the firm's finances. However, the relationship between them became tense over disagreements on how best to run the firm. The breaking point came after a 1924 article in the Architectural Review praised the Albemarle Building's design, crediting Van Alen as the firm's designer while ignoring Severance's role altogether. The architects' partnership dissolved acrimoniously several months later, with lawsuits over the firm's clients and assets lasting over a year. The rivalry ended up being decisive for the design of the future Chrysler Building, since Severance's more traditional architectural style would otherwise have restrained Van Alen's more modern outlook. Refinement of designs By February 2, 1928, the proposed building's height had been increased to 54 stories, which would have made it the tallest building in Midtown. The proposal was changed again two weeks later, with official plans for a 63-story building. A little more than a week after that, the plan was changed for the third time, with two additional stories added. By this time, 42nd Street and Lexington Avenue were both hubs for construction activity, due to the removal of the Third Avenue Elevated's 42nd Street spur, which was seen as a blight on the area. The adjacent 56-story Chanin Building was also under construction. Because of the elevated spur's removal, real estate speculators believed that Lexington Avenue would become the "Broadway of the East Side", causing a ripple effect that would spur developments farther east. In April 1928, Reynolds signed a 67-year lease for the plot and finalized the details of his ambitious project.
Van Alen's original design for the skyscraper called for a base with triple-height first-floor showroom windows, topped by 12 stories with glass-wrapped corners, to create the impression that the tower was floating in mid-air. Reynolds's main contribution to the building's design was his insistence that it have a metallic crown, despite Van Alen's initial opposition; the metal-and-crystal crown would have looked like "a jeweled sphere" at night. Originally, the skyscraper would have risen , with 67 floors. These plans were approved in June 1928. Van Alen's drawings were unveiled the following August and published in a magazine run by the American Institute of Architects (AIA). Eventually, this design would prove too advanced and expensive for Reynolds. He instead devised an alternate design for the Reynolds Building, which was published in August 1928. The new design was much more conservative, with an Italianate dome that a critic compared to Governor Al Smith's bowler hat, and a brick arrangement on the upper floors that simulated windows in the corners, a detail that remains in the current Chrysler Building. This design almost exactly reflected the shape, setbacks, and window layout of the current building, but with a different dome. Final plans and start of construction With the design complete, groundbreaking for the Reynolds Building took place on September 19, 1928, but Reynolds did not have the means to carry on construction. Reynolds sold the plot, lease, plans, and architect's services to Walter Chrysler for $2 million on October 15, 1928. That same day, the Goodwin Construction Company began demolition of what had been built. A contract was awarded on October 28, and demolition was completed on November 9. Chrysler's initial plans for the building were similar to Reynolds's, but with the 808-foot building having 68 floors instead of 67. The plans entailed a ground-floor pedestrian arcade; a facade of stone below the fifth floor and brick-and-terracotta above; and a three-story bronze-and-glass "observation dome" at the top. However, Chrysler wanted a more progressive design, and he worked with Van Alen to redesign the skyscraper to be tall. At the new height, Chrysler's building would be taller than the Woolworth Building, a building in lower Manhattan that was the world's tallest at the time. At one point, Chrysler had requested that Van Alen shorten the design by ten floors, but he withdrew that request after realizing that the increased height would also result in increased publicity. From late 1928 to early 1929, modifications to the design of the dome continued. In March 1929, the press published details of an "artistic dome" that had the shape of a giant thirty-pointed star, which would be crowned by a sculpture five meters high. The final design of the dome included several arches and triangular windows. Lower down, the design was affected by Walter Chrysler's intention to make the building the Chrysler Corporation's headquarters, and as such, various architectural details were modeled after Chrysler automobile products, such as the hood ornaments of the Plymouth. The building's gargoyles on the 31st floor and the eagles on the 61st floor were created to represent flight and to embody the machine age of the time. Even the topmost needle was built using a process similar to one Chrysler used to manufacture his cars, with precise "hand craftsmanship".
In his autobiography, Chrysler says he suggested that his building be taller than the Eiffel Tower. Meanwhile, excavation of the new building's foundation began in mid-November 1928 and was completed in mid-January 1929, when bedrock was reached. A total of of rock and of soil were excavated for the foundation, equal to 63% of the future building's weight. Construction of the building proper began on January 21, 1929. The Carnegie Steel Company provided the steel beams, the first of which was installed on March 27, and by April 9 the first upright beams had been set into place. The steel structure was "a few floors" high by June 1929, 35 floors high by early August, and completed by September. Despite a frantic steelwork construction pace of about four floors per week, no workers died during the construction of the skyscraper's steelwork. Chrysler lauded this achievement, saying, "It is the first time that any structure in the world has reached such a height, yet the entire steel construction was accomplished without loss of life". In total, 391,881 rivets were used, and approximately 3,826,000 bricks were manually laid to create the non-loadbearing walls of the skyscraper. Walter Chrysler personally financed the construction with his income from his car company. The Chrysler Building's height officially surpassed the Woolworth Building's on October 16, 1929, making it the world's tallest building. Competition for "world's tallest building" title The same year that the Chrysler Building's construction started, banker George L. Ohrstrom proposed the construction of a 47-story office building at 40 Wall Street downtown. Shortly thereafter, Ohrstrom modified his project to have 60 floors, but it was still shorter than the Woolworth Building and the 808-foot Chrysler Building project as announced in 1928. H. Craig Severance, Van Alen's former partner and the architect of 40 Wall Street, increased 40 Wall's height to with 62 floors in April of that year. It would thus exceed the Woolworth's height by and the Chrysler's by . 40 Wall Street and the Chrysler Building started competing for the distinction of "world's tallest building". The Empire State Building, on 34th Street and Fifth Avenue, entered the competition in 1929. The race was defined by at least five other proposals, although only the Empire State Building would survive the Wall Street Crash of 1929. The "Race into the Sky", as popular media called it at the time, was representative of the country's optimism in the 1920s, which helped fuel the building boom in major cities. The 40 Wall Street tower was revised from to 925 feet in April 1929, which would make it the world's tallest. Severance then publicly claimed the title of the world's tallest building. Construction of 40 Wall Street began in May 1929 at a frantic pace, and it was completed twelve months later. In response, Van Alen obtained permission for a spire and had it secretly constructed inside the frame of his building. The spire was delivered to the site in four different sections. On October 23, 1929, one week after surpassing the Woolworth Building's height and one day before the catastrophic Wall Street Crash of 1929 started, the spire was assembled. According to one account, "the bottom section of the spire was hoisted to the top of the building's dome and lowered into the 66th floor of the building." Then, within 90 minutes, the rest of the spire's pieces were raised and riveted in sequence, helping raise the tower's height to 1,046 feet.
Van Alen, who witnessed the process from the street along with its engineers and Walter Chrysler, compared the experience to watching a butterfly leaving its cocoon. In "The Structure and Metal Work of the Chrysler Building", an article published in the October 1930 edition of Architectural Forum, Van Alen explained the design and construction of the crown and needle. The steel tip brought the Chrysler Building to a height of , greatly exceeding 40 Wall Street's height. However, contemporary news media did not write of the spire's erection, nor were there any press releases celebrating it. Even the New York Herald Tribune, which had virtually continuous coverage of the tower's construction, did not report on the spire's installation until days after the spire had been raised. Having ordered Van Alen to change the Chrysler's original roof from a stubby Romanesque dome to the narrow steel spire, Chrysler realized that his tower's height would exceed the Empire State Building's as well. However, the Empire State's developer, John J. Raskob, reviewed the plans and realized that he could add five more floors and a spire of his own to his 80-story building, and he subsequently acquired the nearby plots needed to support that building's height extension. Two days later, the Empire State Building's co-developer, former Governor Al Smith, announced the updated plans for that skyscraper, with an observation deck on the 86th-floor roof at a height of , higher than the Chrysler's 71st-floor observation deck at . Completion In January 1930, it was announced that the Chrysler Corporation would maintain offices in the Chrysler Building during Automobile Show Week, and the first leases by outside tenants were announced in April 1930, before the building was officially completed. The building was formally opened on May 27, 1930, in a ceremony that coincided with the 42nd Street Property Owners and Merchants Association's meeting that year. In the lobby of the building, a bronze plaque that read "in recognition of Mr. Chrysler's contribution to civic advancement" was unveiled. Former Governor Smith, former Assemblyman Martin G. McCue, and 42nd Street Association president George W. Sweeney were among those in attendance.
The Chrysler Building's height officially surpassed the Woolworth's on October 16, 1929, thereby becoming the world's tallest structure.

Competition for "world's tallest building" title

The same year that the Chrysler Building's construction started, banker George L. Ohrstrom proposed the construction of a 47-story office building at 40 Wall Street downtown. Shortly thereafter, Ohrstrom modified his project to have 60 floors, but it was still shorter than the Woolworth Building and the 808-foot Chrysler Building project as announced in 1928. H. Craig Severance, Van Alen's former partner and the architect of 40 Wall Street, increased 40 Wall's height to with 62 floors in April of that year. It would thus exceed the Woolworth's height by and the Chrysler's by . 40 Wall Street and the Chrysler Building started competing for the distinction of "world's tallest building". The Empire State Building, on 34th Street and Fifth Avenue, entered the competition in 1929. The race was defined by at least five other proposals, although only the Empire State Building would survive the Wall Street Crash of 1929. The "Race into the Sky", as popular media called it at the time, was representative of the country's optimism in the 1920s, which helped fuel the building boom in major cities.

The 40 Wall Street tower was revised from to 925 feet in April 1929, which would make it the world's tallest. Severance then publicly claimed the title of the world's tallest building. Construction of 40 Wall Street began in May 1929 at a frantic pace, and it was completed twelve months later. In response, Van Alen obtained permission for a spire and had it secretly constructed inside the frame of his building. The spire was delivered to the site in four different sections. On October 23, 1929, one week after surpassing the Woolworth Building's height and one day before the catastrophic Wall Street Crash of 1929 began, the spire was assembled. According to one account, "the bottom section of the spire was hoisted to the top of the building's dome and lowered into the 66th floor of the building." Then, within 90 minutes, the rest of the spire's pieces were raised and riveted in sequence, bringing the tower's height to 1,046 feet.

Van Alen, who witnessed the process from the street along with the building's engineers and Walter Chrysler, compared the experience to watching a butterfly leave its cocoon. In "The Structure and Metal Work of the Chrysler Building", an article published in the October 1930 edition of Architectural Forum, Van Alen explained the design and construction of the crown and needle. The steel tip brought the Chrysler Building to a height of , greatly exceeding 40 Wall Street's height. However, contemporary news media did not write of the spire's erection, nor were there any press releases celebrating it. Even the New York Herald Tribune, which had provided virtually continuous coverage of the tower's construction, did not report on the installation until days after the spire had been raised.

Chrysler realized that his tower's height would exceed the Empire State Building's as well, having ordered Van Alen to change the Chrysler's original roof from a stubby Romanesque dome to the narrow steel spire. However, the Empire State's developer John J. Raskob reviewed the plans and realized that he could add five more floors and a spire of his own to his 80-story building, and he subsequently acquired the nearby plots needed to support that building's height extension.
Two days later, the Empire State Building's co-developer, former Governor Al Smith, announced the updated plans for that skyscraper, with an observation deck on the 86th-floor roof at a height of , higher than the Chrysler's 71st-floor observation deck at .

Completion

In January 1930, it was announced that the Chrysler Corporation would maintain offices in the Chrysler Building during Automobile Show Week, and the first leases by outside tenants were announced in April 1930, before the building was officially completed. The building was formally opened on May 27, 1930, in a ceremony that coincided with the 42nd Street Property Owners and Merchants Association's meeting that year. In the lobby of the building, a bronze plaque that read "in recognition of Mr. Chrysler's contribution to civic advancement" was unveiled. Former Governor Smith, former Assemblyman Martin G. McCue, and 42nd Street Association president George W. Sweeney were among those in attendance. By June, it was reported that 65% of the available space had been leased. By August, the building was declared complete, but the New York City Department of Construction did not mark it as finished until February 1932.

The added height of the spire allowed the Chrysler Building to surpass 40 Wall Street as the tallest building in the world and the Eiffel Tower as the tallest structure. The Chrysler Building was thus the first man-made structure to be taller than ; and as one newspaper noted, the tower was also taller than the highest points of five states. The Chrysler Building was appraised at $14 million but was exempt from city taxes under an 1859 law that gave tax exemptions to sites owned by the Cooper Union. The city had attempted to repeal the tax exemption, but Cooper Union had opposed that measure. Because the Chrysler Building retains the tax exemption, it has paid Cooper Union for the use of its land since opening.

Van Alen's satisfaction at these accomplishments was likely muted by Walter Chrysler's later refusal to pay the balance of his architectural fee. Chrysler alleged that Van Alen had received bribes from suppliers; moreover, Van Alen had not signed any contracts with Walter Chrysler when he took over the project. Van Alen sued, and the courts ruled in his favor, requiring Chrysler to pay Van Alen $840,000, or 6% of the total budget of the building. However, the lawsuit against Chrysler markedly diminished Van Alen's reputation as an architect, which, along with the effects of the Great Depression and negative criticism, ended up ruining his career. Van Alen ended his career as a professor of sculpture at the nearby Beaux-Arts Institute of Design and died in 1954. According to author Neal Bascomb, "The Chrysler Building was his greatest accomplishment, and the one that guaranteed his obscurity."

The Chrysler Building's distinction as the world's tallest building was short-lived. John Raskob realized that the 1,050-foot Empire State Building would only be taller than the Chrysler Building, and he was afraid that Walter Chrysler might try to "pull a trick like hiding a rod in the spire and then sticking it up at the last minute." Another revision brought the Empire State Building's roof to , making it by far the tallest building in the world when it opened on May 1, 1931. However, the Chrysler Building is still the world's tallest steel-supported brick building.
The Chrysler Building fared better commercially than the Empire State Building did: by 1935, the Chrysler had already rented 70% of its floor area, while the Empire State had only leased 23% of its area and was popularly derided as the "Empty State Building". The Chrysler Corporation was not involved in the construction or ownership of the Chrysler Building, although the building was built and designed for the corporation. It was a project of Walter P. Chrysler for his children; in his autobiography, Chrysler wrote that he wanted to erect the building "so that his sons would have something to be responsible for".

Use

20th century

The Chrysler family inherited the property after the death of Walter Chrysler in 1940, with the property being under the ownership of the W.P. Chrysler Building Corporation. In 1944, the corporation filed plans to build a 38-story annex to the east of the building, at 666 Third Avenue. In 1949, this was revised to a 32-story annex costing $9 million. The annex building, designed by Reinhard, Hofmeister & Walquist, had a facade similar to that of the original Chrysler Building; the stone used for the original building was no longer manufactured and had to be specially replicated. Construction started on the annex in June 1950, and the first tenants started leasing in June 1951. The building itself was completed by 1952, and a sky bridge connecting the two buildings' seventh floors was built in 1959.

The family sold the building in 1953 to William Zeckendorf for its assessed price of $18 million. The 1953 deal included the annex and the nearby Graybar Building, which, along with the Chrysler Building, sold for a combined $52 million. The new owners were Zeckendorf's company Webb and Knapp, which held a 75% interest in the sale, and the Graysler Corporation, which held a 25% stake. At the time, it was reported to be the largest real estate sale in New York City's history. In 1957, the Chrysler Building, its annex, and the Graybar Building were sold for $66 million to Lawrence Wien's realty syndicate, setting a new record for the largest sale in the city.

In 1960, the complex was purchased by Sol Goldman and Alex DiLorenzo, who received a mortgage from the Massachusetts Mutual Life Insurance Company. In 1961, the building's stainless steel elements, including the needle, crown, gargoyles, and entrance doors, were polished for the first time. A group of ten workers steam-cleaned the facade below the 30th floor and manually cleaned the portion of the tower above it, at a cost of about $200,000.

Massachusetts Mutual obtained outright ownership in 1975 after Goldman and DiLorenzo defaulted on the mortgage, purchasing the building for $35 million. In 1978, the company devised plans to renovate the facade, heating, ventilation, air-conditioning, elevators, lobby murals, and Cloud Club headquarters in a $23 million project, which was completed in 1979. It delegated the leasing of the building's space to the Edward S. Gordon Company, which leased of vacant space within five years. During Massachusetts Mutual's ownership of the Chrysler Building, the tower received two historic designations: it was designated a National Historic Landmark in 1976 and a New York City Landmark in 1978, although the city only landmarked the lobby and facade. Massachusetts Mutual had opposed the city landmark designation because it "would cause 'inevitable delay' in moving new tenants into the skyscraper".
At the time, the building had of vacant floor space, representing 40% of the total floor area. In September 1979, the building was sold again, this time to entrepreneur and Washington Redskins owner Jack Kent Cooke, in a deal that also transferred ownership of the Los Angeles Kings and Lakers to Jerry Buss.

The spire underwent a restoration that was completed in 1995. As part of a $1.5 million project, the joints in the now-closed observation deck were polished and the facade was restored; some damaged steel strips of the needle were replaced, and several parts of the gargoyles were re-welded together. The cleaning received the New York Landmarks Conservancy's Lucy G. Moses Preservation Award for 1997. Cooke died in 1997, and creditors moved to foreclose on the estate's unpaid fees soon after. Tishman Speyer Properties and the Travelers Insurance Group bought the Chrysler Center in 1997–1998 for about $220 million (equal to $ million in ) from a consortium of banks and the estate of Jack Kent Cooke. Tishman Speyer Properties had negotiated a 150-year lease from the Cooper Union, and the college continues to own the land under the Chrysler Building; Cooper Union's name is on the deed.

21st century

In 2001, a 75% stake in the building was sold for US$300 million (equal to $ million in ) to TMW, the German arm of an Atlanta-based investment fund. In June 2008, it was reported that the Abu Dhabi Investment Council was in negotiations to buy TMW's 75% economic interest, a 15% interest held by Tishman Speyer Properties, and a share of the Trylons retail structure next door for US$800 million. In July 2008, it was announced that the transaction had been completed, making the Abu Dhabi Investment Council the 90% owner of the building, with Tishman Speyer retaining 10%.

From 2010 to 2011, the building's energy, plumbing, and waste management systems were renovated. This resulted in a 21% decrease in the building's total energy consumption, a 64% decrease in water consumption, and an 81% rate of waste being recycled. In 2012, the building received a LEED Gold accreditation from the U.S. Green Building Council, which recognized the building's environmental sustainability and energy efficiency.

The Abu Dhabi Investment Council and Tishman Speyer put the Chrysler Building up for sale again in January 2019. It was reported in March 2019 that Aby Rosen's RFR Holding LLC, in a joint venture with the Austrian SIGNA Group, had reached an agreement to purchase the Chrysler Building at a steeply discounted price of US$150 million.

Design

The Chrysler Building is considered a leading example of Art Deco architecture. It is constructed of a steel frame in-filled with masonry, with areas of decorative metal cladding. The structure contains 3,862 exterior windows. Approximately fifty metal ornaments protrude at the building's corners on five floors, reminiscent of the gargoyles on Gothic cathedrals. The 31st floor contains gargoyles as well as replicas of the 1929 Chrysler radiator caps, and the 61st floor is adorned with eagles as a nod to America's national bird.

The Chrysler Building makes extensive use of bright "Nirosta" stainless steel, an austenitic alloy developed in Germany by Krupp; the name is a German contraction of nichtrostender Stahl, meaning "non-rusting steel". It was the first use of this "18-8 stainless steel", so called for its composition of 18% chromium and 8% nickel, in an American project. Nirosta was used in the exterior ornaments, the window frames, the crown, and the needle.
The steel was an integral part of Van Alen's design, as E.E. Thum explains: "The use of permanently bright metal was of greatest aid in the carrying of rising lines and the diminishing circular forms in the roof treatment, so as to accentuate the gradual upward swing until it literally dissolves into the sky...." Stainless steel producers used the Chrysler Building to evaluate the durability of the product in architecture. In 1929, the American Society for Testing Materials created an inspection committee to study the alloy's performance, regarding the Chrysler Building as the best location to do so; a subcommittee examined the building's panels every five years until 1960, when the inspections were canceled because the panels had shown minimal deterioration.

Form

The Chrysler Building's height and legally mandated setbacks influenced Van Alen in his design. The walls of the lowermost sixteen floors rise directly from the sidewalk property lines, except for a recess on one side that gives the building a "U"-shaped floor plan above the fourth floor. There are setbacks on floors 16, 18, 23, 28, and 31, making the building compliant with the 1916 Zoning Resolution. This gives the building the appearance of a ziggurat on one side and a U-shaped palazzo on the other. Above the 31st floor, there are no more setbacks until the 60th floor, above which the structure is funneled into a Maltese cross shape that "blends the square shaft to the finial", according to author and photographer Cervin Robinson.

The floor plans of the first sixteen floors were made as large as possible to optimize the amount of rental space nearest ground level, which was seen as most desirable. The U-shaped cut above the fourth floor served as a shaft for air flow and illumination. The area between floors 28 and 31 added "visual interest to the middle of the building, preventing it from being dominated by the heavy detail of the lower floors and the eye-catching design of the finial. They provide a base to the column of the tower, effecting a transition between the blocky lower stories and the lofty shaft."

Facade

Base and shaft

The ground-floor exterior is covered in polished black granite from Shastone, while the three floors above it are clad in white marble from Georgia. There are two main entrances, on Lexington Avenue and on 42nd Street, each three floors high, with Shastone granite surrounding each proscenium-shaped entryway. Some distance into each main entryway, there are revolving doors "beneath intricately patterned metal and glass screens", designed to embody the Art Deco tenet of amplifying the entrance's visual impact. A smaller side entrance on 43rd Street is one story high. There are storefronts consisting of large Nirosta-steel-framed windows at ground level, with office windows on the second through fourth floors.

The west and east elevations of the building contain the air shafts above the fourth floor, while the north and south sides contain the receding setbacks. Below the 16th floor, the facade is clad with white brick interrupted by white-marble bands in a manner similar to basket weaving. The windows, arranged in grids, do not
Cape Breton Island, Canada
Cape Breton Highlands, a mountain range in the north of Cape Breton Island, Canada
Cape Breton Highlands National Park
Cape Breton Regional Municipality, a regional municipality in Nova Scotia
Cape Breton—Canso, a federal electoral district

In France
Capbreton or Cap Berton, a commune of the Landes département in southwestern France

Organizations
Cape Breton Eagles, a Sydney-based ice hockey team
Cape Breton Post
Cape Breton Development Corporation
Cape Breton University

Other uses
Cape Breton and Central Nova Scotia Railway

See also
Breton (disambiguation)
Cape (disambiguation)