grown up with Raylan. The show's plots often included "hillbilly" tropes such as dimwitted and easily manipulated men, use of homemade drugs, and snake-handling revivalists. "Hillbillies" were at the center of reality television in the 21st century. Network television shows such as The Real Beverly Hillbillies, High Life, and The Simple Life displayed the "hillbilly" lifestyle for viewers in the United States. This sparked protests across the country, with rural-minded individuals gathering to fight the stereotype. The Center for Rural Strategies started a nationwide campaign stating that the stereotype was "politically incorrect". The Kentucky-based organization engaged political figures in the movement, such as Robert Byrd and Mike Huckabee, both of whom argued that such discrimination would not be tolerated against any other group in the United States and should not be tolerated against rural U.S. citizens either. A 2003 piece published by The Cincinnati Enquirer read, "In this day of hypersensitivity to diversity and political correctness, Appalachians have been a group that it is still socially acceptable to demean and joke about. ... But rural folks have spoken up and said 'enough' to the Hollywood mockers."

Hillbilly Elegy: A Memoir of a Family and Culture in Crisis (2016) is a memoir by J. D. Vance about the Appalachian values of his upbringing and their relationship to the social problems of his hometown, Middletown, Ohio. The book topped The New York Times Best Seller list in August 2016. A family of "Hill People", who are employed as migrant workers on a farm in 1952 Arkansas, have a major role in John Grisham's book A Painted House, with Grisham trying to avoid stereotypes.

Music

Hillbilly music was at one time considered an acceptable label for what is now known as country music. The label, coined in 1925 by country pianist Al Hopkins, persisted until the 1950s. The "hillbilly music" categorization covers a wide variety of musical genres including bluegrass, country, western, and gospel. Appalachian folk song existed long before the "hillbilly" label; when the commercial industry was combined with "traditional Appalachian folksong", "hillbilly music" was formed. Some argue this is a "High Culture" issue, where sophisticated individuals may see something considered "unsophisticated" as "trash". In the early 20th century, artists began to utilize the "hillbilly" label. The term gained momentum due to Ralph Peer, the recording director of OKeh Records, who heard it being used among Southerners when he went down to Virginia to record the music and from then on labeled all Southern country music as such. The York Brothers entitled one of their songs "Hillbilly Rose" and the Delmore Brothers followed with their song "Hillbilly Boogie". In 1927, the Gennett studios in Richmond, Indiana, made a recording of black fiddler Jim Booker. The recordings were labeled "made for Hillbilly" in the Gennett files and were marketed to a white audience. Columbia Records had much success with the "Hill Billies" featuring Al Hopkins and Fiddlin' Charlie Bowman. By the late 1940s, radio stations started to use the "hillbilly music" label. Originally, "hillbilly" was used to describe fiddlers and string bands, but by then it was used to describe traditional Appalachian music. Appalachians had never used this term to describe their own music. Popular songs whose style bore characteristics of both hillbilly and African American music were referred to as hillbilly boogie and rockabilly.
Elvis Presley was a prominent player of rockabilly and was known early in his career as the "Hillbilly Cat". When the Country Music Association was founded in 1958, the term hillbilly music gradually fell out of use. The music industry merged hillbilly music, Western swing, and cowboy music to form the current category, C&W (Country and Western). Some artists (notably Hank Williams) and fans were offended by the "hillbilly music" label. While the term is not used as frequently today, it is still used on occasion to refer to old-time music or bluegrass. For example, WHRB broadcasts a popular weekly radio show entitled "Hillbilly at Harvard", devoted to playing a mix of old-time music, bluegrass, and traditional country and western.

Cultural implications

The hillbilly stereotype is considered to have had a traumatizing effect on some in the Appalachian region. Feelings of shame, self-hatred, and detachment are cited as a result of "culturally transmitted traumatic stress syndrome". Appalachian scholars say that the large-scale stereotyping has rewritten Appalachian history, making Appalachians feel particularly vulnerable. "Hillbilly" has now become part of Appalachian identity, and some Appalachians feel they are constantly defending themselves against this image. The stereotyping also has political implications for the region. There is a sense of "perceived history" that prevents many political issues from receiving adequate attention, and Appalachians are often blamed for economic struggles. "Moonshiners, welfare cheats, and coal miners" are stereotypes stemming from the greater hillbilly stereotype in the region. This prejudice has been said to serve as a barrier to addressing some serious issues such as the economy and the environment. Despite the political and social difficulties associated with stereotyping, Appalachians have organized to enact change. The War on Poverty is sometimes considered an example of one effort that allowed for Appalachian community organization. Grassroots movements, protests, and strikes are common in the area, though not always successful.

Intragroup versus intergroup usage

The Springfield, Missouri Chamber of Commerce once presented dignitaries visiting the city with an "Ozark Hillbilly Medallion" and a certificate proclaiming the honoree a "hillbilly of the Ozarks". On June 7, 1952, President Harry S. Truman received the medallion after a breakfast speech at the Shrine Mosque for the 35th Division Association. Other recipients included US Army generals Omar Bradley and Matthew Ridgway, J. C. Penney, Johnny Olson, and Ralph Story. Hillbilly Days is an annual festival held in mid-April in Pikeville, Kentucky, celebrating the best of Appalachian culture. The event was begun by local Shriners as a fundraiser to support the Shriners Children's Hospital. It has grown since its beginning in 1976 and is now the second-largest festival held in the state of Kentucky. Artists and craftspeople showcase their talents and sell their works, and nationally renowned musicians as well as the best of the regional mountain musicians share six different stages located throughout the downtown area of Pikeville. Aspiring hillbillies from across the nation compete to come up with the wildest hillbilly outfit, and the event has earned the nickname "the Mardi Gras of the Mountains". Fans of "mountain music" come from around the United States to hear this annual concentrated gathering of talent; some refer to the event as the equivalent of a "Woodstock" for mountain music.
The term "Hillbilly" is used with pride by
individuals to serve content to the Internet Virtual host, allowing several DNS names to share the same IP address Host, in hardware virtualization, a machine on which a virtual machine runs Cross compiler, also called a "host", a computer platform on which software development is done for a different target computer platform UOL HOST, Universal Online's HOST webhosting service Groups or formations Host, an archaic military term for an army Host, a great number; multitude Cossack host, military formations of Eastern Europe Furious Host or the Wild Hunt, a European folk myth Religion Heavenly host, an "army" of good angels in Heaven Lord of hosts, a common epithet of the God of the Old Testament Sacramental bread, called the host or hostia, used in Christian liturgy Roles Host (radio), the presenter or announcer on a radio show Television presenter, the host or announcer on a television show Casino host Maître d'hôtel (Maître d') or head waiter of a restaurant or hotel Master of ceremonies Talk show host, a presenter of a TV or radio talk
Michel Host (1942–2021), French writer "Host", an author abbreviation in botany for Nicolaus Thomas Host Arts, entertainment, and media Fictional entities Hosts (World of Darkness), fictional characters in game Werewolf: The Forsaken Hosts, alien invaders and overlords in the TV series Colony Armies and hosts of Middle-earth warfare, fictional entities in J.R.R. Tolkien's works Avenging Host, a group of characters in Marvel Comics' Earth X series of comic books Rutan Host, fictional aliens from Doctor Who Film Host (film), a 2020 horror film directed by Rob Savage Literature Host, the third novel in the Rogue Mage series by Faith Hunter Host, a 1993 book by Peter James Hosts (novel), a 2001 book written by American author F. Paul Wilson The Hosts of Rebecca, a 1960 novel by Alexander Cordell about the Rebecca Riots Music H.O.S.T., an influential hip-hop group in Azerbaijan Host (Critters Buggin album), 1996 Host (Paradise Lost album), 1999 Computing and technology Host (network), a computer connected to the Internet or another IP-based network hosts (file), a computer file to be used to store information on where to find an internet host
of Mexico, led by Fray Martín de Valencia. Franciscan Geronimo de Mendieta claimed that Cortés's most important deed was the way he met this first group of Franciscans. The conqueror himself was said to have met the friars as they approached the capital, kneeling at the feet of the friars who had walked from the coast. This story was told by Franciscans to demonstrate Cortés's piety and humility and was a powerful message to all, including the Indians, that Cortés's earthly power was subordinate to the spiritual power of the friars. However, one of the first twelve Franciscans, Fray Toribio de Benavente Motolinia, does not mention it in his history. Cortés and the Franciscans had a particularly strong alliance in Mexico, with Franciscans seeing him as "the new Moses" for conquering Mexico and opening it to Christian evangelization. In Motolinia's 1555 response to the Dominican Bartolomé de Las Casas, he praises Cortés: "And as to those who murmur against the Marqués del Valle [Cortés], God rest him, and who try to blacken and obscure his deeds, I believe that before God their deeds are not as acceptable as those of the Marqués. Although as a human he was a sinner, he had faith and works of a good Christian, and a great desire to employ his life and property in widening and augmenting the faith of Jesus Christ, and dying for the conversion of these gentiles ... Who has loved and defended the Indians of this new world like Cortés? ... Through this captain, God opened the door for us to preach his holy gospel and it was he who caused the Indians to revere the holy sacraments and respect the ministers of the church." In Fray Bernardino de Sahagún's 1585 revision of the conquest narrative first codified as Book XII of the Florentine Codex, there are laudatory references to Cortés that do not appear in the earlier text from the indigenous perspective. Whereas Book XII of the Florentine Codex concludes with an account of Spaniards' search for gold, in Sahagún's 1585 revised account he ends with praise of Cortés for requesting that the Franciscans be sent to Mexico to convert the Indians.

Expedition to Honduras and aftermath

From 1524 to 1526, Cortés headed an expedition to Honduras, where he defeated Cristóbal de Olid, who had claimed Honduras as his own under the influence of the Governor of Cuba, Diego Velázquez. Fearing that Cuauhtémoc might head an insurrection in Mexico, he brought him with him to Honduras. In a controversial move, Cuauhtémoc was executed during the journey. Raging over Olid's treason, Cortés issued a decree to arrest Velázquez, who he was sure was behind it. This, however, only served to further estrange the Crown of Castile and the Council of Indies, both of which were already beginning to feel anxious about Cortés's rising power. Cortés's fifth letter to King Charles attempts to justify his conduct and concludes with a bitter attack on "various and powerful rivals and enemies" who have "obscured the eyes of your Majesty". Charles, who was also Holy Roman Emperor, had little time for distant colonies (much of Charles's reign was taken up with wars with France, the German Protestants and the expanding Ottoman Empire), except insofar as they contributed to finance his wars. In 1521, the year of the Conquest, Charles was attending to matters in his German domains and Bishop Adrian of Utrecht functioned as regent in Spain.
Velázquez and Fonseca persuaded the regent to appoint a commissioner (a juez de residencia, Luis Ponce de León) with powers to investigate Cortés's conduct and even arrest him. Cortés was once quoted as saying that it was "more difficult to contend against [his] own countrymen than against the Aztecs." Governor Diego Velázquez continued to be a thorn in his side, teaming up with Bishop Juan Rodríguez de Fonseca, chief of the Spanish colonial department, to undermine him in the Council of the Indies. A few days after Cortés's return from his expedition, Ponce de León suspended Cortés from his office of governor of New Spain. The licentiate then fell ill and died shortly after his arrival, appointing Marcos de Aguilar as alcalde mayor. The aged Aguilar also became sick and appointed Alonso de Estrada governor, who was confirmed in his functions by a royal decree in August 1527. Cortés, suspected of poisoning them, refrained from taking over the government. Estrada sent Diego de Figueroa to the south; de Figueroa raided graveyards and extorted contributions, meeting his end when the ship carrying these treasures sank. Albornoz persuaded Alonso de Estrada to release Gonzalo de Salazar and Chirinos. When Cortés complained angrily after one of his adherents' hands was cut off, Estrada ordered him exiled. Cortés sailed for Spain in 1528 to appeal to King Charles.

First return to Spain (1528) and Marquessate of the Valley of Oaxaca

In 1528, Cortés returned to Spain to appeal to the justice of his master, Charles V. Juan Altamirano and Alonso Valiente stayed in Mexico and acted as Cortés's representatives during his absence. Cortés presented himself with great splendor before Charles V's court. By this time Charles had returned, and Cortés forthrightly responded to his enemy's charges. Denying he had held back on gold due to the crown, he showed that he had contributed more than the quinto (one-fifth) required. Indeed, he had spent lavishly to build the new capital of Mexico City on the ruins of the Aztec capital of Tenochtitlán, leveled during the siege that brought down the Aztec empire. He was received by Charles with every distinction and decorated with the Order of Santiago. In return for his efforts in expanding the still young Spanish Empire, Cortés was rewarded in 1529 by being accorded the noble title of don but, more importantly, was named the "Marqués del Valle de Oaxaca" (Marquess of the Valley of Oaxaca), and he married the Spanish noblewoman Doña Juana Zúñiga after the 1522 death of his much less distinguished first wife, Catalina Suárez. The noble title and señorial estate of the Marquesado was passed down to his descendants until 1811. The Oaxaca Valley was one of the wealthiest regions of New Spain, and Cortés had 23,000 vassals in 23 named encomiendas in perpetuity. Although confirmed in his land holdings and vassals, he was not reinstated as governor and was never again given any important office in the administration of New Spain. During his travel to Spain, his property was mismanaged by abusive colonial administrators. He sided with local natives in a lawsuit. The natives documented the abuses in the Huexotzinco Codex. The entailed estate and title passed upon Cortés's death in 1547 to his legitimate son Don Martín Cortés, who became the Second Marquess. Don Martín's association with the so-called Encomenderos' Conspiracy endangered the entailed holdings, but they were restored and remained the continuing reward for Hernán Cortés's family through the generations.
Return to Mexico Cortés returned to Mexico in 1530 with new titles and honors, but with diminished power. Although Cortés still retained military authority and permission to continue his conquests, viceroy Don Antonio de Mendoza was appointed in 1535 to administer New Spain's civil affairs. This division of power led to continual dissension, and caused the failure of several enterprises in which Cortés was engaged. On returning to Mexico, Cortés found the country in a state of anarchy. There was a strong suspicion in court circles of an intended rebellion by Cortés. After reasserting his position and reestablishing some sort of order, Cortés retired to his estates at Cuernavaca, about 30 miles (48 km) south of Mexico City. There he concentrated on the building of his palace and on Pacific exploration. Remaining in Mexico between 1530 and 1541, Cortés quarreled with Nuño Beltrán de Guzmán and disputed the right to explore the territory that is today California with Antonio de Mendoza, the first viceroy. Cortés acquired several silver mines in Zumpango del Rio in 1534. By the early 1540s, he owned 20 silver mines in Sultepec, 12 in Taxco, and 3 in Zacualpan. Earlier, Cortés had claimed the silver in the Tamazula area. In 1536, Cortés explored the northwestern part of Mexico and discovered the Baja California Peninsula. Cortés also spent time exploring the Pacific coast of Mexico. The Gulf of California was originally named the Sea of Cortés by its discoverer Francisco de Ulloa in 1539. This was the last major expedition by Cortés. Later life and death Second return to Spain After his exploration of Baja California, Cortés returned to Spain in 1541, hoping to confound his angry civilians, who had brought many lawsuits against him (for debts, abuse of power, etc.). On his return he went through a crowd to speak to the emperor, who demanded of him who he was. "I am a man," replied Cortés, "who has given you more provinces than your ancestors left you cities." Expedition against Algiers The emperor finally permitted Cortés to join him and his fleet commanded by Andrea Doria at the great expedition against Algiers in the Barbary Coast in 1541, which was then part of the Ottoman Empire and was used as a base by Hayreddin Barbarossa, a famous Turkish corsair and Admiral-in-Chief of the Ottoman Fleet. During this campaign, Cortés was almost drowned in a storm that hit his fleet while he was pursuing Barbarossa. Last years, death, and remains Having spent a great deal of his own money to finance expeditions, he was now heavily in debt. In February 1544 he made a claim on the royal treasury, but was ignored for the next three years. Disgusted, he decided to return to Mexico in 1547. When he reached Seville, he was stricken with dysentery. He died in Castilleja de la Cuesta, Seville province, on December 2, 1547, from a case of pleurisy at the age of 62. He left his many mestizo and white children well cared for in his will, along with every one of their mothers. He requested in his will that his remains eventually be buried in Mexico. Before he died he had the Pope remove the "natural" status of four of his children (legitimizing them in the eyes of the church), including Martin, the son he had with Doña Marina (also known as La Malinche), said to be his favourite. His daughter, Doña Catalina, however, died shortly after her father's death. After his death, his body was moved more than eight times for several reasons. 
On December 4, 1547, he was buried in the mausoleum of the Duke of Medina in the church of San Isidoro del Campo, Sevilla. Three years later (1550), because the space was required by the duke, his body was moved to the altar of Santa Catarina in the same church. In his testament, Cortés had asked for his body to be buried in the monastery he had ordered to be built in Coyoacán in México, ten years after his death, but the monastery was never built. So in 1566, his body was sent to New Spain and buried in the church of San Francisco de Texcoco, where his mother and one of his sisters were buried. In 1629, Don Pedro Cortés, fourth Marqués del Valle and his last male descendant, died, so the viceroy decided to move the bones of Cortés along with those of his descendant to the Franciscan church in México. This was delayed for nine years, while his body stayed in the main room of the palace of the viceroy. Eventually it was moved to the Sagrario of the Franciscan church, where it stayed for 87 years. In 1716, it was moved to another place in the same church. In 1794, his bones were moved to the "Hospital de Jesus" (founded by Cortés), where a statue by Tolsá and a mausoleum were made. There was a public ceremony, and all the churches in the city rang their bells. In 1823, after the independence of México, it seemed imminent that his body would be desecrated, so the mausoleum was removed, and the statue and the coat of arms were sent to Palermo, Sicily, to be protected by the Duke of Terranova. The bones were hidden, and everyone thought that they had been sent out of México. In 1836, his bones were moved to another place in the same building. It was not until November 24, 1946, that they were rediscovered, thanks to the discovery of a secret document by Lucas Alamán. His bones were put in the charge of the Instituto Nacional de Antropología e Historia (INAH), which authenticated the remains. They were then restored to the same place, this time with a bronze inscription and his coat of arms. When the bones were first rediscovered, the supporters of the Hispanic tradition in Mexico were excited, but one supporter of an indigenist vision of Mexico "proposed that the remains be publicly burned in front of the statue of Cuauhtemoc, and the ashes flung into the air". Following the discovery and authentication of Cortés's remains, there was a discovery of what were described as the bones of Cuauhtémoc, resulting in a "battle of the bones".

Taxa named after Cortés

Cortés is commemorated in the scientific name of a subspecies of Mexican lizard, Phrynosoma orbiculare cortezii.

Disputed interpretation of his life

There are relatively few sources on the early life of Cortés; his fame arose from his participation in the conquest of Mexico, and it was only after this that people became interested in reading and writing about him. Probably the best source is his letters to the king, which he wrote during the campaign in Mexico, but they are written with the specific purpose of putting his efforts in a favourable light and so must be read critically. Another main source is the biography written by Cortés's private chaplain Lopez de Gómara, which was written in Spain several years after the conquest. Gómara never set foot in the Americas and knew only what Cortés had told him, and he had an affinity for knightly romantic stories which he incorporated richly in the biography.
The third major source was written as a reaction to what its author calls "the lies of Gomara": the eyewitness account of the conquistador Bernal Díaz del Castillo, which does not paint Cortés as a romantic hero but rather tries to emphasize that Cortés's men should also be remembered as important participants in the undertakings in Mexico. In the years following the conquest, more critical accounts of the Spanish arrival in Mexico were written. The Dominican friar Bartolomé de Las Casas wrote his A Short Account of the Destruction of the Indies, which raises strong accusations of brutality and heinous violence towards the Indians, accusations against both the conquistadors in general and Cortés in particular. The accounts of the conquest given in the Florentine Codex by the Franciscan Bernardino de Sahagún and his native informants are also less than flattering towards Cortés. The scarcity of these sources has led to a sharp division in the description of Cortés's personality and a tendency to describe him as either a vicious and ruthless person or a noble and honorable cavalier.

Representations in Mexico

In México there are few representations of Cortés. However, many landmarks still bear his name, from the castle Palacio de Cortés in the city of Cuernavaca to some street names throughout the republic. The pass between the volcanoes Iztaccíhuatl and Popocatépetl, where Cortés took his soldiers on their march to Mexico City, is known as the Paso de Cortés. The muralist Diego Rivera painted several representations of him, but the most famous depicts him as a powerful and ominous figure alongside Malinche in a mural in the National Palace in Mexico City. In 1981, President López Portillo tried to bring Cortés to public recognition. First, he made public a copy of the bust of Cortés made by Manuel Tolsá in the Hospital de Jesús Nazareno with an official ceremony, but soon a nationalist group tried to destroy it, so it had to be removed from public view. Today the copy of the bust is in the "Hospital de Jesús Nazareno", while the original is in Naples, Italy, in the Villa Pignatelli. Later, another monument, the "Monumento al Mestizaje" by Julián Martínez y M. Maldonado (1982), was commissioned by Mexican president José López Portillo to be placed in the "Zócalo" (main square) of Coyoacán, near the site of his country house, but it had to be relocated to a little-known park, the Jardín Xicoténcatl in Barrio de San Diego Churubusco, to quell protests. The statue depicts Cortés, Malinche and their son Martín. Another statue, by Sebastián Aparicio in Cuernavaca, stood in the hotel "El Casino de la Selva"; Cortés is barely recognizable in it, so it sparked little interest. The hotel was closed to make way for a commercial center, and the statue was taken out of public display by Costco, the builder of the commercial center.

Cultural depictions

Hernán Cortés is a character in the opera La Conquista (2005) by Italian composer Lorenzo Ferrero, which depicts the major episodes of the Spanish conquest of the Aztec Empire in 1521.

Writings: the Cartas de Relación

Cortés's personal account of the conquest of Mexico is narrated in his five letters addressed to Charles V. These five letters, the cartas de relación, are Cortés's only surviving writings. See "Letters and Dispatches of Cortés", translated by George Folsom (New York, 1843); Prescott's "Conquest of Mexico" (Boston, 1843); and Sir Arthur Helps's "Life of Hernando Cortes" (London, 1871).
His first letter was considered lost, and the one from the municipality of Veracruz had to take its place. It was published for the first time in volume IV of "Documentos para la Historia de España", and subsequently reprinted. The Segunda Carta de Relación, bearing the date of October 30, 1520, appeared in print at Seville in 1522. The third letter, dated May 15, 1522, appeared at Seville in 1523. The fourth, October 20, 1524, was printed at Toledo in 1525. The fifth, on the Honduras expedition, is contained in volume IV of the Documentos para la Historia de España.

Children

Natural children of Don Hernán Cortés:
doña Catalina Pizarro, born between 1514 and 1515 in Santiago de Cuba or maybe later in Nueva España, daughter of a Cuban woman, Leonor Pizarro. Doña Catalina married Juan de Salcedo, a conqueror and encomendero, with whom she had a son, Pedro.
don Martín Cortés, born in Coyoacán in 1522, son of doña Marina (La Malinche), called the First Mestizo; about him was written The New World of Martín Cortés. He married doña Bernaldina de Porras and had two children, doña Ana Cortés and don Fernando Cortés, Principal Judge of Veracruz; descendants of this line are alive today in Mexico.
don Luis Cortés, born in 1525, son of doña Antonia or Elvira Hermosillo, a native of Trujillo (Cáceres).
doña Leonor Cortés Moctezuma, born in 1527 or 1528 in Ciudad de Mexico, daughter of the Aztec princess Tecuichpotzin (baptized Isabel), born in Tenochtitlan on July 11, 1510, and died on July 9, 1550, the eldest legitimate daughter of Moctezuma II Xocoyotzin and his wife doña María Miahuaxuchitl; married to Juan de Tolosa, a Basque merchant and miner.
doña María Cortés de Moctezuma, daughter of an Aztec princess; nothing more is known about her except that she probably was born with some deformity.

He married twice: firstly in Cuba to Catalina Suárez Marcaida, who died at Coyoacán in 1522 without issue, and secondly in 1529 to doña Juana Ramírez de Arellano de Zúñiga, daughter of don Carlos Ramírez de Arellano, 2nd Count of Aguilar, and his wife the Countess doña Juana de Zúñiga, and had:
don Luis Cortés y Ramírez de Arellano, born in Texcoco in 1530 and died shortly after his birth.
doña Catalina Cortés de Zúñiga, born in Cuernavaca in 1531 and died shortly after her birth.
don Martín Cortés y Ramírez de Arellano, 2nd Marquess of the Valley of Oaxaca, born in Cuernavaca in 1532, married at Nalda on February 24, 1548, his twice cousin once removed doña Ana Ramírez de Arellano y Ramírez de Arellano and had issue, currently extinct in the male line.
doña María Cortés de Zúñiga, born in Cuernavaca between 1533 and 1536, married to don Luis de Quiñones y Pimentel, 5th Count of Luna.
doña Catalina Cortés de Zúñiga, born in Cuernavaca between 1533 and 1536, died unmarried in Sevilla after the funeral of her father.
doña Juana Cortés de Zúñiga, born in Cuernavaca between 1533 and 1536, married Don Fernando Enríquez de Ribera y Portocarrero, 2nd Duke of Alcalá de los Gazules, 3rd Marquess of Tarifa and 6th Count of Los Molares, and had issue.

In popular culture

Cortés was portrayed (as "Hernando Cortez") by actor Cesar Romero in the 1947 historical adventure film Captain from Castile.
"Cortez the Killer" is a 1975 song by Neil Young.
Cortés is a major villain in the 2000 animated movie The Road to El Dorado, voiced by Jim Cummings.
Cortés, played by Óscar Jaenada, is a morally ambiguous protagonist in the 2019 eight-episode TV series Hernán.
In 1986, Polish illustrator Jerzy Wróblewski created a 48-page comic book titled Hernán Cortés i podbój Meksyku (Hernán Cortés and the Conquest of Mexico). The comic book, based on historical chronicles, narrated Cortés's life, concentrating on the titular 1519–1521 period; it was noted for its realistic depictions of violence, unusual in Polish comic books of the era.
Cortés features in the 1980 novel Aztec by Gary Jennings as an antagonist.

See also

History of Mexico
History of Mexico City
New Spain
Palace
It originated as an alteration of the word "history", as part of a feminist critique of conventional historiography, which in their opinion is traditionally written as "his story", i.e., from the male point of view. The term is a neologism since the word "history"—from the Ancient Greek word ἱστορία, or more directly from its Latin derivate historia, meaning "knowledge obtained by inquiry"— is etymologically unrelated to the possessive pronoun his. Usage The Oxford English Dictionary credits Robin Morgan with first using the term "herstory" in print in her 1970 anthology Sisterhood is Powerful. Concerning the feminist organization W.I.T.C.H., Morgan wrote: The fluidity and wit of the witches is evident in the ever-changing acronym: the basic, original title was Women's International Terrorist Conspiracy from Hell [...] and the latest heard at this writing is Women Inspired to Commit Herstory. During the 1970s and 1980s, second-wave feminists saw the study of history as a male-dominated intellectual enterprise and presented "herstory" as a means of compensation. The term, intended to be both serious and comic, became a rallying cry used on T-shirts and buttons as well as in academia. In 2017, Hridith Sudev, an inventor, environmentalist and social activist associated with various youth movements, launched 'The Herstory Movement,' an online platform to "celebrate lesser known great persons; female, queer or otherwise marginalized, who helped shape the modern World History." It is intended as an academic platform to
Billsborough. Urquhart eliminates Woolton by a prolonged scheme: at the party conference, he pressures O'Neill into persuading his personal assistant and lover, Penny Guy (Alphonsia Emmanuel), to have a one-night stand with Woolton in his suite, which Urquhart records via a bugged ministerial red box. When the tape is sent to Woolton, he is led to assume that Samuels is behind the scheme and backs Urquhart in the contest. Urquhart also receives support from Collingridge, who is unaware of Urquhart's role in his own downfall. Samuels is forced out of the running when the tabloids reveal that he backed leftist causes as a student at University of Cambridge. Stumbling across contradictions in the allegations against Collingridge and his brother, Mattie begins to dig deeper. On Urquhart's orders, O'Neill arranges for her car and flat to be vandalised in a show of intimidation. However, O'Neill becomes increasingly uneasy with what he is being asked to do, and his cocaine addiction adds to his instability. Urquhart mixes O'Neill's cocaine with rat poison, causing him to kill himself when taking the cocaine in a motorway service station lavatory on the M27 at Rownhams. Though initially blind to the truth of matters thanks to her relations with Urquhart, Mattie eventually deduces that Urquhart is responsible for O'Neill's death and is behind the unfortunate downfalls of Collingridge and all of Urquhart's rivals. Mattie looks for Urquhart at the point when it seems his victory is certain. She eventually finds him on the roof garden of the Houses of Parliament, where she confronts him. He admits to O'Neill's murder and everything else he has done. He then asks whether he can trust Mattie, and, though she answers in the affirmative, he does not believe her and throws her off the roof onto a van parked below. An unseen person picks up Mattie's tape recorder, which she had been using to secretly record her conversations with Urquhart. The series ends with Urquhart defeating Samuels in the second leadership ballot and being driven to Buckingham Palace to be invited to form a government by Elizabeth II. Deviations from the novel in the series In the first novel, but not in the television series: Urquhart never speaks directly to the reader; the character is written solely in a third-person perspective. When alone, Urquhart is much less self-assured and decisive. Mattie Storin works for The Daily Telegraph. (In the television series she is a journalist with the fictional Chronicle newspaper.) Mattie Storin does not have a relationship with Urquhart; she does not even talk to him frequently. She does, however, have a sexual relationship with John Krajewski. Urquhart's wife is called Miranda and is a minor character, not sharing in his schemes. (In the later novels, To Play the King and The Final Cut, however, she is called "Elizabeth" and plays a larger role, as in the television series.) The Conservative party conference is held in Bournemouth. (In the television series it occurs in Brighton.) The minor character Tim Stamper is introduced for the on-screen adaptation (although Dobbs introduces him in the novel To Play the King). Earle's rent boy appears in person at an important speech of Earle's, distracting him; subsequently, Earle is harassed by reporters who have been told of his indiscretion. In the final confrontation scene Urquhart throws himself from the roof terrace and Mattie survives. 
Before the series was reissued in 2013 to coincide with the release of the US version of House of Cards, Dobbs rewrote portions of the novel to bring it in line with the television series and restore continuity among the three novels. In the 2013 version: Urquhart murders Mattie Storin, throwing her off the roof after she confronts Urquhart about his actions. Mattie Storin does not scream "Daddy" as she falls. Urquhart covers up his murder of Mattie Storin by claiming she was an obsessed stalker who was mentally ill and vows to make mental health amongst the young a priority. Mattie Storin works for the newspaper The Chronicle, as in the TV series. Urquhart's wife Miranda is changed to Mortima. Tim Stamper, though present in the serial, does not appear in the revised version of the novel. Urquhart makes asides to the audience in the form of epigraphs at the beginning of each chapter (the original novel has no chapters). Reception The first installment of the TV series coincidentally aired two days before the Conservative Party leadership election. During a time of "disillusionment with politics", the series "caught the nation's mood". Ian Richardson won a Best Actor BAFTA in 1991 for his role as Urquhart, and Andrew Davies won an Emmy for outstanding writing in a miniseries. The series ranked 84th in the British Film Institute list of the 100 Greatest British Television Programmes. American adaptation The Urquhart trilogy has been adapted in the United States as House of Cards. The show stars Kevin Spacey as Francis "Frank" Underwood, the Majority Whip of the Democratic caucus in the U.S.
House of Representatives, who schemes and murders his way to becoming President of the United States. It is produced by David Fincher and Spacey's Trigger Street Productions, with the initial episodes directed by Fincher. The series, produced and financed by independent studio Media Rights Capital, was one of Netflix's first forays into original programming. Series one was made available online on 1 February 2013. The series is filmed in Baltimore, Maryland. The first series was critically acclaimed and earned four Golden Globe nominations, including Best Drama, Best Actor, Best Actress and Best Supporting Actor, with Robin Wright winning Best Actress. It also earned nine Primetime Emmy Award nominations, winning three, and was the first show broadcast solely via an internet streaming service to earn Emmy nominations. In popular culture The drama introduced and popularised the phrase: "You might very well think that; I couldn't possibly comment". It was a way of confirming a statement without appearing to do so, used by Urquhart whenever he could not be seen to agree with a leading statement, with the emphasis on either the "I" or the "possibly", depending on the situation. The phrase was even used in the House of Commons, House of Lords and Parliamentary Committees following the series. Prince Charles used the phrase in response to a provocative question from a journalist in 2014. A variation on the phrase was written into the TV adaptation of Terry Pratchett's Hogfather for the character Death, as an in-joke on the fact that he was voiced by Richardson. During the first Gulf War, a British reporter speaking from Baghdad, conscious of the possibility of censorship, used the code phrase "You might very well think that; I couldn't possibly comment" to answer a BBC presenter's question. A further variation was used by Nicola Murray, a fictional government minister, in the third series finale of The Thick of It. In the US adaptation, the phrase is used by Frank Underwood in the first episode during his initial meeting with Zoe Barnes, the US counterpart of Mattie Storin. See also List of House of Cards trilogy characters Politics in fiction A Very British Coup, a similar drama of fictional contemporary British politics from a left-wing perspective Yes Minister (and its sequel Yes, Prime Minister), a satirical sitcom about a generic British government List of fictional prime ministers of the United Kingdom
her retirement on May 2, 1972, the day Hoover died. Hoover said of her: "if there is anyone in this Bureau whose services are indispensable, I consider Miss Gandy to be that person." Despite this, Curt Gentry wrote: Theirs was a rigidly formal relationship. He'd always called her 'Miss Gandy' (when angry, barking it out as one word). In all those fifty-four years he had never once called her by her first name. Hoover biographers Theoharis and Cox would say "her stern face recalled Cerberus at the gate," a view echoed by Anthony Summers in his life of Hoover, who also pictured Gandy as Hoover's first line of defense against the outside world. When Attorney General Robert F. Kennedy, Hoover's superior, had a direct telephone line installed between their offices, Hoover refused to answer the phone. "Put that damn thing on Miss Gandy's desk where it belongs," Hoover would declare. Gentry described Gandy's influence: Her genteel manner and pleasant voice contrasted sharply with this domineering presence. Yet behind the politeness was a resolute firmness not unlike his, and no small amount of influence. Many a career in the Bureau had been quietly manipulated by her. Even those who disliked him, praised her, most often commenting on her remarkable ability to get along with all kinds of people. That she had held her position for fifty-four years was the best evidence of this, for it was a Bureau tradition that the closer you were to him, the more demanding he was. William C. Sullivan, an agent with the Bureau for three decades, reported in his memoir when he worked in the public relations section answering mail from the public, he gave a correspondent the wrong measurements for Hoover's personal popover recipe, relying on memory rather than the files. Gandy, ever protective of her boss, caught the error and brought it to Hoover's attention. The director then placed an official letter of reprimand in Sullivan's file for the lapse. Mark Felt, deputy associate director of the FBI, wrote in his memoir that Gandy "was bright and alert and quick-tempered—and completely dedicated to her boss." Files Hoover died during the night of May 1–2, 1972. According to Curt Gentry, who wrote the 1991 book J Edgar Hoover: The Man and the Secrets, Hoover's body was not discovered by his live-in cook and general housekeeper, Annie Fields; rather, it was discovered by James Crawford, who had been Hoover's chauffeur for 37 years. Crawford then yelled out to Fields and Tom Moton (Hoover's new chauffeur after Crawford had retired in January 1972). Fields first called Hoover's personal physician, Dr. Robert Choisser, then used another phone to call Clyde Tolson's private number. Tolson then called Gandy's private number with the news of Hoover's death along with orders to begin destroying the files. Within an hour, the "D List" ("d" standing for destruction) was being distributed, and the destruction of files began. However, The New York Times quoted an anonymous FBI source in spring 1975, who said: "Gandy had begun almost a year before Mr. Hoover's death and was instructed to purge the files that were then in his office." Anthony Summers reported that G. Gordon Liddy had said of his sources in the FBI: "by the time Gray went in to get the files, Miss Gandy had already got rid of them." The day after Hoover died, L. Patrick Gray, who had been named acting director by President Richard Nixon upon Tolson's resignation from that position, went to Hoover's office. Gandy paused from her work to give Gray a tour. 
He found file cabinets open and packing boxes being filled with papers. She informed him the boxes contained personal papers of Hoover's. Gandy stated Gray flipped through a few files and approved her work, but Gray was to deny he looked at any papers. Gandy also told Gray it would be a week before she could clear Hoover's effects out so Gray could move into the suite. Gray reported to Nixon that he had secured Hoover's office and its contents. However, he had sealed only Hoover's personal inner office, where no files were stored, not the entire suite of offices. Since 1957, Hoover's "Official/Confidential" files, containing material too sensitive to include in the FBI's central files, had been kept in the outer office, where Gandy sat. Gentry reported that Gray would not have known where to look in Gandy's office for the files, as her office was lined floor to ceiling with filing cabinets; moreover, without her index to the files, he would not have been able to locate incriminating material, for files were deliberately mislabeled, e.g., President Nixon's file was labeled "Obscene Matters". On May 4, Gandy turned over 12 boxes labelled "Official/Confidential", containing 167 files and 17,750 pages, to Mark Felt. Many of them contained derogatory information. Gray told the press that afternoon that "there are no dossiers or secret files. There are just general files and I took steps to preserve their integrity." Gandy retained the "Personal File". Gandy worked on going through Hoover's "Personal File" in the office until May 12. She then
transferred at least 32 file drawers of material to the basement recreation room of Hoover's Washington home at 4936 Thirtieth Place, NW, where she continued her work from May 13 to July 17. Gandy later testified nothing official had been removed from the FBI's offices, "not even his badge." At Hoover's residence the destruction was overseen by John P. Mohr, the number three man in the FBI after Hoover and Tolson. He was aided by James Jesus Angleton, the Central Intelligence Agency's counterintelligence chief, whom Hoover's neighbors saw removing boxes from Hoover's home. Mohr would claim the boxes Angleton removed were cases of spoiled wine. In 1975, when the House Committee on Government Oversight investigated the FBI's illegal COINTELPRO program of spying on and harassment of Martin Luther King Jr. and others, Gandy was called to testify regarding the "Personal Files". "I tore them up, put them in boxes, and they were taken away to be shredded," she told the congressmen about the papers. The FBI Washington field office had FBI drivers transport the material to Hoover's home, then once Gandy had gone through the material, the drivers transported it back to the field office in the Old Post Office Building on Pennsylvania Avenue, where it was shredded and burned. Gandy stated that Hoover had left standing instructions to destroy his personal papers upon his death, and that this instruction was confirmed by Tolson and Gray.
Gandy stated that she destroyed no official papers, that everything was personal papers of Hoover's. The staff of the subcommittee did not believe her, but she told the committee: "I have no reason to lie." Representative Andrew Maguire (D-New Jersey), a freshman member of the 94th Congress, said "I find your testimony very difficult to believe." Gandy held her ground: "That is your privilege." "I can give you my word. I know what there was—letters to and from friends, personal friends, a lot of letters," she testified. Gandy also said the files she took to his home also included his financial papers, such as tax returns and investment statements, the deed to his home, and papers relating to his dogs' pedigrees. Curt Gentry wrote: Helen Gandy must have felt quite safe in testifying as she did for who could contradict her? Only one other person knew exactly what the files contained and he was dead. In J. Edgar Hoover: The Man and His Secrets, Gentry describes the nature of the files: "... their contents included blackmail material on the patriarch of an American political dynasty, his sons, their wives, and other women; allegations of two homosexual arrests which Hoover leaked to help defeat a witty, urbane Democratic presidential candidate; the surveillance reports on one of America's best-known first ladies and her alleged lovers, both male and female, white and black; the child molestation documentation the director used to control and manipulate one of the Red-baiting proteges; a list of the Bureau's spies in the White House during the eight administrations when Hoover was FBI director; the forbidden fruit of hundreds of illegal wiretaps and bugs, containing, for example, evidence that an attorney general, Tom C. Clark, who later became Supreme Court justice, had received payoffs from the Chicago syndicate; as well as celebrity files, with all the unsavory gossip Hoover could amass on some of the biggest names in show business." Later years Hoover left Gandy $5,000 in his will. In 1961, she and her sister, Lucy G. Rodman, donated a portrait of their mother by Thomas Eakins to the Smithsonian American Art Museum. Gandy lived in Washington until 1986, when she moved to DeLand, Florida, in Volusia County, where a niece lived. Gandy was an avid trout fisherman. Death Gandy died of a heart attack on July 7, 1988, either in DeLand (as indicated by her New York Times obituary) or in nearby Orange City, Florida (as stated in her Washington Post obituary). In popular culture Gandy has been portrayed by actresses Lee Kessler in J. Edgar
previously suggested per minute, and Tredgold suggested per minute. "Watt found by experiment in 1782 that a "brewery horse" could produce per minute." James Watt and Matthew Boulton standardized that figure at per minute the next year. A common legend states that the unit was created when one of Watt's first customers, a brewer, specifically demanded an engine that would match a horse, choosing the strongest horse he had and driving it to the limit. Watt, while aware of the trick, accepted the challenge and built a machine that was actually even stronger than the figure achieved by the brewer, and the output of that machine became the horsepower. In 1993, R. D. Stevenson and R. J. Wassersug published correspondence in Nature summarizing measurements and calculations of peak and sustained work rates of a horse. Citing measurements made at the 1926 Iowa State Fair, they reported that the peak power over a few seconds has been measured to be as high as and also observed that for sustained activity, a work rate of about per horse is consistent with agricultural advice from both the 19th and 20th centuries and also consistent with a work rate of about four times the basal rate expended by other vertebrates for sustained activity. When considering human-powered equipment, a healthy human can produce about briefly (see orders of magnitude) and sustain about indefinitely; trained athletes can manage up to about briefly and for a period of several hours. The Jamaican sprinter Usain Bolt produced a maximum of 0.89 seconds into his world-record 9.58-second sprint in 2009. Calculating power When torque is in pound-foot units and rotational speed is in rpm, the resulting power in horsepower is power (hp) = torque (lbf⋅ft) × speed (rpm) / 5252. The constant 5252 is the rounded value of (33,000 ft⋅lbf/min)/(2π rad/rev). When torque is in inch-pounds, power (hp) = torque (lbf⋅in) × speed (rpm) / 63,025. The constant 63,025 is the approximation of (33,000 ft⋅lbf/min × 12 in/ft)/(2π rad/rev). Definitions The following definitions have been or are widely used: In certain situations it is necessary to distinguish between the various definitions of horsepower and thus a suffix is added: hp(I) for mechanical (or imperial) horsepower, hp(M) for metric horsepower, hp(S) for boiler (or steam) horsepower and hp(E) for electrical horsepower. Mechanical horsepower Assuming the third CGPM (1901, CR 70) definition of standard gravity (g = 9.80665 m/s2) is used to define the pound-force as well as the kilogram-force, and the international avoirdupois pound (1959), one mechanical horsepower is: 1 hp ≡ 33,000 ft⋅lbf/min by definition; = 550 ft⋅lbf/s, since 1 min = 60 s; = 550 × 0.3048 × 0.45359237 m⋅kgf/s, since 1 ft ≡ 0.3048 m and 1 lb ≡ 0.45359237 kg; = 76.0402249 kgf⋅m/s; = 76.0402249 × 9.80665 kg⋅m2/s3, since g = 9.80665 m/s2; = 745.6998715822702 W ≈ 745.700 W, since 1 W ≡ 1 J/s = 1 N⋅m/s = 1 (kg⋅m/s2)⋅(m/s). Or given that 1 hp = 550 ft⋅lbf/s, 1 ft = 0.3048 m, 1 lbf ≈ 4.448 N, 1 J = 1 N⋅m, 1 W = 1 J/s: 1 hp ≈ 746 W. Metric horsepower (PS, cv, hk, pk, ks, ch) The various units used to indicate this definition (PS, KM, cv, hk, pk, ks and ch) all translate to horse power in English. British manufacturers often intermix metric horsepower and mechanical horsepower depending on the origin of the engine in question. DIN 66036 defines one metric horsepower as the power to raise a mass of 75 kilograms against the Earth's gravitational force over a distance of one metre in one second: 75 kgf⋅m/s = 1 PS. This is equivalent to 735.49875 W, or 98.6% of an imperial mechanical horsepower.
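To make the conversions above concrete, here is a minimal Python sketch; the function name and the example torque and speed figures are illustrative assumptions, not values from the text.

# Horsepower from torque and rotational speed.
# 5252 is the rounded value of 33,000 ft·lbf/min divided by 2π rad/rev, as stated above.
def hp_from_torque(torque_lbft, speed_rpm):
    return torque_lbft * speed_rpm / 5252.0

# Watt equivalents implied by the definitions above.
MECHANICAL_HP_W = 550 * 0.3048 * 0.45359237 * 9.80665   # ≈ 745.700 W
METRIC_HP_W = 75 * 9.80665                               # = 735.49875 W (1 PS)

# Example with hypothetical engine figures: 300 lbf·ft at 4,000 rpm.
hp = hp_from_torque(300, 4000)
print(round(hp, 1))                              # ≈ 228.5 hp
print(round(hp * MECHANICAL_HP_W / 1000, 1))     # ≈ 170.4 kW
print(round(METRIC_HP_W / MECHANICAL_HP_W, 4))   # ≈ 0.9863, the 98.6% ratio quoted above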
In 1972, the PS was replaced by the kilowatt as the official power-measuring unit in EEC directives. Other names for the metric horsepower are the Italian , Dutch , the French , the Spanish and Portuguese , the Russian , the Swedish , the Finnish , the Estonian , the Norwegian and Danish , the Hungarian , the Czech and Slovak or ), the Bosnian/Croatian/Serbian , the Bulgarian , the Macedonian , the Polish , Slovenian , the Ukrainian , the Romanian , and the German . In the 19th century, the French had their own unit, which they used instead of the CV or horsepower. It was called the poncelet and was abbreviated p. Tax horsepower Tax or fiscal horsepower is a non-linear rating of a motor vehicle for tax purposes. Tax horsepower ratings were originally more or less directly related to the size of the engine; but as of 2000, many countries changed over to systems based on CO2 emissions, so are not directly comparable to older ratings. The Citroën 2CV is named for its French fiscal horsepower rating, "deux chevaux" (2CV). Electrical horsepower Nameplates on electrical motors show their power output, not the power input (the power delivered at the shaft, not the power consumed to drive the motor). This power output is ordinarily stated in watts or kilowatts. In the United States, the power output is stated in horsepower, which for this purpose is defined as exactly 746 W. Hydraulic horsepower Hydraulic horsepower can represent the power available within hydraulic machinery, power through the down-hole nozzle of a drilling rig, or can be used to estimate the mechanical power needed to generate a known hydraulic flow rate. It may be calculated as where pressure is in psi, and flow rate is in US gallons per minute. Drilling rigs are powered mechanically by rotating the drill pipe from above. Hydraulic power is still needed though, as between 2 and 7 hp are required to push mud through the drill bit to clear waste rock. Additional hydraulic power may also be used to drive a down-hole mud motor to power directional drilling. Boiler horsepower Boiler horsepower is a boiler's capacity to deliver steam to a steam engine and is not the same unit of power as the 550 ft lb/s definition. One boiler horsepower is equal to the thermal energy rate required to evaporate of fresh water at in one hour. In the early days of steam use, the boiler horsepower was roughly comparable to the horsepower of engines fed by the boiler. The term "boiler horsepower" was originally developed at the Philadelphia Centennial Exhibition in 1876, where the best steam engines of that period were tested. The average steam consumption of those engines (per output horsepower) was determined to be the evaporation of of water per hour, based on feed water at , and saturated steam generated at . This original definition is equivalent to a boiler heat output of . A few years later in 1884, the ASME re-defined the boiler horsepower as the thermal output equal to the evaporation of 34.5 pounds per hour of water "from and at" 212 °F. This considerably simplified boiler testing, and provided more accurate comparisons of the boilers at that time. This revised definition is equivalent to a boiler heat output of . Present industrial practice is to define "boiler horsepower" as a boiler thermal output equal to , which is very close to the original and revised definitions. Boiler horsepower is still used to measure boiler output in industrial boiler engineering in the US. 
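The hydraulic relation mentioned above can be written out numerically. The divisor is not given in the text, so the sketch below derives it from the stated units, assuming 1 US gallon = 231 cubic inches and 1 hp = 33,000 ft·lbf/min; the example pressure and flow figures are hypothetical.

# Hydraulic horsepower from pressure (psi) and flow rate (US gallons per minute).
# 1 psi × 1 gal/min = 231 in³·lbf/min = 231/12 ft·lbf/min, and
# 33,000 / (231/12) ≈ 1714, so HHP ≈ pressure × flow / 1714.
HHP_DIVISOR = 33000.0 / (231.0 / 12.0)   # ≈ 1714.3

def hydraulic_hp(pressure_psi, flow_gpm):
    return pressure_psi * flow_gpm / HHP_DIVISOR

# Example: mud pumped at 3,000 psi and 4 gal/min gives roughly 7 hp,
# the top of the 2 to 7 hp range mentioned above.
print(round(hydraulic_hp(3000, 4), 1))   # ≈ 7.0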
Boiler horsepower is abbreviated BHP, not to be confused with brake horsepower, below, which is also abbreviated BHP. Drawbar horsepower Drawbar horsepower (dbhp) is the power a railway locomotive has available to haul a train or an agricultural tractor to pull an implement. This is a measured figure rather than a calculated one. A special railway car called a dynamometer car coupled behind the locomotive keeps a continuous record of the drawbar pull exerted, and the speed. From these, the power generated can be calculated. To determine the maximum power available, a controllable load is required; it is normally a second locomotive with its brakes applied, in addition to a static load. If the drawbar
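As a rough illustration of turning a measured drawbar pull and speed into power, here is a short sketch; the 375 divisor simply follows from 33,000 ft·lbf/min and 88 ft/min per mph, and the locomotive figures are hypothetical.

# Drawbar horsepower from measured pull (lbf) and speed (mph).
# 1 mph = 88 ft/min, so dbhp = pull × speed × 88 / 33,000 = pull × speed / 375.
def drawbar_hp(pull_lbf, speed_mph):
    return pull_lbf * speed_mph * 88.0 / 33000.0

# Example: 30,000 lbf of drawbar pull sustained at 25 mph.
print(round(drawbar_hp(30000, 25)))   # 2000 dbhp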
of steam engines with the power of draft horses. It was later expanded to include the output power of other types of piston engines, as well as turbines, electric motors and other machinery. The definition of the unit varied among geographical regions. Most countries now use the SI unit watt for measurement of power. With the implementation of the EU Directive 80/181/EEC on 1 January 2010, the use of horsepower in the EU is permitted only as a supplementary unit. History The development of the steam engine provided a reason to compare the output of horses with that of the engines that could replace them. In 1702, Thomas Savery wrote in The Miner's Friend: So that an engine which will raise as much water as two horses, working together at one time in such a work, can do, and for which there must be constantly kept ten or twelve horses for doing the same. Then I say, such an engine may be made large enough to do the work required in employing eight, ten, fifteen, or twenty horses to be constantly maintained and kept for doing such a work… The idea was later used by James Watt to help market his improved steam engine. He had previously agreed to take royalties of one third of the savings in coal from the older Newcomen steam engines. This royalty scheme did not work with customers who did not have existing steam engines but used horses instead. Watt determined that a horse could turn a mill wheel 144 times in an hour (or 2.4 times a minute). The wheel was in radius; therefore, the horse travelled feet in one minute. Watt judged that the horse could pull with a force of . So: Watt defined and calculated the horsepower as 32,572 ft⋅lbf/min, which was rounded to an even 33,000 ft⋅lbf/min. Watt determined that a pony could lift an average per minute over a four-hour working shift. Watt then judged a horse was 50% more powerful than a pony and thus arrived at the 33,000 ft⋅lbf/min figure. Engineering in History recounts that John Smeaton initially estimated that a horse could produce per minute. John Desaguliers had
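The arithmetic behind Watt's 32,572 ft⋅lbf/min figure can be reproduced in a few lines; the wheel radius and pull below are the values commonly quoted in retellings of the estimate (12 feet and about 180 pounds-force), used here as assumptions since the text leaves them blank.

import math

turns_per_minute = 2.4        # from the text: 144 turns per hour
wheel_radius_ft = 12.0        # assumed, commonly quoted value
pull_lbf = 180.0              # assumed, commonly quoted value

distance_ft_per_min = turns_per_minute * 2 * math.pi * wheel_radius_ft
power_ftlbf_per_min = distance_ft_per_min * pull_lbf

print(round(distance_ft_per_min, 1))   # ≈ 181.0 ft travelled per minute
print(round(power_ftlbf_per_min))      # ≈ 32572, which Watt rounded up to 33,000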
1216, during the First Barons' War London was occupied by Prince Louis of France, who had been called in by the baronial rebels against King John and was acclaimed as King of England in St Paul's Cathedral. However, following John's death in 1217 Louis's supporters reverted to their Plantagenet allegiance, rallying round John's son Henry III, and Louis was forced to withdraw from England. In 1224, after an accusation of ritual murder, the Jewish community was subjected to a steep punitive levy. Then in 1232, Henry III confiscated the principal synagogue of the London Jewish community because he claimed their chanting was audible in a neighboring church. In 1264, during the Second Barons' War, Simon de Montfort's rebels occupied London and killed 500 Jews while attempting to seize records of debts. London's Jewish community was forced to leave England by the expulsion by Edward I in 1290. They left for France, Holland and further afield; their property was seized, and many suffered robbery and murder as they departed. Over the following centuries, London would shake off the heavy French cultural and linguistic influence which had been there since the times of the Norman conquest. The city would figure heavily in the development of Early Modern English. During the Peasants' Revolt of 1381, London was invaded by rebels led by Wat Tyler. A group of peasants stormed the Tower of London and executed the Lord Chancellor, Archbishop Simon Sudbury, and the Lord Treasurer. The peasants looted the city and set fire to numerous buildings. Tyler was stabbed to death by the Lord Mayor William Walworth in a confrontation at Smithfield and the revolt collapsed. Trade increased steadily during the Middle Ages, and London grew rapidly as a result. In 1100, London's population was somewhat more than 15,000. By 1300, it had grown to roughly 80,000. London lost at least half of its population during the Black Death in the mid-14th century, but its economic and political importance stimulated a rapid recovery despite further epidemics. Trade in London was organised into various guilds, which effectively controlled the city, and elected the Lord Mayor of the City of London. Medieval London was made up of narrow and twisting streets, and most of the buildings were made from combustible materials such as timber and straw, which made fire a constant threat, while sanitation in cities was of low-quality. Modern history Tudor London (1485–1603) In 1475, the Hanseatic League set up its main English trading base (kontor) in London, called Stalhof or Steelyard. It existed until 1853, when the Hanseatic cities of Lübeck, Bremen and Hamburg sold the property to South Eastern Railway. Woollen cloth was shipped undyed and undressed from 14th/15th century London to the nearby shores of the Low Countries, where it was considered indispensable. During the Reformation, London was the principal early centre of Protestantism in England. Its close commercial connections with the Protestant heartlands in northern continental Europe, large foreign mercantile communities, disproportionately large number of literate inhabitants and role as the centre of the English print trade all contributed to the spread of the new ideas of religious reform. Before the Reformation, more than half of the area of London was the property of monasteries, nunneries and other religious houses. Henry VIII's "Dissolution of the Monasteries" had a profound effect on the city as nearly all of this property changed hands. 
The process started in the mid 1530s, and by 1538 most of the larger monastic houses had been abolished. Holy Trinity Aldgate went to Lord Audley, and the Marquess of Winchester built himself a house in part of its precincts. The Charterhouse went to Lord North, Blackfriars to Lord Cobham, the leper hospital of St Giles to Lord Dudley, while the king took for himself the leper hospital of St James, which was rebuilt as St James's Palace. The period saw London rapidly rising in importance among Europe's commercial centres. Trade expanded beyond Western Europe to Russia, the Levant, and the Americas. This was the period of mercantilism and monopoly trading companies such as the Muscovy Company (1555) and the British East India Company (1600) were established in London by Royal Charter. The latter, which ultimately came to rule India, was one of the key institutions in London, and in Britain as a whole, for two and a half centuries. Immigrants arrived in London not just from all over England and Wales, but from abroad as well, for example Huguenots from France; the population rose from an estimated 50,000 in 1530 to about 225,000 in 1605. The growth of the population and wealth of London was fuelled by a vast expansion in the use of coastal shipping. The late 16th and early 17th century saw the great flourishing of drama in London whose preeminent figure was William Shakespeare. During the mostly calm later years of Elizabeth's reign, some of her courtiers and some of the wealthier citizens of London built themselves country residences in Middlesex, Essex and Surrey. This was an early stirring of the villa movement, the taste for residences which were neither of the city nor on an agricultural estate, but at the time of Elizabeth's death in 1603, London was still very compact. Xenophobia was rampant in London, and increased after the 1580s. Many immigrants became disillusioned by routine threats of violence and molestation, attempts at expulsion of foreigners, and the great difficulty in acquiring English citizenship. Dutch cities proved more hospitable, and many left London permanently. Foreigners are estimated to have made up 4,000 of the 100,000 residents of London by 1600, many being Dutch and German workers and traders. Stuart London (1603–1714) London's expansion beyond the boundaries of the City was decisively established in the 17th century. In the opening years of that century the immediate environs of the City, with the principal exception of the aristocratic residences in the direction of Westminster, were still considered not conducive to health. Immediately to the north was Moorfields, which had recently been drained and laid out in walks, but it was frequented by beggars and travellers, who crossed it in order to get into London. Adjoining Moorfields were Finsbury Fields, a favourite practising ground for the archers, Mile End, then a common on the Great Eastern Road and famous as a rendezvous for the troops. The preparations for King James I becoming king were interrupted by a severe plague epidemic, which may have killed over thirty thousand people. The Lord Mayor's Show, which had been discontinued for some years, was revived by order of the king in 1609. The dissolved monastery of the Charterhouse, which had been bought and sold by the courtiers several times, was purchased by Thomas Sutton for £13,000. The new hospital, chapel, and schoolhouse were begun in 1611. 
Charterhouse School was to be one of the principal public schools in London until it moved to Surrey in Victorian times, and the site is still used as a medical school. The general meeting-place of Londoners in the day-time was the nave of Old St. Paul's Cathedral. Merchants conducted business in the aisles, and used the font as a counter upon which to make their payments; lawyers received clients at their particular pillars; and the unemployed looked for work. St Paul's Churchyard was the centre of the book trade and Fleet Street was a centre of public entertainment. Under James I the theatre, which had established itself so firmly in the latter years of Elizabeth's reign, grew further in popularity. The performances at the public theatres were complemented by elaborate masques at the royal court and at the inns of court. Charles I acceded to the throne in 1625. During his reign, aristocrats began to inhabit the West End in large numbers. In addition to those who had specific business at court, increasing numbers of country landowners and their families lived in London for part of the year simply for the social life. This was the beginning of the "London season". Lincoln's Inn Fields was built about 1629. The piazza of Covent Garden, designed by England's first classically trained architect, Inigo Jones, followed in about 1632. The neighbouring streets were built shortly afterwards, and the names of Henrietta, Charles, James, King and York Streets were given after members of the royal family. In January 1642 five members of parliament whom the King wished to arrest were granted refuge in the City. In August of the same year the King raised his banner at Nottingham, and during the English Civil War London took the side of the parliament. Initially the king had the upper hand in military terms, and in November he won the Battle of Brentford a few miles to the west of London. The City organised a new makeshift army, and Charles hesitated and retreated. Subsequently, an extensive system of fortifications was built to protect London from a renewed attack by the Royalists. This comprised a strong earthen rampart, enhanced with bastions and redoubts. It was well beyond the City walls and encompassed the whole urban area, including Westminster and Southwark. London was not seriously threatened by the royalists again, and the financial resources of the City made an important contribution to the parliamentarians' victory in the war. The unsanitary and overcrowded City of London had suffered numerous outbreaks of plague over the centuries, but in Britain it is the last major outbreak which is remembered as the "Great Plague". It occurred in 1665 and 1666 and killed around 60,000 people, about one fifth of the population. Samuel Pepys chronicled the epidemic in his diary. On 4 September 1665 he wrote "I have stayed in the city till above 7400 died in one week, and of them about 6000 of the plague, and little noise heard day or night but tolling of bells." Great Fire of London (1666) The Great Plague was immediately followed by another catastrophe, albeit one which helped to put an end to the plague. On Sunday, 2 September 1666 the Great Fire of London broke out at one o'clock in the morning at a bakery in Pudding Lane in the southern part of the City. Fanned by an eastern wind, the fire spread, and efforts to arrest it by pulling down houses to make firebreaks were disorganised to begin with. On Tuesday night the wind fell somewhat, and on Wednesday the fire slackened.
On Thursday it was extinguished, but on the evening of that day the flames again burst forth at the Temple. Some houses were at once blown up by gunpowder, and thus the fire was finally mastered. The Monument was built to commemorate the fire: for over a century and a half it bore an inscription attributing the conflagration to a "popish frenzy". The fire destroyed about 60% of the City, including Old St Paul's Cathedral, 87 parish churches, 44 livery company halls and the Royal Exchange. However, the number of lives lost was surprisingly small; it is believed to have been 16 at most. Within a few days of the fire, three plans were presented to the king for the rebuilding of the city, by Christopher Wren, John Evelyn and Robert Hooke. Wren proposed to build main thoroughfares north and south, and east and west, to insulate all the churches in conspicuous positions, to form the most public places into large piazzas, to unite the halls of the 12 chief livery companies into one regular square annexed to the Guildhall, and to make a fine quay on the bank of the river from Blackfriars to the Tower of London. Wren wished to build the new streets straight and in three standard widths of thirty, sixty and ninety feet. Evelyn's plan differed from Wren's chiefly in proposing a street from the church of St Dunstan's in the East to the St Paul's, and in having no quay or terrace along the river. These plans were not implemented, and the rebuilt city generally followed the streetplan of the old one, and most of it has survived into the 21st century. Nonetheless, the new City was different from the old one. Many aristocratic residents never returned, preferring to take new houses in the West End, where fashionable new districts such as St. James's were built close to the main royal residence, which was Whitehall Palace until it was destroyed by fire in the 1690s, and thereafter St. James's Palace. The rural lane of Piccadilly sprouted courtiers mansions such as Burlington House. Thus the separation between the middle class mercantile City of London, and the aristocratic world of the court in Westminster became complete. In the City itself there was a move from wooden buildings to stone and brick construction to reduce the risk of fire. Parliament's Rebuilding of London Act 1666 stated "building with brick [is] not only more comely and durable, but also more safe against future perils of fire". From then on only doorcases, window-frames and shop fronts were allowed to be made of wood. Christopher Wren's plan for a new model London came to nothing, but he was appointed to rebuild the ruined parish churches and to replace St Paul's Cathedral. His domed baroque cathedral was the primary symbol of London for at least a century and a half. As city surveyor, Robert Hooke oversaw the reconstruction of the City's houses. The East End, that is the area immediately to the east of the city walls, also became heavily populated in the decades after the Great Fire. London's docks began to extend downstream, attracting many working people who worked on the docks themselves and in the processing and distributive trades. These people lived in Whitechapel, Wapping, Stepney and Limehouse, generally in slum conditions. In the winter of 1683–1684, a frost fair was held on the Thames. The frost, which began about seven weeks before Christmas and continued for six weeks after, was the greatest on record. The Revocation of the Edict of Nantes in 1685 led to a large migration on Huguenots to London. 
They established a silk industry at Spitalfields. At this time the Bank of England was founded, and the British East India Company was expanding its influence. Lloyd's of London also began to operate in the late 17th century. In 1700, London handled 80% of England's imports, 69% of its exports and 86% of its re-exports. Many of the goods were luxuries from the Americas and Asia such as silk, sugar, tea and tobacco. The last figure emphasises London's role as an entrepot: while it had many craftsmen in the 17th century, and would later acquire some large factories, its economic prominence was never based primarily on industry. Instead it was a great trading and redistribution centre. Goods were brought to London by England's increasingly dominant merchant navy, not only to satisfy domestic demand, but also for re-export throughout Europe and beyond. William III, a Dutchman, cared little for London, the smoke of which gave him asthma, and after the first fire at Whitehall Palace (1691) he purchased Nottingham House and transformed it into Kensington Palace. Kensington was then an insignificant village, but the arrival of the court soon caused it to grow in importance. The palace was rarely favoured by future monarchs, but its construction was another step in the expansion of the bounds of London. During the same reign Greenwich Hospital, then well outside the boundary of London, but now comfortably inside it, was begun; it was the naval complement to the Chelsea Hospital for former soldiers, which had been founded in 1681. During the reign of Queen Anne an act was passed authorising the building of 50 new churches to serve the greatly increased population living outside the boundaries of the City of London. 18th century The 18th century was a period of rapid growth for London, reflecting an increasing national population, the early stirrings of the Industrial Revolution, and London's role at the centre of the evolving British Empire. In 1707, an Act of Union was passed merging the Scottish and the English Parliaments, thus establishing the Kingdom of Great Britain. A year later, in 1708 Christopher Wren's masterpiece, St Paul's Cathedral was completed on his birthday. However, the first service had been held on 2 December 1697; more than 10 years earlier. This Cathedral replaced the original St. Paul's which had been completely destroyed in the Great Fire of London. This building is considered one of the finest in Britain and a fine example of Baroque architecture. Many tradesmen from different countries came to London to trade goods and merchandise. Also, more immigrants moved to London making the population greater. More people also moved to London for work and for business making London an altogether bigger and busier city. Britain's victory in the Seven Years' War increased the country's international standing and opened large new markets to British trade, further boosting London's prosperity. During the Georgian period London spread beyond its traditional limits at an accelerating pace. This is shown in a series of detailed maps, particularly John Rocque's 1741–45 map (see below) and his 1746 Map of London. New districts such as Mayfair were built for the rich in the West End, new bridges over the Thames encouraged an acceleration of development in South London and in the East End, the Port of London expanded downstream from the City. During this period was also the uprising of the American colonies. 
In 1780, the Tower of London held its only American prisoner, former President of the Continental Congress, Henry Laurens. In 1779, he was the Congress's representative to Holland, and secured the country's support for the Revolution. On his return voyage back to America, the Royal Navy captured him and charged him with treason after finding evidence that provided grounds for war between Great Britain and the Netherlands. He was released from the Tower on 21 December 1781 in exchange for General Lord Cornwallis. In 1762, George III acquired Buckingham Palace (then called Buckingham House) from the Duke of Buckingham. It was enlarged over the next 75 years by architects such as John Nash. A phenomenon of the era was the coffeehouse, which became a popular place to debate ideas. Growing literacy and the development of the printing press meant that news became widely available. Fleet Street became the centre of the embryonic national press during the century. 18th-century London was dogged by crime. The Bow Street Runners were established in 1750 as a professional police force. Penalties for crime were harsh, with the death penalty being applied for fairly minor crimes. Public hangings were common in London, and were popular public events. In 1780, London was rocked by the Gordon Riots, an uprising by Protestants against Roman Catholic emancipation led by Lord George Gordon. Severe damage was caused to Catholic churches and homes, and 285 rioters were killed. Up until 1750, London Bridge was the only crossing over the Thames, but in that year Westminster Bridge was opened and London Bridge, for the first time in its history, had a rival. In 1798, Frankfurt banker Nathan Mayer Rothschild arrived in London and set up a banking house in the city, with a large sum of money given to him by his father, Amschel Mayer Rothschild. The Rothschilds also had banks in Paris and Vienna. The bank financed numerous large-scale projects, especially regarding railways around the world and the Suez Canal. The 18th century saw the breakaway of the American colonies and other upheavals, but also great change and the spread of Enlightenment ideas, setting the stage for the 19th century. 19th century During the 19th century, London was transformed into the world's largest city and capital of the British Empire. Its population expanded from 1 million in 1800 to 6.7 million a century later. During this period, London became a global political, financial, and trading capital. In this position, it was largely unrivalled until the latter part of the century, when Paris and New York began to threaten its dominance. While the city grew wealthy as Britain's holdings expanded, 19th-century London was also a city of poverty, where millions lived in overcrowded and unsanitary slums. Life for the poor was immortalised by Charles Dickens in such novels as Oliver Twist. In 1810, after the death of Sir Francis Baring and Abraham Goldsmid, Rothschild emerged as the leading banker in London. In 1829, the then Home Secretary (and future prime minister) Robert Peel established the Metropolitan Police as a police force covering the entire urban area. The force gained the nicknames "bobbies" and "peelers", after Robert Peel. 19th-century London was transformed by the coming of the railways. A new network of metropolitan railways allowed for the development of suburbs in neighbouring counties from which middle-class and wealthy people could commute to the centre.
While this spurred the massive outward growth of the city, the growth of greater London also exacerbated the class divide, as the wealthier classes emigrated to the suburbs, leaving the poor to inhabit the inner city areas. The first railway to be built in London was a line from London Bridge to Greenwich, which opened in 1836. This was soon followed by the opening of great rail
the evolving British Empire. In 1707, an Act of Union was passed merging the Scottish and the English Parliaments, thus establishing the Kingdom of Great Britain. A year later, in 1708 Christopher Wren's masterpiece, St Paul's Cathedral was completed on his birthday. However, the first service had been held on 2 December 1697; more than 10 years earlier. This Cathedral replaced the original St. Paul's which had been completely destroyed in the Great Fire of London. This building is considered one of the finest in Britain and a fine example of Baroque architecture. Many tradesmen from different countries came to London to trade goods and merchandise. Also, more immigrants moved to London making the population greater. More people also moved to London for work and for business making London an altogether bigger and busier city. Britain's victory in the Seven Years' War increased the country's international standing and opened large new markets to British trade, further boosting London's prosperity. During the Georgian period London spread beyond its traditional limits at an accelerating pace. This is shown in a series of detailed maps, particularly John Rocque's 1741–45 map (see below) and his 1746 Map of London. New districts such as Mayfair were built for the rich in the West End, new bridges over the Thames encouraged an acceleration of development in South London and in the East End, the Port of London expanded downstream from the City. During this period was also the uprising of the American colonies. In 1780, the Tower of London held its only American prisoner, former President of the Continental Congress, Henry Laurens. In 1779, he was the Congress's representative of Holland, and got the country's support for the Revolution. On his return voyage back to America, the Royal Navy captured him and charged him with treason after finding evidence of a reason of war between Great Britain and the Netherlands. He was released from the Tower on 21 December 1781 in exchange for General Lord Cornwallis. In 1762, George III acquired Buckingham Palace (then called Buckingham House) from the Duke of Buckingham. It was enlarged over the next 75 years by architects such as John Nash. A phenomenon of the era was the coffeehouse, which became a popular place to debate ideas. Growing literacy and the development of the printing press meant that news became widely available. Fleet Street became the centre of the embryonic national press during the century. 18th-century London was dogged by crime. The Bow Street Runners were established in 1750 as a professional police force. Penalties for crime were harsh, with the death penalty being applied for fairly minor crimes. Public hangings were common in London, and were popular public events. In 1780, London was rocked by the Gordon Riots, an uprising by Protestants against Roman Catholic emancipation led by Lord George Gordon. Severe damage was caused to Catholic churches and homes, and 285 rioters were killed. Up until 1750, London Bridge was the only crossing over the Thames, but in that year Westminster Bridge was opened and, for the first time in history, London Bridge, in a sense, had a rival. In 1798, Frankfurt banker Nathan Mayer Rothschild arrived in London and set up a banking house in the city, with a large sum of money given to him by his father, Amschel Mayer Rothschild. The Rothschilds also had banks in Paris and Vienna. The bank financed numerous large-scale projects, especially regarding railways around the world and the Suez Canal. 
The 18th century saw the breakaway of the American colonies and many other unfortunate events in London, but also great change and Enlightenment. This all led into the beginning of modern times, the 19th century. 19th century During the 19th century, London was transformed into the world's largest city and capital of the British Empire. Its population expanded from 1 million in 1800 to 6.7 million a century later. During this period, London became a global political, financial, and trading capital. In this position, it was largely unrivalled until the latter part of the century, when Paris and New York began to threaten its dominance. While the city grew wealthy as Britain's holdings expanded, 19th-century London was also a city of poverty, where millions lived in overcrowded and unsanitary slums. Life for the poor was immortalised by Charles Dickens in such novels as Oliver Twist In 1810, after the death of Sir Francis Baring and Abraham Goldsmid, Rothschild emerges as the major banker in London. In 1829, the then Home Secretary (and future prime minister) Robert Peel established the Metropolitan Police as a police force covering the entire urban area. The force gained the nickname of "bobbies" or "peelers" named after Robert Peel. 19th-century London was transformed by the coming of the railways. A new network of metropolitan railways allowed for the development of suburbs in neighbouring counties from which middle-class and wealthy people could commute to the centre. While this spurred the massive outward growth of the city, the growth of greater London also exacerbated the class divide, as the wealthier classes emigrated to the suburbs, leaving the poor to inhabit the inner city areas. The first railway to be built in London was a line from London Bridge to Greenwich, which opened in 1836. This was soon followed by the opening of great rail termini which eventually linked London to every corner of Great Britain, including Euston station (1837), Paddington station (1838), Fenchurch Street station (1841), Waterloo station (1848), King's Cross station (1850), and St Pancras station (1863). From 1863, the first lines of the London Underground were constructed. The urbanised area continued to grow rapidly, spreading into Islington, Paddington, Belgravia, Holborn, Finsbury, Shoreditch, Southwark and Lambeth. Towards the middle of the century, London's antiquated local government system, consisting of ancient parishes and vestries, struggled to cope with the rapid growth in population. In 1855, the Metropolitan Board of Works (MBW) was created to provide London with adequate infrastructure to cope with its growth. One of its first tasks was addressing London's sanitation problems. At the time, raw sewage was pumped straight into the River Thames. This culminated in The Great Stink of 1858. Parliament finally gave consent for the MBW to construct a large system of sewers. The engineer put in charge of building the new system was Joseph Bazalgette. In what was one of the largest civil engineering projects of the 19th century, he oversaw construction of over 2100 km of tunnels and pipes under London to take away sewage and provide clean drinking water. When the London sewerage system was completed, the death toll in London dropped dramatically, and epidemics of cholera and other diseases were curtailed. Bazalgette's system is still in use today. One of the most famous events of 19th-century London was the Great Exhibition of 1851. 
Held at The Crystal Palace, the fair attracted 6 million visitors from across the world and displayed Britain at the height of its Imperial dominance. As the capital of a massive empire, London became a magnet for immigrants from the colonies and poorer parts of Europe. A large Irish population settled in the city during the Victorian period, with many of the newcomers refugees from the Great Famine (1845–1849). At one point, Catholic Irish made up about 20% of London's population; they typically lived in overcrowded slums. London also became home to a sizable Jewish community, which was notable for its entrepreneurship in the clothing trade and merchandising. In 1888, the new County of London was established, administered by the London County Council. This was the first elected London-wide administrative body, replacing the earlier Metropolitan Board of Works, which had been made up of appointees. The County of London covered broadly what was then the full extent of the London conurbation, although the conurbation later outgrew the boundaries of the county. In 1900, the county was sub-divided into 28 metropolitan boroughs, which formed a more local tier of administration than the county council. Many famous buildings and landmarks of London were constructed during the 19th century including: Trafalgar Square Big Ben and the Houses of Parliament The Royal Albert Hall The Victoria and Albert Museum Tower Bridge 20th century 1900 to 1939 London entered the 20th century at the height of its influence as the capital of one of the largest empires in history, but the new century was to bring many challenges. London's population continued to grow rapidly in the early decades of the century, and public transport was greatly expanded. A large tram network was constructed by the London County Council, through the LCC Tramways; the first motorbus service began in the 1900s. Improvements to London's overground and underground rail network, including large scale electrification were progressively carried out. During World War I, London experienced its first bombing raids carried out by German zeppelin airships; these killed around 700 people and caused great terror, but were merely a foretaste of what was to come. The city of London would experience many more terrors as a result of both World Wars. The largest explosion in London occurred during World War I: the Silvertown explosion, when a munitions factory containing 50 tons of TNT exploded, killing 73 and injuring 400. The period between the two World Wars saw London's geographical extent growing more quickly than ever before or since. A preference for lower density suburban housing, typically semi-detached, by Londoners seeking a more "rural" lifestyle, superseded Londoners' old predilection for terraced houses. This was facilitated not only by a continuing expansion of the rail network, including trams and the Underground, but also by slowly widening car ownership. London's suburbs expanded outside the boundaries of the County of London, into the neighbouring counties of Essex, Hertfordshire, Kent, Middlesex and Surrey. Like the rest of the country, London suffered severe unemployment during the Great Depression of the 1930s. In the East End during the 1930s, politically extreme parties of both right and left flourished. The Communist Party of Great Britain and the British Union of Fascists both gained serious support. Clashes between right and left culminated in the Battle of Cable Street in 1936. 
The population of London reached an all-time peak of 8.6 million in 1939. Large numbers of Jewish immigrants fleeing from Nazi Germany settled in London during the 1930s, mostly in the East End. Labour Party politician Herbert Morrison was a dominant figure in local government in the 1920s and 1930s. He became mayor of Hackney and a member of the London County Council in 1922, and for a while was Minister of Transport in Ramsay MacDonald's cabinet. When Labour gained power in London in 1934, Morrison unified the bus, tram and trolleybus services with the Underground, by the creation of the London Passenger Transport Board (known as London Transport) in 1933., He led the effort to finance and build the new Waterloo Bridge. He designed the Metropolitan Green Belt around the suburbs and worked to clear slums, build schools, and reform public assistance. In World War II During World War II, London, as many other British cities, suffered severe damage, being bombed extensively by the Luftwaffe as a part of The Blitz. Prior to the bombing, hundreds of thousands of children in London were evacuated to the countryside to avoid the bombing. Civilians took shelter from the air raids in underground stations. The heaviest bombing took place during The Blitz between 7 September 1940 and 10 May 1941. During this period, London was subjected to 71 separate raids receiving over 18,000 tonnes of high explosive. One raid in December 1940, which became known as the Second Great Fire of London, saw a firestorm engulf much of the City of London and destroy many historic buildings. St Paul's Cathedral, however, remained unscathed; a photograph showing the Cathedral shrouded in smoke became a famous image of the war. Having failed to defeat Britain, Hitler turned his attention to the Eastern front and regular bombing raids ceased. They began again, but on a smaller scale with the "Little Blitz" in early 1944. Towards the end of the war, during 1944/45 London again came under heavy attack by pilotless V-1 flying bombs and V-2 rockets, which were fired from Nazi occupied Europe. These attacks only came to an end when their launch sites were captured by advancing Allied forces. London suffered severe damage and heavy casualties, the worst hit part being the Docklands area. By the war's end, just under 30,000 Londoners had been killed by the bombing, and over 50,000 seriously injured, tens of thousands of buildings were destroyed, and hundreds of thousands of people were made homeless. 1945–2000 Three years after the war, the 1948 Summer Olympics were held at the original Wembley Stadium, at a time when the city had barely recovered from the war. London's rebuilding was slow to begin. However, in 1951 the Festival of Britain was held, which marked an increasing mood of optimism and forward looking. In the immediate postwar years housing was a major issue in London, due to the large amount of housing which had been destroyed in the war. The authorities decided upon high-rise blocks of flats as the answer to housing shortages. During the 1950s and 1960s the skyline of London altered dramatically as tower blocks were erected, although these later proved unpopular. In a bid to reduce the number of people living in overcrowded housing, a policy was introduced of encouraging people to move into newly built new towns surrounding London. Through the 19th and in the early half of the 20th century, Londoners used coal for heating their homes, which produced large amounts of smoke. 
In combination with climatic conditions this often caused a characteristic smog, and London became known for its typical "London Fog", also known as "Pea Soupers". London was sometimes referred to as "The Smoke" because of this. In 1952, this culminated in the disastrous Great Smog of 1952 which lasted for five days and killed over 4,000 people. In response to this, the Clean Air Act 1956 was passed, mandating the creating of "smokeless zones" where the use of "smokeless" fuels was required (this was at a time when most households still used open fires); the Act was effective. Starting in the mid-1960s, and partly as a result of the success of such UK musicians as the Beatles and The Rolling Stones, London became a centre for the worldwide youth culture, exemplified by the Swinging London subculture which made Carnaby Street a household name of youth fashion around the world. London's role as a trendsetter for youth fashion continued strongly in the 1980s during the new wave and punk eras and into the mid-1990s with the emergence of the Britpop era. From the 1950s onwards London became home to a large number of immigrants, largely from Commonwealth countries such as Jamaica, India, Bangladesh, Pakistan, which dramatically changed the face of London, turning it into one of the most diverse cities in Europe. However, the integration of the new immigrants was not always easy. Racial tensions emerged in events such as the Brixton Riots in the early 1980s. From the beginning of "The Troubles" in Northern Ireland in the early 1970s until the mid-1990s, London was subjected to repeated terrorist attacks by the Provisional IRA. The outward expansion of London was slowed by the war, and the introduction of the Metropolitan Green Belt. Due to this outward expansion, in 1965 the old County of London (which by now only covered part of the London conurbation) and the London County Council were abolished, and the much larger area of Greater London was established with a new Greater London Council (GLC) to administer it, along with 32 new London boroughs. Greater London's population declined steadily in the decades after World War II, from an estimated peak of 8.6 million in 1939 to around 6.8 million in the 1980s. However, it then began to increase again in the late 1980s, encouraged by strong economic performance and an increasingly positive image. London's traditional status as a major port declined dramatically in the post-war decades as the old Docklands could not accommodate large modern container ships. The principal ports for London moved downstream to the ports of Felixstowe and Tilbury. The docklands area had become largely derelict by the 1980s, but was redeveloped into flats and offices from the mid-1980s onwards. The Thames Barrier was completed in the 1980s to protect London against tidal surges from the North Sea. In the early 1980s political disputes between the GLC run by Ken Livingstone and the Conservative government of Margaret Thatcher led to the GLC's abolition in 1986, with most of its powers relegated to the London boroughs. This left London as the only large metropolis in the world without a central administration. In 2000, London-wide government was restored, with the creation of the Greater London Authority (GLA) by Tony Blair's government, covering the same area of Greater London. The new authority had similar powers to the old GLC, but was made up of a directly elected Mayor and a London Assembly. 
The first election took place on 4 May, with Ken Livingstone comfortably regaining his previous post. London was recognised as one of the nine regions of England. In global perspective, it was emerging as a World city widely compared to New York and Tokyo. 21st century Around the start of the 21st century, London hosted the much derided Millennium Dome at Greenwich, to mark the new century. Other Millennium projects were more successful. One was the largest observation wheel in the world, the "Millennium Wheel", or the London Eye, which was erected as a temporary structure, but soon became a fixture, and draws four million visitors a year. The National Lottery also released a flood of funds for major enhancements to existing attractions, for example the roofing of the Great Court at the British Museum. The London Plan, published by the Mayor of London in 2004, estimated that the population would reach 8.1 million by 2016, and continue to rise thereafter. This was reflected in a move towards denser, more urban styles of building, including a greatly increased number of tall buildings, and proposals for major enhancements to the public transport network. However, funding for projects such as Crossrail remained a struggle. On 6 July 2005 London won the right to host the 2012 Olympics and Paralympics making it the first city to host the modern games three times. However, celebrations were cut short the following day when the city was rocked by a series of terrorist attacks. More than 50 were killed and 750 injured in three bombings on London Underground trains and a fourth on a double decker bus near King's Cross. London was the starting point for countrywide riots which occurred in August 2011, when thousands of people rioted in several city boroughs and in towns across England. In 2011, the population grew over 8 million people for the first time in decades. White British formed less than half of the population for the first time. In the public there was ambivalence leading-up to the 2012 Summer Olympics in the city, though public sentiment changed strongly in their favour following a successful opening ceremony and when the anticipated organisational and transport problems never occurred. Population Historical sites of note Alexandra Palace Battersea Power Station Buckingham Palace Croydon Airport Hyde Park Monument to the Great Fire of London Palace of Westminster Parliament Hill Royal Observatory, Greenwich St Paul's Cathedral Tower Bridge Tower of London Tyburn Vauxhall station Waterloo International station Westminster Abbey See also Ale silver Economy of London Culture of London Fortifications of London Geography of London Geology of London History of local government in London Timeline of London history Notes Further reading Ackroyd, Peter. London: A Biography (2009) (First chapter.) Ball, Michael, and David T. Sunderland. Economic history of London, 1800–1914 (Routledge, 2002) Billings, Malcolm (1994), London: A Companion to Its History and Archaeology, Bucholz, Robert O., and Joseph P. Ward. London: A Social and Cultural History, 1550–1750 (Cambridge University Press; 2012) 526 pages Clark, Greg. The Making of a World City: London 1991 to 2021 (John Wiley & Sons, 2014) Emerson, Charles. 1913: In Search of the World Before the Great War (2013) compares London to 20 major world cities on the eve of World War I; pp 15 to 36, 431–49. Inwood, Stephen. A History of London (1998) Jones, Robert Wynn. 
The Flower of All Cities: The History of London from Earliest Times to the Great Fire (Amberley Publishing, 2019). Mort, Frank, and Miles Ogborn. "Transforming Metropolitan London, 1750–1960". Journal of British Studies (2004) 43#1 pp: 1–14. Naismith, Rory, Citadel of the Saxons: The Rise of Early London (I.B.Tauris; 2018), Porter, Roy. History of London (1995), by a leading scholar Weightman, Gavin, and Stephen Humphries. The Making of Modern London, 1914–1939 (Sidgwick & Jackson, 1984) White, Jerry. London in the 20th Century: A City and Its People (2001) 544 pages; Social history of people, neighborhoods, work, culture, power. Excerpts White, Jerry. London in the 19th Century: 'A Human Awful Wonder of God''' (2008); Social history of people, neighborhoods, work, culture, power. Excerpt and text search White, Jerry. London in the Eighteenth Century: A Great and Monstrous Thing (2013) 624 pages; Excerpt and text search 480pp; Social history of people, neighborhoods, work, culture, power. Environment Allen, Michelle Elizabeth. Cleansing the city: sanitary geographies in Victorian London (2008). Brimblecombe, Peter. The Big Smoke: A History of Air Pollution in London Since Medieval Times (Methuen, 1987) Ciecieznski, N. J. "The Stench of Disease: Public Health and the Environment in Late-Medieval English towns and cities". Health, Culture and Society (2013) 4#1 pp: 91–104. Field, Jacob F. London, Londoners and the Great Fire of 1666: Disaster and Recovery (2018)
mouth of a Big Fish, an Arabic constellation. This "cloud" was apparently commonly known to the Isfahan astronomers, very probably before 905 AD. The first recorded mention of the Large Magellanic Cloud was also given by al-Sufi. In 1006, Ali ibn Ridwan observed SN 1006, the brightest supernova in recorded history, and left a detailed description of the temporary star. In the late 10th century, a huge observatory was built near Tehran, Iran, by the astronomer Abu-Mahmud al-Khujandi who observed a series of meridian transits of the Sun, which allowed him to calculate the tilt of the Earth's axis relative to the Sun. He noted that measurements by earlier (Indian, then Greek) astronomers had found higher values for this angle, possible evidence that the axial tilt is not constant but was in fact decreasing. In 11th-century Persia, Omar Khayyám compiled many tables and performed a reformation of the calendar that was more accurate than the Julian and came close to the Gregorian. Other Muslim advances in astronomy included the collection and correction of previous astronomical data, resolving significant problems in the Ptolemaic model, the development of the universal latitude-independent astrolabe by Arzachel, the invention of numerous other astronomical instruments, Ja'far Muhammad ibn Mūsā ibn Shākir's belief that the heavenly bodies and celestial spheres were subject to the same physical laws as Earth, and the introduction of empirical testing by Ibn al-Shatir, who produced the first model of lunar motion which matched physical observations. Natural philosophy (particularly Aristotelian physics) was separated from astronomy by Ibn al-Haytham (Alhazen) in the 11th century, by Ibn al-Shatir in the 14th century, and Qushji in the 15th century. Western Europe After the significant contributions of Greek scholars to the development of astronomy, it entered a relatively static era in Western Europe from the Roman era through the 12th century. This lack of progress has led some astronomers to assert that nothing happened in Western European astronomy during the Middle Ages. Recent investigations, however, have revealed a more complex picture of the study and teaching of astronomy in the period from the 4th to the 16th centuries. Western Europe entered the Middle Ages with great difficulties that affected the continent's intellectual production. The advanced astronomical treatises of classical antiquity were written in Greek, and with the decline of knowledge of that language, only simplified summaries and practical texts were available for study. The most influential writers to pass on this ancient tradition in Latin were Macrobius, Pliny, Martianus Capella, and Calcidius. In the 6th century Bishop Gregory of Tours noted that he had learned his astronomy from reading Martianus Capella, and went on to employ this rudimentary astronomy to describe a method by which monks could determine the time of prayer at night by watching the stars. In the 7th century the English monk Bede of Jarrow published an influential text, On the Reckoning of Time, providing churchmen with the practical astronomical knowledge needed to compute the proper date of Easter using a procedure called the computus. This text remained an important element of the education of clergy from the 7th century until well after the rise of the Universities in the 12th century. 
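Bede's computus, mentioned above, reduced the dating of Easter to calendrical arithmetic on the solar year and the 19-year lunar cycle. Purely as a modern illustration of that kind of calculation, the sketch below uses the so-called Anonymous Gregorian algorithm for the Gregorian calendar; it is not Bede's Julian-calendar method, only an example of the arithmetic a computus involves.

```python
def gregorian_easter(year: int) -> tuple[int, int]:
    """Date of Easter Sunday as (month, day) by the Anonymous Gregorian algorithm.

    A modern formula for the Gregorian calendar, shown only to illustrate the
    calendrical arithmetic behind a computus; Bede's own method applied to the
    Julian calendar and to tabulated 19-year lunar cycles.
    """
    a = year % 19                        # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)             # century and year-within-century
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # locates the Paschal full moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2024))  # (3, 31): Easter fell on 31 March 2024
```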
The range of surviving ancient Roman writings on astronomy and the teachings of Bede and his followers began to be studied in earnest during the revival of learning sponsored by the emperor Charlemagne. By the 9th century rudimentary techniques for calculating the position of the planets were circulating in Western Europe; medieval scholars recognized their flaws, but texts describing these techniques continued to be copied, reflecting an interest in the motions of the planets and in their astrological significance. Building on this astronomical background, in the 10th century European scholars such as Gerbert of Aurillac began to travel to Spain and Sicily to seek out learning which they had heard existed in the Arabic-speaking world. There they first encountered various practical astronomical techniques concerning the calendar and timekeeping, most notably those dealing with the astrolabe. Soon scholars such as Hermann of Reichenau were writing texts in Latin on the uses and construction of the astrolabe and others, such as Walcher of Malvern, were using the astrolabe to observe the time of eclipses in order to test the validity of computistical tables. By the 12th century, scholars were traveling to Spain and Sicily to seek out more advanced astronomical and astrological texts, which they translated into Latin from Arabic and Greek to further enrich the astronomical knowledge of Western Europe. The arrival of these new texts coincided with the rise of the universities in medieval Europe, in which they soon found a home. Reflecting the introduction of astronomy into the universities, John of Sacrobosco wrote a series of influential introductory astronomy textbooks: the Sphere, a Computus, a text on the Quadrant, and another on Calculation. In the 14th century, Nicole Oresme, later bishop of Liseux, showed that neither the scriptural texts nor the physical arguments advanced against the movement of the Earth were demonstrative and adduced the argument of simplicity for the theory that the Earth moves, and not the heavens. However, he concluded "everyone maintains, and I think myself, that the heavens do move and not the earth: For God hath established the world which shall not be moved." In the 15th century, Cardinal Nicholas of Cusa suggested in some of his scientific writings that the Earth revolved around the Sun, and that each star is itself a distant sun. Renaissance and Early Modern Europe Copernican Revolution During the renaissance period, astronomy began to undergo a revolution in thought known as the Copernican Revolution, which gets the name from the astronomer Nicolaus Copernicus, who proposed a heliocentric system, in which the planets revolved around the Sun and not the Earth. His De revolutionibus orbium coelestium was published in 1543. While in the long term this was a very controversial claim, in the very beginning it only brought minor controversy. The theory became the dominant view because many figures, most notably Galileo Galilei, Johannes Kepler and Isaac Newton championed and improved upon the work. Other figures also aided this new model despite not believing the overall theory, like Tycho Brahe, with his well-known observations. Brahe, a Danish noble, was an essential astronomer in this period. 
He came on the astronomical scene with the publication of De nova stella, in which he disproved conventional wisdom on the supernova SN 1572 (As bright as Venus at its peak, SN 1572 later became invisible to the naked eye, disproving the Aristotelian doctrine of the immutability of the heavens.) He also created the Tychonic system, where the Sun and Moon and the stars revolve around the Earth, but the other five planets revolve around the Sun. This system blended the mathematical benefits of the Copernican system with the "physical benefits" of the Ptolemaic system. This was one of the systems people believed in when they did not accept heliocentrism, but could no longer accept the Ptolemaic system. He is most known for his highly accurate observations of the stars and the solar system. Later he moved to Prague and continued his work. In Prague he was at work on the Rudolphine Tables, that were not finished until after his death. The Rudolphine Tables was a star map designed to be more accurate than either the Alfonsine tables, made in the 1300s, and the Prutenic Tables, which were inaccurate. He was assisted at this time by his assistant Johannes Kepler, who would later use his observations to finish Brahe's works and for his theories as well. After the death of Brahe, Kepler was deemed his successor and was given the job of completing Brahe's uncompleted works, like the Rudolphine Tables. He completed the Rudolphine Tables in 1624, although it was not published for several years. Like many other figures of this era, he was subject to religious and political troubles, like the Thirty Years' War, which led to chaos that almost destroyed some of his works. Kepler was, however, the first to attempt to derive mathematical predictions of celestial motions from assumed physical causes. He discovered the three Kepler's laws of planetary motion that now carry his name, those laws being as follows: The orbit of a planet is an ellipse with the Sun at one of the two foci. A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time. The square of the orbital period of a planet is proportional to the cube of the semi-major axis of its orbit. With these laws, he managed to improve upon the existing heliocentric model. The first two were published in 1609. Kepler's contributions improved upon the overall system, giving it more credibility because it adequately explained events and could cause more reliable predictions. Before this, the Copernican model was just as unreliable as the Ptolemaic model. This improvement came because Kepler realized the orbits were not perfect circles, but ellipses.Galileo Galilei was among the first to use a telescope to observe the sky, and after constructing a 20x refractor telescope. He discovered the four largest moons of Jupiter in 1610, which are now collectively known as the Galilean moons, in his honor. This discovery was the first known observation of satellites orbiting another planet. He also found that our Moon had craters and observed, and correctly explained, sunspots, and that Venus exhibited a full set of phases resembling lunar phases. Galileo argued that these facts demonstrated incompatibility with the Ptolemaic model, which could not explain the phenomenon and would even contradict it. With the moons it demonstrated that the Earth does not have to have everything orbiting it and that other parts of the Solar System could orbit another object, such as the Earth orbiting the Sun. 
In the Ptolemaic system the celestial bodies were supposed to be perfect so such objects should not have craters or sunspots. The phases of Venus could only happen in the event that Venus' orbit is insides Earth's orbit, which could not happen if the Earth was the center. He, as the most famous example, had to face challenges from church officials, more specifically the Roman Inquisition. They accused him of heresy because these beliefs went against the teachings of the Roman Catholic Church and were challenging the Catholic church's authority when it was at its weakest. While he was able to avoid punishment for a little while he was eventually tried and pled guilty to heresy in 1633. Although this came at some expense, his book was banned, and he was put under house arrest until he died in 1642.Sir Isaac Newton developed further ties between physics and astronomy through his law of universal gravitation. Realizing that the same force that attracts objects to the surface of the Earth held the Moon in orbit around the Earth, Newton was able to explain – in one theoretical framework – all known gravitational phenomena. In his Philosophiæ Naturalis Principia Mathematica, he derived Kepler's laws from first principles. Those first principles are as follows: In an inertial frame of reference, an object either remains at rest or continues to move at constant velocity, unless acted upon by a force. In an inertial reference frame, the vector sum of the forces F on an object is equal to the mass m of that object multiplied by the acceleration a of the object: F = ma. (It is assumed here that the mass m is constant) When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body. Thus while Kepler explained how the planets moved, Newton accurately managed to explain why the planets moved the way they do. Newton's theoretical developments laid many of the foundations of modern physics. Completing the Solar System Outside of England, Newton's theory took some time to become established. Descartes' theory of vortices held sway in France, and Huygens, Leibniz and Cassini accepted only parts of Newton's system, preferring their own philosophies. Voltaire published a popular account in 1738. In 1748, the French Academy of Sciences offered a reward for solving the perturbations of Jupiter and Saturn which was eventually solved by Euler and Lagrange. Laplace completed the theory of the planets, publishing from 1798 to 1825. The early origins of the solar nebular model of planetary formation had begun. Edmund Halley succeeded Flamsteed as Astronomer Royal in England and succeeded in predicting the return in 1758 of the comet that bears his name. Sir William Herschel found the first new planet, Uranus, to be observed in modern times in 1781. The gap between the planets Mars and Jupiter disclosed by the Titius–Bode law was filled by the discovery of the asteroids Ceres and 2 Pallas Pallas in 1801 and 1802 with many more following. At first, astronomical thought in America was based on Aristotelian philosophy, but interest in the new astronomy began to appear in Almanacs as early as 1659. Stellar astronomy Cosmic pluralism is the name given to the idea that the stars are distant suns, perhaps with their own planetary systems. Ideas in this direction were expressed in antiquity, by Anaxagoras and by Aristarchus of Samos, but did not find mainstream acceptance. 
The first astronomer of the European Renaissance to suggest that the stars were distant suns was Giordano Bruno.
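The link described above between Newton's law of universal gravitation and Kepler's third law can be made concrete. For the simplified case of a small body of mass m in a circular orbit of radius a about a central mass M (a sketch of the standard textbook argument rather than the general elliptical treatment of the Principia), equating the gravitational and centripetal forces gives

\[ \frac{GMm}{a^{2}} = \frac{m v^{2}}{a}, \qquad v = \frac{2\pi a}{T} \quad\Longrightarrow\quad T^{2} = \frac{4\pi^{2}}{GM}\,a^{3}. \]

Thus T^2 is proportional to a^3 with a single constant for every body orbiting the same central mass, which is Kepler's third law. With a measured in astronomical units and T in years that constant is very nearly 1, so T is approximately a^(3/2); the short check below uses rounded textbook orbital elements purely for illustration.

```python
# Quick numerical check of Kepler's third law: T^2 / a^3 is (nearly) the
# same for every planet. Semi-major axes are in astronomical units and
# periods in years; the values are rounded textbook figures.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

for name, (a_au, t_years) in planets.items():
    predicted_t = a_au ** 1.5            # T = a^(3/2) in these units
    ratio = t_years ** 2 / a_au ** 3     # comes out close to 1 throughout
    print(f"{name:8s} observed T = {t_years:6.3f} yr, "
          f"predicted T = {predicted_t:6.3f} yr, T^2/a^3 = {ratio:.3f}")
```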
was introduced into East Asia. Astronomy in China has a long history. Detailed records of astronomical observations were kept from about the 6th century BC, until the introduction of Western astronomy and the telescope in the 17th century. Chinese astronomers were able to precisely predict eclipses. Much of early Chinese astronomy was for the purpose of timekeeping. The Chinese used a lunisolar calendar, but because the cycles of the Sun and the Moon are different, astronomers often prepared new calendars and made observations for that purpose. Astrological divination was also an important part of astronomy. Astronomers took careful note of "guest stars"(Chinese: 客星; pinyin: kèxīng; lit.: 'guest star') which suddenly appeared among the fixed stars. They were the first to record a supernova, in the Astrological Annals of the Houhanshu in 185 AD. Also, the supernova that created the Crab Nebula in 1054 is an example of a "guest star" observed by Chinese astronomers, although it was not recorded by their European contemporaries. Ancient astronomical records of phenomena like supernovae and comets are sometimes used in modern astronomical studies. The world's first star catalogue was made by Gan De, a Chinese astronomer, in the 4th century BC. Mesoamerica Maya astronomical codices include detailed tables for calculating phases of the Moon, the recurrence of eclipses, and the appearance and disappearance of Venus as morning and evening star. The Maya based their calendrics in the carefully calculated cycles of the Pleiades, the Sun, the Moon, Venus, Jupiter, Saturn, Mars, and also they had a precise description of the eclipses as depicted in the Dresden Codex, as well as the ecliptic or zodiac, and the Milky Way was crucial in their Cosmology. A number of important Maya structures are believed to have been oriented toward the extreme risings and settings of Venus. To the ancient Maya, Venus was the patron of war and many recorded battles are believed to have been timed to the motions of this planet. Mars is also mentioned in preserved astronomical codices and early mythology. Although the Maya calendar was not tied to the Sun, John Teeple has proposed that the Maya calculated the solar year to somewhat greater accuracy than the Gregorian calendar. Both astronomy and an intricate numerological scheme for the measurement of time were vitally important components of Maya religion. Middle Ages Middle East The Arabic and the Persian world under Islam had become highly cultured, and many important works of knowledge from Greek astronomy and Indian astronomy and Persian astronomy were translated into Arabic, used and stored in libraries throughout the area. An important contribution by Islamic astronomers was their emphasis on observational astronomy. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century. Zij star catalogues were produced at these observatories. In the 10th century, Abd al-Rahman al-Sufi (Azophi) carried out observations on the stars and described their positions, magnitudes, brightness, and colour and drawings for each constellation in his Book of Fixed Stars. He also gave the first descriptions and pictures of "A Little Cloud" now known as the Andromeda Galaxy. He mentions it as lying before the mouth of a Big Fish, an Arabic constellation. This "cloud" was apparently commonly known to the Isfahan astronomers, very probably before 905 AD. The first recorded mention of the Large Magellanic Cloud was also given by al-Sufi. 
the Haber–Bosch process, is an artificial nitrogen fixation process and is the main industrial procedure for the production of ammonia today. It is named after its inventors, the German chemists Fritz Haber and Carl Bosch, who developed it in the first decade of the 20th century. The process converts atmospheric nitrogen (N2) to ammonia (NH3) by a reaction with hydrogen (H2) using a metal catalyst under high temperatures and pressures: Before the development of the Haber process, ammonia had been difficult to produce on an industrial scale, with early methods such as the Birkeland–Eyde process and Frank–Caro process all being highly inefficient. Although the Haber process is mainly used to produce fertilizer today, during World War I it provided Germany with a source of ammonia for the production of explosives, compensating for the Allied Powers' trade blockade on Chilean saltpeter. History Throughout the 19th century the demand for nitrates and ammonia for use as fertilizers and industrial feedstocks had been steadily increasing. The main source was mining niter deposits and guano from tropical islands. At the beginning of the 20th century it was being predicted that these reserves could not satisfy future demands, and research into new potential sources of ammonia became more important. Although atmospheric nitrogen (N2) is abundant, comprising nearly 80% of the air, it is exceptionally stable and does not readily react with other chemicals. Converting N2 into ammonia posed a challenge for chemists globally. Haber, with his assistant Robert Le Rossignol, developed the high-pressure devices and catalysts needed to demonstrate the Haber process at laboratory scale. They demonstrated their process in the summer of 1909 by producing ammonia from air, drop by drop, at the rate of about per hour. The process was purchased by the German chemical company BASF, which assigned Carl Bosch the task of scaling up Haber's tabletop machine to industrial-level production. He succeeded in 1910. Haber and Bosch were later awarded Nobel prizes, in 1918 and 1931 respectively, for their work in overcoming the chemical and engineering problems of large-scale, continuous-flow, high-pressure technology. Ammonia was first manufactured using the Haber process on an industrial scale in 1913 in BASF's Oppau plant in Germany, reaching 20 tonnes per day the following year. During World War I, the production of munitions required large amounts of nitrate. The Allies had access to large deposits of sodium nitrate in Chile (Chile saltpetre) controlled by British companies. Germany had no such resources, so the Haber process proved essential to the German war effort. Synthetic ammonia from the Haber process was used for the production of nitric acid, a precursor to the nitrates used in explosives. Today, the most popular catalysts are based on iron promoted with K2O, CaO, SiO2, and Al2O3. Earlier, molybdenum was also used as a promoter. The original Haber–Bosch reaction chambers used osmium as the catalyst, but it was available in extremely small quantities. Haber noted uranium was almost as effective and easier to obtain than osmium. Under Bosch's direction in 1909, the BASF researcher Alwin Mittasch discovered a much less expensive iron-based catalyst, which is still used today. A major contributor to the elucidation of this catalysis was Gerhard Ertl. 
During the interwar years, alternative processes were developed, the most notably different being the Casale process, Claude process and the Mont-Cenis process by Friedrich Uhde Ingenieurbüro, founded in 1921. Luigi Casale and Georges Claude proposed to increase the pressure of the synthesis loop to , thereby increasing the single-pass ammonia conversion and making nearly complete liquefaction at ambient temperature feasible. Georges Claude even proposed to have three or four converters with liquefaction steps in series, thereby omitting the need for a recycle. Nowadays, most plants resemble the original Haber process ( and ), albeit with improved single-pass conversion and lower energy consumption due to process and catalyst optimization. Process This conversion is typically conducted at pressures above 10 MPa (100 bar; 1,450 psi) and between , as the gases (nitrogen and hydrogen) are passed over four beds of catalyst, with cooling between each pass for maintaining a reasonable equilibrium constant. On each pass only about 15% conversion occurs, but any unreacted gases are recycled, and eventually an overall conversion of 97% is achieved. The steam reforming, shift conversion, carbon dioxide removal, and methanation steps each operate at pressures of about , and the ammonia synthesis loop operates at pressures ranging from , depending upon which proprietary process is used. Sources of hydrogen The major source of hydrogen is methane from natural gas. The conversion, steam reforming, is conducted with steam in a high-temperature and pressure tube inside a reformer with a nickel catalyst, separating the carbon and hydrogen atoms in the natural gas, yielding hydrogen gas and carbon dioxide waste. Other fossil fuel sources include coal, heavy fuel oil and naphtha. Green hydrogen is produced without fossil fuels or carbon dioxide waste from biomass, electrolysis of water and the thermochemical (solar or other heat source) splitting of water. Reaction rate and equilibrium Nitrogen gas (N2) is very unreactive because the atoms are held together by strong triple bonds. The Haber process relies on catalysts that accelerate the scission of this triple bond. Two opposing considerations are relevant to this synthesis: the position of the equilibrium and the rate of reaction. At room temperature, the equilibrium is strongly in favor of ammonia, but the reaction doesn't proceed at a detectable rate due to its high activation energy. Because the reaction is exothermic, the equilibrium constant becomes unity at around (see Le Châtelier's principle). Above this temperature, the equilibrium quickly becomes quite unfavorable for the reaction product at atmospheric pressure, according to the van 't Hoff equation. Lowering the temperature is also unhelpful because the catalyst requires a temperature of at least 400 °C to be efficient. Increased pressure does favor the forward reaction because there are 4 moles of reactant for every 2 moles of product, and the pressure used () alters the equilibrium concentrations to give a substantial ammonia yield. The reason for this is evident in the equilibrium relationship, which is where is the fugacity coefficient of species , is the mole fraction of the same species, is the pressure in the reactor, and is standard pressure, typically . Economically, pressurization of the reactor is expensive: pipes, valves, and reaction vessels need to be strengthened, and there are safety considerations when working at 20 MPa. 
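Written out explicitly, the reaction and the equilibrium relationship referred to above take the following standard form; since the displayed formula itself is not reproduced in the text, this is a reconstruction consistent with the variable definitions the passage gives (fugacity coefficient, mole fraction, reactor pressure, standard pressure):

\[ \mathrm{N_2} + 3\,\mathrm{H_2} \rightleftharpoons 2\,\mathrm{NH_3}, \qquad \Delta H^{\circ} \approx -92\ \mathrm{kJ\ per\ mole\ of\ N_2\ converted}, \]

\[ K = \frac{\hat{\phi}_{\mathrm{NH_3}}^{2}\, y_{\mathrm{NH_3}}^{2}}{\hat{\phi}_{\mathrm{H_2}}^{3}\, y_{\mathrm{H_2}}^{3}\, \hat{\phi}_{\mathrm{N_2}}\, y_{\mathrm{N_2}}} \left(\frac{P}{P^{\circ}}\right)^{-2}, \]

where \(\hat{\phi}_i\) is the fugacity coefficient of species i, \(y_i\) its mole fraction, P the total pressure in the reactor and P° the standard pressure. Because four moles of gas react to give two, the factor (P/P°)^{-2} means that, for a fixed K, raising the pressure raises the equilibrium mole fraction of ammonia, which is the Le Châtelier argument made in the text.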
In addition, running compressors takes considerable energy, as work must be done on the (very compressible) gas. Thus, the compromise used gives a single-pass yield of around 15%. While removing the product (i.e., ammonia gas) from the system would increase the reaction yield, this step is not used in practice, since the temperature is too high; instead, the ammonia is removed from the equilibrium mixture of gases leaving the reaction vessel. The hot gases are cooled enough, whilst maintaining a high pressure, for the ammonia to condense and be removed as liquid. Unreacted hydrogen and nitrogen gases are then returned to the reaction vessel to undergo further reaction. While most ammonia is removed (typically down to 2–5 mol.%), some ammonia remains in the recycle stream to the converter. In academic literature, more complete separation of ammonia has been proposed by absorption in metal halides and by adsorption on zeolites. Such a process is called an absorbent-enhanced Haber process or an adsorbent-enhanced Haber–Bosch process. Catalysts The Haber–Bosch process relies on catalysts to accelerate the hydrogenation of N2. The catalysts are "heterogeneous", meaning that they are solids that act on gaseous reagents. The catalyst typically consists of finely divided iron bound to an iron oxide carrier containing promoters possibly including aluminium oxide, potassium oxide, calcium oxide, potassium hydroxide, molybdenum, and magnesium oxide. Production of iron-based catalysts In industrial practice, the iron catalyst is obtained from finely ground iron powder, which is usually obtained by reduction of high-purity magnetite (Fe3O4). The pulverized iron is burnt (oxidized) to give magnetite or wüstite (FeO, ferrous oxide) particles of a specific size. The magnetite (or wüstite) particles are then partially reduced, removing some of the oxygen in the process. The resulting catalyst particles consist of a core of magnetite, encased in a shell of wüstite, which in turn is surrounded by an outer shell of metallic iron. The catalyst maintains most of its bulk volume during the reduction, resulting in a highly porous high-surface-area material, which enhances its effectiveness as a catalyst. Other minor components of the catalyst include calcium and aluminium oxides, which support the iron catalyst and help it maintain its surface area. These oxides of Ca, Al, K, and Si are unreactive to reduction by the hydrogen. The production of the required magnetite catalyst requires a particular melting process in which the used raw materials must be
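As a back-of-the-envelope illustration of the condensation-and-recycle loop described above, the toy mass balance below shows how a modest per-pass conversion can still deliver a high overall conversion. The 15% per-pass conversion and the purge fraction are assumed round numbers chosen for illustration, not plant data; a real loop runs continuously and purges mainly to remove inerts such as argon and methane.

```python
# Toy mass balance for an ammonia synthesis recycle loop.
# Assumptions (illustrative only): 15% of the reactants entering the
# converter react on each pass, the ammonia formed is condensed out
# completely, and a small fixed fraction of the recycle gas is purged.
single_pass_conversion = 0.15   # assumed per-pass conversion
purge_fraction = 0.005          # assumed fraction of recycle gas vented each pass

remaining = 1.0   # unreacted feed still circulating (fraction of original feed)
converted = 0.0   # cumulative fraction of the feed turned into ammonia
purged = 0.0      # cumulative fraction lost with the purge

for _ in range(100):            # after ~100 passes essentially nothing is left circulating
    reacted = remaining * single_pass_conversion
    converted += reacted
    remaining -= reacted
    vent = remaining * purge_fraction
    purged += vent
    remaining -= vent

print(f"converted {converted:.1%}, purged {purged:.1%}, circulating {remaining:.1%}")
# With these assumed numbers the overall conversion comes out near 97%,
# the figure quoted in the text, even though each pass converts only 15%.
```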
being predicted that these reserves could not satisfy future demands, and research into new potential sources of ammonia became more important. Although atmospheric nitrogen (N2) is abundant, comprising nearly 80% of the air, it is exceptionally stable and does not readily react with other chemicals. Converting N2 into ammonia posed a challenge for chemists globally. Haber, with his assistant Robert Le Rossignol, developed the high-pressure devices and catalysts needed to demonstrate the Haber process at laboratory scale. They demonstrated their process in the summer of 1909 by producing ammonia from air, drop by drop, at the rate of about per hour. The process was purchased by the German chemical company BASF, which assigned Carl Bosch the task of scaling up Haber's tabletop machine to industrial-level production. He succeeded in 1910. Haber and Bosch were later awarded Nobel prizes, in 1918 and 1931 respectively, for their work in overcoming the chemical and engineering problems of large-scale, continuous-flow, high-pressure technology. Ammonia was first manufactured using the Haber process on an industrial scale in 1913 in BASF's Oppau plant in Germany, reaching 20 tonnes per day the following year. During World War I, the production of munitions required large amounts of nitrate. The Allies had access to large deposits of sodium nitrate in Chile (Chile saltpetre) controlled by British companies. Germany had no such resources, so the Haber process proved essential to the German war effort. Synthetic ammonia from the Haber process was used for the production of nitric acid, a precursor to the nitrates used in explosives. Today, the most popular catalysts are based on iron promoted with K2O, CaO, SiO2, and Al2O3. Earlier, molybdenum was also used as a promoter. The original Haber–Bosch reaction chambers used osmium as the catalyst, but it was available in extremely small quantities. Haber noted uranium was almost as effective and easier to obtain than osmium. Under Bosch's direction in 1909, the BASF researcher Alwin Mittasch discovered a much less expensive iron-based catalyst, which is still used today. A major contributor to the elucidation of this catalysis was Gerhard Ertl.
Not type site called FaceMash, where he posted photos from Harvard's Facebook for the university's community to rate. Hot or Not was sold for a rumored $20 million on February 8, 2008, to Avid Life Media, owners of Ashley Madison. Annual revenue reached $7.5 million, with net profits of $5.5 million. They initially started off $60,000 in debt due to tuition fees James paid for his MBA. On July 31, 2008, Hot or Not launched Hot or Not Gossip and a Baresi rate box (a "hot meter") – a subdivision to expand their market, run by former radio DJ turned celebrity blogger Zack Taylor. In 2012, Hot or Not was purchased by Badoo, which is owned by Bumble Inc. The app is currently rebranded as Chat & Date, which uses a similar user interface to Badoo and shares user accounts between both sites. Predecessors and spin-offs Hot or Not was preceded by rating sites such as RateMyFace, which was registered a year earlier in the summer of 1999, and AmIHot.com, which was registered in January 2000 by MIT freshman Daniel Roy. Despite its predecessors' head starts, Hot or Not quickly became the most popular. Since AmIHotOrNot.com's launch, the concept has spawned many imitators. The concept always remained the same, but the subject matter varied greatly. The concept has also been integrated with a wide variety of dating and matchmaking systems. In 2007 BecauseImHot.com launched and deleted anyone with a rating below 7 after a voting audit or the first 50 votes (whichever came first). Research In 2005, as an example of using image morphing methods to study the effects of averageness, imaging researcher Pierre Tourigny created a composite of about 30 faces to find out the current standard of good looks on the Internet. On the Hot or Not web site, people rate others' attractiveness on a scale of 1 to 10. An average score based on hundreds or even thousands of individual ratings
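As a rough illustration of how a composite "average face" can be built, the sketch below simply averages pre-aligned grayscale portraits pixel by pixel; real image-morphing studies such as the one described above also warp facial landmarks before averaging, and the file names used here are hypothetical.

```python
# Simplified illustration (not the researchers' actual method): a composite
# "average face" by pixel-wise averaging of pre-aligned grayscale portraits.
# Proper morphing also aligns facial landmarks; this shows only the averaging step.
import numpy as np
from PIL import Image

def composite(paths, size=(256, 256)):
    """Average a list of similarly framed face images into one composite image."""
    stack = [np.asarray(Image.open(p).convert("L").resize(size), dtype=np.float64)
             for p in paths]
    mean = np.mean(stack, axis=0)
    return Image.fromarray(mean.astype(np.uint8))

# Hypothetical usage:
# composite(["face01.png", "face02.png", "face03.png"]).save("average_face.png")
```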
H.263 specified the following annexes:
Annex A – Inverse transform accuracy specification
Annex B – Hypothetical Reference Decoder
Annex C – Considerations for Multipoint
Annex D – Unrestricted Motion Vector mode
Annex E – Syntax-based Arithmetic Coding mode
Annex F – Advanced Prediction mode
Annex G – PB-frames mode
Annex H – Forward Error Correction for coded video signal
The first version of H.263 supported a limited set of picture sizes:
128x96 (a.k.a. Sub-QCIF)
176x144 (a.k.a. QCIF)
352x288 (a.k.a. CIF)
704x576 (a.k.a. 4CIF)
1408x1152 (a.k.a. 16CIF)
In March 1997, an informative Appendix I describing Error Tracking, an encoding technique for providing improved robustness to data losses and errors, was approved to provide information for the aid of implementers having an interest in such techniques. H.263v2 (H.263+) H.263v2 (also known as H.263+, or as the 1998 version of H.263) is the informal name of the second edition of the ITU-T H.263 international video coding standard. It retained the entire technical content of the original version of the standard, but enhanced H.263 capabilities by adding several annexes which can substantially improve encoding efficiency and provide other capabilities (such as enhanced robustness against data loss in the transmission channel). The H.263+ project was ratified by the ITU in February 1998. It added the following annexes:
Annex I – Advanced INTRA Coding mode
Annex J – Deblocking Filter mode
Annex K – Slice Structured mode
Annex L – Supplemental Enhancement Information Specification
Annex M – Improved PB-frames mode
Annex N – Reference Picture Selection mode
Annex O – Temporal, SNR, and Spatial Scalability mode
Annex P – Reference picture resampling
Annex Q – Reduced-Resolution Update mode (see implementors' guide correction as noted below)
Annex R – Independent Segment Decoding mode
Annex S – Alternative INTER VLC mode
Annex T – Modified Quantization mode
H.263v2 also added support for flexible customized picture formats and custom picture clock frequencies. As noted above, the only picture formats previously supported in H.263 had been Sub-QCIF, QCIF, CIF, 4CIF, and 16CIF, and the only picture clock frequency had been 30000/1001 (approximately 29.97) clock ticks per second. H.263v2 specified a set of recommended modes in an informative appendix (Appendix II, since deprecated). H.263v3 (H.263++) and Annex X The definition of H.263v3
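For reference, the fixed picture formats listed above can be tabulated programmatically; the sketch below also derives a 16x16-macroblock count per picture and the raw 4:2:0 data rate at the original 30000/1001 picture clock. This is illustrative arithmetic, not text taken from the standard.

```python
# Quick sketch: the fixed H.263 picture formats listed above, with the count of
# 16x16 macroblocks per picture and the uncompressed 4:2:0 data rate at the
# original picture clock of 30000/1001 pictures per second.

FORMATS = {               # name: (width, height) in luma samples
    "Sub-QCIF": (128, 96),
    "QCIF":     (176, 144),
    "CIF":      (352, 288),
    "4CIF":     (704, 576),
    "16CIF":    (1408, 1152),
}

FPS = 30000 / 1001        # ~29.97 pictures per second

for name, (w, h) in FORMATS.items():
    macroblocks = (w // 16) * (h // 16)
    raw_bits = w * h * 1.5 * 8 * FPS          # 4:2:0 sampling -> 1.5 bytes/pixel
    print(f"{name:>8}: {w}x{h}, {macroblocks:4d} macroblocks, "
          f"~{raw_bits / 1e6:7.1f} Mbit/s uncompressed")
```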
a project ending in 1995/1996. It is a member of the H.26x family of video coding standards in the domain of the ITU-T. Like previous H.26x standards, H.263 is based on discrete cosine transform (DCT) video compression. H.263 was later extended to add various additional enhanced features in 1998 and 2000. Smaller additions were also made in 1997 and 2001, and a unified edition was produced in 2005. History and background The H.263 standard was first designed to be utilized in H.324 based systems (PSTN and other circuit-switched network videoconferencing and videotelephony), but it also found use in H.323 (RTP/IP-based videoconferencing), H.320 (ISDN-based videoconferencing, where it was the most widely used video compression standard), RTSP (streaming media) and SIP (IP-based videoconferencing) solutions. H.263 is a required video coding format in ETSI 3GPP technical specifications for IP Multimedia Subsystem (IMS), Multimedia Messaging Service (MMS) and Transparent end-to-end Packet-switched Streaming Service (PSS). In 3GPP specifications, H.263 video is usually used in 3GP container format. H.263 also found many applications on the internet: much Flash Video content (as used on sites such as YouTube, Google Video, and MySpace) used to be encoded in Sorenson Spark format (an incomplete implementation of H.263). The original version of the RealVideo codec was based on H.263 until the release of RealVideo 8. H.263 was developed as an evolutionary improvement based on experience from H.261 and H.262 (aka MPEG-2 Video), the previous ITU-T standards for video compression, and the MPEG-1 standard developed in ISO/IEC. Its first version was completed in 1995 and provided a suitable replacement for H.261 at all bit rates. It was further enhanced in projects known as H.263v2 (also known as H.263+ or H.263 1998) and H.263v3 (also known as H.263++ or H.263 2000). It was also used as the basis for the development of MPEG-4 Part 2. MPEG-4 Part 2 is H.263 compatible in the sense that basic "baseline" H.263 bitstreams are correctly decoded by an MPEG-4 Video decoder. The next enhanced format developed by ITU-T VCEG (in partnership with MPEG) after H.263 was the H.264 standard, also known as AVC and MPEG-4 part 10. As H.264 provides a significant improvement in capability beyond H.263, the H.263 standard is now considered a legacy design. Most new videoconferencing products now include H.264 as well as H.263 and H.261 capabilities. An even-newer standard format, HEVC, has also been developed by VCEG and MPEG, and has begun to emerge in some applications. Versions Since the original ratification of H.263 in March 1996 (approving a document that was produced in November 1995), there have been two subsequent additions which improved on the original standard by additional optional extensions (for example, the H.263v2 project added a deblocking filter in its Annex J).
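As background to the DCT coding mentioned above, the following toy sketch transforms one 8x8 block with a 2-D DCT, discards the small coefficients, and reconstructs the block. Quantisation, prediction, and entropy coding, which the real codec performs, are omitted, and the threshold value is arbitrary.

```python
# Toy illustration of the DCT step that block-based codecs in the H.26x family
# build on: forward 2-D DCT of an 8x8 block, crude coefficient thresholding,
# and inverse DCT reconstruction. Not a description of H.263's actual coding tools.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(np.float64)   # stand-in luma block

coeffs = dctn(block, type=2, norm="ortho")         # forward 2-D DCT
kept = np.where(np.abs(coeffs) > 20, coeffs, 0.0)  # drop small coefficients (arbitrary cut)
approx = idctn(kept, type=2, norm="ortho")         # inverse DCT reconstruction

print("coefficients kept:", int(np.count_nonzero(kept)), "of 64")
print("max reconstruction error:", float(np.max(np.abs(block - approx))))
```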
House of Nassau active in European politics. House of Orange may also refer to:
The House of Orange (song), by Stan Rogers
House of Orange-Chalon, a medieval Frankish dynasty of Burgundy
House of Orange-Nassau, a branch of the European House of Nassau
See also
Order of the House of Orange
Principality
Angstroms in diameter (a solenoid (DNA)-like particle). Around 146 base pairs (bp) of DNA wrap around this core particle 1.65 times in a left-handed super-helical turn to give a particle of around 100 Angstroms across. The linker histone H1 binds the nucleosome at the entry and exit sites of the DNA, thus locking the DNA into place and allowing the formation of higher order structure. The most basic such formation is the 10 nm fiber or beads on a string conformation. This involves the wrapping of DNA around nucleosomes with approximately 50 base pairs of DNA separating each pair of nucleosomes (also referred to as linker DNA). Higher-order structures include the 30 nm fiber (forming an irregular zigzag) and 100 nm fiber, these being the structures found in normal cells. During mitosis and meiosis, the condensed chromosomes are assembled through interactions between nucleosomes and other regulatory proteins. Histones are subdivided into canonical replication-dependent histones that are expressed during the S-phase of the cell cycle and replication-independent histone variants, expressed during the whole cell cycle. In animals, genes encoding canonical histones are typically clustered along the chromosome, lack introns and use a stem loop structure at the 3' end instead of a polyA tail. Genes encoding histone variants are usually not clustered, have introns and their mRNAs are regulated with polyA tails. Complex multicellular organisms typically have a higher number of histone variants providing a variety of different functions. Recent data are accumulating about the roles of diverse histone variants, highlighting the functional links between variants and the delicate regulation of organism development. Histone variants from different organisms, their classification and variant-specific features can be found in the "HistoneDB 2.0 - Variants" database. The following is a list of human histone proteins: Structure The nucleosome core is formed of two H2A-H2B dimers and an H3-H4 tetramer, forming two nearly symmetrical halves by tertiary structure (C2 symmetry; one macromolecule is the mirror image of the other). The H2A-H2B dimers and H3-H4 tetramer also show pseudodyad symmetry. The four 'core' histones (H2A, H2B, H3 and H4) are relatively similar in structure and are highly conserved through evolution, all featuring a 'helix turn helix turn helix' motif (a DNA-binding protein motif that recognizes specific DNA sequences). They also share the feature of long 'tails' on one end of the amino acid structure - this being the location of post-translational modification (see below). Archaeal histones contain only an H3-H4-like dimeric structure made out of the same protein. Such dimeric structures can stack into a tall superhelix ("hypernucleosome") onto which DNA coils in a manner similar to nucleosome spools. Only some archaeal histones have tails. The distance between the spools around which eukaryotic cells wind their DNA has been determined to range from 59 to 70 Å.
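The figures quoted above (about 146 bp wrapped per nucleosome plus roughly 50 bp of linker DNA, with eight core histones and one linker histone per nucleosome) allow a back-of-the-envelope estimate of how many nucleosomes and histone proteins package a genome. The genome size used in the sketch below is an approximate assumption, not a number taken from the text.

```python
# Back-of-the-envelope sketch using the figures quoted above: ~146 bp wrapped per
# nucleosome plus ~50 bp linker DNA, eight core histones (two H2A-H2B dimers and
# one H3-H4 tetramer) plus one linker histone H1 per nucleosome.

WRAPPED_BP = 146
LINKER_BP = 50
REPEAT_BP = WRAPPED_BP + LINKER_BP          # DNA per nucleosome repeat

CORE_HISTONES = 8                           # 2 x (H2A-H2B) dimers + (H3-H4)2 tetramer
LINKER_HISTONES = 1                         # H1

genome_bp = 3.2e9                           # assumed approximate haploid human genome size
nucleosomes = genome_bp / REPEAT_BP
histones = nucleosomes * (CORE_HISTONES + LINKER_HISTONES)

print(f"~{nucleosomes / 1e6:.0f} million nucleosomes")
print(f"~{histones / 1e6:.0f} million histone proteins")
```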
In all, histones make five types of interactions with DNA:
Salt bridges and hydrogen bonds between side chains of basic amino acids (especially lysine and arginine) and phosphate oxygens on DNA
Helix dipoles from alpha-helixes in H2B, H3, and H4 that cause a net positive charge to accumulate at the point of interaction with negatively charged phosphate groups on DNA
Hydrogen bonds between the DNA backbone and the amide group on the main chain of histone proteins
Nonpolar interactions between the histone and deoxyribose sugars on DNA
Non-specific minor groove insertions of the H3 and H2B N-terminal tails into two minor grooves each on the DNA molecule
The highly basic nature of histones, aside from facilitating DNA-histone interactions, contributes to their water solubility. Histones are subject to post-translational modification by enzymes primarily on their N-terminal tails, but also in their globular domains. Such modifications include methylation, citrullination, acetylation, phosphorylation, SUMOylation, ubiquitination, and ADP-ribosylation. This affects their function of gene regulation. In general, genes that are active have less bound histone, while inactive genes are highly associated with histones during interphase. It also appears that the structure of histones has been evolutionarily conserved, as any deleterious mutations would be severely maladaptive. All histones have a highly positively charged N-terminus with many lysine and arginine residues. Evolution and species distribution Core histones are found in the nuclei of eukaryotic cells and in most Archaeal phyla, but not in bacteria. However, the linker histones have homologs in bacteria. The unicellular algae known as dinoflagellates were previously thought to be the only eukaryotes that completely lack histones; however, later studies showed that their DNA still encodes histone genes. Unlike the core histones, lysine-rich linker histone (H1) proteins are found in bacteria, where they are known as nucleoprotein HC1/HC2. It has been proposed that histone proteins are evolutionarily related to the helical part of the extended AAA+ ATPase domain, the C-domain, and to the N-terminal substrate recognition domain of Clp/Hsp100 proteins. Despite the differences in their topology, these three folds share a homologous helix-strand-helix (HSH) motif. Archaeal histones may well resemble the evolutionary precursors to eukaryotic histones. Furthermore, the nucleosome (core) histones may have evolved from ribosomal proteins (RPS6/RPS15) with which they share much in common, both being short and basic proteins. Histone proteins are among the most highly conserved proteins in eukaryotes, emphasizing their important role in the biology of the nucleus. In contrast, mature sperm cells largely use protamines to package their genomic DNA, most likely because this allows them to achieve an even higher packaging ratio. There are some variant forms in some of the major classes. They share amino acid sequence homology and core structural similarity to a specific class of major histones but also have their own features that are distinct from the major histones. These minor histones usually carry out specific functions of the chromatin metabolism. For example, histone H3-like CENPA is associated with only the centromere region of the chromosome. Histone H2A variant H2A.Z is associated with the promoters of actively transcribed genes and also involved in the prevention of the spread of silent heterochromatin.
Furthermore, H2A.Z has roles in chromatin for genome stability. Another H2A variant, H2A.X, is phosphorylated at S139 in regions around double-strand breaks and marks the region undergoing DNA repair. Histone H3.3 is associated with the body of actively transcribed genes. Function Compacting DNA strands Histones act as spools around which DNA winds. This enables the compaction necessary to fit the large genomes of eukaryotes inside cell nuclei: the compacted molecule is 40,000 times shorter than an unpacked molecule. Chromatin regulation Histones undergo posttranslational modifications that alter their interaction with DNA and nuclear proteins. The H3 and H4 histones have long tails protruding from the nucleosome, which can be covalently modified at several places. Modifications of the tail include methylation, acetylation, phosphorylation, ubiquitination, SUMOylation, citrullination, and ADP-ribosylation. The core of the histones H2A and H2B can also be modified. Combinations of modifications are thought to constitute a code, the so-called "histone code". Histone modifications act in diverse biological processes such as gene regulation, DNA repair, chromosome condensation (mitosis) and spermatogenesis (meiosis). The common nomenclature of histone modifications is:
The name of the histone (e.g., H3)
The single-letter amino acid abbreviation (e.g., K for lysine) and the amino acid position in the protein
The type of modification (Me: methyl, P: phosphate, Ac: acetyl, Ub: ubiquitin)
The number of modifications (only Me is known to occur in more than one copy per residue; 1, 2 or 3 denotes mono-, di- or tri-methylation)
So H3K4me1 denotes the monomethylation of the 4th residue (a lysine) from the start (i.e., the N-terminal) of the H3 protein. Modification A huge catalogue of histone modifications has been described, but a functional understanding of most is still lacking. Collectively, it is thought that histone modifications may underlie a histone code, whereby combinations of histone modifications have specific meanings. However, most functional data concern individual prominent histone modifications that are biochemically amenable to detailed study. Chemistry Lysine methylation The addition of one, two, or many methyl groups to lysine has little effect on the chemistry of the histone; methylation leaves the charge of the lysine intact and adds a minimal number of atoms, so steric interactions are mostly unaffected. However, proteins containing Tudor, chromo or PHD domains, amongst others, can recognise lysine methylation with exquisite sensitivity and differentiate mono, di and tri-methyl lysine, to the extent that, for some lysines (e.g. H4K20), mono, di and tri-methylation appear to have different meanings. Because of this, lysine methylation tends to be a very informative mark and
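The nomenclature scheme above is regular enough to parse mechanically. The following small sketch splits marks such as H3K4me1 into their components; it is illustrative only, and it deliberately ignores variant histone names (for example H2A.X) and less common modification abbreviations.

```python
# Tiny parser for the histone-modification nomenclature described above:
# histone name, one-letter amino acid, residue position, modification type
# (Me, Ac, P, Ub) and optional methylation count. Illustrative only; variant
# histone names such as H2A.X and other abbreviations are not handled.
import re

PATTERN = re.compile(
    r"^(?P<histone>H2A|H2B|H3|H4|H1)"        # histone name
    r"(?P<residue>[A-Z])(?P<position>\d+)"   # one-letter amino acid + position
    r"(?P<mod>me|ac|p|ub)"                   # modification type from the scheme above
    r"(?P<count>[123])?$",                   # 1/2/3: mono-, di-, tri-methylation
    re.IGNORECASE,
)

def parse(mark: str) -> dict:
    """Split a label such as 'H3K4me1' into its components."""
    m = PATTERN.match(mark)
    if m is None:
        raise ValueError(f"unrecognised histone mark: {mark!r}")
    return m.groupdict()

print(parse("H3K4me1"))   # {'histone': 'H3', 'residue': 'K', 'position': '4', 'mod': 'me', 'count': '1'}
print(parse("H4K20me3"))  # tri-methylation of lysine 20 on histone H4
```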
are poised for transcription; they are not required in stem cells, but are rapidly required after differentiation into some lineages. Once the cell starts to differentiate, these bivalent promoters are resolved to either active or repressive states depending on the chosen lineage. Other functions DNA damage Marking sites of DNA damage is an important function for histone modifications. It also protects DNA from destruction by ultraviolet radiation from the sun. Phosphorylation of H2AX at serine 139 (γH2AX) Phosphorylated H2AX (also known as gamma H2AX) is a marker for DNA double strand breaks, and forms part of the response to DNA damage. H2AX is phosphorylated early after detection of a DNA double strand break, and forms a domain extending many kilobases either side of the damage. Gamma H2AX acts as a binding site for the protein MDC1, which in turn recruits key DNA repair proteins (this complex topic is well reviewed in the literature), and as such, gamma H2AX forms a vital part of the machinery that ensures genome stability. Acetylation of H3 lysine 56 (H3K56Ac) H3K56Ac is required for genome stability. H3K56 is acetylated by the p300/Rtt109 complex, but is rapidly deacetylated around sites of DNA damage. H3K56 acetylation is also required to stabilise stalled replication forks, preventing dangerous replication fork collapses. Although in general mammals make far greater use of histone modifications than microorganisms, a major role of H3K56Ac in DNA replication exists only in fungi, and this has become a target for antibiotic development. DNA repair Trimethylation of H3 lysine 36 (H3K36me3) H3K36me3 has the ability to recruit the MSH2-MSH6 (hMutSα) complex of the DNA mismatch repair pathway. Consistently, regions of the human genome with high levels of H3K36me3 accumulate fewer somatic mutations due to mismatch repair activity. Chromosome condensation Phosphorylation of H3 at serine 10 (phospho-H3S10) The mitotic kinase aurora B phosphorylates histone H3 at serine 10, triggering a cascade of changes that mediate mitotic chromosome condensation. Condensed chromosomes therefore stain very strongly for this mark, but H3S10 phosphorylation is also present at certain chromosome sites outside mitosis, for example in pericentric heterochromatin of cells during G2. H3S10 phosphorylation has also been linked to DNA damage caused by R-loop formation at highly transcribed sites. Phosphorylation of H2B at serine 10/14 (phospho-H2BS10/14) Phosphorylation of H2B at serine 10 (yeast) or serine 14 (mammals) is also linked to chromatin condensation, but for the very different purpose of mediating chromosome condensation during apoptosis. This mark is not simply a late-acting bystander in apoptosis, as yeast carrying mutations of this residue are resistant to hydrogen peroxide-induced apoptotic cell death. Addiction Epigenetic modifications of histone tails in specific regions of the brain are of central importance in addictions. Once particular epigenetic alterations occur, they appear to be long-lasting "molecular scars" that may account for the persistence of addictions. Cigarette smokers (about 15% of the US population) are usually addicted to nicotine. After 7 days of nicotine treatment of mice, acetylation of both histone H3 and histone H4 was increased at the FosB promoter in the nucleus accumbens of the brain, causing a 61% increase in FosB expression. This would also increase expression of the splice variant Delta FosB.
In the nucleus accumbens of the brain, Delta FosB functions as a "sustained molecular switch" and "master control protein" in the development of an addiction. About 7% of the US population is addicted to alcohol. In rats exposed to alcohol for up to 5 days, there was an increase in histone 3 lysine 9 acetylation in the pronociceptin promoter in the brain amygdala complex. This acetylation is an activating mark for pronociceptin. The nociceptin/nociceptin opioid receptor system is involved in the reinforcing or conditioning effects of alcohol. Methamphetamine addiction occurs in about 0.2% of the US population. Chronic methamphetamine use causes methylation of the lysine in position 4 of histone 3 located at the promoters of the c-fos and the C-C chemokine receptor 2 (ccr2) genes, activating those genes in the nucleus accumbens (NAc). c-fos is well known to be important in addiction. The ccr2 gene is also important in addiction, since mutational inactivation of this gene impairs addiction. Synthesis The first step of chromatin structure duplication is the synthesis of histone proteins: H1, H2A, H2B, H3, H4. These proteins are synthesized during S phase of the cell cycle. There are different mechanisms which contribute to the increase of histone synthesis. Yeast Yeast carry one or two copies of each histone gene, which are not clustered but rather scattered throughout chromosomes. Histone gene transcription is controlled by multiple gene regulatory proteins such as transcription factors which bind to histone promoter regions. In budding yeast, the candidate gene for activation of histone gene expression is SBF. SBF is a transcription factor that is activated in late G1 phase, when it dissociates from its repressor Whi5. This occurs when Whi5 is phosphorylated by Cdc8, which is a G1/S Cdk. Suppression of histone gene expression outside of S phases is dependent on Hir proteins which form inactive chromatin structure at the locus of histone genes, causing transcriptional activators to be blocked. Metazoan In metazoans the increase in the rate of histone synthesis is due to the increase in processing of pre-mRNA to its mature form as well as decrease in mRNA degradation; this results in an increase of active mRNA for translation of histone proteins. The mechanism for mRNA activation has been found to be the removal of a segment of the 3' end of the mRNA strand, and is dependent on association with stem-loop binding protein (SLBP). SLBP also stabilizes histone mRNAs during S phase by blocking degradation by the 3'hExo nuclease. SLBP levels are controlled by cell-cycle proteins, causing SLBP to accumulate as cells enter S phase and degrade as cells leave S phase. SLBP is marked for degradation by phosphorylation at two threonine residues by cyclin-dependent kinases, possibly cyclin A/Cdk2, at the end of S phase. Metazoans also have multiple copies of histone genes clustered on chromosomes which are localized in structures called Cajal bodies as determined by genome-wide chromosome conformation capture analysis (4C-Seq). Link between cell-cycle control and synthesis Nuclear protein Ataxia-Telangiectasia (NPAT), also known as nuclear protein coactivator of histone transcription, is a transcription factor which activates histone gene transcription on chromosomes 1 and 6 of human cells. NPAT is also a substrate of cyclin E-Cdk2, which is required for the transition between G1 phase and S phase.
NPAT activates histone gene expression only after it has been phosphorylated by the G1/S-Cdk cyclin E-Cdk2 in early S phase. This shows an important regulatory link between cell-cycle control and histone synthesis. History Histones were discovered in 1884 by Albrecht Kossel. The word "histone" dates from the late 19th century and is derived from the German word "Histon", a word itself of uncertain origin, perhaps from Ancient Greek ἵστημι (hístēmi, “make stand”) or ἱστός (histós, “loom”). In the early 1960s, before the types of histones were known and before histones were known to be highly conserved across taxonomically diverse organisms, James F. Bonner and his collaborators began a study of these proteins that were known to be tightly associated with the DNA in the nucleus of higher organisms. Bonner and his postdoctoral fellow Ru Chih C. Huang showed that isolated chromatin would not support RNA transcription in the test tube, but if the histones were extracted from the chromatin, RNA could be transcribed from the remaining DNA. Their paper became a citation classic. Paul Ts'o and James Bonner had called together a World Congress on Histone Chemistry and Biology in 1964, in which it became clear that there was no consensus on the number of kinds of histone and that no one knew how they would compare when isolated from different organisms. Bonner and his collaborators then developed methods to separate each type of histone, purified individual histones, compared amino acid compositions in the same histone from different organisms, and compared amino acid sequences of the same histone from different organisms in collaboration with Emil Smith from UCLA. For example, they found the Histone IV sequence to be highly conserved between peas
corporations, governments, criminal enterprises, and organized religions are hierarchical organizations with different levels of management, power or authority. For example, the broad, top-level overview of the general organization of the Catholic Church consists of the Pope, then the Cardinals, then the Archbishops, and so on. Members of hierarchical organizational structures chiefly communicate with their immediate superior and with their immediate subordinates. Structuring organizations in this way is useful partly because it can reduce the communication overhead by limiting information flow. Visualization A hierarchy is typically visualized as a pyramid, where the height of the ranking or person depicts their power status and the width of that level represents how many people or business divisions are at that level relative to the whole—the highest-ranking people are at the apex, and there are very few of them, and in many cases only one; the base may include thousands of people who have no subordinates. These hierarchies are typically depicted with a tree or triangle diagram, creating an organizational chart or organogram. Those nearest the top have more power than those nearest the bottom, and there are fewer people at the top than at the bottom. As a result, superiors in a hierarchy generally have higher status and command greater rewards than their subordinates. Common social manifestations All governments and most companies feature similar hierarchical structures. Traditionally, the monarch stood at the pinnacle of the state. In many countries, feudalism and manorialism provided a formal social structure that established hierarchical links pervading every level of society, with the monarch at the top. In modern post-feudal states the nominal top of the hierarchy still remains a head of state, sometimes a president or a constitutional monarch, although in many modern states the powers of the head of state are delegated among different bodies. Below or alongside this head there is commonly a senate, parliament or congress; such bodies in turn often delegate the day-to-day running of the country to a prime minister,
who may head a cabinet. In many democracies, constitutions theoretically regard "the people" as the notional top of the hierarchy, above the head of state; in reality, the people's influence is often restricted to voting in elections or in referendums. In business, the business owner traditionally occupies the pinnacle of the organization. Most modern large companies lack a single dominant shareholder and for most purposes delegate the collective power of the business owners to a board of directors, which in turn delegates the day-to-day running of the company to a managing director or CEO. Again, although the shareholders
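The tree-and-organogram view described in the visualization passage can be modelled with a simple parent-to-children mapping, as in the toy sketch below; the organization shown is invented purely for illustration.

```python
# Toy sketch of the tree view described above: a hierarchy as parent->children
# links, printed as an indented organogram, with a head count per level.
from collections import defaultdict, deque

REPORTS = {                       # immediate subordinates of each position (invented)
    "CEO": ["CFO", "CTO", "COO"],
    "CFO": ["Accountant"],
    "CTO": ["Dev Lead", "QA Lead"],
    "COO": ["Plant Manager"],
    "Dev Lead": ["Developer A", "Developer B"],
}

def print_chart(root: str, depth: int = 0) -> None:
    """Print the hierarchy as an indented organogram."""
    print("  " * depth + root)
    for child in REPORTS.get(root, []):
        print_chart(child, depth + 1)

def level_counts(root: str) -> dict:
    """Count how many positions sit at each level below the root."""
    counts, queue = defaultdict(int), deque([(root, 0)])
    while queue:
        node, level = queue.popleft()
        counts[level] += 1
        queue.extend((child, level + 1) for child in REPORTS.get(node, []))
    return dict(counts)

print_chart("CEO")
print(level_counts("CEO"))   # -> {0: 1, 1: 3, 2: 4, 3: 2}
```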
singer. At the beginning of his career as an entertainer his act would end with a joke version of the duet Sweethearts, in which he sang both the baritone and falsetto parts. Trained under Italian maestro Manlio di Veroli, he emerged as a bel canto tenor (characteristically, he insisted that in his case this meant "can belto") and had a long list of best-selling record albums to his credit. In 1958 he appeared in the film Jet Storm, which starred Dame Sybil Thorndike and Richard Attenborough and in the same year Secombe starred in the title role in Davy, one of Ealing Studios' last films. The power of his voice allowed Secombe to appear in many stage musicals. This included 1963's Pickwick, based on Charles Dickens' The Pickwick Papers, which gave him the number 18 hit single "If I Ruled the World" – his later signature tune. In 1965 the show was produced on tour in the United States, where on Broadway he garnered a nomination for a Tony Award for Best Actor in a Musical. Secombe scored his biggest hit single in 1967 with his version of "This Is My Song", which peaked at no. 2 on the charts in April 1967 while a recording by Petula Clark, which had hit no. 1 in February, was still in the top ten. He also appeared in the musical The Four Musketeers (1967) at Drury Lane, as Mr. Bumble in Carol Reed's film of Oliver! (1968), and in the Envy segment of The Magnificent Seven Deadly Sins (1971). He went on to star in his own television show, The Harry Secombe Show, which debuted on Christmas Day 1968 on BBC 1 and ran for thirty-one episodes until 1973. A sketch comedy show featuring Julian Orchard as Secombe's regular sidekick, the series also featured guest appearances by fellow Goon Spike Milligan as well as leading performers such as Ronnie Barker and Arthur Lowe. Secombe later starred in similar vehicles such as Sing a Song of Secombe and ITV's Secombe with Music during the 1970s. Later career Later in life, Secombe (whose brother Fred Secombe was a priest in the Church in Wales, part of the Anglican Communion) attracted new audiences as a presenter of religious programmes, such as the BBC's Songs of Praise and ITV's Stars on Sunday and Highway. He was also a special programming consultant to Harlech Television and hosted a Thames Television programme in 1979 entitled Cross on the Donkey's Back. In the latter half of the 1980s, Secombe personally sponsored a football team for boys aged 9–11 in the local West Sutton Little League, 'Secombes Knights'. In 1990, he was one of a few to be honoured by a second appearance on This Is Your Life, when he was surprised by Michael Aspel at a book signing in a London branch of WH Smith. Secombe had been a subject of the show previously in March 1958 when Eamonn Andrews surprised him at the BBC Television Theatre. Honours In 1963 he was appointed a Commander of the Order of the British Empire (CBE). He was knighted in 1981, and jokingly referred to himself as Sir Cumference (in recognition of his rotund figure). The motto he chose for his coat of arms was "GO ON", a reference to goon. Later life and death Secombe suffered from peritonitis in 1980. Within two years, taking advice from doctors, he had lost five stone in weight. He had a stroke in 1997, from which he made a slow recovery. He was then diagnosed with prostate cancer in September 1998. After suffering a second stroke in 1999, he was forced to abandon his television career, but made a documentary about his condition in the hope of giving encouragement to other sufferers. 
Secombe had diabetes in the latter part of his life. Secombe died on 11 April 2001 at the age of 79, from prostate cancer, in hospital in Guildford, Surrey. His ashes are interred at the parish church of Shamley Green, and a later memorial service to celebrate his life was held at Westminster Abbey on 26 October 2001. As well as family members and friends, the service was also attended by Charles, Prince of Wales and representatives of Prince Philip, Duke of Edinburgh, Anne, Princess Royal, Princess Margaret, Countess of Snowdon and Prince Edward, Duke of Kent. On his tombstone is the inscription: "To know him was to love him." Upon hearing of his old friend's death, Spike Milligan quipped, "I'm glad he died before me, because I didn't want him to sing at my funeral." But Secombe had the last laugh: upon Milligan's own death the following year, a recording of Secombe singing was played at Spike's memorial service. The Secombe Theatre at Sutton, Greater London, bears his name in memory of this former local personality. He is also fondly remembered at the London Welsh Centre, where he opened the bar on St Patrick's Day (17 March) 1971. Family Secombe met Myra Joan Atherton at the Mumbles Dance Hall in 1946. The couple were married from 1948 until his death, and had four children: Jennifer Secombe (d. 2019), widow of actor Alex Giannini. She was
is injected, however, it avoids this first-pass effect, very rapidly crossing the blood–brain barrier because of the presence of the acetyl groups, which render it much more fat soluble than morphine itself. Once in the brain, it then is deacetylated variously into the inactive 3-monoacetylmorphine and the active 6-monoacetylmorphine (6-MAM), and then to morphine, which bind to μ-opioid receptors, resulting in the drug's euphoric, analgesic (pain relief), and anxiolytic (anti-anxiety) effects; heroin itself exhibits relatively low affinity for the μ receptor. Analgesia follows from activation of the μ-opioid receptor, a G protein-coupled receptor, which indirectly hyperpolarizes the neuron, reducing the release of nociceptive neurotransmitters, and hence causes analgesia and increased pain tolerance. Unlike hydromorphone and oxymorphone, however, when administered intravenously, heroin creates a larger histamine release, similar to morphine, resulting in the feeling of a greater subjective "body high" for some users, but also instances of pruritus (itching) when they first start using. Normally, GABA, released from inhibitory neurones, inhibits the release of dopamine. Opiates, like heroin and morphine, decrease the inhibitory activity of such neurones. This causes increased release of dopamine in the brain, which is the reason for the euphoric and rewarding effects of heroin. Both morphine and 6-MAM are μ-opioid agonists that bind to receptors present throughout the brain, spinal cord, and gut of all mammals. The μ-opioid receptor also binds endogenous opioid peptides such as β-endorphin, Leu-enkephalin, and Met-enkephalin. Repeated use of heroin results in a number of physiological changes, including an increase in the production of μ-opioid receptors (upregulation). These physiological alterations lead to tolerance and dependence, so that stopping heroin use results in uncomfortable symptoms including pain, anxiety, muscle spasms, and insomnia, called the opioid withdrawal syndrome. Depending on usage, it has an onset 4–24 hours after the last dose of heroin. Morphine also binds to δ- and κ-opioid receptors. There is also evidence that 6-MAM binds to a subtype of μ-opioid receptors that are also activated by the morphine metabolite morphine-6β-glucuronide but not morphine itself. The third subtype is the mu-3 receptor, which may be a commonality to other six-position monoesters of morphine. The contribution of these receptors to the overall pharmacology of heroin remains unknown. A subclass of morphine derivatives, namely the 3,6 esters of morphine, with similar effects and uses, includes the clinically used strong analgesics nicomorphine (Vilan) and dipropanoylmorphine; there is also the latter's dihydromorphine analogue, diacetyldihydromorphine (Paralaudin). Two other 3,6 diesters of morphine invented in 1874–75 along with diamorphine, namely dibenzoylmorphine and acetylpropionylmorphine, were made as substitutes after diamorphine was outlawed in 1925 and were therefore sold as the first "designer drugs" until they were outlawed by the League of Nations in 1930. Chemistry Diamorphine is produced from acetylation of morphine derived from natural opium sources, generally using acetic anhydride. The major metabolites of diamorphine, 6-MAM, morphine, morphine-3-glucuronide, and morphine-6-glucuronide, may be quantitated in blood, plasma or urine to monitor for use, confirm a diagnosis of poisoning, or assist in a medicolegal death investigation.
Most commercial opiate screening tests cross-react appreciably with these metabolites, as well as with other biotransformation products likely to be present following usage of street-grade diamorphine, such as 6-acetylcodeine and codeine. However, chromatographic techniques can easily distinguish and measure each of these substances. When interpreting the results of a test, it is important to consider the diamorphine usage history of the individual, since a chronic user can develop tolerance to doses that would incapacitate an opiate-naive individual, and the chronic user often has high baseline values of these metabolites in their system. Furthermore, some testing procedures employ a hydrolysis step before quantitation that converts many of the metabolic products to morphine, yielding a result that may be 2 times larger than with a method that examines each product individually. History The opium poppy was cultivated in lower Mesopotamia as long ago as 3400 BC. The chemical analysis of opium in the 19th century revealed that most of its activity could be ascribed to the alkaloids codeine and morphine. Diamorphine was first synthesized in 1874 by C. R. Alder Wright, an English chemist working at St. Mary's Hospital Medical School in London who had been experimenting with combining morphine with various acids. He boiled anhydrous morphine alkaloid with acetic anhydride for several hours and produced a more potent, acetylated form of morphine which is now called diacetylmorphine or morphine diacetate. He sent the compound to F. M. Pierce of Owens College in Manchester for analysis. Pierce told Wright: Wright's invention did not lead to any further developments, and diamorphine became popular only after it was independently re-synthesized 23 years later by chemist Felix Hoffmann. Hoffmann was working at Bayer pharmaceutical company in Elberfeld, Germany, and his supervisor Heinrich Dreser instructed him to acetylate morphine with the objective of producing codeine, a constituent of the opium poppy that is pharmacologically similar to morphine but less potent and less addictive. Instead, the experiment produced an acetylated form of morphine one and a half to two times more potent than morphine itself. The head of Bayer's research department reputedly coined the drug's new name of "heroin," based on the German heroisch, which means "heroic, strong" (from the ancient Greek word "heros, ήρως"). Bayer's scientists were not the first to make heroin, but they discovered ways to make it, and Bayer led the commercialization of heroin. In 1895, Bayer marketed diacetylmorphine as an over-the-counter drug under the trademark name Heroin. It was developed chiefly as a morphine substitute for cough suppressants that did not have morphine's addictive side-effects. Morphine at the time was a popular recreational drug, and Bayer wished to find a similar but non-addictive substitute to market. However, contrary to Bayer's advertising as a "non-addictive morphine substitute," heroin would soon have one of the highest rates of addiction among its users. From 1898 through to 1910, diamorphine was marketed under the trademark name Heroin as a non-addictive morphine substitute and cough suppressant.
In the 11th edition of Encyclopædia Britannica (1910), the article on morphine states: "In the cough of phthisis minute doses [of morphine] are of service, but in this particular disease morphine is frequently better replaced by codeine or by heroin, which checks irritable coughs without the narcotism following upon the administration of morphine." In the US, the Harrison Narcotics Tax Act was passed in 1914 to control the sale and distribution of diacetylmorphine and other opioids, which allowed the drug to be prescribed and sold for medical purposes. In 1924, the United States Congress banned its sale, importation, or manufacture. It is now a Schedule I substance, which makes it illegal for non-medical use in signatory nations of the Single Convention on Narcotic Drugs treaty, including the United States. The Health Committee of the League of Nations banned diacetylmorphine in 1925, although it took more than three years for this to be implemented. In the meantime, the first designer drugs, viz. 3,6 diesters and 6 monoesters of morphine and acetylated analogues of closely related drugs like hydromorphone and dihydromorphine, were produced in massive quantities to fill the worldwide demand for diacetylmorphine—this continued until 1930 when the Committee banned diacetylmorphine analogues with no therapeutic advantage over drugs already in use, the first major legislation of this type. Bayer lost some of its trademark rights to heroin (as well as aspirin) under the 1919 Treaty of Versailles following the German defeat in World War I. Use of heroin by jazz musicians in particular was prevalent in the mid-twentieth century, including Billie Holiday, saxophonists Charlie Parker and Art Pepper, guitarist Joe Pass and piano player/singer Ray Charles; a "staggering number of jazz musicians were addicts". It was also a problem with many rock musicians, particularly from the late 1960s through the 1990s. Pete Doherty is also a self-confessed user of heroin. Nirvana lead singer Kurt Cobain's heroin addiction was well documented. Pantera frontman, Phil Anselmo, turned to heroin while touring during the 1990s to cope with his back pain. James Taylor, Jimmy Page, John Lennon, Eric Clapton, Johnny Winter, Keith Richards and Janis Joplin also used heroin. Many musicians have made songs referencing their heroin usage. Society and culture Names "Diamorphine" is the Recommended International Nonproprietary Name and British Approved Name. Other synonyms for heroin include: diacetylmorphine, and morphine diacetate. Heroin is also known by many street names including dope, H, smack, junk, horse, scag, and brown, among others. Legal status Asia In Hong Kong, diamorphine is regulated under Schedule 1 of Hong Kong's Chapter 134 Dangerous Drugs Ordinance. It is available by prescription. Anyone supplying diamorphine without a valid prescription can be fined $5,000,000 (HKD) and imprisoned for life. The penalty for trafficking or manufacturing diamorphine is a $5,000,000 (HKD) fine and life imprisonment. Possession of diamorphine without a license from the Department of Health is illegal with a $1,000,000 (HKD) fine and 7 years of jail time. Europe In the Netherlands, diamorphine is a List I drug of the Opium Law. It is available for prescription under tight regulation exclusively to long-term addicts for whom methadone maintenance treatment has failed. It cannot be used to treat severe pain or other illnesses. In the United Kingdom, diamorphine is available by prescription, though it is a restricted Class A drug. 
According to the 50th edition of the British National Formulary (BNF), diamorphine hydrochloride may be used in the treatment of acute pain, myocardial infarction, acute pulmonary oedema, and chronic pain. The treatment of chronic non-malignant pain must be supervised by a specialist. The BNF notes that all opioid analgesics cause dependence and tolerance but that this is "no deterrent in the control of pain in terminal illness". When used in the palliative care of cancer patients, diamorphine is often injected using a syringe driver. In Switzerland, heroin is produced in injectable or tablet form under the name Diaphin by a private company under contract to the Swiss government. Swiss-produced heroin has been imported into Canada with government approval. Australia In Australia, diamorphine is listed as a schedule 9 prohibited substance under the Poisons Standard (October 2015). A schedule 9 drug is outlined in the Poisons Act 1964 as "Substances which may be abused or misused, the manufacture, possession, sale or use of which should be prohibited by law except when required for medical or scientific research, or for analytical, teaching or training purposes with approval of the CEO." North America In Canada, diamorphine is a controlled substance under Schedule I of the Controlled Drugs and Substances Act (CDSA). Any person seeking or obtaining diamorphine from a practitioner without disclosing other prescriptions obtained within the preceding 30 days is guilty of an indictable offense and subject to imprisonment for a term not exceeding seven years. Possession of diamorphine for the purpose of trafficking is an indictable offense and subject to imprisonment for life. In the United States, diamorphine is a Schedule I drug according to the Controlled Substances Act of 1970, making it illegal to possess without a DEA license. Possession of more than 100 grams of diamorphine or a mixture containing diamorphine is punishable with a minimum mandatory sentence of 5 years of imprisonment in a federal prison. In 2021, the US state of Oregon became the first state to decriminalize the use of heroin after voters passed Ballot Measure 110 in 2020. The measure allows people found with small amounts to avoid arrest. Turkey Turkey maintains strict laws against the use, possession or trafficking of illegal drugs. If convicted of these offences, one could receive a heavy fine or a prison sentence of 4 to 24 years. Misuse of prescription medication Misuse of prescription medicines, such as opioids, can lead to heroin use and dependence. The number of deaths from illegal opioid overdoses has risen alongside the increasing number of deaths caused by prescription opioid overdoses. Prescription opioids are relatively easy to obtain. This may ultimately lead to heroin injection because heroin is cheaper than prescribed pills. Economics Production Diamorphine is produced from acetylation of morphine derived from natural opium sources. One such method of heroin production involves isolation of the water-soluble components of raw opium, including morphine, in a strongly basic aqueous solution, followed by recrystallization of the morphine base by addition of ammonium chloride. The solid morphine base is then filtered out. The morphine base is then reacted with acetic anhydride, which forms heroin. This highly impure brown heroin base may then undergo further purification steps, which produce a white-colored product; the final products have a different appearance depending on purity and have different names. 
Heroin purity has been classified into four grades. No. 4 is the purest form – white powder (salt) that dissolves easily for injection. No. 3 is "brown sugar" for smoking (base). No. 1 and No. 2 are unprocessed raw heroin (salt or base). Trafficking Trafficking is heavy worldwide, with Afghanistan being the biggest producer. According to a U.N.-sponsored survey, Afghanistan accounted for 87 percent of the world's diamorphine production in 2004. Afghan opium kills around 100,000 people annually. Opium production in that country has increased rapidly since then, reaching an all-time high in 2006. War in Afghanistan once again appeared as a facilitator of the trade. Some 3.3 million Afghans are involved in producing opium. At present, opium poppies are mostly grown in Afghanistan and in Southeast Asia, especially in the region known as the Golden Triangle straddling Burma, Thailand, Vietnam, Laos and Yunnan province in China. There is also cultivation of opium poppies in Pakistan, Mexico and in
drug to assist the treatment of opiate addiction, normally in long-term chronic intravenous (IV) heroin users. It is only prescribed following exhaustive efforts at treatment via other means. It is sometimes thought that heroin users can walk into a clinic and walk out with a prescription, but the process takes many weeks before a prescription for diamorphine is issued. Though this is somewhat controversial among proponents of a zero-tolerance drug policy, it has proven superior to methadone in improving the social and health situations of addicts. The UK Department of Health's Rolleston Committee Report in 1926 established the British approach to diamorphine prescription to users, which was maintained for the next 40 years: dealers were prosecuted, but doctors could prescribe diamorphine to users when withdrawing. In 1964, the Brain Committee recommended that only selected approved doctors working at approved specialized centres be allowed to prescribe diamorphine and cocaine to users. The law was made more restrictive in 1968. Beginning in the 1970s, the emphasis shifted to abstinence and the use of methadone; currently, only a small number of users in the UK are prescribed diamorphine. In 1994, Switzerland began a trial diamorphine maintenance program for users that had failed multiple withdrawal programs. The aim of this program was to maintain the health of the user by avoiding medical problems stemming from the illicit use of diamorphine. The first trial in 1994 involved 340 users, although enrollment was later expanded to 1000, based on the apparent success of the program. The trials proved diamorphine maintenance to be superior to other forms of treatment in improving the social and health situation for this group of patients. It has also been shown to save money, despite high treatment expenses, as it significantly reduces costs incurred by trials, incarceration, health interventions and delinquency. Patients appear twice daily at a treatment center, where they inject their dose of diamorphine under the supervision of medical staff. They are required to contribute about 450 Swiss francs per month to the treatment costs. A national referendum in November 2008 showed 68% of voters supported the plan, introducing diamorphine prescription into federal law. The previous trials were based on time-limited executive ordinances. The success of the Swiss trials led German, Dutch, and Canadian cities to try out their own diamorphine prescription programs. Some Australian cities (such as Sydney) have instituted legal diamorphine supervised injecting centers, in line with other wider harm minimization programs. Since January 2009, Denmark has prescribed diamorphine to a few addicts who have tried methadone and buprenorphine without success. Beginning in February 2010, addicts in Copenhagen and Odense became eligible to receive free diamorphine. Later in 2010, other cities including Århus and Esbjerg joined the scheme. It was estimated that around 230 addicts would be able to receive free diamorphine. However, Danish addicts would only be able to inject heroin according to the policy set by Danish National Board of Health. Of the estimated 1500 drug users who did not benefit from the then-current oral substitution treatment, approximately 900 would not be in the target group for treatment with injectable diamorphine, either because of "massive multiple drug abuse of non-opioids" or "not wanting treatment with injectable diamorphine". 
In July 2009, the German Bundestag passed a law allowing diamorphine prescription as a standard treatment for addicts; a large-scale trial of diamorphine prescription had been authorized in the country in 2002. On 26 August 2016, Health Canada issued regulations amending prior regulations it had issued under the Controlled Drugs and Substances Act; the "New Classes of Practitioners Regulations", the "Narcotic Control Regulations", and the "Food and Drug Regulations", to allow doctors to prescribe diamorphine to people with a severe opioid addiction who have not responded to other treatments. The prescription heroin can be accessed by doctors through Health Canada's Special Access Programme (SAP) for "emergency access to drugs for patients with serious or life-threatening conditions when conventional treatments have failed, are unsuitable, or are unavailable." Routes of administration The onset of heroin's effects depends upon the route of administration. Smoking is the fastest route of drug administration, although intravenous injection results in a quicker rise in blood concentration. These are followed by suppository (anal or vaginal insertion), insufflation (snorting), and ingestion (swallowing). A 2002 study suggests that a fast onset of action increases the reinforcing effects of addictive drugs. Ingestion does not produce a rush as a forerunner to the high experienced with the use of heroin, which is most pronounced with intravenous use. While the onset of the rush induced by injection can occur in as little as a few seconds, the oral route of administration requires approximately half an hour before the high sets in. Thus, the higher the dosage of heroin used and the faster the route of administration, the greater the potential risk of psychological dependence and addiction. Large doses of heroin can cause fatal respiratory depression, and the drug has been used for suicide or as a murder weapon. The serial killer Harold Shipman used diamorphine on his victims, and the subsequent Shipman Inquiry led to a tightening of the regulations surrounding the storage, prescribing and destruction of controlled drugs in the UK. Because significant tolerance to respiratory depression develops quickly with continued use and is lost just as quickly during withdrawal, it is often difficult to determine whether a lethal heroin overdose was accidental, suicide or homicide. Examples include the overdose deaths of Sid Vicious, Janis Joplin, Tim Buckley, Hillel Slovak, Layne Staley, Bradley Nowell, Ted Binion, and River Phoenix. By mouth Use of heroin by mouth is less common than other methods of administration, mainly because there is little to no "rush", and the effects are less potent. When ingested, heroin is entirely converted to morphine by deacetylation during first-pass metabolism. Heroin's oral bioavailability is both dose-dependent (as is morphine's) and significantly higher than that of oral morphine itself, reaching up to 64.2% for high doses and 45.6% for low doses; opiate-naive users showed far less absorption of the drug at low doses, having bioavailabilities of only up to 22.9%. The maximum plasma concentration of morphine following oral administration of heroin was around twice as much as that of oral morphine. Injection Injection, also known as "slamming", "banging", "shooting up", "digging" or "mainlining", is a popular method which carries relatively greater risks than other methods of administration. 
Heroin base (commonly found in Europe), when prepared for injection, will only dissolve in water when mixed with an acid (most commonly citric acid powder or lemon juice) and heated. Heroin in the east-coast United States is most commonly found in the hydrochloride salt form, requiring just water (and no heat) to dissolve. Users tend to initially inject in the easily accessible arm veins, but as these veins collapse over time, users resort to more dangerous areas of the body, such as the femoral vein in the groin. Users who have used this route of administration often develop deep vein thrombosis. Intravenous users inject a single dose using a hypodermic needle; the size of the dose varies widely. The dose of heroin used for recreational purposes is dependent on the frequency and level of use: thus a first-time user may use between 5 and 20 mg, while an established addict may require several hundred mg per day. As with the injection of any drug, if a group of users share a common needle without sterilization procedures, blood-borne diseases, such as HIV/AIDS or hepatitis, can be transmitted. The use of a common water dispenser in the preparation of the injection, as well as the sharing of spoons and filters, can also cause the spread of blood-borne diseases. Many countries now supply small sterile spoons and filters for single use in order to prevent the spread of disease. Smoking Smoking heroin refers to vaporizing it to inhale the resulting fumes, rather than burning and inhaling the smoke. It is commonly smoked in glass pipes made from glassblown Pyrex tubes and light bulbs. Heroin may be smoked from aluminium foil heated by a flame underneath it, with the resulting smoke inhaled through a tube of rolled up foil, a method also known as "chasing the dragon". Insufflation Another popular route of heroin intake is insufflation (snorting), where a user crushes the heroin into a fine powder and then gently inhales it (sometimes with a straw or a rolled-up banknote, as with cocaine) into the nose, where heroin is absorbed through the soft tissue in the mucous membrane of the sinus cavity and straight into the bloodstream. This method of administration bypasses first-pass metabolism, with a quicker onset and higher bioavailability than oral administration, though the duration of action is shortened. This method is sometimes preferred by users who do not want to prepare and administer heroin for injection or smoking but still experience a fast onset. Snorting often becomes an unwanted route once a user begins to inject the drug. The user may still get high on the drug from snorting, and experience a nod, but will not get a rush. A "rush" is caused by a large amount of heroin entering the body at once. When the drug is taken in through the nose, the user does not get the rush because the drug is absorbed slowly rather than instantly. For pain relief, heroin has been mixed with sterile water on site by the attending physician and administered using a syringe with a nebulizer tip. Heroin may be used for fractures, burns, finger-tip injuries, suturing, and wound re-dressing, but is inappropriate in head injuries. Suppository Little research has been focused on the suppository (anal insertion) or pessary (vaginal insertion) methods of administration, also known as "plugging". These methods of administration are commonly carried out using an oral syringe. 
Heroin can be dissolved and withdrawn into an oral syringe which may then be lubricated and inserted into the anus or vagina before the plunger is pushed. The rectum or the vaginal canal is where the majority of the drug would likely be taken up, through the membranes lining their walls. Adverse effects Heroin is classified as a hard drug in terms of drug harmfulness. Like most opioids, unadulterated heroin may lead to adverse effects. The purity of street heroin varies greatly, leading to overdoses when the purity is higher than the user expected. Short-term effects Users report an intense rush, an acute transcendent state of euphoria, which occurs while diamorphine is being metabolized into 6-monoacetylmorphine (6-MAM) and morphine in the brain. Some believe that heroin produces more euphoria than other opioids; one possible explanation is the presence of 6-monoacetylmorphine, a metabolite unique to heroin – although a more likely explanation is the rapidity of onset. While other recreationally used opioids are metabolized only to morphine, heroin also yields 6-MAM, itself a psychoactive metabolite. However, this perception is not supported by the results of clinical studies comparing the physiological and subjective effects of injected heroin and morphine in individuals formerly addicted to opioids; these subjects showed no preference for one drug over the other. Equipotent injected doses had comparable action courses, with no difference in subjects' self-rated feelings of euphoria, ambition, nervousness, relaxation, drowsiness, or sleepiness. The rush is usually accompanied by a warm flushing of the skin, dry mouth, and a heavy feeling in the extremities. Nausea, vomiting, and severe itching may also occur. After the initial effects, users will usually be drowsy for several hours; mental function is clouded; heart function slows, and breathing is also severely slowed, sometimes enough to be life-threatening. Slowed breathing can also lead to coma and permanent brain damage. Heroin use has also been associated with myocardial infarction. Long-term effects Repeated heroin use changes the physical structure and physiology of the brain, creating long-term imbalances in neuronal and hormonal systems that are not easily reversed. Studies have shown some deterioration of the brain's white matter due to heroin use, which may affect decision-making abilities, the ability to regulate behavior, and responses to stressful situations. Heroin also produces profound degrees of tolerance and physical dependence. Tolerance occurs when more and more of the drug is required to achieve the same effects. With physical dependence, the body adapts to the presence of the drug, and withdrawal symptoms occur if use is reduced abruptly. Injection Intravenous use of heroin (and any other substance) with needles and syringes or other related equipment may lead to: contracting blood-borne pathogens such as HIV and hepatitis via the sharing of needles; contracting bacterial or fungal endocarditis and possibly venous sclerosis; abscesses; and poisoning from contaminants.
new director of football and Maurizio Sarri as new head coach. Halfway through the 2007–08 season, the team remained at the bottom of Serie C1, on the brink of relegation to the fourth level (Serie C2). In response, club management sacked Sarri and brought back Pellegrini. Thanks to a late-season surge the scaligeri avoided direct relegation by qualifying for the relegation play-off, and narrowly averted dropping to Lega Pro Seconda Divisione in the final game, beating Pro Patria 2–1 on aggregate. However, despite the decline in results, attendance and season ticket sales remained at 15,000 on average. For the 2008–09 season, Verona appointed former Sassuolo and Piacenza manager Gian Marco Remondina with the aim to win promotion to Serie B. However, the season did not start impressively, with Verona being out of the playoff zone by mid-season, and club chairman Pietro Arvedi D'Emilei entering into a coma after being involved in a car crash on his way back from a league match in December 2008. Arvedi died in March 2009, two months after the club was bought by new chairman Giovanni Martinelli. The following season looked promising, as new transfer players were brought aboard, and fans enthusiastically embraced the new campaign. Season ticket figures climbed to over 10,000, placing Verona ahead of several Serie A teams and all but Torino in Serie B attendance. The team led the standings for much of the season, accumulating a seven-point lead by early in the spring. However, the advantage was gradually squandered, and the team dropped to second place on the second-last day of the season, with a chance to regain first place in the final regular season match against Portogruaro on home soil. Verona, however, disappointed a crowd of over 25,000 fans and, with the loss, dropped to third place and headed towards the play-offs. A managerial change for the post-season saw the firing of Remondina and the arrival of Giovanni Vavassori. After eliminating Rimini in the semi-finals (1–0; 0–0) Verona lost the final to Pescara (2–2 on home soil and 0–1 in the return match) and were condemned to a fourth-straight year of third division football. Former 1990 World Cup star Giuseppe Giannini (a famous captain of Roma for many years) signed as manager for the 2010–11 campaign. Once again, the team was almost entirely revamped during the transfer season. The squad struggled in the early months and Giannini was eventually sacked and replaced by former Internazionale defender Andrea Mandorlini, who succeeded in reorganising the team's play and bringing discipline both on and off the pitch. In the second half of the season, Verona climbed back from the bottom of the division to clinch a play-off berth (fifth place) on the last day of the regular season. The team advanced to the play-off final after eliminating Sorrento in the semi-finals 3–1 on aggregate. Following the play-off final, after four years of Lega Pro football, Verona were promoted back to Serie B after a 2–1 aggregate win over Salernitana on 19 June 2011. On 18 May 2013, Verona finished second in Serie B and were promoted to Serie A after an eleven-year absence. Their return to the top flight began against title contenders Milan and Roma, beating the former 2–1 and losing to the latter 3–0. The team continued at a steady pace, finishing the first half of the season with 32 points and sitting in sixth place, eleven points behind the closest UEFA Champions League spot—and tied with Internazionale for the final UEFA Europa League spot. 
Verona, however, ultimately finished the year in tenth. During the 2015–16 season, Verona did not win a single match until the club edged Atalanta 2–1 at home on 3 February 2016, twenty-three games into the campaign. Consequently, Verona were relegated from Serie A. In the 2016–17 Serie B season, Hellas Verona finished second in the table and were automatically promoted back to Serie A. Hellas lasted one season back in the top division, finishing second-last during the 2017–18 Serie A season and being relegated back to Serie B. At the end of the 2018–19 season, Hellas finished in fifth position and achieved promotion back to Serie A after defeating Cittadella 3–0 in the second leg of their promotion play-off to win 3–2 on aggregate. The club's return to the top flight in the 2019–20 Serie A season, in which it was considered a strong relegation candidate at the beginning of the campaign, was a successful one, with a ninth-placed finish. Heavily reliant on the defensive solidity of 20-year-old centre-back Marash Kumbulla, Amir Rrahmani and goalkeeper Marco Silvestri, along with the consistent performances of midfielder Sofyan Amrabat, Verona was a surprise contender for Europa League qualification but fell out of the race after a downturn in form after the coronavirus break which temporarily halted the season. A 2–1 win at home against eventual title winners Juventus in February was a highlight of a season in which the club achieved 10 clean sheets and punched above its weight, finishing towards the higher end of the table despite its modest budget. Ahead of Verona's second consecutive year in Serie A, key players Amrabat, Rrahmani and Kumbulla were poached by Fiorentina, Napoli and Roma respectively, and loanee Matteo Pessina returned to Atalanta. This left the club with a heavily weakened squad and it was once again expected to struggle in the league prior to the season-opening match. Despite these losses in the transfer window, Verona again finished in the top half of the league table, ending the season in 10th place with 45 points. Successful breakout seasons for attacking midfielder Mattia Zaccagni, who was eventually called up to the Italian national team as a reward for his performances, as well as wing-backs Federico Dimarco and Davide Faraoni, were partly the reason for this achievement. At the end of the season, coach Ivan Jurić was appointed by Torino following his two impressive Serie A seasons with Verona, with the Gialloblu replacing him with Eusebio Di Francesco. Following another summer transfer window in which several of the club's star players were sold to Serie A rivals, namely Zaccagni transferring to Lazio, Marco Silvestri to Udinese and Dimarco returning to Inter, the beginning of the 2021–22 season proved to be much more difficult for Verona, as Di Francesco was fired and replaced with Igor Tudor after just three matches, all of which were defeats. This poor early-season form left the club at the bottom of the table. Under Tudor's guidance, the team regained competitiveness, recording three wins (including victories over Lazio and Juventus), four draws and only one defeat in the next eight matches. Colours and badge The team's colours are yellow and blue. As a result, the club's most widely used nickname is gialloblu, literally "yellow-blue" in Italian. The colours represent the city itself and Verona's emblem (a yellow cross on a blue shield) appears on most team apparel. 
Home kits are traditionally blue, sometimes of a navy shade, combined with yellow details and trim, although the club has used a blue and yellow striped design on occasion. Two more team nicknames are Mastini (the mastiffs) and Scaligeri, both references to Mastino I della Scala of the Della Scala princes who ruled the city during the 13th and 14th centuries. The Scala family coat of arms is depicted on the team's jersey and on its trademark logo as a stylised image of two large, powerful mastiffs facing opposite directions, introduced in 1995. In essence, the term "scaligeri" is synonymous with Veronese, and therefore can describe anything or anyone from Verona (e.g., Chievo Verona, a different team that also links itself to the Scala family – specifically to Cangrande I della Scala). Stadium Since 1963, the club have played at the Stadio Marc'Antonio Bentegodi, which has a capacity of 39,211. The ground was shared with Hellas's rivals, Chievo Verona, until 2021. It was used as a venue for some matches of the 1990 FIFA World Cup. Derby with Chievo Verona The intercity fixtures against Chievo Verona are known as the "Derby della Scala". The name refers to the Scaligeri or della Scala aristocratic family, who were rulers of Verona during the Middle Ages and early Renaissance. In the season 2001–02, both Hellas Verona
Williams has also noted that the Mahāyāna never had nor ever attempted to have a separate vinaya or ordination lineage from the early Buddhist schools, and therefore bhikṣus and bhikṣuṇīs adhering to the Mahāyāna formally adhere to the vinaya of an early school. This continues today with the Dharmaguptaka ordination lineage in East Asia and the Mūlasarvāstivāda ordination lineage in Tibetan Buddhism. Mahāyāna was never a separate sect of the early schools. From Chinese monks visiting India, we now know that both Mahāyāna and non-Mahāyāna monks in India often lived in the same monasteries side by side. The seventh-century Chinese Buddhist monk and pilgrim Yijing wrote about the relationship between the various "vehicles" and the early Buddhist schools in India. He wrote, "There exist in the West numerous subdivisions of the schools which have different origins, but there are only four principal schools of continuous tradition." These schools are the Mahāsāṃghika Nikāya, Sthavira Nikāya, Mūlasarvāstivāda Nikāya, and Saṃmitīya Nikāya. Explaining their doctrinal affiliations, he then writes, "Which of the four schools should be grouped with the Mahāyāna or with the Hīnayāna is not determined." That is to say, there was no simple correspondence between a Buddhist school and whether its members learned "Hīnayāna" or "Mahāyāna" teachings. To identify entire schools as "Hīnayāna" that contained not only śrāvakas and pratyekabuddhas but also Mahāyāna bodhisattvas would be attacking the schools of their fellow Mahāyānists as well as their own. Instead, what is demonstrated in the definition of Hīnayāna given by Yijing is that the term referred to individuals based on doctrinal differences. Hīnayāna as Śrāvakayāna Scholar Isabelle Onians asserts that although "the Mahāyāna ... very occasionally referred to earlier Buddhism as the Hinayāna, the Inferior Way, [...] the preponderance of this name in the secondary literature is far out of proportion to occurrences in the Indian texts." She notes that the term Śrāvakayāna was "the more politically correct and much more usual" term used by Mahāyānists. Jonathan Silk has argued that the term "Hinayana" was used to refer to whomever one wanted to criticize on any given occasion, and did not refer to any definite grouping of Buddhists. Hīnayāna and Theravāda Views of Chinese pilgrims The Chinese monk Yijing, who visited India in the 7th century, distinguished Mahāyāna from Hīnayāna. In the 7th century, the Chinese Buddhist monk Xuanzang describes the concurrent existence of the Mahāvihara and the Abhayagiri vihāra in Sri Lanka. He refers to the monks of the Mahāvihara as the "Hīnayāna Sthaviras" and the monks of Abhayagiri vihāra as the "Mahāyāna Sthaviras". Xuanzang further writes, "The Mahāvihāravāsins reject the Mahāyāna and practice the Hīnayāna, while the Abhayagirivihāravāsins study both Hīnayāna and Mahāyāna teachings and propagate the Tripiṭaka." Philosophical differences Mahayanists were primarily in philosophical dialectic with the Vaibhāṣika school of Sarvāstivāda, which had by far the most "comprehensive edifice of doctrinal systematics" of the nikāya schools. With this in mind it is sometimes argued that the Theravada would not have been considered a "Hinayana" school by Mahayanists because, unlike the now-extinct Sarvastivada school, the primary object of Mahayana criticism, the Theravada school does not claim the existence of independent dharmas; in this it maintains the attitude of early Buddhism. 
Additionally, the concept of the bodhisattva as one who puts off enlightenment rather than reaching awakening as soon as possible has no roots in Theravada textual or cultural contexts, current or historical. Aside from the Theravada schools being geographically distant from the Mahayana, the Hinayana distinction is used in reference to certain views and practices that had come to be found within the Mahayana tradition itself. Both Theravada and Mahayana schools stress the urgency of one's own awakening in order to end suffering. Some contemporary Theravadin figures have thus indicated a sympathetic stance toward the Mahayana philosophy found in the Heart Sutra and the Mūlamadhyamakakārikā. The Mahayanists were bothered by the substantialist thought of the Sarvāstivādins and Sautrāntikins, and in emphasizing the doctrine of śūnyatā, David Kalupahana holds that they endeavored to preserve the early teaching. The Theravadins too refuted the Sarvāstivādins and Sautrāntikins (and followers of other schools) on the grounds that their theories were in conflict with the non-substantialism of the canon. The Theravada arguments are preserved in the Kathavatthu. Opinions of scholars Some western scholars still regard the Theravada school as one of the Hinayana schools referred to in Mahayana literature, or regard Hinayana as a synonym for Theravada. These scholars understand
always celebrated on Christmas Day, saying that he joked about being cheated out of a present every year. Sperber and Lax noted that a birth announcement in the Ontario County Times of January 10, 1900 rules out the possibility of a January 23 birthdate; state and federal census records from 1900 also report a Christmas 1899 birthdate. Belmont, Bogart's father, was a cardiopulmonary surgeon. Maud was a commercial illustrator who received her art training in New York and France, including study with James Abbott McNeill Whistler. She later became art director of the fashion magazine The Delineator and a militant suffragette. Maud used a drawing of baby Humphrey in an advertising campaign for Mellins Baby Food. She earned over $50,000 a year at the peak of her career – a very large sum of money at the time, and considerably more than her husband's $20,000. The Bogarts lived in an Upper West Side apartment, and had a cottage on a 55-acre estate on Canandaigua Lake in upstate New York. When he was young, Bogart's group of friends at the lake would put on plays. He had two younger sisters: Frances ("Pat") and Catherine Elizabeth ("Kay"). Bogart's parents were busy in their careers, and frequently fought. Very formal, they showed little emotion towards their children. Maud told her offspring to call her "Maud" instead of "Mother", and showed little, if any, physical affection for them. When she was pleased, she "[c]lapped you on the shoulder, almost the way a man does", Bogart recalled. "I was brought up very unsentimentally but very straightforwardly. A kiss, in our family, was an event. Our mother and father didn't glug over my two sisters and me." Bogart was teased as a boy for his curls, tidiness, the "cute" pictures his mother had him pose for, the Little Lord Fauntleroy clothes in which she dressed him, and for his first name. He inherited a tendency to needle, a fondness for fishing, a lifelong love of boating, and an attraction to strong-willed women from his father. Bogart attended the private Delancey School until the fifth grade, and then attended the prestigious Trinity School. He was an indifferent, sullen student who showed no interest in after-school activities. Bogart later attended Phillips Academy, a boarding school to which he was admitted based on family connections. Although his parents hoped that he would go on to Yale University, Bogart left Phillips in 1918 after one semester. He failed four out of six classes. Several reasons have been given; according to one, he was expelled for throwing the headmaster (or a groundskeeper) into Rabbit Pond on campus. Another cited smoking, drinking, poor academic performance, and (possibly) inappropriate comments made to the staff. In a third scenario, Bogart was withdrawn by his father for failing to improve his grades. His parents were deeply disappointed in their failed plans for his future. Navy With no viable career options, Bogart enlisted in the United States Navy in the spring of 1918 (during World War I), and served as a coxswain. He recalled later, "At eighteen, war was great stuff. Paris! Sexy French girls! Hot damn!" Bogart was recorded as a model sailor, who spent most of his sea time after the armistice ferrying troops back from Europe. Bogart left the service on June 18, 1919 at the rank of Boatswain's Mate Third Class. During the Second World War, Bogart attempted to reenlist in the Navy but was rejected due to his age. 
He then volunteered for the Coast Guard Temporary Reserve in 1944, patrolling the California coastline in his yacht, the Santana. He may have received his trademark scar and developed his characteristic lisp during his naval stint. There are several conflicting stories. In one, his lip was cut by shrapnel when his ship was shelled. The ship was never shelled, however, and Bogart may not have been at sea before the armistice. Another story, held by longtime friend Nathaniel Benchley, was that Bogart was injured while taking a prisoner to Portsmouth Naval Prison in Kittery, Maine. While changing trains in Boston, the handcuffed prisoner reportedly asked Bogart for a cigarette. When Bogart looked for a match, the prisoner smashed him across the mouth with the cuffs (cutting Bogart's lip) and fled before he was recaptured and imprisoned. In an alternative version, Bogart was struck in the mouth by a handcuff loosened while freeing his charge; the other handcuff was still around the prisoner's wrist. By the time Bogart was treated by a doctor, a scar had formed. David Niven said that when he first asked Bogart about his scar, however, he said that it was caused by a childhood accident. "Goddamn doctor", Bogart later told Niven. "Instead of stitching it up, he screwed it up." According to Niven, the stories that Bogart got the scar during wartime were made up by the studios. His post-service physical did not mention the lip scar, although it noted many smaller scars. When actress Louise Brooks met Bogart in 1924, he had scar tissue on his upper lip which Brooks said Bogart may have had partially repaired before entering the film industry in 1930. Brooks said that his "lip wound gave him no speech impediment, either before or after it was mended." Acting First performances Bogart returned home to find his father in poor health, his medical practice faltering, and much of the family's wealth lost in bad timber investments. His character and values developed separately from his family during his navy days, and he began to rebel. Bogart became a liberal who disliked pretension, phonies and snobs, sometimes defying conventional behavior and authority; he was also well-mannered, articulate, punctual, self-effacing and standoffish. After his naval service, he worked as a shipper and a bond salesman, joining the Coast Guard Reserve. Bogart resumed his friendship with Bill Brady Jr. (whose father had show-business connections), and obtained an office job with William A. Brady's new World Films company. Although he wanted to try his hand at screenwriting, directing, and production, he excelled at none. Bogart was stage manager for Brady's daughter Alice's play A Ruined Lady. He made his stage debut a few months later as a Japanese butler in Alice's 1921 play Drifting (nervously delivering one line of dialogue), and appeared in several of her subsequent plays. Although Bogart had been raised to believe that acting was a lowly profession, he liked the late hours actors kept and the attention they received: "I was born to be indolent and this was the softest of rackets." He spent much of his free time in speakeasies, drinking heavily. A barroom brawl at this time was also a purported cause of Bogart's lip damage, dovetailing with Louise Brooks' account. Preferring to learn by doing, he never took acting lessons. Bogart was persistent and worked steadily at his craft, appearing in at least 17 Broadway productions between 1922 and 1935. 
He played juveniles or romantic supporting roles in drawing-room comedies and is reportedly the first actor to say, "Tennis, anyone?" on stage. According to Alexander Woollcott, Bogart "is what is usually and mercifully described as inadequate." Other critics were kinder. Heywood Broun, reviewing Nerves, wrote: "Humphrey Bogart gives the most effective performance ... both dry and fresh, if that be possible". He played a juvenile lead (reporter Gregory Brown) in Lynn Starling's comedy Meet the Wife, which had a successful 232-performance run at the Klaw Theatre from November 1923 through July 1924. Bogart disliked his trivial, effeminate early-career parts, calling them "White Pants Willie" roles. While playing a double role in Drifting at the Playhouse Theatre in 1922, he met actress Helen Menken; they were married on May 20, 1926, at the Gramercy Park Hotel in New York City. Divorced on November 18, 1927, they remained friends. Menken said in her divorce filing that Bogart valued his career more than marriage, citing neglect and abuse. He married actress Mary Philips on April 3, 1928, at her mother's apartment in Hartford, Connecticut; Bogart and Philips had worked together in the play Nerves during its brief run at the Comedy Theatre in 1924. Theatrical production dropped off sharply after the Wall Street Crash of 1929, and many of the more-photogenic actors headed for Hollywood. Bogart debuted on film with Helen Hayes in the 1928 two-reeler, The Dancing Town, a complete copy of which has not been found. He also appeared with Joan Blondell and Ruth Etting in a Vitaphone short, Broadway's Like That (1930), which was rediscovered in 1963. Broadway to Hollywood Bogart signed a contract with the Fox Film Corporation for $750 a week. There he met Spencer Tracy, a Broadway actor whom Bogart liked and admired, and the two men became close friends and drinking companions. In 1930, Tracy first called him "Bogie". Tracy made his feature film debut in his only movie with Bogart, John Ford's early sound film Up the River (1930), in which their leading roles were as inmates. Tracy received top billing, but Bogart's picture appeared on the film's posters. He was billed fourth behind Tracy, Claire Luce and Warren Hymer but his role was almost as large as Tracy's and much larger than Luce's or Hymer's. A quarter of a century later, the two men planned to make The Desperate Hours together. Both insisted upon top billing, however; Tracy dropped out, and was replaced by Fredric March. Bogart then had a supporting role in Bad Sister (1931) with Bette Davis. Bogart shuttled back and forth between Hollywood and the New York stage from 1930 to 1935, out of work for long periods. His parents had separated; his father died in 1934 in debt, which Bogart eventually paid off. He inherited his father's gold ring, which he wore in many of his films. At his father's deathbed, Bogart finally told him how much he loved him. Bogart's second marriage was rocky; dissatisfied with his acting career, depressed and irritable, he drank heavily. In Hollywood permanently: The Petrified Forest In 1934, Bogart starred in the Broadway play Invitation to a Murder at the Theatre Masque (renamed the John Golden Theatre in 1937). Its producer, Arthur Hopkins, heard the play from offstage; he sent for Bogart and offered him the role of escaped murderer Duke Mantee in Robert E. Sherwood's forthcoming play, The Petrified Forest. Hopkins later recalled: The play had 197 performances at the Broadhurst Theatre in New York in 1935. 
Although Leslie Howard was the star, The New York Times critic Brooks Atkinson said that the play was "a peach ... a roaring Western melodrama ... Humphrey Bogart does the best work of his career as an actor." Bogart said that the play "marked my deliverance from the ranks of the sleek, sybaritic, stiff-shirted, swallow-tailed 'smoothies' to which I seemed condemned to life." However, he still felt insecure. Warner Bros. bought the screen rights to The Petrified Forest in 1935. The play seemed ideal for the studio, which was known for its socially-realistic pictures for a public entranced by real-life criminals such as John Dillinger and Dutch Schultz. Bette Davis and Leslie Howard were cast. Howard, who held the production rights, made it clear that he wanted Bogart to star with him. The studio tested several Hollywood veterans for the Duke Mantee role and chose Edward G. Robinson, who had star appeal and was due to make a film to fulfill his contract. Bogart cabled news of this development to Howard in Scotland, who replied: "Att: Jack Warner Insist Bogart Play Mantee No Bogart No Deal L.H.". When Warner Bros. saw that Howard would not budge, they gave in and cast Bogart. Jack Warner wanted Bogart to use a stage name, but Bogart declined having built a reputation with his name in Broadway theater. The film version of The Petrified Forest was released in 1936. According to Variety, "Bogart's menace leaves nothing wanting". Frank S. Nugent wrote for The New York Times that the actor "can be a psychopathic gangster more like Dillinger than the outlaw himself." The film was successful at the box office, earning $500,000 in rentals, and made Bogart a star. He never forgot Howard's favor and named his only daughter, Leslie Howard Bogart, after him in 1952. Supporting gangster and villain roles Despite his success in The Petrified Forest (an "A movie"), Bogart signed a tepid 26-week contract at $550 per week and was typecast as a gangster in a series of B movie crime dramas. Although he was proud of his success, the fact that it derived from gangster roles weighed on him: "I can't get in a mild discussion without turning it into an argument. There must be something in my tone of voice, or this arrogant face—something that antagonizes everybody. Nobody likes me on sight. I suppose that's why I'm cast as the heavy." In spite of his success, Warner Bros. had no interest in raising Bogart's profile. His roles were repetitive and physically demanding; studios were not yet air-conditioned, and his tightly scheduled job at Warners was anything but the indolent and "peachy" actor's life he hoped for. Although Bogart disliked the roles chosen for him, he worked steadily. "In the first 34 pictures" for Warner's, he told George Frazier, "I was shot in 12, electrocuted or hanged in 8, and was a jailbird in 9". He averaged a film every two months between 1936 and 1940, sometimes working on two films at the same time. Bogart used these years to begin developing his film persona: a wounded, stoical, cynical, charming, vulnerable, self-mocking loner with a code of honor. Amenities at Warners were few, compared to the prestigious Metro-Goldwyn-Mayer. Bogart thought that the Warners wardrobe department was cheap, and often wore his own suits in his films; he used his dog, Zero, to play Pard (his character's dog) in High Sierra. His disputes with Warner Bros. over roles and money were similar to those waged by the studio with more established and less malleable stars such as Bette Davis and James Cagney. 
Leading men at Warner Bros. included James Cagney and Edward G. Robinson. Most of the studio's better scripts went to them (or others), leaving Bogart with what was left: films like San Quentin (1937), Racket Busters (1938), and You Can't Get Away with Murder (1939). His only leading role during this period was in Dead End (1937, on loan to Samuel Goldwyn), as a gangster modeled after Baby Face Nelson. Bogart played violent roles so often that in Nevil Shute's 1939 novel, What Happened to the Corbetts, the protagonist replies "I've seen Humphrey Bogart with one often enough" when asked if he knows how to operate an automatic weapon. Although he played a variety of supporting roles in films such as Angels with Dirty Faces (1938), Bogart's roles were either rivals of characters played by Cagney and Robinson or a secondary member of their gang. In Black Legion (1937), a movie Graham Greene described as "intelligent and exciting, if rather earnest", he played a good man who was caught up with (and destroyed by) a racist organization. The studio cast Bogart as a wrestling promoter in Swing Your Lady (1938), a "hillbilly musical" which he reportedly considered his worst film performance. He played a rejuvenated, formerly-dead scientist in The Return of Doctor X (1939), his only horror film: "If it'd been Jack Warner's blood ... I wouldn't have minded so much. The trouble was they were drinking mine and I was making this stinking movie." His wife, Mary, had a stage hit in A Touch of Brimstone and refused to abandon her Broadway career
The Maltese Falcon Now regarded as a classic film noir, The Maltese Falcon (1941) was John Huston's directorial debut. Based on the Dashiell Hammett novel, it was first serialized in the pulp magazine Black Mask in 1929 and was the basis of two earlier film versions; the second was Satan Met a Lady (1936), starring Bette Davis. Producer Hal B. Wallis initially offered to cast George Raft as the leading man, but Raft (then better known than Bogart) had a contract stipulating he was not required to appear in remakes. Fearing that it would be nothing more than a sanitized version of the pre-Production Code The Maltese Falcon (1931), Raft turned down the role to make Manpower with director Raoul Walsh, with whom he had worked on The Bowery in 1933. Huston then eagerly accepted Bogart as his Sam Spade. Complementing Bogart were co-stars Sydney Greenstreet, Peter Lorre, Elisha Cook Jr., and Mary Astor as the treacherous female foil. Bogart's sharp timing and facial expressions were praised by the cast and director as vital to the film's quick action and rapid-fire dialogue. It was a commercial hit, and a major triumph for Huston. Bogart was unusually happy with the film: "It is practically a masterpiece. I don't have many things I'm proud of ... but that's one". Casablanca Bogart played his first romantic lead in Casablanca (1942): Rick Blaine, an expatriate nightclub owner hiding from a suspicious past and negotiating a fine line among Nazis, the French underground, the Vichy prefect and unresolved feelings for his ex-girlfriend. Bosley Crowther wrote in his November 1942 New York Times review that Bogart's character was used "to inject a cold point of tough resistance to evil forces afoot in Europe today". The film, directed by Michael Curtiz and produced by Hal Wallis, featured Ingrid Bergman, Claude Rains, Sydney Greenstreet, Paul Henreid, Conrad Veidt, Peter Lorre and Dooley Wilson. Bogart and Bergman's on-screen relationship was based on professionalism rather than actual rapport, although Mayo Methot assumed otherwise. Off the set, the co-stars hardly spoke. Bergman (who had a reputation for affairs with her leading men) later said about Bogart, "I kissed him but I never knew him." Because she was taller, Bogart had blocks attached to his shoes in some scenes. Bogart is reported to have been responsible for the notion that Rick Blaine should be portrayed as a chess player, a metaphor for the relationships he maintained with friends, enemies, and allies. He played tournament-level chess (one division below master) in real life, often enjoying games with crew members and cast but finding his better in Paul Henreid. Casablanca won the Academy Award for Best Picture at the 16th Academy Awards for 1943. Bogart was nominated for Best Actor in a Leading Role, but lost to Paul Lukas for his performance in Watch on the Rhine. The film vaulted Bogart from fourth place to first in the studio's roster, however, finally overtaking James Cagney. He more than doubled his annual salary to over $460,000 by 1946, making him the world's highest-paid actor. Bogart went on United Service Organizations and War Bond tours with Methot in 1943 and 1944, making arduous trips to Italy and North Africa (including Casablanca). He was still required to perform in films with weak scripts, leading to conflicts with the front office. He starred in Conflict (1945, again with Greenstreet), but turned down God is My Co-Pilot that year. 
Bogart and Bacall To Have and Have Not Howard Hawks introduced Bogart and Lauren Bacall (1924–2014) while Bogart was filming Passage to Marseille (1944). The three subsequently collaborated on To Have and Have Not (1944), a loose adaptation of the Ernest Hemingway novel, and Bacall's film debut. It has several similarities to Casablanca: the same kind of hero and enemies, and a piano player (portrayed this time by Hoagy Carmichael) as a supporting character. When they met, Bacall was 19 and Bogart 44; he nicknamed her "Baby." A model since age 16, she had appeared in two failed plays. Bogart was attracted by Bacall's high cheekbones, green eyes, tawny blond hair, lean body, maturity, poise and earthy, outspoken honesty; he reportedly said, "I just saw your test. We'll have a lot of fun together". Their emotional bond was strong from the start; their difference in age and acting experience encouraged a mentor-student dynamic. In contrast to the Hollywood norm, their affair was Bogart's first with a leading lady. His early meetings with Bacall were discreet and brief, their separations bridged by love letters. The relationship made it easier for Bacall to make her first film, and Bogart did his best to put her at ease with jokes and quiet coaching. He encouraged her to steal scenes; Howard Hawks also did his best to highlight her role, and found Bogart easy to direct. However, Hawks began to disapprove of the relationship. He considered himself Bacall's protector and mentor, and Bogart was usurping that role. Not usually drawn to his starlets, the married director also fell for Bacall; he told her that she meant nothing to Bogart and threatened to send her to the poverty-row studio Monogram Pictures. Bogart calmed her down, and then went after Hawks; Jack Warner settled the dispute, and filming resumed. Hawks said about Bacall, "Bogie fell in love with the character she played, so she had to keep playing it the rest of her life." The Big Sleep Months after wrapping To Have and Have Not, Bogart and Bacall were reunited for an encore: the film noir The Big Sleep (1946), based on the novel by Raymond Chandler with script help from William Faulkner. Chandler admired the actor's performance: "Bogart can be tough without a gun. Also, he has a sense of humor that contains that grating undertone of contempt." Although the film was completed and scheduled for release in 1945, it was withdrawn and re-edited to add scenes exploiting Bogart and Bacall's box-office chemistry in To Have and Have Not and the publicity surrounding their offscreen relationship. At the insistence of director Howard Hawks, production partner Charles K. Feldman agreed to a rewrite of Bacall's scenes to heighten the "insolent" quality which had intrigued critics such as James Agee and audiences of the earlier film, and a memo was sent to studio head Jack Warner. The dialogue, especially in the added scenes supplied by Hawks, was full of sexual innuendo. The film was successful, although some critics found its plot confusing and overly complicated. According to Chandler, Hawks and Bogart argued about who killed the chauffeur; when Chandler received an inquiry by telegram, he could not provide an answer. Marriage Bogart filed for divorce from Methot in February 1945. He and Bacall married in a small ceremony at the country home of Bogart's close friend, Pulitzer Prize-winning author Louis Bromfield, at Malabar Farm (near Lucas, Ohio) on May 21, 1945.
They moved into a $160,000 white brick mansion in an exclusive neighborhood of Los Angeles's Holmby Hills. The marriage was a happy one, though there were tensions due to their differences. Bogart's drinking was sometimes problematic. He was a homebody, and Bacall liked the nightlife; he loved the sea, which made her seasick. Bogart bought the Santana, a sailing yacht, from actor Dick Powell in 1945. He found the sea a sanctuary and spent about thirty weekends a year on the water, with a particular fondness for sailing around Catalina Island: "An actor needs something to stabilize his personality, something to nail down what he really is, not what he is currently pretending to be." Bogart joined the Coast Guard Temporary Reserve, offering the Coast Guard use of the Santana. He reportedly attempted to enlist, but was turned down due to his age. Dark Passage and Key Largo The suspenseful Dark Passage (1947) was Bogart and Bacall's next collaboration. Vincent Parry (Bogart) is intent on finding the real killer in the murder for which he was convicted and imprisoned. According to Bogart's biographer, Stefan Kanfer, it was "a production line film noir with no particular distinction". Bogart and Bacall's last pairing in a film was in Key Largo (1948), directed by John Huston. Edward G. Robinson was billed second (behind Bogart) as gangster Johnny Rocco: a seething, older synthesis of many of his early bad-guy roles. The billing question was hard-fought: at the end of at least one of the trailers, Robinson is listed above Bogart in the last frame's list of the actors' names, and in the film itself Robinson's name, appearing between Bogart's and Bacall's, is pictured slightly higher onscreen than the other two. Robinson had top billing over Bogart in their four previous films together: Bullets or Ballots (1936), Kid Galahad (1937), The Amazing Dr. Clitterhouse (1938) and Brother Orchid (1940). In some posters for Key Largo, Robinson's picture is substantially larger than Bogart's and shows him in the foreground manhandling Bacall while Bogart is in the background. The characters are trapped during a hurricane in a hotel owned by Bacall's father-in-law, portrayed by Lionel Barrymore. Claire Trevor won an Academy Award for Best Supporting Actress for her performance as Rocco's physically abused, alcoholic girlfriend. Later career The Treasure of the Sierra Madre Riding high in 1947 with a new contract which provided limited script refusal and the right to form his own production company, Bogart reunited with John Huston for The Treasure of the Sierra Madre: a stark tale of greed among three gold prospectors in Mexico. Lacking a love interest or a happy ending, it was considered a risky project. Bogart later said about co-star (and John Huston's father) Walter Huston, "He's probably the only performer in Hollywood to whom I'd gladly lose a scene." The film was shot in the heat of summer for greater realism and atmosphere and was grueling to make. James Agee wrote, "Bogart does a wonderful job with this character ... miles ahead of the very good work he has done before." Although John Huston won the Academy Award for Best Director and screenplay and his father won the Best Supporting Actor award, the film had mediocre box-office results. Bogart complained, "An intelligent script, beautifully directed—something different—and the public turned a cold shoulder on it."
House Un-American Activities Committee Bogart, a liberal Democrat, organized the Committee for the First Amendment (a delegation to Washington, D.C.) opposing what he saw as the House Un-American Activities Committee's harassment of Hollywood screenwriters and actors. He later wrote an article, "I'm No Communist", for the March 1948 issue of Photoplay magazine, distancing himself from the Hollywood Ten to counter the negative publicity resulting from his appearance in Washington. Bogart wrote, "The ten men cited for contempt by the House Un-American Activities Committee were not defended by us." Santana Productions Bogart created his film company, Santana Productions (named after his yacht and the cabin cruiser in Key Largo), in 1948. The right to create his own company had left Jack Warner furious, fearful that other stars would do the same and further erode the major studios' power. In addition to pressure from freelancing actors such as Bogart, James Stewart, and Henry Fonda, the major studios were beginning to buckle under the impact of television and the enforcement of antitrust laws which broke up theater chains. Bogart appeared in his final films for Warners, Chain Lightning (1950) and The Enforcer (1951). Except for Beat the Devil (1953), originally distributed in the United States by United Artists, the company released its films through Columbia Pictures; Columbia re-released Beat the Devil a decade later. In quick succession, Bogart starred in Knock on Any Door (1949), Tokyo Joe (1949), In a Lonely Place (1950), and Sirocco (1951). Santana also made two films without him: And Baby Makes Three (1949) and The Family Secret (1951). Although most lost money at the box office (ultimately forcing Santana's sale), at least two retain a reputation; In a Lonely Place is considered a film-noir high point. Bogart plays Dixon Steele, an embittered writer with a violent reputation who is the primary suspect in the murder of a young woman and falls in love with failed actress Laurel Gray (Gloria Grahame). Several Bogart biographers, and actress-writer Louise Brooks, have felt that this role is closest to the real Bogart. According to Brooks, the film "gave him a role that he could play with complexity, because the film character's pride in his art, his selfishness, drunkenness, lack of energy stabbed with lightning strokes of violence were shared by the real Bogart". The character mimics some of Bogart's personal habits, twice ordering the actor's favorite meal (ham and eggs). A parody of sorts of The Maltese Falcon, Beat the Devil was the final film for Bogart and John Huston. Co-written by Truman Capote, the eccentrically filmed story follows an amoral group of rogues, one of whom was portrayed by Peter Lorre, chasing an unattainable treasure. Bogart sold his interest in Santana to Columbia for over $1 million in 1955. The African Queen Outside Santana Productions, Bogart starred with Katharine Hepburn in the John Huston-directed The African Queen in 1951. The C. S. Forester novel on which it was based was overlooked and left undeveloped for 15 years until producer Sam Spiegel and Huston bought the rights. Spiegel sent Katharine Hepburn the book; she suggested Bogart for the male lead, believing that "he was the only man who could have played that part". Huston's love of adventure, his deep, longstanding friendship (and success) with Bogart, and the chance to work with Hepburn convinced the actor to leave Hollywood for a difficult shoot on location in the Belgian Congo.
Bogart was to get 30 percent of the profits and Hepburn 10 percent, plus a relatively small salary for both. The stars met in London and announced that they would work together. Bacall came for the over-four-month duration, leaving their young son in Los Angeles. The Bogarts began the trip with a junket through Europe, including a visit with Pope Pius XII. Bacall later made herself useful as a cook, nurse and clothes washer; her husband said: "I don't know what we'd have done without her. She Luxed my undies in darkest Africa." Nearly everyone in the cast developed dysentery except Bogart and Huston, who subsisted on canned food and alcohol; Bogart said, "All I ate was baked beans, canned asparagus and Scotch whisky. Whenever a fly bit Huston or me, it dropped dead." Hepburn (a teetotaler) fared worse in the difficult conditions, losing weight and at one point becoming very ill. Bogart resisted Huston's insistence on using real leeches in a key scene where Charlie has to drag his steam launch through an infested marsh, and reasonable fakes were employed. The crew overcame illness, army-ant infestations, leaky boats, poor food, attacking hippos, poor water filters, extreme heat, isolation, and a boat fire to complete the film. Despite the discomfort of jumping from the boat into swamps, rivers and marshes, The African Queen apparently rekindled Bogart's early love of boats; when he returned to California, he bought a classic mahogany Hacker-Craft runabout which he kept until his death. His performance as cantankerous skipper Charlie Allnutt earned Bogart an Academy Award for Best Actor in 1951 (his only award of three nominations), and he considered it the best of his film career. Promising friends that if he won his speech would break the convention of thanking everyone in sight, Bogart advised Claire Trevor when she was nominated for Key Largo to "just say you did it all yourself and don't thank anyone". When Bogart won, however, he said: "It's a long way from the Belgian Congo to the stage of this theatre. It's nicer to be here. Thank you very much ... No one does it alone. As in tennis, you need a good opponent or partner to bring out the best in you. John and Katie helped me to be where I am now." Despite the award and its accompanying recognition, Bogart later said: "The way to survive an Oscar is never to try to win another one ... too many stars ... win it and then figure they have to top themselves ... they become afraid to take chances. The result: A lot of dull performances in dull pictures." The African Queen was Bogart's first starring Technicolor role. The Caine Mutiny Bogart dropped his asking price to obtain the role of Captain Queeg in Edward Dmytryk's drama, The Caine Mutiny (1954). Though he retained some of his old bitterness about having to do so, he delivered a strong performance in the lead; he received his final Oscar nomination and was the subject of a June 7, 1954 Time magazine cover story. Despite his success, Bogart was still melancholy; he grumbled to (and feuded with) the studio, while his health began to deteriorate. The character of Queeg was similar to his roles in The Maltese Falcon, Casablanca and The Big Sleep–the wary loner who trusts no one—but without their warmth and humor. Like his portrayal of Fred C. Dobbs in The Treasure of the Sierra Madre, Bogart's Queeg is a paranoid, self-pitying character whose small-mindedness eventually destroys him. 
Henry Fonda played a different role in the Broadway version of The Caine Mutiny, generating publicity for the film. Final roles For Sabrina (1954), Billy Wilder wanted Cary Grant for the older male lead and chose Bogart to play the conservative brother who competes with his younger, playboy sibling (William Holden) for the affection of the Cinderella-like Sabrina (Audrey Hepburn). Although Bogart was lukewarm about the part, he agreed to it on a handshake with Wilder without a finished script but with the director's assurance that he would take good care of Bogart during filming. The actor, however, got along poorly with his director and co-stars; he complained about the script's last-minute drafting and delivery, and accused Wilder of favoring Hepburn and Holden on and off the set. Wilder was the opposite of Bogart's ideal director (John Huston) in style and personality; Bogart complained to the press that Wilder was "overbearing" and "is [a] kind of Prussian German with a riding crop. He is the type of director I don't like to work with ... the picture is a crock of crap. I got sick and tired of who gets Sabrina." Wilder later said, "We parted as enemies but finally made up." Despite the acrimony, the film was successful; according to a review in The New York Times, Bogart was "incredibly adroit ... the skill with which this old rock-ribbed actor blends the gags and such duplicities with a manly manner of melting is one of the incalculable joys of the show". Joseph L. Mankiewicz's The Barefoot Contessa (1954) was filmed in Rome. In this Hollywood backstory, Bogart is a broken-down man, a cynical director-narrator who saves his career by making a star of a flamenco dancer modeled on Rita Hayworth. He was uneasy with Ava Gardner in the female lead; she had just broken up with his Rat Pack buddy Frank Sinatra, and Bogart was annoyed by her inexperienced performance. The actor was generally praised as the film's strongest part. During filming and while Bacall was home, Bogart resumed his discreet affair with Verita Bouvaire-Thompson (his long-time studio assistant, whom he drank with and took sailing). When Bacall found them together, she extracted an expensive shopping spree from her husband; the three traveled together after the shooting. Bogart could be generous with actors, particularly those who were blacklisted, down on their luck or having personal problems. During the filming of the Edward Dmytryk-directed The Left Hand of God (1955), he noticed his co-star Gene Tierney having a hard time remembering her lines and behaving oddly; he coached her, feeding Tierney her lines. Familiar with mental illness because of his sister's bouts of depression, Bogart encouraged Tierney to seek treatment. He also stood behind Joan Bennett and insisted on her as his co-star in Michael Curtiz's We're No Angels (1955) when a scandal made her persona non grata with studio head Jack Warner. Bogart had already been diagnosed with terminal cancer when shooting The Harder They Fall, a boxing drama with Rod Steiger in a supporting role. Steiger later mentioned Bogart's courage and geniality during his final performance: "Bogey and I got on very well. Unlike some other stars, when they had closeups, you might have been relegated to a two-shot, or cut out altogether. Bogey didn't play those games. He was a professional and had tremendous authority. He'd come in exactly at 9am and leave at precisely 6pm. I remember once walking to lunch in between takes and seeing Bogey on the lot. 
I shouldn't have because his work was finished for the day. I asked him why he was still on the lot, and he said, 'They want to shoot some retakes of my closeups because my eyes are too watery'. A little while later, after the film, somebody came up to me with word of Bogey's death. Then it struck me. His eyes were watery because he was in pain with the cancer. I thought: 'How dumb can you be, Rodney'!" Television and radio Bogart rarely performed on television, but he and Bacall appeared on Edward R. Murrow's Person to Person and disagreed on the answer to every question. He also appeared on The Jack Benny Show, where a surviving kinescope of the live telecast captures him in his only TV sketch-comedy performance (October 25, 1953). Bogart and Bacall worked on an early color telecast in 1955, an NBC adaptation of "The Petrified Forest" for Producers' Showcase. Bogart received top billing, Henry Fonda played Leslie Howard's role and Bacall played Bette Davis's part. Jack Klugman, Richard Jaeckel, and Jack Warden played supporting roles. In the late 1990s, Bacall donated the only known kinescope of the 1955 performance (in black and white) to the Museum Of Television & Radio (now the Paley Center for Media), where it remains archived for viewing in New York City and Los Angeles. It is now in the public domain. Bogart also performed radio adaptations of some of his best-known films, such as Casablanca and The Maltese Falcon, and recorded a radio series entitled Bold Venture with Bacall. Personal life Children Bogart became a father at age 49, when Bacall gave birth to Stephen Humphrey Bogart on January 6, 1949, during the filming of Tokyo Joe. The name was taken from Steve, Bogart's character's nickname in To Have and Have Not. Stephen became an author and biographer and hosted a television special about his father on Turner Classic Movies. The couple's daughter, Leslie Howard Bogart, was born on August 23, 1952. Her first and middle names honor Leslie Howard, Bogart's friend and co-star in The Petrified Forest. Rat Pack Bogart was a founding member and the original leader of the Hollywood Rat Pack. In the spring of 1955, after a long party in Las Vegas attended by Frank Sinatra, Judy Garland, her husband Sidney Luft, Michael Romanoff and his wife Gloria, David Niven, Angie Dickinson and others, Bacall surveyed the wreckage and said: "You look like a goddamn rat pack." The name stuck and was made official
to the most skilled. For that he must pass from representing a single figure to several together; history and myth must be depicted; great events must be represented as by historians, or like the poets, subjects that will please, and climbing still higher, he must have the skill to cover under the veil of myth the virtues of great men in allegories, and the mysteries they reveal". By the late 18th century, with both religious and mythological painting in decline, there was an increased demand for paintings of scenes from history, including contemporary history. This was in part driven by the changing audience for ambitious paintings, which now increasingly made their reputation in public exhibitions rather than by impressing the owners of and visitors to palaces and public buildings. Classical history remained popular, but scenes from national histories were often the best-received. From 1760 onwards, the Society of Artists of Great Britain, the first body to organize regular exhibitions in London, awarded two generous prizes each year to paintings of subjects from British history. The unheroic nature of modern dress was regarded as a serious difficulty. When, in 1770, Benjamin West proposed to paint The Death of General Wolfe in contemporary dress, many people firmly instructed him to use classical costume. He ignored these comments and showed the scene in modern dress. Although George III refused to purchase the work, West succeeded both in overcoming his critics' objections and inaugurating a more historically accurate style in such paintings. Other artists depicted scenes, regardless of when they occurred, in classical dress, and for a long time, especially during the French Revolution, history painting often focused on depictions of the heroic male nude. The large production, using the finest French artists, of propaganda paintings glorifying the exploits of Napoleon was matched by works, showing both victories and losses, from the anti-Napoleonic alliance by artists such as Goya and J.M.W. Turner. Théodore Géricault's The Raft of the Medusa (1818–1819) was a sensation, appearing to update the history painting for the 19th century, and showing anonymous figures famous only for being victims of what was then a famous and controversial disaster at sea. Conveniently their clothes had been worn away to classical-seeming rags by the point the painting depicts. At the same time the demand for traditional large religious history paintings very largely fell away. In the mid-nineteenth century there arose a style known as historicism, which marked a formal imitation of historical styles and/or artists. Another development in the nineteenth century was the treatment of historical subjects, often on a large scale, with the values of genre painting, the depiction of scenes of everyday life, and anecdote. Grand depictions of events of great public importance were supplemented with scenes depicting more personal incidents in the lives of the great, or of scenes centred on unnamed figures involved in historical events, as in the Troubadour style. At the same time scenes of ordinary life with moral, political or satirical content often became the main vehicle for expressive interplay between figures in painting, whether given a modern or historical setting.
By the later 19th century, history painting was often explicitly rejected by avant-garde movements such as the Impressionists (except for Édouard Manet) and the Symbolists, and according to one recent writer "Modernism was to a considerable extent built upon the rejection of History Painting... All other genres are deemed capable of entering, in one form or another, the 'pantheon' of modernity considered, but History Painting is excluded". History painting and historical painting The terms Initially, "history painting" and "historical painting" were used interchangeably in English, as when Sir Joshua Reynolds in his fourth Discourse uses both indiscriminately to cover "history painting", while saying "...it ought to be called poetical, as in reality it is", reflecting the French term peinture historique, one equivalent of "history painting". The terms began to separate in the 19th century, with "historical painting" becoming a sub-group of "history painting" restricted to subjects taken from history in its normal sense. In 1853 John Ruskin asked his audience: "What do you at present mean by historical painting? Now-a-days it means the endeavour, by the power of imagination, to portray some historical event of past days." So for example Harold Wethey's three-volume catalogue of the paintings of Titian (Phaidon, 1969–75) is divided between "Religious Paintings", "Portraits", and "Mythological and Historical Paintings", though both volumes I and III cover what is included in the term "History Paintings". This distinction is useful but is by no means generally observed, and the terms are still often used in a confusing manner. Because of the potential for confusion modern academic writing tends to avoid the phrase "historical painting", talking instead of "historical subject matter" in history painting, but where the phrase is still used in contemporary scholarship it will normally mean the painting of subjects from history, very often in the 19th century. "Historical painting" may also be used, especially in discussion of painting techniques in conservation studies, to mean "old", as opposed to modern or recent painting. In 19th-century British writing on art the terms "subject painting" or "anecdotic" painting were often used for works in a line of development going back to William Hogarth of monoscenic depictions of crucial moments in an implied narrative with unidentified characters, such as William Holman Hunt's 1853 painting The Awakening Conscience or Augustus Egg's Past and Present, a set of three paintings, updating sets by Hogarth such as Marriage à-la-mode. 19th century History painting was the dominant form of academic painting in the various national academies in the 18th century, and for most of the 19th, and increasingly historical subjects dominated. During the Revolutionary and Napoleonic periods the heroic treatment of contemporary history in a frankly propagandistic fashion by Antoine-Jean, Baron Gros, Jacques-Louis David, Carle Vernet and others was supported by the French state, but after the fall of Napoleon in 1815 the French governments were not regarded as suitable for heroic treatment and many artists retreated further into the past to find subjects, though in Britain depicting the victories of the Napoleonic Wars mostly occurred after they were over. 
Another path was to choose contemporary subjects that were oppositional to government at home or abroad, and many of what were arguably the last great generation of history paintings were protests at contemporary episodes of repression or outrages at home or abroad: Goya's The Third of May 1808 (1814), Théodore Géricault's The Raft of the Medusa (1818–19), Eugène Delacroix's The Massacre at Chios (1824) and Liberty Leading the People (1830). These were heroic, but showed heroic suffering by ordinary civilians. Romantic artists such as Géricault and Delacroix, and those from other movements such as the English Pre-Raphaelite Brotherhood, continued to regard history painting as the ideal for their most ambitious works. Others such as Jan Matejko in Poland, Vasily Surikov in Russia, José Moreno Carbonero in Spain and Paul Delaroche in France became specialized painters of large historical subjects. The style troubadour ("troubadour style") was a somewhat derisive French term for earlier paintings of medieval and Renaissance scenes, which were often small and depicted moments of anecdote rather than drama; Ingres, Richard Parkes Bonington and Henri Fradelle painted such works. Sir Roy Strong calls this type of work the "Intimate Romantic", and in French it was known as the "peinture de genre historique" or "peinture anecdotique" ("historical genre painting" or "anecdotal painting"). Church commissions for large group scenes from the Bible had greatly declined, and historical painting became very significant. Especially in the early 19th century, much historical painting depicted specific moments from historical literature, with the novels of Sir Walter Scott a particular favourite, in France and other European countries as much as in Great Britain. By the middle of the century medieval scenes were expected to be very carefully researched, using the work of historians of costume, architecture and all elements of decor that were becoming available. An example of this is the extensive research on Byzantine architecture, clothing and decoration made in Parisian museums and libraries by Moreno Carbonero for his masterwork The Entry of Roger de Flor in Constantinople. The provision of examples and expertise for artists, as well as revivalist industrial designers, was one of the motivations for the establishment of museums like the Victoria and Albert Museum in London. New techniques of printmaking such as the chromolithograph made good quality reproductions both relatively cheap and very widely accessible, and also hugely profitable for artist and publisher, as the sales were so large. Historical painting often had a close relationship with Nationalism,
modern (post-classical) work described in De Pictura is Giotto's huge Navicella in mosaic). Artists continued for centuries to strive to make their reputation by producing such works, often neglecting genres to which their talents were better suited. There was some objection to the term, as many writers preferred terms such as "poetic painting" (poesia), or wanted to make a distinction between the "true" istoria, covering history including biblical and religious scenes, and the fabula, covering pagan myth, allegory, and scenes from fiction, which could not be regarded as true. The large works of Raphael were long considered, with those of Michelangelo, as the finest models for the genre. In the Raphael Rooms in the Vatican Palace, allegories and historical scenes are mixed together, and the Raphael Cartoons show scenes from the Gospels, all in the Grand Manner that from the High Renaissance became associated with, and often expected in, history painting. In the Late Renaissance and Baroque the painting of actual history tended to degenerate into panoramic battle-scenes with the victorious monarch or general perched on a horse accompanied with his retinue, or formal scenes of ceremonies, although some artists managed to make a masterpiece from such unpromising material, as Velázquez did with his The Surrender of Breda. An influential formulation of the hierarchy of genres, confirming history painting at the top, was made in 1667 by André Félibien, a historiographer, architect and theoretician of French classicism; it became the classic statement of the theory for the 18th century: Celui qui fait parfaitement des païsages est au-dessus d'un autre qui ne fait que des fruits, des fleurs ou des coquilles. Celui qui peint des animaux vivants est plus estimable que ceux qui ne représentent que des choses mortes & sans mouvement; & comme la figure de l'homme est le plus parfait ouvrage de Dieu sur la Terre, il est certain aussi que celui qui se rend l'imitateur de Dieu en peignant des figures humaines, est beaucoup plus excellent que tous les autres ... un Peintre qui ne fait que des portraits, n'a pas encore cette haute perfection de l'Art, & ne peut prétendre à l'honneur que reçoivent les plus sçavans. Il faut pour cela passer d'une seule figure à la représentation de plusieurs ensemble; il faut traiter l'histoire & la fable; il faut représenter de grandes actions comme les historiens, ou des sujets agréables comme les Poëtes; & montant encore plus haut, il faut par des compositions allégoriques, sçavoir couvrir sous le voile de la fable les vertus des grands hommes, & les mystères les plus relevez. He who produces perfect landscapes is above another who only produces fruit, flowers or seashells. He who paints living animals is more than those who only represent dead things without movement, and as man is the most perfect work of God on the earth, it is also certain that he who becomes an imitator of God in representing human figures, is much more excellent than all the others ... a painter who only does portraits still does not have the highest perfection of his art, and cannot expect the honour due to the most skilled. For that he must pass from representing a single figure to several together; history and myth must be depicted; great events must be represented as by historians, or like the poets, subjects that will please, and climbing still higher, he must have the skill to cover under the veil of myth the virtues of great men in allegories, and the mysteries they reveal".
the measurement above. That means if (Proof: straightforward calculation. If the points are on a hyperbola, one can assume the hyperbola's equation is .) A consequence of the inscribed angle theorem for hyperbolas is the 3-point-form of a hyperbola's equation: The equation of the hyperbola determined by 3 points is the solution of the equation for . As an affine image of the unit hyperbola x² − y² = 1 Another definition of a hyperbola uses affine transformations: Any hyperbola is the affine image of the unit hyperbola with equation . parametric representation An affine transformation of the Euclidean plane has the form , where is a regular matrix (its determinant is not 0) and is an arbitrary vector. If are the column vectors of the matrix , the unit hyperbola is mapped onto the hyperbola is the center, a point of the hyperbola and a tangent vector at this point. vertices In general the vectors are not perpendicular. That means, in general are not the vertices of the hyperbola. But point into the directions of the asymptotes. The tangent vector at point is Because at a vertex the tangent is perpendicular to the major axis of the hyperbola one gets the parameter of a vertex from the equation and hence from which yields (The formulae were used.) The two vertices of the hyperbola are implicit representation Solving the parametric representation for by Cramer's rule and using , one gets the implicit representation . hyperbola in space The definition of a hyperbola in this section gives a parametric representation of an arbitrary hyperbola, even in space, if one allows to be vectors in space. As an affine image of the hyperbola y = 1/x Because the unit hyperbola is affinely equivalent to the hyperbola , an arbitrary hyperbola can be considered as the affine image (see previous section) of the hyperbola is the center of the hyperbola, the vectors have the directions of the asymptotes and is a point of the hyperbola. The tangent vector is At a vertex the tangent is perpendicular to the major axis. Hence and the parameter of a vertex is is equivalent to and are the vertices of the hyperbola. The following properties of a hyperbola are easily proven using the representation of a hyperbola introduced in this section. Tangent construction The tangent vector can be rewritten by factorization: This means that the diagonal of the parallelogram is parallel to the tangent at the hyperbola point (see diagram). This property provides a way to construct the tangent at a point on the hyperbola. This property of a hyperbola is an affine version of the 3-point-degeneration of Pascal's theorem. Area of the grey parallelogram The area of the grey parallelogram in the above diagram is and hence independent of point . The last equation follows from a calculation for the case, where is a vertex and the hyperbola in its canonical form Point construction For a hyperbola with parametric representation (for simplicity the center is the origin) the following is true: For any two points the points are collinear with the center of the hyperbola (see diagram). The simple proof is a consequence of the equation . This property provides a possibility to construct points of a hyperbola if the asymptotes and one point are given. This property of a hyperbola is an affine version of the 4-point-degeneration of Pascal's theorem. Tangent-asymptotes-triangle For simplicity the center of the hyperbola may be the origin and the vectors have equal length. 
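The formulas in the section above appear to have been stripped during extraction. As a hedged illustration of the affine-image parametrization it describes, the following minimal Python sketch (the vectors f0, f1, f2 and the sample parameter values are assumed examples, not taken from the source) generates points p(t) = f0 + f1·cosh t + f2·sinh t and checks, by inverting the affine map with Cramer's rule, that they come from the unit hyperbola x² − y² = 1:

import math

# Sketch (assumed example data): any hyperbola is the affine image
# p(t) = f0 + f1*cosh(t) + f2*sinh(t) of the unit hyperbola x^2 - y^2 = 1,
# where f0 is the centre and f1, f2 are linearly independent vectors.
f0 = (1.0, 2.0)          # centre
f1 = (2.0, 1.0)          # image of the point (1, 0)
f2 = (0.5, 1.5)          # image of the point (0, 1)

def point(t):
    return (f0[0] + f1[0] * math.cosh(t) + f2[0] * math.sinh(t),
            f0[1] + f1[1] * math.cosh(t) + f2[1] * math.sinh(t))

# Invert the affine map (2x2 system solved by Cramer's rule) and check that
# the preimage (u, v) satisfies u^2 - v^2 = 1.
det = f1[0] * f2[1] - f2[0] * f1[1]
for t in (-1.0, 0.0, 0.7, 2.3):
    dx, dy = point(t)[0] - f0[0], point(t)[1] - f0[1]
    u = (dx * f2[1] - f2[0] * dy) / det
    v = (f1[0] * dy - dx * f1[1]) / det
    assert abs(u * u - v * v - 1.0) < 1e-9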
If the last assumption is not fulfilled one can first apply a parameter transformation (see above) in order to make the assumption true. Hence are the vertices, span the minor axis and one gets and . For the intersection points of the tangent at point with the asymptotes one gets the points The area of the triangle can be calculated by a 2 × 2 determinant: (see rules for determinants). is the area of the rhombus generated by . The area of a rhombus is equal to one half of the product of its diagonals. The diagonals are the semi-axes of the hyperbola. Hence: The area of the triangle is independent of the point of the hyperbola: Reciprocation of a circle The reciprocation of a circle B in a circle C always yields a conic section such as a hyperbola. The process of "reciprocation in a circle C" consists of replacing every line and point in a geometrical figure with their corresponding pole and polar, respectively. The pole of a line is the inversion of its closest point to the circle C, whereas the polar of a point is the converse, namely, a line whose closest point to C is the inversion of the point. The eccentricity of the conic section obtained by reciprocation is the ratio of the distances between the two circles' centers to the radius r of reciprocation circle C. If B and C represent the points at the centers of the corresponding circles, then Since the eccentricity of a hyperbola is always greater than one, the center B must lie outside of the reciprocating circle C. This definition implies that the hyperbola is both the locus of the poles of the tangent lines to the circle B, as well as the envelope of the polar lines of the points on B. Conversely, the circle B is the envelope of polars of points on the hyperbola, and the locus of poles of tangent lines to the hyperbola. Two tangent lines to B have no (finite) poles because they pass through the center C of the reciprocation circle C; the polars of the corresponding tangent points on B are the asymptotes of the hyperbola. The two branches of the hyperbola correspond to the two parts of the circle B that are separated by these tangent points. Quadratic equation A hyperbola can also be defined as a second-degree equation in the Cartesian coordinates (x, y) in the plane, provided that the constants Axx, Axy, Ayy, Bx, By, and C satisfy the determinant condition This determinant is conventionally called the discriminant of the conic section. A special case of a hyperbola—the degenerate hyperbola consisting of two intersecting lines—occurs when another determinant is zero: This determinant Δ is sometimes called the discriminant of the conic section. Given the above general parametrization of the hyperbola in Cartesian coordinates, the eccentricity can be found using the formula in Conic section#Eccentricity in terms of coefficients. 
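As a numerical check of the tangent-asymptotes-triangle property stated above, a short Python sketch (the semi-axes a, b and the sample parameters are assumed example values) confirms that the triangle cut from the asymptotes by the tangent line of x²/a² − y²/b² = 1 always has area a·b, independent of the point of tangency:

import math

# Sketch: the tangent at (a*cosh t, b*sinh t) meets the asymptotes
# y = +-(b/a)*x at (a*e^t, b*e^t) and (a*e^-t, -b*e^-t); with the centre
# as third vertex the enclosed triangle should always have area a*b.
a, b = 3.0, 1.5

def triangle_area(t):
    x1, y1 = a * math.exp(t), b * math.exp(t)
    x2, y2 = a * math.exp(-t), -b * math.exp(-t)
    return 0.5 * abs(x1 * y2 - x2 * y1)   # cross-product area, vertex at origin

for t in (-2.0, 0.0, 0.5, 1.7):
    assert abs(triangle_area(t) - a * b) < 1e-9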
The center (xc, yc) of the hyperbola may be determined from the formulae In terms of new coordinates, and , the defining equation of the hyperbola can be written The principal axes of the hyperbola make an angle φ with the positive x-axis that is given by Rotating the coordinate axes so that the x-axis is aligned with the transverse axis brings the equation into its canonical form The major and minor semiaxes a and b are defined by the equations where λ1 and λ2 are the roots of the quadratic equation For comparison, the corresponding equation for a degenerate hyperbola (consisting of two intersecting lines) is The tangent line to a given point (x0, y0) on the hyperbola is defined by the equation where E, F and G are defined by The normal line to the hyperbola at the same point is given by the equation The normal line is perpendicular to the tangent line, and both pass through the same point (x0, y0). From the equation the left focus is and the right focus is where is the eccentricity. Denote the distances from a point (x, y) to the left and right foci as and For a point on the right branch, and for a point on the left branch, This can be proved as follows: If (x,y) is a point on the hyperbola the distance to the left focal point is To the right focal point the distance is If (x,y) is a point on the right branch of the hyperbola then and Subtracting these equations one gets If (x,y) is a point on the left branch of the hyperbola then and Subtracting these equations one gets In Cartesian coordinates Equation If Cartesian coordinates are introduced such that the origin is the center of the hyperbola and the x-axis is the major axis, then the hyperbola is called east-west-opening and the foci are the points , the vertices are . For an arbitrary point the distance to the focus is and to the second focus . Hence the point is on the hyperbola if the following condition is fulfilled Remove the square roots by suitable squarings and use the relation to obtain the equation of the hyperbola: This equation is called the canonical form of a hyperbola, because any hyperbola, regardless of its orientation relative to the Cartesian axes and regardless of the location of its center, can be transformed to this form by a change of variables, giving a hyperbola that is congruent to the original (see below). The axes of symmetry or principal axes are the transverse axis (containing the segment of length 2a with endpoints at the vertices) and the conjugate axis (containing the segment of length 2b perpendicular to the transverse axis and with midpoint at the hyperbola's center). As opposed to an ellipse, a hyperbola has only two vertices: . The two points on the conjugate axes are not on the hyperbola. It follows from the equation that the hyperbola is symmetric with respect to both of the coordinate axes and hence symmetric with respect to the origin. Eccentricity For a hyperbola in the above canonical form, the eccentricity is given by Two hyperbolas are geometrically similar to each other – meaning that they have the same shape, so that one can be transformed into the other by rigid left and right movements, rotation, taking a mirror image, and scaling (magnification) – if and only if they have the same eccentricity. Asymptotes Solving the equation (above) of the hyperbola for yields It follows from this that the hyperbola approaches the two lines for large values of . 
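The focal-distance formulas referred to above seem to have been lost in extraction; for the canonical hyperbola they are, in the standard form, d_left = e·x + a and d_right = e·x − a for a point (x, y) on the right branch, with c = √(a² + b²) and e = c/a. A minimal Python sketch with assumed values of a and b verifies this reconstruction:

import math

# Sketch (a, b assumed): foci at (+-c, 0), c = sqrt(a^2 + b^2), e = c/a.
a, b = 2.0, 1.0
c = math.hypot(a, b)     # linear eccentricity
e = c / a                # eccentricity, always > 1 for a hyperbola

for t in (-1.0, 0.3, 1.2):
    x, y = a * math.cosh(t), b * math.sinh(t)   # a point on the right branch
    d_left  = math.hypot(x + c, y)              # distance to the focus (-c, 0)
    d_right = math.hypot(x - c, y)              # distance to the focus (+c, 0)
    assert abs(d_left  - (e * x + a)) < 1e-9
    assert abs(d_right - (e * x - a)) < 1e-9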
These two lines intersect at the center (origin) and are called the asymptotes of the hyperbola. With the help of the second figure one can see that the perpendicular distance from a focus to either asymptote is b, the semi-minor axis.
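A brief numerical check of the statement reconstructed above (the values of a and b are assumed examples): the distance from the focus (c, 0) to the asymptote b·x − a·y = 0 equals b.

import math

# Sketch: distance from the focus (c, 0) to the line b*x - a*y = 0 should be b.
a, b = 4.0, 1.5
c = math.hypot(a, b)
dist = abs(b * c - a * 0.0) / math.hypot(a, b)
assert abs(dist - b) < 1e-12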
one of the three kinds of conic section, formed by the intersection of a plane and a double cone. (The other conic sections are the parabola and the ellipse. A circle is a special case of an ellipse.) If the plane intersects both halves of the double cone but does not pass through the apex of the cones, then the conic is a hyperbola. Hyperbolas arise in many ways: as the curve representing the function in the Cartesian plane, as the path followed by the shadow of the tip of a sundial, as the shape of an open orbit (as distinct from a closed elliptical orbit), such as the orbit of a spacecraft during a gravity assisted swing-by of a planet or, more generally, any spacecraft exceeding the escape velocity of the nearest planet, as the path of a single-apparition comet (one travelling too fast ever to return to the solar system), as the scattering trajectory of a subatomic particle (acted on by repulsive instead of attractive forces but the principle is the same), in radio navigation, when the difference between distances to two points, but not the distances themselves, can be determined, and so on. Each branch of the hyperbola has two arms which become straighter (lower curvature) further out from the center of the hyperbola. Diagonally opposite arms, one from each branch, tend in the limit to a common line, called the asymptote of those two arms. So there are two asymptotes, whose intersection is at the center of symmetry of the hyperbola, which can be thought of as the mirror point about which each branch reflects to form the other branch. In the case of the curve the asymptotes are the two coordinate axes. Hyperbolas share many of the ellipses' analytical properties such as eccentricity, focus, and directrix. Typically the correspondence can be made with nothing more than a change of sign in some term. Many other mathematical objects have their origin in the hyperbola, such as hyperbolic paraboloids (saddle surfaces), hyperboloids ("wastebaskets"), hyperbolic geometry (Lobachevsky's celebrated non-Euclidean geometry), hyperbolic functions (sinh, cosh, tanh, etc.), and gyrovector spaces (a geometry proposed for use in both relativity and quantum mechanics which is not Euclidean). Etymology and history The word "hyperbola" derives from the Greek , meaning "over-thrown" or "excessive", from which the English term hyperbole also derives. Hyperbolae were discovered by Menaechmus in his investigations of the problem of doubling the cube, but were then called sections of obtuse cones. The term hyperbola is believed to have been coined by Apollonius of Perga (c. 262–c. 190 BC) in his definitive work on the conic sections, the Conics. The names of the other two general conic sections, the ellipse and the parabola, derive from the corresponding Greek words for "deficient" and "applied"; all three names are borrowed from earlier Pythagorean terminology which referred to a comparison of the side of rectangles of fixed area with a given line segment. The rectangle could be "applied" to the segment (meaning, have an equal length), be shorter than the segment or exceed the segment. Definitions As locus of points A hyperbola can be defined geometrically as a set of points (locus of points) in the Euclidean plane: A hyperbola is a set of points, such that for any point of the set, the absolute difference of the distances to two fixed points (the foci) is constant, usually denoted by The midpoint of the line segment joining the foci is called the center of the hyperbola. 
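The constant in the locus definition above is conventionally 2a, twice the semi-major axis. A short Python sketch (the values of a and b and the sample points are assumed for illustration) checks that |d(P, F1) − d(P, F2)| = 2a on both branches:

import math

# Sketch of the locus definition: for every point P of the hyperbola,
# | d(P, F1) - d(P, F2) | = 2a, where F1, F2 are the foci.
a, b = 3.0, 2.0
c = math.hypot(a, b)                 # distance from the centre to each focus
f1, f2 = (-c, 0.0), (c, 0.0)

def focal_difference(x, y):
    d1 = math.hypot(x - f1[0], y - f1[1])
    d2 = math.hypot(x - f2[0], y - f2[1])
    return abs(d1 - d2)

# Sample points on both branches: x = +-a*cosh(t), y = b*sinh(t).
for t in (-2.0, -0.4, 0.0, 1.1):
    for sign in (+1, -1):
        x, y = sign * a * math.cosh(t), b * math.sinh(t)
        assert abs(focal_difference(x, y) - 2 * a) < 1e-9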
The line through the foci is called the major axis. It contains the vertices , which have distance to the center. The distance of the foci to the center is called the focal distance or linear eccentricity. The quotient is the eccentricity . The equation can be viewed in a different way (see diagram): If is the circle with midpoint and radius , then the distance of a point of the right branch to the circle equals the distance to the focus : is called the circular directrix (related to focus ) of the hyperbola. In order to get the left branch of the hyperbola, one has to use the circular directrix related to . This property should not be confused with the definition of a hyperbola with help of a directrix (line) below. Hyperbola with equation y = A/x If the xy-coordinate system is rotated about the origin by the angle and new coordinates are assigned, then . The rectangular hyperbola (whose semi-axes are equal) has the new equation . Solving for yields Thus, in an xy-coordinate system the graph of a function with equation is a rectangular hyperbola entirely in the first and third quadrants with the coordinate axes as asymptotes, the line as major axis , the center and the semi-axis the vertices the semi-latus rectum and radius of curvature at the vertices the linear eccentricity and the eccentricity the tangent at point A rotation of the original hyperbola by results in a rectangular hyperbola entirely in the second and fourth quadrants, with the same asymptotes, center, semi-latus rectum, radius of curvature at the vertices, linear eccentricity, and eccentricity as for the case of rotation, with equation the semi-axes the line as major axis, the vertices Shifting the hyperbola with equation so that the new center is , yields the new equation and the new asymptotes are and . The shape parameters remain unchanged. By the directrix property The two lines at distance from the center and parallel to the minor axis are called directrices of the hyperbola (see diagram). For an arbitrary point of the hyperbola the quotient of the distance to one focus and to the corresponding directrix (see diagram) is equal to the eccentricity: The proof for the pair follows from the fact that and satisfy the equation The second case is proven analogously. The inverse statement is also true and can be used to define a hyperbola (in a manner similar to the definition of a parabola): For any point (focus), any line (directrix) not through and any real number with the set of points (locus of points), for which the quotient of the distances to the point and to the line is is a hyperbola. (The choice yields a parabola and if an ellipse.) Proof Let and assume is a point on the curve. The directrix has equation . With , the relation produces the equations and The substitution yields This is the equation of an ellipse () or a parabola () or a hyperbola (). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram). If , introduce new parameters so that , and then the equation above becomes which is the equation of a hyperbola with center , the x-axis as major axis and the major/minor semi axis . Construction of a directrix Because of point of directrix (see diagram) and focus are inverse with respect to the circle inversion at circle (in diagram green). Hence point can be constructed using the theorem of Thales (not shown in the diagram). The directrix is the perpendicular to line through point . 
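As an illustration of the directrix property described above, the following sketch (with assumed semi-axes a, b and assumed sample points) checks that the ratio of a point's distance to a focus to its distance to the corresponding directrix x = a²/c equals the eccentricity e:

import math

# Sketch: for the right branch of x^2/a^2 - y^2/b^2 = 1, the focus (c, 0) and
# the directrix x = a^2/c give distance-to-focus / distance-to-directrix = e.
a, b = 2.5, 1.0
c = math.hypot(a, b)
e = c / a
directrix_x = a * a / c

for t in (-1.3, 0.2, 0.9):
    x, y = a * math.cosh(t), b * math.sinh(t)   # right branch
    to_focus = math.hypot(x - c, y)
    to_directrix = abs(x - directrix_x)
    assert abs(to_focus / to_directrix - e) < 1e-9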
Alternative construction of : Calculation shows, that point is the intersection of the asymptote with its perpendicular through (see diagram). As plane section of a cone The intersection of an upright double cone by a plane not through the vertex with slope greater than the slope of the lines on the cone is a hyperbola (see diagram: red curve). In order to prove the defining property of a hyperbola (see above) one uses two Dandelin spheres , which are spheres that touch the cone along circles , and the intersecting (hyperbola) plane at points and . It turns out: are the foci of the hyperbola. Let be an arbitrary point of the intersection curve . The generatrix of the cone containing intersects circle at point and circle at a point . The line segments and are tangential to the sphere and, hence, are of equal length. The line segments and are tangential to the sphere and, hence, are of equal length. The result is: is independent of the hyperbola point , because no matter where point is, have to be on circles , , and line segment has to cross the apex. Therefore, as point moves along the red curve (hyperbola), line segment simply rotates about apex without changing its length. Pin and string construction The definition of a hyperbola by its foci and its circular directrices (see above) can be used for drawing an arc of it with help of pins, a string and a ruler: (0) Choose the foci , the vertices and one of the circular directrices , for example (circle with radius ) (1) A ruler is fixed at point free to rotate around . Point is marked at distance . (2) A string with length is prepared. (3) One end of the string is pinned at point on the ruler, the other end is pinned to point . (4) Take a pen and hold the string tight to the edge of the ruler. (5) Rotating the ruler around prompts the pen to draw an arc of the right branch of the hyperbola, because of (see the definition of a hyperbola by circular directrices). Steiner generation of a hyperbola The following method to construct single points of a hyperbola relies on the Steiner generation of a non degenerate conic section: Given two pencils of lines at two points (all lines containing and , respectively) and a projective but not perspective mapping of onto , then the intersection points of corresponding lines form a non-degenerate projective conic section. For the generation of points of the hyperbola one uses the pencils at the vertices . Let be a point of the hyperbola and . The line segment is divided into n equally-spaced segments and this division is projected parallel with the diagonal as direction onto the line segment (see diagram). The parallel projection is part of the projective mapping between the pencils at and needed. The intersection points of any two related lines and are points of the uniquely defined hyperbola. Remark: The subdivision could be extended beyond the points and in order to get more points, but the determination of the intersection points would become more inaccurate. A better idea is extending the points already constructed by symmetry (see animation). Remark: The Steiner generation exists for ellipses and parabolas, too. The Steiner generation is sometimes called a parallelogram method because one can use other points rather than the vertices, which starts with a parallelogram instead of a rectangle. Inscribed angles for hyperbolas y
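The pin-and-string construction above rests on the circular-directrix property mentioned earlier: a point of the right branch is exactly as far from the focus F2 as it is from the circle of radius 2a centred on F1. A brief Python sketch (a, b and the sample parameters are assumed examples) checks this relation:

import math

# Sketch: for a right-branch point P, d(P, F2) equals d(P, F1) - 2a, i.e. the
# distance from P to the circle of radius 2a around F1 (the circular directrix).
a, b = 2.0, 1.5
c = math.hypot(a, b)
f1, f2 = (-c, 0.0), (c, 0.0)

for t in (-1.0, 0.0, 0.6, 1.4):
    x, y = a * math.cosh(t), b * math.sinh(t)
    d1 = math.hypot(x - f1[0], y)     # distance to F1
    d2 = math.hypot(x - f2[0], y)     # distance to F2
    d_circle = d1 - 2 * a             # distance to the circle around F1
    assert abs(d2 - d_circle) < 1e-9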
and formed new alliances that helped regain lost territories. Finally Humayun gathered hundreds of Sindhi and Baloch tribesmen alongside his Mughals and marched towards Kandahar and later Kabul; thousands more gathered by his side as Humayun continually declared himself the rightful Timurid heir of the first Mughal Emperor, Babur. Retreat to Kabul Humayun set out from his expedition in Sindh with 300 camels (mostly wild) and 2000 loads of grain, crossing the Indus River on 11 July 1543 to join his brothers in Kandahar, with the ambition of regaining the Mughal Empire and overthrowing the Suri dynasty. Among the tribes that had sworn allegiance to Humayun were the Leghari, Magsi, Rind and many others. In Kamran Mirza's territory, Hindal Mirza had been placed under house arrest in Kabul after refusing to have the Khutba recited in Kamran Mirza's name. His other brother, Askari Mirza, was now ordered to gather an army and march on Humayun. When Humayun received word of the approaching hostile army he decided against facing them, and instead sought refuge elsewhere. Akbar was left behind in camp close to Kandahar, as it was December, too cold and dangerous to include the 14-month-old toddler in the march through the mountains of the Hindu Kush. Askari Mirza took Akbar in, leaving the wives of Kamran and Askari Mirza to raise him. The Akbarnama specifies Kamran Mirza's wife, Sultan Begam. Once again Humayun turned toward Kandahar, where his brother Kamran Mirza was in power, but he received no help and had to seek refuge with the Shah of Persia. Refuge in Persia Humayun fled to the refuge of the Safavid Empire in Persia, marching with 40 men, his wife Bega Begum, and her companion through mountains and valleys. Among other trials the Imperial party were forced to live on horse meat boiled in the soldiers' helmets. These indignities continued during the month it took them to reach Herat; however, after their arrival they were reintroduced to the finer things in life. Upon entering the city his army was greeted with an armed escort, and they were treated to lavish food and clothing. They were given fine accommodations and the roads were cleared and cleaned before them. Shah Tahmasp, unlike Humayun's own family, actually welcomed the Mughal, and treated him as a royal visitor. Here Humayun went sightseeing and was amazed at the Persian artwork and architecture he saw: much of this was the work of the Timurid Sultan Husayn Bayqarah and his ancestor, princess Gauhar Shad, thus he was able to admire the work of his relatives and ancestors at first hand. He was introduced to the work of the Persian miniaturists, and Kamaleddin Behzad had two of his pupils join Humayun in his court. Humayun was amazed at their work and asked if they would work for him if he were to regain the sovereignty of Hindustan: they agreed. With so much going on, Humayun did not even meet the Shah until July, some six months after his arrival in Persia. After a lengthy journey from Herat the two met in Qazvin, where a large feast and parties were held for the event. The meeting of the two monarchs is depicted in a famous wall-painting in the Chehel Sotoun (Forty Columns) palace in Esfahan. The Shah urged that Humayun convert from Sunni to Shia Islam in order to keep himself and several hundred followers alive. Although the Mughals initially resisted conversion, they knew that with this outward acceptance of Shi'ism, Shah Tahmasp was eventually prepared to offer Humayun more substantial support. When Humayun's brother, Kamran Mirza, offered to cede Kandahar to the Persians in exchange for Humayun, dead or alive, Shah Tahmasp refused. Instead the Shah staged a celebration for Humayun, with 300 tents, an imperial Persian carpet, 12 musical bands and "meat of all kinds". Here the Shah announced that all this, and 12,000 elite cavalry, were his to lead an attack on his brother Kamran Mirza. All that Shah Tahmasp asked for was that, if Humayun's forces were victorious, Kandahar would be his. Kandahar and onward With this Persian Safavid aid Humayun took Kandahar from Askari Mirza after a two-week siege. He noted how the nobles who had served Askari Mirza quickly flocked to serve him, "in very truth the greater part of the inhabitants of the world are like a flock of sheep, wherever one goes the others immediately follow". Kandahar was, as agreed, given to the Shah of Persia, who sent his infant son, Murad, as the Viceroy.
However, the baby soon died and Humayun thought himself strong enough to assume power. Humayun now prepared to take Kabul, ruled by his brother Kamran Mirza. In the end, there was no actual siege. Kamran Mirza was detested as a leader, and as Humayun's Persian army approached the city hundreds of Kamran Mirza's troops changed sides, flocking to join Humayun and swelling his ranks. Kamran Mirza absconded and began building an army outside the city. In November 1545, Hamida and Humayun were reunited with their son Akbar, and held a huge feast. They also held another, larger, feast in the child's honour when he was circumcised. However, while Humayun had a larger army than his brother and had the upper hand, on two occasions his poor military judgement allowed Kamran Mirza to retake Kabul and Kandahar, forcing Humayun to mount further campaigns for their recapture. He might have been aided in this by his reputation for leniency towards the troops who had defended the cities against him, as opposed to Kamran Mirza, whose brief periods of possession were marked by atrocities against the inhabitants who, he supposed, had helped his brother. His youngest brother, Hindal Mirza, formerly the most disloyal of his siblings, died fighting on his behalf. His brother Askari Mirza was shackled in chains at the behest of his nobles and aides. He was allowed to go on Hajj, and died en route in the desert outside Damascus. Humayun's other brother, Kamran Mirza, had repeatedly sought to have Humayun killed. In 1552 Kamran Mirza attempted to make a pact with Islam Shah, Sher Shah's successor, but was apprehended by a Gakhar. The Gakhars were one of the minority of tribal groups who had consistently remained loyal to their oath to the Mughals. Sultan Adam of the Gakhars handed Kamran Mirza over to Humayun. Humayun was inclined to forgive his brother. However, he was warned that allowing Kamran Mirza's repeated acts of treachery to go unpunished could foment rebellion amongst his own supporters. So, instead of killing his brother, Humayun had Kamran Mirza blinded, which would end any claim by the latter to the throne. Humayun sent Kamran Mirza on Hajj, as he hoped to see his brother thereby absolved of his offences. However, Kamran Mirza died close to Mecca in the Arabian Peninsula in 1557. Restoration of the Mughal Empire Sher Shah Suri had died in 1545; his son and successor Islam Shah died in 1554. These two deaths left the dynasty reeling and disintegrating. Three rivals for the throne all marched on Delhi, while in many cities leaders tried to stake a claim for independence. This was a perfect opportunity for the Mughals to march back to India. The Mughal Emperor Humayun gathered a vast army, which included the Baloch tribes of Leghari, Magsi and Rind, and attempted the challenging task of retaking the throne in Delhi. Humayun placed the army under the leadership of Bairam Khan, a wise move given Humayun's own record of military ineptitude, and it turned out to be prescient as Bairam proved himself a great tactician. At the Battle of Sirhind on 22 June 1555, the armies of Sikandar Shah Suri were decisively defeated and the Mughal Empire was re-established in India. Marriage relations with the Khanzadas The Gazetteer of Ulwur states: Bairam Khan led the army through the Punjab virtually unopposed. The fort of Rohtas, which was built in 1541–1543 by Sher Shah Suri to crush the Gakhars who were loyal to Humayun, was surrendered without a shot by a treacherous commander.
The walls of the Rohtas Fort measure up to 12.5 meters in thickness and up to 18.28 meters in height. They extend for 4 km and feature 68 semi-circular bastions. Its sandstone gates, both massive and ornate, are thought to have exerted a profound influence on Mughal military architecture. The only major battle faced by Humayun's armies was against Sikander Suri in Sirhind, where Bairam Khan employed a tactic whereby he engaged his enemy in open battle, but then retreated quickly in apparent fear. When the enemy followed after them they were surprised by entrenched defensive positions and were easily annihilated. After Sirhind, most towns and villages chose to welcome the invading army as it made its way to the capital. On 23 July 1555, Humayun once again sat on Babur's throne in Delhi. Ruling Kashmir With all of Humayun's brothers now dead, there was no fear of another usurping his throne during his military campaigns. He was also now an established leader and could trust his generals. With this new-found strength Humayun embarked on a series of military campaigns aimed at extending his reign over areas in the east and west of the subcontinent. His sojourn in exile seems to have reduced his reliance on astrology, and his military leadership came to imitate the more effective methods that he had observed in Persia. Character Edward S. Holden writes: "He was uniformly kind and considerate to his dependents, devotedly attached to his son Akbar, to his friends, and to his turbulent brothers. The misfortunes of his reign arose, in great part, from his failure to treat them with rigor." He further writes: "The very defects of his character, which render him less admirable as a successful ruler of nations, make us more fond of him as a man. His renown has suffered in that his reign came between the brilliant conquests of Babur and the beneficent statesmanship of Akbar; but he was not unworthy to be the son of the one and the father of the other." Stanley Lane-Poole writes in his book Medieval India: "His name meant the winner (Lucky/Conqueror); there is no king in history so wrongly named as Humayun"; he was of a forgiving nature. He further writes, "He was in fact unfortunate ... Scarcely had he enjoyed his throne for six
Mainz presided over the Catholic body, or Corpus Catholicorum, while the Elector of Saxony presided over the Protestant body, or Corpus Evangelicorum. The division into religious bodies was on the basis of the official religion of the state, and not of its rulers. Thus, even when the Electors of Saxony were Catholics during the eighteenth century, they continued to preside over the Corpus Evangelicorum, since the state of Saxony was officially Protestant. Elections The electors were originally summoned by the Archbishop of Mainz within one month of an Emperor's death, and met within three months of being summoned. During the interregnum, imperial power was exercised by two imperial vicars. Each vicar, in the words of the Golden Bull, was "the administrator of the empire itself, with the power of passing judgments, of presenting to ecclesiastical benefices, of collecting returns and revenues and investing with fiefs, of receiving oaths of fealty for and in the name of the holy empire". The Elector of Saxony was vicar in areas operating under Saxon law (Saxony, Westphalia, Hannover, and northern Germany), while the Elector Palatine was vicar in the remainder of the Empire (Franconia, Swabia, the Rhine, and southern Germany). The Elector of Bavaria replaced the Elector Palatine in 1623, but when the latter was granted a new electorate in 1648, there was a dispute between the two as to which was vicar. In 1659, both purported to act as vicar, but ultimately, the other vicar recognized the Elector of Bavaria. Later, the two electors made a pact to act as joint vicars, but the Imperial Diet rejected the agreement. In 1711, while the Elector of Bavaria was under the ban of the Empire, the Elector Palatine again acted as vicar, but his cousin resumed the position upon his restoration three years later. Finally, in 1745, the two agreed to alternate as vicars, with Bavaria starting first. This arrangement was upheld by the Imperial Diet in 1752. In 1777, the question was settled when the Elector Palatine inherited Bavaria. On many occasions, however, there was no interregnum, as a new king had been elected during the lifetime of the previous Emperor. Frankfurt regularly served as the site of the election from the fifteenth century on, but elections were also held at Cologne (1531), Regensburg (1575 and 1636), and Augsburg (1653 and 1690). An elector could appear in person or could appoint another elector as his proxy. More often, an electoral suite or embassy was sent to cast the vote; the credentials of such representatives were verified by the Archbishop of Mainz, who presided over the ceremony. The deliberations were held at the city hall, but voting occurred in the cathedral. In Frankfurt, a special electoral chapel, or Wahlkapelle, was used for elections. Under the Golden Bull, a majority of electors sufficed to elect a king, and each elector could cast only one vote. Electors were free to vote for whomsoever they pleased (including themselves), but dynastic considerations played a great part in the choice. Electors drafted a Wahlkapitulation, or electoral capitulation, which was presented to the king-elect. The capitulation may be described as a contract between the princes and the king, the latter conceding rights and powers to the electors and other princes. Once an individual swore to abide by the electoral capitulation, he assumed the office of King of the Romans. In the 10th and 11th centuries, princes often acted merely to confirm hereditary succession in the Saxon Ottonian dynasty and Franconian Salian dynasty. But with the actual formation of the prince-elector class, elections became more open, starting with the election of Lothair II in 1125. The Staufen dynasty managed to get its sons formally elected in their fathers' lifetimes almost as a formality. After these lines ended in extinction, the electors began to elect kings from different families so that the throne would not once again settle within a single dynasty. For some two centuries, the monarchy was elective both in theory and in practice; the arrangement, however, did not last, since the powerful House of Habsburg managed to secure succession within their dynasty during the fifteenth century. All kings elected from 1438 onwards were from among the Habsburg Archdukes of Austria (and later Kings of Hungary and Bohemia) until 1740, when the archduchy was inherited by a woman, Maria Theresa, sparking the War of the Austrian Succession. A representative of the House of Wittelsbach was elected for a short period of time, but in 1745, Maria Theresa's husband, Francis I of the Habsburg-Lorraine dynasty, became King. All of his successors were also from the same family. Hence, for the greater part of the Empire's history, the role of the electors was largely ceremonial.
High offices Each elector held a "High Office of the Empire" analogous to a modern Cabinet office and was a member of the (ceremonial) Imperial Household. The three spiritual electors were Arch-Chancellors: the Archbishop of Mainz was Arch-Chancellor of Germany, the Archbishop of Cologne was Arch-Chancellor of Italy, and the Archbishop of Trier was Arch-Chancellor of Burgundy. The six remaining were secular electors, who were granted augmentations to their arms reflecting their position in the Household. These augments were displayed either as an inset badge, as in the case of the Arch-Steward, Arch-Treasurer, and Arch-Chamberlain, or dexter, as in the case of the Arch-Marshal and Arch-Bannerbearer; in the case of the Arch-Cupbearer, the augment was integrated into the escutcheon, held in the royal Bohemian lion's right paw. When the Duke of Bavaria replaced the Elector Palatine in 1623, he assumed the latter's office of Arch-Steward. When the Count Palatine was granted a new electorate, he assumed the position of Arch-Treasurer of the Empire. When the Duke of Bavaria was banned in 1706, the Elector Palatine returned to the office of Arch-Steward, and in 1710, the Elector of Hanover was promoted to the post of Arch-Treasurer. Matters were complicated by the Duke of Bavaria's restoration in 1714; the Elector of Bavaria resumed the office of Arch-Steward, while the Elector Palatine returned to the post of Arch-Treasurer, and the Elector of Hanover was given the new office of Arch-Bannerbearer. The Electors of Hanover, however, continued to be styled Arch-Treasurers, though the Elector Palatine was the one who actually exercised the office until 1777, when he inherited Bavaria and the Arch-Stewardship. After 1777, no further changes were made to the Imperial Household; new offices were planned for the Electors admitted in 1803, but the Empire was abolished before they could be created. The Duke of Württemberg, however, started to adopt the trappings of the Arch-Bannerbearer. Many High Officers were entitled to use "augmentations" on their coats of arms; these augmentations, which were special marks of honor, appeared in the middle of the electors' shields atop the other charges (in heraldic terms, the augmentations appeared in the form of inescutcheons). The Arch-Steward used gules an orb Or (a gold orb on a red field). The Arch-Marshal used the more complicated per fess sable and argent, two swords in saltire gules (two red swords arranged in the form of a saltire, on a black and white field). The Arch-Chamberlain's augmentation was azure a scepter palewise Or (a golden scepter on a blue field), while the Arch-Treasurer's was gules the crown of Charlemagne Or (a gold crown on a red field). As noted above, the Elector Palatine and the Elector of Hanover styled themselves Arch-Treasurer from 1714 until 1777; during this time, both electors used the corresponding augmentations. The three Arch-Chancellors and the Arch-Cupbearer, however, did not use any augmentations. The electors discharged the ceremonial duties associated with their offices only during coronations, where they bore the crown and regalia of the Empire. Otherwise, they were represented by holders of corresponding "Hereditary Offices of the Household".
The Arch-Butler was represented by the Hereditary Butler (Cupbearer) (the Count of Althann), the Arch-Seneschal by the Hereditary Steward (the Count of Waldburg, who adopted the title into the family name as "Truchsess von Waldburg"), the Arch-Chamberlain by the Hereditary Chamberlain (the Count of Hohenzollern), the Arch-Marshal by the Hereditary Marshal (the Count of Pappenheim), and the Arch-Treasurer by the Hereditary Treasurer (the Count of Sinzendorf). After 1803, the Duke of Württemberg as Arch-Bannerbearer assigned the Count of Zeppelin-Aschhausen as Hereditary Bannerbearer. History The German practice of electing monarchs began when ancient Germanic tribes formed ad hoc coalitions and elected the leaders thereof. Elections were irregularly held by the Franks, whose successor states include France and the Holy Roman Empire. The French monarchy eventually became hereditary, but the Holy Roman Emperors remained elective, at least in theory, although the Habsburgs provided most of the later monarchs. While all free men originally exercised the right to vote in such elections, suffrage eventually came to be limited to the leading men of the realm. In the election of Lothair II in 1125, a small number of eminent nobles chose the monarch and then submitted him to the remaining magnates for their approbation. Soon, the right to choose the monarch was settled on an exclusive group of princes, and the procedure of seeking the approval of the remaining nobles was abandoned. The college of electors was mentioned in 1152 and again in 1198. The composition of electors
flying around the world in record time. He was awarded the Harmon Trophy in 1936 and 1938 for the record-breaking global circumnavigation. In 1938 the William P. Hobby Airport in Houston, Texas—known at the time as Houston Municipal Airport—was renamed after Hughes, but the name was changed back due to public outrage over naming the airport after a living person. Hughes also had a role in the design and financing of both the Boeing 307 Stratoliner and Lockheed L-049 Constellation. Other aviator awards include the Bibesco Cup of the Fédération Aéronautique Internationale in 1938, the Octave Chanute Award in 1940, and a special Congressional Gold Medal in 1939 "in recognition of the achievements of Howard Hughes in advancing the science of aviation and thus bringing great credit to his country throughout the world". President Harry S. Truman sent the Congressional medal to Hughes after the F-11 crash. After his around-the-world flight, Hughes had declined to go to the White House to collect it. Hughes D-2 and XF-11 The Hughes D-2 was conceived in 1939 as a bomber with five crew members, powered by 42-cylinder Wright R-2160 Tornado engines. In the end, it appeared as a two-seat fighter-reconnaissance aircraft designated the D-2A, powered by two Pratt & Whitney R-2800-49 engines. The aircraft was constructed using the Duramold process. The prototype was brought to Harper's Dry Lake in California in great secrecy in 1943 and first flew on June 20 of that year. Acting on a recommendation of the president's son, Colonel Elliott Roosevelt, who had become friends with Hughes, in September 1943 the USAAF ordered 100 of a reconnaissance development of the D-2, known as the F-11. Hughes then attempted to get the military to pay for the development of the D-2. In November 1944, the hangar containing the D-2A was reportedly hit by lightning and the aircraft was destroyed. The D-2 design was abandoned but led to the extremely controversial Hughes XF-11. The XF-11 was a large, all-metal, two-seat reconnaissance aircraft, powered by two Pratt & Whitney R-4360-31 engines, each driving a set of contra-rotating propellers. Only two prototypes were completed; the second one with a single propeller per side. Fatal crash of the Sikorsky S-43 In the spring of 1943 Hughes spent nearly a month in Las Vegas, test-flying his Sikorsky S-43 amphibious aircraft, practicing touch-and-go landings on Lake Mead in preparation for flying the H-4 Hercules. The weather conditions at the lake during the day were ideal and he enjoyed Las Vegas at night. On May 17, 1943, Hughes flew the Sikorsky from California, carrying two CAA aviation inspectors, two of his employees, and actress Ava Gardner. Hughes dropped Gardner off in Las Vegas and proceeded to Lake Mead to conduct qualifying tests in the S-43. The test flight did not go well. The Sikorsky crashed into Lake Mead, killing CAA inspector Ceco Cline and Hughes's employee Richard Felt. Hughes suffered a severe gash on the top of his head when he hit the upper control panel and had to be rescued by one of the others on board. Hughes paid divers $100,000 to raise the aircraft and later spent more than $500,000 restoring it. Hughes sent the plane to Houston, where it remained for many years. Near-fatal crash of the XF-11 Hughes was involved in another near-fatal aircraft accident on July 7, 1946, while performing the first flight of the prototype U.S. Army Air Forces reconnaissance aircraft, the XF-11, near Hughes airfield at Culver City, California.
An oil leak caused one of the contra-rotating propellers to reverse pitch, causing the aircraft to yaw sharply and lose altitude rapidly. Hughes attempted to save the aircraft by landing it at the Los Angeles Country Club golf course, but just seconds before reaching the course, the XF-11 started to drop dramatically and crashed in the Beverly Hills neighborhood surrounding the country club. When the XF-11 finally came to a halt after destroying three houses, the fuel tanks exploded, setting fire to the aircraft and a nearby home at 808 North Whittier Drive owned by Lt Col. Charles E. Meyer. Hughes managed to pull himself out of the flaming wreckage but lay beside the aircraft until rescued by Marine Master Sgt. William L. Durkin, who happened to be in the area visiting friends. Hughes sustained significant injuries in the crash, including a crushed collarbone, multiple cracked ribs, and a crushed chest with a collapsed left lung that shifted his heart to the right side of the chest cavity, as well as numerous third-degree burns. An oft-told story said that Hughes sent a check to the Marine weekly for the remainder of his life as a sign of gratitude. Noah Dietrich asserted that Hughes did send Durkin $200 a month, but Durkin's daughter denied knowing that he received any money from Hughes. Despite his physical injuries, Hughes took pride that his mind was still working. As he lay in his hospital bed, he decided that he did not like the bed's design. He called in plant engineers to design a customized bed, equipped with hot and cold running water, built in six sections, and operated by 30 electric motors, with push-button adjustments. Hughes designed the hospital bed specifically to alleviate the pain caused by moving with severe burn injuries. Although he never used the bed that he designed, Hughes's bed served as a prototype for the modern hospital bed. Hughes's doctors considered his recovery almost miraculous. Many attribute his long-term dependence on opiates to his use of codeine as a painkiller during his convalescence. Yet Dietrich asserts that Hughes recovered the "hard way—no sleeping pills, no opiates of any kind". The trademark mustache he wore afterward hid a scar on his upper lip resulting from the accident. H-4 Hercules The War Production Board (not the military) originally contracted with Henry Kaiser and Hughes to produce the gigantic HK-1 Hercules flying boat for use during World War II to transport troops and equipment across the Atlantic as an alternative to seagoing troop transport ships that were vulnerable to German U-boats. The military services opposed the project, thinking it would siphon resources from higher-priority programs, but Hughes's powerful allies in Washington, D.C., advocated for it. After disputes, Kaiser withdrew from the project and Hughes elected to continue it as the H-4 Hercules. However, the aircraft was not completed until after the end of World War II. The Hercules was the world's largest flying boat, the largest aircraft made from wood, and had the longest wingspan of any aircraft that had yet been built. (The Hercules is no longer the longest or heaviest aircraft ever built, having been surpassed by the Antonov An-225 Mriya, produced in 1985.) The Hercules flew only once, covering about one mile (1.6 km) above the water with Hughes at the controls, on November 2, 1947.
Critics nicknamed the Hercules the Spruce Goose, but it was actually made largely of birch, not spruce; wood was used rather than aluminum because the contract required that Hughes build the aircraft of "non-strategic materials". It was built in Hughes's Westchester, California, facility. In 1947, Howard Hughes was summoned to testify before the Senate War Investigating Committee to explain why the H-4 development had been so troubled, and why $22 million had produced only two prototypes of the XF-11. General Elliott Roosevelt and numerous other USAAF officers were also called to testify in hearings that transfixed the nation during August and November 1947. In hotly disputed testimony over TWA's route awards and malfeasance in the defense-acquisition process, Hughes turned the tables on his main interlocutor, Maine Senator Owen Brewster, and the hearings were widely interpreted as a Hughes victory. After being displayed at the harbor of Long Beach, California, the Hercules was moved to McMinnville, Oregon, where it is displayed at the Evergreen Aviation & Space Museum. On November 4, 2017, the 70th anniversary of the only flight of the H-4 Hercules was celebrated at the Evergreen Aviation & Space Museum with Hughes's paternal cousin Michael Wesley Summerlin and Brian Palmer Evans, son of Hughes radio-technology pioneer Dave Evans, taking their positions in the recreation of a photo that was previously taken of Hughes, Dave Evans and Joe Petrali on board the H-4 Hercules. Hughes Aircraft In 1932 Hughes founded the Hughes Aircraft Company, a division of Hughes Tool Company, in a rented corner of a Lockheed Aircraft Corporation hangar in Burbank, California, to build the H-1 racer. Shortly after founding the company, Hughes used the alias "Charles Howard" to accept a job as a baggage handler for American Airlines. He was soon promoted to co-pilot. Hughes continued to work for American Airlines until his real identity was discovered. During and after World War II Hughes fashioned his company into a major defense contractor. The Hughes Helicopters division started in 1947 when helicopter manufacturer Kellett sold their latest design to Hughes for production. Hughes Aircraft became a major American aerospace and defense contractor, manufacturing numerous technology-related products that included spacecraft vehicles, military aircraft, radar systems, electro-optical systems, the first working laser, aircraft computer systems, missile systems, ion-propulsion engines (for space travel), commercial satellites, and other electronics systems. In 1948 Hughes created a new division of Hughes Aircraft: the Hughes Aerospace Group. The Hughes Space and Communications Group and the Hughes Space Systems Division were later spun off to form their own divisions and ultimately became the Hughes Space and Communications Company in 1961. In 1953 Howard Hughes gave all his stock in the Hughes Aircraft Company to the newly formed Howard Hughes Medical Institute, thereby turning the aerospace and defense contractor into a tax-exempt charitable organization. The Howard Hughes Medical Institute sold Hughes Aircraft in 1985 to General Motors for $5.2 billion. In 1997 General Motors sold Hughes Aircraft to Raytheon and in 2000, sold Hughes Space & Communications to Boeing.
A combination of Boeing, GM, and Raytheon acquired the Hughes Research Laboratories, which focuses on advanced developments in microelectronics, information and systems sciences, materials, sensors, and photonics; its work spans basic research to product delivery. It has particularly emphasized capabilities in high-performance integrated circuits, high-power lasers, antennas, networking, and smart materials. Airlines In 1939, at the urging of Jack Frye, president of Transcontinental & Western Airlines, the predecessor of Trans World Airlines (TWA), Hughes began to quietly purchase a majority share of TWA stock; he took a controlling interest in the airline by 1944. Although he never had an official position with TWA, Hughes handpicked the board of directors, which included Noah Dietrich, and often issued orders directly to airline staff. Hughes Tool Co. purchased the first six Stratoliners Boeing manufactured. Hughes used one personally, and he let TWA operate the other five. Hughes is commonly credited as the driving force behind the Lockheed Constellation airliner, which Hughes and Frye ordered in 1939 as a long-range replacement for TWA's fleet of Boeing 307 Stratoliners. Hughes personally financed TWA's acquisition of 40 Constellations for $18 million, the largest aircraft order in history up to that time. The Constellations were among the highest-performing commercial aircraft of the late 1940s and 1950s and allowed TWA to pioneer nonstop transcontinental service. During World War II Hughes leveraged political connections in Washington to obtain rights for TWA to serve Europe, making it the only U.S. carrier with a combination of domestic and transatlantic routes. After the announcement of the Boeing 707, Hughes opted to pursue a more advanced jet aircraft for TWA and approached Convair in late 1954. Convair proposed two concepts to Hughes, but Hughes was unable to decide which concept to adopt, and Convair eventually abandoned its initial jet project after the mockups of the 707 and Douglas DC-8 were unveiled. Even after competitors such as United Airlines, American Airlines and Pan American World Airways had placed large orders for the 707, Hughes placed only eight orders for 707s through the Hughes Tool Company and forbade TWA from using the aircraft. After finally beginning to reserve 707 orders in 1956, Hughes embarked on a plan to build his own "superior" jet aircraft for TWA, applied for CAB permission to sell Hughes aircraft to TWA, and began negotiations with the state of Florida to build a manufacturing plant there. However, he abandoned this plan around 1958, and in the interim, negotiated new contracts for 707 and Convair 880 aircraft and engines totaling $400 million. The financing of TWA's jet orders precipitated the end of Hughes's relationship with Noah Dietrich, and ultimately Hughes's ouster from control of TWA. Hughes did not have enough cash on hand or future cash flow to pay for the orders and did not immediately seek bank financing. Hughes's refusal to heed Dietrich's financing advice led to a major rift between the two by the end of 1956. Hughes believed that Dietrich wished to have Hughes committed as mentally incompetent, although the evidence of this is inconclusive. Dietrich resigned by telephone in May 1957 after repeated requests for stock options, which Hughes refused to grant, and with no further progress on the jet financing.
As Hughes's mental state worsened, he ordered various tactics to delay payments to Boeing and Convair; his behavior led TWA's banks to insist that he be removed from management as a condition for further financing. In 1960, Hughes was ultimately forced out of the management of TWA, although he continued to own 78% of the company. In 1961, TWA filed suit against Hughes Tool Company, claiming that the latter had violated antitrust law by using TWA as a captive market for aircraft trading. The claim was largely dependent upon obtaining testimony from Hughes himself. Hughes went into hiding and refused to testify. A default judgment was issued against Hughes Tool Company for $135 million in 1963 but was overturned by the Supreme Court of the United States in 1973, on the basis that Hughes was immune from prosecution. In 1966, Hughes was forced to sell his TWA shares. The sale of his TWA shares brought Hughes $546,549,771. Hughes acquired control of Boston-based Northeast Airlines in 1962. However, the airline's lucrative route authority between major northeastern cities and Miami was terminated by a CAB decision around the time of the acquisition, and Hughes sold control of the company to a trustee in 1964. Northeast went on to merge with Delta Air Lines in 1972. In 1970, Hughes acquired San Francisco-based Air West and renamed it Hughes Airwest. Air West had been formed in 1968 by the merger of Bonanza Air Lines, Pacific Air Lines, and West Coast Airlines, all of which operated in the western U.S. By the late 1970s, Hughes Airwest operated an all-jet fleet of Boeing 727-200, Douglas DC-9-10, and McDonnell Douglas DC-9-30 jetliners serving an extensive route network in the western U.S. with flights to Mexico and western Canada as well. By 1980, the airline's route system reached as far east as Houston (Hobby Airport) and Milwaukee with a total of 42 destinations being served. Hughes Airwest was then acquired by and merged into Republic Airlines (1979–1986) in late 1980. Republic was subsequently acquired by and merged into Northwest Airlines, which in turn was ultimately merged into Delta Air Lines in 2008. Business with David Charnay Hughes formed numerous business partnerships through industrialist and producer David Charnay. Their friendship and many partnerships began with the film The Conqueror, which was first released to the public in 1956. The film caused many controversies due to its critical failure and its radioactive filming location in St. George, Utah, which eventually led to Hughes buying up nearly every copy of the film he could, only to watch it at home repeatedly for many nights in a row. Charnay later bought Four Star, the film and television production company that produced The Conqueror. Hughes and Charnay's most publicized dealings were with a contested Air West leveraged buyout. Charnay led the buyout group that involved Howard Hughes and their partners in acquiring Air West. Hughes, Charnay, and three others were indicted. The indictment, made by U.S. Attorney DeVoe Heaton, accused the group of conspiring to drive down the stock price of Air West in order to pressure company directors to sell to Hughes. The charges were dismissed after a judge had determined that the indictment had failed to allege an illegal action on the part of Hughes, Charnay, and all the other accused in the indictment. Thompson, the federal judge who dismissed the charges, called the indictment one of the worst claims that he had ever seen.
The charges were filed a second time by U.S. Attorney DeVoe Heaton's assistant, Dean Vernon. The federal judge ruled on November 13, 1974, that the case suggested a "reprehensible misuse of the power of great wealth" but that, in his judicial opinion, "no crime had been committed." The aftermath of the Air West deal was later settled with the SEC by paying former stockholders for alleged losses from the sale of their investment in Air West stock. As noted above, Air West was subsequently renamed Hughes Airwest. In the years after the charges against Hughes, Charnay, and their partners were dismissed, Howard Hughes died mid-flight on the way from Acapulco to Houston. No further attempts were made to file any indictments after Hughes died. Howard Hughes Medical Institute In 1953, Hughes launched the Howard Hughes Medical Institute in Miami, Florida, (currently located in Chevy Chase, Maryland) with the express goal of basic biomedical research, including trying to understand, in Hughes's words, the "genesis of life itself", due to his lifelong interest in science and technology. Hughes's first will, which he signed in 1925 at the age of 19, stipulated that a portion of his estate should be used to create a medical institute bearing his name. When a major battle with the IRS loomed ahead, Hughes gave all his stock in the Hughes Aircraft Company to the institute, thereby making the for-profit aerospace and defense contractor a subsidiary of a fully tax-exempt charity. Hughes's internist, Verne Mason, who treated Hughes after his 1946 aircraft crash, was chairman of the institute's medical advisory committee. The Howard Hughes Medical Institute's new board of trustees sold Hughes Aircraft in 1985 to General Motors for $5.2 billion, allowing the institute to grow dramatically. In 1954, Hughes transferred Hughes Aircraft to the foundation, which paid Hughes Tool Co. $18,000,000 for the assets. The foundation leased the land from Hughes Tool Co., which then subleased it to Hughes Aircraft Corp. The difference in rent, $2,000,000 per year, became the foundation's working capital. The deal was the topic of a protracted legal battle between Hughes and the Internal Revenue Service, which Hughes ultimately won. After his death in 1976, many thought that the balance of Hughes's estate would go to the institute, although it was ultimately divided among his cousins and other heirs, given the lack of a will to the contrary. The HHMI was the fourth-largest private organization and one of the largest devoted to biological and medical research, with an endowment of $20.4 billion. Glomar Explorer and the taking of K-129 In 1972, during the Cold War era, Hughes was approached by the CIA through his longtime partner, David Charnay, to help secretly recover the Soviet submarine K-129, which had sunk near Hawaii four years earlier. Hughes's involvement provided the CIA with a plausible cover story: conducting expensive civilian marine research at extreme depths and mining undersea manganese nodules. The recovery plan used the special-purpose salvage vessel Glomar Explorer. In the summer of 1974, Glomar Explorer attempted to raise the Soviet vessel. However, during the recovery a mechanical failure in the ship's grapple caused half of
As Hughes's mental state worsened, he ordered various tactics to delay payments to Boeing and Convair; his behavior led TWA's banks to insist that he be removed from management as a condition for further financing. In 1960, Hughes was ultimately forced out of the management of TWA, although he continued to own 78% of the company. In 1961, TWA filed suit against Hughes Tool Company, claiming that the latter had violated antitrust law by using TWA as a captive market for aircraft trading. The claim was largely dependent upon obtaining testimony from Hughes himself. Hughes went into hiding and refused to testify. A default judgment was issued against Hughes Tool Company for $135 million in 1963 but was overturned by the Supreme Court of the United States in 1973, on the basis that Hughes was immune from prosecution. In 1966, Hughes was forced to sell his TWA shares; the sale brought him $546,549,771. Hughes acquired control of Boston-based Northeast Airlines in 1962. However, the airline's lucrative route authority between major northeastern cities and Miami was terminated by a CAB decision around the time of the acquisition, and Hughes sold control of the company to a trustee in 1964. Northeast went on to merge with Delta Air Lines in 1972. In 1970, Hughes acquired San Francisco-based Air West and renamed it Hughes Airwest. Air West had been formed in 1968 by the merger of Bonanza Air Lines, Pacific Air Lines, and West Coast Airlines, all of which operated in the western U.S. By the late 1970s, Hughes Airwest operated an all-jet fleet of Boeing 727-200, Douglas DC-9-10, and McDonnell Douglas DC-9-30 jetliners serving an extensive route network in the western U.S., with flights to Mexico and western Canada as well. By 1980, the airline's route system reached as far east as Houston (Hobby Airport) and Milwaukee, with a total of 42 destinations served. Hughes Airwest was then acquired by and merged into Republic Airlines (1979–1986) in late 1980. Republic was subsequently acquired by and merged into Northwest Airlines, which in turn was ultimately merged into Delta Air Lines in 2008. Business with David Charnay Hughes formed numerous business partnerships with industrialist and producer David Charnay. Their friendship and many partnerships began with the film The Conqueror, first released to the public in 1956. The film was controversial both for its critical failure and for its radioactive filming location in St. George, Utah; Hughes eventually bought up nearly every copy of the film he could, only to watch it at home repeatedly for many nights in a row. Charnay later bought Four Star, the film and television production company that produced The Conqueror. Hughes's and Charnay's most publicized dealings involved a contested leveraged buyout of Air West. Charnay led the buyout group through which Hughes and their partners acquired Air West. Hughes, Charnay, and three others were indicted. The indictment, brought by U.S. Attorney DeVoe Heaton, accused the group of conspiring to drive down the stock price of Air West in order to pressure company directors to sell to Hughes. The charges were dismissed after a judge determined that the indictment failed to allege any illegal action on the part of Hughes, Charnay, or the other accused. Thompson, the federal judge who made the decision to dismiss the charges, called the indictment one of the worst claims that he had ever seen.
The charges were filed a second time by U.S. Attorney DeVoe Heaton's assistant, Dean Vernon. The federal judge ruled on November 13, 1974, elaborating that the case suggested a "reprehensible misuse of the power of great wealth", but that, in his judicial opinion, "no crime had been committed." The Air West matter was later settled with the SEC by paying former stockholders for alleged losses from the sale of their investment in Air West stock. As noted above, Air West was subsequently renamed Hughes Airwest. In the long interval following the dismissal of the charges against Hughes, Charnay, and their partners, Howard Hughes died mid-flight while on the way to Houston from Acapulco. No further attempts were made to file any indictments after Hughes died. Howard Hughes Medical Institute In 1953, Hughes launched the Howard Hughes Medical Institute in Miami, Florida (it is currently located in Chevy Chase, Maryland), with the express goal of basic biomedical research, including trying to understand, in Hughes's words, the "genesis of life itself", due to his lifelong interest in science and technology. Hughes's first will, which he signed in 1925 at the age of 19, stipulated that a portion of his estate should be used to create a medical institute bearing his name. When a major battle with the IRS loomed ahead, Hughes gave all his stock in the Hughes Aircraft Company to the institute, thereby making the aerospace and defense contractor a for-profit subsidiary of a fully tax-exempt charity. Hughes's internist, Verne Mason, who treated Hughes after his 1946 aircraft crash, was chairman of the institute's medical advisory committee. The Howard Hughes Medical Institute's new board of trustees sold Hughes Aircraft in 1985 to General Motors for $5.2 billion, allowing the institute to grow dramatically. In 1954, Hughes transferred Hughes Aircraft to the foundation, which paid Hughes Tool Co. $18,000,000 for the assets. The foundation leased the land from Hughes Tool Co., which then subleased it to Hughes Aircraft Corp. The difference in rent, $2,000,000 per year, became the foundation's working capital. The deal was the topic of a protracted legal battle between Hughes and the Internal Revenue Service, which Hughes ultimately won. After his death in 1976, many thought that the balance of Hughes's estate would go to the institute, although it was ultimately divided among his cousins and other heirs, given the lack of a will to the contrary. The HHMI was the fourth-largest private organization and one of the largest devoted to biological and medical research, with an endowment of $20.4 billion. Glomar Explorer and the taking of K-129 In 1972, during the Cold War, Hughes was approached by the CIA through his longtime partner, David Charnay, to help secretly recover the Soviet submarine K-129, which had sunk near Hawaii four years earlier. Hughes's involvement provided the CIA with a plausible cover story: conducting expensive civilian marine research at extreme depths and mining undersea manganese nodules. The recovery plan used the special-purpose salvage vessel Glomar Explorer. In the summer of 1974, Glomar Explorer attempted to raise the Soviet vessel. However, during the recovery a mechanical failure in the ship's grapple caused half of the submarine to break off and fall to the ocean floor. This section is believed to have held many of the most sought-after items, including its code book and nuclear missiles.
Two nuclear-tipped torpedoes and some cryptographic machines were recovered, along with the bodies of six Soviet submariners, who were subsequently given formal burial at sea in a filmed ceremony. The operation, known as Project Azorian (but incorrectly referred to by the press as Project Jennifer), became public in February 1975 after secret documents, obtained by burglars who broke into Hughes's headquarters in June 1974, came to light. Although he lent his name and his company's resources to the operation, Hughes and his companies had no operational involvement in the project. The Glomar Explorer was eventually acquired by Transocean and was sent to the scrap yard in 2015 during a large decline in oil prices. Personal life Early romances In 1929, Hughes's wife of four years, Ella, returned to Houston and filed for divorce. Hughes dated many famous women, including Joan Crawford, Billie Dove, Faith Domergue, Bette Davis, Yvonne De Carlo, Ava Gardner, Olivia de Havilland, Katharine Hepburn, Hedy Lamarr, Ginger Rogers, Janet Leigh, Pat Sheehan, Mamie Van Doren and Gene Tierney. He also proposed to Joan Fontaine several times, according to her autobiography No Bed of Roses. Jean Harlow accompanied him to the premiere of Hell's Angels, but Noah Dietrich wrote many years later that the relationship was strictly professional, as Hughes disliked Harlow personally. In his 1971 book, Howard: The Amazing Mr. Hughes, Dietrich said that Hughes genuinely liked and respected Jane Russell, but never sought romantic involvement with her. According to Russell's autobiography, however, Hughes once tried to bed her after a party. Russell (who was married at the time) refused him, and Hughes promised it would never happen again. The two maintained a professional and private friendship for many years. Hughes remained good friends with Tierney who, after his failed attempts to seduce her, was quoted as saying "I don't think Howard could love anything that did not have a motor in it". Later, when Tierney's daughter Daria was born deaf and blind and with a severe learning disability because of Tierney's exposure to rubella during her pregnancy, Hughes saw to it that Daria received the best medical care and paid all expenses. Luxury yacht In 1933, Hughes purchased a luxury steam yacht named the Rover, previously owned by Scottish shipping magnate Lord Inchcape. Hughes said of the purchase: "I have never seen the Rover but bought it on the blueprints, photographs and the reports of Lloyd's surveyors. My experience is that the English are the most honest race in the world." Hughes renamed the yacht Southern Cross and later sold her to Swedish entrepreneur Axel Wenner-Gren. 1936 automobile accident On July 11, 1936, Hughes struck and killed a pedestrian named Gabriel S. Meyer with his car at the corner of 3rd Street and Lorraine in Los Angeles. After the crash, Hughes was taken to the hospital and certified as sober, but an attending doctor made a note that Hughes had been drinking. A witness to the crash told police that Hughes was driving erratically and too fast and that Meyer had been standing in the safety zone of a streetcar stop. Hughes was booked on suspicion of negligent homicide and held overnight in jail until his attorney, Neil S. McCarthy, obtained a writ of habeas corpus for his release pending a coroner's inquest. By the time of the coroner's inquiry, however, the witness had changed his story and claimed that Meyer had moved directly in front of Hughes's car.
Nancy Bayly (Watts), who was in the car with Hughes at the time of the crash, corroborated this version of the story. On July 16, 1936, Hughes was held blameless by a coroner's jury at the inquest into Meyer's death. Hughes told reporters outside the inquiry, "I was driving slowly and a man stepped out of the darkness in front of me". Marriage to Jean Peters On January 12, 1957, Hughes married actress Jean Peters at a small hotel in Tonopah, Nevada. The couple met in the 1940s, before Peters became a film actress. They had a highly publicized romance in 1947 and there was talk of marriage, but she said she could not combine it with her career. Some later claimed that Peters was "the only woman [Hughes] ever loved", and he reportedly had his security officers follow her everywhere even when they were not in a relationship. Such reports were confirmed by actor Max Showalter, who became a close friend of Peters while shooting Niagara (1953). Showalter told an interviewer that because he frequently met with Peters, Hughes's men threatened to ruin his career if he did not leave her alone. Connections to Richard Nixon and Watergate Shortly before the 1960 Presidential election, Richard Nixon was alarmed when it was revealed that his brother, Donald, received a $205,000 loan from Hughes. It has long been speculated that Nixon's drive to learn what the Democrats were planning in 1972 was based in part on his belief that the Democrats knew about a later bribe that his friend Bebe Rebozo had received from Hughes after Nixon took office. In late 1971, Donald Nixon was collecting intelligence for his brother in preparation for the upcoming presidential election. One of his sources was John H. Meier, a former business adviser of Hughes who had also worked with Democratic National Committee Chairman Larry O'Brien. Meier, in collaboration with former Vice President Hubert Humphrey and others, wanted to feed misinformation to the Nixon campaign. Meier told Donald that he was sure the Democrats would win the election because Larry O'Brien had a great deal of information on Richard Nixon's illicit dealings with Howard Hughes that had never been released; O'Brien did not actually have any such information, but Meier wanted Nixon to think that he did. Donald told his brother that O'Brien was in possession of damaging Hughes information that could destroy his campaign. Terry Lenzner, who was the chief investigator for the Senate Watergate Committee, speculates that it was Nixon's desire to know what O'Brien knew about Nixon's dealings with Hughes that may have partially motivated the Watergate break-in. Last years and death Physical and mental decline Hughes was widely considered eccentric and to have suffered with severe obsessive-compulsive disorder (OCD). Dietrich wrote that Hughes always ate the same thing for dinner, a New York strip steak cooked medium rare, dinner salad, and peas, but only the smaller ones, pushing the larger ones aside. For breakfast, Hughes wanted his eggs cooked the way his family cook, Lily, made them. Hughes had a "phobia about germs", and "his passion for secrecy became a mania." While directing The Outlaw, Hughes became fixated on a small flaw in one of Jane Russell's blouses, claiming that the fabric bunched up along a seam and gave the appearance of two nipples on each breast. He wrote a detailed memorandum to the crew on how to fix the problem. 
Richard Fleischer, who directed His Kind of Woman with Hughes as executive producer, wrote at length in his autobiography about the difficulty of dealing with the tycoon. In his book, Just Tell Me When to Cry, Fleischer explained that Hughes was fixated on trivial details and was alternately indecisive and obstinate. He also revealed that Hughes's unpredictable mood swings made him wonder if the film would ever be completed. In 1958, Hughes told his aides that he wanted to screen some movies at a film studio near his home. He stayed in the studio's darkened screening room for more than four months, never leaving. He ate only chocolate bars and chicken and drank only milk, and was surrounded by dozens of Kleenex boxes that he continuously stacked and rearranged. He wrote detailed memos to his aides giving them explicit instructions neither to look at him nor speak to him unless spoken to. Throughout this period, Hughes sat fixated in his chair, often naked, continually watching movies. When he finally emerged in the summer of 1958, his hygiene was terrible. He had neither bathed nor cut his hair and nails for weeks; this may have been due to allodynia, which results in a pain response to stimuli that would normally not cause pain. After the screening room incident, Hughes moved into a bungalow at the Beverly Hills Hotel, where he also rented rooms for his aides, his wife, and numerous girlfriends. He would sit naked in his bedroom with a pink hotel napkin placed over his genitals, watching movies. This may have been because Hughes found the touch of clothing painful due to allodynia. He may have watched movies to distract himself from his pain—a common practice among patients with intractable pain, especially those who do not receive adequate treatment. In one year, Hughes spent an estimated $11 million at the hotel. Hughes began purchasing restaurant chains and four-star hotels that had been founded within the state of Texas. This included, if only for a short period, many little-known franchises that are now out of business. He placed ownership of the restaurants with the Howard Hughes Medical Institute, and all licenses were resold shortly afterward. Another time, he became obsessed with the 1968 film Ice Station Zebra, and had it run on a continuous loop in his home. According to his aides, he watched it 150 times. Feeling guilty about the commercial and critical failure of his film The Conqueror, as well as the rumored toxicity of its filming location, he spent $12 million buying up every copy of the film and watched it on repeat. Paramount Pictures acquired the rights of the film in 1979, three years after his death. Hughes insisted on using tissues to pick up objects to insulate himself from germs. He would also notice dust, stains, or other imperfections on people's clothes and demand that they take care of them. Once one of the most visible men in America, Hughes ultimately vanished from public view, although tabloids continued to follow rumors of his behavior and whereabouts. He was reported to be terminally ill, mentally unstable, or even dead. Injuries from numerous aircraft crashes caused Hughes to spend much of his later life in pain, and he eventually became addicted to codeine, which he injected intramuscularly. Hughes had his hair cut and nails trimmed only once a year, likely due to the pain caused by reflex sympathetic dystrophy/complex regional pain syndrome (RSD/CRPS) resulting from the plane crashes. He also stored his urine in bottles.
Later years in Las Vegas The wealthy and aging Hughes, accompanied by his entourage of personal aides, began moving from one hotel to another, always taking up residence in the top floor penthouse. In the last ten years of his life, 1966 to 1976, Hughes lived in hotels in many cities—including Beverly Hills, Boston, Las Vegas, Nassau, Freeport and Vancouver. On November 24, 1966 (Thanksgiving Day), Hughes arrived in Las Vegas by railroad car and moved into the Desert Inn. Because he refused to leave the hotel and to avoid further conflicts with the owners, Hughes bought the Desert Inn in early 1967. The hotel's eighth floor became the nerve center of Hughes's empire and the ninth-floor penthouse became his personal residence. Between 1966 and 1968, he bought several other hotel-casinos, including the Castaways, New Frontier, the Landmark Hotel and Casino, and the Sands. He bought the small Silver Slipper casino for the sole purpose of moving its trademark neon silver slipper, which was visible from his bedroom, and had apparently kept him awake at night. After Hughes left the Desert Inn, hotel employees discovered that his drapes had not been opened during the time he lived there and had rotted
by an 'area committee'. Transport links Railways The Schiedam–Hoek van Holland railway is a 24-kilometre branch line from Schiedam Centrum station via Vlaardingen and Maassluis. The final two stations on the line are located within the town. Hoek van Holland Haven, the penultimate station, is close to the town centre, adjacent to the ferry terminal and the small harbour, the Berghaven. Hoek van Holland Strand, the terminus, is closest to the beach. The railway line opened for service in 1893 and was electrified in 1935. International trains ran from Berlin and Moscow to connect these with London via the ferry service. From 1928 to 1939 and from 1962 to 1979, Hook of Holland was the northern terminus of the Rheingold Express to Frankfurt and Geneva. Services on the line to Rotterdam Centraal station were operated by NS every half-hour during the day until April 2017, when the line was closed for conversion to metro standards. It was reopened in September 2019, as an extension of the Rotterdam Metro. The metro line service from Hook of Holland does not offer direct connections to Rotterdam Centraal. Ferry Hook of Holland is also the location of an international ferry terminal, from which service to eastern England has operated since 1893 except for the durations of the two World Wars. Currently, two routes are operated: one, a day-and-night freight and passenger service to Harwich, Essex, and the other, a night, freight-only service to North Killingholme Haven, Lincolnshire. The passenger ferry service is operated by Stena Line as part of the Dutchflyer rail-ferry service between Hook van Holland Haven station and Harwich International station in England, from which Greater Anglia provides service to Liverpool Street station in central London.
town in the southwestern corner of Holland (hence the name; hoek means "corner" and was the word in use before the word kaap – "cape", from Portuguese cabo – became Dutch), at the mouth of the New Waterway shipping canal into the North Sea. The town is administered by the municipality of Rotterdam as a district of that city. Its district covers an area of 16.7 km2, of which 13.92 km2 is land. On 1 January 1999 it had an estimated population of 9,400. Towns near "the Hook" () include Monster, 's-Gravenzande, Naaldwijk and Delft to the northeast, and Maassluis to the southeast. On the other side of the river is the Europort and the Maasvlakte. The wide sandy beach, one section of which is designated for use by naturists, runs for approximately 18 kilometres to Scheveningen and for most of this distance is backed by extensive sand dunes through which there are foot and cycle paths. On the north side of the New Waterway, to the west of the town, is a pier, part of which is accessible to pedestrians and cyclists. The Berghaven is a small harbour on the New Waterway where the Rotterdam and Europort pilots are based. This small harbour is only for the use of the pilot service, government vessels and the Hook of Holland lifeboat. History The Hook of Holland area was created as a sandbar in the Meuse estuary, when it became more and more silted after St. Elizabeth's flood of 1421. All kinds of plans were designed to improve the shipping channel to Rotterdam. In 1863 it was finally decided to construct the New Waterway which was dug between 1866 and 1868. The route ran through the Hook of Holland, where a primitive settlement, Old Hook (Oude Hoek - nowadays the Zuidelijk Strandcentrum), was created. Many workers and senior employees of the Rijkswaterstaat settled in Old Hook. The Hook initially fell under the administrative authority of 's-Gravenzande. An attempt by the inhabitants to transform the place into an independent municipality failed and, on 1 January 1914, Hook of Holland was added to Rotterdam. After the First World War the village started to develop into a seaside resort. It has since been informally
University of Glasgow at age thirteen. Binning has been described as "an extraordinary instance of precocious learning and genius." In 1645, James Dalrymple, who was Hugh's master (primary professor) in the study of philosophy, announced he was retiring from the University of Glasgow; Dalrymple afterward became President of the Court of Session and 1st Viscount of Stair. After a national search for a replacement on the faculty, three men were selected to compete for the position. Binning was one of those selected, but was at a disadvantage because of his extreme youth and because he was not of noble birth. However, he had strong support from the existing faculty, who suggested that the candidates speak extemporaneously on any topic of the candidate's choice. After hearing Hugh speak, the other candidates withdrew, making Hugh a regent and professor of philosophy while he was still 18 years old. On 7 February 1648, at the age of 21, Hugh was admitted as an advocate before the Court of Session (an attorney). In the same year, he married Barbara Simpson (sometimes called Mary), daughter of Rev. James Simpson, a minister in Ireland. Their son, John, was born in 1650. Binning was called to the parish of Govan on 25 October 1649, as successor to Mr. William Wilkie. His ordination took place on the 8th of January 1650, when Mr David Dickson, one of the theological professors at the College of Glasgow, and author of Therapeutica Sacra, presided. He was thus ordained in January, at the age of 22, holding his regency until 14 May that year. At that time Govan was a separate town rather than part of Glasgow. Hugh died around September 1653 and was buried in the churchyard of Govan, where Patrick Gillespie, then principal of the University of Glasgow, ordered a monument inscribed in Latin, roughly translated: Here lies Mr. Hugh Binning, a man distinguished for his piety and eloquence, learned in philology, philosophy, and theology, a Prelate, faithful to the Gospel, and finally an excellent preacher. In the middle of a series of events, he was taken at the age of 26, in the year of our Lord 1653. Alive, he changed the society of his own land because he walked with God. And if you wish to make other inquiries, the rest should keep silence, since neither you nor the marble can comprehend it. Hugh's widow, Barbara (sometimes called Mary), then remarried, to James Gordon, an Anglican priest at Cumber in Ireland. Together they had a daughter, Jean, who married Daniel MacKenzie, who was on the winning side of the Battle of Bothwell Bridge, serving as an ensign under Lieutenant-Colonel William Ramsay (who became the third Earl of Dalhousie), in the Earl of Mar's Regiment of Foot. Binning's son, John Binning, married Hanna Keir, who was born in Ireland. The Binnings were Covenanters, a resistance movement that objected to the return of Charles II (who was received into the Catholic Church on his deathbed). They were on the losing side in the 1679 Battle of Bothwell Bridge. Most of the rebels who were not executed were exiled to the Americas; about 30 Covenanters were exiled to the Carolinas on the Carolina Merchant in 1684. After the battle, John and Hanna were separated. In the aftermath of the battle at Bothwell Bridge, Binning's widow (now Barbara Gordon) tried to reclaim the family estate at Dalvennan by claiming that John and his wife owed his stepfather a considerable sum of money.
The legal action was successful and Dalvennan became the possession of John's half-sister Jean and her husband Daniel MacKenzie. In addition, Jean came into possession of Hanna Keir's property in Ireland. By 1683, Jean was widowed. John Binning was branded a traitor, was sentenced to death, and forfeited his property to the Crown. John's wife, Hanna Keir, was branded a traitor and forfeited her property in Ireland. In 1685, Jean "donated" the Binning family's home at Dalvennan and other properties, along with the Keir properties, to Roderick MacKenzie, who was a Scottish advocate of James II (James VII of Scotland), and the baillie of Carrick. According to an act of the Scottish Parliament, Roderick MacKenzie was also very effective in "suppressing the rebellious, fanatical party in the western and other shires of this realm, and putting the laws to vigorous execution against them". Since Bothwell Bridge, Hanna had been hiding from the authorities. In 1685, Hanna was in Edinburgh, where she was found during a sweep for subversives and imprisoned in the Tolbooth of Edinburgh, a combination city hall and prison. Those arrested with Hanna were exiled to North America; however, she developed dysentery and remained behind. By 1687, near death, Hanna petitioned the Privy Council of Scotland for her release; she was exiled to her family in Ireland, where she died around 1692. In 1690, the Scottish Parliament rescinded John's fines and forfeiture, but he was unable to recover his family's estates, the courts suggesting that he had relinquished his claim to Dalvennan in exchange for forgiveness of debt, rather than forfeiture. There is little documentation about John after his wife's death. John received a small income from royalties on his father's works after Parliament extended copyrights on Binning's writings to him. However, the income was not significant, and John made several petitions to the Scottish Parliament for money, the last occurring in 1717. It is thought that John died in Somerset, in southwestern England. Hugh Binning himself had died of consumption at the age of 26 in September 1653. He was remarkably popular as a preacher, having been considered "the most accomplished philosopher and divine in his time, and styled the Scottish Cicero." He married, by contract dated 17 May 1650, Mary (who died at Paisley in 1694), and had a son, John of Dalvennan. She was the daughter of Richard Simson, minister of Sprouston. After Binning's early death, Mary married her second husband, James Gordon, minister of Comber, in Ireland. A marble tablet, with an inscription in classical Latin, was erected to his memory by his friend Mr Patrick Gillespie, who was then Principal of the University of Glasgow. It has been placed in the vestibule of the present parish church. The whole of his works are posthumous publications. He was a follower of James Dalrymple. In later life, he was well known as an evangelical Christian. Impact of the Commonwealth Hugh Binning was born two years after Charles I became monarch of England, Ireland, and Scotland. At the time, each was an independent country sharing the same monarch. The Acts of Union 1707 integrated Scotland and England to form the Kingdom of Great Britain, and the Acts of Union 1800 integrated Ireland to form the United Kingdom of Great Britain and Ireland. The period was dominated by both political and religious strife between the three independent countries.
Religious disputes centered on questions such as whether religion was to be dictated by the monarch or was to be the choice of the people, and whether individuals had a direct relationship with God or needed to use an intermediary. Civil disputes centered on debates about the extent of the King's power (a question of the Divine right of kings), and specifically whether the King had the right to raise taxes and armed forces without the consent of the governed. These wars ultimately changed the relationship between king and subjects. In 1638, the General Assembly of the Church of Scotland voted to remove bishops and the Book of Common Prayer that had been introduced by Charles I to impose the Anglican model on the Presbyterian Church of Scotland. Public riots followed, culminating in the Wars of the Three Kingdoms, an interrelated series of conflicts that took place in the three countries. The first conflict, which was also the first of the Bishops' Wars, took place in 1639 and was a single border skirmish between England and Scotland, also known as the war the armies did not want to fight. To maintain his English power base, Charles I made secret alliances with Catholic Ireland and Presbyterian Scotland to invade Anglican England, promising that each country could establish its own separate state religion. Once these secret entreaties became known to the English Long Parliament, the Congregationalist faction (of which Oliver Cromwell was a primary spokesman) took matters into its own hands, and Parliament established an army separate from the King. Charles I was executed in January 1649, which led to the rule of Cromwell and the establishment of the Commonwealth. The conflicts concluded with the English Restoration of the monarchy and the return of Charles II in 1660. The Act of Classes was passed by the Parliament of Scotland on 23 January 1649; the act banned Royalists (people supporting the monarchy) from holding political or military office. In exile, Charles II signed the Treaty of Breda (1650) with the Scottish Parliament; among other things, the treaty established Presbyterianism as the national religion. Charles was crowned King of Scots at Scone in January 1651. By September 1651, Scotland was annexed by England, its legislative institutions abolished, Presbyterianism disestablished, and Charles was forced into exile in France. The Scottish Parliament rescinded the Act of Classes in 1651, which produced a split within Scottish society. The sides of the conflict were called the Resolutioners (who supported the rescission of the act, the monarchy, and the Scottish House of Stewart) and the Protesters (who supported Cromwell and the Commonwealth); Binning sided with the Protesters, joining them in 1651. When Cromwell sent troops to Scotland, he was also attempting to disestablish Presbyterianism and the Church of Scotland; Binning spoke against this. On Saturday 19 April 1651, Cromwell entered Glasgow, and the next day he heard a sermon by three ministers who condemned him for invading Scotland. That evening, Cromwell summoned those ministers, and others, to a debate on the issue: a discussion on some of the controverted points of the times was held in his presence, between his chaplains, the learned Dr John Owen, Joseph Caryl, and others on the one side, and some Scots ministers on the other. Mr. Binning, who was one of the disputants, apparently nonplussed the Independents, which led Cromwell to ask who the learned and bold young man was.
Told it was Binning, he said: "He hath bound well, indeed; but," laying his hand on his sword, "this will loose all again." The late Mr. Orme was of the opinion that there is nothing improbable in the account of the meeting. But that such a meeting took place is certain. This appears from two letters written by Principal Robert Baillie, who was then Professor of Theology at the University of Glasgow. At the debate, Rev. Hugh Binning is said to have out-debated Cromwell's
on its east side, facing onto the Canongate. He is buried in the Home-Drummond plot at Kincardine-in-Menteith just west of Blair Drummond. Writings Home wrote much about the importance of property to society. In his Essay Upon Several Subjects Concerning British Antiquities, written just after the Jacobite rising of 1745, he showed that the politics of Scotland were based not on loyalty to Kings, as the Jacobites had said, but on the royal land grants that lay at the base of feudalism, the system whereby the sovereign maintained "an immediate hold of the persons and property of his subjects". In Historical Law Tracts Home described a four-stage model of social evolution that became "a way of organizing the history of Western civilization". The first stage was that of the hunter-gatherer, wherein families avoided each other as competitors for the same food. The second was that of the herder of domestic animals, which encouraged the formation of larger groups but did not result in what Home considered a true society. No laws were needed at these early stages except those given by the head of the family, clan, or tribe. Agriculture was the third stage, wherein new occupations such as "plowman, carpenter, blacksmith, stonemason" made "the industry of individuals profitable to others as well as to themselves", and a new complexity of relationships, rights, and obligations required laws and law enforcers. A fourth stage evolved with the development of market towns and seaports, "commercial society", bringing yet more laws and complexity but also providing more benefit. Lord Kames could see these stages within Scotland itself, with the pastoral Highlands, the agricultural Lowlands, the "polite" commercial towns of Glasgow and Edinburgh, and in the Western Isles a remaining culture of rude huts where fishermen and gatherers of seaweed eked out their subsistence living. Home was a polygenist: he believed God had created different races on earth in separate regions. In his book Sketches of the History of Man, in 1774, Home claimed that the environment, climate, or state of society could not account for racial differences, so that the races must have come from distinct, separate stocks. The above studies created the genre of the story of civilization and defined the fields of anthropology and sociology and therefore the modern study of history for two hundred years. In the popular book Elements of Criticism (1762) Home interrogated the notion of fixed or arbitrary rules of literary composition, and endeavoured to establish a new theory based on the principles of human nature. The late eighteenth-century tradition of sentimental writing was associated with his notion that 'the genuine rules of criticism are all of them derived from the human heart.' Prof Neil Rhodes has argued that Lord Kames played a significant role in the development of English as an academic discipline in the Scottish universities. Social milieu He enjoyed intelligent conversation and cultivated a large number of intellectual associates, among them John Home, David Hume and James Boswell. Lord Monboddo was also a frequent debating partner of Kames, though the two usually had a fiercely competitive and adversarial relationship. Family He was married to Agatha Drummond of Blair Drummond. Their children included George Drummond-Home. Major works Remarkable Decisions of the Court of Session (1728); Essays upon Several Subjects in Law (1732); Essay Upon Several Subjects Concerning British Antiquities (c. 1745); Essays on the Principles of Morality and Natural Religion (1751), in which he advocates the doctrine of philosophical necessity; Historical Law-Tracts (1758); Principles of Equity (1760); Introduction to the Art of Thinking (1761); Elements of Criticism (1762), published by two Scottish booksellers, Andrew Millar and Alexander Kincaid; Sketches of the History of Man (1774); Gentleman Farmer (1776); Loose Thoughts on Education (1781).
The line also allows freight trains to access the Port. The port is famous for the phrase "Harwich for the Continent", seen on road signs and in London & North Eastern Railway (LNER) advertisements. From 1924 to 1987 (with a break during the Second World War), a train ferry service operated between Harwich and Zeebrugge. The train ferry linkspan still exists today and the rails leading from the former goods yard of Harwich Town railway station are still in position across the road, although the line is blocked by the Trinity House buoy store. Architecture Despite, or perhaps because of, its small size, Harwich is highly regarded in terms of architectural heritage, and the whole of the older part of the town, excluding Navyard Wharf, is a conservation area. The regular street plan with principal thoroughfares connected by numerous small alleys indicates the town's medieval origins, although many buildings of this period are hidden behind 18th-century facades. The extant medieval structures are largely private homes. A house on Kings Head Street is unique in the town as an example of a sailmaker's house, thought to have been built circa 1600. Notable public buildings include the parish church of St. Nicholas (1821) in a restrained Gothic style, with many original furnishings, including a somewhat altered organ in the west end gallery. There is also the Guildhall of 1769, the only Grade I listed building in Harwich. The Pier Hotel of 1860 and the building that was the Great Eastern Hotel of 1864 can both be seen on the quayside, both reflecting the town's new importance to travellers following the arrival of the Great Eastern Main Line from Colchester in 1854. In 1923, the Great Eastern Hotel was closed by the newly formed LNER, as the Great Eastern Railway had opened a new hotel with the same name at the new passenger port at Parkeston Quay, which had caused a decline in guest numbers. The building became Harwich Town Hall, which included the Magistrates Court, and, following changes in local government, was sold and divided into apartments. Also of interest are the High Lighthouse (1818), the unusual Treadwheel Crane (late 17th century), the Old Custom Houses on West Street, a number of Victorian shopfronts and the Electric Palace Cinema (1911), one of the oldest purpose-built cinemas to survive complete with its ornamental frontage and original projection room still intact and operational. There is little notable building from the later parts of the 20th century, but major recent additions include the lifeboat station and two new structures for Trinity House. The Trinity House office building, next door to the Old Custom Houses, was completed in 2005. All three additions are influenced by the high-tech style. Notable residents Harwich has also historically hosted a number of notable inhabitants, linked with Harwich's maritime past.
Christopher Newport (1561–1617), seaman and privateer, captain of the expedition that founded Jamestown, Virginia
Christopher Jones (c.1570–1622), captain of the 1620 voyage of the Pilgrim ship Mayflower
Thomas Cobbold (1708–1767), brewer and owner of Three Cups
William Shearman (1767–1861), physician and medical writer
James Francillon (1802–1866), barrister and legal writer
Captain Charles Fryatt (1872–1916), mariner executed by the Germans, brought back from Belgium and buried at Dovercourt
Peter Firmin (1928–2018), artist and puppet maker
Randolph Stow (1935–2010), reclusive but award-winning Australian-born writer who made his home in Harwich
Myles de Vries (born 1940), first-class cricketer
Liana Bridges (born 1969), actress, best known for co-presenting Sooty & Co
Kate Hall (born 1983), British-Danish singer
Politicians
Sir John Jacob, 1st Baronet of Bromley (c.1597–1666), politician who sat in the House of Commons in 1640 and 1641
Sir Capel Luckyn, 2nd Baronet (1622–1680), politician who sat in the House of Commons variously between 1647 and 1679
Samuel Pepys (1633–1703), diarist and Member of Parliament (MP) for Harwich
Sir Anthony Deane (1638–1721), Mayor of Harwich, naval architect, Master Shipwright, commercial shipbuilder and MP
Lieutenant-General Edward Harvey (1718–1788), Adjutant-General to the Forces and MP for Harwich 1768 to 1778
Tony Newton, Baron Newton of Braintree OBE, PC, DL (1937–2012), Conservative politician and former Cabinet member
Nick Alston (born 1952), Conservative Essex Police and Crime Commissioner
Bernard Jenkin (born 1959), Conservative politician, MP for Harwich and North
Louis Schomberg was made Marquess of Harwich. Writer Daniel Defoe devotes a few pages to the town in A tour thro' the Whole Island of Great Britain. Visiting in 1722, he noted its formidable fort and harbour "of a vast extent". The town, he recounts, was also known for an unusual chalybeate spring rising on Beacon Hill (a promontory to the north-east of the town), which "petrified" clay, allowing it to be used to pave Harwich's streets and build its walls. The locals also claimed that "the same spring is said to turn wood into iron", but Defoe put this down to the presence of "copperas" in the water. Regarding the atmosphere of the town, he states: "Harwich is a town of hurry and business, not much of gaiety and pleasure; yet the inhabitants seem warm in their nests and some of them are very wealthy". Harwich played an important part in the Napoleonic Wars and, more especially, the two world wars. Of particular note:
1793–1815: Post Office station for communication with Europe; one of the embarkation and evacuation bases for expeditions to Holland in 1799, 1809 and 1813/14; base for capturing enemy privateers. The dockyard built many ships for the Navy, including HMS Conqueror, which captured the French Admiral Villeneuve at the Battle of Trafalgar. The Redoubt and the now-demolished Ordnance Building date from that era.
1914–18: base for the Royal Navy's Harwich Force light cruisers and destroyers under Commodore Tyrwhitt, and for British submarines. In November 1918 the German U-boat fleet surrendered to the Royal Navy in the harbour.
1939–1945: one of the main East Coast minesweeping and destroyer bases, and at one period a base for British and French submarines; assembled fleets for the Dutch and Dunkirk evacuations and the follow-up to D-Day; unusually, a target for Italian bombers during the Battle of Britain.

Royal Naval Dockyard
Harwich Dockyard was established as a Royal Navy Dockyard in 1652. It ceased to operate as a Royal Dockyard in 1713 (though a Royal Navy presence was maintained until 1829). During the various wars with France and Holland, through to 1815, the dockyard was responsible for both building and repairing numerous warships. HMS Conqueror, a 74-gun ship completed in 1801, captured the French admiral Villeneuve at Trafalgar. The yard was then a semi-private concern, with the actual shipbuilding contracted to Joseph Graham, who was at times mayor of the town. During World War II parts of Harwich were again requisitioned for naval use and ships were based at HMS Badger; Badger was decommissioned in 1946, but the Royal Naval Auxiliary Service maintained a headquarters on the site until 1992.

Lighthouses
In 1665, not long after the establishment of the Dockyard, a pair of lighthouses was set up on the Town Green to serve as leading lights for ships entering the harbour. Completely rebuilt in 1818, both towers are still standing (though they ceased functioning as lighthouses in 1863, when they were replaced by a new pair of lights at Dovercourt).

Transport
The Royal Navy no longer has a presence in Harwich, but Harwich International Port at nearby Parkeston continues to offer regular ferry services to the Hook of Holland (Hoek van Holland) in the Netherlands. Mann Lines operates a roll-on roll-off ferry service from Harwich Navyard to Bremerhaven, Cuxhaven, Paldiski and Turku. Many operations of the Port of Felixstowe and of Trinity House, the lighthouse authority, are managed from Harwich. The Mayflower railway line serves Harwich and there are three operational passenger stations: , and .
The line also allows freight trains to access the Port.
Avercamp also produced drawings, many of which were tinted with water-color, as finished pictures to be pasted into the albums of collectors. The Royal Collection has an outstanding collection of his works. Avercamp died in Kampen and was interred there in the Sint Nicolaaskerk.

Artwork
Avercamp probably painted in his studio, on the basis of sketches he had made in the winter. He was famous even abroad for his winter landscapes. His passion for painting skating figures probably came from his childhood, as he practiced skating with his parents. The last quarter of the 16th century, during which Avercamp was born, was one of the coldest periods of the Little Ice Age. The Flemish painting tradition is mainly expressed in Avercamp's early work, which is consistent with the landscapes of Pieter Bruegel the Elder. Avercamp painted landscapes with a high horizon and many figures busy at some activity. The paintings are narrative, with many anecdotes. For instance, the painting Winter Landscape with Skaters includes several prurient details: a couple making love, naked buttocks, and a urinating man. Later in his life, conveying the atmosphere became more important in his work. The horizon also gradually dropped, leaving more and more sky. Avercamp used the painting technique of aerial perspective: depth is suggested by a change of color in the distance. Objects in the foreground, such as trees or a boat, are painted in richer colors, while more distant objects are lighter. This technique strengthens the impression of depth in the painting. Avercamp also painted cattle and seascapes.
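The idea behind aerial perspective can be illustrated with a small computational sketch. This is purely illustrative and not from the source; the function name and the pale haze colour are assumptions chosen for the example. It simply blends an object's colour toward a sky tone as distance increases, so nearby objects stay rich while distant ones fade.

```python
# Illustrative sketch (assumed example, not from the source): aerial perspective
# modelled as blending an object's colour toward a pale haze colour with distance.
def aerial_perspective(obj_rgb, distance, haze_rgb=(0.85, 0.88, 0.92)):
    """Perceived colour of an object at a normalised distance in [0, 1]."""
    t = max(0.0, min(1.0, distance))  # clamp the distance to the valid range
    return tuple((1 - t) * o + t * h for o, h in zip(obj_rgb, haze_rgb))

# A dark tree reads as rich and saturated in the foreground...
print(aerial_perspective((0.20, 0.25, 0.15), distance=0.1))
# ...and as much lighter, close to the sky tone, near the horizon.
print(aerial_perspective((0.20, 0.25, 0.15), distance=0.9))
```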
During his lifetime there were few witch trials; therefore, some believe Baldung's depictions of witchcraft to be based on folklore rather than the cultural beliefs of his time. By contrast, throughout the early sixteenth century humanism became very popular, and within this movement Latin literature was valorized, particularly poetry and satire, some of which included views on witches that could be combined with the witch lore massively accumulated in works such as the Malleus Maleficarum. Baldung partook in this culture, producing not only many works depicting Strasbourg humanists and scenes from ancient art and literature, but also what an earlier literature on the artist described as a satirical take in his depictions of witches. Gert von der Osten comments on this aspect, describing "Baldung [treating] his witches humorously, an attitude that reflects the dominant viewpoint of the humanists in Strasbourg at this time who viewed witchcraft as 'lustig,' a matter that was more amusing than serious". However, the separation of a satirical tone from deadly serious vilifying intent proves as difficult to maintain for Baldung as it is for many other artists, including his rough contemporary Hieronymus Bosch. Baldung's art simultaneously represents ideas presented in ancient Greek and Roman poetry, such as the pre-16th-century notion that witches could control the weather, to which Baldung is believed to have alluded in his 1523 oil painting "Weather Witches", which showcases two attractive, naked witches in front of a stormy sky. Baldung also regularly incorporated scenes of flying witches into his art, a characteristic that had been contested for centuries before his artwork came into being. Flying was attributed to witches by those who, like Baldung, believed in the myth of the Sabbath (without their ability to fly, the myth fragmented), and he depicted it in works like "Witches Preparing for the Sabbath Flight" (1514).

Work

Painting
Throughout his life, Baldung painted numerous portraits, known for their sharp characterizations. While Dürer rigorously details his models, Baldung's style differs by focusing more on the personality of the represented character, an abstract conception of the model's state of mind. Baldung eventually settled in Strasbourg and later in Freiburg im Breisgau, where he executed what is held to be his masterpiece. Here he painted an eleven-panel altarpiece for the Freiburg Cathedral, still intact today, depicting scenes from the life of the Virgin, including The Annunciation, The Visitation, The Nativity, The Flight into Egypt, The Crucifixion, Four Saints and The Donators. These depictions were a large part of the artist's greater body of work, which contains several renowned pieces depicting the Virgin. The earliest pictures assigned to him by some are altar-pieces with the monogram H. B. interlaced and the date of 1496, in the monastery chapel of Lichtenthal near Baden-Baden. Another early work is a portrait of the emperor Maximilian, drawn in 1501 on a leaf of a sketch-book now in the print-room at Karlsruhe. "The Martyrdom of St Sebastian and the Epiphany" (now Berlin, 1507) were painted for the market-church of Halle in Saxony. Baldung's prints, though Düreresque, are very individual in style, and often in subject. They show little direct Italian influence. His paintings are less important than his prints. He worked mainly in woodcut, although he made six engravings, one very fine. He joined in the fashion for chiaroscuro woodcuts, adding a tone block to a woodcut of 1510.
Most of his hundreds of woodcuts were commissioned for books, as was usual at the time; his "single-leaf" woodcuts (i.e. prints not for book illustration) are fewer than 100, though no two catalogues agree as to the exact number. Unconventional as a draughtsman, his treatment of human form is often exaggerated and eccentric (hence his linkage, in the art historical literature, with European Mannerism), whilst his ornamental style—profuse, eclectic, and akin to the self-consciously "German" strain of contemporary limewood sculptors—is equally distinctive. Though Baldung has been commonly called the Correggio of the north, his compositions are a curious medley of glaring and heterogeneous colours, in which pure black is contrasted with pale yellow, dirty grey, impure red and glowing green. Flesh is a mere glaze under which the features are indicated by lines. His works are notable for their individualistic departure from the Renaissance composure of his model, Dürer, for the wild and fantastic strength that some of them display, and for their remarkable themes. In the field of painting, his Eve, the Serpent and Death (National Gallery of Canada) shows his strengths well. There is special force in the "Death and the Maiden" panel of 1517 (Basel), in the "Weather Witches" (Frankfurt), in the monumental panels of "Adam" and "Eve" (Madrid), and in his many powerful portraits. Baldung's most sustained effort is the altarpiece of Freiburg, where the Coronation of the Virgin, the Twelve Apostles, the Annunciation, Visitation, Nativity and Flight into Egypt, and the Crucifixion, with portraits of donors, are executed with some of that fanciful power that Martin Schongauer bequeathed to the Swabian school. He is well known as a portrait painter; his works include historical pictures and portraits, and among the latter may be named those of Maximilian I and Charles V. His bust of Margrave Philip in the Munich Gallery tells us that he was connected with the reigning family of Baden as early as 1514. At a later period he had sittings with Margrave Christopher of Baden, Ottilia his wife, and all their children, and the picture containing these portraits is still in the gallery at Karlsruhe. Like Dürer and Cranach, Baldung supported the Protestant Reformation. He was present at the Diet of Augsburg in 1518, and one of his woodcuts represents Luther in quasi-saintly guise, under the protection of (or being inspired by) the Holy Spirit, which hovers over him in the shape of a dove.

Selected works
Phyllis and Aristotle, Paris, Louvre, 1503
Two altar wings (Charles the Great, St. George), Augsburg, State Gallery
Portrait of a Youth, Hampton Court, Royal Collection, 1509
The Birth of Christ, Basel, Kunstmuseum Basel, 1510
The Adoration of the Magi, Dessau, Anhalt Art Gallery, 1510
The Witches, 1510
The Mass of St. Gregory, Cleveland, Cleveland Museum of Art, 1511
The Crucifixion of Christ, Basel, Kunstmuseum Basel, 1512
The Crucifixion of Christ, Berlin, Gemäldegalerie, 1512
The Holy
His uncle, Hieronymus Baldung, was a doctor of medicine; Hieronymus's son Pius Hieronymus, Hans's cousin, taught law at Freiburg and by 1527 had become chancellor of the Tyrol. Baldung was in fact the first male in his family not to attend university, but was one of the first German artists to come from an academic family. He received his earliest training as an artist around 1500 in the Upper Rhineland, under an artist from Strasbourg. He perfected his art in Albrecht Dürer's studio in Nuremberg between 1503 and 1507. At the age of 26, Baldung married Margaretha (née Herlin), with whom he had one child: Margarethe Baldungin.

Life as a student of Dürer
Beginning in 1503, during the "Wanderjahre" (journeyman years) required of artists of the time, Baldung became an assistant to Albrecht Dürer. Here, he may have been given his nickname "Grien". This name is thought to have come foremost from a preference for the color green: he seems to have worn green clothing. He probably also got this nickname to distinguish him from at least two other Hanses in Dürer's shop, Hans Schäufelein and Hans Suess von Kulmbach. He later included the name "Grien" in his monogram, and it has also been suggested that the name came from, or consciously echoed, "grienhals", a German word for witch—one of his signature themes. Hans quickly picked up Dürer's influence and style, and they became friends: Baldung seems to have managed Dürer's workshop during the latter's second sojourn in Venice. During a later trip to the Netherlands in 1521, Dürer's account book records that he took with him and sold prints by Baldung. On Dürer's death Baldung was sent a lock of his hair, which suggests a close friendship. Near the end of his Nuremberg years, Grien oversaw the production by Dürer of stained glass, woodcuts and engravings, and therefore developed an affinity for these media and for the Nuremberg master's handling of them.

Strasbourg
In 1509, when Baldung's time in Nuremberg was complete, he moved back to Strasbourg and became a citizen there. He became a celebrity of the town, and received many important commissions. The following year he married Margarethe Herlin, a local merchant's daughter, joined the guild "Zur Steltz", opened a workshop, and began signing his works with the HGB monogram that he used for the rest of his career. His style became much more deliberately individual—a tendency art historians used to term "mannerist." He stayed in Freiburg im Breisgau in 1513–1516, where he made, among other things, the high altarpiece of Freiburg Cathedral.

Witchcraft and religious imagery
In addition to traditional religious subjects, Baldung was concerned during these years with the profane theme of the imminence of death and with scenes of sorcery and witchcraft. He helped introduce supernatural and erotic themes into German art, although these were already amply present in Dürer's work. Most famously, he depicted witches, also a local interest: Strasbourg's humanists studied witchcraft and its bishop was charged with finding and prosecuting witches. His most characteristic works in this area are small in scale and mostly in the medium of drawing; these include a series of puzzling, often erotic allegories and mythological works executed in quill pen and ink and white body color on primed paper. The number of Hans Baldung's religious works diminished with the Protestant Reformation, which generally repudiated church art as either wasteful or idolatrous.
But earlier, around the same time that he produced an important chiaroscuro woodcut of Adam and Eve, the artist became interested in themes related to death, the supernatural, witchcraft, sorcery, and the relation between the sexes. Baldung's fascination with witchcraft began early, with his first chiaroscuro woodcut print in 1510, and lasted to the end of his career. Hans Baldung Grien's work depicting witches was produced in the first half of the 16th century, before witch hunting became a widespread cultural phenomenon in Europe. According to one view, Baldung's work did not represent widespread cultural beliefs at the time of creation but reflected largely individual choices. On the other hand, through his family Baldung stood closer to the leading intellectuals of the day than any of his contemporaries, and could draw on a burgeoning literature on witchcraft, as well as on developing juridical and forensic strategies for witch-hunting. Baldung never worked directly with any Reformation leaders to spread religious ideals through his artwork, although he lived in fervently religious Strasbourg, supported the movement, and worked on the high altar in the city of Münster, Germany. Baldung was the first artist to heavily incorporate witches and witchcraft into his artwork; his mentor Albrecht Dürer had sporadically included them, but not as prominently as Baldung would.
courses transposes the D-major scale to A-major, but of course the first Do-Re-Mi would be shifted off the instrument. This tuning results in most, but not all, notes of the chromatic scale being available. To fill in the gaps, many modern dulcimer builders include extra short bridges at the top and bottom of the soundboard, where extra strings are tuned to some or all of the missing pitches. Such instruments are often called "chromatic dulcimers", as opposed to the more traditional "diatonic dulcimers". The tetrachord markers found on the bridges of most hammered dulcimers in the English-speaking world were introduced by the American player and maker Sam Rizzetta in the 1960s. In the Alps there are also chromatic dulcimers with crossed strings, in which the courses of each row lie a whole tone apart. This chromatic Salzburger hackbrett was developed in the mid-1930s from the diatonic hammered dulcimer by Tobi Reizer and his son, along with Franz Peyer and Heinrich Bandzauner. In the postwar period it was one of the instruments taught in state-sponsored music schools. Hammered dulcimers of non-European descent may have other tuning patterns, and builders of European-style dulcimers sometimes experiment with alternate tuning patterns.

Hammers
The instrument is called "hammered" in reference to the small mallets (known as hammers) that players use to strike the strings. Hammers are usually made of wood (typically a hardwood such as maple, cherry, padauk, oak, or walnut), but can also be made from other materials, including metal and plastic. In the Western hemisphere, hammers are usually stiff, but in Asia, flexible hammers are often used. The head of the hammer can be left bare for a sharp attack sound, or can be covered with adhesive tape, leather, or fabric for a softer sound. Two-sided hammers are also available. The heads of two-sided hammers are usually oval or round. Most of the time, one side is left as bare wood while the other side may be covered in leather or a softer material such as piano felt. Several traditional players have used hammers that differ substantially from those in common use today. Paul Van Arsdale (1920–2018), a player from upstate New York, used flexible hammers made from hacksaw blades, with leather-covered wooden blocks attached to the ends (these were modeled after the hammers used by his grandfather, Jesse Martin). The Irish player John Rea (1915–1983) used hammers made of thick steel wire, which he made himself from old bicycle spokes wrapped with wool. Billy Bennington (1900–1986), a player from Norfolk, England, used cane hammers bound with wool.

Variants and adaptations
The hammered dulcimer was extensively used during the Middle Ages in England, France, Italy, Germany, the Netherlands, and Spain. Although it had a distinctive name in each country, it was everywhere regarded as a kind of psalterium. The importance of the method of setting the strings in vibration by means of hammers, and its bearing on the acoustics of the instrument, were recognized only when the invention of the pianoforte had become a matter of history. It was then perceived that the psalterium (in which the strings were plucked) and the dulcimer (in which they were struck), when provided with keyboards, would give rise to two distinct families of instruments, differing essentially in tone quality, in technique and in capabilities. The evolution of the psalterium resulted in the harpsichord; that of the dulcimer produced the pianoforte.
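Returning to the tuning discussion above, the gap-filling role of the extra bridges can be made concrete with a small sketch. This is purely illustrative and rests on an assumption taken from that passage (one set of courses giving a D-major scale and another the same pattern transposed to A-major); it simply tabulates which chromatic pitch classes such a layout covers and which are left for added strings to supply.

```python
# Illustrative sketch (assumed layout, not a statement about any particular
# instrument): chromatic coverage of a D-major scale plus its transposition to A.
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def major_scale(root):
    """Pitch classes of the major scale on `root` (whole/half-step pattern)."""
    steps = [2, 2, 1, 2, 2, 2]          # intervals up to the leading tone
    i = CHROMATIC.index(root)
    notes = [CHROMATIC[i % 12]]
    for s in steps:
        i += s
        notes.append(CHROMATIC[i % 12])
    return set(notes)

available = major_scale("D") | major_scale("A")
missing = [n for n in CHROMATIC if n not in available]
print("available:", sorted(available))
print("missing:", missing)  # pitches extra bridge strings could supply
```

Under that assumption, eight of the twelve pitch classes are available and four (C, D#, F and A#) are missing, which is the sort of gap the extra short bridges described above are intended to fill.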
Around the world
Versions of the hammered dulcimer, each of which has its own distinct manner of construction and playing style, are used throughout the world:
Austria – Hackbrett
Belarus – tsymbaly (цымбалы)
Belgium – hakkebord
Brazil – saltério
Cambodia – khim
Canada – hammered dulcimer
China – yangqin (扬琴, formerly 洋琴)
Croatian – cimbal, cimbale, cimbule
Czech Republic – cimbál
Denmark – hakkebræt
France – tympanon
Germany – Hackbrett
Greece – santouri
Hungary – cimbalom
India – santoor
Iran – santur
Iraq – santur
Ireland – tiompan
Israel – דולצימר פטישים
Italy – salterio
Japan – darushimaa (ダルシマー)
Korea – yanggeum (양금)
Laos – khim
Latgalia (Latvia) – cymbala
Latvia – cimbole
Lithuania – cimbalai, cimbolai
Mexico – salterio
Mongolia – yoochin (ёочин or ёчин)
Netherlands – hakkebord
Norway – hakkebrett
Poland – cymbały
Portugal – saltério
Romania – ţambal
Russia – цимбалы, dultsimer (дульцимер)
Serbia – цимбал (tsimbal)
Slovakia – cimbal
Slovenia – cimbale, oprekelj
Spain (and Spanish-speaking countries) – salterio, dulcémele
Sweden – hackbräde, hammarharpa
Switzerland – Hackbrett
Thailand – khim
Turkey – santur
Ukraine – tsymbaly (цимбали)
As the Second Vatican Council was concluding, Pope Paul VI enlarged the commission to fifty-eight members, including married couples, laywomen, theologians and bishops. The last document issued by the council (Gaudium et spes) contained a section titled "Fostering the Nobility of Marriage" (1965, nos. 47-52), which discussed marriage from the personalist point of view. The "duty of responsible parenthood" was affirmed, but the determination of licit and illicit forms of regulating birth was reserved to Pope Paul VI. In the spring of 1966, following the close of the council, the commission held its fifth and final meeting, having been enlarged again to include sixteen bishops as an executive committee. The commission was only consultative, but it submitted to Paul VI a report approved by a majority of 64 members. It proposed that he approve artificial contraception, without distinction among the various means. A minority of four members opposed this report and issued a parallel report to the Pope. Arguments in the minority report, against change in the church's teaching, were that "we should have to concede frankly that the Holy Spirit had been on the side of the Protestant churches in 1930" (when Casti connubii was promulgated) and that "it should likewise have to be admitted that for a half a century the Spirit failed to protect Pius XI, Pius XII, and a large part of the Catholic hierarchy from a very serious error." After two more years of study and consultation, the pope issued Humanae vitae, which removed any doubt that the Church views hormonal anti-ovulants as contraceptive. He explained why he did not accept the opinion of the majority report of the commission (1968, #6). In the decades that followed, arguments were raised that his decision had never passed the condition of "reception" needed to become church doctrine.

Drafting of the Encyclical
In his role as Theologian of the Pontifical Household, Mario Luigi Ciappi advised Pope Paul VI during the drafting of Humanae vitae. Ciappi, a doctoral graduate of the Pontificium Athenaeum Internationale Angelicum (the future Pontifical University of Saint Thomas Aquinas, Angelicum), served as professor of dogmatic theology there and was Dean of its Faculty of Theology from 1935 to 1955. According to George Weigel, Paul VI named Archbishop Karol Wojtyła (later Pope John Paul II) to the commission, but Polish government authorities would not permit him to travel to Rome. Wojtyła had earlier defended the church's position from a philosophical standpoint in his 1960 book Love and Responsibility. Wojtyła's position was strongly considered and was reflected in the final draft of the encyclical, although much of his language and many of his arguments were not incorporated. Weigel attributes much of the poor reception of the encyclical to the omission of many of Wojtyła's arguments. In 2017, anticipating the 50th anniversary of the encyclical, four theologians led by Mgr. Gilfredo Marengo, a professor of theological anthropology at the Pontifical John Paul II Institute for Studies on Marriage and Family, launched a research project he called "a work of historical-critical investigation without any aim other than reconstructing as well as possible the whole process of composing the encyclical". Using the resources of the Vatican Secret Archives and the Congregation for the Doctrine of the Faith, they hope to detail the writing process and the interaction between the commission, publicity surrounding the commission's work, and Paul's own authorship.

Highlights

Faithfulness to God's design
13.
Men rightly observe that a conjugal act imposed on one's partner without regard to his or her condition or personal and reasonable wishes in the matter, is no true act of love, and therefore offends the moral order in its particular application to the intimate relationship of husband and wife. If they further reflect, they must also recognize that an act of mutual love which impairs the capacity to transmit life which God the Creator, through specific laws, has built into it, frustrates His design which constitutes the norm of marriage, and contradicts the will of the Author of life. Hence to use this divine gift while depriving it, even if only partially, of its meaning and purpose, is equally repugnant to the nature of man and of woman, and is consequently in opposition to the plan of God and His holy will. But to experience the gift of married love while respecting the laws of conception is to acknowledge that one is not the master of the sources of life but rather the minister of the design established by the Creator. Just as man does not have unlimited dominion over his body in general, so also, and with more particular reason, he has no such dominion over his specifically sexual faculties, for these are concerned by their very nature with the generation of life, of which God is the source. "Human life is sacred—all men must recognize that fact," Our predecessor Pope John XXIII recalled. "From its very inception it reveals the creating hand of God."

Lawful therapeutic means
15. ...the Church does not consider at all illicit the use of those therapeutic means necessary to cure bodily diseases, even if a foreseeable impediment to procreation should result therefrom — provided such impediment is not directly intended.

Recourse to infertile periods
16. ...If therefore there are well-grounded reasons for spacing births, arising from the physical or psychological condition of husband or wife, or from external circumstances, the Church teaches that married people may then take advantage of the natural cycles immanent in the reproductive system and engage in marital intercourse only during those times that are infertile, thus controlling birth in a way which does not in the least offend the moral principles which We have just explained.

Concern of the Church
18. It is to be anticipated that perhaps not everyone will easily accept this particular teaching. There is too much clamorous outcry against the voice of the Church, and this is intensified by modern means of communication. But it comes as no surprise to the Church that it, no less than its divine Founder, is destined to be a "sign of contradiction." The Church does not, because of this, evade the duty imposed on it of proclaiming humbly but firmly the entire moral law, both natural and evangelical. Since the Church did not make either of these laws, it cannot be their arbiter—only their guardian and interpreter. It could never be right for the Church to declare lawful what is in fact unlawful, since that, by its very nature, is always opposed to the true good of man. In preserving intact the whole moral law of marriage, the Church is convinced that it is contributing to the creation of a truly human civilization. The Church urges man not to betray his personal responsibilities by putting all his faith in technical expedients. In this way it defends the dignity of husband and wife.
This course of action shows that the Church, loyal to the example and teaching of the divine Savior, is sincere and unselfish in its regard for men whom it strives to help even now during this earthly pilgrimage "to share God's life as sons of the living God, the Father of all men".

Developing countries
23. We are fully aware of the difficulties confronting the public authorities in this matter, especially in the developing countries. In fact, We had in mind the justifiable anxieties which weigh upon them when We published Our encyclical letter Populorum Progressio. But now We join Our voice to that of Our predecessor John XXIII of venerable memory, and We make Our own his words: "No statement of the problem and no solution to it is acceptable which does violence to man's essential dignity; those who propose such solutions base them on an utterly materialistic conception of man himself and his life. The only possible solution to this question is one which envisages the social and economic progress both of individuals and of the whole of human society, and which respects and promotes true human values." No one can, without being grossly unfair, make divine Providence responsible for what clearly seems to be the result of misguided governmental policies, of an insufficient sense of social justice, of a selfish accumulation of material goods, and finally of a culpable failure to undertake those initiatives and responsibilities which would raise the standard of living of peoples and their children.

Response and criticism

Galileo affair comparison
Cardinal Leo Joseph Suenens, a moderator of the ecumenical council, questioned "whether moral theology took sufficient account of scientific progress, which can help determine what is according to nature. I beg you my brothers let us avoid another Galileo affair. One is enough for the Church." In an interview in Informations Catholiques Internationales on 15 May 1969, he again criticized the Pope's decision as frustrating the collegiality defined by the Council, calling it a non-collegial or even an anti-collegial act. He was supported by Vatican II theologians such as Karl Rahner and Hans Küng, by several episcopal conferences (for example those of Austria, Germany, and Switzerland), and by several bishops, including Christopher Butler, who called it one of the most important contributions to contemporary discussion in the Church.

Open dissent
The publication of the encyclical marks the first time in the twentieth century that open dissent from the laity about teachings of the Church was voiced widely and publicly. The teaching has been criticized by development organizations and others who claim that it limits the methods available to fight worldwide population growth and to struggle against HIV/AIDS. Within two days of the encyclical's release, a group of dissident theologians, led by Rev. Charles Curran, then of The Catholic University of America, issued a statement saying that "spouses may responsibly decide according to their conscience that artificial contraception in some circumstances is permissible and indeed necessary to preserve and foster the value and sacredness of marriage."
Canadian bishops
Two months later, the controversial "Winnipeg Statement" issued by the Canadian Conference of Catholic Bishops stated that those who cannot accept the teaching should not be considered shut off from the Catholic Church, and that individuals can in good conscience use contraception as long as they have first made an honest attempt to accept the difficult directives of the encyclical.

Dutch Catechism
The Dutch Catechism of 1966, the first comprehensive post-Council Catholic catechism and based on the Dutch bishops' interpretation of the just-completed Vatican Council, noted the lack of any mention of artificial contraception in the Council. "As everyone can ascertain nowadays, there are several methods of regulating births. The Second Vatican Council did not speak of any of these concrete methods… This is a different standpoint than that taken under Pius XI some thirty years [earlier], which was also maintained by his successor ... we can sense here a clear development in the Church, a development which is also going on outside the Church."

Soviet Union
In the Soviet Union, Literaturnaja Gazeta, a publication of Soviet intellectuals, included an editorial and a statement by Russian physicians against the encyclical.

Ecumenical reactions
Ecumenical reactions were mixed. Liberal and moderate Lutherans and the World Council of Churches were disappointed. Eugene Carson Blake criticised as outdated the concepts of nature and natural law which, in his view, still dominated Catholic theology. This concern dominated several articles in Catholic and non-Catholic journals at the time. Patriarch Athenagoras I stated his full agreement with Pope Paul VI: "He could not have spoken in any other way."

Latin America
In Latin America, much support developed for the Pope and his encyclical. When World Bank President Robert McNamara declared at the 1968 Annual Meeting of the International Monetary Fund and the World Bank Group that countries permitting birth control practices would get preferential access to resources, doctors in La Paz, Bolivia, called it insulting that money should be exchanged for the conscience of a Catholic nation. In Colombia, Cardinal Aníbal Muñoz Duque declared that if American conditionality undermined papal teaching, "we prefer not to receive one cent". The Senate of Bolivia passed a resolution stating that Humanae vitae could be discussed in its implications for individual consciences, but was of greatest significance because it defended the rights of developing nations to determine their own population policies. The Jesuit journal Sic dedicated one edition to the encyclical, with supportive contributions. However, in the face of eighteen insubordinate priests, professors of theology at the Pontifical Catholic University of Chile, and the ensuing conspiracy of silence practiced by the Chilean episcopate, which had to be censured by the Nuncio in Santiago at the behest of Cardinal Gabriel-Marie Garrone, prefect of the Congregation for Catholic Education, eventually triggering a media conflict with , Plinio Corrêa de Oliveira expressed his affliction with the lamentations of Jeremiah: "O ye all that pass through the way…" (Lamentations 1)

Cardinal Martini
In the book "Nighttime Conversations in Jerusalem: On the Risk of Faith", the well-known liberal Cardinal Carlo Maria Martini accused Paul VI of deliberately concealing the truth, leaving it to theologians and pastors to fix things by adapting precepts to practice: "I knew Paul VI well. With the encyclical, he wanted to express consideration for human life.
He explained his intention to some of his friends by using a comparison: although one must not lie, sometimes it is not possible to do otherwise; it may be necessary to conceal the truth, or it may be unavoidable to tell a lie. It is up to the moralists to explain where sin begins, especially in the cases in which there is a higher duty than the transmission of life."

Response of Pope Paul VI
Pope Paul VI was troubled by the encyclical's reception in the West. Acknowledging the controversy, Paul VI in a letter to the Congress of German Catholics (30 August 1968) stated: "May the lively debate aroused by our encyclical lead to a better knowledge of God's will."
In March 1969, he had a meeting with one of the main critics of Humanae vitae, Cardinal Leo Joseph Suenens. Paul heard him out and said merely, "Yes, pray for me; because of my weaknesses, the Church is badly governed." To jog the memory of his critics, he put them in mind of the experience of no less a figure than Pope Saint Peter: "[N]ow I understand St Peter: he came to Rome twice, the second time to be crucified", thereby directing their attention to his rejoicing in glorifying the Lord. Increasingly convinced that "the smoke of Satan entered the temple of God from some fissure", Paul VI reaffirmed Humanae vitae on 23 June 1978, weeks before his death, in an address to the College of Cardinals: the encyclical, he said, followed "the confirmations of serious science" and sought to affirm the principle of respect for the laws of nature and of "a conscious and ethically responsible paternity".

Legacy
Polls show that most Catholics use artificial means of contraception and that very few use natural family planning. However, John L. Allen Jr. wrote in 2008: "Three decades of bishops' appointments by John Paul II and Benedict XVI, both unambiguously committed to Humanae Vitae, mean that senior leaders in Catholicism these days are far less inclined than they were in 1968 to distance themselves from the ban on birth control, or to soft-pedal it. Some Catholic bishops have brought out documents of their own defending Humanae Vitae." Also, developments in fertility awareness since the 1960s have given rise to natural family planning organizations such as the Billings Ovulation Method,
Despite its mailing list of interested editors and the presence of a full-time editor-in-chief, Larry Sanger, a graduate philosophy student hired by Wales, the writing of content for Nupedia was extremely slow, with only 12 articles written during the first year. Wales and Sanger discussed various ways to create content more rapidly. The idea of a wiki-based complement originated from a conversation between Sanger and Ben Kovitz, a computer programmer and regular on Ward Cunningham's revolutionary wiki, the WikiWikiWeb. Over dinner on Tuesday 2 January 2001, Kovitz explained to Sanger what wikis were, at that time a difficult concept to understand. Wales first stated, in October 2001, that "Larry had the idea to use Wiki software", though he later stated in December 2005 that Jeremy Rosenfeld, a Bomis employee, introduced him to the concept. Sanger thought a wiki would be a good platform to use, and proposed on the Nupedia mailing list, under the subject "Let's make a wiki", that a wiki based upon UseModWiki (then v. 0.90) be set up as a "feeder" project for Nupedia. Wales set one up and put it online on Wednesday 10 January 2001.

Founding of Wikipedia
There was considerable resistance on the part of Nupedia's editors and reviewers to the idea of associating Nupedia with a wiki-style website. Sanger suggested giving the new project its own name, Wikipedia, and Wikipedia was soon launched on its own domain on Monday 15 January 2001. The bandwidth and server (located in San Diego) used for these initial projects were donated by Bomis. Many former Bomis employees later contributed content to the encyclopedia: notably Tim Shell, co-founder and later CEO of Bomis, and programmer Jason Richey. Wales stated in December 2008 that he made Wikipedia's first edit, a test edit with the text "Hello, World!", but this edit may have been to an old version of Wikipedia which soon after was scrapped and replaced by a restart; see [WikiEN-l] "Hello world?". The existence of the project was formally announced and an appeal for volunteers to engage in content creation was made to the Nupedia mailing list on 17 January 2001. The project received many new participants after being mentioned on the Slashdot website in July 2001, having already earned two minor mentions in March 2001. It then received a prominent pointer from a story on the community-edited technology and culture website Kuro5hin on 25 July. Between these relatively rapid influxes of traffic, there had been a steady stream of traffic from other sources, especially Google, which alone sent hundreds of new visitors to the site every day. Its first major mainstream media coverage was in The New York Times on 20 September 2001. The project gained its 1,000th article around Monday 12 February 2001, and reached 10,000 articles around 7 September. In the first year of its existence, over 20,000 encyclopedia entries were created – a rate of over 1,500 articles per month. On Friday 30 August 2002, the article count reached 40,000. Wikipedia's earliest edits were long believed lost, since the original UseModWiki software deleted old data after about a month. On Tuesday 14 December 2010, developer Tim Starling found backups on SourceForge containing every change made to Wikipedia from its creation in January 2001 to 17 August 2001. These showed the first edit as being to HomePage on 15 January 2001, reading "This is the new WikiPedia!". That edit was imported in 2019.
The first three edits known before Tim Starling's discovery are: to the page Wikipedia:UuU at 20:08, 16 January 2001; to the page TransporT at 20:12, 16 January 2001; and to the page User:ScottMoonen at 21:16, 16 January 2001. Divisions and internationalization Early in Wikipedia's development, it began to expand internationally, with the creation of new namespaces, each with a distinct set of usernames. The first subdomain created for a non-English Wikipedia was deutsche.wikipedia.com (created on Friday 16 March 2001, 01:38 UTC), followed after a few hours by catalan.wikipedia.com (at 13:07 UTC). The Japanese Wikipedia, started as nihongo.wikipedia.com, was created around that period, and initially used only Romanized Japanese. For about two months, Catalan was the edition with the most articles in a non-English language, although statistics of that early period are imprecise. The French Wikipedia was created on or around 11 May 2001, in a wave of new language versions that also included Chinese, Dutch, Esperanto, Hebrew, Italian, Portuguese, Russian, Spanish, and Swedish. These languages were soon joined by Arabic and Hungarian. In September 2001, an announcement pledged commitment to the multilingual provision of Wikipedia, notifying users of an upcoming roll-out of Wikipedias for all major languages, the establishment of core standards, and a push for the translation of core pages for the new wikis. At the end of that year, when international statistics first began to be logged, Afrikaans, Norwegian, and Serbian versions were announced. In January 2002, 90% of all Wikipedia articles were in English. By January 2004, fewer than 50% were English, and this internationalization has continued to increase as the encyclopedia grows. Today, about 85.5% of all Wikipedia articles are contained within non-English Wikipedia versions. Development of Wikipedia In March 2002, following the withdrawal of funding by Bomis during the dot-com bust, Larry Sanger left both Nupedia and Wikipedia. By 2002, Sanger and Wales differed in their views on how best to manage open encyclopedias. Both still supported the open-collaboration concept, but the two disagreed on how to handle disruptive editors, specific roles for experts, and the best way to guide the project to success. Wales went on to establish self-governance and bottom-up self-direction by editors on Wikipedia. He made it clear that he would not be involved in the community's day-to-day management, but would encourage it to learn to self-manage and find its own best approaches. These days, Wales mostly restricts his own role to occasional input on serious matters, executive activity, advocacy of knowledge, and encouragement of similar reference projects. Sanger says he is an "inclusionist" and is open to almost anything. He proposed that experts still have a place in the Web 2.0 world. He returned briefly to academia, then joined the Digital Universe Foundation. In 2006, Sanger founded Citizendium, an open encyclopedia that used real names for contributors in an effort to reduce disruptive editing, and hoped to facilitate "gentle expert guidance" to increase the accuracy of its content. Decisions about article content were to be up to the community, but the site was to include a statement about "family-friendly content". He stated early on that he intended to leave Citizendium in a few years, by which time the project and its management would presumably be established. Organization The Wikipedia project has grown rapidly in the course of its life, at several levels.
Content has grown organically through the addition of new articles, new wikis have been added in English and non-English languages, and entire new projects replicating these growth methods in other related areas (news, quotations, reference books and so on) have been founded as well. Wikipedia itself has grown, with the creation of the Wikimedia Foundation to act as an umbrella body and the growth of software and policies to address the needs of the editorial community. These are documented below: Evolution of logo Timeline First decade: 2000–2009 2000 In March 2000, the Nupedia project was started. Its intention was to publish articles written by experts which would be licensed as free content. Nupedia was founded by Jimmy Wales, with Larry Sanger as editor-in-chief, and funded by the web-advertising company Bomis. 2001 In January 2001, Wikipedia began as a side-project of Nupedia, to allow collaboration on articles prior to entering the peer-review process. The name was suggested by Sanger on 11 January 2001 as a portmanteau of the words wiki (Hawaiian for "quick") and encyclopedia. The wikipedia.com and wikipedia.org domain names were registered on 12 and 13 January, respectively, with wikipedia.org being brought online on the same day. The project formally opened on 15 January ("Wikipedia Day"), with the first international Wikipedias – the French, German, Catalan, Swedish, and Italian editions – being created between March and May. The "neutral point of view" (NPOV) policy was officially formulated at this time, and Wikipedia's first slashdotter wave arrived on 26 July. The first media report about Wikipedia appeared in August 2001 in the newspaper Wales on Sunday. The September 11 attacks spurred the appearance of breaking news stories on the homepage, as well as information boxes linking related articles. At the time, approximately 100 articles related to 9/11 had been created. After the September 11 attacks, a link to the Wikipedia article on the attacks appeared on Yahoo!'s home page, resulting in a spike in traffic. 2002 2002 saw the end of funding for Wikipedia from Bomis and the departure of Larry Sanger. The forking of the Spanish Wikipedia also took place with the establishment of the Enciclopedia Libre. The first portable MediaWiki software went live on 25 January. Bots were introduced, Jimmy Wales confirmed that Wikipedia would never run commercial advertising, and the first sister project (Wiktionary) and first formal Manual of Style were launched. A separate board of directors to supervise the project was proposed and initially discussed at Meta-Wikipedia. Close to 200 contributors were editing Wikipedia daily. 2003 The English Wikipedia passed 100,000 articles in 2003, while the next largest edition, the German Wikipedia, passed 10,000. The Wikimedia Foundation was established, and Wikipedia adopted its jigsaw world logo. Mathematical formulae using TeX were reintroduced to the website. The first Wikipedian social meeting took place in Munich, Germany, in October. The basic principles of Wikipedia's Arbitration Committee (known colloquially as "ArbCom") were developed. Wikisource was created as a separate project on 24 November 2003, to host free textual sources. 2004 The worldwide Wikipedia article pool continued to grow rapidly in 2004, doubling in size in 12 months, from under 500,000 articles in late 2003 to over 1 million in over 100 languages by the end of 2004. The English Wikipedia accounted for just under half of these articles.
The website's server farms were moved from California to Florida, CSS style configuration sheets were introduced, and the first attempt to block Wikipedia occurred, with the website being blocked in China for two weeks in June. The formal election of a board and Arbitration Committee began. The first formal projects were proposed to deliberately balance content and seek out systemic bias arising from Wikipedia's community structure. Bourgeois v. Peters (11th Cir. 2004), a court case decided by the United States Court of Appeals for the Eleventh Circuit, was one of the earliest court opinions to cite and quote Wikipedia. It stated: "We also reject the notion that the Department of Homeland Security's threat advisory level somehow justifies these searches. Although the threat level was 'elevated' at the time of the protest, 'to date, the threat level has stood at yellow (elevated) for the majority of its time in existence. It has been raised to orange (high) six times.'" Wikimedia Commons was created on 7 September 2004 to host media files for Wikipedia in all languages. 2005 In 2005, Wikipedia became the most popular reference website on the Internet, according to Hitwise, with the English Wikipedia alone exceeding 750,000 articles. Wikipedia's first multilingual and subject portals were established in 2005. A formal fundraiser held in the first quarter of the year raised almost US$100,000 for system upgrades to handle growing demand. China again blocked Wikipedia in October 2005. The first major Wikipedia scandal, the Seigenthaler incident, occurred in 2005, when a well-known figure was found to have a vandalized biography which had gone unnoticed for months. In the wake of this and other concerns, the first policy and system changes specifically designed to counter this form of abuse were established. These included a new Checkuser privilege policy update to assist in sock puppetry investigations, a new feature called semi-protection, a stricter policy on biographies of living people, and the tagging of such articles for stricter review. A restriction of new article creation to registered users only was put in place in December 2005, after the Seigenthaler incident, in which an anonymous user had posted a hoax. Wikimania 2005, the first Wikimania conference, was held from 4 to 8 August 2005 at the Haus der Jugend in Frankfurt, Germany, attracting about 380 attendees. 2006 The English Wikipedia gained its one-millionth article, Jordanhill railway station, on 1 March 2006. The first approved Wikipedia article selection was made freely available to download, and "Wikipedia" became registered as a trademark of the Wikimedia Foundation. The congressional aides biography scandals – multiple incidents in which congressional staffers and a campaign manager were caught trying to covertly alter Wikipedia biographies – came to public attention, leading to the resignation of the campaign manager. Nonetheless, Wikipedia was rated as one of the top five global brands of 2006. Jimmy Wales indicated at Wikimania 2006 that Wikipedia had achieved sufficient volume and called for an emphasis on quality, perhaps best expressed in the call for 100,000 feature-quality articles. A new privilege, "oversight", was created, allowing specific versions of archived pages with unacceptable content to be marked as non-viewable. Semi-protection against anonymous vandalism, introduced in 2005, proved more popular than expected, with over 1,000 pages being semi-protected at any given time in 2006.
2007 Wikipedia continued to grow rapidly in 2007, possessing over 5 million registered editor accounts by 13 August. The 250 language editions of Wikipedia contained a combined total of 7.5 million articles, totalling 1.74 billion words, by 13 August. The English Wikipedia gained articles at a steady rate of 1,700 a day, with the wikipedia.org domain name ranked the 10th-busiest in the world. Wikipedia continued to garner visibility in the press – the Essjay controversy broke when a prominent member of Wikipedia was found to have lied about his credentials. Citizendium, a competing online encyclopedia, launched publicly. A new trend developed in Wikipedia, with the encyclopedia addressing people whose notability stemmed from being a participant in a news story by adding a redirect from their name to the larger story, rather than creating a distinct biographical article. On 9 September 2007, the English Wikipedia gained its two-millionth article, El Hormiguero. There was some controversy in late 2007 when the Volapük Wikipedia jumped from 797 to over 112,000 articles, briefly becoming the 15th-largest Wikipedia edition, due to automated stub generation by an enthusiast for the Volapük constructed language. According to the MIT Technology Review, the number of regularly active editors on the English-language Wikipedia peaked in 2007 at more than 51,000, and has since been declining. 2008 Various in many areas continued to expand and refine article contents within their scope. In April 2008, the 10-millionth Wikipedia article was created, and by the end of the year the English Wikipedia exceeded 2.5 million articles. 2009 On 25 June 2009 at 3:15 pm PDT (22:15 UTC), following the death of pop icon Michael Jackson, the website temporarily crashed. The Wikimedia Foundation reported nearly a million visitors to Jackson's biography within one hour, probably the most visitors in a one-hour period to any article in Wikipedia's history. By late August 2009, the number of articles in all Wikipedia editions had exceeded 14 million. The three-millionth article on the English Wikipedia, Beate Eriksen, was created on 17 August 2009 at 04:05 UTC. On 27 December 2009, the German Wikipedia exceeded one million articles, becoming the second edition after the English Wikipedia to do so. A TIME article listed Wikipedia among 2009's best websites. Wikipedia content became licensed under Creative Commons in 2009. Second decade: 2010–2019 2010 On 24 March, the European Wikipedia servers went offline due to an overheating problem. Failover to servers in Florida turned out to be broken, causing DNS resolution for Wikipedia to fail across the world. The problem was resolved quickly, but due to DNS caching effects, some areas were slower to regain access to Wikipedia than others. On 13 May, the site released a new interface. New features included an updated logo, new navigation tools, and a link wizard. However, the classic interface remained available for those who wished to use it. On 12 December, the English Wikipedia passed the 3.5-million-article mark, while the French Wikipedia's millionth article was created on 21 September. The 1-billionth Wikimedia project edit was performed on 16 April. 2011 Wikipedia and its users held many celebrations worldwide to commemorate the site's 10th anniversary on 15 January. The site began efforts to expand its growth in India, holding its first Indian conference in Mumbai in November 2011. 
The English Wikipedia passed the 3.6-million-article mark on 2 April, and reached 3.8 million articles on 18 November. On 7 November 2011, the German Wikipedia exceeded 100 million page edits, becoming the second language edition to do so after the English edition, which attained 500 million page edits on 24 November 2011. The Dutch Wikipedia exceeded 1 million articles on 17 December 2011, becoming the fourth Wikipedia edition to do so. The "Wikimania 2011 – Haifa, Israel" stamp was issued by Israel Post on 2 August 2011. This was the first-ever stamp dedicated to a Wikimedia-related project. Between 4 and 6 October 2011, the Italian Wikipedia became intentionally inaccessible in protest against the Italian Parliament's proposed DDL intercettazioni law, which, if approved, would allow any person to force websites to remove information that is perceived as untrue or offensive, without the need to provide evidence. Also in October 2011, Wikimedia announced the launch of Wikipedia Zero, an initiative to enable free mobile access to Wikipedia in developing countries through partnerships with mobile operators. 2012 On 16 January, Wikipedia co-founder Jimmy Wales announced that the English Wikipedia would shut down for 24 hours on 18 January as part of a protest meant to call public attention to the proposed Stop Online Piracy Act and PROTECT IP Act, two anti-piracy laws under debate in the United States Congress. Calling the blackout a "community decision", Wales and other opponents of the laws believed that they would endanger free speech and online innovation. A similar blackout was staged on 10 July by the Russian Wikipedia, in protest against a proposed Russian internet regulation law. In late March 2012, the Wikimedia Deutschland announced Wikidata, a universal platform for sharing data between all Wikipedia language editions. The US$1.7-million Wikidata project was partly funded by Google, the Gordon and Betty Moore Foundation, and the Allen Institute for Artificial Intelligence. Wikimedia Deutschland assumed responsibility for the first phase of Wikidata, and initially planned to make the platform available to editors by December 2012. Wikidata's first phase became fully operational in March 2013. In April 2012, Justin Knapp became the first single contributor to make over one million edits to Wikipedia. Jimmy Wales congratulated Knapp for his work and presented him with the site's Special Barnstar medal and the Golden Wiki award for his achievement. Wales also declared that 20 April would be "Justin Knapp Day". On 13 July 2012, the English Wikipedia gained its 4-millionth article, Izbat al-Burj. In October 2012, historian and Wikipedia editor Richard J. Jensen opined that the English Wikipedia was "nearing completion", noting that the number of regularly active editors had fallen significantly since 2007, despite Wikipedia's rapid growth in article count and readership. According to Alexa Internet, Wikipedia was the world's sixth-most-popular website as of November 2012. Dow Jones ranked Wikipedia fifth worldwide as of December 2012. 2013 On 22 January 2013, the Italian Wikipedia became the fifth language edition of Wikipedia to exceed 1 million articles, while the Russian and Spanish Wikipedias gained their millionth articles on 11 and 16 May respectively. On 15 July the Swedish and on 24 September the Polish Wikipedias gained their millionth articles, becoming the eighth and ninth Wikipedia editions to do so. 
On 27 January, the main belt asteroid 274301 was officially renamed "Wikipedia" by the Committee for Small Body Nomenclature. The first phase of the Wikidata database, automatically providing interlanguage links and other data, became available for all language editions in March 2013. In April 2013, the French secret service was accused of attempting to censor Wikipedia by threatening a Wikipedia volunteer with arrest unless "classified information" about a military radio station was deleted. In July, the VisualEditor editing system was launched, forming the first stage of an effort to allow articles to be edited with a word processor-like interface instead of using wiki markup. An editor specifically designed for smartphones and other mobile devices was also launched. 2014 In February 2014, a project to make a print edition of the English Wikipedia, consisting of 1,000 volumes and over 1,100,000 pages, was launched by German Wikipedia contributors. The project sought funding through Indiegogo, and was intended to honor the contributions of Wikipedia's editors. On 22 October 2014, the first monument to Wikipedia was unveiled in the Polish town of Slubice. On 8 June, 15 June, and 16 July 2014, the Waray Wikipedia, the Vietnamese Wikipedia and the Cebuano Wikipedia each exceeded the one million article mark. They were the tenth, eleventh and twelfth Wikipedias to reach that milestone. Despite having very few active users, the Waray and Cebuano Wikipedias had a high number of automatically generated articles created by bots. 2015 In mid-2015, Wikipedia was the world's seventh-most-popular website according to Alexa Internet, down one place from the position it held in November 2012. At the start of 2015, Wikipedia remained the largest general-knowledge encyclopedia online, with a combined total of over 36 million mainspace articles across all 291 language editions. On average, Wikipedia receives a total of 10 billion global pageviews from around 495 million unique visitors every month, including 85 million visitors from the United States alone, where it is the sixth-most-popular site. Print Wikipedia was an art project by Michael Mandiberg that created the ability to print 7473 volumes of Wikipedia as it existed on 7 April 2015. Each volume has 700 pages and only 110 were printed by the artist. On 1 November 2015, the English Wikipedia reached 5,000,000 articles with the creation of an article on Persoonia terminalis, a type of shrub. 2016 On 19 January 2016, the Japanese Wikipedia exceeded the one million article mark, becoming the thirteenth Wikipedia to reach that milestone. The millionth article was 波号第二百二十四潜水艦 (a World War II submarine of the Imperial Japanese Navy). In mid-2016, Wikipedia was once again the world's sixth-most-popular website according to Alexa Internet, up one place from the position it held in the previous year. In October 2016, the mobile version of Wikipedia got a new look. 2017 In mid-2017, Wikipedia was listed as the world's fifth-most-popular website according to Alexa Internet, rising one place from the position it held in the previous year. Wikipedia Zero was made available in Iraq and Afghanistan. On 29 April 2017, online access to Wikipedia was blocked across all language editions in Turkey by the Turkish authorities. This block lasted until 15 January 2020, as the court of Turkey ruled that the block violated human rights. The encrypted Japanese Wikipedia has been blocked in China since 28 December 2017. 
2018 During 2018, Wikipedia retained its listing as the world's fifth-most-popular website according to Alexa Internet. One notable development was the use of Artificial Intelligence to create draft articles on overlooked topics. On 13 April 2018, the number of Chinese Wikipedia articles exceeded 1 million, becoming the fourteenth Wikipedia to reach that milestone. The Chinese Wikipedia has been blocked in Mainland China since May 2015. Later in the year, on 26 June, the Portuguese Wikipedia exceeded the one million article mark, becoming the fifteenth Wikipedia to reach that milestone. The millionth article was Perdão de Richard Nixon (the Pardon of Richard Nixon). 2019 In August 2019, according to Alexa.com, Wikipedia fell from fifth placed to seventh placed website in the world for global internet engagement. On 23 April 2019, Chinese authorities expanded the block of Wikipedia to versions in all languages. The timing of the block coincided with the 30th anniversary of the 1989 Tiananmen Square protests and massacre and the 100th anniversary of the May Fourth Movement, resulting in stricter internet censorship in China. Third decade: 2020–present 2020 On 23 January 2020, the six millionth article, the biography of Maria Elise Turner Lauder, was added to the English Wikipedia. Despite this growth in articles, Wikipedia's global internet engagement, as measured by Alexa, continued to decline. By February 2020, Wikipedia fell to the eleventh placed website in the world for global internet engagement. Both Wikipedia's coverage of the COVID-19 pandemic crisis and the supporting edits, discussions and even deletions were thought to be a useful resource for future historians seeking to understand the period in detail. The World Health Organization collaborated with Wikipedia as a key resource for the dissemination of COVID-19-related information as to help combat the spread of misinformation. 2021 In January 2021, Wikipedia's 20th anniversary was noted in the media. On 13 January 2021, the English Wikipedia reached one billion edits. MIT Press published an open access book of essays Wikipedia @ 20: Stories of an Unfinished Revolution, edited by Joseph Reagle and Jackie Koerner with contributions from prominent Wikipedians, Wikimedians, researchers, journalists, librarians and other experts reflecting on particular histories and themes. By November 2021, Wikipedia had fallen to the thirteenth placed website in the world for global internet engagement. History by subject area Hardware and software The software that runs Wikipedia, and the computer hardware, server farms and other systems upon which Wikipedia relies. In January 2001, Wikipedia ran on UseModWiki, written in Perl by Clifford Adams. The server still runs on Linux, although the original text was stored in files rather than in a database. Articles were named with the CamelCase convention. In January 2002, "Phase II" of the wiki software powering Wikipedia was introduced, replacing the older UseModWiki. Written specifically for the project by Magnus Manske, it included a PHP wiki engine. In July 2002, a major rewrite of the software powering Wikipedia went live; dubbed "Phase III", it replaced the older "Phase II" version, and became MediaWiki. It was written by Lee Daniel Crocker in response to the increasing demands of the growing project. In October 2002, Derek Ramsey created a bot—an automated program called Rambot—to add a large number of articles about United States towns; these articles were automatically generated from U.S. 
census data. He thus increased the number of Wikipedia articles by 33,832. This has been called "the most controversial move in Wikipedia history". In January 2003, support for mathematical formulas in TeX was added. The code was contributed by Tomasz Wegrzanowski. On 9 June 2003, Wikipedia's ISBN interface was amended to make ISBNs in articles link to Special:Booksources, which fetches its contents from a user-editable page. Before this, ISBN link targets were coded into the software and new ones were suggested on a dedicated page. After 6 December 2003, various system messages shown to Wikipedia users were no longer hard coded, allowing Wikipedia to modify certain parts of MediaWiki's interface, such as the message shown to blocked users. On 12 February 2004, server operations were moved from San Diego, California, to Tampa, Florida. On 29 May 2004, all the various websites were updated to a new version of the MediaWiki software. On 30 May 2004, the first instances of "categorization" entries appeared. Category schemes, like Recent Changes and Edit This Page, had existed from the founding of Wikipedia. However, Larry Sanger had viewed the schemes as lists, and even hand-entered articles, whereas the categorization effort centered on individual categorization entries in each article, as part of a larger automatic categorization of the encyclopedia's articles. After 3 June 2004, administrators could edit the style of the interface by changing the CSS in the monobook stylesheet at MediaWiki:Monobook.css. Also on 30 May 2004, with MediaWiki 1.3, the Template namespace was created, allowing transclusion of standard texts. On 7 June 2005 at 3:00 a.m. Eastern Standard Time, the bulk of the Wikimedia servers were moved to a new facility across the street. All Wikimedia projects were down during this time. In March 2013, the first phase of the Wikidata interwiki database became available across Wikipedia's language editions. In July 2013, the VisualEditor editing interface was inaugurated, allowing users to edit Wikipedia using a WYSIWYG text editor (similar to a word processor) instead of wiki markup. An editing interface optimised for mobile devices was also released. Look and feel The external face of Wikipedia, its look and feel, and the Wikipedia branding, as presented to users. On 4 April 2002, BrilliantProse, since renamed Featured Articles, was moved to the Wikipedia namespace from the article namespace. Around 15 October 2003, a new Wikipedia logo was installed. The logo concept was selected by a voting process, which was followed by a revision process to select the best variant. The final selection was created by David Friedland (who edits Wikipedia under the username "nohat") based on a logo design and concept created by Paul Stansifer. On 22 February 2004, Did You Know (DYK) made its first Main Page appearance. On 23 February 2004, a coordinated new look for the Main Page appeared at 19:46 UTC. Hand-chosen entries for the Daily Featured Article, Anniversaries, In the News, and Did You Know rounded out the new look. On 10 January 2005, the multilingual portal at www.wikipedia.org was set up, replacing a redirect to the English-language Wikipedia. On 5 February 2005, the first thematic "portal" on the English Wikipedia was created. However, the concept was pioneered on the German Wikipedia, where Portal:Recht (law studies) was set up in October 2003.
On 16 July 2005, the English Wikipedia began the practice of including the day's "featured pictures" on the Main Page. On 19 March 2006, following a vote, the Main Page of the English-language Wikipedia featured its first redesign in nearly two years. On 13 May 2010, the site released a new interface. New features included an updated logo, new navigation tools, and a link wizard. The "classic" Wikipedia interface remained available as an option. Internal structures Landmarks in the Wikipedia community, and the development of its organization, internal structures, and policies. In April 2001, Wales formally defined the "neutral point of view", Wikipedia's core non-negotiable editorial policy, a reformulation of the "Lack of Bias" policy outlined by Sanger for Nupedia in spring or summer 2000, which covered many of the same core principles. In September 2001, collaboration by subject matter in WikiProjects was introduced. In February 2002, concerns over the risk of future censorship and commercialization by Bomis Inc (Wikipedia's original host), combined with the lack of any guarantee that this would not happen, led most participants of the Spanish Wikipedia to break away and establish it independently as the Enciclopedia Libre. Following clarification of Wikipedia's status and non-commercial nature later that year, re-merger talks between Enciclopedia Libre and the re-founded Spanish Wikipedia occasionally took place in 2002 and 2003, but no conclusion was reached. As of October 2009, the two continue to coexist as substantial Spanish-language reference sources, with around 43,000 articles (EL) and 520,000 articles (Sp.W) respectively. Also in 2002, policy and style issues were clarified with the creation of the Manual of Style, along with a number of other policies and guidelines. In November 2002, new mailing lists for WikiEN and Announce were set up, as well as other language mailing lists (e.g. Polish), to reduce the volume of traffic on existing lists. In July 2003, the rule against editing one's autobiography was introduced. On 28 October 2003, the first "real" meeting of Wikipedians happened in Munich. Many cities followed suit, and soon a number of regular Wikipedian get-togethers were established around the world. Several Internet communities, including one on the popular blog website LiveJournal, have also sprung up since. From 10 July to 30 August 2004, sections formerly on the Main Page were replaced by links to overviews. On 27 August 2004, the Community Portal was started to serve as a focus for community efforts, which had previously been accomplished on an informal basis, through individual queries of Recent Changes and ad hoc, wiki-style collaborations between like-minded editors. During September to December 2005 following the Seigenthaler
and factories, it was still used during the 18th and 19th centuries for many smaller operations, such as driving the bellows in small blast furnaces (e.g. the Dyfi Furnace) and gristmills, such as those built at Saint Anthony Falls, which used the 50-foot (15 m) drop in the Mississippi River. Technological advances moved the open water wheel into an enclosed turbine or water motor. In 1848, the British-American engineer James B. Francis, head engineer of Lowell's Locks and Canals company, improved on these designs to create a turbine with 90% efficiency. He applied scientific principles and testing methods to the problem of turbine design. His mathematical and graphical calculation methods allowed the confident design of high-efficiency turbines to exactly match a site's specific flow conditions. The Francis reaction turbine is still in use. In the 1870s, deriving from uses in the California mining industry, Lester Allan Pelton developed the high-efficiency Pelton wheel impulse turbine, which used hydropower from the high-head streams characteristic of the Sierra Nevada. Calculating the amount of available power A hydropower resource can be evaluated by its available power. Power is a function of the hydraulic head and volumetric flow rate. The head is the energy per unit weight (or unit mass) of water. The static head is proportional to the difference in height through which the water falls. Dynamic head is related to the velocity of moving water. Each unit of water can do an amount of work equal to its weight times the head. The power available from falling water can be calculated from the flow rate and density of water, the height of fall, and the local acceleration due to gravity:

P = η ṁ g Δh = η ρ Q g Δh

where P (the work flow rate out) is the useful power output (in watts), η ("eta") is the efficiency of the turbine (dimensionless), ṁ is the mass flow rate (in kilograms per second), ρ ("rho") is the density of water (in kilograms per cubic metre), Q is the volumetric flow rate (in cubic metres per second), g is the acceleration due to gravity (in metres per second per second), and Δh ("Delta h") is the difference in height between the outlet and inlet (in metres). To illustrate, the power output of a turbine that is 85% efficient, with a flow rate of 80 cubic metres per second (2,800 cubic feet per second) and a head of 145 metres (480 feet), is about 97 megawatts:

P = 0.85 × 1000 kg/m³ × 80 m³/s × 9.81 m/s² × 145 m ≈ 97 MW

Operators of hydroelectric stations compare the total electrical energy produced with the theoretical potential energy of the water passing through the turbine to calculate efficiency. Procedures and definitions for calculation of efficiency are given in test codes such as ASME PTC 18 and IEC 60041. Field testing of turbines is used to validate the manufacturer's efficiency guarantee. Detailed calculation of the efficiency of a hydropower turbine accounts for the head lost due to flow friction in the power canal or penstock, the rise in tailwater level due to flow, the location of the station and the effect of varying gravity, the air temperature and barometric pressure, the density of the water at ambient temperature, and the relative altitudes of the forebay and tailbay. For precise calculations, errors due to rounding and the number of significant digits of constants must be considered. Some hydropower systems such as water wheels can draw power from the flow of a body of water without necessarily changing its height. In this case, the available power is the kinetic energy of the flowing water. Overshot water wheels can efficiently capture both types of energy.
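For readers who want to plug in their own numbers, the relationship above is straightforward to compute. The following is a minimal sketch in Python; the function name and constants are illustrative choices rather than part of any standard library, and the density and gravity values are the usual rounded figures.

```python
# Minimal sketch of the hydropower formula P = eta * rho * Q * g * delta_h.
# Names and structure are illustrative, not from any standard library.

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # standard acceleration due to gravity, m/s^2

def available_power_watts(efficiency, flow_m3_per_s, head_m, rho=RHO_WATER, g=G):
    """Useful power output of a turbine for a given volumetric flow and head."""
    return efficiency * rho * flow_m3_per_s * g * head_m

# The worked example from the text: 85% efficiency, 80 m^3/s, 145 m head.
power = available_power_watts(0.85, 80.0, 145.0)
print(f"{power / 1e6:.1f} MW")  # about 96.7 MW, i.e. roughly 97 MW
```

Running the sketch reproduces the figure quoted above, which is a useful sanity check on the units: watts come out directly when flow is in cubic metres per second and head is in metres.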
The flow in a stream can vary widely from season to season. The development of a hydropower site requires analysis of flow records, sometimes spanning decades, to assess the reliable annual energy supply. Dams and reservoirs provide a more dependable source of power by smoothing seasonal changes in water flow. However, reservoirs have a significant environmental impact, as does alteration of naturally occurring streamflow. Dam design must account for the worst-case, "probable maximum flood" that can be expected at the site; a spillway is often included to route flood flows around the dam. A computer model of the hydraulic basin and rainfall and snowfall records are used to predict the maximum flood. Disadvantages and limitations
from restoring what is lost from erosion. Large and deep dam and reservoir plants cover large areas of land, which causes greenhouse gas emissions from underwater rotting vegetation. Furthermore, although at lower levels than other renewable energy sources, hydropower has been found to produce methane, a greenhouse gas. This occurs when organic matter accumulates at the bottom of the reservoir because of the deoxygenation of water, which triggers anaerobic digestion. Furthermore, studies found that the construction of dams and reservoirs can result in habitat loss for some aquatic species. Dam failures can have catastrophic effects, including loss of life, property and pollution of land. Applications Mechanical power Watermills Compressed air A plentiful head of water can be made to generate compressed air directly without moving parts. In these designs, a falling column of water is deliberately mixed with air bubbles generated through turbulence or a venturi pressure reducer at the high-level intake. This allows it to fall down a shaft into a subterranean, high-roofed chamber where the now-compressed air separates from the water and becomes trapped. The height of the falling water column maintains compression of the air in the top of the chamber, while an outlet, submerged below the water level in the chamber, allows water to flow back to the surface at a lower level than the intake. A separate outlet in the roof of the chamber supplies the compressed air. A facility on this principle was built on the Montreal River at Ragged Shutes near Cobalt, Ontario in 1910 and supplied 5,000 horsepower to nearby mines. Electricity Hydroelectricity is the biggest hydropower application. Hydroelectricity generates about 15% of global electricity and provides at least 50% of the total electricity supply for more than 35 countries. Hydroelectricity generation starts with converting either the potential energy of water that is present due to the site's elevation or the kinetic energy of moving water into electrical energy. Hydroelectric power plants vary in terms of the way they harvest energy. One type involves a dam and a reservoir. The water in the reservoir is available on demand to be used to generate electricity by passing through channels that connect the dam to the reservoir. The water spins a turbine, which is connected to the generator that produces electricity. The other type is called a run-of-river plant. In this case, a barrage is built to control the flow of water, absent a reservoir. The run-of-river power plant needs continuous water flow and therefore has less ability to provide power on demand. The kinetic energy of flowing water is the main source of energy. Both designs have limitations. For example, dam construction can result in discomfort to nearby residents. The dam and reservoirs occupy a relatively large amount of space that may be opposed by nearby communities. Moreover, reservoirs can potentially have major environmental consequences such as harming downstream habitats. On the other hand, the limitation of the run-of-river project is the decreased efficiency of electricity generation because the process depends on the speed of the seasonal river flow. This means that the rainy season increases electricity generation compared to the dry season. The size of hydroelectric plants can vary from small plants, called micro hydro, to large plants that supply power to a whole country.
As of 2019, the five largest power stations in the world are conventional hydroelectric power stations with dams. Hydroelectricity can also be used to store energy in the form of potential energy between two reservoirs at different heights with pumped storage. Water is pumped uphill into reservoirs during periods of low demand to be released for generation when demand is high or system generation is low. Other forms of electricity generation with hydropower include tidal stream generators, which use tidal energy from oceans, rivers, and human-made canal systems to generate electricity. Rain power Rain has been referred to as "one of the last unexploited energy sources in nature. When it rains, billions of litres of water can fall, which have enormous electric potential if used in the right way." Research is being done into the different methods of generating power from rain, such as by using the energy in the impact of raindrops. This is in its very early stages, with new and emerging technologies being tested, prototyped and created. Such power has been called rain power. One method in which this has been attempted is by using hybrid solar panels called "all-weather solar panels" that can generate electricity from both the sun and the rain. According to zoologist and science and technology educator Luis Villazon, "A 2008 French study estimated that you could use piezoelectric devices, which generate power when they move, to extract 12 milliwatts from a raindrop. Over a year, this would amount to less than 0.001 kWh per square metre – enough to power a remote sensor." Villazon suggested a better application would be to collect the water from fallen rain and use it to drive a turbine, with an estimated generation of 3 kWh of energy per year for a 185 m² roof. A microturbine-based system created by three students from the Technological University of Mexico has been used to generate electricity. The Pluvia system "uses the stream of rainwater runoff from houses' rooftop rain gutters to spin a microturbine in a cylindrical housing. Electricity generated
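For a sense of scale, the sketch below applies the same potential-energy arithmetic to a year's rainfall collected from a roof and dropped through a short head onto a microturbine. Only the 185 m² roof area comes from the text; the annual rainfall depth, the available head, and the turbine efficiency are assumed figures chosen purely to illustrate why such estimates land in the low single-digit kilowatt-hours per year.

```python
# Illustrative estimate of annual energy from collected roof runoff driving a turbine.
# Only the 185 m^2 roof area is from the text; rainfall depth, head, and efficiency
# are assumed values for the sake of the example.

RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

roof_area_m2 = 185.0      # from the text
annual_rain_m = 1.0       # assumed: about 1 m of rain per year
head_m = 5.0              # assumed: drop from gutter to turbine
efficiency = 0.7          # assumed: small turbine and generator losses

mass_kg = RHO_WATER * roof_area_m2 * annual_rain_m   # ~185,000 kg of water per year
energy_joules = efficiency * mass_kg * G * head_m
energy_kwh = energy_joules / 3.6e6
print(f"{energy_kwh:.1f} kWh per year")  # ~1.8 kWh, the same order as the ~3 kWh quoted above
```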
a breed registry. However, the term is sometimes used in a broader sense to define landrace animals of a common phenotype located within a limited geographic region, or even feral “breeds” that are naturally selected. Depending on definition, hundreds of "breeds" exist today, developed for many different uses. Horse breeds are loosely divided into three categories based on general temperament: spirited "hot bloods" with speed and endurance; "cold bloods," such as draft horses and some ponies, suitable for slow, heavy work; and "warmbloods," developed from crosses between hot bloods and cold bloods, often focusing on creating breeds for specific riding purposes, particularly in Europe. Horse breeds are groups of horses with distinctive characteristics that are transmitted consistently to their offspring, such as conformation, color, performance ability, or disposition. These inherited traits are usually the result of a combination of natural crosses and artificial selection methods aimed at producing horses for specific tasks. Certain breeds are known for certain talents. For example, Standardbreds are known for their speed in harness racing. Some breeds have been developed through centuries of crossings with other breeds, while others, such as the Morgan horse, originated via a single sire from which all current breed members descend. More than 300 horse breeds exist in the world today. Origin of breeds Modern horse breeds developed in response to a need for "form to function", the necessity to develop certain physical characteristics to perform a certain type of work. Thus, powerful but refined breeds such as the Andalusian or the Lusitano developed in the Iberian peninsula as riding horses that also had a great aptitude for dressage, while heavy draft horses such as the Clydesdale and the Shire developed out of a need to perform demanding farm work and pull heavy wagons. Ponies of all breeds originally developed mainly from the need for a working animal that could fulfill specific local draft and transportation needs while surviving in harsh environments. However, by the 20th century, many pony breeds had Arabian and other blood added to make a more refined pony suitable for riding. Other horse breeds developed specifically for light agricultural work, heavy and light carriage and road work, various equestrian disciplines, or simply as pets. Purebreds and registries Horses have been selectively bred since their domestication. However, the concept of purebred bloodstock and a controlled, written breed registry only became of significant importance in modern times. Today, the standards for defining and registration of different breeds vary. Sometimes, purebred horses are called Thoroughbreds, which is incorrect; "Thoroughbred" is a specific breed of horse, while a "purebred" is a horse (or any other
"form to function", the necessity to develop certain physical characteristics to perform a certain type of work. Thus, powerful but refined breeds such as the Andalusian or the Lusitano developed in the Iberian peninsula as riding horses that also had a great aptitude for dressage, while heavy draft horses such as the Clydesdale and the Shire developed out of a need to perform demanding farm work and pull heavy wagons. Ponies of all breeds originally developed mainly from the need for a working animal that could fulfill specific local draft and transportation needs while surviving in harsh environments. However, by the 20th century, many pony breeds had Arabian and other blood added to make a more refined pony suitable for riding. Other horse breeds developed specifically for light agricultural work, heavy and light carriage and road work, various equestrian disciplines, or simply as pets. Purebreds and registries Horses have been selectively bred since their domestication. However, the concept of purebred bloodstock and a controlled, written breed registry only became of significant importance in modern times. Today, the standards for defining and registration of different breeds vary. Sometimes, purebred horses are called Thoroughbreds, which is incorrect; "Thoroughbred" is a specific breed of horse, while a "purebred" is a horse (or any other animal) with a defined pedigree recognized by a breed registry. An early example of people who practiced selective horse breeding were the Bedouin, who had a reputation for careful breeding practices, keeping extensive pedigrees of their Arabian horses and placing great value upon pure bloodlines. Though these pedigrees were originally transmitted by an oral tradition, written pedigrees of Arabian horses can be found that date to the 14th century. In the same period of the early Renaissance, the Carthusian monks of southern Spain bred horses and kept meticulous pedigrees of the best bloodstock; the lineage survives to this day in the Andalusian horse. One of the earliest formal registries was General Stud Book for Thoroughbreds, which began in 1791 and traced back to the Arabian stallions imported to England from the Middle East that became the foundation stallions for the breed. Some breed registries have a closed stud book, where registration is based on pedigree, and no outside animals can gain admittance. For example, a registered Thoroughbred or Arabian must have two registered parents of the same breed. Other breeds have a partially closed stud book, but still allow certain infusions from other breeds. For example, the modern Appaloosa must have at least one Appaloosa parent, but may also have a Quarter Horse, Thoroughbred, or Arabian parent, so long as the offspring exhibits appropriate color characteristics. The Quarter Horse normally requires both parents to be registered Quarter Horses, but allows "Appendix" registration of horses with one Thoroughbred parent, and the horse may earn its way to full registration by completing certain performance requirements. Open stud books exist for horse
birth, a foal's navel is dipped in antiseptic to prevent infection. The foal is sometimes given an enema to help clear the meconium from its digestive tract. The newborn is monitored to ensure that it stands and nurses without difficulty. While most horse births happen without complications, many owners have first aid supplies prepared and a veterinarian on call in case of a birthing emergency. People who supervise foaling should also watch the mare to be sure that she passes the placenta in a timely fashion, and that it is complete with no fragments remaining in the uterus. Retained fetal membranes can cause a serious inflammatory condition (endometritis) and/or infection. If the placenta is not removed from the stall after it is passed, a mare will often eat it, an instinct from the wild, where blood would attract predators. Foal care Foals develop rapidly, and within a few hours a wild foal can travel with the herd. In domestic breeding, the foal and dam are usually separated from the herd for a while, but within a few weeks are typically pastured with the other horses. A foal will begin to eat hay, grass and grain alongside the mare at about 4 weeks old; by 10–12 weeks the foal requires more nutrition than the mare's milk can supply. Foals are typically weaned at 4–8 months of age, although in the wild a foal may nurse for a year. How breeds develop Beyond the appearance and conformation of a specific type of horse, breeders aspire to improve physical performance abilities. This concept, known as matching "form to function," has led to the development of not only different breeds, but also families or bloodlines within breeds that are specialists for excelling at specific tasks. For example, the Arabian horse of the desert naturally developed speed and endurance to travel long distances and survive in a harsh environment, and domestication by humans added a trainable disposition to the animal's natural abilities. In the meantime, in northern Europe, the locally adapted heavy horse with a thick, warm coat was domesticated and put to work as a farm animal that could pull a plow or wagon. This animal was later adapted through selective breeding to create a strong but rideable animal suitable for the heavily armored knight in warfare. Then, centuries later, when people in Europe wanted faster horses than could be produced from local horses through simple selective breeding, they imported Arabians and other oriental horses to breed as an outcross to the heavier, local animals. This led to the development of breeds such as the Thoroughbred, a horse taller than the Arabian and faster over the distances of a few miles required of a European race horse or light cavalry horse. Another cross between oriental and European horses produced the Andalusian, a horse developed in Spain that was powerfully built, but extremely nimble and capable of the quick bursts of speed over short distances necessary for certain types of combat as well as for tasks such as bullfighting. Later, the people who settled America needed a hardy horse that was capable of working with cattle. Thus, Arabians and Thoroughbreds were crossed on Spanish horses, both domesticated animals descended from those brought over by the Conquistadors, and feral horses such as the Mustangs, descended from the Spanish horse, but adapted by natural selection to the ecology and climate of the west. These crosses ultimately produced new breeds such as the American Quarter Horse and the Criollo of Argentina. 
In Canada, the Canadian Horse descended from the French stock Louis XIV sent to Canada in the late 17th century.[6] The initial shipment, in 1665, consisted of two stallions and twenty mares from the Royal Stables in Normandy and Brittany, the centre of French horse breeding.[7] Only 12 of the 20 mares survived the trip. Two more shipments followed, one in 1667 of 14 horses (mostly mares, but with at least one stallion), and one in 1670 of 11 mares and a stallion. The shipments included a mix of draft horses and light horses, the latter of which included both pacing and trotting horses.[1] The exact origins of all the horses are unknown, although the shipments probably included Bretons, Normans, Arabians, Andalusians and Barbs. In modern times, these breeds themselves have since been selectively bred to further specialize at certain tasks. One example of this is the American Quarter Horse. Once a general-purpose working ranch horse, different bloodlines now specialize in different events. For example, larger, heavier animals with a very steady attitude are bred to give competitors an advantage in events such as team roping, where a horse has to start and stop quickly, but also must calmly hold a full-grown steer at the end of a rope. On the other hand, for an event known as cutting, where the horse must separate a cow from a herd and prevent it from rejoining the group, the best horses are smaller, quick, alert, athletic and highly trainable. They must learn quickly, have conformation that allows quick stops and fast, low turns, and the best competitors have a certain amount of independent mental ability to anticipate and counter the movement of a cow, popularly known as "cow sense." Another example is the Thoroughbred. While most representatives of this breed are bred for horse racing, there are also specialized bloodlines suitable as show hunters or show jumpers. The hunter must have a tall, smooth build that allows it to trot and canter smoothly and efficiently. Instead of speed, value is placed on appearance and upon giving the equestrian a comfortable ride, with natural jumping ability that shows bascule and good form. A show jumper, however, is bred less for overall form and more for power over tall fences, along with speed, scope, and agility. This favors a horse with a good galloping stride, powerful hindquarters that can change speed or direction easily, plus a good shoulder angle and length of neck. A jumper has a more powerful build than either the hunter or the racehorse. History of horse breeding The history of horse breeding goes back millennia. Though the precise date is in dispute, humans could have domesticated the horse as far back as approximately 4500 BCE. However, evidence of planned breeding has a more blurry history. It is well known, for example, that the Romans did breed horses and valued them in their armies, but little is known regarding their breeding and husbandry practices: all that remains are statues and artwork. Mankind has plenty of equestrian statues of Roman emperors, horses are mentioned in the Odyssey by Homer, and hieroglyphics and paintings left behind by Egyptians tell stories of pharaohs hunting elephants from chariots. Nearly nothing is known of what became of the horses they bred for hippodromes, for warfare, or even for farming. One of the earliest people known to document the breedings of their horses were the Bedouin of the Middle East, the breeders of the Arabian horse. 
While it is difficult to determine how far back the Bedouin passed on pedigree information via an oral tradition, there were written pedigrees of Arabian horses by CE 1330. The Akhal-Teke of West-Central Asia is another breed with roots in ancient times that was also bred specifically for war and racing. The nomads of the Mongolian steppes bred horses for several thousand years as well, and the Caspian horse is believed to be a very close relative of Ottoman horses from the earliest origins of the Turks in Central Asia. The types of horse bred varied with culture and with the times. The uses to which a horse was put also determined its qualities, including smooth amblers for riding, fast horses for carrying messengers, heavy horses for plowing and pulling heavy wagons, ponies for hauling cars of ore from mines, packhorses, carriage horses and many others. Medieval Europe bred large horses specifically for war, called destriers. These horses were the ancestors of the great heavy horses of today, and their size was preferred not simply because of the weight of the armor, but also because a large horse provided more power for the knight's lance. Weighing almost twice as much as a normal riding horse, the destrier was a powerful weapon in battle meant to act like a giant battering ram that could quite literally run down men on an enemy line. On the other hand, during this same time, lighter horses were bred in northern Africa and the Middle East, where a faster, more agile horse was preferred. The lighter horse suited the raids and battles of desert people, allowing them to outmaneuver rather than overpower the enemy. When Middle Eastern warriors and European knights collided in warfare, the heavy knights were frequently outmaneuvered. The Europeans, however, responded by crossing their native breeds with "oriental" type horses such as the Arabian, Barb, and Turkoman horse This cross-breeding led both to a nimbler war horse, such as today's Andalusian horse, but also created a type of horse known as a Courser, a predecessor to the Thoroughbred, which was used as a message horse. During the Renaissance, horses were bred not only for war, but for haute ecole riding, derived from the most athletic movements required of a war horse, and popular among the elite nobility of the time. Breeds such as the Lipizzan and the now extinct Neapolitan horse were developed from Spanish-bred horses for this purpose, and also became the preferred mounts of cavalry officers, who were derived mostly from the ranks of the nobility. It was during this time that firearms were developed, and so the light cavalry horse, a faster and quicker war horse, was bred for "shoot and run" tactics rather than the shock action as in the Middle Ages. Fine horses usually had a well muscled, curved neck, slender body, and sweeping mane, as the nobility liked to show off their wealth and breeding in paintings of the era. After Charles II retook the British throne in 1660, horse racing, which had been banned by Cromwell, was revived. The Thoroughbred was developed 40 years later, bred to be the ultimate racehorse, through the lines of three foundation Arabian stallions and one Turkish horse. In the 18th century, James Burnett, Lord Monboddo noted the importance of selecting appropriate parentage to achieve desired outcomes of successive generations. Monboddo worked more broadly in the abstract thought of species relationships and evolution of species. 
The Thoroughbred breeding hub in Lexington, Kentucky was developed in the late 18th century, and became a mainstay in American racehorse breeding. The 17th and 18th centuries saw more of a need for fine carriage horses in Europe, bringing in the dawn of the warmblood. The warmblood breeds have been exceptionally good at adapting to changing times, and from their carriage horse beginnings they easily transitioned during the 20th century into a sport horse type. Today's warmblood breeds, although still used for competitive driving, are more often seen competing in show jumping or dressage. The Thoroughbred continues to dominate the horse racing world, although its lines have been more recently used to improve warmblood breeds and to develop sport horses. The French saddle horse is an excellent example as is the Irish Sport Horse, the latter being an unusual combination between a Thoroughbred and a draft breed. The American Quarter Horse was developed early in the 18th century, mainly for quarter racing (racing ¼ of a mile). Colonists did not have racetracks or any of the trappings of Europe that the earliest Thoroughbreds had at their disposal, so instead the owners of Quarter Horses would run their horses on roads that lead through town as a form of local entertainment. As the USA expanded West, the breed went with settlers as a farm and ranch animal, and "cow sense" was particularly valued: their use for herding cattle increased on rough, dry terrain that often involved sitting in the saddle for long hours. However, this did not mean that the original ¼-mile races that colonists held ever went out of fashion, so today there are three types: the stock horse type, the racer, and the more recently evolving sport type. The racing type most resembles the finer-boned ancestors of the first racing Quarter Horses, and the type is still used for ¼-mile races. The stock horse type, used in western events and as a farm and patrol animal is bred for a shorter stride, an ability to stop and turn quickly, and an unflappable attitude that remains calm and focused even in the face of an angry charging steer. The first two are still to this day bred to have a combination of explosive speed that exceeds the Thoroughbred on short distances clocked as high as 55 mph, but they still retain the gentle, calm, and kindly temperament of their ancestors that makes them easily handled. The Canadian horse's origin corresponds to shipments of French horses, some of which came from Louis XIV's own stable and most likely were Baroque horses meant to be gentlemen's mounts. These were ill-suited to farm work and to the hardscrabble life of the New World, so like the Americans, early Canadians crossed their horses with natives escapees. In time they evolved along similar lines as the Quarter Horse to the South as both the US and Canada spread westward and needed a calm and tractable horse versatile enough to carry the farmer's son to school but still capable of running fast and running hard as a cavalry horse, a stockhorse, or a horse to pull a conestoga wagon. Other horses from North America retained a hint of their mustang origins by being either derived from stock that Native Americans bred that came in a rainbow of color, like the Appaloosa and American Paint Horse. 
with those East of the Mississippi River increasingly bred to impress and mimic the trends of the upper classes of Europe: The Tennessee Walking Horse and Saddlebred were originally plantation horses bred for their gait and comfortable ride in the saddle as a plantation master would survey his vast lands like an English lord. Horses were needed for heavy draft and carriage work until replaced by the automobile, truck, and tractor. After this time, draft and carriage horse numbers dropped significantly, though light riding horses remained popular for recreational pursuits. Draft horses today are used on a few small farms, but today are seen mainly for pulling and plowing competitions rather than farm work. Heavy harness horses are now used as an outcross with lighter breeds, such as the Thoroughbred, to produce the modern warmblood breeds popular in sport horse disciplines, particularly at the Olympic level. Deciding to breed a horse Breeding a horse is an endeavor where the owner, particularly of the mare, will usually need to invest considerable time and money. For this reason, a horse owner needs to consider several factors, including: Does the proposed breeding animal have valuable genetic qualities to pass on? Is the proposed breeding animal in good physical health, fertile, and able to withstand the rigors of reproduction? For what purpose will the foal be used? Is there a market for the foal if the owner does not wish to keep the foal for its entire life? What is the anticipated economic benefit, if any, to the owner of the ensuing foal? What is the anticipated economic benefit, if any, to the owner(s) of the sire and dam or the foal? Does the owner of the mare have the expertise to properly manage the mare through gestation and parturition? Does the owner of the potential foal have the expertise to properly manage and train a young animal once it is born? There are value judgements involved in considering whether an animal is suitable breeding stock, hotly debated by breeders. Additional personal beliefs may come into play when considering a suitable level of care for the mare and ensuing foal, the potential market or use for the foal, and other tangible and intangible benefits to the owner. If the breeding endeavor is intended to make a profit, there are additional market factors to consider, which may vary considerably from year to year, from breed to breed, and by region of the world. In many cases, the low end of the market is saturated with horses, and the law of supply and demand thus allows little or no profit to be made from breeding unregistered animals or animals of poor quality, even if registered. The minimum cost of breeding for a mare owner includes the stud fee, and the cost of proper nutrition, management and veterinary care of the mare throughout gestation, parturition, and care of both mare and foal up to the time of weaning. Veterinary expenses may be higher if specialized reproductive technologies are used or health complications occur. Making a profit in horse breeding is often difficult. While some owners of only a few horses may keep a foal for purely personal enjoyment, many individuals breed horses in hopes of making some money in the process. A rule of thumb is that a foal intended for sale should be worth three times the cost of the stud fee if it were sold at the moment of birth. From birth forward, the costs of care and training are added to the value of the foal, with a sale price going up accordingly. 
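As a purely hypothetical illustration of the rule of thumb above, the short sketch below compares a foal's target value at birth (three times the stud fee) with the mare owner's accumulating costs; every dollar figure is invented for the example, and real costs vary widely by breed, region, and year.

```python
# Hypothetical illustration of the "three times the stud fee" rule of thumb from the text.
# All dollar figures are made up for the example; real costs vary widely.

stud_fee = 2500.0                     # hypothetical stud fee
mare_care_to_weaning = 3000.0         # hypothetical feed, management, and veterinary costs
monthly_care_after_weaning = 250.0    # hypothetical ongoing cost per month

target_value_at_birth = 3 * stud_fee  # the rule of thumb from the text
print(f"Target value at birth: ${target_value_at_birth:,.0f}")

# From birth forward, care and training costs are added to the expected sale price.
months_kept = 12
asking_price = target_value_at_birth + monthly_care_after_weaning * months_kept
owner_outlay = stud_fee + mare_care_to_weaning + monthly_care_after_weaning * months_kept
print(f"Asking price after {months_kept} months: ${asking_price:,.0f}")
print(f"Owner's outlay so far: ${owner_outlay:,.0f}")
```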
If the foal wins awards in some form of competition, that may also enhance the price. On the other hand, without careful thought, foals bred without a potential market for them may wind up being sold at a loss, and in a worst-case scenario, sold for "salvage" value—a euphemism for sale to slaughter as horsemeat. Therefore, a mare owner must consider their reasons for breeding, asking hard questions of themselves as to whether their motivations are based on either emotion or profit and how realistic those
the same sire. The terms paternal half-sibling, and maternal half-sibling are also often used. Three-quarter siblings are horses out of the same dam, and are by sires that are either half-brothers (i.e. same dam) or who are by the same sire. Thoroughbreds and Arabians are also classified through the "distaff" or direct female line, known as their "family" or "tail female" line, tracing back to their taproot foundation bloodstock or the beginning of their respective stud books. The female line of descent always appears at the bottom of a tabulated pedigree and is therefore often known as the bottom line. In addition, the maternal grandfather of a horse has a special term: damsire. "Linebreeding" technically is the duplication of fourth generation or more distant ancestors. However, the term is often used more loosely, describing horses with duplication of ancestors closer than the fourth generation. It also is sometimes used as a euphemism for the practice of inbreeding, a practice that is generally frowned upon by horse breeders, though used by some in an attempt to fix certain traits. Estrous cycle of the mare The estrous cycle (also spelled oestrous) controls when a mare is sexually receptive toward a stallion, and helps to physically prepare the mare for conception. It generally occurs during the spring and summer months, although some mares may be sexually receptive into the late fall, and is controlled by the photoperiod (length of the day), the cycle first triggered when the days begin to lengthen. The estrous cycle lasts about 19–22 days, with the average being 21 days. As the days shorten, the mare returns to a period when she is not sexually receptive, known as anestrus. Anestrus – occurring in the majority of, but not all, mares – prevents the mare from conceiving in the winter months, as that would result in her foaling during the harshest part of the year, a time when it would be most difficult for the foal to survive. This cycle contains 2 phases: Estrus, or Follicular, phase: 5–7 days in length, when the mare is sexually receptive to a stallion. Estrogen is secreted by the follicle. Ovulation occurs in the final 24–48 hours of estrus. Diestrus, or Luteal, phase: 14–15 days in length, the mare is not sexually receptive to the stallion. The corpus luteum secretes progesterone. Depending on breed, on average, 16% of mares have double ovulations, allowing them to twin, though this does not affect the length of time of estrus or diestrus. Effects on the reproductive system during the estrous cycle Changes in hormone levels can have great effects on the physical characteristics of the reproductive organs of the mare, thereby preparing, or preventing, her from conceiving. Uterus: increased levels of estrogen during estrus cause edema within the uterus, making it feel heavier, and the uterus loses its tone. This edema decreases following ovulation, and the muscular tone increases. High levels of progesterone do not cause edema within the uterus. The uterus becomes flaccid during anestrus. Cervix: the cervix starts to relax right before estrus occurs, with maximal relaxation around the time of ovulation. The secretions of the cervix increase. High progesterone levels (during diestrus) cause the cervix to close and become toned. Vagina: the portion of the vagina near the cervix becomes engorged with blood right before estrus. The vagina becomes relaxed and secretions increase. Vulva: relaxes right before estrus begins. Becomes dry, and closes more tightly, during diestrus. 
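Because the text gives fairly regular averages for the cycle (about 21 days overall, roughly 5–7 days of estrus with ovulation in the final 24–48 hours, then 14–15 days of diestrus), a simple date calculation can sketch when the next receptive period and the likely ovulation window fall. The averages below are the ones quoted above; individual mares vary, so this is an illustration rather than a management tool.

```python
# Rough scheduling sketch using the average cycle lengths given in the text.
# Individual mares vary; these averages are only illustrative.
from datetime import date, timedelta

CYCLE_DAYS = 21          # average estrous cycle length
ESTRUS_DAYS = 6          # estrus lasts about 5-7 days
OVULATION_WINDOW = 2     # ovulation in the final 24-48 hours of estrus

def next_cycle(estrus_start: date):
    """Estimate this cycle's likely ovulation window and the next expected estrus."""
    ovulation_start = estrus_start + timedelta(days=ESTRUS_DAYS - OVULATION_WINDOW)
    ovulation_end = estrus_start + timedelta(days=ESTRUS_DAYS)
    next_estrus = estrus_start + timedelta(days=CYCLE_DAYS)
    return ovulation_start, ovulation_end, next_estrus

ov_start, ov_end, nxt = next_cycle(date(2024, 5, 1))
print(f"Likely ovulation: {ov_start} to {ov_end}; next estrus expected around {nxt}")
```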
Hormones involved in the estrous cycle, during foaling, and after birth The cycle is controlled by several hormones which regulate the estrous cycle, the mare's behavior, and the reproductive system of the mare. The cycle begins when the increased day length causes the pineal gland to reduce the levels of melatonin, thereby allowing the hypothalamus to secrete GnRH. GnRH (Gonadotropin releasing hormone): secreted by the hypothalamus, causes the pituitary to release two gonadotrophins: LH and FSH. LH (Luteinizing hormone): levels are highest 2 days following ovulation, then slowly decrease over 4–5 days, dipping to their lowest levels 5–16 days after ovulation. Stimulates maturation of the follicle, which then in turn secretes estrogen. Unlike most mammals, the mare does not have an increase of LH right before ovulation. FSH (Follicle-stimulating hormone): secreted by the pituitary, causes the ovarian follicle to develop. Levels of FSH rise slightly at the end of estrus, but have their highest peak about 10 days before the next ovulation. FSH is inhibited by inhibin (see below), at the same time LH and estrogen levels rise, which prevents immature follicles from continuing their growth. Mares may however have multiple FSH waves during a single estrous cycle, and diestrus follicles resulting from a diestrus FSH wave are not uncommon, particularly in the height of the natural breeding season. Estrogen: secreted by the developing follicle, it causes the pituitary gland to secrete more LH (therefore, these 2 hormones are in a positive feedback loop). Additionally, it causes behavioral changes in the mare, making her more receptive toward the stallion, and causes physical changes in the cervix, uterus, and vagina to prepare the mare for conception (see above). Estrogen peaks 1–2 days before ovulation, and decreases within 2 days following ovulation. Inhibin: secreted by the developed follicle right before ovulation, "turns off" FSH, which is no longer needed now that the follicle is larger. Progesterone: prevents conception and decreases sexual receptivity of the mare to the stallion. Progesterone is therefore lowest during the estrus phase, and increases during diestrus. It decreases 12–15 days after ovulation, when the corpus luteum begins to decrease in size. Prostaglandin: secreted by the endometrium 13–15 days following ovulation, causes luteolysis and prevents the corpus luteum from secreting progesterone. eCG – equine chorionic gonadotropin – also called PMSG (pregnant mare serum gonadotropin): chorionic gonadotropins secreted if the mare conceives. First secreted by the endometrial cups around the 36th day of gestation, peaking around day 60, and decreasing after about 120 days of gestation. Also help to stimulate the growth of the fetal gonads. Prolactin: stimulates lactation. Oxytocin: stimulates the uterus to contract. Breeding and gestation While horses in the wild mate and foal in mid to late spring, in the case of horses domestically bred for competitive purposes, especially horse racing, it is desirable that they be born as close to January 1 in the northern hemisphere or August 1 in the southern hemisphere as possible, so as to be at an advantage in size and maturity when competing against other horses in the same age group. When an early foal is desired, barn managers will put the mare "under lights" by keeping the barn lights on in the winter to simulate a longer day, thus bringing the mare into estrus sooner than she would in nature.
Mares signal estrus and ovulation by urination in the presence of a stallion, raising the tail and revealing the vulva. A stallion, approaching with a high head, will usually nicker, nip and nudge the mare, as well as sniff her urine to determine her readiness for mating. Once fertilized, the oocyte (egg) remains in the oviduct for approximately 5.5 more days, and then descends into the uterus. The initial single cell combination is already dividing and by the time of entry into the uterus, the egg might have already reached the blastocyst stage. The gestation period lasts for about eleven months, or about 340 days (normal average range 320–370 days). During the early days of pregnancy, the conceptus is mobile, moving about in the uterus until about day 16 when "fixation" occurs. Shortly after fixation, the embryo proper (so called up to about 35 days) will become visible on trans-rectal ultrasound (about day 21) and a heartbeat should be visible by about day 23. After the formation of the endometrial cups and early placentation is initiated (35–40 days of gestation) the terminology changes, and the embryo is referred to as a fetus. True implantation – invasion into the endometrium of any sort – does not occur until about day 35 of pregnancy with the formation of the endometrial cups, and true placentation (formation of the placenta) is not initiated until about day 40-45 and not completed until about 140 days of pregnancy. The fetus's sex can be determined by day 70 of the gestation using ultrasound. Halfway through gestation the fetus is the size of between a rabbit and a beagle. The most dramatic fetal development occurs in the last 3 months of pregnancy when 60% of fetal growth occurs. Colts are carried on average about 4 days longer than fillies. Care of the pregnant mare Domestic mares receive specific care and nutrition to ensure that they and their foals are healthy. Mares are given vaccinations against diseases such as the Rhinopneumonitis (EHV-1) virus (which can cause miscarriage) as well as vaccines for other conditions that may occur in a given region of the world. Pre-foaling vaccines are recommended 4–6 weeks prior to foaling to maximize the immunoglobulin content of the colostrum in the first milk. Mares are dewormed a few weeks prior to foaling, as the mare is the primary source of parasites for the foal. Mares can be used for riding or driving during most of their pregnancy. Exercise is healthy, though should be moderated when a mare is heavily in foal. Exercise in excessively high temperatures has been suggested as being detrimental to pregnancy maintenance during the embryonic period; however ambient temperatures encountered during the research were in the region of 100 degrees F and the same results may not be encountered in regions with lower ambient temperatures. During the first several months of pregnancy, the nutritional requirements do not increase significantly since the rate of growth of the fetus is very slow. However, during this time, the mare may be provided supplemental vitamins and minerals, particularly if forage quality is questionable. During the last 3–4 months of gestation, rapid growth of the fetus increases the mare's nutritional requirements. Energy requirements during these last few months, and during the first few months of lactation are similar to those of a horse in full training. Trace minerals such as copper are extremely important, particularly during the tenth month of pregnancy, for proper skeletal formation. 
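Since gestation averages about 340 days within a normal range of roughly 320–370 days, an expected foaling window can be projected from the breeding date, as in the brief sketch below; the figures are those given in the text, and the function name is only illustrative.

```python
# Projecting an expected foaling window from the breeding date, using the
# average (340 days) and normal range (320-370 days) given in the text.
from datetime import date, timedelta

def foaling_window(breeding_date: date):
    earliest = breeding_date + timedelta(days=320)
    expected = breeding_date + timedelta(days=340)
    latest = breeding_date + timedelta(days=370)
    return earliest, expected, latest

early, due, late = foaling_window(date(2024, 6, 15))
print(f"Expected foaling around {due} (normal range {early} to {late})")
```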
Many feeds designed for pregnant and lactating mares provide the careful balance required of increased protein, increased calories through extra fat as well as vitamins and minerals. Overfeeding the pregnant mare, particularly during early gestation, should be avoided, as excess weight may contribute to difficulties foaling or fetal/foal related problems. Foaling Mares due to foal are usually separated from other horses, both for the benefit of the mare and the safety of the soon-to-be-delivered foal. In addition, separation allows the mare to be monitored more closely by humans for any problems that may occur while giving birth. In the northern hemisphere, a special foaling stall that is large and clutter free is frequently used, particularly by major breeding farms. Originally, this was due in part to a need for protection from the harsh winter climate present when mares foal early in the year, but even in moderate climates, such as Florida, foaling stalls are still common because they allow closer monitoring of mares. Smaller breeders often use a small pen with a large shed for foaling, or they may remove a wall between two box stalls in a small barn to make a large stall. In the milder climates seen in much of the southern hemisphere, most mares foal outside, often in a paddock built specifically for foaling, especially on the larger stud farms. Many stud farms worldwide employ technology to alert human managers when the mare is about to foal, including webcams, closed-circuit television, or assorted types of devices that alert a handler via a remote alarm when a mare lies down in a position to foal. On the other hand, some breeders, particularly those in remote areas or with extremely large numbers of horses, may allow mares to foal out in a field amongst a herd, but may also see higher rates of foal and mare mortality in doing so. Most mares foal at night or early in the morning, and prefer to give birth alone when possible. Labor is rapid, often no more than 30 minutes, and from the time the feet of the foal appear to full delivery is often only about 15 to 20 minutes. Once the foal is born, the mare will lick the newborn foal to clean it and help blood circulation. In a very short time, the foal will attempt to stand and get milk from its mother. A foal should stand and nurse within the first hour of life. To create a bond with her foal, the mare licks and nuzzles the foal, enabling her to distinguish the foal from others. Some mares are aggressive when protecting their foals, and may attack other horses or unfamiliar humans that come near their newborns.
"manifestation of sexual passion for one of the opposite sex; normal sexuality". In LGBT slang, the term breeder has been used as a denigrating phrase to deride heterosexuals. Hyponyms of heterosexual include heteroflexible. The word can be informally shortened to "hetero". The term straight originated as a mid-20th century gay slang term for heterosexuals, ultimately coming from the phrase "to go straight" (as in "straight and narrow"), or stop engaging in homosexual sex. One of the first uses of the word in this way was in 1941 by author G. W. Henry. Henry's book concerned conversations with homosexual males and used this term in connection with people who are identified as ex-gays. It is now simply a colloquial term for "heterosexual", having changed in primary meaning over time. Some object to usage of the term straight because it implies that non-heteros are crooked. Demographics In their 2016 literature review, Bailey et al. stated that they "expect that in all cultures the vast majority of individuals are sexually predisposed exclusively to the other sex (i.e., heterosexual)" and that there is no persuasive evidence that the demographics of sexual orientation have varied much across time or place. Heterosexual activity between only one male and one female is by far the most common type of sociosexual activity. According to several major studies, 89% to 98% of people have had only heterosexual contact within their lifetime; but this percentage falls to 79–84% when either or both same-sex attraction and behavior are reported. A 1992 study reported that 93.9% of males in Britain have only had heterosexual experience, while in France the number was reported at 95.9%. According to a 2008 poll, 85% of Britons have only opposite-sex sexual contact while 94% of Britons identify themselves as heterosexual. Similarly, a survey by the UK Office for National Statistics (ONS) in 2010 found that 95% of Britons identified as heterosexual, 1.5% of Britons identified themselves as homosexual or bisexual, and the last 3.5% gave more vague answers such as "don't know", "other", or did not respond to the question. In the United States, according to a Williams Institute report in April 2011, 96% or approximately 250 million of the adult population are heterosexual. An October 2012 Gallup poll provided unprecedented demographic information about those who identify as heterosexual, arriving at the conclusion that 96.6%, with a margin of error of ±1%, of all U.S. adults identify as heterosexual. The Gallup results show: In a 2015 YouGov survey of 1,000 adults of the United States, 89% of the sample identified as heterosexual, 4% as homosexual (2% as homosexual male and 2% as homosexual female) and 4% as bisexual (of either sex). Bailey et al., in their 2016 review, stated that in recent Western surveys, about 93% of men and 87% of women identify as completely heterosexual, and about 4% of men and 10% of women as mostly heterosexual. Academic study Biological and environmental No simple and singular determinant for sexual orientation has been conclusively demonstrated, but scientists believe that a combination of genetic, hormonal, and environmental factors determine sexual orientation. They favor biological theories for explaining the causes of sexual orientation, as there is considerably more evidence supporting nonsocial, biological causes than social ones, especially for males. 
Factors related to the development of a heterosexual orientation include genes, prenatal hormones, and brain structure, and their interaction with the environment. Prenatal hormones The neurobiology of the masculinization of the brain is fairly well understood. Estradiol and testosterone, which is catalyzed by the enzyme 5α-reductase into dihydrotestosterone, act upon androgen receptors in the brain to masculinize it. If there are few androgen receptors (people with androgen insensitivity syndrome) or too much androgen (females with congenital adrenal hyperplasia), there can be physical and psychological effects. It has been suggested that both male and female heterosexuality are the results of this process. In these studies heterosexuality in females is linked to a lower amount of masculinization than is found in lesbian females, though when dealing with male heterosexuality there are results supporting both higher and lower degrees of masculinization than homosexual males. Animals and reproduction Sexual reproduction in the animal world is facilitated through opposite-sex sexual activity, although there are also animals that reproduce asexually, including protozoa and lower invertebrates. Reproductive sex does not require a heterosexual orientation, since sexual orientation typically refers to a long-term enduring pattern of sexual and emotional attraction leading often to long-term social bonding, while reproduction requires as little as a single act of copulation to fertilize the ovum by sperm. Sexual fluidity Often, sexual orientation and sexual orientation identity are not distinguished, which can impact accurately assessing sexual identity and whether or not sexual orientation is able to change; sexual orientation identity can change throughout an individual's life, and may or may not align with biological sex, sexual behavior or actual sexual orientation. Sexual orientation is stable and unlikely to change for the vast majority of people, but some research indicates that some people may experience change in their sexual orientation, and this is more likely for women than for men. The American Psychological Association distinguishes between sexual orientation (an innate attraction) and sexual orientation identity (which may change at any point in a person's life). A 2012 study found that 2% of a sample of 2,560 adult participants reported a change of sexual orientation identity after a 10-year period. For men, a change occurred in 0.78% of those who had identified as heterosexual, 9.52% of homosexuals, and 47%
Wan Chai, Hong Kong Island in Hong Kong. The tower is the first circular skyscraper in Hong Kong. It is named after Hong Kong–listed property firm Hopewell Holdings Limited, which constructed the building. Hopewell Holdings Limited's headquarters are in the building, and its chief executive officer, Gordon Wu, has his office on the top floor. Description Construction started in 1977 and was completed in 1980. Upon completion, Hopewell Centre surpassed Jardine House as Hong Kong's tallest building. It was also the second-tallest building in Asia at the time. It kept its title in Hong Kong until 1989, when the Bank of China Tower was completed. The building is now the 20th-tallest building in Hong Kong. The building has a circular floor plan. Although the front entrance is on the 'ground floor', visitors are taken by a set of escalators to the 3rd-floor lift lobby. Hopewell Centre stands on the slope of a hill so steep that the building has its back entrance on the 17th floor, towards Kennedy Road. There is a circular private swimming pool on the roof of the building, built for feng shui reasons. A revolving restaurant located on the 62nd floor, called "Revolving 66", overlooks other tall buildings below and the harbour. It was originally called Revolving 62, but soon changed its name because locals kept calling it Revolving 66. It completes a 360-degree rotation each hour. Passengers take either the office lifts (faster) or the scenic lifts (with a view) to the 56/F, where they transfer to smaller lifts up to the 62/F.
Nantucket except the towns of Bourne, Falmouth, Sandwich and a portion of Barnstable. The town is patrolled by the Second (Yarmouth) Barracks of Troop D of the Massachusetts State Police. After the results of the 2010 census, Massachusetts decreased from 10 to 9 congressional districts because of its relatively slow population growth. These new boundaries place Harwich in the 9th congressional district, as the 10th no longer exists. Harwich is currently represented by William R. Keating. The state's senior member of the United States Senate is Elizabeth Warren, elected in 2012. The junior senator is Ed Markey, elected in 2013. Harwich is governed by the open town meeting form of government, led by a town administrator and a board of selectmen.

Public and health services There are three libraries in the town. The municipal library, the Brooks Free Library in Harwich Center, is the largest and is a member of the Cape Libraries Automated Materials Sharing (CLAMS) library network. There are two smaller non-municipal libraries – the Chase Library on Route 28 in West Harwich at the Dennis town line, and the Harwich Port Library on Lower Bank Street in Harwich Port. Harwich is the site of the Long Pond Medical Center, which serves the southeastern Cape region. Harwich has police and fire departments, with one combined fire and police headquarters station and Station 2 in East Harwich. There are post offices in Harwich Port, South Harwich, West Harwich, and East Harwich.

Education Harwich's schools are part of the Monomoy Regional School District. Harwich Elementary School serves students from pre-school through fourth grade; Monomoy Regional Middle School, which serves both Harwich and the adjoining town of Chatham, covers grades 5–7; and Monomoy Regional High School serves grades 8–12 for both towns. Monomoy's teams are known as the Sharks. Harwich is known for its boys' basketball, girls' basketball, girls' field hockey, softball and baseball teams. The Lighthouse Charter School recently moved into the former Harwich Cinema building. Harwich is the site of Cape Cod Regional Technical High School, a grades 9–12 high school which serves most of Cape Cod. The town is also home to Holy Trinity PreSchool, a Catholic pre-school which serves pre-kindergarten students in West Harwich.

Transportation Roadways Two of Massachusetts' major routes, U.S. Route 6 and Massachusetts Route 28, cross the town. The town has the southern termini of Routes 39 and 124, and a portion of Route 137 passes through the town. Route 39 leads east through East Harwich to Orleans. Route 28 passes through West Harwich and Harwich Port, connecting the towns of Dennis and Chatham. Route 124 leads from Harwich Center to Brewster, and Route 137 cuts through East Harwich, leading from Chatham to Brewster. Cape Cod Rail Trail A portion of the Cape Cod Rail Trail, as well as several other bicycle routes, are in town. There is no rail service in town, but the Cape Cod Rail Trail rotary is located in North Harwich near Main Street. Air travel Other than the occasional seaplane landing on the pond, the nearest airport is in neighboring Chatham; the nearest regional service is at Barnstable Municipal Airport; and the nearest national and international air service is at Logan International Airport in Boston. CCRTA bus connections In recent years parts of Cape Cod have introduced bus service, especially during the summer, to help cut down on traffic. The Flex runs Harwich Port – West Harwich – Dennis Port – South Dennis – East Dennis – South Yarmouth – West Yarmouth – Hyannis; Route H2O runs Hyannis – Orleans via South Dennis, West Dennis, Dennis Port, Harwich Port, Chatham and Orleans.

Notable people Ruby Braff (1927–2003), jazz trumpeter and cornetist, was a resident of Harwich in his later life. A. Elmer Crowell (1862–1952) was a master decoy carver from East Harwich. Crowell specialized in shorebirds, waterfowl, and miniatures. Crowell's decoys are consistently regarded as among the finest and most desirable decoys ever made; two of his decoys have repeatedly set world records for sales, and currently his preening pintail drake and Canada goose decoys share the world record at $1.13 million. Seth Doane, an award-winning television journalist, was raised in Harwich and is a graduate of Harwich High School. Shawn Fanning, creator and owner of the MP3 music downloading application Napster, graduated from Harwich High School. John Kendrick (1740–1794) was a maritime fur trader and one of the first Americans to visit the Pacific Northwest, the Hawaiian Islands, and China. Thomas Nickerson was a survivor of the ill-fated whaleship Essex, which inspired Melville's novel Moby-Dick. Tip O'Neill (1912–1994), politician, owned a vacation home near Bank Street Beach and is buried in Mount
ship, Helicopter, Dock LKA: Amphibious Cargo Ship (out of commission) LPA: Amphibious Transport LPD: Amphibious transport dock, also known as Landing ship, Personnel, Dock LPH: Landing ship, Personnel, Helicopter LPR: High speed transport LSD: Landing Ship, Dock LSH: Landing Ship, Heavy LSIL: Landing Ship, Infantry (Large) (formerly LCIL) LSL: Landing Ship, Logistics LSM: Landing Ship, Medium LSM(R): Landing Ship, Medium (Rocket) LSSL: Landing Ship, Support (Large) (formerly LCSL) LST: Landing Ship, Tank LST(H): Landing Ship, Tank (Hospital) LSV: Landing Ship, Vehicle Landing Craft LCA: Landing Craft, Assault LCAC: Landing Craft Air Cushion LCFF: (Flotilla Flagship) LCH: Landing Craft, Heavy LCI: Landing Craft, Infantry, World War II-era classification further modified by (G) – Gunboat (L) – Large (M) – Mortar (R) – Rocket LCL: Landing Craft, Logistics (UK) LCM: Landing Craft, Mechanized LCP: Landing Craft, Personnel LCP(L): Landing Craft, Personnel, Large LCP(R): Landing Craft, Personnel, Ramped LCPA: Landing Craft, Personnel, Air-Cushioned LCS(L): Landing Craft, Support (Large) changed to LSSL in 1949 LCT: Landing Craft, Tank (World War II era) LCU: Landing Craft, Utility LCVP: Landing Craft, Vehicle and Personnel LSH: Landing Ship Heavy (Royal Australian Navy) Expeditionary support Operated by Military Sealift Command, have ship prefix "USNS", hull code begins with "T-". ESD: Expeditionary Transfer Dock ESB: Expeditionary Mobile Base (a variant of ESD, formerly AFSB) EPF: Expeditionary fast transport MLP: Mobile landing platform (changed to ESD) JHSV: Joint high-speed vessel (changed to EPF) HST: High-speed transport (similar to JHSV, not to be confused with WWII-era High-speed transport (APD)) HSV: High-speed vessel Combat logistics type Ships which have the capability to provide underway replenishment to fleet units. AC: Collier (retired) AE: Ammunition ship AF: Stores ship (retired) AFS: Combat stores ship AKE: Advanced dry cargo ship AKS: General stores ship AO: Fleet Oiler AOE: Fast combat support ship AOR: Replenishment oiler AW: Distilling ship (retired) Mine warfare type Mine warfare ships are those ships whose primary function is mine warfare on the high seas. ADG: Degaussing ship AM: Minesweeper AMb: Harbor minesweeper AMc: Coastal minesweeper AMCU: Underwater mine locater AMS: Motor minesweeper CM: Cruiser (i.e., large) minelayer CMc: Coastal minelayer DM: High-speed minelayer (converted destroyer) DMS: High-speed minesweeper (converted-destroyer) MCM: Mine countermeasures ship MCS: Mine countermeasures support ship MH(C)(I)(O)(S): Minehunter, (coastal) (inshore) (ocean) (hunter and sweeper, general) MLC: Coastal minelayer MSC: Minesweeper, coastal MSF: Minesweeper, steel hulled MSO: Minesweeper, ocean PCS: Submarine chasers (wooden) fitted for minesweeping YDG: District degaussing vessel Coastal defense type Coastal defense ships are those whose primary function is coastal patrol and interdiction. FS: Corvette PB: Patrol boat PBR: Patrol boat, river PC: Patrol, coastal PCE: Patrol craft, escort PCF: Patrol craft, fast, (swift boat) PCS: Patrol craft, sweeper (modified-motor minesweepers meant for anti-submarine warfare) PF: Frigate, in a role similar to World War II Commonwealth corvette PG: Patrol gunboat PGM: Motor gunboat (To PG, 1967) PR: Patrol, river SP: Section patrol Mobile logistics type Mobile logistics ships have the capability to provide direct material support to other deployed units operating far from home ports. 
AD: Destroyer tender AGP: Patrol craft tender AR (AR, ARB, ARC, ARG, ARH, ARL, ARV): repair ship AS: Submarine tender AV: Seaplane tender Auxiliary type An auxiliary ship is designed to operate in any number of roles supporting combatant ships and other naval operations. AN: Net laying ship ARL: Auxiliary repair light—light craft or landing craft repair ship (World War II-era, out of commission) ATF: Fleet ocean tug AGHS: Patrol combatant support ship—ocean or inshore Airships Although technically an aircraft, pre-World War II rigid airships (e.g., zeppelins) were treated like commissioned surface warships and submarines, flew the U.S. ensign from their stern and carried a United States Ship (USS) designation. Non-rigid airships (e.g., blimps) continued to fly the U.S. ensign from their stern but were always considered to be primarily aircraft. ZMC: Airship metal clad ZNN-G: G-class blimp ZNN-J: J-class blimp ZNN-L: L-class blimp ZNP-K: K-class blimp ZNP-M: M-class blimp ZNP-N: N-class blimp ZPG-3W: surveillance patrol blimp ZR: Rigid airship ZRS: Rigid airship scout Support ships Support ships are not designed to participate in combat and are generally not armed. For ships with civilian crews (owned by and/or operated for Military Sealift Command and the Maritime Administration), the prefix T- is placed at the front of the hull classification. Support type Support ships are designed to operate in the open ocean in a variety of sea states to provide general support to either combatant forces or shore-based establishments. They include smaller auxiliaries which, by the nature of their duties, leave inshore waters. AB: Auxiliary Crane Ship (1920-41) AC: Collier (retired) ACS: Auxiliary Crane Ship AG: Miscellaneous Auxiliary AGDE: Testing Ocean Escort AGDS: Deep Submergence Support Ship AGER (i): Miscellaneous Auxiliary, Electronic Reconnaissance AGER (ii): Environmental Research Ship AGF: Miscellaneous Command Ship AGFF: Testing Frigate AGL: Auxiliary vessel, lighthouse tender AGM: Missile Range Instrumentation Ship AGOR: Oceanographic Research Ship AGOS: Ocean Surveillance Ship AGP: Motor Torpedo Boat Tender AGR: Radar picket ship AGS: Surveying Ship AGSE: Submarine and Special Warfare Support AGSS: Auxiliary Research Submarine AGTR: Technical research ship AH: Hospital ship AK: Cargo Ship AKR: Vehicle Cargo Ship AKS: General Stores Issue Ship AKV: Cargo Ship and Aircraft Ferry AO: Oiler AOE: Fast Combat Support Ship AOR: Replenishment oiler (retired) AOG: Gasoline Tanker AOT: Transport Oiler AP: Transport ARC: Cable Repair Ship (see also Cable layer) ARG: Internal Combustion Engine repair ship APB: Self-propelled Barracks Ship APL: Barracks Craft ARB: Battle Damage Repair Ship ARL: Small Repair Ship ARS: Salvage Ship AS: Submarine tender ASR: Submarine Rescue Ship AT: Ocean-Going Tug ATA: Auxiliary Ocean Tug ATF: Fleet Ocean Tug ATLS: Drone Launch Ship ATS: Salvage and Rescue Ship AVB(i): Aviation Logistics Support Ship AVB(ii): Advance Aviation Base Ship AVS: Aviation Stores Issue Ship AVT(i): Auxiliary Aircraft Transport AVT(ii): Auxiliary Aircraft Landing Training Ship EPCER: Experimental – Patrol Craft Escort – Rescue ID or Id. No.: Civilian ship taken into service for auxiliary duties, used indiscriminately for large ocean-going ships of all kinds and coastal and yard craft (World War I; retired 1920) PCER: Patrol Craft Escort – Rescue SBX: Sea-based X-band Radar – a mobile active electronically scanned array early-warning radar station. 
Service type craft Service craft are navy-subordinated craft (including non-self-propelled) designed to provide general support to either combatant forces or shore-based establishments. The suffix "N" refers to non-self-propelled variants. AB: Crane Ship AFDB: Large Auxiliary Floating Dry Dock AFD/AFDL: Small Auxiliary Floating Dry Dock AFDM: Medium Auxiliary Floating Dry Dock APB: Self-Propelled Barracks ship APL: Barracks Craft ARD: Auxiliary Repair Dry Dock ARDM: Medium Auxiliary Repair Dry Dock ATA: Auxiliary Ocean Tug DSRV: Deep Submergence Rescue Vehicle DSV: Deep Submergence Vehicle JUB/JB : Jack Up Barge NR: Submersible Research Vehicle YC: Open Lighter YCF: Car Float YCV: Aircraft Transportation Lighter YD: Floating Crane YDT: Diving Tender YF: Covered Lighter YFB: Ferry Boat or Launch YFD: Yard Floating Dry Dock YFN: Covered Lighter (non-self propelled) YFNB: Large Covered Lighter (non-self propelled) YFND: Dry Dock Companion Craft (non-self propelled) YFNX: Lighter (Special purpose) (non-self propelled) YFP: Floating Power Barge YFR: Refrigerated Cover Lighter YFRN: Refrigerated Covered Lighter (non-self propelled) YFRT: Range Tender USNS Range Recoverer (T-AG-161) YFU: Harbor Utility Craft YG: Garbage Lighter YGN: Garbage Lighter (non-self propelled) YH: Ambulance boat/small medical support vessel YLC: Salvage Lift Craft YM: Dredge YMN: Dredge (non-self propelled) YNG: Gate Craft YN: Yard Net Tender YNT: Net Tender YO: Fuel Oil Barge YOG: Gasoline Barge YOGN: Gasoline Barge (non-self propelled) YON: Fuel Oil Barge (non-self propelled) YOS: Oil Storage Barge YP: Patrol Craft, Training YPD: Floating Pile Driver YR: Floating Workshop YRB: Repair and Berthing Barge YRBM: Repair, Berthing and Messing Barge YRDH: Floating Dry Dock Workshop (Hull) YRDM: Floating Dry Dock Workshop (Machine) YRR: Radiological Repair Barge nuclear ships and submarines service YRST: Salvage Craft Tender YSD: Seaplane Wrecking Derrick - Yard Seaplane Derrick YSR: Sludge Removal Barge YT: Harbor Tug (craft later assigned YTB, YTL, or YTM classifications) YTB: Large Harbor tug YTL: Small Harbor Tug YTM: Medium Harbor Tug YTT: Torpedo trials craft YW: Water Barge YWN: Water Barge (non-self propelled) ID or Id. No.: Civilian ship taken into service for auxiliary duties, used indiscriminately for large ocean-going ships of all kinds and coastal and yard craft (World War I; retired 1920) IX: Unclassified Miscellaneous Unit X: Submersible Craft "none": To honor her unique historical status, USS Constitution, formerly IX 21, was reclassified to "none", effective 1 September 1975. United States Coast Guard vessels Prior to 1965, U.S. Coast Guard cutters used the same designation as naval ships but preceded by a "W" to indicate Coast Guard commission. The U.S. Coast Guard considers any ship over 65 feet in length with a permanently assigned crew, a cutter. 
Current USCG cutter classes and types Historic USCG cutter classes and types USCG classification symbols definitions CG: all Coast Guard ships in the 1920s (retired) WAGB: Coast Guard WAGL: Auxiliary vessel, lighthouse tender (retired 1960's) WAVP: seagoing Coast Guard seaplane tenders (retired 1960s) WDE: seagoing Coast Guard destroyer escorts (retired 1960s) WHEC: Coast Guard high endurance cutters WIX: Coast Guard barque WLB: Coast Guard buoy tenders WLBB: Coast Guard seagoing buoy tenders/ice breaker WLI: Coast Guard inland buoy tenders WLIC: Coast Guard inland construction tenders WLM: Coast Guard coastal buoy tenders WLR: Coast Guard river buoy tenders WMEC: Coast Guard medium endurance cutters WMSL: Coast Guard maritime security cutter, large (referred to as national security cutters) WPB: Coast Guard patrol boats WPC: Coast Guard patrol craft—later reclassed under WHEC, symbol reused for Coast Guard patrol cutter (referred to as fast response cutters) WPG: seagoing Coast Guard gunboats (retired 1960s) WTGB: Coast Guard tug boat (140' icebreakers) WYTL: Small harbor tug USCG classification symbols for small craft and boats MLB: Motor Life Boat (52', 47', and 44' variants) UTB: Utility Boat DPB: Deployable Pursuit Boat ANB: Aids to Navigation Boats TPSB: Transportable Port Security Boat RHIB: Rigid Hull Inflatable Boats Temporary designations United States Navy Designations (Temporary) are a form of U.S. Navy ship designation, intended for temporary identification use. Such designations usually occur during periods of sudden mobilization, such as that which occurred prior to, and during, World War II or the Korean War, when it was determined that a sudden temporary need arose for a ship for which there was no official Navy designation. During World War II, for example, a number of commercial vessels were requisitioned, or acquired, by the U.S. Navy to meet the sudden requirements of war. A yacht acquired by the U.S. Navy during the start of World War II might seem desirable to the Navy whose use for the vessel might not be fully developed or explored at the time of acquisition. On the other hand, a U.S. Navy vessel, such as the yacht in the example above, already in commission or service, might be desired, or found useful, for another need or purpose for which there is no official designation. IX: Unclassified Miscellaneous Auxiliary Ship, for example, yacht Chanco acquired by the U.S. Navy on 1 October 1940. It was classified as a minesweeper , but instead, mainly used as a patrol craft along the New England coast. When another assignment came, and it could not be determined how to classify the vessel, it was redesignated IX-175 on 10 July 1944. IXSS: Unclassified Miscellaneous Submarines, such as the , the and the . YAG: Miscellaneous Auxiliary Service Craft, such as the , and which, curiously, was earlier known as . Numerous other U.S. Navy vessels were launched with a temporary, or nominal, designation, such as YMS or PC, since it could not be determined, at the time of construction, what they should be used for. Many of these were vessels in the 150 to 200 feet length class with powerful engines, whose function could be that of a minesweeper, patrol craft, submarine chaser, seaplane tender, tugboat, or other. Once their destiny, or capability, was found or determined, such vessels were reclassified with their actual designation. 
National Oceanic and Atmospheric Administration hull codes R: Research ships, including oceanographic and fisheries research ships S: Survey ships, including hydrographic survey ships The letter is paired with a three-digit number. The first digit of the number is determined by the ship's "power tonnage", defined as the sum of its shaft horsepower and gross international tonnage, as follows: If the power tonnage is 5,501 through 9,000, the first digit is "1". If the power tonnage is 3,501 through 5,500, the first digit is "2". If the power tonnage is 2,001 through 3,500, the first digit is "3". If the power tonnage is 1,001 through 2,000, the first digit is "4". If the power tonnage is 501 through 1,000, the first digit is "5". If the power tonnage is 500 or less and the ship is at least 65 feet (19.8 meters) long, the first digit is "6". The second and third digits are assigned to create a unique three-digit hull number (see the sketch below). See also United States Navy 1975 ship reclassification List of hull classifications Ship prefix Hull classification symbol (Canada) Pennant number for the British Commonwealth
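The power-tonnage rule above is mechanical enough to express in a few lines of code. The following is a minimal Python sketch; the function names noaa_first_digit and noaa_hull_number are illustrative assumptions of mine, not part of any official NOAA tooling, and only the numeric ranges and the letter-space-digits layout come from the description above.

    def noaa_first_digit(shaft_horsepower, gross_tonnage, length_ft):
        """First digit of a NOAA hull number, per the power-tonnage ranges quoted above."""
        power_tonnage = shaft_horsepower + gross_tonnage  # NOAA's "power tonnage"
        if 5501 <= power_tonnage <= 9000:
            return 1
        if 3501 <= power_tonnage <= 5500:
            return 2
        if 2001 <= power_tonnage <= 3500:
            return 3
        if 1001 <= power_tonnage <= 2000:
            return 4
        if 501 <= power_tonnage <= 1000:
            return 5
        if power_tonnage <= 500 and length_ft >= 65:
            return 6
        raise ValueError("outside the ranges quoted in the text")

    def noaa_hull_number(letter, first_digit, serial):
        """Combine the class letter, size digit and two assigned digits, e.g. ('S', 2, 22) -> 'S 222'."""
        return f"{letter} {first_digit}{serial:02d}"

Under this sketch, noaa_hull_number("S", noaa_first_digit(4000, 1200, 208), 22) evaluates to "S 222", matching the letter-space-number style described above; the inputs here are made-up figures, not the particulars of any real ship.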
ships received the designation "MSS" for "medium survey ship," and smaller "Category III" oceanographic survey ships were given the classification "CSS" for "coastal survey ship." A fourth designation, "ASV" for "auxiliary survey vessel," included even smaller vessels. In each case, a particular ship received a unique designation based on its classification and a unique hull number separated by a space rather than a hyphen; for example, the third Coast and Geodetic Survey ship named Pioneer was an ocean survey ship officially known as USC&GS Pioneer (OSS 31). The Coast and Geodetic Survey's system persisted after the creation of NOAA in 1970, when NOAA took control of the Survey's fleet, but NOAA later changed to its modern hull classification system. United States Fish and Wildlife Service The Fish and Wildlife Service, created in 1940 and reorganized as the United States Fish and Wildlife Service (USFWS) in 1956, adopted a hull number system for its fisheries research ships and patrol vessels. It consisted of "FWS" followed by a unique identifying number. In 1970, NOAA took control of the seagoing ships of the USFWS's Bureau of Commercial Fisheries, and as part of the NOAA fleet they eventually were renumbered under the NOAA hull number system. The modern hull classification system United States Navy The U.S. Navy instituted its modern hull classification system on 17 July 1920, doing away with section patrol numbers, "identification numbers", and the other numbering systems described above. In the new system, all hull classification symbols are at least two letters; for basic types the symbol is the first letter of the type name, doubled, except for aircraft carriers. The combination of symbol and hull number identifies a modern Navy ship uniquely. A heavily modified or re-purposed ship may receive a new symbol, and either retain the hull number or receive a new one. For example, the heavy gun cruiser USS Boston (CA-69) was converted to a gun/missile cruiser, changing the hull number to CAG-1. Also, the system of symbols has changed a number of times both since it was introduced in 1907 and since the modern system was instituted in 1920, so ships' symbols sometimes change without anything being done to the physical ship. Hull numbers are assigned by classification. Duplication between, but not within, classifications is permitted. Hence, CV-1 was the aircraft carrier USS Langley and BB-1 was the battleship USS Indiana. Ship types and classifications have come and gone over the years, and many of the symbols listed below are not presently in use. The Naval Vessel Register maintains an online database of U.S. Navy ships showing which symbols are presently in use. After World War II until 1975, the U.S. Navy defined a "frigate" as a type of surface warship larger than a destroyer and smaller than a cruiser. In other navies, such a ship generally was referred to as a "flotilla leader", or "destroyer leader". Hence the U.S. Navy's use of "DL" for "frigate" prior to 1975, while "frigates" in other navies were smaller than destroyers and more like what the U.S. Navy termed a "destroyer escort", "ocean escort", or "DE". The United States Navy 1975 ship reclassification of cruisers, frigates, and ocean escorts brought U.S. Navy classifications into line with other nations' classifications, at least cosmetically in terms of terminology, and eliminated the perceived "cruiser gap" with the Soviet Navy by redesignating the former "frigates" as "cruisers". Military Sealift Command If a U.S.
Navy ship's hull classification symbol begins with "T-", it is part of the Military Sealift Command, has a primarily civilian crew, and is a United States Naval Ship (USNS) in non-commissioned service – as opposed to a commissioned United States Ship (USS) with an all-military crew. United States Coast Guard If a ship's hull classification symbol begins with "W", it is a commissioned cutter of the United States Coast Guard. Until 1965, the Coast Guard used U.S. Navy hull classification codes, prepending a "W" to their beginning. In 1965, it retired some of the less mission-appropriate Navy-based classifications and developed new ones of its own, most notably WHEC for "high endurance cutter" and WMEC for "medium endurance cutter". National Oceanic and Atmospheric Administration The National Oceanic and Atmospheric Administration (NOAA), a component of the United States Department of Commerce, includes the National Oceanic and Atmospheric Administration Commissioned Officer Corps (or "NOAA Corps"), one of the eight uniformed services of the United States, and operates a fleet of seagoing research and survey ships. The NOAA fleet also uses a hull classification symbol system, which it also calls "hull numbers," for its ships. After NOAA took over the former fleets of the U.S. Coast and Geodetic Survey and the U.S. Fish and Wildlife Service Bureau of Commercial Fisheries in 1970, it adopted a new system of ship classification. In its system, the NOAA fleet is divided into two broad categories, research ships and survey ships. The research ships, which include oceanographic and fisheries research vessels, are given hull numbers beginning with "R", while the survey ships, generally hydrographic survey vessels, receive hull numbers beginning with "S". The letter is followed by a three-digit number; the first digit indicates the NOAA "class" (i.e., size) of the vessel, which NOAA assigns based on the ship's gross tonnage and horsepower, while the next two digits combine with the first digit to create a unique three-digit identifying number for the ship. Generally, each NOAA hull number is written with a space between the letter and the three-digit number. Unlike in the U.S. Navy system, once an older NOAA ship leaves service, a newer one can be given the same hull number; for example, "S 222" was assigned to NOAAS Mount Mitchell (S 222), then to NOAAS Thomas Jefferson (S 222), which entered NOAA service after Mount Mitchell was stricken. United States Navy hull classification codes The U.S. Navy's system of alpha-numeric ship designators, and its associated hull numbers, have been for several decades a unique method of categorizing ships of all types: combatants, auxiliaries and district craft. Though considerably changed in detail and expanded over the years, this system remains essentially the same as when formally implemented in 1920. It is a very useful tool for organizing and keeping track of naval vessels, and also provides the basis for the identification numbers painted on the bows (and frequently the sterns) of most U.S. Navy ships. The ship designator and hull number system's roots extend back to the late 1880s, when ship type serial numbers were assigned to most of the new-construction warships of the emerging "Steel Navy". During the course of the next thirty years, these same numbers were combined with filing codes used by the Navy's clerks to create an informal version of the system that was put in place in 1920.
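As a rough illustration of the scheme just described (symbol plus hull number as the unique key, with duplication allowed across but not within classifications, and the "T-"/"W" prefixes marking Military Sealift Command and Coast Guard vessels), here is a minimal Python sketch. The registry and the functions assign and service are my own illustrative assumptions under those stated rules, not an official data model.

    registry = {}

    def assign(symbol, number, name):
        """Record a hull number; numbers may repeat across symbols but not within one."""
        key = (symbol, number)
        if key in registry:
            raise ValueError(f"{symbol}-{number} is already assigned to {registry[key]}")
        registry[key] = name

    def service(designation):
        """Rough reading of the service prefixes discussed above."""
        if designation.startswith("T-"):
            return "Military Sealift Command ship (USNS, civilian-crewed, non-commissioned)"
        if designation.startswith("W"):
            return "U.S. Coast Guard cutter"
        return "commissioned U.S. Navy ship (USS)"

    assign("CV", 1, "Langley")   # aircraft carrier No. 1
    assign("BB", 1, "Indiana")   # battleship No. 1: same number, different classification, so allowed
    # assign("BB", 1, "Texas") would raise, since duplication within a classification is not permitted

Under this sketch, service("T-AKE") reports a Military Sealift Command ship and service("WMSL") a Coast Guard cutter, mirroring the prefix conventions above.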
Limited usage of ship numbers goes back even earlier, most notably to the "Jeffersonian Gunboats" of the early 1800s and the "Tinclad" river gunboats of the Civil War Mississippi Squadron. It is important to understand that hull number-letter prefixes are not acronyms, and should not be carelessly treated as abbreviations of ship type classifications. Thus, "DD" does not stand for anything more than "Destroyer". "SS" simply means "Submarine". And "FF" is the post-1975 type code for "Frigate." The hull classification codes for ships in active duty in the United States Navy are governed under Secretary of the Navy Instruction 5030.8B (SECNAVINST 5030.8B). Warships Warships are designed to participate in combat operations. The origin of the two-letter code derives from the need to distinguish various cruiser subtypes. Aircraft carrier type Aircraft carriers are ships designed primarily for the purpose of conducting combat operations by aircraft which engage in attacks against airborne, surface, sub-surface and shore targets. Contrary to popular belief, the "CV" hull classification symbol does not stand for "carrier vessel". "CV" derives from the cruiser designation, with one popular theory that the v comes from French voler, "to fly", but this has never been definitively proven. Aircraft carriers are designated in two sequences: the first sequence runs from CV-1 USS Langley to the very latest ships, and the second sequence, "CVE" for escort carriers, ran from CVE-1 Long Island to CVE-127 Okinawa before being discontinued. AV: Heavier-than-air aircraft tender (retired) AZ: Lighter-than-air aircraft tender (retired) (1920-23) AVG: General-purpose aircraft tender (repurposed escort carrier) (1941–42) AVD: Seaplane tender destroyer (retired) AVP: Seaplane tender, Small (retired) AVT (i) Auxiliary aircraft transport (retired) AVT (ii) Auxiliary training carrier (retired) ACV: Auxiliary aircraft carrier (escort carrier, replaced by CVE) (1942) CV: Fleet aircraft carrier (1921–1975), multi-purpose aircraft carrier (1975–present) CVA: Aircraft carrier, attack (category merged into CV, 30 June 1975) CV(N): Aircraft carrier, night (deck equipped with lighting and pilots trained for nighttime flights) (1944) (retired) CVAN: Aircraft carrier, attack, nuclear-powered (category merged into CVN, 30 June 1975) CVB: Aircraft carrier, large (original USS Midway class, category merged into CVA, 1952) CVE: Escort aircraft carrier (retired) (1943–retirement of type) CVHA: Aircraft carrier, helicopter assault (retired in favor of several LH-series amphibious assault ship hull codes) CVHE: Aircraft carrier, helicopter, escort (retired) CVL: Light aircraft carrier or aircraft carrier, small (retired) CVN: Aircraft carrier, nuclear-powered CVS: Antisubmarine aircraft carrier (retired) CVT: Aircraft carrier, training (changed to AVT (auxiliary)) CVU: Aircraft carrier, utility (retired) CVG: Aircraft carrier, guided missile (retired) CF: Flight-deck cruiser (1930s, retired unused) CVV: Aircraft carrier, vari-purpose, medium (retired unused) Surface combatant type Surface combatants are ships which are designed primarily to engage enemy forces on the high seas. The primary surface combatants are battleships, cruisers and destroyers. Battleships are very heavily armed and armored; cruisers moderately so; destroyers and smaller warships, less so. Before 1920, ships were called "<type> no. X", with the type fully pronounced.
The types were commonly abbreviated in ship lists to "B-X", "C-X", "D-X" et cetera; for example, before 1920, the battleship Minnesota would have been called "USS Minnesota, Battleship number 22" orally and "USS Minnesota, B-22" in writing. After 1920, the ship's name would have been both written and pronounced "USS Minnesota (BB-22)". In generally decreasing size, the types are: ACR: Armored Cruiser (pre-1920) AFSB: Afloat forward staging base (also AFSB(I) for "interim", changed to ESB for Expeditionary Mobile Base) B: Battleship (pre-1920) BB: Battleship BBG: Battleship, guided missile or arsenal ship (theoretical only, never assigned) BM: Monitor (1920–retirement) C: Cruiser (pre-1920 protected cruisers and peace cruisers) CA: (first series) Cruiser, armored (retired, comprised all surviving pre-1920 armored and protected cruisers) CA: (second series) Heavy cruiser, category later renamed gun cruiser (retired) CAG: Cruiser, heavy, guided missile (retired) CB: Large cruiser (retired) CBC: Large command cruiser (retired, never used operationally) CC: Battlecruiser (retired, never used operationally) CC: (second usage) command ship (retired) CLC: Command cruiser CLD: Cruiser-destroyer, light (never used operationally) CG: Cruiser, guided missile CGN: Cruiser, guided missile, nuclear-powered CL: Cruiser, light (retired) CLAA: Cruiser, light, anti-aircraft (retired) CLG: Cruiser, light, guided missile (retired) CLGN: Cruiser, light, guided missile, nuclear-powered (retired) CLK: Cruiser, hunter–killer (abolished 1951) CM: Cruiser–minelayer (retired) CS: Scout cruiser (retired) CSGN: Cruiser, strike, guided missile, nuclear-powered (retired, never used operationally) D: Destroyer (pre-1920) DD: Destroyer DDC: Corvette (briefly proposed in the mid-1950s) DDE: Escort destroyer, a destroyer (DD) converted for antisubmarine warfare – category abolished 1962. (not to be confused with destroyer escort DE) DDG: Destroyer, guided missile DDK: Hunter–killer destroyer (category merged into DDE, 4 March 1950) DDR: Destroyer, radar picket (retired) DE: Destroyer escort (World War II, later became Ocean escort) DE: Ocean escort (abolished 30 June 1975) DEG: Guided missile ocean escort (abolished 30 June 1975) DER: Radar picket destroyer escort (abolished 30 June 1975) There were two distinct breeds of DE, the World War II destroyer escorts (some of which were converted to DERs) and the postwar DE/DEG classes, which were known as ocean escorts despite carrying the same type symbol as the World War II destroyer escorts. All DEs, DEGs, and DERs were reclassified as FFs, FFGs, or FFRs, 30 June 1975. DL: Destroyer leader (later frigate) (retired) DLG: Frigate, guided missile (abolished 30 June 1975) DLGN: Frigate, guided missile, nuclear-propulsion (abolished 30 June 1975) The DL category was established in 1951 with the abolition of the CLK category. CLK 1 became DL 1 and DD 927–930 became DL 2–5. By the mid-1950s the term destroyer leader had been dropped in favor of frigate. Most DLGs and DLGNs were reclassified as CGs and CGNs, 30 June 1975. However, DLG 6–15
police officers and civil and judiciary authorities who either violate or fail to enforce the law. France and the United States played a synergistic role in the international team, led by Eleanor Roosevelt, which crafted the Universal Declaration of Human Rights. The French judge and Nobel Peace Laureate René Cassin produced the first draft and argued against arbitrary detentions. René Cassin and the French team subsequently championed the habeas corpus provisions enshrined in the European Convention for the Protection of Human Rights and Fundamental Freedoms. Germany Germany has constitutional guarantees against improper detention and these have been implemented in statutory law in a manner that can be considered as equivalent to writs of habeas corpus. Article 104, paragraph 1 of the Basic Law for the Federal Republic of Germany provides that deprivations of liberty may be imposed only on the basis of a specific enabling statute that also must include procedural rules. Article 104, paragraph 2 requires that any arrested individual be brought before a judge by the end of the day following the day of the arrest. For those detained as criminal suspects, article 104, paragraph 3 specifically requires that the judge must grant a hearing to the suspect in order to rule on the detention. Restrictions on the power of the authorities to arrest and detain individuals also emanate from article 2 paragraph 2 of the Basic Law which guarantees liberty and requires a statutory authorization for any deprivation of liberty. In addition, several other articles of the Basic Law have a bearing on the issue. The most important of these are article 19, which generally requires a statutory basis for any infringements of the fundamental rights guaranteed by the Basic Law while also guaranteeing judicial review; article 20, paragraph 3, which guarantees the rule of law; and article 3 which guarantees equality. In particular, a constitutional obligation to grant remedies for improper detention is required by article 19, paragraph 4 of the Basic Law, which provides as follows: "Should any person's right be violated by public authority, he may have recourse to the courts. If no other jurisdiction has been established, recourse shall be to the ordinary courts." India The Indian judiciary, in a catena of cases, has effectively resorted to the writ of habeas corpus to secure release of a person from illegal detention. For example, in October 2009, the Karnataka High Court heard a habeas corpus petition filed by the parents of a girl who married a Muslim boy from Kannur district and was allegedly confined in a madrasa in Malapuram town. Usually, in most other jurisdictions, the writ is directed at police authorities. The extension to non-state authorities has its grounds in two cases: the 1898 Queen's Bench case of Ex Parte Daisy Hopkins, wherein the Proctor of Cambridge University did detain and arrest Hopkins without his jurisdiction, and Hopkins was released, and that of Somerset v Stewart, in which an African slave whose master had moved to London was freed by action of the writ. The Indian judiciary has dispensed with the traditional doctrine of locus standi, so that if a detained person is not in a position to file a petition, it can be moved on his behalf by any other person. The scope of habeas relief has expanded in recent times by actions of the Indian judiciary. In 1976, the habeas writ was used in the Rajan case, a student victim of torture in local police custody during the nationwide Emergency in India. 
On 12 March 2014, Subrata Roy's counsel approached the Chief Justice, moving a habeas corpus petition. A habeas corpus petition was also filed by the Panthers Party to protest the imprisonment of Anna Hazare, a social activist. Ireland In the Republic of Ireland, the writ of habeas corpus is available at common law and under the Habeas Corpus Acts of 1782 and 1816. A remedy equivalent to habeas corpus is also guaranteed by Article 40 of the 1937 constitution. The article guarantees that "no citizen shall be deprived of his personal liberty save in accordance with law" and outlines a specific procedure for the High Court to enquire into the lawfulness of any person's detention. It does not mention the Latin term habeas corpus, but includes the English phrase "produce the body". Article 40.4.2° provides that a prisoner, or anyone acting on his behalf, may make a complaint to the High Court (or to any High Court judge) of unlawful detention. The court must then investigate the matter "forthwith" and may order that the defendant bring the prisoner before the court and give reasons for his detention. The court must immediately release the detainee unless it is satisfied that he is being held lawfully. The remedy is available not only to prisoners of the state, but also to persons unlawfully detained by any private party. However, the constitution provides that the procedure is not binding on the Defence Forces during a state of war or armed rebellion. The writ of habeas corpus continued as part of Irish law when the state seceded from the United Kingdom in 1922. A remedy equivalent to habeas corpus was also guaranteed by Article 6 of the Constitution of the Irish Free State, enacted in 1922. That article used similar wording to Article 40.4 of the current constitution, which replaced it in 1937. The relationship between Article 40 and the Habeas Corpus Acts of 1782 and 1816 is ambiguous, and Forde and Leonard write that "The extent if any to which Article 40.4 has replaced these Acts has yet to be determined". In The State (Ahern) v. Cotter (1982) Walsh J. opined that the ancient writ referred to in the Habeas Corpus Acts remains in existence in Irish law as a separate remedy from that provided for in Article 40. In 1941, the Article 40 procedure was restricted by the Second Amendment. Prior to the amendment, a prisoner had the constitutional right to apply to any High Court judge for an enquiry into her detention, and to as many High Court judges as she wished. If the prisoner successfully challenged her detention before the High Court, she was entitled to immediate, unconditional release. The Second Amendment provided that a prisoner has only the right to apply to a single judge, and, once a writ has been issued, the President of the High Court has authority to choose the judge or panel of three judges who will decide the case. If the High Court finds that the prisoner's detention is unlawful due to the unconstitutionality of a law, the judge must refer the matter to the Supreme Court, and until the Supreme Court's decision is rendered the prisoner may be released only on bail. The power of the state to detain persons prior to trial was extended by the Sixteenth Amendment in 1996. In 1965, the Supreme Court ruled in the O'Callaghan case that the constitution required that an individual charged with a crime could be refused bail only if she was likely to flee or to interfere with witnesses or evidence.
Since the Sixteenth Amendment, it has been possible for a court to take into account whether a person has committed serious crimes while on bail in the past. Italy The right to freedom from arbitrary detention is guaranteed by Article 13 of the Constitution of Italy. In effect, every arrest made by a police force must be validated by a court within 48 hours. Furthermore, a person subject to a valid detention can ask for a review of the detention by another court, called the Review Court (Tribunale del Riesame, also known as the Freedom Court, Tribunale della Libertà). Macau In Macau, the relevant provision is Article 204 in the Code of Penal Processes, which became law in 1996 under Portuguese rule. Habeas corpus cases are heard before the Tribunal of Ultimate Instance. A notable example is Case 3/2008. Malaysia In Malaysia, the remedy of habeas corpus is guaranteed by the federal constitution, although not by name. Article 5(2) of the Constitution of Malaysia provides that "Where complaint is made to a High Court or any judge thereof that a person is being unlawfully detained the court shall inquire into the complaint and, unless satisfied that the detention is lawful, shall order him to be produced before the court and release him". As there are several statutes, for example, the Internal Security Act 1960, that still permit detention without trial, the procedure is usually effective in such cases only if it can be shown that there was a procedural error in the way that the detention was ordered. New Zealand In New Zealand, habeas corpus may be invoked against the government or private individuals. In 2006, a child was allegedly kidnapped by his maternal grandfather after a custody dispute. The father began habeas corpus proceedings against the mother, the grandfather, the grandmother, the great-grandmother, and another person alleged to have assisted in the kidnapping of the child. The mother did not present the child to the court and so was imprisoned for contempt of court. She was released when the grandfather came forward with the child in late January 2007. Pakistan The issuance of a writ is an exercise of the extraordinary jurisdiction of the superior courts in Pakistan. A writ of habeas corpus may be issued by any High Court of a province in Pakistan. Article 199 of the 1973 Constitution of the Islamic Republic of Pakistan specifically provides for the issuance of a writ of habeas corpus, empowering the courts to exercise this prerogative. Subject to Article 199 of the Constitution, "A High Court may, if it is satisfied that no other adequate remedy is provided by law, on the application of any person, make an order that a person in custody within the territorial jurisdiction of the Court be brought before it so that the Court may satisfy itself that he is not being held in custody without a lawful authority or in an unlawful manner". The hallmark of extraordinary constitutional jurisdiction is to keep various functionaries of State within the ambit of their authority. Once a High Court has assumed jurisdiction to adjudicate the matter before it, justiciability of the issue raised before it is beyond question. The Supreme Court of Pakistan has stated clearly that the use of the words "in an unlawful manner" implies that the court may examine, where a statute has allowed such detention, whether it was a colorable exercise of the power of authority. Thus, the court can examine the mala fides of the action taken.
Portugal In Portugal, Article 31 of the Constitution guarantees citizens against improper arrest, imprisonment or detention. There are also statutory provisions, most notably Articles 220 and 222 of the Code of Criminal Procedure, which stipulate the grounds on which a judge may order habeas corpus. The Philippines In the Bill of Rights of the Philippine constitution, habeas corpus is guaranteed in terms almost identical to those used in the U.S. Constitution. Article 3, Section 15 of the Constitution of the Philippines states that "The privilege of the writ of habeas corpus shall not be suspended except in cases of invasion or rebellion when the public safety requires it". In 1971, after the Plaza Miranda bombing, the administration of Ferdinand Marcos suspended habeas corpus in an effort to stifle the oncoming insurgency, having blamed the Filipino Communist Party for the events of August 21. Many considered this to be a prelude to martial law. After widespread protests, however, the Marcos administration decided to reintroduce the writ. The writ was again suspended when Marcos declared martial law in 1972. In December 2009, habeas corpus was suspended in Maguindanao as President Gloria Macapagal Arroyo placed the province under martial law. This occurred in response to the Maguindanao massacre. In 2016, President Rodrigo Duterte said he was planning on suspending habeas corpus. At 10 pm on 23 May 2017, Philippine time, President Rodrigo Duterte declared martial law on the whole island of Mindanao, including Sulu and Tawi-tawi, for a period of 60 days due to a series of attacks mounted by the Maute group, an ISIS-linked terrorist organization. The declaration suspended the writ. Scotland In the 18th century, the Parliament of Scotland passed a law with the same effect as habeas corpus. This is now known as the Criminal Procedure Act 1701 c.6. It was originally called "the Act for preventing wrongful imprisonment and against undue delays in trials". It is still in force, although certain parts have been repealed. Spain The present Constitution of Spain states that "A habeas corpus procedure shall be provided for by law to ensure the immediate handing over to the judicial authorities of any person illegally arrested". The statute which regulates the procedure is the Law of Habeas Corpus of 24 May 1984, which provides that a person imprisoned may, on her or his own or through a third person, allege that she or he is imprisoned unlawfully and request to appear before a judge. The request must specify the grounds on which the detention is considered to be unlawful, which can be, for example, that the custodian holding the prisoner does not have the legal authority, that the prisoner's constitutional rights have been violated, or that he has been subjected to mistreatment. The judge may then request additional information if needed, and may issue a habeas corpus order, at which point the custodian has 24 hours to bring the prisoner before the judge. Historically, many of the territories of Spain had remedies equivalent to habeas corpus, such as the privilege of manifestación in the Crown of Aragon or the right of the Tree in Biscay. United States The United States inherited habeas corpus from the English common law. In England, the writ was issued in the name of the monarch.
When the original thirteen American colonies declared independence, and became a republic based on popular sovereignty, any person, in the name of the people, acquired authority to initiate such writs. The U.S. Constitution specifically includes the habeas procedure in the Suspension Clause (Clause 2), located in Article One, Section 9. This states that "The privilege of the writ of habeas corpus shall not be suspended, unless when in cases of rebellion or invasion the public safety may require it". The writ of habeas corpus ad subjiciendum is a civil, not criminal, ex parte proceeding in which a court inquires as to the legitimacy of a prisoner's custody. Typically, habeas corpus proceedings are to determine whether the court that imposed sentence on the defendant had jurisdiction and authority
century the writ has also been used in cases of unlawful detention by private individuals, most famously in Somersett's Case (1772), where the black slave, Somersett, was ordered to be freed. During that case, these famous words are said to have been uttered: "... that the air of England was too pure for slavery" (although it was the lawyers in argument who expressly used this phrase – referenced from a much earlier argument heard in The Star Chamber – and not Lord Mansfield himself). During the Seven Years' War and later conflicts, the writ was used on behalf of soldiers and sailors pressed into military and naval service. The Habeas Corpus Act 1816 introduced some changes and expanded the territoriality of the legislation. The privilege of habeas corpus has been suspended or restricted several times during English history, most recently during the 18th and 19th centuries. Although internment without trial has been authorised by statute since that time, for example during the two World Wars and the Troubles in Northern Ireland, the habeas corpus procedure has in modern times always technically remained available to such internees. However, as habeas corpus is only a procedural device to examine the lawfulness of a prisoner's detention, so long as the detention is in accordance with an Act of Parliament, the petition for habeas corpus is unsuccessful. Since the passage of the Human Rights Act 1998, the courts have been able to declare an Act of Parliament to be incompatible with the European Convention on Human Rights, but such a declaration of incompatibility has no legal effect unless and until it is acted upon by the government. The wording of the writ of habeas corpus implies that the prisoner is brought to the court for the legality of the imprisonment to be examined. However, rather than issuing the writ immediately and waiting for the return of the writ by the custodian, modern practice in England is for the original application to be followed by a hearing with both parties present to decide the legality of the detention, without any writ being issued. If the detention is held to be unlawful, the prisoner can usually then be released or bailed by order of the court without having to be produced before it. With the development of modern public law, applications for habeas corpus have been to some extent discouraged, in favour of applications for judicial review. The writ, however, maintains its vigour, and was held by the UK Supreme Court in 2012 to be available in respect of a prisoner captured by British forces in Afghanistan, albeit that the Secretary of State made a valid return to the writ justifying the detention of the claimant. Precedents in medieval Catalonia and Biscay Although the first recorded historical references come from Anglo-Saxon law in the 12th century, and one of the first documents referring to this right is a law of the English Parliament (1679), in Catalonia there are references from as early as 1428 to an appeal of people's manifestation collected in the laws of the Crown of Aragon, as well as references to this right in the Law of the Lordship of Biscay (1527). Other jurisdictions Australia The writ of habeas corpus as a procedural remedy is part of Australia's English law inheritance. In 2005, the Australian parliament passed the Australian Anti-Terrorism Act 2005. Some legal experts questioned the constitutionality of the act, due in part to limitations it placed on habeas corpus.
Canada Habeas corpus rights are part of the British legal tradition inherited by Canada. The rights exist in the common law but have been enshrined in section 10(c) of the Charter of Rights and Freedoms, which states that "[e]veryone has the right on arrest or detention ... to have the validity of the detention determined by way of habeas corpus and to be released if the detention is not lawful". The test for habeas corpus in Canada was recently laid down by the Supreme Court of Canada in Mission Institution v Khela, as follows: To be successful, an application for habeas corpus must satisfy the following criteria. First, the applicant [i.e., the person seeking habeas corpus review] must establish that he or she has been deprived of liberty. Once a deprivation of liberty is proven, the applicant must raise a legitimate ground upon which to question its legality. If the applicant has raised such a ground, the onus shifts to the respondent authorities [i.e., the person or institution detaining the applicant] to show that the deprivation of liberty was lawful. Suspension of the writ in Canadian history occurred famously during the October Crisis, during which the War Measures Act was invoked by the Governor General of Canada on the constitutional advice of Prime Minister Pierre Trudeau, who had received a request from the Quebec Cabinet. The Act was also used to justify German, Slavic, and Ukrainian Canadian internment during the First World War, and the internment of German-Canadians, Italian-Canadians and Japanese-Canadians during the Second World War. The writ was suspended for several years following the Battle of Fort Erie (1866) during the Fenian Rising, though the suspension was only ever applied to suspects in the Thomas D'Arcy McGee assassination. The writ is available where there is no other adequate remedy. However, a superior court always has the discretion to grant the writ even in the face of an alternative remedy (see May v Ferndale Institution). Under the Criminal Code the writ is largely unavailable if a statutory right of appeal exists, whether or not this right has been exercised. France Guarantees against arbitrary detention, a fundamental human right set out in the 1789 Declaration of the Rights of Man and of the Citizen drafted by Lafayette in cooperation with Thomas Jefferson, are enshrined in the French Constitution and regulated by the Penal Code. The safeguards are equivalent to the habeas corpus provisions found in Germany, the United States and several Commonwealth countries. The French system of accountability prescribes severe penalties for ministers, police officers and civil and judiciary authorities who either violate or fail to enforce the law. France and the United States played a synergistic role in the international team, led by Eleanor Roosevelt, which crafted the Universal Declaration of Human Rights. The French judge and Nobel Peace Laureate René Cassin produced the first draft and argued against arbitrary detentions. René Cassin and the French team subsequently championed the habeas corpus provisions enshrined in the European Convention for the Protection of Human Rights and Fundamental Freedoms. Germany Germany has constitutional guarantees against improper detention and these have been implemented in statutory law in a manner that can be considered as equivalent to writs of habeas corpus.
Article 104, paragraph 1 of the Basic Law for the Federal Republic of Germany provides that deprivations of liberty may be imposed only on the basis of a specific enabling statute that also must include procedural rules. Article 104, paragraph 2 requires that any arrested individual be brought before a judge by the end of the day following the day of the arrest. For those detained as criminal suspects, article 104, paragraph 3 specifically requires that the judge must grant a hearing to the suspect in order to rule on the detention. Restrictions on the power of the authorities to arrest and detain individuals also emanate from article 2, paragraph 2 of the Basic Law, which guarantees liberty and requires a statutory authorization for any deprivation of liberty. In addition, several other articles of the Basic Law have a bearing on the issue. The most important of these are article 19, which generally requires a statutory basis for any infringements of the fundamental rights guaranteed by the Basic Law while also guaranteeing judicial review; article 20, paragraph 3, which guarantees the rule of law; and article 3, which guarantees equality. In particular, a constitutional obligation to grant remedies for improper detention is required by article 19, paragraph 4 of the Basic Law, which provides as follows: "Should any person's right be violated by public authority, he may have recourse to the courts. If no other jurisdiction has been established, recourse shall be to the ordinary courts." India The Indian judiciary, in a catena of cases, has effectively resorted to the writ of habeas corpus to secure the release of a person from illegal detention. For example, in October 2009, the Karnataka High Court heard a habeas corpus petition filed by the parents of a girl who married a Muslim boy from Kannur district and was allegedly confined in a madrasa in Malapuram town. Usually, in most other jurisdictions, the writ is directed at police authorities. The extension to non-state authorities has its grounds in two cases: the 1898 Queen's Bench case of Ex Parte Daisy Hopkins, wherein the Proctor of Cambridge University detained and arrested Hopkins outside his jurisdiction and Hopkins was released, and that of Somerset v Stewart, in which an African slave whose master had moved to London was freed by action of the writ. The Indian judiciary has dispensed with the traditional doctrine of locus standi, so that if a detained person is not in a position to file a petition, it can be moved on his behalf by any other person. The scope of habeas relief has expanded in recent times by actions of the Indian judiciary. In 1976, the habeas writ was used in the Rajan case, concerning a student who was tortured in local police custody during the nationwide Emergency in India. On 12 March 2014, Subrata Roy's counsel approached the Chief Justice with a habeas corpus petition. A habeas corpus petition was also filed by the Panthers Party to protest the imprisonment of the social activist Anna Hazare. Ireland In the Republic of Ireland, the writ of habeas corpus is available at common law and under the Habeas Corpus Acts of 1782 and 1816. A remedy equivalent to habeas corpus is also guaranteed by Article 40 of the 1937 constitution. The article guarantees that "no citizen shall be deprived of his personal liberty save in accordance with law" and outlines a specific procedure for the High Court to enquire into the lawfulness of any person's detention. It does not mention the Latin term habeas corpus, but includes the English phrase "produce the body".
Article 40.4.2° provides that a prisoner, or anyone acting on his behalf, may make a complaint to the High Court (or to any High Court judge) of unlawful detention. The court must then investigate the matter "forthwith" and may order that the defendant bring the prisoner before the court and give reasons for his detention. The court must immediately release the detainee unless it is satisfied that he is being held lawfully. The remedy is available not only to prisoners of the state, but also to persons unlawfully detained by any private party. However, the constitution provides that the procedure is not binding on the Defence Forces during a state of war or armed rebellion. The writ of habeas corpus continued as part of Irish law when the state seceded from the United Kingdom in 1922. A remedy equivalent to habeas corpus was also guaranteed by Article 6 of the Constitution of the Irish Free State, enacted in 1922. That article used similar wording to Article 40.4 of the current constitution, which replaced it in 1937. The relationship between Article 40 and the Habeas Corpus Acts of 1782 and 1816 is ambiguous, and Forde and Leonard write that "The extent if any to which Article 40.4 has replaced these Acts has yet to be determined". In The State (Ahern) v. Cotter (1982), Walsh J. opined that the ancient writ referred to in the Habeas Corpus Acts remains in existence in Irish law as a separate remedy from that provided for in Article 40. In 1941, the Article 40 procedure was restricted by the Second Amendment. Prior to the amendment, a prisoner had the constitutional right to apply to any High Court judge for an enquiry into her detention, and to as many High
largely independent of the prevailing winds. The caravel used the lateen sail, the prevailing rig in Christian Mediterranean navigation since late antiquity. With this ship, Portuguese mariners freely explored uncharted waters around the Atlantic, from rivers and shallow waters to transocean voyages. In 1419, Henry's father appointed him governor of the province of the Algarve. Resources and income On 25 May 1420, Henry gained appointment as the Grand Master of the Military Order of Christ, the Portuguese successor to the Knights Templar, which had its headquarters at Tomar in central Portugal. Henry held this position for the remainder of his life, and the Order was an important source of funds for Henry's ambitious plans, especially his persistent attempts to conquer the Canary Islands, which the Portuguese had claimed to have discovered before the year 1346. In 1425, his second brother the Infante Peter, Duke of Coimbra, made a diplomatic tour of Europe, with an additional charge from Henry to seek out geographic material. Peter returned with a current world map from Venice. In 1431, Henry donated houses for the Estudo Geral to teach all the sciences—grammar, logic, rhetoric, arithmetic, music, and astronomy—in what would later become the University of Lisbon. For other subjects like medicine or philosophy, he ordered that each room should be decorated according to the subject taught. Henry also had other resources. When John I died in 1433, Henry's eldest brother Edward of Portugal became king. He granted Henry all profits from trading within the areas he discovered as well as the sole right to authorize expeditions beyond Cape Bojador. Henry also held a monopoly on tuna fishing in the Algarve. When Edward died eight years later, Henry supported his brother Peter, Duke of Coimbra for the regency during the minority of Edward's son Afonso V, and in return received a confirmation of this levy. Henry functioned as a primary organizer of the disastrous expedition to Tangier in 1437 against Çala Ben Çala, which ended in Henry's younger brother Ferdinand being given as hostage to guarantee Portuguese promises in the peace agreement. The Portuguese Cortes refused to return Ceuta as ransom for Ferdinand, who remained in captivity until his death six years later. Prince Regent Peter supported Portuguese maritime expansion in the Atlantic Ocean and Africa, and Henry promoted the colonization of the Azores during Peter's regency (1439–1448). For most of the latter part of his life, Henry concentrated on his maritime activities and court politics. Vila do Infante and Portuguese exploration According to João de Barros, in Algarve, Prince Henry the Navigator repopulated a village that he called Terçanabal (from terça nabal or tercena nabal). This village was situated in a strategic position for his maritime enterprises and was later called Vila do Infante ("Estate or Town of the Prince"). It is traditionally suggested that Henry gathered at his villa on the Sagres peninsula a school of navigators and map-makers. However modern historians hold this to be a misconception. He did employ some cartographers to chart the coast of Mauritania after the voyages he sent there, but there was no center of navigation science or observatory in the modern sense of the word, nor was there an organized navigational center. 
Referring to Sagres, sixteenth-century Portuguese mathematician and cosmographer Pedro Nunes remarked, "from it our sailors went out well taught and provided with instruments and rules which all map makers and navigators should know." The view that Henry's court rapidly grew into the technological base for exploration, with a naval arsenal and an observatory, etc., although repeated in popular culture, has never been established. Henry did possess geographical curiosity, and employed cartographers. Jehuda Cresques, a noted cartographer, has been said to have accepted an invitation to come to Portugal to make maps for the infante. Prestage makes the argument that the presence of the latter at the Prince's court "probably accounts for the legend of the School of Sagres, which is now discredited." The first contacts with the African slave market were made by expeditions to ransom Portuguese subjects enslaved by pirate attacks on Portuguese ships or villages. Henry's explorations Henry sponsored voyages, collecting a 20% tax (o quinto) on profits, the usual practice in the Iberian states at the time. The nearby port of Lagos provided a convenient home port for these expeditions. The voyages were made in very small ships, mostly the caravel, a light and maneuverable vessel equipped by lateen sails. Most of the voyages sent out by Henry consisted of one or two ships that navigated by following the coast, stopping at night to tie up along some shore. During Prince Henry's time and after, the Portuguese navigators discovered and perfected the North Atlantic volta do mar (the 'turn of the sea' or 'return from the sea'): the dependable pattern of trade winds blowing largely from the east near the equator and the returning westerlies in the mid-Atlantic. This was a major step in the history of navigation, when an understanding of oceanic wind patterns was crucial to Atlantic navigation, from Africa and the open ocean to Europe, and enabled the main route between the New World and Europe in the North Atlantic in future voyages of discovery. Although the lateen sail allowed sailing upwind to some extent, it was worth even major extensions of course to have a faster and calmer following wind for most of a journey. Portuguese mariners who sailed south and southwest towards the Canary Islands and West Africa would afterwards sail far to the northwest—that is, away from continental Portugal, and seemingly in the wrong direction—before turning northeast near the Azores islands and finally east to Europe in order to have largely following winds for their full journey. Christopher Columbus used this on his transatlantic voyages. Madeira The first explorations followed not long after the capture of Ceuta in 1415. Henry was interested in locating the source of the caravans that brought gold to the city. During the reign of his father, John I, João Gonçalves Zarco and Tristão Vaz Teixeira were sent to explore along the African coast. Zarco, a knight in service to Prince Henry, had commanded the caravels guarding the coast of Algarve from the incursions of the Moors. He had
make up a complete organism" Induced pluripotent stem cells (iPSCs) Creating induced pluripotent stem cells ("iPSCs") is a long and inefficient process. Pluripotency refers to a stem cell that has the potential to differentiate into any of the three germ layers: endoderm (interior stomach lining, gastrointestinal tract, the lungs), mesoderm (muscle, bone, blood, urogenital), or ectoderm (epidermal tissues and nervous tissue). A specific set of genes, often called "reprogramming factors", are introduced into a specific adult cell type. These factors send signals in the mature cell that cause the cell to become a pluripotent stem cell. This process is heavily studied, and new techniques for improving induction are discovered frequently. Depending on the method used, reprogramming of adult cells into iPSCs for implantation could have severe limitations in humans. If a virus is used as a reprogramming factor for the cell, cancer-causing genes called oncogenes may be activated. These cells would appear as rapidly dividing cancer cells that do not respond to the body's natural cell signaling process. However, in 2008 scientists discovered a technique that could remove these oncogenes after pluripotency induction, thereby increasing the potential use of iPSCs in humans. Comparing SCNT to reprogramming Both the processes of SCNT and iPSCs have benefits and deficiencies. Historically, reprogramming methods were better studied than SCNT-derived embryonic stem cells (ESCs). However, more recent studies have put more emphasis on developing new procedures for SCNT-ESCs. The major advantage of SCNT over iPSCs at this time is the speed with which cells can be produced. iPSC derivation takes several months, while SCNT takes much less time, which could be important for medical applications. New studies are working to improve the speed and efficiency of iPSC generation through the discovery of new reprogramming factors in oocytes. Another advantage SCNT could have over iPSCs is its potential to treat mitochondrial disease, as it utilizes a donor oocyte. No other advantages are known at this time in using stem cells derived from one method over stem cells derived from the other. Uses, actual and potential Work on cloning techniques has advanced our basic understanding of developmental biology in humans. Observing human pluripotent stem cells grown in culture provides great insight into human embryo development, which otherwise cannot be seen. Scientists are now able to better define steps of early human development. Studying signal transduction along with genetic manipulation within the early human embryo has the potential to provide answers to many developmental diseases and defects. Many human-specific signaling pathways have been discovered by studying human embryonic stem cells. Studying developmental pathways in humans has given developmental biologists more evidence toward the hypothesis that developmental pathways are conserved throughout species. iPSCs and cells created by SCNT are useful for research into the causes of disease, and as model systems used in drug discovery. Cells produced with SCNT or iPSCs could eventually be used in stem cell therapy, or to create organs to be used in transplantation, known as regenerative medicine. Stem cell therapy is the use of stem cells to treat or prevent a disease or condition. Bone marrow transplantation is a widely used form of stem cell therapy.
No other forms of stem cell therapy are in clinical use at this time. Research is underway to potentially use stem cell therapy to treat heart disease, diabetes, and spinal cord injuries. Regenerative medicine is not in clinical practice, but is heavily researched for its potential uses. This type of medicine would allow for autologous transplantation, thus removing the risk of organ transplant rejection by the recipient. For instance, a person with liver disease could potentially have a new liver grown using their own genetic material and transplanted to replace the damaged liver. In current research, human pluripotent stem cells have shown promise as a reliable source for generating human neurons, demonstrating the potential for regenerative medicine to address brain and neural injuries. Ethical implications In bioethics, the ethics of cloning refers to a variety
of ethical positions regarding the practice and possibilities of cloning, especially human cloning. While many of these views are religious in origin, for instance relating to Christian views of procreation and personhood, the questions raised by cloning engage secular perspectives as well. Advocates support development of therapeutic cloning in order to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, to avoid the need for immunosuppressive drugs, and to stave off the effects of aging. Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology. Opposition to therapeutic cloning mainly centers around the status of embryonic stem cells, which has connections with the abortion debate.
Some opponents of reproductive cloning have concerns that the technology is not yet developed enough to be safe – the position, for example, of the American Association for the Advancement of Science – while others emphasize that reproductive cloning could be prone to abuse (leading to the generation of humans whose organs and tissues would be harvested), and have concerns about how cloned individuals could integrate with families and with society at large. Members of religious groups are divided. Some Christian theologians perceive the technology as usurping God's role in creation and, to the extent embryos are used, destroying a human life; others see no inconsistency between Christian tenets and cloning's positive and potentially life-saving benefits. Current law In 2018 it was reported that about 70 countries had banned human cloning. In popular culture Science fiction has used cloning, most commonly and specifically human cloning, because it raises controversial questions of identity. Humorous fiction, such as Multiplicity (1996) and the Maxwell Smart feature The Nude Bomb (1980), has featured human cloning. A recurring sub-theme of cloning fiction is the use of clones as a supply of organs for transplantation. Robin Cook's 1997 novel Chromosome 6 and Michael Bay's The Island are examples of this; Chromosome 6 also features genetic manipulation and xenotransplantation. The series Orphan Black follows human clones' stories and experiences as
During this period, the Sanskrit language developed and the Vedas were written, epic hymns that told tales of gods and wars. This was the basis for the Vedic religion, which would eventually grow more sophisticated and develop into Hinduism. China and Vietnam were also centres of metalworking. Dating back to the Neolithic Age, the first bronze drums, called the Dong Son drums, have been uncovered in and around the Red River Delta regions of Vietnam and Southern China. These relate to the prehistoric Dong Son Culture of Vietnam. In Ban Chiang, Thailand (Southeast Asia), bronze artifacts have been discovered dating to 2100 BCE. In Nyaunggan, Burma, bronze tools have been excavated along with ceramics and stone artifacts. Dating remains broad (3500–500 BCE). Iron and Axial Age The Iron Age saw the widespread use of iron tools, weaponry, and armor throughout the major civilizations of Asia. Middle East The Achaemenid dynasty of the Persian Empire, founded by Cyrus the Great, ruled an area from Greece and Turkey to the Indus River and Central Asia during the 6th to 4th centuries BCE. Persian politics included a tolerance for other cultures, a highly centralized government, and significant infrastructure developments. Later, under Darius the Great, the territories were integrated, a bureaucracy was developed, nobility were assigned military positions, tax collection was carefully organized, and spies were used to ensure the loyalty of regional officials. The primary religion of Persia at this time was Zoroastrianism, developed by the philosopher Zoroaster. It introduced an early form of monotheism to the area. The religion banned animal sacrifice and the use of intoxicants in rituals, and introduced the concept of spiritual salvation through personal moral action, an end time, and both general and particular judgment with a heaven or hell. These concepts would heavily influence later emperors and the masses. More importantly, Zoroastrianism would be an important precursor for the Abrahamic religions such as Christianity, Islam, and Judaism. The Persian Empire was successful in establishing peace and stability throughout the Middle East and was a major influence in art, politics (affecting Hellenistic leaders), and religion. Alexander the Great conquered this dynasty in the 4th century BCE, creating the brief Hellenistic period. He was unable to establish stability and after his death, Persia broke into small, weak dynasties including the Seleucid Empire, followed by the Parthian Empire. By the end of the Classical age, Persia had been reconsolidated into the Sassanid Empire, also known as the second Persian Empire. The Roman Empire would later control parts of Western Asia. The Seleucid, Parthian and Sassanid dynasties of Persia dominated Western Asia for centuries. India The Maurya and Gupta empires are called the Golden Age of India and were marked by extensive inventions and discoveries in science, technology, art, religion, and philosophy that crystallized the elements of what is generally known as Indian culture. The religions of Hinduism and Buddhism, which began in the Indian subcontinent, were an important influence on South, East and Southeast Asia. By 600 BCE, India had been divided into 17 regional states that would occasionally feud amongst themselves. In 327 BCE, Alexander the Great came to India with a vision of conquering the whole world.
He crossed northwestern India and created the province of Bactria but could not move further because his army wanted to return to their families. Shortly afterwards, the soldier Chandragupta Maurya began to take control of the Ganges river and soon established the Maurya Empire. The Maurya Empire (Sanskrit: मौर्य राजवंश, Maurya Rājavaṃśa) was a geographically extensive and powerful empire in ancient India, ruled by the Mauryan dynasty from 321 to 185 BCE. It was one of the world's largest empires in its time, stretching to the Himalayas in the north, what is now Assam in the east, probably beyond modern Pakistan in the west, and annexing Balochistan and much of what is now Afghanistan, at its greatest extent. South of the Mauryan empire was Tamilakam, an independent country dominated by three dynasties, the Pandyas, Cholas and Cheras. The government established by Chandragupta was led by an autocratic king, who primarily relied on the military to assert his power. It also employed a bureaucracy and even sponsored a postal service. Chandragupta's grandson, Ashoka, greatly extended the empire by conquering most of modern-day India (save for the southern tip). He eventually converted to Buddhism, though, and began a peaceful life in which he promoted the religion as well as humane methods throughout India. The Maurya Empire would disintegrate soon after Ashoka's death and was conquered by the Kushan invaders from the northwest, establishing the Kushan Empire. Their conversion to Buddhism caused the religion to be associated with foreigners, and its popularity therefore declined. The Kushan Empire would fall apart by 220 CE, creating more political turmoil in India. Then in 320, the Gupta Empire (Sanskrit: गुप्त राजवंश, Gupta Rājavanśha) was established and covered much of the Indian Subcontinent. Founded by Maharaja Sri-Gupta, the dynasty was the model of a classical civilization. Gupta kings united the area primarily through negotiation with local leaders and families as well as strategic intermarriage. Their rule covered less land than the Maurya Empire, but it established greater stability. In 535, the empire ended when India was overrun by the Hunas. Classical China Zhou Dynasty The Zhou dynasty had existed in China since 1029 BCE and would continue until 258 BCE. The Zhou dynasty had been using a feudal system by giving power to local nobility and relying on their loyalty in order to control its large territory. As a result, the Chinese government at this time tended to be very decentralized and weak, and there was often little the emperor could do to resolve national issues. Nonetheless, the government was able to retain its position with the creation of the Mandate of Heaven, which could establish an emperor as divinely chosen to rule. The Zhou additionally discouraged the human sacrifice of the preceding eras and unified the Chinese language. Finally, the Zhou government encouraged settlers to move into the Yangtze River valley, thus creating the Chinese Middle Kingdom. But by 500 BCE, its political stability began to decline due to repeated nomadic incursions and internal conflict among feuding princes and families. This was lessened by the many philosophical movements, starting with the life of Confucius. His philosophical writings (called Confucianism) concerning respect for elders and for the state would later be popularly used in the Han dynasty.
Additionally, Laozi's concepts of Taoism, including yin and yang and the innate duality and balance of nature and the universe, became popular throughout this period. Nevertheless, the Zhou Dynasty eventually disintegrated as the local nobles began to gain more power and their conflict devolved into the Warring States period, from 402 to 201 BCE. Qin Dynasty One leader eventually came out on top: Qin Shi Huang (Shǐ Huángdì), who overthrew the last Zhou emperor and established the Qin dynasty. The Qin dynasty (Chinese: 秦朝; pinyin: Qín Cháo) was the first ruling dynasty of Imperial China, lasting from 221 to 207 BCE. The new Emperor abolished the feudal system and directly appointed a bureaucracy that would rely on him for power. Huang's imperial forces crushed any regional resistance, and they furthered the Chinese empire by expanding down to the South China Sea and northern Vietnam. Greater organization brought a uniform tax system, a national census, regulated road building (and cart width), standard measurements, standard coinage, and an official written and spoken language. Further reforms included new irrigation projects, the encouragement of silk manufacturing, and (most famously) the beginning of the construction of the Great Wall of China—designed to keep out the nomadic raiders who had constantly harassed the Chinese people. However, Shi Huang was infamous for his tyranny, forcing laborers to build the Wall, ordering heavy taxes, and severely punishing all who opposed him. He oppressed Confucians and promoted Legalism, the idea that people were inherently evil and that a strong, forceful government was needed to control them. Legalism was infused with realistic, logical views and rejected the pleasures of educated conversation as frivolous. All of this made Shi Huang extremely unpopular with the people. As the Qin began to weaken, various factions began to fight for control of China. Han Dynasty The Han dynasty (simplified Chinese: 汉朝; traditional Chinese: 漢朝; pinyin: Hàn Cháo; 206 BCE – 220 CE) was the second imperial dynasty of China, preceded by the Qin Dynasty and succeeded by the Three Kingdoms (220–265 CE). Spanning over four centuries, the period of the Han Dynasty is considered a golden age in Chinese history. One of the Han dynasty's greatest emperors, Emperor Wu of Han, established a peace throughout China comparable to the Pax Romana seen in the Mediterranean a hundred years later. To this day, China's majority ethnic group refers to itself as the "Han people". The Han Dynasty was established when two peasants succeeded in rising up against Shi Huang's significantly weaker successor-son. The new Han government retained the centralization and bureaucracy of the Qin, but greatly reduced the repression seen before. They expanded their territory into Korea, Vietnam, and Central Asia, creating an even larger empire than the Qin. The Han developed contacts with the Persian Empire in the Middle East and the Romans through the Silk Road, by which they were able to trade many commodities—primarily silk. Many ancient civilizations were influenced by the Silk Road, which connected China, India, the Middle East and Europe. Han emperors like Wu also promoted Confucianism as the national "religion" (although theologians debate whether it is defined as such or as a philosophy). Shrines devoted to Confucius were built and Confucian philosophy was taught to all scholars who entered the Chinese bureaucracy.
The bureaucracy was further improved with the introduction of an examination system that selected scholars of high merit. These bureaucrats were often upper-class people educated in special schools, but their power was often checked by lower-class individuals brought into the bureaucracy through their skill. The Chinese imperial bureaucracy was very effective and highly respected by all in the realm and would last over 2,000 years. The Han government was highly organized and it commanded the military, judicial law (which used a system of courts and strict laws), agricultural production, the economy, and the general lives of its people. The government also promoted intellectual philosophy, scientific research, and detailed historical records. However, despite all of this impressive stability, central power began to lose control by the turn of the Common Era. As the Han Dynasty declined, many factors continued to pummel it into submission until China was left in a state of chaos. By 100 CE, philosophical activity slowed, and corruption ran rampant in the bureaucracy. Local landlords began to take control as the scholars neglected their duties, and this resulted in heavy taxation of the peasantry. Taoists began to gain significant ground and protested the decline. They started to proclaim magical powers and promised to save China with them; the Taoist Yellow Turban Rebellion in 184 (led by rebels in yellow scarves) failed but was able to weaken the government. Invasions by the Huns, combined with diseases, killed up to half of the population and officially ended the Han dynasty by 220. The ensuing period of chaos was so terrible it lasted for three centuries, during which many weak regional rulers and dynasties failed to establish order in China. This period of chaos and attempts at order is commonly known as that of the Six Dynasties. The first part of this period included the Three Kingdoms, which began in 220 and comprised the brief and weak successor "dynasties" that followed the Han. In 265, the Jin dynasty of China was started and this soon split into two different empires in control of northwestern and southeastern China. In 420, the conquest and abdication of those two dynasties resulted in the first of the Southern and Northern Dynasties. The Northern and Southern Dynasties succeeded one another until finally, by 557, the Northern Zhou dynasty ruled the north and the Chen dynasty ruled the south. Medieval During this period, the Eastern world empires continued to expand through trade, migration and conquests of neighboring areas. In China, gunpowder was widely used as early as the 11th century, and movable type printing was in use five hundred years before Gutenberg created his press. Buddhism, Taoism, and Confucianism were the dominant philosophies of the Far East during the Middle Ages. Marco Polo was not the first Westerner to travel to the Orient and return with amazing stories of this different culture, but his accounts published in the late 13th and early 14th centuries were the first to be widely read throughout Europe. Western Asia (Middle East) The Arabian peninsula and the surrounding Middle East and Near East regions saw dramatic change during the Medieval era, caused primarily by the spread of Islam and the establishment of the Arabian Empires. In the 5th century, the Middle East was separated into small, weak states; the two most prominent were the Sassanian Empire of the Persians in what is now Iran and Iraq, and the Byzantine Empire in Anatolia (modern-day Turkey).
The Byzantines and Sassanians fought with each other continually, a reflection of the rivalry between the Roman Empire and the Persian Empire seen during the previous five hundred years. The fighting weakened both states, leaving the stage open to a new power. Meanwhile, the nomadic Bedouin tribes who dominated the Arabian desert saw a period of tribal stability, greater trade networking and a familiarity with Abrahamic religions or monotheism. While the Byzantine Roman and Sassanid Persian empires were both weakened by the Byzantine–Sasanian War of 602–628, a new power in the form of Islam grew in the Middle East under Muhammad in Medina. In a series of rapid Muslim conquests, the Rashidun army, led by the Caliphs and skilled military commanders such as Khalid ibn al-Walid, swept through most of the Middle East, taking more than half of Byzantine territory in the Arab–Byzantine wars and completely engulfing Persia in the Muslim conquest of Persia. It would be the Arab Caliphates of the Middle Ages that would first unify the entire Middle East as a distinct region and create the dominant ethnic identity that persists today. These Caliphates included the Rashidun Caliphate, Umayyad Caliphate, Abbasid Caliphate, and later the Seljuq Empire. After Muhammad introduced Islam, it jump-started Middle Eastern culture into an Islamic Golden Age, inspiring achievements in architecture, the revival of old advances in science and technology, and the formation of a distinct way of life. Muslims preserved and spread Greek advances in medicine, algebra, geometry, astronomy, anatomy, and ethics that would later find their way back to Western Europe. The dominance of the Arabs came to a sudden end in the mid-11th century with the arrival of the Seljuq Turks, migrating south from the Turkic homelands in Central Asia. They conquered Persia, Iraq (capturing Baghdad in 1055), Syria, Palestine, and the Hejaz. This was followed by a series of invasions from Christian Western Europe. The fragmentation of the Middle East allowed joint forces, mainly from England, France, and the emerging Holy Roman Empire, to enter the region. In 1099 the knights of the First Crusade captured Jerusalem and founded the Kingdom of Jerusalem, which survived until 1187, when Saladin retook the city. Smaller crusader fiefdoms survived until 1291. In the early 13th century, a new wave of invaders, the armies of the Mongol Empire, swept through the region, sacking Baghdad in the Siege of Baghdad (1258) and advancing as far south as the border of Egypt in what became known as the Mongol conquests. The Mongols eventually retreated in 1335, but the chaos that ensued throughout the empire deposed the Seljuq Turks. In 1401, the region was further plagued by the Turko-Mongol conqueror Timur and his ferocious raids. By then, another group of Turks had arisen as well, the Ottomans. Central Asia Mongol Empire The Mongol Empire conquered a large part of Asia in the 13th century, an area extending from China to Europe. Medieval Asia was the kingdom of the Khans. Never before had any person controlled as much land as Genghis Khan. He built his power by unifying separate Mongol tribes before expanding his kingdom south and west. He and his grandson, Kublai Khan, controlled lands in China, Burma, Central Asia, Russia, Iran, the Middle East, and Eastern Europe. Genghis Khan was a Khagan who tolerated nearly every religion. South Asia/Indian Subcontinent India The Indian early medieval age, 600 to 1200, is defined by regional kingdoms and cultural diversity.
When Harsha of Kannauj, who ruled much of the Indo-Gangetic Plain from 606 to 647, attempted to expand southwards, he was defeated by the Chalukya ruler of the Deccan. When his successor attempted to expand eastwards, he was defeated by the Pala king of Bengal. When the Chalukyas attempted to expand southwards, they were defeated by the Pallavas from farther south, who in turn were opposed by the Pandyas and the Cholas from still farther south. Under the rule of Raja Raja Chola, the Cholas defeated their rivals and rose to become a regional power. The Cholas expanded northward and defeated the Eastern Chalukya, Kalinga and the Pala. Under Rajendra Chola the Cholas created the first notable navy of the Indian subcontinent. The Chola navy extended the influence of the Chola empire to Southeast Asia. During this time, pastoral peoples whose land had been cleared to make way for the growing agricultural economy were accommodated within caste society, as were new non-traditional ruling classes. The Muslim conquest in the Indian subcontinent mainly took place from the 12th century onwards, though earlier Muslim conquests include the limited inroads into modern Afghanistan and Pakistan and the Umayyad campaigns in India, during the time of the Rajput kingdoms in the 8th century. Major economic and military powers, such as the Delhi Sultanate and the Bengal Sultanate, were established. The search for their wealth led to the voyages of Christopher Columbus. East Asia China China saw the rise and fall of the Sui, Tang, Song, and Yuan dynasties, and with them improvements in its bureaucracy, the spread of Buddhism, and the advent of Neo-Confucianism. It was an unsurpassed era for Chinese ceramics and painting. Medieval architectural masterpieces such as the Great South Gate in Todaiji, Japan, and the Tien-ning Temple in Peking, China, are some of the surviving constructions from this era. Sui Dynasty A new powerful dynasty began to rise in the 580s, amongst the divided factions of China. This was started when an aristocrat named Yang Jian married his daughter into the Northern Zhou dynasty. He proclaimed himself Emperor Wen of Sui and appeased the nomadic military by abandoning the Confucian scholar-gentry. Emperor Wen soon led the conquest of the southern Chen Dynasty and united China once more under the Sui dynasty. The emperor lowered taxes and constructed granaries that he used to prevent famine and control the market. Later, Wen's son would murder him for the throne and declare himself Emperor Yang of Sui. Emperor Yang revived the Confucian scholars and the bureaucracy, much to the anger of the aristocrats and nomadic military leaders. Yang became an excessive leader who overused China's resources for personal luxury and perpetuated exhaustive attempts to conquer Goguryeo. His military failures and neglect of the empire forced his own ministers to assassinate him in 618, ending the Sui Dynasty. Tang dynasty Fortunately, one of Yang's most respected advisors, Li Yuan, was able to claim the throne quickly, preventing a chaotic collapse. He proclaimed himself Emperor Gaozu, and established the Tang dynasty in 623. The Tang saw expansion of China through conquest to Tibet in the west, Vietnam in the south, and Manchuria in the north. Tang emperors also improved the education of scholars in the Chinese bureaucracy. A Ministry of Rites was established and the examination system was improved to better qualify scholars for their jobs.
In addition, Buddhism became popular in China with two different strains among the peasantry and the elite: the Pure Land and Zen strains, respectively. Greatly supporting the spread of Buddhism was Empress Wu, who additionally claimed an unofficial "Zhou dynasty" and displayed China's tolerance of a woman ruler, which was rare at the time. However, Buddhism would also experience some backlash, especially from Confucianists and Taoists. This would usually involve criticism about how it was costing the state money, since the government was unable to tax Buddhist monasteries and additionally sent many grants and gifts to them. The Tang dynasty began to decline under the rule of Emperor Xuanzong, who began to neglect the economy and military and caused unrest amongst the court officials due to the excessive influence of his concubine, Yang Guifei, and her family. This eventually sparked a revolt in 755. Although the revolt failed, subduing it required involvement with the unruly nomadic tribes outside of China and distributing more power to local leaders—leaving the government and economy in a degraded state. The Tang dynasty officially ended in 907 and various factions led by the aforementioned nomadic tribes and local leaders would fight for control of China in the Five Dynasties and Ten Kingdoms period. Liao, Song and Jin dynasties By 960, most of China proper had been reunited under the Song dynasty, although it lost territories in the north and could not defeat one of the nomadic tribes there—the Liao dynasty of the highly sinicized Khitan people. From then on, the Song would have to pay tribute to avoid invasion and thus set the precedent for other nomadic kingdoms to oppress them. The Song also saw the revival of Confucianism in the form of Neo-Confucianism. This had the effect of putting the Confucian scholars at a higher status than aristocrats or Buddhists and also intensified the reduction of women's power. The infamous practice of foot binding developed in this period as a result. Eventually the Liao dynasty in the north was overthrown by the Jin dynasty of the Manchu-related Jurchen people. The new Jin kingdom invaded northern China, leaving the Song to flee farther south and create the Southern Song dynasty in 1126. There, cultural life flourished. Yuan Dynasty By 1227, the Mongols had conquered the Western Xia kingdom northwest of China. Soon the Mongols encroached upon the Jin empire of the Jurchens. Chinese cities were soon besieged by the Mongol hordes that showed little mercy for those who resisted, and the Southern Song Chinese were quickly losing territory. In 1271 the current great khan, Kublai Khan, declared himself Emperor of China and officially established the Yuan Dynasty. By 1290, all of China was under control of the Mongols, marking the first time the Chinese were ever completely conquered by a foreign invader; the new capital was established at Khanbaliq (modern-day Beijing). Kublai Khan segregated Mongol culture from Chinese culture by discouraging interactions between the two peoples, separating living spaces and places of worship, and reserving top administrative positions for Mongols, thus preventing Confucian scholars from continuing the bureaucratic system. Nevertheless, Kublai remained fascinated with Chinese thinking, surrounding himself with Chinese Buddhist, Taoist, and Confucian advisors. Mongol women displayed a contrasting independent nature compared to the Chinese women, who continued to be suppressed. Mongol women often rode out on hunts or even to war.
Kublai's wife, Chabi, was a prime example of this: she advised her husband on several political and diplomatic matters, and she convinced him that the Chinese were to be respected and well treated in order to make them easier to rule. However, this was not enough to affect Chinese women's position, and the increasingly Neo-Confucian successors of Kublai further repressed Chinese and even Mongol women. The Black Death, which would later ravage Western Europe, had its beginnings in Asia, where it wiped out large populations in China in 1331. Korea Three Kingdoms of Korea The Three Kingdoms of Korea were Goguryeo in the north, Baekje in the southwest, and Silla in the southeast of the Korean peninsula. These three kingdoms served as a cultural bridge between China and Japan, and through them Japan was able to absorb the splendors of Chinese culture. Prince Shōtoku of Japan was taught by two teachers, one from Baekje and one from Goguryeo. When Japan invaded Silla, Goguryeo helped Silla repel the invasion. Baekje was the first of the three to reach its heyday, in the 5th century AD, with its capital at what is now Seoul. During this brief heyday, the kingdom established overseas footholds in Liaodong, China, and Kyushu, Japan. Goguryeo was the strongest kingdom of all and sometimes styled itself an empire. Its heyday came in the 6th century. King Gwanggaeto widened its territory to
of Joseon. Japan Asuka period Japan's medieval history began with the Asuka period, from around 600 to 710. The time was characterized by the Taika Reform and imperial centralization, both of which were a direct result of growing Chinese contact and influences. In 603, Prince Shōtoku of the Yamato dynasty began significant political and cultural changes. He issued the Seventeen-article constitution in 604, centralizing power towards the emperor (under the title tenno, or heavenly sovereign) and removing the power to levy taxes from provincial lords. Shōtoku was also a patron of Buddhism, and he encouraged the competitive building of temples. Nara period Shōtoku's reforms transitioned Japan to the Nara period (c. 710 to c. 794), with the moving of the Japanese capital to Nara in Honshu. This period saw the culmination of Chinese-style writing, etiquette, and architecture in Japan, along with Confucian ideals to supplement the already present Buddhism. Peasants revered both Confucian scholars and Buddhist monks. However, in the wake of the 735–737 Japanese smallpox epidemic, Buddhism gained the status of state religion and the government ordered the construction of numerous Buddhist temples, monasteries, and statues. The lavish spending, combined with the fact that many aristocrats did not pay taxes, put a heavy burden on the peasantry and caused poverty and famine. Eventually the Buddhist establishment's position grew out of control, threatening to seize imperial power and prompting Emperor Kanmu to move the capital to Heian-kyō to avoid a Buddhist takeover. This marked the beginning of the Heian period and the end of the Taika reforms. Heian period With the Heian period (from 794 to 1185) came a decline of imperial power. Chinese influence also declined, as a result of its correlation with imperial centralization and the heavenly mandate, which came to be regarded as ineffective. By 838, the Japanese court had discontinued its embassies in China; only traders and Buddhist monks continued to travel to China. Buddhism itself came to be considered more Japanese than Chinese, and remained popular in Japan. Buddhist monks and monasteries continued their attempts to gather personal power at court, along with aristocrats. One particular noble family that dominated influence in the imperial bureaucracy was the Fujiwara clan. During this time, cultural life in the imperial court flourished. There was a focus on beauty and social interaction, and writing and literature were considered refined pursuits. Noblewomen were as cultured as noblemen, dabbling in creative works and politics. A prime example of both Japanese literature and women's role in high-class culture at this time was The Tale of Genji, written by the lady-in-waiting Murasaki Shikibu. Wooden palaces and shōji sliding doors also became popular amongst the nobility. Loss of imperial power also led to the rise of provincial warrior elites. Small lords began to function independently. They administered laws, supervised public works projects, and collected revenue for themselves instead of the imperial court. Regional lords also began to build their own armies. These warriors were loyal only to their local lords and not to the emperor, although the imperial government increasingly called them in to protect the capital. The regional warrior class developed into the samurai, which created its own culture, including specialized weapons such as the katana and a form of chivalry, bushido.
The imperial government's loss of control in the second half of the Heian period allowed banditry to grow, requiring both feudal lords and Buddhist monasteries to procure warriors for protection. As imperial control over Japan declined, feudal lords also became more independent and seceded from the empire. These feudal states exploited the peasants living in them, reducing the farmers to near-serfdom. Peasants were also rigidly restricted from rising to the samurai class, being physically set apart by dress and weapon restrictions. As a result of their oppression, many peasants turned to Buddhism in hope of reward in the afterlife for upright behavior. With the increase of feudalism, families in the imperial court began to depend on alliances with regional lords. The Fujiwara clan declined from power, replaced by a rivalry between the Taira clan and the Minamoto clan. This rivalry grew into the Genpei War in the early 1180s. This war saw the use of both samurai and peasant soldiers. For the samurai, battle was ritual, and they often easily cut down the poorly trained peasantry. The Minamoto clan proved successful due to their rural alliances. Once the Taira were destroyed, the Minamoto established a military government called the shogunate (or bakufu), centered in Kamakura. Kamakura period The end of the Genpei War and the establishment of the Kamakura shogunate marked the end of the Heian period and the beginning of the Kamakura period in 1185, solidifying feudal Japan. Southeast Asia Khmers In 802, Jayavarman II consolidated his rule over neighboring peoples and declared himself chakravartin, or "universal ruler". The Khmer Empire effectively dominated all of Mainland Southeast Asia from the early 9th until the 15th century, during which time it developed a sophisticated monumental architecture of exquisite expression and compositional mastery at Angkor. Vietnam Early modern The Russian Empire began to expand into Asia from the 17th century, and would eventually take control of all of Siberia and most of Central Asia by the end of the 19th century. The Ottoman Empire controlled Anatolia, the Middle East, North Africa and the Balkans from the 16th century onwards. In the 17th century, the Manchu conquered China and established the Qing Dynasty. In the 16th century, the Mughal Empire controlled much of India and initiated a second golden age for India. China was the largest economy in the world for much of the time, followed by India, until the 18th century. Ming China By 1368, Zhu Yuanzhang had proclaimed himself the Hongwu Emperor and established the Ming dynasty of China. Immediately, the new emperor and his followers drove the Mongols and their culture out of China and beyond the Great Wall. The new emperor was somewhat suspicious of the scholars who dominated China's bureaucracy, for he had been born a peasant and was uneducated. Nevertheless, Confucian scholars were necessary to China's bureaucracy, so they were reestablished, along with reforms that improved the examination system and made it more important than ever for entering the bureaucracy. The exams became more rigorous, cracked down harshly on cheating, and those who excelled were more highly regarded. Finally, Hongwu also directed more power towards the role of emperor so as to end the corrupt influence of the bureaucrats. Society and economy The Hongwu emperor, perhaps out of sympathy for the common folk, built many irrigation systems and other public works that helped the peasant farmers.
They were also allowed to cultivate and claim unoccupied land without having to pay any taxes on it, and labor demands were lowered. However, none of this was able to stop the rising landlord class, which gained many privileges from the government and slowly gained control of the peasantry. Moneylenders foreclosed on peasant debts and bought up farmland, forcing farmers to become the landlords' tenants or to wander elsewhere for work. Also during this time, Neo-Confucianism intensified even beyond its status under the previous two dynasties (the Song and Yuan). Emphasis on the superiority of elders over youth, men over women, and teachers over students resulted in discrimination against the "inferior" classes. The fine arts grew in the Ming era, with improved techniques in brush painting that depicted scenes of court, city or country life; people such as scholars or travelers; or the beauty of mountains, lakes, or marshes. The Chinese novel fully developed in this era, with classics such as Water Margin, Journey to the West, and Jin Ping Mei. The economy grew rapidly in the Ming dynasty as well. The introduction of American crops such as maize, sweet potatoes, and peanuts allowed for cultivation of crops in infertile land and helped prevent famine. The population boom that began in the Song dynasty accelerated until China's population went from 80 or 90 million to 150 million in three centuries, culminating in 1600. This paralleled the market economy that was growing both internally and externally. Silk, tea, ceramics, and lacquerware were produced by artisans who traded them across Asia and with Europeans. Westerners began to trade (within limits assigned by the Chinese), primarily in the port towns of Macau and Canton. Although merchants benefited greatly from this, land remained the primary symbol of wealth in China, and traders' riches were often put into acquiring more land. Therefore, little of this wealth was used in private enterprises that could have allowed China to develop the market economy that often accompanied the most successful Western countries. Foreign interests In the interest of national glory, the Chinese began sending impressive junk ships across the South China Sea and the Indian Ocean. From 1403 to 1433, the Ming court, beginning with the Yongle Emperor, commissioned expeditions led by the admiral Zheng He, a Muslim eunuch from China. Chinese junks carrying hundreds of soldiers, goods, and animals for zoos traveled to Southeast Asia, Persia, southern Arabia, and east Africa to show off Chinese power. Their prowess exceeded that of contemporary Europeans, and had these expeditions not ended, the world economy might look different today. In 1433, the Chinese government decided that the cost of a navy was an unnecessary expense. The Chinese navy was slowly dismantled, and focus shifted to interior reform and military defense. Protecting itself from nomads had long been China's priority, and it accordingly returned to that focus. The growing limits on the Chinese navy would leave the country vulnerable to foreign invasion by sea later on. As was inevitable, Westerners arrived on the Chinese east coast, primarily Jesuit missionaries, who reached the mainland in 1582. They attempted to convert the Chinese people to Christianity by first converting the top of the social hierarchy and allowing the lower classes to subsequently convert. To further gain support, many Jesuits adopted Chinese dress, customs, and language.
Some Chinese scholars were interested in certain Western teachings and especially in Western technology. By the 1580s, Jesuit scholars like Matteo Ricci and Adam Schall amazed the Chinese elite with technological advances such as European clocks, improved calendars and cannons, and the accurate prediction of eclipses. Although some of the scholar-gentry converted, many were suspicious of the Westerners, whom they called "barbarians", and even resented them for the embarrassment they received at the hands of Western correction. Nevertheless, a small group of Jesuit scholars remained at the court to impress the emperor and his advisors. Decline Near the end of the 1500s, the extremely centralized government that gave so much power to the emperor had begun to fail as more incompetent rulers took the mantle. Along with these weak rulers came increasingly corrupt officials who took advantage of the decline. Once more the public projects fell into disrepair due to neglect by the bureaucracy, resulting in floods, drought, and famine that rocked the peasantry. The famine soon became so terrible that some peasants resorted to selling their children into slavery to save them from starvation, or to eating bark, the feces of geese, or other people. Many landlords abused the situation by building large estates where desperate farmers would work and be exploited. In turn, many of these farmers resorted to flight, banditry, and open rebellion. All of this corresponded with the usual dynastic decline of China seen before, as well as growing foreign threats. In the mid-16th century, Japanese and ethnic Chinese pirates began to raid the southern coast, and neither the bureaucracy nor the military were able to stop them. The threat of the northern Manchu people also grew. The Manchu already formed a large state north of China when, in the early 17th century, a local leader named Nurhaci suddenly united them under the Eight Banners, the armies into which the opposing families were organized. The Manchus adopted many Chinese customs, specifically taking after their bureaucracy. Nevertheless, the Manchus still remained a Chinese vassal. By 1644 the Chinese administration had become so weak that the sixteenth and last Ming emperor, the Chongzhen Emperor, did not respond to the severity of a rebellion by local dissenters until the rebels had invaded the Forbidden City (his personal estate). He soon hanged himself in the imperial gardens. For a brief time, the Shun dynasty was proclaimed, until a loyalist Ming official called in support from the Manchus to put down the new dynasty. The Shun Dynasty ended within a year, and the Manchu were now within the Great Wall. Taking advantage of the situation, the Manchus marched on the Chinese capital of Beijing. Within two decades all of China belonged to the Manchu and the Qing dynasty was established. Korea: Joseon dynasty (1392–1897) In early-modern Korea, the nearly 500-year-old kingdom of Goryeo fell and the new Joseon dynasty rose on August 5, 1392. Taejo of Joseon changed the country's name from Goryeo to Joseon. Sejong the Great created Hangul, the modern Korean alphabet, in 1443; the Joseon dynasty likewise saw several improvements in science and technology, such as sundials, water clocks, rain gauges, star maps, and detailed records of small Korean villages. The ninth king, Seongjong, completed the first comprehensive Korean law code in 1485, and culture and people's lives improved again. In 1592, Japan under Toyotomi Hideyoshi invaded Korea, beginning the Imjin War.
Before that war, Joseon had enjoyed a long peace, comparable to the Pax Romana, and was not prepared for war. Joseon suffered defeat after defeat, the Japanese army captured Seoul, and the whole Korean peninsula was in danger. But Yi Sun-sin, Korea's most renowned naval commander, defeated the Japanese fleet off the southern Korean coast, at one point facing 133 ships with only 13 of his own; this remarkable engagement is called the Battle of Myeongnyang. After that, the Ming dynasty came to Joseon's aid, and Japan lost the war. Toyotomi Hideyoshi's campaign in Korea thus failed, and the Tokugawa Shogunate later arose in Japan. Korea was badly damaged by the Imjin War. Not long after, the Manchus invaded Joseon, in what are known as the Qing invasions of Joseon. The first invasion was undertaken for the Qing's own security: because the Qing were at war with the Ming, the Ming alliance with Joseon was a threat to them. The second invasion was intended to force Joseon to submit to the Qing. After that, the Qing defeated the Ming and took all of the Chinese territories. Joseon, having lost the second war, also had to submit to the Qing. After the Qing invasion, the princes of the Joseon dynasty spent their childhood in China. A son of King Injo met Adam Schall in Beijing and hoped to introduce Western technologies to the Korean people once he became king. Unfortunately, he died before he could take the throne. Another prince then became the 17th king of the Joseon dynasty, Hyojong, who sought revenge on the Qing for his kingdom and for the fallen Ming dynasty. Later kings such as Yeongjo and Jeongjo tried to improve their people's lives and to rein in the unreasonable competition among officials. From the 17th century to the 18th century, Joseon sent diplomats and artists to Japan more than ten times. These missions were called 'Tongshinsa'. They were sent to Japan to teach the Japanese about advanced Korean culture, and Japanese hosts liked to receive poems from Korean nobles. At that time, Korea was more powerful than Japan, but that relationship was reversed after the 19th century, when Japan became more powerful than both Korea and China. Joseon then sent diplomats called 'Sooshinsa' to learn advanced Japanese technologies. After King Jeongjo's death, a few noble families controlled the whole kingdom during the early 19th century, and at the end of that period Western powers invaded Joseon. In 1876, Joseon was released from its subordination to the Qing and no longer had to obey them. But the Japanese Empire welcomed this, because a fully independent Joseon was easier for Japan to intervene in. After this, Joseon traded with the United States and sent 'Sooshinsa' to Japan, 'Youngshinsa' to Qing China, and 'Bobingsa' to the United States and Europe. These missions brought many modern things to the Korean peninsula. Japan: Tokugawa or Edo period (1603–1867) In early-modern Japan, following the Sengoku period of "warring states", central government had been largely reestablished by Oda Nobunaga and Toyotomi Hideyoshi during the Azuchi–Momoyama period. After the Battle of Sekigahara in 1600, central authority fell to Tokugawa Ieyasu, who completed this process and received the title of shōgun in 1603. Society in the Japanese "Tokugawa period" (see Edo society), unlike under the shogunates before it, was based on the strict class hierarchy originally established by Toyotomi Hideyoshi. The daimyōs (feudal lords) were at the top, followed by the warrior caste of samurai, with the farmers, artisans, and merchants ranking below. The country was strictly closed to foreigners, with few exceptions, under the Sakoku policy. Literacy rose in the two centuries of isolation.
In some parts of the country, particularly smaller regions, daimyōs and samurai were more or less identical, since daimyōs might be trained as samurai, and samurai might act as local lords. Otherwise, the largely inflexible nature of this social stratification system unleashed disruptive forces over time. Taxes on the peasantry were set at fixed amounts which did not account for inflation or other changes in monetary value. As a result, the tax revenues collected by the samurai landowners were worth less and less over time. This often led to numerous confrontations between noble but impoverished samurai and well-to-do peasants. None, however, proved compelling enough to seriously challenge the established order until the arrival of foreign powers. India In the Indian subcontinent, the Mughal Empire ruled most of India in the early 18th century. During the reigns of Emperor Shah Jahan and his son Aurangzeb, who ruled under Islamic sharia, the empire reached its architectural and economic zenith, becoming the world's largest economy, worth over 25% of world GDP, and showing signs of proto-industrialization. Following major events such as Nader Shah's invasion of the Mughal Empire, the Battle of Plassey, the Battle of Buxar, and the long Anglo-Mysore Wars, most of South Asia was colonised and governed by the British Empire, thus establishing the British Raj. The "classic period" ended with the death of Mughal Emperor Aurangzeb, although the dynasty continued for another 150 years. During this period, the Empire was marked by a highly centralized administration connecting the different regions. All the significant monuments of the Mughals, their most visible legacy, date to this period, which was characterised by the expansion of Persian cultural influence in the Indian subcontinent, with brilliant literary, artistic, and architectural results. The Maratha Empire was located in the southwest of present-day India and expanded greatly under the rule of the Peshwas, the prime ministers of the Maratha empire. In 1761, the Maratha army lost the Third Battle of Panipat to Ahmad Shah Durrani, king of Afghanistan, which halted imperial expansion; the empire was then divided into a confederacy of Maratha states. British and Dutch colonization The European economic and naval powers pushed into Asia, first to trade and then to take over major colonies. The Dutch led the way, followed by the British. Portugal had arrived first, but was too weak
the Norte Chico diet has been a subject of scholarly debate. In 1973, examining the Aspero region of Norte Chico, Michael E. Moseley contended that a maritime subsistence (seafood) economy had been the basis of society and its early flourishing. This theory, later termed the "maritime foundation of Andean Civilization", was at odds with the general scholarly consensus that civilization arose as a result of intensive grain-based agriculture, as had been the case in the emergence of civilizations in northeast Africa (Egypt) and southwest Asia (Mesopotamia). While earlier research pointed to edible domestic plants such as squash, beans, lucuma, guava, pacay, and camote at Caral, publications by Haas and colleagues have added avocado, achira, and maize (Zea mays) to the list of foods consumed in the region. In 2013, Haas and colleagues reported that maize was a primary component of the diet throughout the period of 3000 to 1800 BC. Cotton was another widespread crop in Norte Chico, essential to the production of fishing nets and textiles. Jonathan Haas noted a mutual dependency, whereby "The prehistoric residents of the Norte Chico needed the fish resources for their protein and the fishermen needed the cotton to make the nets to catch the fish." In the 2005 book 1491: New Revelations of the Americas Before Columbus, journalist Charles C. Mann surveyed the literature at the time, reporting a date "sometime before 3200 BC, and possibly before 3500 BC" as the beginning date for the formation of Norte Chico. He notes that the earliest date securely associated with a city is 3500 BC, at Huaricanga in the (inland) Fortaleza area. The Norte Chico civilization began to decline around 1800 BC as more powerful centers appeared to the south and north along its coast, and to the east within the Andes Mountains. Mesoamerica, the Woodland Period, and Mississippian culture (2000 BCE – 500 CE) After the decline of the Norte Chico civilization, several large, centralized civilizations developed in the Western Hemisphere: Chavin, Nazca, Moche, Huari, Quitus, Cañaris, Chimu, Pachacamac, Tiahuanaco, Aymara and Inca in the Central Andes (Ecuador, Peru and Bolivia); Muisca in Colombia; Taínos in the Dominican Republic (Hispaniola, Española) and parts of the Caribbean; and the Olmecs, Maya, Toltecs, Mixtecs, Zapotecs, Aztecs and Purepecha in southern North America (Mexico, Guatemala). The Olmec civilization was the first Mesoamerican civilization, beginning around 1600–1400 BC and ending around 400 BC. Mesoamerica is considered one of the six sites around the globe in which civilization developed independently and indigenously. This civilization is considered the mother culture of the Mesoamerican civilizations. The Mesoamerican calendar, numeral system, writing, and much of the Mesoamerican pantheon seem to have begun with the Olmec. Some elements of agriculture seem to have been practiced in Mesoamerica quite early. The domestication of maize is thought to have begun around 7,500 to 12,000 years ago. The earliest record of lowland maize cultivation dates to around 5100 BC. Agriculture continued to be mixed with a hunting-gathering-fishing lifestyle until quite late compared to other regions, but by 2700 BC, Mesoamericans were relying on maize, and living mostly in villages. Temple mounds and classes started to appear. By 1300–1200 BC, small centres coalesced into the Olmec civilization, which seems to have been a set of city-states, united in religious and commercial concerns.
The Olmec cities had ceremonial complexes with earth/clay pyramids, palaces, stone monuments, aqueducts and walled plazas. The first of these centers was at San Lorenzo (until 900 BC). La Venta was the last great Olmec centre. Olmec artisans sculpted jade and clay figurines of jaguars and humans. Their iconic giant heads – believed to be of Olmec rulers – stood in every major city. The Olmec civilization ended in 400 BC, with the defacing and destruction of San Lorenzo and La Venta, two of the major cities. It nevertheless spawned many other states, most notably the Mayan civilization, whose first cities began appearing around 700–600 BC. Olmec influences continued to appear in many later Mesoamerican civilizations. Cities of the Aztecs, Mayas, and Incas were as large and organized as the largest in the Old World, with an estimated population of 200,000 to 350,000 in Tenochtitlan, the capital of the Aztec Empire. The market established in the city was said to have been the largest ever seen by the conquistadors when they arrived. The capital of the Cahokians, Cahokia, located near modern East St. Louis, Illinois, may have reached a population of over 20,000. At its peak, between the 12th and 13th centuries, Cahokia may have been the most populous city in North America. Monk's Mound, the major ceremonial center of Cahokia, remains the largest earthen construction of the prehistoric New World. These civilizations developed agriculture as well, breeding maize (corn) from having ears 2–5 cm in length to perhaps 10–15 cm in length. Potatoes, tomatoes, beans (greens), pumpkins, avocados, and chocolate are now the most popular of the pre-Columbian agricultural products. The civilizations did not develop extensive livestock as there were few suitable species, although alpacas and llamas were domesticated for use as beasts of burden and sources of wool and meat in the Andes. By the 15th century, maize was being farmed in the Mississippi River Valley after introduction from Mexico. The course of further agricultural development was greatly altered by the arrival of Europeans. Classic stage (800 BCE – 1533 CE) Cahokia Cahokia was a major regional chiefdom, with trade and tributary chiefdoms located in a range of areas from bordering the Great Lakes to the Gulf of Mexico. Haudenosaunee The Iroquois League of Nations or "People of the Long House", based in present-day upstate and western New York, had a confederacy model from the mid-15th century. It has been suggested that their culture contributed to political thinking during the development of the later United States government. Their system of affiliation was a kind of federation, different from the strong, centralized European monarchies. Leadership was restricted to a group of 50 sachem chiefs, each representing one clan within a tribe; the Oneida and Mohawk people had nine seats each; the Onondagas held fourteen; the Cayuga had ten seats; and the Seneca had eight. Representation was not based on population numbers, as the Seneca tribe greatly outnumbered the others. When a sachem chief died, his successor was chosen by the senior woman of his tribe in consultation with other female members of the clan; property and hereditary leadership were passed matrilineally. Decisions were not made through voting but through consensus decision making, with each sachem chief holding theoretical veto power. The Onondaga were the "firekeepers", responsible for raising topics to be discussed.
They occupied one side of a three-sided fire (the Mohawk and Seneca sat on one side of the fire, the Oneida and Cayuga sat on the third side). Long-distance trading did not prevent warfare and displacement among the indigenous peoples, and their oral histories tell of numerous migrations to the historic territories where Europeans encountered them. The Iroquois invaded and attacked tribes in the Ohio River area of present-day Kentucky and claimed the hunting grounds. Historians have placed these events as occurring as early as the 13th century, or in the 17th century Beaver Wars. Through warfare, the Iroquois drove several tribes to migrate west to what became known as their historically traditional lands west of the Mississippi River. Tribes originating in the Ohio Valley who moved west included the Osage, Kaw, Ponca and Omaha people. By the mid-17th century, they had resettled in their historical lands in present-day Kansas, Nebraska, Arkansas and Oklahoma. The Osage warred with Caddo-speaking Native Americans, displacing them in turn by the mid-18th century and dominating their new historical territories. Oasisamerica Pueblo people The Pueblo people lived in what is now the Southwestern United States and northern Mexico, in large, apartment-like stone and adobe structures. They live in Arizona, New Mexico, Utah, Colorado, and possibly surrounding areas. Aridoamerica Chichimeca Chichimeca was the name that the Mexica (Aztecs) generically applied to a wide range of semi-nomadic peoples who inhabited the north of modern-day Mexico, and carried the same sense as the European term "barbarian". The name was adopted with a pejorative tone by the Spaniards when referring especially to the semi-nomadic hunter-gatherer peoples of northern Mexico. Mesoamerica Olmec The Olmec civilization emerged around 1200 BCE in Mesoamerica and ended around 400 BCE. Olmec art and concepts influenced surrounding cultures after their downfall. This civilization was thought to be the first in America to develop a writing system. After the Olmecs abandoned their cities for unknown reasons, the Maya, Zapotec and Teotihuacan arose. Purepecha The Purepecha civilization emerged around 1000 CE in Mesoamerica. They flourished from 1100 CE to 1530 CE. They continue to live on in the state of Michoacán. Fierce warriors, they were never conquered and, in their glory years, successfully sealed off huge areas from Aztec domination. Maya Maya history spans 3,000 years. The Classic Maya may have collapsed due to changing climate at the end of the 10th century. Toltec The Toltec were a nomadic people, dating from the 10th–12th century, whose language was also spoken by the Aztecs. Teotihuacan Teotihuacan (4th century BCE – 7/8th century CE) was both a city and an empire of the same name, which, at its zenith between 150 CE and the 5th century, covered most of Mesoamerica. Aztec The Aztecs, having started to build their empire around the 14th century, found their civilization abruptly ended by the Spanish conquistadors. They lived in Mesoamerica and surrounding lands. Their capital city, Tenochtitlan, was one of the largest cities of all time. South America Norte Chico The oldest known civilization of the Americas was established in the Norte Chico region of modern Peru. Complex society emerged in the group of coastal valleys between 3000 and 1800 BCE. The Quipu, a distinctive recording device among Andean civilizations, apparently dates from the era of Norte Chico's prominence.
Chavín The Chavín established a trade network and developed agriculture by as early as 900 BCE (late compared to the Old World), according to some estimates and archaeological finds. Artifacts were found at a site called Chavín in modern Peru at an elevation of 3,177 meters. Chavín civilization spanned from 900 BCE to 300 BCE. Inca Holding their capital at the great city of Cusco, the Inca civilization dominated the Andes region from 1438 to 1533. Known as Tahuantinsuyu, or "the land of the four regions", in Quechua, the Inca culture was highly distinct and developed. Cities were built with precise, unmatched stonework, constructed over many levels of mountain terrain. Terrace farming was a useful form of agriculture. There is evidence of excellent metalwork and even successful trepanation of the skull in Inca civilization. European colonization Around 1000, the Vikings established a short-lived settlement in Newfoundland, now known as L'Anse aux Meadows. Speculations exist about other Old World discoveries of the New World, but none of these are generally or completely accepted by most scholars. Spain sponsored a major exploration led by Italian explorer Christopher Columbus in 1492; it quickly led to extensive European colonization of the Americas. The Europeans brought Old World diseases which are thought to have caused catastrophic epidemics and a huge decrease in the native population. Columbus came at a time in which many technical developments in sailing techniques and communication made it possible to report his voyages easily and to spread word of them throughout Europe. It was also a time of growing religious, imperial and economic rivalries that led to a competition for the establishment of colonies. Colonial period 15th to 19th century colonies in the New World included: the Spanish colonization of the Americas (1492); the Viceroyalty of New Spain (1535 to 1821); the Viceroyalty of Peru (1542–1824); the Spanish Main; the Spanish West Indies; the Captaincy General of Guatemala; British America / the Thirteen Colonies (1584/1607 to 1776/20th century); the Danish West Indies; New Netherland; New France; the Captaincy General of Venezuela; the Portuguese colonization of the Americas (1499 to 1822); and Colonial Brazil (1500 to 1815). Decolonization The formation of sovereign states in the New World began with the United States Declaration of Independence of 1776. The American Revolutionary War lasted through its last major campaign, the Siege of Yorktown, in the early autumn of 1781, with peace being achieved in 1783. The Spanish colonies won their independence in the first quarter of the 19th century, in the Spanish American wars of independence. Simón Bolívar and José de San Martín, among others, led their independence struggle. Although Bolivar attempted to keep the Spanish-speaking parts of Latin America politically allied, they rapidly became independent of one another as well, and several further wars were fought, such as the Paraguayan War and the War of the Pacific. (See Latin American integration.) In the Portuguese colony, Dom Pedro I (also Pedro IV of Portugal), son of the Portuguese king Dom João VI, proclaimed the country's independence in 1822 and became Brazil's first Emperor. This was peacefully accepted by the crown in Portugal, upon compensation. Effects of slavery Slavery has had a significant role in the economic development of the New World after the colonization of the Americas by the Europeans.
The cotton, tobacco, and sugar cane harvested by slaves became important exports for the United States and the Caribbean countries. 20th century North America As a part of the British Empire, Canada immediately entered World War I when it broke out in 1914. Canada bore the brunt of several major battles during the early stages of the war, including the use of poison gas attacks at Ypres. Losses became grave, and the government eventually brought in conscription, despite the fact that this was against the wishes of the majority of French Canadians. In the ensuing Conscription Crisis of 1917, riots broke out on the streets of Montreal. In neighboring Newfoundland, the new dominion suffered a devastating loss on July 1, 1916, the first day of the Battle of the Somme. The United States stayed out of the conflict until 1917, when it joined the Entente powers. The United States was then able to play a crucial role at the Paris Peace Conference of 1919 that shaped interwar Europe. Mexico was not part of the war, as the country was embroiled in the Mexican Revolution at the time. The 1920s brought an age of great prosperity in the United States, and, to a lesser degree, Canada. But the Wall Street Crash of 1929, combined with drought, ushered in a period of economic hardship in the United States and Canada. From 1936 to 1949, there was a popular uprising against the anti-Catholic Mexican government of the time, set off specifically by the anti-clerical provisions of the Mexican Constitution
of 1917. With the outbreak of the Second World War, Canada once again found itself at war before its neighbors, with numerically modest but significant contributions overseas such as the Battle of Hong Kong and the Battle of Britain. The entry of the United States into the war helped to tip the balance in favour of the Allies. Two Mexican tankers transporting oil to the United States were attacked and sunk by the Germans in the Gulf of Mexico in 1942, in spite of Mexico's neutrality at that time. This led Mexico to enter the conflict with a declaration of war on the Axis nations. The destruction of Europe wrought by the war vaulted all North American countries to more important roles in world affairs, especially the United States, which emerged as a "superpower". The early Cold War era saw the United States as the most powerful nation in a Western coalition of which Mexico and Canada were also a part. In Canada, Quebec was transformed by the Quiet Revolution and the emergence of Quebec nationalism. Mexico experienced an era of huge economic growth after World War II, a heavy industrialization process, and a growth of its middle class, a period known in Mexican history as "El Milagro Mexicano" (the Mexican miracle). The Caribbean saw the beginnings of decolonization, while on the largest island the Cuban Revolution introduced Cold War rivalries into Latin America. The civil rights movement in the U.S. ended Jim Crow and empowered black voters in the 1960s, which allowed black citizens to move into high government offices for the first time since Reconstruction. However, the dominant New Deal coalition collapsed in the mid-1960s in disputes over race and the Vietnam War, and the conservative movement began its rise to power, as the once dominant liberalism weakened and collapsed. Canada during
In central Nigeria, around 1500 BC, the Nok culture developed on the Jos Plateau. It was a highly centralized community. The Nok people produced lifelike representations in terracotta, including human heads and human figures, elephants, and other animals. By 500 BC they were smelting iron. By 200 AD the Nok culture had vanished. Based on stylistic similarities with the Nok terracottas, the bronze figurines of the Yoruba kingdom of Ife and those of the Bini kingdom of Benin are now believed to be continuations of the traditions of the earlier Nok culture. Bantu expansion The Bantu expansion was a significant movement of people in African history and in the settling of the continent. People speaking Bantu languages (a branch of the Niger–Congo family) began in the second millennium BC to spread from Cameroon eastward to the Great Lakes region. In the first millennium BC, Bantu languages spread from the Great Lakes to southern and east Africa. One early movement headed south to the upper Zambezi valley in the 2nd century BC. Then Bantu-speakers pushed westward to the savannahs of present-day Angola and eastward into Malawi, Zambia, and Zimbabwe in the 1st century AD. The second thrust from the Great Lakes was eastward, 2,000 years ago, expanding to the Indian Ocean coast, Kenya and Tanzania. The eastern group eventually met the southern migrants from the Great Lakes in Malawi, Zambia, and Zimbabwe. Both groups continued southward, with eastern groups continuing to Mozambique and reaching Maputo in the 2nd century AD, and expanding as far as Durban. By the later first millennium AD, the expansion had reached the Great Kei River in present-day South Africa. Sorghum, a major Bantu crop, could not thrive under the winter rainfall of Namibia and the western Cape. Khoisan people inhabited the remaining parts of southern Africa. Medieval and Early Modern (6th to 18th centuries) Sao civilization The Sao civilization flourished from about the sixth century BC to as late as the 16th century AD in Central Africa. The Sao lived by the Chari River south of Lake Chad in territory that later became part of present-day Cameroon and Chad. They are the earliest people to have left clear traces of their presence in the territory of modern Cameroon. Today, several ethnic groups of northern Cameroon and southern Chad – but particularly the Sara people – claim descent from the civilization of the Sao. Sao artifacts show that they were skilled workers in bronze, copper, and iron. Finds include bronze sculptures and terracotta statues of human and animal figures, coins, funerary urns, household utensils, jewelry, highly decorated pottery, and spears. The largest Sao archaeological finds have occurred south of Lake Chad. Kanem Empire The Kanem Empire was centered in the Chad Basin. It was known as the Kanem Empire from the 9th century AD onward and lasted as the independent kingdom of Bornu until 1893. At its height it encompassed an area covering not only much of Chad, but also parts of modern southern Libya, eastern Niger, northeastern Nigeria, northern Cameroon, parts of South Sudan and the Central African Republic. The history of the Empire is mainly known from the Royal Chronicle or Girgam discovered in 1851 by the German traveller Heinrich Barth. Kanem rose in the 8th century in the region to the north and east of Lake Chad. The Kanem empire went into decline, shrank, and in the 14th century was defeated by Bilala invaders from the Lake Fitri region.
Around the 9th century AD, the central Sudanic Empire of Kanem, with its capital at Njimi, was founded by Kanuri-speaking nomads. Kanem arose by engaging in the trans-Saharan trade. It exchanged slaves captured by raiding the south for horses from North Africa, which in turn aided in the acquisition of slaves. By the late 11th century, the Islamic Sayfawa (Saifawa) dynasty was founded by Humai (Hummay) ibn Salamna. The Sayfawa Dynasty ruled for 771 years, making it one of the longest-lasting dynasties in human history. In addition to trade, taxation of local farms around Kanem became a source of state income. Kanem reached its peak under Mai (king) Dunama Dibalemi ibn Salma (1210–1248). The empire reportedly was able to field 40,000 cavalry, and it extended from Fezzan in the north to the Sao state in the south. Islam became firmly entrenched in the empire. Pilgrimages to Mecca were common; Cairo had hostels set aside specifically for pilgrims from Kanem. Bornu Empire The Kanuri people led by the Sayfawa migrated to the west and south of the lake, where they established the Bornu Empire. By the late 16th century the Bornu empire had expanded and recaptured the parts of Kanem that had been conquered by the Bilala. Satellite states of Bornu included the Damagaram in the west and Baguirmi to the southeast of Lake Chad. Around 1400, the Sayfawa Dynasty moved its capital to Bornu, a tributary state southwest of Lake Chad, with a new capital at Birni Ngazargamu. Overgrazing had caused the pastures of Kanem to become too dry. In addition, political rivalry from the Bilala clan was becoming intense. Moving to Bornu better situated the empire to exploit the trans-Saharan trade and to widen its network in that trade. Links to the Hausa states were also established, providing horses and salt from Bilma for Bonoman gold. Mai Ali Gazi ibn Dunama (c. 1475 – 1503) defeated the Bilala, reestablishing complete control of Kanem. During the early 16th century, the Sayfawa Dynasty solidified its hold on the Bornu population after much rebellion. In the latter half of the 16th century, Mai Idris Alooma modernized the empire's military, in contrast to the Songhai Empire. Turkish mercenaries were used to train the military. The Sayfawa Dynasty were the first monarchs south of the Sahara to import firearms. The empire controlled all of the Sahel from the borders of Darfur in the east to Hausaland in the west. A friendly relationship was established with the Ottoman Empire via Tripoli, and the Mai exchanged gifts with the Ottoman sultan. Not much is known about Bornu during the 17th and 18th centuries. During the 18th century, it became a center of Islamic learning. However, Bornu's army became outdated through its failure to import new arms, and Kanem–Bornu had also begun its decline. The power of the mai was undermined by droughts and famine that were becoming more intense, internal rebellion in the pastoralist north, growing Hausa power, and the importation of firearms, which made warfare more bloody. By 1841, the last mai was deposed, bringing to an end the long-lived Sayfawa Dynasty. In its place, the al-Kanemi dynasty of the shehu rose to power. Shilluk Kingdom The Shilluk Kingdom was centered in South Sudan from the 15th century, along a strip of land on the western bank of the White Nile, from Lake No to about 12° north latitude. The capital and royal residence was in the town of Fashoda. The kingdom was founded during the mid-15th century AD by its first ruler, Nyikang.
During the 19th century, the Shilluk Kingdom faced decline following military assaults from the Ottoman Empire and later British and Sudanese colonization in Anglo-Egyptian Sudan. Baguirmi Kingdom The Kingdom of Baguirmi existed as an independent state during the 16th and 17th centuries southeast of Lake Chad in what is now the country of Chad. Baguirmi emerged to the southeast of the Kanem–Bornu Empire. The kingdom's first ruler was Mbang Birni Besse. Later in his reign, the Bornu Empire conquered the kingdom and made it a tributary. Wadai Empire The Wadai Empire was centered on Chad and the Central African Republic from the 17th century. The Tunjur people founded the Wadai Kingdom to the east of Bornu in the 16th century. In the 17th century, the Maba people revolted and established a Muslim dynasty. At first Wadai paid tribute to Bornu and Darfur, but by the 18th century Wadai was fully independent and had become an aggressor against its neighbors. To the west of Bornu, by the 15th century the Kingdom of Kano had become the most powerful of the Hausa Kingdoms, in an unstable truce with the Kingdom of Katsina to the north. Both were absorbed into the Sokoto Caliphate during the Fulani Jihad of 1805, which threatened Bornu itself. Luba Empire Sometime between 1300 and 1400 AD, Kongolo Mwamba (Nkongolo) from the Balopwe clan unified the various Luba peoples, near Lake Kisale. He founded the Kongolo Dynasty, which was later ousted by Kalala Ilunga. Kalala expanded the kingdom west of Lake Kisale. A new centralized political system of spiritual kingship was established, with a court council of head governors and sub-heads reaching all the way down to village heads. The king was the direct communicator with the ancestral spirits and was chosen by them. Conquered states were integrated into the system and represented at the court, with their titles. The authority of the king resided in his spiritual power rather than in his military strength. The army was relatively small. The Luba kingdom was able to control regional trade and collect tribute for redistribution. Numerous offshoot states were formed with founders claiming descent from the Luba. The Luba political system spread throughout Central Africa, southern Uganda, Rwanda, Burundi, Malawi, Zambia, Zimbabwe, and the western Congo. Two major empires claiming Luba descent were the Lunda Empire and Maravi Empire. The Bemba people and Basimba people of northern Zambia were descended from Luba migrants who arrived in Zambia during the 17th century. Lunda Empire In the 1450s, a Luba prince of the royal family, Ilunga Tshibinda, married the Lunda queen Rweej and united all Lunda peoples. Their son Luseeng expanded the kingdom. His son Naweej expanded the empire further and is known as the first Lunda emperor, with a royal title meaning the Lord of Vipers. The Luba political system was retained, and conquered peoples were integrated into the system. The emperor assigned a royal adviser and a tax collector to each conquered state. Numerous states claimed descent from the Lunda. The Imbangala of inland Angola claimed descent from a founder, Kinguri, brother of Queen Rweej, who could not tolerate the rule of Tshibinda. Kinguri became the title of kings of states founded by Queen Rweej's brother. The Luena (Lwena) and Lozi (Luyani) in Zambia also claim descent from Kinguri. During the 17th century, a Lunda chief and warrior called Mwata Kazembe set up an Eastern Lunda kingdom in the valley of the Luapula River. The Lunda's western expansion also saw claims of descent by the Yaka and the Pende.
The Lunda linked Central Africa with the western coast trade. The kingdom of Lunda came to an end in the 19th century when it was invaded by the Chokwe, who were armed with guns. Kingdom of Kongo By the 15th century AD, the farming Bakongo people (ba being the plural prefix) were unified as the Kingdom of Kongo under a ruler called the manikongo, residing in the fertile Pool Malebo area on the lower Congo River. The capital was M'banza-Kongo. With superior organization, they were able to conquer their neighbors and extract tribute. They were experts in metalwork, pottery, and weaving raffia cloth. They stimulated interregional trade via a tribute system controlled by the manikongo. Later, maize (corn) and cassava (manioc) would be introduced to the region via trade with the Portuguese at their ports at Luanda and Benguela. The maize and cassava would result in population growth in the region and other parts of Africa, replacing millet as a main staple. By the 16th century, the manikongo held authority from the Atlantic in the west to the Kwango River in the east. Each territory was assigned a mani-mpembe (provincial governor) by the manikongo. In 1506, Afonso I (1506–1542), a Christian, took over the throne. Slave trading increased with Afonso's wars of conquest. About 1568 to 1569, the Jaga invaded Kongo, laying waste to the kingdom and forcing the manikongo into exile. In 1574, Manikongo Álvaro I was reinstated with the help of Portuguese mercenaries. During the latter part of the 1660s, the Portuguese tried to gain control of Kongo. Manikongo António I (1661–1665), with a Kongolese army of 5,000, was destroyed by an army of Afro-Portuguese at the Battle of Mbwila. The empire dissolved into petty polities, fighting among each other for war captives to sell into slavery. Kongo gained captives from the Kingdom of Ndongo in wars of conquest. Ndongo was ruled by the ngola. Ndongo would also engage in slave trading with the Portuguese, with São Tomé being a transit point to Brazil. The kingdom was not as welcoming as Kongo; it viewed the Portuguese with great suspicion and as an enemy. The Portuguese in the latter part of the 16th century tried to gain control of Ndongo but were defeated by the Mbundu. Ndongo experienced depopulation from slave raiding. The leaders established another state at Matamba, affiliated with Queen Nzinga, who put up a strong resistance to the Portuguese until coming to terms with them. The Portuguese settled along the coast as trade dealers, not venturing on conquest of the interior. Slavery wreaked havoc in the interior, with states initiating wars of conquest for captives. The Imbangala formed the slave-raiding state of Kasanje, a major source of slaves during the 17th and 18th centuries. Horn of Africa Somalia The birth of Islam opposite Somalia's Red Sea coast meant that Somali merchants and sailors living on the Arabian Peninsula gradually came under the influence of the new religion through their converted Arab Muslim trading partners. With the migration of Muslim families from the Islamic world to Somalia in the early centuries of Islam, and the peaceful conversion of the Somali population by Somali Muslim scholars in the following centuries, the ancient city-states eventually transformed into Islamic Mogadishu, Berbera, Zeila, Barawa and Merka, which were part of the Berber (the medieval Arab term for the ancestors of the modern Somalis) civilization. 
The city of Mogadishu came to be known as the City of Islam and controlled the East African gold trade for several centuries. During this period, sultanates such as the Ajuran Empire and the Sultanate of Mogadishu, and republics like Barawa, Merca, and Hobyo, together with their respective ports, flourished and had a lucrative foreign commerce, with ships sailing to and coming from Arabia, India, Venice, Persia, Egypt, Portugal, and as far away as China. Vasco da Gama, who passed by Mogadishu in the 15th century, noted that it was a large city with houses four or five stories high and big palaces in its centre, in addition to many mosques with cylindrical minarets. In the 16th century, Duarte Barbosa noted that many ships from the Kingdom of Cambaya in modern-day India sailed to Mogadishu with cloth and spices, for which they in return received gold, wax, and ivory. Barbosa also highlighted the abundance of meat, wheat, barley, horses, and fruit in the coastal markets, which generated enormous wealth for the merchants. Mogadishu, the center of a thriving weaving industry known as toob benadir (specialized for the markets in Egypt and Syria), together with Merca and Barawa, served as a transit stop for Swahili merchants from Mombasa and Malindi and for the gold trade from Kilwa. Jewish merchants from the Strait of Hormuz brought their Indian textiles and fruit to the Somali coast to exchange for grain and wood. Trading relations were established with Malacca in the 15th century, with cloth, ambergris, and porcelain being the main commodities of the trade. Giraffes, zebras, and incense were exported to the Ming Empire of China, which established Somali merchants as leaders in the commerce between Asia and Africa and, in the process, influenced the Chinese language with borrowings from the Somali language. Hindu merchants from Surat and southeast African merchants from Pate, seeking to bypass both the Portuguese blockade and Omani meddling, used the Somali ports of Merca and Barawa (which were out of the two powers' jurisdiction) to conduct their trade in safety and without any problems. Ethiopia The Zagwe dynasty ruled many parts of modern Ethiopia and Eritrea from approximately 1137 to 1270. The name of the dynasty comes from the Cushitic-speaking Agaw of northern Ethiopia. From 1270 AD onward, for many centuries, the Solomonic dynasty ruled the Ethiopian Empire. In the early 15th century Ethiopia sought to make diplomatic contact with European kingdoms for the first time since Aksumite times. A letter from King Henry IV of England to the Emperor of Abyssinia survives. In 1428, the Emperor Yeshaq I sent two emissaries to Alfonso V of Aragon, who sent return emissaries who failed to complete the return trip. The first continuous relations with a European country began in 1508 with the Kingdom of Portugal under Emperor Lebna Dengel, who had just inherited the throne from his father. This proved to be an important development, for when the empire was subjected to the attacks of the Adal general and imam, Ahmad ibn Ibrahim al-Ghazi (called "Grañ", or "the Left-handed"), Portugal assisted the Ethiopian emperor by sending weapons and four hundred men, who helped his son Gelawdewos defeat Ahmad and re-establish his rule. This Abyssinian–Adal War was also one of the first proxy wars in the region, as the Ottoman Empire and Portugal took sides in the conflict. When Emperor Susenyos converted to Roman Catholicism in 1624, years of revolt and civil unrest followed, resulting in thousands of deaths.
The Jesuit missionaries had offended the Orthodox faith of the local Ethiopians, and on June 25, 1632, Susenyos's son, Emperor Fasilides, declared the state religion to again be Ethiopian Orthodox Christianity and expelled the Jesuit missionaries and other Europeans. North Africa Maghreb By 711 AD, the Umayyad Caliphate had conquered all of North Africa. By the 10th century, the majority of the population of North Africa was Muslim. By the 9th century AD, the unity brought about by the Islamic conquest of North Africa and the expansion of Islamic culture came to an end. Conflict arose as to who should be the successor of the prophet. The Umayyads had initially taken control of the Caliphate, with their capital at Damascus. Later, the Abbasids had taken control, moving the capital to Baghdad. The Berber people, being independent in spirit and hostile to outside interference in their affairs and to Arab exclusivity in orthodox Islam, adopted Shi'ite and Kharijite Islam, both considered unorthodox and hostile to the authority of the Abbasid Caliphate. Numerous Kharijite kingdoms came and fell during the 8th and 9th centuries, asserting their independence from Baghdad. In the early 10th century, Shi'ite groups from Syria, claiming descent from Muhammad's daughter Fatimah, founded the Fatimid Dynasty in the Maghreb. By 950, they had conquered all of the Maghreb and by 969 all of Egypt. They had immediately broken away from Baghdad. In an attempt to bring about a purer form of Islam among the Sanhaja Berbers, Abdallah ibn Yasin founded the Almoravid movement in present-day Mauritania and Western Sahara. The Sanhaja Berbers, like the Soninke, practiced an indigenous religion alongside Islam. Abdallah ibn Yasin found ready converts in the Lamtuna Sanhaja, who were dominated by the Soninke in the south and the Zenata Berbers in the north. By the 1040s, all of the Lamtuna was converted to the Almoravid movement. With the help of Yahya ibn Umar and his brother Abu Bakr ibn Umar, the sons of the Lamtuna chief, the Almoravids created an empire extending from the Sahel to the Mediterranean. After the death of Abdallah ibn Yassin and Yahya ibn Umar, Abu Bakr split the empire in half, between himself and Yusuf ibn Tashfin, because it was too big to be ruled by one individual. Abu Bakr took the south to continue fighting the Soninke, and Yusuf ibn Tashfin took the north, expanding it to southern Spain. The death of Abu Bakr in 1087 saw a breakdown of unity and increase military dissension in the south. This caused a re-expansion of the Soninke. The Almoravids were once held responsible for bringing down the Ghana Empire in 1076, but this view is no longer credited. During the 10th through 13th centuries, there was a large-scale movement of bedouins out of the Arabian Peninsula. About 1050, a quarter of a million Arab nomads from Egypt moved into the Maghreb. Those following the northern coast were referred to as Banu Hilal. Those going south of the Atlas Mountains were the Banu Sulaym. This movement spread the use of the Arabic language and hastened the decline of the Berber language and the Arabisation of North Africa. Later an Arabised Berber group, the Hawwara, went south to Nubia via Egypt. In the 1140s, Abd al-Mu'min declared jihad on the Almoravids, charging them with decadence and corruption. He united the northern Berbers against the Almoravids, overthrowing them and forming the Almohad Empire. 
During this period, the Maghreb became thoroughly Islamised and saw the spread of literacy, the development of algebra, and the use of the number zero and decimals. By the 13th century, the Almohad empire had split into three rival states. Muslim states were largely extinguished in the Iberian Peninsula by the Christian kingdoms of Castile, Aragon, and Portugal. Around 1415, Portugal engaged in a reconquista of North Africa by capturing Ceuta, and in later centuries Spain and Portugal acquired other ports on the North African coast. In 1492, at the end of the Granada War, Spain defeated Muslims in the Emirate of Granada, effectively ending eight centuries of Muslim domination in southern Iberia. The pashas of Tripoli traded horses, firearms, and armor via Fez with the sultans of the Bornu Empire for slaves. In the 16th century, the Saadis, an Arab nomad tribe that claimed descent from Muhammad's daughter, conquered and united Morocco. They prevented the Ottoman Empire from reaching the Atlantic and expelled Portugal from Morocco's western coast. Ahmad al-Mansur brought the state to the height of its power. He invaded Songhay in 1591 to control the gold trade, which had been diverted to the western coast of Africa for European ships and to the east, to Tunis. Morocco's hold on Songhay diminished in the 17th century. In 1603, after Ahmad's death, the kingdom split into the two sultanates of Fes and Marrakesh. Later it was reunited by Moulay al-Rashid (1666–1672), founder of the Alaouite Dynasty. His brother and successor, Ismail ibn Sharif (1672–1727), strengthened the unity of the country by importing slaves from the Sudan to build up the military. Nile Valley Egypt In 642 AD, the Rashidun Caliphate conquered Byzantine Egypt. Egypt under the Fatimid Caliphate was prosperous. Dams and canals were repaired, and wheat, barley, flax, and cotton production increased. Egypt became a major producer of linen and cotton cloth. Its Mediterranean and Red Sea trade increased. Egypt also minted a gold currency called the Fatimid dinar, which was used for international trade. The bulk of revenues came from taxing the fellahin (peasant farmers), and taxes were high. Tax collecting was leased to Berber overlords, who were soldiers who had taken part in the Fatimid conquest in 969 AD. The overlords paid a share to the caliphs and retained what was left. Eventually, they became landlords and constituted a settled land aristocracy. To fill the military ranks, Mamluk Turkish slave cavalry and Sudanese slave infantry were used. Berber freemen were also recruited. In the 1150s, tax revenues from farms diminished. The soldiers revolted and wreaked havoc in the countryside, slowed trade, and diminished the power and authority of the Fatimid caliphs. During the 1160s, Fatimid Egypt came under threat from European crusaders. Out of this threat, a Kurdish general named Ṣalāḥ ad-Dīn Yūsuf ibn Ayyūb (Saladin), with a small band of professional soldiers, emerged as an outstanding Muslim defender. Saladin defeated the Christian crusaders at Egypt's borders and recaptured Jerusalem in 1187. On the death of Al-Adid, the last Fatimid caliph, in 1171, Saladin became the ruler of Egypt, ushering in the Ayyubid Dynasty. Under his rule, Egypt returned to Sunni Islam, Cairo became an important center of Arab Islamic learning, and Mamluk slaves were increasingly recruited from Turkey and southern Russia for military service.
Support for the military was tied to the iqta, a form of land taxation in which soldiers were given ownership in return for military service. Over time, Mamluk slave soldiers became a very powerful landed aristocracy, to the point of getting rid of the Ayyubid dynasty in 1250 and establishing a Mamluk dynasty. The more powerful Mamluks were referred to as amirs. For 250 years, Mamluks controlled all of Egypt under a military dictatorship. Egypt extended her territories to Syria and Palestine, thwarted the crusaders, and halted a Mongol invasion in 1260 at the Battle of Ain Jalut. Mamluk Egypt came to be viewed as a protector of Islam, and of Medina and Mecca. Eventually the iqta system declined and proved unreliable for providing an adequate military. The Mamluks started viewing their iqta as hereditary and became attuned to urban living. Farm production declined, and dams and canals lapsed into disrepair. Mamluk military skill and technology did not keep pace with new technology of handguns and cannons. With the rise of the Ottoman Empire, Egypt was easily defeated. In 1517, at the end of an Ottoman–Mamluk War, Egypt became part of the Ottoman Empire. The Istanbul government revived the iqta system. Trade was reestablished in the Red Sea, but it could not completely connect with the Indian Ocean trade because of growing Portuguese presence. During the 17th and 18th centuries, hereditary Mamluks regained power. The leading Mamluks were referred to as beys. Pashas, or viceroys, represented the Istanbul government in name only, operating independently. During the 18th century, dynasties of pashas became established. The government was weak and corrupt. In 1798, Napoleon invaded Egypt. The local forces had little ability to resist the French conquest. However, the British Empire and the Ottoman Empire were able to remove French occupation in 1801. These events marked the beginning of a 19th-century Anglo-Franco rivalry over Egypt. Sudan Christian and Islamic Nubia After Ezana of Aksum sacked Meroe, people associated with the site of Ballana moved into Nubia from the southwest and founded three kingdoms: Makuria, Nobatia, and Alodia. They would rule for 200 years. Makuria was above the third cataract, along the Dongola Reach with its capital at Dongola. Nobadia was to the north with its capital at Faras, and Alodia was to the south with its capital at Soba. Makuria eventually absorbed Nobadia. The people of the region converted to Monophysite Christianity around 500 to 600 CE. The church initially started writing in Coptic, then in Greek, and finally in Old Nubian, a Nilo-Saharan language. The church was aligned with the Egyptian Coptic Church. By 641, Egypt was conquered by the Rashidun Caliphate. This effectively blocked Christian Nubia and Aksum from Mediterranean Christendom. In 651–652, Arabs from Egypt invaded Christian Nubia. Nubian archers soundly defeated the invaders. The Baqt (or Bakt) Treaty was drawn, recognizing Christian Nubia and regulating trade. The treaty controlled relations between Christian Nubia and Islamic Egypt for almost six hundred years. By the 13th century, Christian Nubia began its decline. The authority of the monarchy was diminished by the church and nobility. Arab bedouin tribes began to infiltrate Nubia, causing further havoc. Fakirs (holy men) practicing Sufism introduced Islam into Nubia. By 1366, Nubia had become divided into petty fiefdoms when it was invaded by Mamluks. During the 15th century, Nubia was open to Arab immigration. 
Arab nomads intermingled with the population and introduced Arab culture and the Arabic language. By the 16th century, Makuria and Nobatia had been Islamized. During the 16th century, Abdallah Jamma headed an Arab confederation that destroyed Soba, capital of Alodia, the last holdout of Christian Nubia. Later Alodia would fall under the Funj Sultanate. During the 15th century, Funj herders migrated north to Alodia and occupied it. Between 1504 and 1505, the Funj established their kingdom with its capital at Sennar; it later expanded, reaching its peak under Badi II Abu Daqn (c. 1644 – 1680). By the end of the 16th century, the Funj had converted to Islam. They pushed their empire westward to Kordofan. They expanded eastward, but were halted by Ethiopia. They controlled Nubia down to the 3rd Cataract. The economy depended on captured enemies to fill the army and on merchants travelling through Sennar. Under Badi IV (1724–1762), the army turned on the king, making him nothing but a figurehead. In 1821, the Funj were conquered by Muhammad Ali (1805–1849), Pasha of Egypt. Southern Africa Settlements of Bantu-speaking peoples who were iron-using agriculturists and herdsmen were already well established south of the Limpopo River by the 4th century CE, displacing and absorbing the original Khoisan speakers. They slowly moved south, and the earliest ironworks in modern-day KwaZulu-Natal Province are believed to date from around 1050. The southernmost group was the Xhosa people, whose language incorporates certain linguistic traits from the earlier Khoisan people, reaching the Great Fish River in today's Eastern Cape Province. Great Zimbabwe and Mapungubwe The Kingdom of Mapungubwe was the first state in Southern Africa, with its capital at Mapungubwe. The state arose in the 12th century CE. Its wealth came from controlling the trade in ivory from the Limpopo Valley, copper from the mountains of northern Transvaal, and gold from the Zimbabwe Plateau between the Limpopo and Zambezi rivers, with the Swahili merchants at Chibuene. By the mid-13th century, Mapungubwe was abandoned. After the decline of Mapungubwe, Great Zimbabwe rose on the Zimbabwe Plateau. Zimbabwe means stone building. Great Zimbabwe was the first city in Southern Africa and was the center of an empire, consolidating lesser Shona polities. Stone building was inherited from Mapungubwe. These building techniques were enhanced and came into maturity at Great Zimbabwe, represented by the wall of the Great Enclosure. The dry-stack stone masonry technology was also used to build smaller compounds in the area. Great Zimbabwe flourished by trading with Swahili Kilwa and Sofala. The rise of Great Zimbabwe parallels the rise of Kilwa. Great Zimbabwe was a major source of gold. Its royal court lived in luxury: its members wore Indian cotton, surrounded themselves with copper and gold ornaments, and ate off plates from as far away as Persia and China. Around the 1420s and 1430s, Great Zimbabwe was in decline. The city was abandoned by 1450. Some have attributed the decline to the rise of the trading town Ingombe Ilede. A new chapter of Shona history ensued. Nyatsimba Mutota, a northern Shona king of the Karanga, engaged in conquest. He and his son Mutope conquered the Zimbabwe Plateau, going through Mozambique to the east coast, linking the empire to the coastal trade. They called their empire Wilayatu 'l Mu'anamutapah or mwanamutapa (Lord of the Plundered Lands), or the Kingdom of Mutapa. Monomotapa was the Portuguese corruption.
They did not build stone structures; the northern Shonas had no traditions of building in stone. After the death of Matope in 1480, the empire split into two small empires: Torwa in the south and Mutapa in the north. The split occurred because of rivalry between two Shona lords, Changa and Togwa, and the mwanamutapa line. Changa was able to acquire the south, forming the Kingdom of Butua with its capital at Khami. The Mutapa Empire continued in the north under the mwanamutapa line. During the 16th century the Portuguese were able to establish permanent markets up the Zambezi River in an attempt to gain political and military control of Mutapa. They were partially successful. In 1628, a decisive battle allowed them to install a puppet mwanamutapa named Mavura, who signed treaties that gave favorable mineral export rights to the Portuguese. The Portuguese were successful in destroying the mwanamutapa system of government and undermining trade. By 1667, Mutapa was in decay. Chiefs would not allow digging for gold because of fear of Portuguese theft, and the population declined. The Kingdom of Butua was ruled by a changamire, a title derived from the founder, Changa. Later it became the Rozwi Empire. The Portuguese tried to gain a foothold but were thrown out of the region in 1693 by Changamire Dombo. The 17th century was a period of peace and prosperity. The Rozwi Empire fell into ruin in the 1830s under attack from invading Nguni from Natal. Namibia By 1500 AD, most of southern Africa had established states. In northwestern Namibia, the Ovambo engaged in farming and the Herero engaged in herding. As cattle numbers increased, the Herero moved southward to central Namibia for grazing land. A related group, the Ovambanderu, expanded to Ghanzi in northwestern Botswana. The Nama, a Khoi-speaking, sheep-raising group, moved northward and came into contact with the Herero; this would set the stage for much conflict between the two groups. The expanding Lozi states pushed the Mbukushu, Subiya, and Yei to the Boteti, Okavango, and Chobe in northern Botswana. South Africa and Botswana Sotho–Tswana The development of Sotho–Tswana states based on the highveld, south of the Limpopo River, began around 1000 CE. The chief's power rested on cattle and his connection to the ancestors. This can be seen in the Toutswemogala Hill settlements with stone foundations and stone walls, north of the highveld and south of the Vaal River. Northwest of the Vaal River, early Tswana states developed, centered on towns of thousands of people. When disagreements or rivalry arose, different groups moved to form their own states. Nguni peoples Southeast of the Drakensberg mountains lived Nguni-speaking peoples (Zulu, Xhosa, Swazi, and Ndebele), who were metalworkers, cultivators of millet, and cattle herders. They too engaged in state building, with new states developing out of rivalry, disagreements, and population pressure that caused movement into new regions. This 19th-century process of warfare, state building, and migration later became known as the Mfecane (Nguni) or Difaqane (Sotho). Its major catalyst was the consolidation of the Zulu Kingdom. Khoisan and Boers The Khoisan lived in the southwestern Cape Province, where winter rainfall is plentiful. Earlier Khoisan populations were absorbed by Bantu peoples, such as the Sotho and Nguni, but the Bantu expansion stopped at the region with winter rainfall. Some Bantu languages have incorporated the click consonants of the Khoisan languages.
The Khoisan traded with their Bantu neighbors, providing cattle, sheep, and hunted items. In return, their Bantu-speaking neighbors traded copper, iron, and tobacco. By the early 17th century, the Dutch East India Company had established a replenishing station at Table Bay for restocking water and purchasing meat from the Khoikhoi. The Khoikhoi received copper, iron, tobacco, and beads in exchange. In order to control the price of meat and stock and make service more consistent, the Dutch established a permanent settlement at Table Bay in 1652. They grew fresh fruit and vegetables and established a hospital for sick sailors. To increase produce, the Dutch decided to increase the number of farms at Table Bay by encouraging freeburgher boers (farmers) on lands worked initially by slaves from West Africa. The land was taken from Khoikhoi grazing land, triggering the first Khoikhoi-Dutch war in 1659. No victors emerged, but the Dutch assumed a "right of conquest" by which they claimed all of the Cape. In a series of wars pitting the Khoikhoi against each other, the Boers assumed all Khoikhoi land and claimed all their cattle. The second Khoikhoi-Dutch war (1673–1677) was a cattle raid. The Khoikhoi also died in the thousands from European diseases. By the 18th century, the Cape Colony had grown, with slaves coming from Madagascar, Mozambique, and Indonesia. The settlement also started to expand northward, but Khoikhoi resistance, raids, and guerrilla warfare slowed the expansion during the 18th century. Boers who started to practice pastoralism were known as trekboers. A common source of trekboer labor was orphan children who were captured during raids and whose parents had been killed. Southeast Africa Prehistory According to the theory of recent African origin of modern humans, the mainstream position held within the scientific community, all humans originate from either Southeast Africa or the Horn of Africa. During the first millennium CE, Nilotic and Bantu-speaking peoples moved into the region. Swahili coast Following the Bantu Migration, on the coastal section of Southeast Africa, a mixed Bantu community developed through contact with Muslim Arab and Persian traders, leading to the development of the mixed Arab, Persian, and African Swahili City States. The Swahili culture that emerged from these exchanges evinces many Arab and Islamic influences not seen in traditional Bantu culture, as do the many Afro-Arab members of the Bantu Swahili people. With its original speech community centered on the coastal parts of Tanzania (particularly Zanzibar) and Kenya—a seaboard referred to as the Swahili Coast—the Bantu Swahili language contains many Arabic loan-words as a consequence of these interactions. The earliest Bantu inhabitants of the Southeast coast of Kenya and Tanzania encountered by these later Arab and Persian settlers have been variously identified with the trading settlements of Rhapta, Azania, and Menouthias referenced in early Greek and Chinese writings from 50 AD to 500 AD, ultimately giving rise to the name for Tanzania. These early writings perhaps document the first wave of Bantu settlers to reach Southeast Africa during their migration. Historically, the Swahili people could be found as far north as northern Kenya and as far south as the Ruvuma River in Mozambique. Arab geographers referred to the Swahili coast as the land of the zanj (blacks).
Although once believed to be the descendants of Persian colonists, the ancient Swahili are now recognized by most historians, historical linguists, and archaeologists as a Bantu people who sustained important interactions with Muslim merchants beginning in the late 7th and early 8th centuries AD. Medieval Swahili kingdoms are known to have had island trade ports, described by Greek historians as "metropolises", and to have established regular trade routes with the Islamic world and Asia. Ports such as Mombasa, Zanzibar, and Kilwa were known to Chinese sailors under Zheng He and to medieval Islamic geographers such as the Berber traveller Abu Abdullah ibn Battuta. The main Swahili exports were ivory, slaves, and gold. They traded with Arabia, India, Persia, and China. The Portuguese arrived in 1498. On a mission to economically control and Christianize the Swahili coast, the Portuguese attacked Kilwa first in 1505 and other cities later. Because of Swahili resistance, the Portuguese attempt at establishing commercial control was never successful. By the late 17th century, Portuguese authority on the Swahili coast began to diminish. With the help of Omani Arabs, by 1729 the Portuguese presence had been removed. The Swahili coast eventually became part of the Sultanate of Oman. Trade recovered, but it did not regain the levels of the past. Urewe The Urewe culture developed and spread in and around the Lake Victoria region of Africa during the African Iron Age. The culture's earliest dated artifacts are located in the Kagera Region of Tanzania, and it extended as far west as the Kivu region of the Democratic Republic of the Congo, as far east as the Nyanza and Western provinces of Kenya, and north into Uganda, Rwanda, and Burundi. Sites from the Urewe culture date from the Early Iron Age, from the 5th century BC to the 6th century AD. The origins of the Urewe culture lie ultimately in the Bantu expansion originating in Cameroon. Research into early Iron Age civilizations in Sub-Saharan Africa has been undertaken concurrently with linguistic studies of the Bantu expansion. The Urewe culture may correspond to the Eastern subfamily of Bantu languages, spoken by the descendants of the first wave of Bantu peoples to settle East Africa. At first sight, Urewe seems to be a fully developed civilization, recognizable through its distinctive, stylish earthenware and its highly technical and sophisticated ironworking techniques. Given our current level of knowledge, neither seems to have developed or altered for nearly 2,000 years. However, minor local variations in the ceramic ware can be observed. Urewe is the name of the site in Kenya brought to prominence through the publication in 1948 of Mary Leakey's archaeological findings, which described the early Iron Age period in the Great Lakes region of Central East Africa around Lake Victoria. Madagascar and Merina Madagascar was apparently first settled by Austronesian speakers from Southeast Asia before the 6th century AD and subsequently by Bantu speakers from the east African mainland in the 6th or 7th century, according to archaeological and linguistic data. The Austronesians introduced banana and rice cultivation, and the Bantu speakers introduced cattle and other farming practices. About the year 1000, Arab and Indian trade settlements were established in northern Madagascar to exploit the Indian Ocean trade. By the 14th century, Islam was introduced on the island by traders.
Madagascar functioned in the East African medieval period as a contact port for the other Swahili seaport city-states such as Sofala, Kilwa, Mombasa, and Zanzibar. Several kingdoms emerged after the 15th century: the Sakalava Kingdom (16th century) on the west coast, the Tsitambala Kingdom (17th century) on the east coast, and Merina (15th century) in the central highlands. By the 19th century, Merina controlled the whole island. In 1500, the Portuguese were the first Europeans on the island, raiding the trading settlements. The British and later the French arrived. During the latter part of the 17th century, Madagascar was a popular transit point for pirates. Radama I (1810–1828) invited Christian missionaries in the early 19th century. Queen Ranavalona I "the Cruel" (1828–1861) banned the practice of Christianity in the kingdom, and an estimated 150,000 Christians perished. Under Radama II (1861–1863), Madagascar took a French orientation, with great commercial concessions given to the French. In 1895, in the second Franco-Hova War, the French invaded Madagascar, taking over Antsiranana (Diego Suarez) and declaring Madagascar a protectorate. Lake Plateau states and empires Between the 14th and 15th centuries, large Southeast African kingdoms and states emerged, such as the Buganda and Karagwe Kingdoms of Uganda and Tanzania. Empire of Kitara By 1000 AD, numerous states had arisen on the Lake Plateau among the Great Lakes of East Africa. Cattle herding, cereal growing, and banana cultivation were the economic mainstays of these states. The Ntusi and Bigo earthworks are representative of one of the first states, the Bunyoro kingdom, which oral tradition stipulates was part of the Empire of Kitara that dominated the whole Lakes region. A Luo ethnic elite, from the Babito clan, ruled over the Bantu-speaking Nyoro people. The society was essentially Nyoro in its culture, based on the evidence from pottery, settlement patterns, and economic specialization. The Babito clan claimed legitimacy by being descended from the Bachwezi clan, who were said to have ruled the Empire of Kitara. Buganda The Buganda kingdom was founded by Kato Kimera around the 14th century AD. Kato Kintu may have migrated to the northwest of Lake Victoria as early as 1000 BC. Buganda was ruled by the kabaka with a bataka composed of the clan heads. Over time, the kabakas diluted the authority of the bataka, with Buganda becoming a centralized monarchy. By the 16th century, Buganda was engaged in expansion but had a serious rival in Bunyoro. By the 1870s, Buganda was a wealthy nation-state. The kabaka ruled with his Lukiko (council of ministers). Buganda had a naval fleet of a hundred vessels, each manned by thirty men. Buganda supplanted Bunyoro as the most important state in the region. However, by the early 20th century, Buganda became a province of the British Uganda Protectorate. Rwanda Southeast of Bunyoro, near Lake Kivu at the bottom of the western rift, the Kingdom of Rwanda was founded, perhaps during the 17th century. Tutsi (BaTutsi) pastoralists formed the elite, with a king called the mwami. The Hutu (BaHutu) were farmers. Both groups spoke the same language, but there were strict social norms against intermarriage and interaction between them. According to oral tradition, the Kingdom of Rwanda was founded by Mwami Ruganzu II (Ruganzu Ndori) (c. 1600 – 1624), with his capital near Kigali. It took 200 years to attain a truly centralized kingdom under Mwami Kigeli IV (Kigeri Rwabugiri) (1840–1895).
Subjugation of the Hutu proved more difficult than subduing the Tutsi. The last Tutsi chief gave up to Mwami Mutara II (Mutara Rwogera) (1802–1853) in 1852, but the last Hutu holdout was conquered in the 1920s by Mwami Yuhi V (Yuli Musinga) (1896–1931). Burundi South of the Kingdom of Rwanda was the Kingdom of Burundi. It was founded by the Tutsi chief Ntare Rushatsi (c. 1657 – 1705). Like Rwanda, Burundi was built on cattle raised by Tutsi pastoralists, crops from Hutu farmers, conquest, and political innovations. Under Mwami Ntare Rugaamba (c. 1795 – 1852), Burundi pursued an aggressive expansionist policy, one based more on diplomacy than force. Maravi The Maravi claimed descent from Karonga (kalonga), who took that title as king. The Maravi connected Central Africa to the east coastal trade, with Swahili Kilwa. By the 17th century, the Maravi Empire encompassed all the area between Lake Malawi and the mouth of the Zambezi River. The karonga was Mzura, who did much to extend the empire. Mzura made a pact with the Portuguese to establish a 4,000-man army to attack the Shona in return for aid in defeating his rival Lundi, a chief of the Zimba. In 1623, he turned on the Portuguese and assisted the Shona. In 1640, he welcomed back the Portuguese for trade. The Maravi Empire did not long survive the death of Mzura. By the 18th century, it had broken into its previous polities. West Africa Sahelian empires and states Ghana The Ghana Empire may have been an established kingdom as early as the 8th century AD, founded among the Soninke by Dinge Cisse. Ghana was first mentioned by Arab geographer Al-Farazi in the late 8th century. Ghana was inhabited by urban dwellers and rural farmers. The urban dwellers were the administrators of the empire, who were Muslims, and the Ghana (king), who practiced traditional religion. Two towns existed, one where the Muslim administrators and Berber-Arabs lived, which was connected by a stone-paved road to the king's residence. The rural dwellers lived in villages, which joined together into broader polities that pledged loyalty to the Ghana. The Ghana was viewed as divine, and his physical well-being reflected on the whole society. Ghana converted to Islam around 1050, after conquering Aoudaghost. The Ghana Empire grew wealthy by taxing the trans-Saharan trade that linked Tiaret and Sijilmasa to Aoudaghost. Ghana controlled access to the goldfields of Bambouk, southeast of Koumbi Saleh. A percentage of salt and gold going through its territory was taken. The empire was not involved in production. By the 11th century, Ghana was in decline. It was once thought that the sacking of Koumbi Saleh by Berbers under the Almoravid dynasty in 1076 was the cause. This is no longer accepted. Several alternative explanations are cited. One important reason is the transfer of the gold trade east to the Niger River and the Taghaza Trail, and Ghana's consequent economic decline. Another reason cited is political instability through rivalry among the different hereditary polities. The empire came to an end in 1230, when Takrur in northern Senegal took over the capital. Mali The Mali Empire began in the 13th century AD, when a Mande (Mandingo) leader, Sundiata (Lord Lion) of the Keita clan, defeated Soumaoro Kanté, king of the Sosso or southern Soninke, at the Battle of Kirina in c. 1235. 
Sundiata continued his conquest from the fertile forests and Niger Valley, east to the Niger Bend, north into the Sahara, and west to the Atlantic Ocean, absorbing the remains of the Ghana Empire. Sundiata took on the title of mansa. He established the capital of his empire at Niani. Although the salt and gold trade continued to be important to the Mali Empire, agriculture and pastoralism were also critical. The growing of sorghum, millet, and rice was a vital function. On the northern borders of the Sahel, grazing cattle, sheep, goats, and camels were major activities. Mande society was organized around the village and land. A cluster of villages was called a kafu, ruled by a farma. The farma paid tribute to the mansa. A dedicated army of elite cavalry and infantry maintained order, commanded by the royal court. A formidable force could be raised from tributary regions, if necessary. Conversion to Islam was a gradual process. The power of the mansa depended on upholding traditional beliefs and a spiritual foundation of power. Sundiata initially kept Islam at bay. Later mansas were devout Muslims but still acknowledged traditional deities and took part in traditional rituals and festivals, which were important to the Mande. Islam became a court religion under Sundiata's son Uli I (1255–1270). Mansa Uli made a pilgrimage to Mecca, becoming recognized within the Muslim world. The court was staffed with literate Muslims as secretaries and accountants. The Muslim traveller Ibn Battuta left vivid descriptions of the empire. Mali reached the peak of its power and extent in the 14th century, when Mansa Musa (1312–1337) made his famous hajj to Mecca with 500 slaves, each holding a bar of gold worth 500 mitqals. Mansa Musa's hajj devalued gold in Mamluk Egypt for a decade. He made a great impression on the Muslim and European worlds. He invited scholars and architects like Ishal al-Tuedjin (al-Sahili) to further integrate Mali into the Islamic world. The Mali Empire saw an expansion of learning and literacy. In 1285, Sakura, a freed slave, usurped the throne. This mansa drove the Tuareg out of Timbuktu and established it as a center of learning and commerce. The book trade increased, and book copying became a very respectable and profitable profession. Timbuktu and Djenné became important centers of learning within the Islamic world. After the reign of Mansa Suleyman (1341–1360), Mali began its spiral downward. Mossi cavalry raided the exposed southern border. Tuareg harassed the northern border in order to retake Timbuktu. Fulani (Fulbe) eroded Mali's authority in the west by establishing the independent Imamate of Futa Toro, a successor to the kingdom of Takrur. Serer and Wolof alliances were broken. In 1545–1546, the Songhai Empire took Niani. After 1599, the empire lost the Bambouk goldfields and disintegrated into petty polities. Songhai The Songhai people are descended from fishermen on the Middle Niger River. They established their capital at Kukiya in the 9th century AD and at Gao in the 12th century. The Songhai speak a Nilo-Saharan language. Sonni Ali, a Songhai, began his conquest by capturing Timbuktu in 1468 from the Tuareg. He extended the empire to the north, deep into the desert, pushed the Mossi further south of the Niger, and expanded southwest to Djenne. His army consisted of cavalry and a fleet of canoes. Sonni Ali was not a Muslim, and he was portrayed negatively by Berber-Arab scholars, especially for attacking Muslim Timbuktu.
After his death in 1492, his heirs were deposed by General Muhammad Ture, a Muslim of Soninke origins. Muhammad Ture (1493–1528) founded the Askiya Dynasty, askiya being the title of the king. He consolidated the conquests of Sonni Ali. Islam was used to extend his authority by declaring jihad on the Mossi, reviving the trans-Saharan trade, and having the Abbasid "shadow" caliph in Cairo declare him caliph of Sudan. He established Timbuktu as a great center of Islamic learning. Muhammad Ture expanded the empire by pushing the Tuareg north, capturing Aïr in the east, and capturing salt-producing Taghaza. He brought the Hausa states into the Songhay trading network. He further centralized the administration of the empire by selecting administrators from loyal servants and families and assigning them to conquered territories. They were responsible for raising local militias. Centralization made Songhay very stable, even during dynastic disputes. Leo Africanus left vivid descriptions of the empire under Askiya Muhammad. Askiya Muhammad was deposed by his son in 1528. After much rivalry, Muhammad Ture's last son, Askiya Daoud (1549–1582), assumed the throne. In 1591, Morocco invaded the Songhai Empire under Ahmad al-Mansur of the Saadi Dynasty in order to secure the goldfields of the Sahel. At the Battle of Tondibi, the Songhai army was defeated. The Moroccans captured Djenne, Gao, and Timbuktu, but they were unable to secure the whole region. Askiya Nuhu and the Songhay army regrouped at Dendi in the heart of Songhai territory, where a spirited guerrilla resistance sapped the resources of the Moroccans, who were dependent upon constant resupply from Morocco. Songhai split into several states during the 17th century. Morocco found its venture unprofitable. The gold trade had been diverted to Europeans on the coast. Most of the trans-Saharan trade was now diverted east to Bornu. Expensive equipment purchased with gold had to be sent across the Sahara, an unsustainable scenario. The Moroccans who remained married into the population and were referred to as Arma or Ruma. They established themselves at Timbuktu as a military caste with various fiefs, independent from Morocco. Amid the chaos, other groups began to assert themselves, including the Fulani of Futa Toro, who encroached from the west. The Bambara Empire, one of the states that broke from Songhai, sacked Gao. In 1737, the Tuareg massacred the Arma. Sokoto Caliphate The Fulani were a migratory people. They moved from Mauritania and settled in Futa Toro, Futa Djallon, and subsequently throughout the rest of West Africa. By the 14th century CE, they had converted to Islam. During the 16th century, they established themselves at Macina in southern Mali. During the 1670s, they declared jihads on non-Muslims. Several states were formed from these jihadist wars, at Futa Toro, Futa Djallon, Macina, Oualia, and Bundu. The most important of these states was the Sokoto Caliphate or Fulani Empire. In the city-state of Gobir, Usman dan Fodio (1754–1817) accused the Hausa leadership of practicing an impure version of Islam and of being morally corrupt. In 1804, he launched the Fulani War as a jihad among a population that was restless about high taxes and discontented with its leaders. Jihad fever swept northern Nigeria, with strong support among both the Fulani and the Hausa. Usman created an empire that included parts of northern Nigeria, Benin, and Cameroon, with Sokoto as its capital.
He retired to teach and write and handed the empire to his son Muhammed Bello. The Sokoto Caliphate lasted until 1903 when the British conquered northern Nigeria. Forest empires and states Akan kingdoms and emergence of Asante Empire The
cataracts that, based on pictorial representations, exerted an influence over nearby chiefdoms and ruled over Upper Egypt. Ta-Seti traded as far as Syro-Palestine, as well as with Egypt. Ta-Seti exported gold, copper, ostrich feathers, ebony, and ivory to the Old Kingdom. By the 32nd century BC, Ta-Seti was in decline. After the unification of Egypt by Narmer in 3,100 BC, Ta-Seti was invaded by the Pharaoh Hor-Aha of the First Dynasty, who destroyed the final remnants of the kingdom. Ta-Seti is affiliated with the A-Group Culture known to archaeology. Small sacral kingdoms continued to dot the Nubian portion of the Nile for centuries after 3,000 BC. Around the latter part of the third millennium BC, there was further consolidation of the sacral kingdoms. Two kingdoms in particular emerged: the Sai kingdom, immediately south of Egypt, and the Kingdom of Kerma at the third cataract. Sometime around the 18th century BC, the Kingdom of Kerma conquered the Kingdom of Sai, becoming a serious rival to Egypt. Kerma occupied a territory from the first cataract to the confluence of the Blue Nile, White Nile, and Atbarah River. About 1,575 to 1,550 BC, during the latter part of the Seventeenth Dynasty, the Kingdom of Kerma invaded Egypt, allying itself with the Hyksos invasion of Egypt. Egypt eventually re-energized under the Eighteenth Dynasty and conquered the Kingdom of Kerma, or Kush, ruling it for almost 500 years. The Kushites were Egyptianized during this period. By 1100 BC, the Egyptians had withdrawn from Kush. The region regained independence and reasserted its culture. Kush built a new religion around Amun and made Napata its spiritual center. In 730 BC, the Kingdom of Kush invaded Egypt, taking over Thebes and beginning the Nubian Empire. The empire extended from Palestine to the confluence of the Blue Nile, the White Nile, and the Atbara River. In 664 BC, the Kushites were expelled from Egypt by iron-wielding Assyrians. Later, the administrative capital was moved from Napata to Meroe, developing a new Nubian culture. Initially, Meroites were highly Egyptianized, but they subsequently began to take on distinctive features. Nubia became a center of iron-making and cotton cloth manufacturing. Egyptian writing was replaced by the Meroitic alphabet. The lion god Apedemak was added to the Egyptian pantheon of gods. Trade links to the Red Sea increased, linking Nubia with Mediterranean Greece. Its architecture and art diversified, with pictures of lions, ostriches, giraffes, and elephants. Eventually, with the rise of Aksum, Nubia's trade links were broken and it suffered environmental degradation from the tree cutting required for iron production. In 350 AD, the Aksumite king Ezana brought Meroe to an end. Carthage The Egyptians referred to the people west of the Nile, ancestral to the Berbers, as Libyans. The Libyans were agriculturalists like the Mauri of Morocco and the Numidians of central and eastern Algeria and Tunisia. They were also nomadic, having the horse, and occupied the arid pastures and desert, like the Gaetuli. Berber desert nomads were typically in conflict with Berber coastal agriculturalists. The Phoenicians were Mediterranean seamen in constant search for valuable metals such as copper, gold, tin, and lead. They began to populate the North African coast with settlements—trading and mixing with the native Berber population. In 814 BC, Phoenicians from Tyre established the city of Carthage.
By 600 BC, Carthage had become a major trading entity and power in the Mediterranean, largely through trade with tropical Africa. Carthage's prosperity fostered the growth of the Berber kingdoms, Numidia and Mauretania. Around 500 BC, Carthage provided a strong impetus for trade with Sub-Saharan Africa. Berber middlemen, who had maintained contacts with Sub-Saharan Africa since the desert had desiccated, utilized pack animals to transfer products from oasis to oasis. Danger lurked from the Garamantes of the Fezzan, who raided caravans. Salt and metal goods were traded for gold, slaves, beads, and ivory. The Carthaginians were rivals to the Greeks and Romans. Carthage fought the Punic Wars, three wars with Rome: the First Punic War (264 to 241 BC), over Sicily; the Second Punic War (218 to 201 BC), in which Hannibal invaded Europe; and the Third Punic War (149 to 146 BC). Carthage lost the first two wars, and in the third it was destroyed, becoming the Roman province of Africa, with the Berber Kingdom of Numidia assisting Rome. The Roman province of Africa became a major agricultural supplier of wheat, olives, and olive oil to imperial Rome via exorbitant taxation. Two centuries later, Rome brought the Berber kingdoms of Numidia and Mauretania under its authority. In the 420s AD, the Vandals invaded North Africa, and Rome lost her territories; the Berber kingdoms subsequently regained their independence. Christianity gained a foothold in Africa at Alexandria in the 1st century AD and spread to Northwest Africa. By 313 AD, with the Edict of Milan, all of Roman North Africa was Christian. Egyptians adopted Monophysite Christianity and formed the independent Coptic Church. Berbers adopted Donatist Christianity. Both groups refused to accept the authority of the Roman Catholic Church. Role of the Berbers As Carthaginian power grew, its impact on the indigenous population increased dramatically. Berber civilization was already at a stage in which agriculture, manufacturing, trade, and political organization supported several states. Trade links between Carthage and the Berbers in the interior grew, but territorial expansion also resulted in the enslavement or military recruitment of some Berbers and in the extraction of tribute from others. By the early 4th century BC, Berbers, along with Gauls, formed one of the largest elements of the Carthaginian army. The Mercenary War (241–238 BC) was a rebellion instigated by mercenary soldiers of Carthage and their African allies; Berber soldiers joined after going unpaid following the defeat of Carthage in the First Punic War. Berbers succeeded in obtaining control of much of Carthage's North African territory, and they minted coins bearing the name Libyan, used in Greek to describe natives of North Africa. The Carthaginian state declined because of successive defeats by the Romans in the Punic Wars; in 146 BC the city of Carthage was destroyed. As Carthaginian power waned, the influence of Berber leaders in the hinterland grew. By the 2nd century BC, several large but loosely administered Berber kingdoms had emerged. Two of them were established in Numidia, behind the coastal areas controlled by Carthage. West of Numidia lay Mauretania, which extended across the Moulouya River in Morocco to the Atlantic Ocean. The high point of Berber civilization, unequaled until the coming of the Almohad and Almoravid dynasties more than a millennium later, was reached during the reign of Masinissa in the 2nd century BC.
After Masinissa's death in 148 BC, the Berber kingdoms were divided and reunited several times. Masinissa's line survived until 24 AD, when the remaining Berber territory was annexed to the Roman Empire. Macrobia and the Barbari City States Macrobia was an ancient kingdom situated in the Horn of Africa (present-day Somalia); it is mentioned in the 5th century BC. According to Herodotus' account, the Persian Emperor Cambyses II, upon his conquest of Egypt (525 BC), sent ambassadors to Macrobia, bringing luxury gifts for the Macrobian king to entice his submission. The Macrobian ruler, who was elected based at least in part on stature, replied instead with a challenge for his Persian counterpart in the form of an unstrung bow: if the Persians could manage to string it, they would have the right to invade his country; but until then, they should thank the gods that the Macrobians never decided to invade their empire. The Macrobians were a regional power reputed for their advanced architecture and gold wealth, which was so plentiful that they shackled their prisoners in golden chains. After the collapse of Macrobia, several wealthy ancient city-states, such as Opone, Essina, Sarapion, Nikon, Malao, Damo, and Mosylon near Cape Guardafui, emerged between the 1st millennium BC and 500 AD to compete with the Sabaeans, Parthians, and Axumites for the wealthy Indo-Greco-Roman trade, and they flourished along the Somali coast. They developed a lucrative trading network in a region collectively known in the Periplus of the Erythraean Sea as Barbaria. Roman North Africa "Increases in urbanization and in the area under cultivation during Roman rule caused wholesale dislocations of the Berber society, forcing nomad tribes to settle or to move from their traditional rangelands. Sedentary tribes lost their autonomy and connection with the land. Berber opposition to the Roman presence was nearly constant. The Roman emperor Trajan established a frontier in the south by encircling the Aurès and Nemencha mountains and building a line of forts from Vescera (modern Biskra) to Ad Majores (Henchir Besseriani, southeast of Biskra). The defensive line extended at least as far as Castellum Dimmidi (modern Messaâd, southwest of Biskra), Roman Algeria's southernmost fort. Romans settled and developed the area around Sitifis (modern Sétif) in the 2nd century, but farther west the influence of Rome did not extend beyond the coast and principal military roads until much later." The Roman military presence in North Africa remained relatively small, consisting of about 28,000 troops and auxiliaries in Numidia and the two Mauretanian provinces. Starting in the 2nd century AD, these garrisons were manned mostly by local inhabitants. Aside from Carthage, urbanization in North Africa came in part with the establishment of settlements of veterans under the Roman emperors Claudius (reigned 41–54), Nerva (96–98), and Trajan (98–117). In Algeria such settlements included Tipasa, Cuicul or Curculum (modern Djemila, northeast of Sétif), Thamugadi (modern Timgad, southeast of Sétif), and Sitifis (modern Sétif). The prosperity of most towns depended on agriculture. Called the "granary of the empire", North Africa became one of the largest exporters of grain in the empire, shipping to the provinces which did not produce cereals, like Italy and Greece. Other crops included fruit, figs, grapes, and beans. By the 2nd century AD, olive oil rivaled cereals as an export item.
The beginnings of the Roman imperial decline seemed less serious in North Africa than elsewhere. However, uprisings did take place. In 238 AD, landowners rebelled unsuccessfully against imperial fiscal policies. Sporadic tribal revolts in the Mauretanian mountains followed from 253 to 288, during the Crisis of the Third Century. The towns also suffered economic difficulties, and building activity almost ceased. The towns of Roman North Africa had a substantial Jewish population. Some Jews had been deported from Judea or Palestine in the 1st and 2nd centuries AD for rebelling against Roman rule; others had come earlier with Punic settlers. In addition, a number of Berber tribes had converted to Judaism. Christianity arrived in the 2nd century and soon gained converts in the towns and among slaves. More than eighty bishops, some from distant frontier regions of Numidia, attended the Council of Carthage in 256. By the end of the 4th century, the settled areas had become Christianized, and some Berber tribes had converted en masse. A division in the church that came to be known as the Donatist heresy began in 313 among Christians in North Africa. The Donatists stressed the holiness of the church and refused to accept the authority of those who had surrendered the scriptures when they were forbidden under the Emperor Diocletian (reigned 284–305) to administer the sacraments. The Donatists also opposed the involvement of Constantine the Great (reigned 306–337) in church affairs, in contrast to the majority of Christians, who welcomed official imperial recognition. The occasionally violent Donatist controversy has been characterized as a struggle between opponents and supporters of the Roman system. The most articulate North African critic of the Donatist position, which came to be called a heresy, was Augustine, bishop of Hippo Regius. Augustine maintained that the unworthiness of a minister did not affect the validity of the sacraments because their true minister was Jesus Christ. In his sermons and books Augustine, who is considered a leading exponent of Christian dogma, evolved a theory of the right of orthodox Christian rulers to use force against schismatics and heretics. Although the dispute was resolved by a decision of an imperial commission in Carthage in 411, Donatist communities continued to exist as late as the 6th century. A decline in trade weakened Roman control. Independent kingdoms emerged in mountainous and desert areas, towns were overrun, and Berbers, who had previously been pushed to the edges of the Roman Empire, returned. During the Vandalic War, Belisarius, general of the Byzantine emperor Justinian I based in Constantinople, landed in North Africa in 533 with 16,000 men and within a year destroyed the Vandal Kingdom. Local opposition delayed full Byzantine control of the region for twelve years, however, and when imperial control came, it was but a shadow of the control exercised by Rome. Although an impressive series of fortifications was built, Byzantine rule was compromised by official corruption, incompetence, military weakness, and lack of concern in Constantinople for African affairs, which made it an easy target for the Arabs during the Early Muslim conquests. As a result, many rural areas reverted to Berber rule. Aksum The earliest state in Eritrea and northern Ethiopia, Dʿmt, dates from around the 8th and 7th centuries BC. D'mt traded through the Red Sea with Egypt and the Mediterranean, providing frankincense.
Between the 5th and 3rd centuries BC, D'mt declined, and several successor states took its place. Later there was greater trade with South Arabia, mainly with the port of Saba. Adulis became an important commercial center in the Ethiopian Highlands. The interaction of the peoples in the two regions, the southern Arabia Sabaeans and the northern Ethiopians, resulted in the Ge'ez culture and language and eventual development of the Ge'ez script. Trade links increased and expanded from the Red Sea to the Mediterranean, with Egypt, Israel, Phoenicia, Greece, and Rome, to the Black Sea, and to Persia, India, and China. Aksum was known throughout those lands. By the 5th century BC, the region was very prosperous, exporting ivory, hippopotamus hides, gold dust, spices, and live elephants. It imported silver, gold, olive oil, and wine. Aksum manufactured glass crystal, brass, and copper for export. A powerful Aksum emerged, unifying parts of eastern Sudan, northern Ethiopia (Tigre), and Eritrea. Its kings built stone palatial buildings and were buried under megalithic monuments. By 300 AD, Aksum was minting its own coins in silver and gold. In 331 AD, King Ezana (320–350 AD) was converted to Miaphysite Christianity, which holds that Christ has one united divine-human nature, reportedly by Frumentius and Aedesius, who had become stranded on the Red Sea coast. Some scholars believe the process was more complex and gradual than a simple conversion. Around 350, when Ezana sacked Meroe, the Syrian monastic tradition took root within the Ethiopian church. In the 6th century Aksum was powerful enough to add Saba on the Arabian peninsula to her empire. At the end of the 6th century, the Sasanian Empire pushed Aksum out of the peninsula. With the spread of Islam through Western Asia and Northern Africa, Aksum's trading networks in the Mediterranean faltered. The Red Sea trade diminished as it was diverted to the Persian Gulf and dominated by Arabs, causing Aksum to decline. By 800 AD, the capital was moved south into the interior highlands, and Aksum was much diminished. West Africa In the western Sahel the rise of settled communities occurred largely as a result of the domestication of millet and of sorghum. Archaeology points to sizable urban populations in West Africa beginning in the 2nd millennium BC. Symbiotic trade relations developed before the trans-Saharan trade, in response to the opportunities afforded by north–south diversity in ecosystems across deserts, grasslands, and forests. The agriculturists received salt from the desert nomads. The desert nomads acquired meat and other foods from pastoralists and farmers of the grasslands and from fishermen on the Niger River. The forest-dwellers provided furs and meat. Dhar Tichitt and Oualata in present-day Mauritania figure prominently among the early urban centers, dated to 2000 BC. About 500 stone settlements litter the region in the former savannah of the Sahara. Their inhabitants fished and grew millet. Augustin Holl has suggested that the Soninke of the Mandé peoples were likely responsible for constructing such settlements. Around 300 BC the region became more desiccated and the settlements began to decline, most likely relocating to Koumbi Saleh. Architectural evidence and the comparison of pottery styles suggest that Dhar Tichitt was related to the subsequent Ghana Empire. Djenné-Djenno (in present-day Mali) was settled around 300 BC, and the town grew to house a sizable Iron Age population, as evidenced by crowded cemeteries.
Living structures were made of sun-dried mud. By 250 BC Djenné-Djenno had become a large, thriving market town. Towns similar to Djenné-Djenno also developed at the site of Dia, also in Mali along the Niger River, from around 900 BC. Farther south, in central Nigeria, around 1500 BC, the Nok culture developed on the Jos Plateau. It was a highly centralized community. The Nok people produced lifelike representations in terracotta, including human heads and human figures, elephants, and other animals. By 500 BC they were smelting iron. By 200 AD the Nok culture had vanished. Based on stylistic similarities with the Nok terracottas, the bronze figurines of the Yoruba kingdom of Ife and those of the Bini kingdom of Benin are now believed to be continuations of the traditions of the earlier Nok culture. Bantu expansion The Bantu expansion was a significant movement of people in African history and in the settling of the continent. People speaking Bantu languages (a branch of the Niger–Congo family) began in the second millennium BC to spread from Cameroon eastward to the Great Lakes region. In the first millennium BC, Bantu languages spread from the Great Lakes to southern and east Africa. One early movement headed south to the upper Zambezi valley in the 2nd century BC. Then Bantu-speakers pushed westward to the savannahs of present-day Angola and eastward into Malawi, Zambia, and Zimbabwe in the 1st century AD. The second thrust from the Great Lakes was eastward, 2,000 years ago, expanding to the Indian Ocean coast, Kenya and Tanzania. The eastern group eventually met the southern migrants from the Great Lakes in Malawi, Zambia, and Zimbabwe. Both groups continued southward, with eastern groups continuing to Mozambique and reaching Maputo in the 2nd century AD, and expanding as far as Durban. By the later first millennium AD, the expansion had reached the Great Kei River in present-day South Africa. Sorghum, a major Bantu crop, could not thrive under the winter rainfall of Namibia and the western Cape. Khoisan people inhabited the remaining parts of southern Africa. Medieval and Early Modern (6th to 18th centuries) Sao civilization The Sao civilization flourished from about the sixth century BC to as late as the 16th century AD in Central Africa. The Sao lived by the Chari River south of Lake Chad in territory that later became part of present-day Cameroon and Chad. They are the earliest people to have left clear traces of their presence in the territory of modern Cameroon. Today, several ethnic groups of northern Cameroon and southern Chad – but particularly the Sara people – claim descent from the civilization of the Sao. Sao artifacts show that they were skilled workers in bronze, copper, and iron. Finds include bronze sculptures and terracotta statues of human and animal figures, coins, funerary urns, household utensils, jewelry, highly decorated pottery, and spears. The largest Sao archaeological finds have occurred south of Lake Chad. Kanem Empire The Kanem Empire was centered in the Chad Basin. It was known as the Kanem Empire from the 9th century AD onward and lasted as the independent kingdom of Bornu until 1893. At its height it encompassed an area covering not only much of Chad, but also parts of modern southern Libya, eastern Niger, northeastern Nigeria, northern Cameroon, parts of South Sudan and the Central African Republic. The history of the Empire is mainly known from the Royal Chronicle or Girgam discovered in 1851 by the German traveller Heinrich Barth.
Kanem rose in the 8th century in the region to the north and east of Lake Chad. The Kanem empire went into decline, shrank, and in the 14th century was defeated by Bulala invaders from the Lake Fitri region. Around the 9th century AD, the central Sudanic Empire of Kanem, with its capital at Njimi, was founded by the Kanuri-speaking nomads. Kanem arose by engaging in the trans-Saharan trade. It exchanged slaves captured by raiding the south for horses from North Africa, which in turn aided in the acquisition of slaves. By the late 11th century, the Islamic Sayfawa (Saifawa) dynasty was founded by Humai (Hummay) ibn Salamna. The Sayfawa Dynasty ruled for 771 years, making it one of the longest-lasting dynasties in human history. In addition to trade, taxation of local farms around Kanem became a source of state income. Kanem reached its peak under Mai (king) Dunama Dibalemi ibn Salma (1210–1248). The empire reportedly was able to field 40,000 cavalry, and it extended from Fezzan in the north to the Sao state in the south. Islam became firmly entrenched in the empire. Pilgrimages to Mecca were common; Cairo had hostels set aside specifically for pilgrims from Kanem. Bornu Empire The Kanuri people led by the Sayfuwa migrated to the west and south of the lake, where they established the Bornu Empire. By the late 16th century the Bornu empire had expanded and recaptured the parts of Kanem that had been conquered by the Bulala. Satellite states of Bornu included the Damagaram in the west and Baguirmi to the southeast of Lake Chad. Around 1400, the Sayfawa Dynasty moved its capital to Bornu, a tributary state southwest of Lake Chad, with a new capital at Birni Ngarzagamu. Overgrazing had caused the pastures of Kanem to become too dry. In addition, political rivalry from the Bulala clan was becoming intense. Moving to Bornu better situated the empire to exploit the trans-Saharan trade and to widen its network in that trade. Links to the Hausa states were also established, providing horses and salt from Bilma for Bonoman gold. Mai Ali Gazi ibn Dunama (c. 1475 – 1503) defeated the Bulala, reestablishing complete control of Kanem. During the early 16th century, the Sayfawa Dynasty solidified its hold on the Bornu population after much rebellion. In the latter half of the 16th century, Mai Idris Alooma modernized the empire's military, in contrast to the Songhai Empire. Turkish mercenaries were used to train the military. The Sayfawa Dynasty were the first monarchs south of the Sahara to import firearms. The empire controlled all of the Sahel from the borders of Darfur in the east to Hausaland to the west. A friendly relationship was established with the Ottoman Empire via Tripoli. The Mai exchanged gifts with the Ottoman sultan. Not much is known about Bornu during the 17th and 18th centuries. During the 18th century, it became a center of Islamic learning. However, Bornu's army became outdated by not importing new arms, and Kanem–Bornu had also begun its decline. The power of the mai was undermined by droughts and famine that were becoming more intense, internal rebellion in the pastoralist north, growing Hausa power, and the importation of firearms which made warfare more bloody. By 1841, the last mai was deposed, bringing to an end the long-lived Sayfawa Dynasty. In its place, the al-Kanemi dynasty of the shehu rose to power.
Shilluk Kingdom The Shilluk Kingdom was centered in South Sudan from the 15th century, along a strip of land on the western bank of the White Nile from Lake No to about 12° north latitude. The capital and royal residence was in the town of Fashoda. The kingdom was founded during the mid-15th century AD by its first ruler, Nyikang. During the 19th century, the Shilluk Kingdom faced decline following military assaults from the Ottoman Empire and later British and Sudanese colonization in Anglo-Egyptian Sudan. Baguirmi Kingdom The Kingdom of Baguirmi existed as an independent state during the 16th and 17th centuries southeast of Lake Chad in what is now the country of Chad. Baguirmi emerged to the southeast of the Kanem–Bornu Empire. The kingdom's first ruler was Mbang Birni Besse. Later in his reign, the Bornu Empire conquered the kingdom and made it a tributary. Wadai Empire The Wadai Empire was centered on Chad and the Central African Republic from the 17th century. The Tunjur people founded the Wadai Kingdom to the east of Bornu in the 16th century. In the 17th century there was a revolt of the Maba people, who established a Muslim dynasty. At first Wadai paid tribute to Bornu and Darfur, but by the 18th century Wadai was fully independent and had become an aggressor against its neighbors. To the west of Bornu, by the 15th century the Kingdom of Kano had become the most powerful of the Hausa Kingdoms, in an unstable truce with the Kingdom of Katsina to the north. Both were absorbed into the Sokoto Caliphate during the Fulani Jihad of 1805, which threatened Bornu itself. Luba Empire Sometime between 1300 and 1400 AD, Kongolo Mwamba (Nkongolo) from the Balopwe clan unified the various Luba peoples near Lake Kisale. He founded the Kongolo Dynasty, which was later ousted by Kalala Ilunga. Kalala expanded the kingdom west of Lake Kisale. A new centralized political system arose, built around spiritual kings with a court council of head governors and sub-heads reaching all the way down to village heads. The king was the direct communicator with the ancestral spirits and was chosen by them. Conquered states were integrated into the system and represented at court with their titles. The authority of the king resided in his spiritual power rather than his military authority. The army was relatively small. The Luba state was able to control regional trade and collect tribute for redistribution. Numerous offshoot states were formed with founders claiming descent from the Luba. The Luba political system spread throughout Central Africa, southern Uganda, Rwanda, Burundi, Malawi, Zambia, Zimbabwe, and the western Congo. Two major empires claiming Luba descent were the Lunda Empire and Maravi Empire. The Bemba people and Basimba people of northern Zambia were descended from Luba migrants who arrived in Zambia during the 17th century. Lunda Empire In the 1450s, Ilunga Tshibinda, a Luba from the royal family, married the Lunda queen Rweej and united all Lunda peoples. Their son Luseeng expanded the kingdom. His son Naweej expanded the empire further and is known as the first Lunda emperor, with the title of Lord of Vipers. The Luba political system was retained, and conquered peoples were integrated into the system. The emperor assigned a royal adviser and a tax collector to each conquered state. Numerous states claimed descent from the Lunda. The Imbangala of inland Angola claimed descent from a founder, Kinguri, brother of Queen Rweej, who could not tolerate the rule of Tshibinda.
Kinguri became the title of kings of states founded by Queen Rweej's brother. The Luena (Lwena) and Lozi (Luyani) in Zambia also claim descent from Kinguri. During the 17th century, a Lunda chief and warrior called Mwata Kazembe set up an Eastern Lunda kingdom in the valley of the Luapula River. The Lunda's western expansion also saw claims of descent by the Yaka and the Pende. The Lunda linked Central Africa with the western coast trade. The kingdom of Lunda came to an end in the 19th century when it was invaded by the Chokwe, who were armed with guns. Kingdom of Kongo By the 15th century AD, the farming Bakongo people (ba being the plural prefix) were unified as the Kingdom of Kongo under a ruler called the manikongo, residing in the fertile Pool Malebo area on the lower Congo River. The capital was M'banza-Kongo. With superior organization, they were able to conquer their neighbors and extract tribute. They were experts in metalwork, pottery, and weaving raffia cloth. They stimulated interregional trade via a tribute system controlled by the manikongo. Later, maize (corn) and cassava (manioc) would be introduced to the region via trade with the Portuguese at their ports at Luanda and Benguela. The maize and cassava would result in population growth in the region and other parts of Africa, replacing millet as a main staple. By the 16th century, the manikongo held authority from the Atlantic in the west to the Kwango River in the east. Each territory was assigned a mani-mpembe (provincial governor) by the manikongo. In 1506, Afonso I (1506–1542), a Christian, took over the throne. Slave trading increased with Afonso's wars of conquest. Around 1568–1569, the Jaga invaded Kongo, laying waste to the kingdom and forcing the manikongo into exile. In 1574, Manikongo Álvaro I was reinstated with the help of Portuguese mercenaries. During the latter part of the 1660s, the Portuguese tried to gain control of Kongo. Manikongo António I (1661–1665) and his Kongolese army of 5,000 were destroyed by an Afro-Portuguese army at the Battle of Mbwila. The empire dissolved into petty polities, fighting among each other for war captives to sell into slavery. Kongo gained captives from the Kingdom of Ndongo in wars of conquest. Ndongo was ruled by the ngola. Ndongo would also engage in slave trading with the Portuguese, with São Tomé being a transit point to Brazil. The kingdom was not as welcoming as Kongo; it viewed the Portuguese with great suspicion and as an enemy. The Portuguese in the latter part of the 16th century tried to gain control of Ndongo but were defeated by the Mbundu. Ndongo experienced depopulation from slave raiding. The leaders established another state at Matamba, affiliated with Queen Nzinga, who put up a strong resistance to the Portuguese until coming to terms with them. The Portuguese settled along the coast as trade dealers, not venturing on conquest of the interior. Slavery wreaked havoc in the interior, with states initiating wars of conquest for captives. The Imbangala formed the slave-raiding state of Kasanje, a major source of slaves during the 17th and 18th centuries. Horn of Africa Somalia The birth of Islam on the opposite side of the Red Sea from Somalia's coast meant that Somali merchants and sailors living on the Arabian Peninsula gradually came under the influence of the new religion through their converted Arab Muslim trading partners.
With the migration of Muslim families from the Islamic world to Somalia in the early centuries of Islam, and the peaceful conversion of the Somali population by Somali Muslim scholars in the following centuries, the ancient city-states eventually transformed into Islamic Mogadishu, Berbera, Zeila, Barawa and Merka, which were part of the Berber (the medieval Arab term for the ancestors of the modern Somalis) civilization. The city of Mogadishu came to be known as the City of Islam and controlled the East African gold trade for several centuries. During this period, sultanates such as the Ajuran Empire and the Sultanate of Mogadishu, and republics like Barawa, Merca and Hobyo and their respective ports flourished and had a lucrative foreign commerce with ships sailing to and coming from Arabia, India, Venice, Persia, Egypt, Portugal and as far away as China. Vasco da Gama, who passed by Mogadishu in the 15th century, noted that it was a large city with houses four or five stories high and big palaces in its centre, in addition to many mosques with cylindrical minarets. In the 16th century, Duarte Barbosa noted that many ships from the Kingdom of Cambaya in modern-day India sailed to Mogadishu with cloth and spices, for which they in return received gold, wax, and ivory. Barbosa also highlighted the abundance of meat, wheat, barley, horses, and fruit in the coastal markets, which generated enormous wealth for the merchants. Mogadishu, the center of a thriving weaving industry known as toob benadir (specialized for the markets in Egypt and Syria), together with Merca and Barawa, served as a transit stop for Swahili merchants from Mombasa and Malindi and for the gold trade from Kilwa. Jewish merchants from the Strait of Hormuz brought their Indian textiles and fruit to the Somali coast to exchange for grain and wood. Trading relations were established with Malacca in the 15th century, with cloth, ambergris, and porcelain being the main commodities of the trade. Giraffes, zebras, and incense were exported to the Ming Empire of China, which established Somali merchants as leaders in the commerce between Asia and Africa and influenced the Chinese language with borrowings from the Somali language in the process. Hindu merchants from Surat and southeast African merchants from Pate, seeking to bypass both the Portuguese blockade and Omani meddling, used the Somali ports of Merca and Barawa (which were out of the two powers' jurisdiction) to conduct their trade in safety and without any problems. Ethiopia The Zagwe dynasty ruled many parts of modern Ethiopia and Eritrea from approximately 1137 to 1270. The name of the dynasty comes from the Cushitic-speaking Agaw of northern Ethiopia. From 1270 AD onward, for many centuries, the Solomonic dynasty ruled the Ethiopian Empire. In the early 15th century Ethiopia sought to make diplomatic contact with European kingdoms for the first time since Aksumite times. A letter from King Henry IV of England to the Emperor of Abyssinia survives. In 1428, the Emperor Yeshaq I sent two emissaries to Alfonso V of Aragon, who sent return emissaries that failed to complete the return journey. The first continuous relations with a European country began in 1508 with the Kingdom of Portugal under Emperor Lebna Dengel, who had just inherited the throne from his father.
This proved to be an important development, for when the empire was subjected to the attacks of the Adal general and imam, Ahmad ibn Ibrahim al-Ghazi (called "Grañ", or "the Left-handed"), Portugal assisted the Ethiopian emperor by sending weapons and four hundred men, who helped his son Gelawdewos defeat Ahmad and re-establish his rule. This Abyssinian–Adal War was also one of the first proxy wars in the region, as the Ottoman Empire and Portugal took sides in the conflict. When Emperor Susenyos converted to Roman Catholicism in 1624, years of revolt and civil unrest followed, resulting in thousands of deaths. The Jesuit missionaries had offended the Orthodox faith of the local Ethiopians, and on June 25, 1632, Susenyos's son, Emperor Fasilides, declared the state religion to again be Ethiopian Orthodox Christianity and expelled the Jesuit missionaries and other Europeans. North Africa Maghreb By 711 AD, the Umayyad Caliphate had conquered all of North Africa. By the 10th century, the majority of the population of North Africa was Muslim. By the 9th century AD, the unity brought about by the Islamic conquest of North Africa and the expansion of Islamic culture came to an end. Conflict arose as to who should be the successor of the prophet. The Umayyads had initially taken control of the Caliphate, with their capital at Damascus. Later, the Abbasids had taken control, moving the capital to Baghdad. The Berber people, being independent in spirit and hostile to outside interference in their affairs and to Arab exclusivity in orthodox Islam, adopted Shi'ite and Kharijite Islam, both considered unorthodox and hostile to the authority of the Abbasid Caliphate. Numerous Kharijite kingdoms rose and fell during the 8th and 9th centuries, asserting their independence from Baghdad. In the early 10th century, Shi'ite groups from Syria, claiming descent from Muhammad's daughter Fatimah, founded the Fatimid Dynasty in the Maghreb. By 950, they had conquered all of the Maghreb and by 969 all of Egypt. They immediately broke away from Baghdad. In an attempt to bring about a purer form of Islam among the Sanhaja Berbers, Abdallah ibn Yasin founded the Almoravid movement in present-day Mauritania and Western Sahara. The Sanhaja Berbers, like the Soninke, practiced an indigenous religion alongside Islam. Abdallah ibn Yasin found ready converts in the Lamtuna Sanhaja, who were dominated by the Soninke in the south and the Zenata Berbers in the north. By the 1040s, all of the Lamtuna were converted to the Almoravid movement. With the help of Yahya ibn Umar and his brother Abu Bakr ibn Umar, the sons of the Lamtuna chief, the Almoravids created an empire extending from the Sahel to the Mediterranean. After the deaths of Abdallah ibn Yasin and Yahya ibn Umar, Abu Bakr split the empire in half, between himself and Yusuf ibn Tashfin, because it was too big to be ruled by one individual. Abu Bakr took the south to continue fighting the Soninke, and Yusuf ibn Tashfin took the north, expanding it to southern Spain. The death of Abu Bakr in 1087 saw a breakdown of unity and increased military dissension in the south. This caused a re-expansion of the Soninke. The Almoravids were once held responsible for bringing down the Ghana Empire in 1076, but this view is no longer credited. During the 10th through 13th centuries, there was a large-scale movement of bedouins out of the Arabian Peninsula. About 1050, a quarter of a million Arab nomads from Egypt moved into the Maghreb.
Those following the northern coast were referred to as Banu Hilal. Those going south of the Atlas Mountains were the Banu Sulaym. This movement spread the use of the Arabic language and hastened the decline of the Berber language and the Arabisation of North Africa. Later an Arabised Berber group, the Hawwara, went south to Nubia via Egypt. In the 1140s, Abd al-Mu'min declared jihad on the Almoravids, charging them with decadence and corruption. He united the northern Berbers against the Almoravids, overthrowing them and forming the Almohad Empire. During this period, the Maghreb became thoroughly Islamised and saw the spread of literacy, the development of algebra, and the use of the number zero and decimals. By the 13th century, the Almohad states had split into three rival states. Muslim states were largely extinguished in the Iberian Peninsula by the Christian kingdoms of Castile, Aragon, and Portugal. Around 1415, Portugal engaged in a reconquista of North Africa by capturing Ceuta, and in later centuries Spain and Portugal acquired other ports on the North African coast. In 1492, at the end of the Granada War, Spain defeated Muslims in the Emirate of Granada, effectively ending eight centuries of Muslim domination in southern Iberia. The pashas of Tripoli traded horses, firearms, and armor via Fez with the sultans of the Bornu Empire for slaves. In the 16th century, the Saadis, an Arab nomad tribe that claimed descent from Muhammad's daughter, conquered and united Morocco. They prevented the Ottoman Empire from reaching the Atlantic and expelled Portugal from Morocco's western coast. Ahmad al-Mansur brought the state to the height of its power. He invaded Songhai in 1591 to control the gold trade, which had been diverted to the western coast of Africa for European ships and to the east, toward Tunis. Morocco's hold on Songhai diminished in the 17th century. In 1603, after Ahmad's death, the kingdom split into the two sultanates of Fes and Marrakesh. Later it was reunited by Moulay al-Rashid, founder of the Alaouite Dynasty. His brother and successor, Ismail ibn Sharif (1672–1727), strengthened the unity of the country by importing slaves from the Sudan to build up the military. Nile Valley Egypt In 642 AD, the Rashidun Caliphate conquered Byzantine Egypt. Egypt under the Fatimid Caliphate was prosperous. Dams and canals were repaired, and wheat, barley, flax, and cotton production increased. Egypt became a major producer of linen and cotton cloth. Its Mediterranean and Red Sea trade increased. Egypt also minted a gold currency called the Fatimid dinar, which was used for international trade. The bulk of revenues came from taxing the fellahin (peasant farmers), and taxes were high. Tax collecting was leased to Berber overlords, who were soldiers who had taken part in the Fatimid conquest in 969 AD. The overlords paid a share to the caliphs and retained what was left. Eventually, they became landlords and constituted a settled land aristocracy. To fill the military ranks, Mamluk Turkish slave cavalry and Sudanese slave infantry were used. Berber freemen were also recruited. In the 1150s, tax revenues from farms diminished. The soldiers revolted and wreaked havoc in the countryside, slowed trade, and diminished the power and authority of the Fatimid caliphs. During the 1160s, Fatimid Egypt came under threat from European crusaders.
Out of this threat, a Kurdish general named Ṣalāḥ ad-Dīn Yūsuf ibn Ayyūb (Saladin), with a small band of professional soldiers, emerged as an outstanding Muslim defender. Saladin defeated the Christian crusaders at Egypt's borders and recaptured Jerusalem in 1187. On the death of Al-Adid, the last Fatimid caliph, in 1171, Saladin became the ruler of Egypt, ushering in the Ayyubid Dynasty. Under his rule, Egypt returned to Sunni Islam, Cairo became an important center of Arab Islamic learning, and Mamluk slaves were increasingly recruited from Turkey and southern Russia for military service. Support for the military was tied to the iqta, a form of land taxation in which soldiers were given ownership in return for military service. Over time, Mamluk slave soldiers became a very powerful landed aristocracy, to the point of overthrowing the Ayyubid dynasty in 1250 and establishing a Mamluk dynasty. The more powerful Mamluks were referred to as amirs. For 250 years, Mamluks controlled all of Egypt under a military dictatorship. Egypt extended her territories to Syria and Palestine, thwarted the crusaders, and halted a Mongol invasion in 1260 at the Battle of Ain Jalut. Mamluk Egypt came to be viewed as a protector of Islam, and of Medina and Mecca. Eventually the iqta system declined and proved unreliable for providing an adequate military. The Mamluks started viewing their iqta as hereditary and became attuned to urban living. Farm production declined, and dams and canals lapsed into disrepair. Mamluk military skill and technology did not keep pace with the new technology of handguns and cannons. With the rise of the Ottoman Empire, Egypt was easily defeated. In 1517, at the end of an Ottoman–Mamluk War, Egypt became part of the Ottoman Empire. The Istanbul government revived the iqta system. Trade was reestablished in the Red Sea, but it could not completely connect with the Indian Ocean trade because of growing Portuguese presence. During the 17th and 18th centuries, hereditary Mamluks regained power. The leading Mamluks were referred to as beys. Pashas, or viceroys, represented the Istanbul government in name only, operating independently. During the 18th century, dynasties of pashas became established. The government was weak and corrupt. In 1798, Napoleon invaded Egypt. The local forces had little ability to resist the French conquest. However, the British Empire and the Ottoman Empire were able to remove French occupation in 1801. These events marked the beginning of a 19th-century Anglo-French rivalry over Egypt. Sudan Christian and Islamic Nubia After Ezana of Aksum sacked Meroe, people associated with the site of Ballana moved into Nubia from the southwest and founded three kingdoms: Makuria, Nobatia, and Alodia. They would rule for 200 years. Makuria was above the third cataract, along the Dongola Reach, with its capital at Dongola. Nobatia was to the north with its capital at Faras, and Alodia was to the south with its capital at Soba. Makuria eventually absorbed Nobatia. The people of the region converted to Monophysite Christianity around 500 to 600 CE. The church initially started writing in Coptic, then in Greek, and finally in Old Nubian, a Nilo-Saharan language. The church was aligned with the Egyptian Coptic Church. By 641, Egypt was conquered by the Rashidun Caliphate. This effectively blocked Christian Nubia and Aksum from Mediterranean Christendom. In 651–652, Arabs from Egypt invaded Christian Nubia. Nubian archers soundly defeated the invaders.
The Baqt (or Bakt) Treaty was drawn up, recognizing Christian Nubia and regulating trade. The treaty controlled relations between Christian Nubia and Islamic Egypt for almost six hundred years. By the 13th century, Christian Nubia began its decline. The authority of the monarchy was diminished by the church and nobility. Arab bedouin tribes began to infiltrate Nubia, causing further havoc. Fakirs (holy men) practicing Sufism introduced Islam into Nubia. By 1366, Nubia had become divided into petty fiefdoms when it was invaded by Mamluks. During the 15th century, Nubia was open to Arab immigration. Arab nomads intermingled with the population and introduced the Arab culture and the Arabic language. By the 16th century, Makuria and Nobatia had been Islamized. During the 16th century, Abdallah Jamma headed an Arab confederation that destroyed Soba, capital of Alodia, the last holdout of Christian Nubia. Later Alodia would fall under the Funj Sultanate. During the 15th century, Funj herders migrated north to Alodia and occupied it. Between 1504 and 1505, the kingdom was established with its capital at Sennar; it expanded and later reached its peak under Badi II Abu Daqn (c. 1644 – 1680). By the end of the 16th century, the Funj had converted to Islam. They pushed their empire westward to Kordofan. They expanded eastward, but were halted by Ethiopia. They controlled Nubia down to the 3rd Cataract. The economy depended on captured enemies to fill the army and on merchants travelling through Sennar. Under Badi IV (1724–1762), the army turned on the king, making him nothing but a figurehead. In 1821, the Funj were conquered by Muhammad Ali (1805–1849), Pasha of Egypt. Southern Africa Settlements of Bantu-speaking peoples who were iron-using agriculturists and herdsmen were already well established south of the Limpopo River by the 4th century CE, displacing and absorbing the original Khoisan speakers. They slowly moved south, and the earliest ironworks in modern-day KwaZulu-Natal Province are believed to date from around 1050. The southernmost group was the Xhosa people, whose language incorporates certain linguistic traits from the earlier Khoi-San people, reaching the Great Fish River in today's Eastern Cape Province. Great Zimbabwe and Mapungubwe The Kingdom of Mapungubwe was the first state in Southern Africa, with its capital at Mapungubwe. The state arose in the 12th century CE. Its wealth came from controlling the trade in ivory from the Limpopo Valley, copper from the mountains of northern Transvaal, and gold from the Zimbabwe Plateau between the Limpopo and Zambezi rivers, with the Swahili merchants at Chibuene. By the mid-13th century, Mapungubwe was abandoned. After the decline of Mapungubwe, Great Zimbabwe rose on the Zimbabwe Plateau. Zimbabwe means "stone building". Great Zimbabwe was the first city in Southern Africa and was the center of an empire, consolidating lesser Shona polities. Stone building was inherited from Mapungubwe. These building techniques were enhanced and came into maturity at Great Zimbabwe, represented by the wall of the Great Enclosure. The dry-stack stone masonry technology was also used to build smaller compounds in the area. Great Zimbabwe flourished by trading with Swahili Kilwa and Sofala. The rise of Great Zimbabwe parallels the rise of Kilwa. Great Zimbabwe was a major source of gold. Its royal court lived in luxury, wore Indian cotton, surrounded themselves with copper and gold ornaments, and ate on plates from as far away as Persia and China.
Around the 1420s and 1430s, Great Zimbabwe was in decline. The city was abandoned by 1450. Some have attributed the decline to the rise of the trading town Ingombe Ilede. A new chapter of Shona history ensued. Nyatsimba Mutota, a northern Shona king of the Karanga, engaged in conquest. He and his son Matope conquered the Zimbabwe Plateau, going through Mozambique to the east coast, linking the empire to the coastal trade. They called their empire Wilayatu 'l Mu'anamutapah or mwanamutapa (Lord of the Plundered Lands), or the Kingdom of Mutapa. Monomotapa was the Portuguese corruption of the name. They did not build stone structures; the northern Shonas had no traditions of building in stone. After the death of Matope in 1480, the empire split into two small empires: Torwa in the south and Mutapa in the north. The split arose from the rivalry of two Shona lords, Changa and Togwa, with the mwanamutapa line. Changa was able to acquire the south, forming the Kingdom of Butua with its capital at Khami. The Mutapa Empire continued in the north under the mwenemutapa line. During the 16th century the Portuguese were able to establish permanent markets up the Zambezi River in an attempt to gain political and military control of Mutapa. They were partially successful. In 1628, a decisive battle allowed them to install a puppet mwanamutapa named Mavura, who signed treaties that gave favorable mineral export rights to the Portuguese. The Portuguese were successful in destroying the mwanamutapa system of government and undermining trade. By 1667, Mutapa was in decay. Chiefs would not allow digging for gold because of fear of Portuguese theft, and the population declined. The Kingdom of Butua was ruled by a changamire, a title derived from the founder, Changa. Later it became the Rozwi Empire. The Portuguese tried to gain a foothold but were thrown out of the region in 1693 by Changamire Dombo. The 17th century was a period of peace and prosperity. The Rozwi Empire fell into ruin in the 1830s under attack from invading Nguni from Natal. Namibia By 1500 AD, most of southern Africa had established states. In northwestern Namibia, the Ovambo engaged in farming and the Herero engaged in herding. As cattle numbers increased, the Herero moved southward to central Namibia for grazing land. A related group, the Ovambanderu, expanded to Ghanzi in northwestern Botswana. The Nama, a Khoi-speaking, sheep-raising group, moved northward and came into contact with the Herero; this would set the stage for much conflict between the two groups. The expanding Lozi states pushed the Mbukushu, Subiya, and Yei to Botei, Okavango, and Chobe in northern Botswana. South Africa and Botswana Sotho–Tswana The development of Sotho–Tswana states based on the highveld, south of the Limpopo River, began around 1000 CE. The chief's power rested on cattle and his connection to the ancestors. This can be seen in the Toutswemogala Hill settlements with stone foundations and stone walls, north of the highveld and south of the Vaal River. Northwest of the Vaal River, early Tswana states developed, centered on towns of thousands of people. When disagreements or rivalry arose, different groups moved to form their own states. Nguni peoples Southeast of the Drakensberg mountains lived Nguni-speaking peoples (Zulu, Xhosa, Swazi, and Ndebele). They too engaged in state building, with new states developing from rivalry, disagreements, and population pressure causing movement into new regions.
This 19th-century process of warfare, state building and migration later became known as the Mfecane (Nguni) or Difaqane (Sotho). Its major catalyst was the consolidation of the Zulu Kingdom. The Nguni peoples were metalworkers, cultivators of millet, and cattle herders. Khoisan and Boers The Khoisan lived in the southwestern Cape Province, where winter rainfall is plentiful. Earlier Khoisan populations were absorbed by Bantu peoples, such as the Sotho and Nguni, but the Bantu expansion stopped at the region with winter rainfall. Some Bantu languages have incorporated the click consonants of the Khoisan languages. The Khoisan traded with their Bantu neighbors, providing cattle, sheep, and hunted items. In return, their Bantu-speaking neighbors traded copper, iron, and tobacco. In the early 17th century, the Dutch East India Company established a replenishing station at Table Bay for restocking water and purchasing meat from the Khoikhoi. The Khoikhoi received copper, iron, tobacco, and beads in exchange. In order to control the price of meat and stock and make service more consistent, the Dutch established a permanent settlement at Table Bay in 1652. They grew fresh fruit and vegetables and established a hospital for sick sailors. To increase produce, the Dutch decided to increase the number of farms at Table Bay by encouraging freeburgher boers (farmers) on lands worked initially by slaves from West Africa. The land was taken from Khoikhoi grazing land, triggering the first Khoikhoi-Dutch war in 1659. No victors emerged, but the Dutch assumed a "right of conquest" by which they claimed all of the Cape. In a series of wars pitting the Khoikhoi against each other, the Boers assumed all Khoikhoi land and claimed all their cattle. The second Khoikhoi-Dutch war (1673–1677) was a cattle raid. The Khoikhoi also died in the thousands from European diseases. By the 18th century, the Cape Colony had grown, with slaves coming from Madagascar, Mozambique, and Indonesia. The settlement also started to expand northward, but Khoikhoi resistance, raids, and guerrilla warfare slowed the expansion during the 18th century. Boers who started to practice pastoralism were known as trekboers. A common source of trekboer labor was orphaned children who were captured during raids and whose parents had been killed. Southeast Africa Prehistory According to the theory of recent African origin of modern humans, the mainstream position held within the scientific community, all humans originate from either Southeast Africa or the Horn of Africa. During the first millennium CE, Nilotic and Bantu-speaking peoples moved into the region. Swahili coast Following the Bantu Migration, on the coastal section of Southeast Africa, a mixed Bantu community developed through contact with Muslim Arab and Persian traders, leading to the development of the mixed Arab, Persian and African Swahili city-states. The Swahili culture that emerged from these exchanges evinces many Arab and Islamic influences not seen in traditional Bantu culture, as do the many Afro-Arab members of the Bantu Swahili people. With its original speech community centered on the coastal parts of Tanzania (particularly Zanzibar) and Kenya—a seaboard referred to as the Swahili Coast—the Bantu Swahili language contains many Arabic loanwords as a consequence of these interactions.
The earliest Bantu inhabitants of the Southeast coast of Kenya and Tanzania encountered by these later Arab and Persian settlers have been variously identified with the trading settlements of Rhapta, Azania and Menouthias referenced in early Greek and Chinese writings from 50 AD to 500 AD, ultimately giving rise to the name for Tanzania. These early writings perhaps document the first wave of Bantu settlers to reach Southeast Africa during their migration. Historically, the Swahili people could be found as far north as northern Kenya and as far south as the Ruvuma River in Mozambique. Arab geographers referred to the Swahili coast as the land of the zanj (blacks). Although once believed to be the descendants of Persian colonists, the ancient Swahili are now recognized by most historians, historical linguists, and archaeologists as a Bantu people who had sustained important interactions with Muslim merchants, beginning in the late 7th and early 8th centuries AD. Medieval Swahili kingdoms are known to have had island trade ports, described by Greek historians as "metropolises", and to have established regular trade routes with the Islamic world and Asia. Ports such as Mombasa, Zanzibar, and Kilwa were known to Chinese sailors under Zheng He and to medieval Islamic geographers such as the Berber traveller Abu Abdullah ibn Battuta. The main Swahili exports were ivory, slaves, and gold. They traded with Arabia, India, Persia, and China. The Portuguese arrived in 1498. On a mission to economically control and Christianize the Swahili coast, the Portuguese attacked Kilwa first in 1505 and other cities later. Because of Swahili resistance, the Portuguese attempt at establishing commercial control was never successful. By the late 17th century, Portuguese authority on the Swahili coast began to diminish. With the help of Omani Arabs, by 1729 the Portuguese presence had been removed. The Swahili coast eventually became part of the Sultanate of Oman. Trade recovered, but it did not regain the levels of the past. Urewe The Urewe culture developed and spread in and around the Lake Victoria region of Africa during the African Iron Age. The culture's earliest dated artifacts are located in the Kagera Region of Tanzania, and it extended as far west as the Kivu region of the Democratic Republic of the Congo, as far east as the Nyanza and Western provinces of Kenya, and north into Uganda, Rwanda and Burundi. Sites from the Urewe culture date from the Early Iron Age, from the 5th century BC to the 6th century AD. The origins of the Urewe culture are ultimately in the Bantu expansion originating in Cameroon. Research into early Iron Age civilizations in Sub-Saharan Africa has been undertaken concurrently with linguistic studies of the Bantu expansion. The Urewe culture may correspond to the Eastern subfamily of Bantu languages, spoken by the descendants of the first wave of Bantu peoples to settle East Africa. At first sight, Urewe seems to be a fully developed civilization recognizable through its distinctive, stylish earthenware and highly technical and sophisticated iron working techniques. Given our current level of knowledge, neither seems to have developed or changed for nearly 2,000 years. However, minor local variations in the ceramic ware can be observed. Urewe is the name of the site in Kenya brought to prominence through the publication in 1948 of Mary Leakey's archaeological findings.
She described the early Iron Age period in the Great Lakes region in Central East Africa around Lake Victoria. Madagascar and Merina Madagascar was apparently first settled by Austronesian speakers from Southeast Asia before the 6th century AD and subsequently by Bantu speakers from the east African mainland in the 6th or 7th century, according to archaeological and linguistic data. The Austronesians introduced banana and rice cultivation, and the Bantu speakers introduced cattle and other farming practices. About the year 1000, Arab and Indian trade settlements were established in northern Madagascar to exploit the Indian Ocean trade. By the 14th century, Islam was introduced to the island by traders. Madagascar functioned in the East African medieval period as a contact port for the other Swahili seaport city-states such as Sofala, Kilwa, Mombasa, and Zanzibar. Several kingdoms emerged after the 15th century: the Sakalava Kingdom (16th century) on the west coast, Tsitambala Kingdom (17th century) on the east coast, and Merina (15th century) in the central highlands. By the 19th century, Merina controlled the whole island. In 1500, the Portuguese were the first Europeans on the island, raiding the trading settlements. The British and later the French arrived. During the latter part of the 17th century, Madagascar was a popular transit point for pirates. Radama I (1810–1828) invited Christian missionaries in the early 19th century. Queen Ranavalona I "the Cruel" (1828–1861) banned the practice of Christianity in the kingdom, and an estimated 150,000 Christians perished. Under Radama II (1861–1863), Madagascar took a French orientation, with great commercial concessions given to the French. In 1895, in the second Franco-Hova War, the French invaded Madagascar, taking over Antsiranana (Diego Suarez) and declaring Madagascar a protectorate. Lake Plateau states and empires Between the 14th and 15th centuries, large Southeast African kingdoms and states emerged, such as the Buganda and Karagwe Kingdoms of Uganda and Tanzania. Empire of Kitara By 1000 AD, numerous states had arisen on the Lake Plateau among the Great Lakes of East Africa. Cattle herding, cereal growing, and banana cultivation were the economic mainstays of these states. The Ntusi and Bigo earthworks are representative of one of the first states, the Bunyoro kingdom, which oral tradition stipulates was part of the Empire of Kitara that dominated the whole Lakes region. A Luo ethnic elite, from the Babito clan, ruled over the Bantu-speaking Nyoro people. The society was essentially Nyoro in its culture, based on the evidence from pottery, settlement patterns, and economic specialization. The Babito clan claim legitimacy by being descended from the Bachwezi clan, who were said to have ruled the Empire of Kitara. Buganda The Buganda kingdom was founded by Kato Kimera around the 14th century AD. Kato Kintu may have migrated to the northwest of Lake Victoria as early as 1000 BC. Buganda was ruled by the kabaka with a bataka composed of the clan heads. Over time, the kabakas diluted the authority of the bataka, with Buganda becoming a centralized monarchy. By the 16th century, Buganda was engaged in expansion but had a serious rival in Bunyoro. By the 1870s, Buganda was a wealthy nation-state. The kabaka ruled with his Lukiko (council of ministers). Buganda had a naval fleet of a hundred vessels, each manned by thirty men. Buganda supplanted Bunyoro as the most important state in the region.
However, by the early 20th century, Buganda became a province of the British Uganda Protectorate. Rwanda Southeast of Bunyoro, near Lake Kivu at the bottom of the western rift, the Kingdom of Rwanda was founded, perhaps during the 17th century. Tutsi (BaTutsi) pastoralists formed the elite, with a king called the mwami. The Hutu (BaHutu) were farmers. Both groups spoke the same language, but strict social norms discouraged intermarriage and interaction. According to oral tradition, the Kingdom of Rwanda was founded by Mwami Ruganzu II (Ruganzu Ndori) (c. 1600 – 1624), with his capital near Kigali. It took 200 years to attain a truly centralized kingdom under Mwami Kigeli IV (Kigeri Rwabugiri) (1840–1895). Subjugation of the Hutu proved more difficult than subduing the Tutsi. The last Tutsi chief submitted to Mwami Mutara II (Mutara Rwogera) (1802–1853) in 1852, but the last Hutu holdout was conquered in the 1920s by Mwami Yuhi V (Yuli Musinga) (1896–1931). Burundi South of the Kingdom of Rwanda was the Kingdom of Burundi. It was founded by the Tutsi chief Ntare Rushatsi (c. 1657 – 1705). Like Rwanda, Burundi was built on cattle raised by Tutsi pastoralists, crops from Hutu farmers, conquest, and political innovations. Under Mwami Ntare Rugaamba (c. 1795 – 1852), Burundi pursued an aggressive expansionist policy, one based more on diplomacy than force. Maravi The Maravi claimed descent from Karonga (kalonga), who took that title as king. The Maravi connected Central Africa to the east coastal trade, with Swahili Kilwa. By the 17th century, the Maravi Empire encompassed all the area between Lake Malawi and the mouth of the Zambezi River. The karonga was Mzura, who did much to extend the empire. Mzura made a pact with the Portuguese to establish a 4,000-man army to attack the Shona in return for aid in defeating his rival Lundi, a chief of the Zimba. In 1623, he turned on the Portuguese and assisted the Shona. In 1640, he welcomed back the Portuguese for trade. The Maravi Empire did not long survive the death of Mzura. By the 18th century, it had broken into its previous polities. West Africa Sahelian empires and states Ghana The Ghana Empire may have been an established kingdom as early as the 8th century AD, founded among the Soninke by Dinge Cisse. Ghana was first mentioned by the Arab geographer al-Fazari in the late 8th century. Ghana was inhabited by urban dwellers and rural farmers. The urban dwellers were the administrators of the empire, who were Muslims, and the Ghana (king), who practiced traditional religion. Two towns existed, one where the Muslim administrators and Berber-Arabs lived, which was connected by a stone-paved road to the king's residence. The rural dwellers lived in villages, which joined together into broader polities that pledged loyalty to the Ghana. The Ghana was viewed as divine, and his physical well-being reflected on the whole society. Ghana converted to Islam around 1050, after conquering Aoudaghost. The Ghana Empire grew wealthy by taxing the trans-Saharan trade that linked Tiaret and Sijilmasa to Aoudaghost. Ghana controlled access to the goldfields of Bambouk, southeast of Koumbi Saleh. A percentage of salt and gold going through its territory was taken. The empire was not involved in production. By the 11th century, Ghana was in decline. It was once thought that the sacking of Koumbi Saleh by Berbers under the Almoravid dynasty in 1076 was the cause. This is no longer accepted. Several alternative explanations are cited.
One important reason is the transfer of the gold trade east to the Niger River and the Taghaza Trail, and Ghana's consequent economic decline. Another reason cited is political instability through rivalry among the different hereditary polities. The empire came to an end in 1230, when Takrur in northern Senegal took over the capital. Mali The Mali Empire began in the 13th century AD, when a Mande (Mandingo) leader, Sundiata (Lord Lion) of the Keita clan, defeated Soumaoro Kanté, king of the Sosso or southern Soninke, at the Battle of Kirina in c. 1235. Sundiata continued his conquest from the fertile forests and Niger Valley, east to the Niger Bend, north into the Sahara, and west to the Atlantic Ocean, absorbing the remains of the Ghana Empire. Sundiata took on the title of mansa. He established the capital of his empire at Niani. Although the salt and gold trade continued to be important to the Mali Empire, agriculture and pastoralism were also critical. The growing of sorghum, millet, and rice was a vital function. On the northern borders of the Sahel, grazing cattle, sheep, goats, and camels were major activities. Mande society was organized around the village and land. A cluster of villages was called a kafu, ruled by a farma. The farma paid tribute to the mansa. A dedicated army of elite cavalry and infantry maintained order, commanded by the royal court. A formidable force could be raised from tributary regions, if necessary. Conversion to Islam was a gradual process. The power of the mansa depended on upholding traditional beliefs and a spiritual foundation of power. Sundiata initially kept Islam at bay. Later mansas were devout Muslims but still acknowledged traditional deities and took part in traditional rituals and festivals, which were important to the Mande. Islam became a court religion under Sundiata's son Uli I (1225–1270). Mansa Uli made a pilgrimage to Mecca, becoming recognized within the Muslim world. The court was staffed with literate Muslims as secretaries and accountants. The Muslim traveller Ibn Battuta left vivid descriptions of the empire. Mali reached the peak of its power and extent in the 14th century, when Mansa Musa (1312–1337) made his famous hajj to Mecca with 500 slaves, each holding a bar of gold worth 500 mitqals. Mansa Musa's hajj devalued gold in Mamluk Egypt for a decade. He made a great impression on the minds of the Muslim and European worlds. He invited scholars and architects like Ishak al-Tuedjin (al-Sahili) to further integrate Mali into the Islamic world. The Mali Empire saw an expansion of learning and literacy. In 1285, Sakura, a freed slave, usurped the throne. This mansa drove the Tuareg out of Timbuktu and established it as a center of learning and commerce. The book trade increased, and book copying became a very respectable and profitable profession. Timbuktu and Djenné became important centers of learning within the Islamic world. After the reign of Mansa Suleyman (1341–1360), Mali began its spiral downward. Mossi cavalry raided the exposed southern border. Tuareg harassed the northern border in order to retake Timbuktu. Fulani (Fulbe) eroded Mali's authority in the west by establishing the independent Imamate of Futa Toro, a successor to the kingdom of Takrur. Serer and Wolof alliances were broken. In 1545–1546, the Songhai Empire took Niani. After 1599, the empire lost the Bambouk goldfields and disintegrated into petty polities. Songhai The Songhai people are descended from fishermen on the Middle Niger River.
They established their capital at Kukiya in the 9th century AD and at Gao in the 12th century. The Songhai speak a Nilo-Saharan language. Sonni Ali, a Songhai, began his conquest by capturing Timbuktu in 1468 from the Tuareg. He extended the empire to the north, deep into the desert, pushed the Mossi further south of the Niger, and expanded southwest to Djenne. His army consisted of cavalry and a fleet of canoes. Sonni Ali was not a Muslim, and he was portrayed negatively by Berber-Arab scholars, especially for attacking Muslim Timbuktu. After his death in 1492, his heirs were deposed by General Muhammad Ture, a Muslim of Soninke origins. Muhammad Ture (1493–1528) founded the Askiya Dynasty, askiya being the title of the king. He consolidated the conquests of Sonni Ali. Islam was used to extend his authority by declaring jihad on the Mossi, reviving the trans-Saharan trade, and having the Abbasid "shadow" caliph in Cairo declare him as caliph of Sudan. He established Timbuktu as a great center of Islamic learning. Muhammad Ture expanded the empire by pushing the Tuareg north, capturing Aïr in the east, and capturing salt-producing Taghaza. He brought the Hausa states into the Songhay trading network. He further centralized the administration of the empire by selecting administrators from loyal servants and families and assigning them to conquered territories. They were responsible for raising local militias. Centralization made Songhay very stable, even during dynastic disputes. Leo Africanus left vivid descriptions of the empire under Askiya Muhammad. Askiya Muhammad was deposed by his son in 1528. After much rivalry, Muhammad Ture's last son Askiya Daoud (1529–1582) assumed the throne. In 1591, Morocco invaded the Songhai Empire under Ahmad al-Mansur of the Saadi Dynasty in order to secure the goldfields of the Sahel. At the Battle of Tondibi, the Songhai army was defeated. The Moroccans captured Djenne, Gao, and Timbuktu, but they were unable to secure the whole region. Askiya Nuhu and the Songhay army regrouped at Dendi in the heart of Songhai territory, where a spirited guerrilla resistance sapped the resources of the Moroccans, who were dependent upon constant resupply from Morocco. Songhai split into several states during the 17th century. Morocco found its venture unprofitable. The gold trade had been diverted to Europeans on the coast. Most of the trans-Saharan trade was now diverted east to Bornu. Expensive equipment purchased with gold had to be sent across the Sahara, an unsustainable scenario. The Moroccans who remained married into the population and were referred to as Arma or Ruma. They established themselves at Timbuktu as a military caste with various fiefs, independent from Morocco. Amid the chaos, other groups began to assert themselves, including the Fulani of Futa Tooro, who encroached from the west. The Bambara Empire, one of the states that broke from Songhai, sacked Gao. In 1737, the Tuareg massacred the Arma. Sokoto Caliphate The Fulani were a migratory people. They moved from Mauritania and settled in Futa Tooro, Futa Djallon, and subsequently throughout the rest of West Africa. By the 14th century CE, they had converted to Islam. During the 16th century, they established themselves at Macina in southern Mali. During the 1670s, they declared jihads on non-Muslims. Several states were formed from these jihadist wars, at Futa Toro, Futa Djallon, Macina, Oualia, and Bundu. The most important of these states was the Sokoto Caliphate or Fulani Empire.
In the city of Gobir, Usman dan Fodio (1754–1817) accused the Hausa leadership of practicing an impure version of Islam and of being morally corrupt. In 1804, he launched the Fulani War as a jihad among a population that was restless about high taxes and discontented with its leaders. Jihad fever swept northern Nigeria, with strong support among both the Fulani and the Hausa. Usman created an empire that included parts of northern Nigeria, Benin, and Cameroon, with Sokoto as its capital. He retired to teach and write and handed the empire to his son Muhammed Bello. The Sokoto Caliphate lasted until 1903, when the British conquered northern Nigeria. Forest empires and states Akan kingdoms and emergence of Asante Empire The Akan speak a Kwa language. The speakers of Kwa languages are believed to have come from East/Central Africa, before settling in the Sahel. By the 12th century, the Akan Kingdom of Bonoman (Bono State) was established. During the 13th century, when the gold mines in modern-day Mali started to dry up, Bonoman and later other Akan states began to rise to prominence as the major players in the gold trade. Bonoman and other Akan kingdoms such as Denkyira, Akyem, and Akwamu were the predecessors of what later emerged as the Empire of Ashanti. When and how the Ashante got to their present location is debatable. What is known is that by the 17th century an Akan people were identified as living in a state called Kwaaman. The location of the state was north of Lake Bosomtwe. The state's revenue was mainly derived from trading in gold and kola nuts and clearing forest to plant yams. They built towns between the Pra and Ofin rivers. They formed alliances for defense and paid tribute to Denkyira, one of the more powerful Akan states at that time, along with Adansi and Akwamu. During the 16th century, Ashante society experienced sudden changes, including population growth because of cultivation of New World plants such as cassava and maize, and an increase in the gold trade between the coast and the north. By the 17th century, Osei Kofi Tutu I (c. 1695 – 1717), with the help of Okomfo Anokye, unified what became the Ashante into a confederation with the Golden Stool as a symbol of their unity and spirit. Osei Tutu engaged in a massive territorial expansion. He built up the Ashante army based on the Akan state of Akwamu, introducing new organization and turning a disciplined militia into an effective fighting machine. In 1701, the Ashante conquered Denkyira, giving them access to the coastal trade with Europeans, especially the Dutch. Opoku Ware I (1720–1745) engaged in further expansion, adding other southern Akan states to the growing empire. He turned north, adding Techiman, Banda, Gyaaman, and Gonja, states on the Black Volta. Between 1744 and 1745, Asantehene Opoku attacked the powerful northern state of Dagomba, gaining control of the important middle Niger trade routes. Kusi Obodom (1750–1764) succeeded Opoku. He solidified all the newly won territories. Osei Kwadwo (1764–1777) imposed administrative reforms that allowed the empire to be governed effectively and to continue its military expansion. Osei Kwame Panyin (1777–1803), Osei Tutu Kwame (1804–1807), and Osei Bonsu (1807–1824) continued territorial consolidation and expansion. The Ashante Empire included all of present-day Ghana and large parts of the Ivory Coast. The Ashantehene inherited his position from his mother.
He was assisted at the capital, Kumasi, by a civil service of men talented in trade, diplomacy, and the military, with a head called the Gyaasehene. Men from Arabia, Sudan, and Europe were employed in the civil service, all of them appointed by the Ashantehene. At the capital and in other towns, the ankobia or special police were used as bodyguards to the Ashantehene, as sources of intelligence, and to suppress rebellion. Communication throughout the empire was maintained via a network of well-kept roads from the coast to the middle Niger and linking together other trade cities. For most of the 19th century, the Ashante Empire remained powerful. It was later destroyed in 1900 by superior British weaponry and organization following
Samoa. The site is at Mulifanua on Upolu. The Mulifanua site, where 4,288 pottery shards have been found and studied, has a "true" age of c. 1000 BCE based on C14 dating. A 2010 study places the beginning of the human archaeological sequences of Polynesia in Tonga at 900 BCE, the small differences in dates with Samoa being due to differences in radiocarbon dating technologies between 1989 and 2010, the Tongan site apparently predating the Samoan site by some few decades in real time. Within a mere three or four centuries between about 1300 and 900 BCE, the Lapita archaeological culture spread 6,000 kilometres further to the east from the Bismarck Archipelago, until it reached as far as Fiji, Tonga, and Samoa. The area of Tonga, Fiji, and Samoa served as a gateway into the rest of the Pacific region known as Polynesia. Ancient Tongan mythologies recorded by early European explorers report the islands of 'Ata and Tongatapu as the first islands being hauled to the surface from the deep ocean by Maui. The "Tuʻi Tonga Empire" or "Tongan Empire" in Oceania are descriptions sometimes given to Tongan expansionism and projected hegemony dating back to 950 CE, but at its peak during the period 1200–1500. While modern researchers and cultural experts attest to widespread Tongan influence and evidences of transoceanic trade and exchange of material and non-material cultural artifacts, empirical evidence of a true political empire ruled for any length of time by successive rulers is lacking. Modern archeology, anthropology and linguistic studies confirm widespread Tongan cultural influence ranging widely through East 'Uvea, Rotuma, Futuna, Samoa and Niue, parts of Micronesia (Kiribati, Pohnpei), Vanuatu, and New Caledonia and the Loyalty Islands, and while some academics prefer the term "maritime chiefdom", others argue that, while very different from examples elsewhere, ..."empire" is probably the most convenient term. Pottery art from Fijian towns shows that Fiji was settled before or around 3500 to 1000 BC, although the question of Pacific migration still lingers. It is believed that the Lapita people or the ancestors of the Polynesians settled the islands first but not much is known of what became of them after the Melanesians arrived; they may have had some influence on the new culture, and archaeological evidence shows that they would have then moved on to Tonga, Samoa and even Hawai'i. The first settlements in Fiji were started by voyaging traders and settlers from the west about 5000 years ago. Lapita pottery shards have been found at numerous excavations around the country. Aspects of Fijian culture are similar to the Melanesian culture of the western Pacific but have a stronger connection to the older Polynesian cultures. Across from east to west, Fiji has been a nation of many languages. Fiji's history was one of settlement but also of mobility. Over the centuries, a unique Fijian culture developed. Constant warfare and cannibalism between warring tribes were quite rampant and very much part of everyday life. In later centuries, the ferocity of the cannibal lifestyle deterred European sailors from going near Fijian waters, giving Fiji the name Cannibal Isles; as a result, Fiji remained unknown to the rest of the world. Early European visitors to Easter Island recorded the local oral traditions about the original settlers. In these traditions, Easter Islanders claimed that a chief Hotu Matu'a arrived on the island in one or two large canoes with his wife and extended family. 
They are believed to have been Polynesian. There is considerable uncertainty about the accuracy of this legend as well as the date of settlement. Published literature suggests the island was settled around 300–400 CE, or at about the time of the arrival of the earliest settlers in Hawaii. Some scientists say that Easter Island was not inhabited until 700–800 CE. This date range is based on glottochronological calculations and on three radiocarbon dates from charcoal that appears to have been produced during forest clearance activities. Moreover, a recent study which included radiocarbon dates from what is thought to be very early material suggests that the island was settled as recently as 1200 CE. This seems to be supported by a 2006 study of the island's deforestation, which could have started around the same time. A large, now extinct palm, Paschalococos disperta, related to the Chilean wine palm (Jubaea chilensis), was one of the dominant trees as attested by fossil evidence; this species, whose sole occurrence was Easter Island, became extinct due to deforestation by the early settlers. Micronesia theories Micronesia began to be settled several millennia ago, although there are competing theories about the origin and arrival of the first settlers. There are numerous difficulties with conducting archaeological excavations in the islands, due to their size, settlement patterns and storm damage. As a result, much evidence is based on linguistic analysis. The earliest archaeological traces of civilization have been found on the island of Saipan, dated to 1500 BCE or slightly before. The ancestors of the Micronesians settled there over 4,000 years ago. A decentralized chieftain-based system eventually evolved into a more centralized economic and religious culture centered on Yap and Pohnpei. The prehistory of many Micronesian islands such as Yap is not known very well. On Pohnpei, pre-colonial history is divided into three eras: Mwehin Kawa or Mwehin Aramas (Period of Building, or Period of Peopling, before c. 1100); Mwehin Sau Deleur (Period of the Lord of Deleur, c. 1100 to c. 1628); and Mwehin Nahnmwarki (Period of the Nahnmwarki, c. 1628 to c. 1885). Pohnpeian legend recounts that the Saudeleur rulers, the first to bring government to Pohnpei, were of foreign origin. The Saudeleur centralized form of absolute rule is characterized in Pohnpeian legend as becoming increasingly oppressive over several generations. Arbitrary and onerous demands, as well as a reputation for offending Pohnpeian deities, sowed resentment among Pohnpeians. The Saudeleur Dynasty ended with the invasion of Isokelekel, another semi-mythical foreigner, who replaced the Saudeleur rule with the more decentralized nahnmwarki system in existence today. Isokelekel is regarded as the creator of the modern Pohnpeian nahnmwarki social system and the father of the Pohnpeian people. Construction of Nan Madol, a megalithic complex made from basalt lava logs in Pohnpei, began as early as 1200 CE. Nan Madol is offshore of Temwen Island near Pohnpei, consists of a series of small artificial islands linked by a network of canals, and is often called the Venice of the Pacific. It is located near the island of Pohnpei and was the ceremonial and political seat of the Saudeleur Dynasty that united Pohnpei's estimated 25,000 people until its centralized system collapsed amid the invasion of Isokelekel. Isokelekel and his descendants initially occupied the stone city, but later abandoned it.
The first people of the Northern Mariana Islands navigated to the islands at some period between 4000 BCE and 2000 BCE from South-East Asia. They became known as the Chamorros, and spoke an Austronesian language called Chamorro. The ancient Chamorro left a number of megalithic ruins, including latte stones. The Refaluwasch or Carolinian people came to the Marianas in the 1800s from the Caroline Islands. Micronesian colonists gradually settled the Marshall Islands during the 2nd millennium BCE, with inter-island navigation made possible using traditional stick charts. Melanesia theories The first settlers of Australia, New Guinea, and the large islands just to the east arrived between 50,000 and 30,000 years ago, when Neanderthals still roamed Europe. The original inhabitants of the group of islands now named Melanesia were likely the ancestors of the present-day Papuan-speaking people. Migrating from South-East Asia, they appear to have occupied these islands as far east as the main islands in the Solomon Islands archipelago, including Makira and possibly the smaller islands farther to the east. Particularly along the north coast of New Guinea and in the islands north and east of New Guinea, the Austronesian people, who had migrated into the area somewhat more than 3,000 years ago, came into contact with these pre-existing populations of Papuan-speaking peoples. In the late 20th century, some scholars theorized a long period of interaction, which resulted in many complex changes in genetics, languages, and culture among the peoples. Kayser et al. proposed that, from this area, a very small group of people (speaking an Austronesian language) departed to the east to become the forebears of the Polynesian people. However, the theory is contradicted by the findings of a genetic study published by Temple University in 2008; based on genome scans and evaluation of more than 800 genetic markers among a wide variety of Pacific peoples, it found that neither Polynesians nor Micronesians have much genetic relation to Melanesians. Both groups are strongly related genetically to East Asians, particularly Taiwanese aborigines. It appeared that, having developed their sailing outrigger canoes, the Polynesian ancestors migrated from East Asia, moved through the Melanesian area quickly on their way, and kept going to eastern areas, where they settled. They left little genetic evidence in Melanesia. The study found a high rate of genetic differentiation and diversity among the groups living within the Melanesian islands, with the peoples distinguished by island, language, topography, and geography among the islands. Such diversity developed over their tens of thousands of years of settlement before the Polynesian ancestors ever arrived at the islands. For instance, populations developed differently in coastal areas, as opposed to those in more isolated mountainous valleys. Additional DNA analysis has taken research into new directions, as more human species have been discovered since the late 20th century. Based on his genetic studies of the Denisova hominin, an ancient human species discovered in 2010, Svante Pääbo claims that ancient human ancestors of the Melanesians interbred in Asia with these humans. He has found that people of New Guinea share 4–6% of their genome with the Denisovans, indicating this exchange.
The Denisovans are considered cousins to the Neanderthals; both groups are now understood to have migrated out of Africa, with the Neanderthals going into Europe, and the Denisovans heading east about 400,000 years ago. This is based on genetic evidence from a fossil found in Siberia. The evidence from Melanesia suggests their territory extended into south Asia, where ancestors of the Melanesians developed. Melanesians of some islands are one of the few non-European peoples, and the only dark-skinned group of people outside Australia, known to have blond hair. Australasia theories Indigenous Australians are the original inhabitants of the Australian continent and nearby islands. Indigenous Australians migrated from Africa to Asia around 70,000 years ago and arrived in Australia around 50,000 years ago. The Torres Strait Islanders are indigenous to the Torres Strait Islands, which are at the northernmost tip of Queensland near Papua New Guinea. The term "Aboriginal" is traditionally applied to only the indigenous inhabitants of mainland Australia and Tasmania, along with some of the adjacent islands, i.e., the "first peoples". Indigenous Australians is an inclusive term used when referring to both Aboriginal and Torres Strait Islanders. The earliest definite human remains found to date are those of Mungo Man, which have been dated at about 40,000 years old, but the time of arrival of the ancestors of Indigenous Australians is a matter of debate among researchers, with estimates dating back as far as 125,000 years ago. There is great diversity among different Indigenous communities and societies in Australia, each with its own unique mixture of cultures, customs and languages. In present-day Australia these groups are further divided into local communities. European contact and exploration (1500s–1700s) Iberian pioneers Early Iberian exploration Oceania was first explored by Europeans from the 16th century onwards. Portuguese navigators, between 1512 and 1526, reached the Moluccas (by António de Abreu and Francisco Serrão in 1512), Timor, the Aru Islands (Martim A. Melo Coutinho), the Tanimbar Islands, some of the Caroline Islands (by Gomes de Sequeira in 1525), and west Papua New Guinea (by Jorge de Menezes in 1526). In 1519 a Castilian ('Spanish') expedition led by Ferdinand Magellan sailed down the east coast of South America, found and sailed through the strait that bears his name and on 28 November 1520 entered the ocean which he named "Pacific". The three remaining ships, led by Magellan and his captains Duarte Barbosa and João Serrão, then sailed north and caught the trade winds which carried them across the Pacific to the Philippines where Magellan was killed. One surviving ship led by Juan Sebastián Elcano returned west across the Indian Ocean and the other went north in the hope of finding the westerlies and reaching Mexico. Unable to find the right winds, it was forced to return to the East Indies. The Magellan-Elcano expedition achieved the first circumnavigation of the world and reached the Philippines, the Mariana Islands and other islands of Oceania. Other large expeditions From 1527 to 1595 a number of other large Spanish expeditions crossed the Pacific Ocean, leading to the discovery of the Marshall Islands and Palau in the North Pacific, as well as Tuvalu, the Marquesas, the Solomon Islands archipelago, the Cook Islands and the Admiralty Islands in the South Pacific.
In 1565, Spanish navigator Andrés de Urdaneta found a wind system that would allow ships to sail eastward from Asia, back to the Americas. From then until 1815 the annual Manila Galleons crossed the Pacific from Mexico to the Philippines and back, in the first transpacific trade route in history. Combined with the Spanish Atlantic or West Indies Fleet, the Manila Galleons formed one of the first global maritime exchange networks in human history, linking Seville in Spain with Manila in the Philippines, via Mexico. Later, in the quest for Terra Australis, Spanish explorers in the 17th century discovered the Pitcairn and Vanuatu archipelagos, and sailed the Torres Strait between Australia and New Guinea, named after navigator Luís Vaz de Torres. In 1668 the Spanish founded a colony on Guam as a resting place for west-bound galleons. For a long time this was the only non-coastal European settlement in the Pacific. Oceania during the Golden Age of Dutch exploration and discovery Early Dutch exploration The Dutch were the first non-natives to undisputedly explore and chart coastlines of Australia, Tasmania, New Zealand, Tonga, Fiji, Samoa, and Easter Island. Verenigde Oostindische Compagnie (or VOC) was a major force behind the Golden Age of Dutch exploration (c. 1590s–1720s) and Netherlandish cartography (c. 1570s–1670s). In the 17th century, the VOC's navigators and explorers charted almost three-quarters of the Australian coastline, except the east coast. Abel Tasman's exploratory voyages Abel Tasman was the first known European explorer to reach the islands of Van Diemen's Land (now Tasmania) and New Zealand, and to sight the Fiji islands. His navigator François Visscher, and his merchant Isaack Gilsemans, mapped substantial portions of Australia, New Zealand, Tonga and the Fijian islands. On 24 November 1642 Abel Tasman sighted the west coast of Tasmania, north of Macquarie Harbour. He named his discovery Van Diemen's Land after Antonio van Diemen, Governor-General of the Dutch East Indies, then claimed formal possession of the land on 3 December 1642. After some exploration, Tasman had intended to proceed in a northerly direction but as the wind was unfavourable he steered east. On 13 December they sighted land on the north-west coast of the South Island, New Zealand, becoming the first Europeans to do so. Tasman named it Staten Landt on the assumption that it was connected to an island (Staten Island, Argentina) at the south of the tip of South America. Proceeding north and then east, he stopped to gather water, but one of his boats was attacked by Māori in a double-hulled waka (canoe), and four of his men were attacked and killed by mere (Māori clubs). As Tasman sailed out of the bay he was again attacked, this time by 11 waka. The waka approached the Zeehaen, which fired and hit one Māori, who fell down. Canister shot hit the side of a waka. Archeological research has shown the Dutch had tried to land at a major agricultural area, which the Māori may have been trying to protect. Tasman named the bay Murderers' Bay (now known as Golden Bay) and sailed north, but mistook Cook Strait for a bight (naming it Zeehaen's Bight). Two names he gave to New Zealand landmarks still endure, Cape Maria van Diemen and Three Kings Islands, but Kaap Pieter Boreels was renamed by Cook 125 years later to Cape Egmont. En route back to Batavia, Tasman came across the Tongan archipelago on 20 January 1643.
While passing the Fiji Islands Tasman's ships came close to being wrecked on the dangerous reefs of the north-eastern part of the Fiji group. He charted the eastern tip of Vanua Levu and Cikobia before making his way back into the open sea. He eventually turned north-west to New Guinea, and arrived at Batavia on 15 June 1643. For over a century after Tasman's voyages, until the era of James Cook, Tasmania and New Zealand were not visited by Europeans—mainland Australia was visited, but usually only by accident. British exploration and Captain James Cook's voyages First voyage (1768–1771) In 1766 the Royal Society engaged James Cook to travel to the Pacific Ocean to observe and record the transit of Venus across the Sun. The expedition sailed from England on 26 August 1768, rounded Cape Horn and continued westward across the Pacific to arrive at Tahiti on 13 April 1769, where the observations of the Venus Transit were made. Once the observations were completed, Cook opened the sealed orders which were additional instructions from the Admiralty for the second part of his voyage: to search the south Pacific for signs of the postulated rich southern continent of Terra Australis. With the help of a Tahitian named Tupaia, who had extensive knowledge of Pacific geography, Cook managed to reach New Zealand on 6 October 1769, leading only the second group of Europeans to do so (after Abel Tasman over a century earlier, in 1642). Cook mapped the complete New Zealand coastline, making only some minor errors (such as calling Banks Peninsula an island, and thinking Stewart Island/Rakiura was a peninsula of the South Island). He also identified Cook Strait, which separates the North Island from the South Island, and which Tasman had not seen. Cook then voyaged west, reaching the south-eastern coast of Australia on 19 April 1770, and in doing so his expedition became the first recorded Europeans to have encountered its eastern coastline. On 23 April he made his first recorded direct observation of indigenous Australians at Brush Island near Bawley Point, noting in his journal: "…and were so near the Shore as to distinguish several people upon the Sea beach they appear'd to be of a very dark or black Colour but whether this was the real colour of their skins or the C[l]othes they might have on I know not." On 29 April Cook and crew made their first landfall on the mainland of the continent at a place now known as the Kurnell Peninsula. It is here that James Cook made first contact with an aboriginal tribe known as the Gweagal. After his departure from Botany Bay he continued northwards. After a grounding mishap on the Great Barrier Reef, the voyage continued, sailing through Torres Strait before returning to England via Batavia, the Cape of Good Hope, and Saint Helena. Second voyage (1772–1775) In 1772 the Royal Society commissioned Cook to search for the hypothetical Terra Australis again. On his first voyage, Cook had demonstrated by circumnavigating New Zealand that it was not attached to a larger landmass to the south. Although he charted almost the entire eastern coastline of Australia, showing it to be continental in size, the Terra Australis was believed by the Royal Society to lie further south. Cook commanded HMS Resolution on this voyage, while Tobias Furneaux commanded its companion ship, HMS Adventure. Cook's expedition circumnavigated the globe at an extreme southern latitude, becoming one of the first to cross the Antarctic Circle (17 January 1773). In the Antarctic fog, Resolution and Adventure became separated.
Furneaux made his way to New Zealand, where he lost some of his men during an encounter with Māori, and eventually sailed back to Britain, while Cook continued to explore the Antarctic, reaching 71°10'S on 31 January 1774. Cook almost encountered the mainland of Antarctica, but turned towards Tahiti to resupply his ship. He then resumed his southward course in a second fruitless attempt to find the supposed continent. On this leg of the voyage he brought a young Tahitian named Omai, who proved to be somewhat less knowledgeable about the Pacific than Tupaia had been on the first voyage. On his return voyage to New Zealand in 1774, Cook landed at the Friendly Islands, Easter Island, Norfolk Island, New Caledonia, and Vanuatu. Before returning to England, Cook made a final sweep across the South Atlantic from Cape Horn. He then turned north to South Africa, and from there continued back to England. His reports upon his return home put to rest the popular myth of Terra Australis. Third voyage (1776–1779) On his last voyage, Cook again commanded HMS Resolution, while Captain Charles Clerke commanded HMS Discovery. The voyage was ostensibly planned to return the Pacific Islander Omai to Tahiti, or so the public were led to believe. The trip's principal goal was to locate a North-West Passage around the American continent. After dropping Omai at Tahiti, Cook travelled north and in 1778 became the first European to visit the Hawaiian Islands. After his initial landfall in January 1778 at Waimea harbour, Kauai, Cook named the archipelago the "Sandwich Islands" after the fourth Earl of Sandwich—the acting First Lord of the Admiralty. From the Sandwich Islands Cook sailed north and then north-east to explore the west coast of North America north of the Spanish settlements in Alta California. Cook explored and mapped the coast all the way to the Bering Strait, on the way identifying what came to be known as Cook Inlet in Alaska. In a single visit, Cook charted the majority of the North American north-west coastline on world maps for the first time, determined the extent of Alaska, and closed the gaps in Russian (from the West) and Spanish (from the South) exploratory probes of the Northern limits of the Pacific. Cook returned to Hawaii in 1779. After sailing around the archipelago for
the history of Australia, New Zealand, Hawaii, Papua New Guinea, Fiji and other Pacific island nations. Prehistory The prehistory of Oceania is divided into the prehistory of each of its major areas: Polynesia, Micronesia, Melanesia, and Australasia, and these vary greatly as to when they were first inhabited by humans—from 70,000 years ago (Australasia) to 3,000 years ago (Polynesia). Polynesia theories The Polynesian people are considered, by linguistic, archaeological and human genetic ancestry, a subset of the sea-migrating Austronesian people; tracing Polynesian languages places their prehistoric origins in the Malay Archipelago, and ultimately, in Taiwan. Between about 3000 and 1000 BCE speakers of Austronesian languages began spreading from Taiwan into Island South-East Asia, as tribes whose natives were thought to have arrived through South China about 8,000 years ago to the edges of western Micronesia and on into Melanesia, although they are different from the Han Chinese who now form the majority of people in China and Taiwan. There are three theories regarding the spread of humans across the Pacific to Polynesia. These are outlined well by Kayser et al. (2000) and are as follows: Express Train model: A recent (c. 3000–1000 BCE) expansion out of Taiwan, via the Philippines and eastern Indonesia and from the north-west ("Bird's Head") of New Guinea, on to Island Melanesia by roughly 1400 BCE, reaching western Polynesian islands right about 900 BCE. This theory is supported by the majority of current human genetic data, linguistic data, and archaeological data. Entangled Bank model: Emphasizes the long history of Austronesian speakers' cultural and genetic interactions with indigenous Island South-East Asians and Melanesians along the way to becoming the first Polynesians. Slow Boat model: Similar to the express-train model but with a longer hiatus in Melanesia along with genetic, cultural, and linguistic admixture with the local population. This is supported by the Y-chromosome data of Kayser et al. (2000), which shows that all three haplotypes of Polynesian Y chromosomes can be traced back to Melanesia. In the archaeological record there are well-defined traces of this expansion which allow the path it took to be followed and dated with some certainty. It is thought that by roughly 1400 BCE, "Lapita Peoples", so-named after their pottery tradition, appeared in the Bismarck Archipelago of north-west Melanesia. This culture is seen as having adapted and evolved through time and space since its emergence "Out of Taiwan". They had given up rice production, for instance, after encountering and adapting to breadfruit in the Bird's Head area of New Guinea. In the end, the most eastern site for Lapita archaeological remains recovered so far has been through work on the archaeology in Samoa.
in the early period of the Gotlander settlement. Later they established their own trading station in Novgorod, known as the Peterhof, which was further up-river, in the first half of the 13th century. In 1229 German merchants at Novgorod were granted certain privileges that made their positions more secure. The granting of privileges was enacted by the current ruler of Novgorod, a Rus' prince, Michael of Chernigov. Hansa societies worked to remove restrictions on trade for their members. The earliest extant documentary mention, although without a name, of a specific German commercial federation dates from 1157 in London. That year, the merchants of the Hansa in Cologne convinced King Henry II of England to exempt them from all tolls in London and allow them to trade at fairs throughout England. The "Queen of the Hansa", Lübeck, where traders were required to trans-ship goods between the North Sea and the Baltic, gained imperial privileges to become a free imperial city in 1226, as had Hamburg in 1189. In 1241 Lübeck, which had access to the Baltic and North seas' fishing grounds, formed an alliance—a precursor to the League—with Hamburg, another trading city, which controlled access to salt-trade routes from Lüneburg. The allied cities gained control over most of the salt-fish trade, especially the Scania Market; Cologne joined them in the Diet of 1260. "In 1266 King Henry III of England granted the Lübeck and Hamburg Hansa a charter for operations in England, and the Cologne Hansa joined them in 1282 to form the most powerful Hanseatic colony in London. Much of the drive for this co-operation came from the fragmented nature of existing territorial governments, which failed to provide security for trade. Over the next 50 years, the Hansa solidified with formal agreements for confederation and co-operation covering the west and east trade routes. The principal city and linchpin remained Lübeck; with the first general diet of the Hansa held there in 1356, the Hanseatic League acquired an official structure." Commercial expansion Lübeck's location on the Baltic provided access for trade with Scandinavia and Kievan Rus' (with its sea-trade center, Veliky Novgorod), putting it in direct competition with the Scandinavians who had previously controlled most of the Baltic trade-routes. A treaty with the Visby Hansa put an end to this competition: through this treaty the Lübeck merchants gained access to the inland Russian port of Novgorod, where they built a trading post or Kontor (literally: "office"). Although such alliances formed throughout the Holy Roman Empire, the league never became a closely managed formal organisation. Assemblies of the Hanseatic towns met irregularly in Lübeck for a Hansetag (Hanseatic Diet) from 1356 onwards, but many towns chose neither to attend nor to send representatives, and decisions were not binding on individual cities. Over the period, a network of alliances grew to include a flexible roster of 70 to 170 cities. The league succeeded in establishing additional Kontors in Bruges (Flanders), Bergen (Norway), and London (England). These trading posts became significant enclaves. The London Kontor, first alluded to by crusaders from Lübeck for whom the Kontor arranged the purchase of a replacement cog-ship in Summer 1189, formally established in 1320, stood west of London Bridge near Upper Thames Street, on the site now occupied by Cannon Street station.
It grew into a significant walled community with its own warehouses, weighhouse, church, offices and houses, reflecting the importance and scale of trading activity on the premises. The first reference to it as the Steelyard (der Stahlhof) occurs in 1422. Starting with trade in coarse woollen fabrics, the Hanseatic League had the effect of bringing both commerce and industry to northern Germany. As trade increased, newer and finer woollen and linen fabrics, and even silks, were manufactured in northern Germany. The same refinement of products out of cottage industry occurred in other fields, e.g. etching, wood carving, armour production, engraving of metals, and wood-turning. The century-long monopolization of sea navigation and trade by the Hanseatic League ensured that the Renaissance arrived in northern Germany long before it did in the rest of Europe. A legacy of the period is a regional style of architecture known as the Weser Renaissance, typified by the embellished facade added to the Bremen Rathaus in 1612. In addition to the major Kontors, individual Hanseatic ports had a representative merchant and warehouse. In England this happened in Boston, Bristol, Bishop's Lynn (now King's Lynn, which features the sole remaining Hanseatic warehouse in England), Hull, Ipswich, Norwich, Yarmouth (now Great Yarmouth), and York. The league primarily traded timber, furs, resin (or tar), flax, honey, wheat, and rye from the east to Flanders and England, with cloth (and, increasingly, manufactured goods) going in the other direction. Metal ore (principally copper and iron) and herring came southwards from Sweden. German colonists in the 12th and 13th centuries settled in numerous cities on and near the east Baltic coast, such as Elbing (Elbląg), Thorn (Toruń), Reval (Tallinn), Riga, and Dorpat (Tartu), which became members of the Hanseatic League, and some of which still retain many Hansa buildings and bear the style of their Hanseatic days. Most were granted Lübeck law (Lübisches Recht), after the league's most prominent town. The law provided that they had to appeal in all legal matters to Lübeck's city council. The Livonian Confederation, formed in 1435, incorporated modern-day Estonia and parts of Latvia and had its own Hanseatic parliament (diet); all of its major towns became members of the Hanseatic League. The dominant language of trade was Middle Low German, a dialect with significant impact on the languages of countries involved in the trade, particularly the larger Scandinavian languages, Estonian, and Latvian. Zenith The league had a fluid structure, but its members shared some characteristics; most of the Hansa cities either started as independent cities or gained independence through the collective bargaining power of the league, though such independence remained limited. The Hanseatic free cities owed allegiance directly to the Holy Roman Emperor, without any intermediate family tie of obligation to the local nobility. Another similarity involved the cities' strategic locations along trade routes. At the height of their power in the late-14th century, the merchants of the Hanseatic League succeeded in using their economic power and, sometimes, their military might—trade routes required protection and the league's ships sailed well-armed—to influence imperial policy. The league also wielded power abroad. Between 1361 and 1370 it waged war against Denmark.
Initially unsuccessful, Hanseatic towns in 1368 allied in the Confederation of Cologne, sacked Copenhagen and Helsingborg, and forced Valdemar IV, King of Denmark, and his son-in-law Haakon VI, King of Norway, to grant the league 15% of the profits from Danish trade in the subsequent peace treaty of Stralsund in 1370, thus gaining an effective trade and economic monopoly in Scandinavia. This favourable treaty marked the height of Hanseatic power. After the Danish-Hanseatic War and the Bombardment of Copenhagen, the Treaty of Vordingborg renewed the commercial privileges in 1435. The Hansa also waged a vigorous campaign against pirates. Between 1392 and 1440 maritime trade of the league faced danger from raids of the Victual Brothers and their descendants, privateers hired in 1392 by Albert of Mecklenburg, King of Sweden, against Margaret I, Queen of Denmark. In the Dutch–Hanseatic War (1438–1441), the merchants of Amsterdam sought and eventually won free access to the Baltic and broke the Hanseatic monopoly. As an essential part of protecting their investment in ships and their cargoes, the League trained pilots and erected lighthouses. Most foreign cities confined the Hanseatic traders to certain trading areas and to their own trading posts. They seldom interacted with the local inhabitants, except when doing business. Many locals, merchant and noble alike, envied the power of the League and tried to diminish it. For example, in London, the local merchants exerted continuing pressure for the revocation of privileges. The refusal of the Hansa to offer reciprocal arrangements to their English counterparts exacerbated the tension. King Edward IV of England reconfirmed the league's privileges in the Treaty of Utrecht despite the latent hostility, in part thanks to the significant financial contribution the League made to the Yorkist side during the Wars of the Roses of 1455–1487. In 1597 Queen Elizabeth of England expelled the League from London, and the Steelyard closed the following year. Tsar Ivan III of Russia closed the Hanseatic Kontor at Novgorod in 1494. The very existence of the League and its privileges and monopolies created economic and social tensions that often crept over into rivalries between League members. Rise of rival powers The economic crises of the late 15th century did not spare the Hansa. Nevertheless, its eventual rivals emerged in the form of the territorial states, whether new or revived, and not just in the west: Ivan III, Grand Prince of Moscow, ended the entrepreneurial independence of Hansa's Novgorod Kontor in 1478—it closed completely and finally in 1494. New vehicles of credit were imported from Italy, where double-entry book-keeping was popularly formalized in 1494, and outpaced the Hansa economy, in which silver coins changed hands rather than bills of exchange. In the 15th century, tensions between the Prussian region and the "Wendish" cities (Lübeck and its eastern neighbours) increased. Lübeck was dependent on its role as centre of the Hansa, being on the shore of the sea without a major river. It was on the entrance of the land route to Hamburg, but this land route could be bypassed by sea travel around Denmark and through the Kattegat. Prussia's main interest, on the other hand, was the export of bulk products like grain and timber, which were very important for England, the Low Countries, and, later on, also for Spain and Italy. 
In 1454, the year of the marriage of Elisabeth of Austria to King-Grand Duke Casimir IV Jagiellon of Poland-Lithuania, the towns of the Prussian Confederation rose up against the dominance of the Teutonic Order and asked Casimir IV for help. Gdańsk (Danzig), Thorn and Elbing became part of the Kingdom of Poland, (from 1466 to 1569 referred to as Royal Prussia, region of Poland) by the Second Peace of Thorn. Poland in turn was heavily supported by the Holy Roman Empire through family connections and by military assistance under the Habsburgs. Kraków, then the capital of Poland, had a loose association with the Hansa. The lack of customs borders
name, of a specific German commercial federation dates from 1157 in London. That year, the merchants of the Hansa in Cologne convinced King Henry II of England to exempt them from all tolls in London and allow them to trade at fairs throughout England. The "Queen of the Hansa", Lübeck, where traders were required to trans-ship goods between the North Sea and the Baltic, gained imperial privileges to become a free imperial city in 1226, as had Hamburg in 1189. In 1241 Lübeck, which had access to the Baltic and North seas' fishing grounds, formed an alliance—a precursor to the League—with Hamburg, another trading city, which controlled access to salt-trade routes from Lüneburg. The allied cities gained control over most of the salt-fish trade, especially the Scania Market; Cologne joined them in the Diet of 1260. "In 1266 King Henry III of England granted the Lübeck and Hamburg Hansa a charter for operations in England, and the Cologne Hansa joined them in 1282 to form the most powerful Hanseatic colony in London. Much of the drive for this co-operation came from the fragmented nature of existing territorial governments, which failed to provide security for trade. Over the next 50 years, the Hansa solidified with formal agreements for confederation and co-operation covering the west and east trade routes. The principal city and linchpin remained Lübeck; with the first general diet of the Hansa held there in 1356, the Hanseatic League acquired an official structure." Commercial expansion Lübeck's location on the Baltic provided access for trade with Scandinavia and Kievan Rus' (with its sea-trade center, Veliky Novgorod), putting it in direct competition with the Scandinavians who had previously controlled most of the Baltic trade-routes. A treaty with the Visby Hansa put an end to this competition: through this treaty the Lübeck merchants gained access to the inland Russian port of Novgorod, where they built a trading post or Kontor (literally: "office"). Although such alliances formed throughout the Holy Roman Empire, the league never became a closely managed formal organisation. Assemblies of the Hanseatic towns met irregularly in Lübeck for a Hansetag (Hanseatic Diet) from 1356 onwards, but many towns chose not to attend nor to send representatives, and decisions were not binding on individual cities. Over the period, a network of alliances grew to include a flexible roster of 70 to 170 cities. The league succeeded in establishing additional Kontors in Bruges (Flanders), Bergen (Norway), and London (England). These trading posts became significant enclaves. The London Kontor, first alluded to by crusaders from Lübeck for whom the Kontor arranged the purchase of a replacement cog-ship in Summer 1189, formally established in 1320, stood west of London Bridge near Upper Thames Street, on the site now occupied by Cannon Street station. It grew into a significant walled community with its own warehouses, weighhouse, church, offices and houses, reflecting the importance and scale of trading activity on the premises. The first reference to it as the Steelyard (der Stahlhof) occurs in 1422. Starting with trade in coarse woollen fabrics, the Hanseatic League had the effect of bringing both commerce and industry to northern Germany. As trade increased, newer and finer woollen and linen fabrics, and even silks, were manufactured in northern Germany. The same refinement of products out of cottage industry occurred in other fields, e.g. 
etching, wood carving, armour production, engraving of metals, and wood-turning. The century-long monopolization of sea navigation and trade by the Hanseatic League ensured that the Renaissance arrived in northern Germany long before it did in the rest of Europe. A legacy of the period is a regional style of architecture known as the Weser Renaissance, typified by the embellished facade added to the Bremen Rathaus in 1612. In addition to the major Kontors, individual Hanseatic ports had a representative merchant and warehouse. In England this happened in Boston, Bristol, Bishop's Lynn (now King's Lynn, which features the sole remaining Hanseatic warehouse in England), Hull, Ipswich, Norwich, Yarmouth (now Great Yarmouth), and York. The league primarily traded timber, furs, resin (or tar), flax, honey, wheat, and rye from the east to Flanders and England, with cloth (and, increasingly, manufactured goods) going in the other direction. Metal ore (principally copper and iron) and herring came southwards from Sweden. German colonists in the 12th and 13th centuries settled in numerous cities on and near the east Baltic coast, such as Elbing (Elbląg), Thorn (Toruń), Reval (Tallinn), Riga, and Dorpat (Tartu), which became members of the Hanseatic League, and some of which still retain many Hansa buildings and bear the style of their Hanseatic days. Most were granted Lübeck law (Lübisches Recht), after the league's most prominent town. The law provided that they had to appeal in all legal matters to Lübeck's city council. The Livonian Confederation, formed in 1435, incorporated modern-day Estonia and parts of Latvia and had its own Hanseatic parliament (diet); all of its major towns became members of the Hanseatic League. The dominant language of trade was Middle Low German, a dialect that left a significant mark on the languages of the countries involved in the trade, particularly the larger Scandinavian languages, Estonian, and Latvian.
Zenith
The league had a fluid structure, but its members shared some characteristics; most of the Hansa cities either started as independent cities or gained independence through the collective bargaining power of the league, though such independence remained limited. The Hanseatic free cities owed allegiance directly to the Holy Roman Emperor, without any intermediate family tie of obligation to the local nobility. Another similarity involved the cities' strategic locations along trade routes. At the height of their power in the late-14th century, the merchants of the Hanseatic League succeeded in using their economic power and, sometimes, their military might—trade routes required protection and the league's ships sailed well-armed—to influence imperial policy. The league also wielded power abroad. Between 1361 and 1370 it waged war against Denmark. After initial setbacks, Hanseatic towns allied in 1368 in the Confederation of Cologne, sacked Copenhagen and Helsingborg, and forced Valdemar IV, King of Denmark, and his son-in-law Haakon VI, King of Norway, to grant the league 15% of the profits from Danish trade in the subsequent peace treaty of Stralsund in 1370, thus gaining an effective trade and economic monopoly in Scandinavia. This favourable treaty marked the height of Hanseatic power. After the Danish-Hanseatic War and the Bombardment of Copenhagen, the Treaty of Vordingborg renewed the commercial privileges in 1435. The Hansa also waged a vigorous campaign against pirates.
Between 1392 and 1440 maritime trade of the league faced danger from raids of the Victual Brothers and their descendants, privateers hired in 1392 by Albert of Mecklenburg, King of Sweden, against Margaret I, Queen of Denmark. In the Dutch–Hanseatic War (1438–1441), the merchants of Amsterdam sought and eventually won free access to the Baltic and broke the Hanseatic monopoly. As an essential part of protecting their investment in ships and their cargoes, the League trained pilots and erected lighthouses. Most foreign cities confined the Hanseatic traders to certain trading areas and to their own trading posts. They seldom interacted with the local inhabitants, except when doing business. Many locals, merchant and noble alike, envied the power of the League and tried to diminish it. For example, in London, the local merchants exerted continuing pressure for the revocation of privileges. The refusal of the Hansa to offer reciprocal arrangements to their English counterparts exacerbated the tension. King Edward IV of England reconfirmed the league's privileges in the Treaty of Utrecht despite the latent hostility, in part thanks to the significant financial contribution the League made to the Yorkist side during the Wars of the Roses of 1455–1487. In 1597 Queen Elizabeth of England expelled the League from London, and the Steelyard closed the following year. Tsar Ivan III of Russia closed the Hanseatic Kontor at Novgorod in 1494. The very existence of the League and its privileges and monopolies created economic and social tensions that often crept over into rivalries between League members. Rise of rival powers The economic crises of the late 15th century did not spare the Hansa. Nevertheless, its eventual rivals emerged in the form of the territorial states, whether new or revived, and not just in the west: Ivan III, Grand Prince of Moscow, ended the entrepreneurial independence of Hansa's Novgorod Kontor in 1478—it closed completely and finally in 1494. New vehicles of credit were imported from Italy, where double-entry book-keeping was popularly formalized in 1494, and outpaced the Hansa economy, in which silver coins changed hands rather than bills of exchange. In the 15th century, tensions between the Prussian region and the "Wendish" cities (Lübeck and its eastern neighbours) increased. Lübeck was dependent on its role as centre of the Hansa, being on the shore of the sea without a major river. It was on the entrance of the land route to Hamburg, but this land route could be bypassed by sea travel around Denmark and through the Kattegat. Prussia's main interest, on the other hand, was the export of bulk products like grain and timber, which were very important for England, the Low Countries, and, later on, also for Spain and Italy. In 1454, the year of the marriage of Elisabeth of Austria to King-Grand Duke Casimir IV Jagiellon of Poland-Lithuania, the towns of the Prussian Confederation rose up against the dominance of the Teutonic Order and asked Casimir IV for help. Gdańsk (Danzig), Thorn and Elbing became part of the Kingdom of Poland, (from 1466 to 1569 referred to as Royal Prussia, region of Poland) by the Second Peace of Thorn. Poland in turn was heavily supported by the Holy Roman Empire through family connections and by military assistance under the Habsburgs. Kraków, then the capital of Poland, had a loose association with the Hansa. 
The lack of customs borders on the River Vistula after 1466 helped Polish grain exports, transported down the Vistula to the sea, to grow steadily from the late 15th century through the 17th century. The Hansa-dominated maritime grain trade made Poland one of the main areas of its activity, helping Danzig to become the Hansa's largest city. The member cities took responsibility for their own protection. In 1567, a Hanseatic League agreement reconfirmed previous obligations and rights of league members, such as common protection and defense against enemies. The Prussian Quartier cities of Thorn, Elbing, Königsberg, Riga, and Dorpat also signed. When pressed by the King of Poland–Lithuania, Danzig remained neutral and would not allow ships running for Poland into its territory. They had to anchor somewhere else, such as at Pautzke (Puck). A major economic advantage for the Hansa was its control of the shipbuilding market, mainly in Lübeck and in Danzig. The Hansa sold ships everywhere in Europe, including Italy. They drove out the Dutch, because Holland wanted to favour Bruges as a huge staple market at the end of a trade route. When the Dutch started to become competitors of the Hansa in shipbuilding, the Hansa tried to stop the flow of shipbuilding technology from Hanseatic towns to Holland. Danzig, a trading partner of Amsterdam, attempted to forestall the decision. Dutch ships sailed to Danzig to take grain from the city directly, to the dismay of Lübeck. Hollanders also circumvented the Hanseatic towns by trading directly with north German princes in non-Hanseatic towns. Dutch freight costs were much lower than those of the Hansa, and the Hansa were excluded as middlemen. When Bruges, Antwerp and Holland all became part of the Duchy of Burgundy they actively tried to take over the monopoly of trade from the Hansa, and the staples market from
a town in the United States
Harvard, Nebraska, a city in the United States
Harvard Township, Clay County, Nebraska, a township in the United States
Aeroplanes
Harvard (aeroplane), a name often used for the North American T-6
Harvard Blue Yonder EZ (aeroplane), a replica of the Harvard
Ships
List of ships named Harvard
USS Harvard, several ships of the United States Navy
Other
Harvard (name), a given name and surname
Harvard architecture, a type of computer architecture
Harvard (automobile), a Brass Era car built in New York between
television personality
John Harvard (clergyman) (1607–1638), clergyman after whom Harvard University is named
John Harvard (politician) (1938–2016), former Lieutenant-Governor of Manitoba
Boston area
Harvard College, the undergraduate division of Harvard University
Harvard Crimson, Harvard University's athletic program
The Harvard Crimson, Harvard University's daily student newspaper
Harvard Bridge, a bridge over the Charles River near the Massachusetts Institute of Technology
Harvard Square, a square in Cambridge, Massachusetts, adjacent to the Harvard University campus
Harvard Yard, the center of the Harvard campus, adjacent to Harvard Square
Harvard (MBTA station), the subway station located in Harvard Square
Cities
Harvard, Idaho
Harvard, Illinois, a city in
Republic of the Congo
Dahomey - Benin
Equatoria - Sudan and Uganda
Fernando Pó - Bioko
French Congo - Gabon and Republic of the Congo
French Equatorial Africa - Chad, Central African Republic, Gabon, Republic of the Congo
French Sudan - Mali
French West Africa - Mauritania, Senegal, Mali, Guinea, Ivory Coast, Niger, Burkina Faso, and Benin
German East Africa - Tanzania and Zanzibar
German South-West Africa - Namibia
The Gold Coast - Ghana
Guinea
Grain Coast or Pepper Coast - Liberia
Malagasy Republic - Madagascar
Medri Bahri - Eritrea
Monomotapa - Zimbabwe, South Africa, Lesotho, Swaziland, Mozambique and parts of Namibia and Botswana
Middle Congo - Republic of the Congo
Nubia - Sudan and Egypt
Numidia - Algeria, Libya and Tunisia
Nyasaland - Malawi
Western Pentapolis - Libya
Portuguese Guinea - Guinea-Bissau
Rhodesia:
Northern Rhodesia - Zambia
Southern Rhodesia - Zimbabwe (Southern Rhodesia was commonly referred to simply as Rhodesia from 1964 to 1980)
Rwanda-Urundi - Rwanda and Burundi
The Slave Coast - Benin
Somaliland - Somalia
South-West Africa - Namibia
Spanish Sahara - Western Sahara
Swaziland - Eswatini
In the late 1960s and early 1970s, the enormous commercial success of three books - Rosemary's Baby (1967) by Ira Levin, The Exorcist by William Peter Blatty, and The Other by Thomas Tryon - encouraged publishers to begin releasing numerous other horror novels, thus creating a "horror boom". One of the best-known late-20th century horror writers is Stephen King, known for Carrie, The Shining, It, Misery and several dozen other novels and about 200 short stories. Beginning in the 1970s, King's stories have attracted a large audience, for which the U.S. National Book Foundation recognized him in 2003. Other popular horror authors of the period included Anne Rice, Brian Lumley, Graham Masterton, James Herbert, Dean Koontz, Clive Barker, Ramsey Campbell, and Peter Straub.
21st century
Contemporary best-selling book series exist in genres related to horror fiction, such as the werewolf urban-fantasy Kitty Norville books by Carrie Vaughn (2005 onward). Horror elements continue to expand outside the genre. The more traditional historical horror of Dan Simmons's 2007 alternate-history novel The Terror sits on bookstore shelves next to genre mash-ups such as Pride and Prejudice and Zombies (2009), and historical fantasy and horror comics such as Hellblazer (1993 onward) and Mike Mignola's Hellboy (1993 onward). Horror also serves as one of the central genres in more complex modern works such as Mark Z. Danielewski's House of Leaves (2000), a finalist for the National Book Award. There are many horror novels for teens, such as The Monstrumologist by Rick Yancey (2009). Additionally, many movies, particularly animated ones, use a horror aesthetic. These can be collectively referred to as "children's horror". Although it is not known for certain why children enjoy these films (their appeal seems counter-intuitive), one theory is that the grotesque monsters are what fascinate children. Relatedly, the internalized impact of horror television and film on children is under-researched, especially compared with the body of work on the effects of televised and cinematic violence on young minds. What little research there is tends to be inconclusive on the impact that viewing such media has.
Characteristics
One defining trait of the horror genre is that it provokes an emotional, psychological, or physical response within readers that causes them to react with fear. One of H. P. Lovecraft's most famous remarks about the genre is the first sentence of his seminal essay, "Supernatural Horror in Literature": "The oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown." Science fiction historian Darrell Schweitzer has stated, "In the simplest sense, a horror story is one that scares us" and "the true horror story requires a sense of evil, not necessarily in a theological sense; but the menaces must be truly menacing, life-destroying, and antithetical to happiness." In her essay "Elements of Aversion", Elizabeth Barrette articulates the need felt by some for horror tales in a modern world: in a sense similar to the reason a person seeks out the controlled thrill of a roller coaster, readers in the modern era seek out feelings of horror and terror to feel a sense of excitement.
However, Barrette adds that horror fiction is one of the few media in which readers seek out a form of art that forces them to confront ideas and images they "might rather ignore to challenge preconceptions of all kinds." One can see the confrontation of ideas that readers and characters would "rather ignore" throughout literature, in famous moments such as Hamlet's musings about the skull of Yorick, its implications of the mortality of humanity, and the gruesome end that bodies inevitably come to. In horror fiction, the confrontation with the gruesome is often a metaphor for the problems facing the author's own generation. There are many theories as to why people enjoy being scared. For example, "people who like horror films are more likely to score highly for openness to experience, a personality trait linked to intellect and imagination." It is a now commonly accepted viewpoint that the horror elements of Dracula's portrayal of vampirism are metaphors for sexuality in a repressed Victorian era. But this is merely one of many interpretations of the metaphor of Dracula. Jack Halberstam postulates many of these in his essay Technologies of Monstrosity: Bram Stoker's Dracula. Halberstam articulates a view of Dracula as manifesting the growing perception of the aristocracy as an evil and outdated notion to be defeated. The depiction of a multinational band of protagonists using the latest technologies (such as
the vampiress is most notably derived from the real-life noblewoman and murderess, Elizabeth Bathory, and helped usher in the emergence of horror fiction in the 18th century, such as through László Turóczi's 1729 book Tragica Historia.
18th century
The 18th century saw the gradual development of Romanticism and the Gothic horror genre. It drew on the written and material heritage of the Late Middle Ages, finding its form with Horace Walpole's seminal and controversial 1764 novel, The Castle of Otranto. In fact, the first edition was published disguised as an actual medieval romance from Italy, discovered and republished by a fictitious translator. Once revealed as modern, it struck many as anachronistic, reactionary, or simply in poor taste, but it proved immediately popular. Otranto inspired Vathek (1786) by William Beckford, A Sicilian Romance (1790), The Mysteries of Udolpho (1794) and The Italian (1796) by Ann Radcliffe, and The Monk (1797) by Matthew Lewis. A significant amount of horror fiction of this era was written by women and marketed towards a female audience, a typical scenario of the novels being a resourceful female menaced in a gloomy castle.
19th century
The Gothic tradition blossomed into the genre that modern readers today call horror literature in the 19th century. Influential works and characters that continue resonating in fiction and film today saw their genesis in the Brothers Grimm's "Hänsel und Gretel" (1812), Mary Shelley's Frankenstein (1818), John Polidori's "The Vampyre" (1819), Charles Maturin's Melmoth the Wanderer (1820), Washington Irving's "The Legend of Sleepy Hollow" (1820), Jane C. Loudon's The Mummy!: Or a Tale of the Twenty-Second Century (1827), Victor Hugo's The Hunchback of Notre Dame (1831), Thomas Peckett Prest's Varney the Vampire (1847), the works of Edgar Allan Poe, the works of Sheridan Le Fanu, Robert Louis Stevenson's Strange Case of Dr Jekyll and Mr Hyde (1886), Oscar Wilde's The Picture of Dorian Gray (1890), H. G. Wells' The Invisible Man (1897), and Bram Stoker's Dracula (1897). Each of these works created an enduring icon of horror seen in later re-imaginings on the page, stage and screen.
20th century
A proliferation of cheap periodicals around the turn of the century led to a boom in horror writing. For example, Gaston Leroux serialized his Le Fantôme de l'Opéra before it became a novel in 1910. One writer who specialized in horror fiction for mainstream pulps, such as All-Story Magazine, was Tod Robbins, whose fiction deals with themes of madness and cruelty. In Russia, the writer Alexander Belyaev popularized these themes in his story Professor Dowell's Head (1925), in which a mad doctor performs experimental head transplants and reanimations on bodies stolen from the morgue, and which was first published as a magazine serial before being turned into a novel. Later, specialist publications emerged to give horror writers an outlet; prominent among them were Weird Tales and Unknown Worlds. Influential horror writers of the early 20th century made inroads in these media. In particular, the venerated horror author H. P. Lovecraft and his enduring Cthulhu Mythos transformed and popularized the genre of cosmic horror, and M. R. James is credited with redefining the ghost story in that era. The serial murderer became a recurring theme. Yellow journalism and sensationalism of various murderers, such as Jack the Ripper and, to a lesser extent, Carl Panzram, Fritz Haarman, and Albert Fish, all perpetuated this phenomenon.
The trend continued in the postwar era, partly renewed after the murders committed by Ed Gein. In 1959, Robert Bloch, inspired by the murders, wrote Psycho. The crimes committed in 1969 by the Manson Family influenced the slasher theme in horror fiction of the 1970s. In 1981, Thomas Harris wrote Red Dragon, introducing Dr. Hannibal Lecter. In 1988, the sequel to that novel, The Silence of the Lambs, was published. Early cinema was inspired by many aspects of horror literature, and started a strong tradition of horror films and subgenres that continues to this day. Up until the graphic depictions of violence and gore on the screen commonly associated with 1960s and 1970s slasher films and splatter films, comic books such as those published by EC Comics (most notably Tales From The Crypt) in the 1950s satisfied readers' quests for horror imagery that the silver screen could not provide. This imagery made these comics controversial, and as a consequence, they were frequently censored. The modern zombie tale dealing with the motif of the living dead harks back to works including H. P. Lovecraft's stories "Cool Air" (1925), "In The Vault" (1926), and "The Outsider" (1926), and Dennis Wheatley's "Strange Conflict" (1941). Richard Matheson's novel I Am Legend (1954) influenced an entire genre of apocalyptic zombie fiction emblematized by the films of George A. Romero.
an entire function ("whole") in a domain of the complex plane while a meromorphic function (defined to mean holomorphic except at certain isolated poles), resembles a rational fraction ("part") of entire functions in a domain of the complex plane. Cauchy had instead used the term synectic. Today, the term "holomorphic function" is sometimes preferred to "analytic function". An important result in complex analysis is that every holomorphic function is complex analytic, a fact that does not follow obviously from the definitions. The term "analytic" is however also in wide use. Properties Because complex differentiation is linear and obeys the product, quotient, and chain rules, the sums, products and compositions of holomorphic functions are holomorphic, and the quotient of two holomorphic functions is holomorphic wherever the denominator is not zero. That is, if functions and are holomorphic in a domain , then so are , , , and . Furthermore, is holomorphic if has no zeros in , or is meromorphic otherwise. If one identifies with the real plane , then the holomorphic functions coincide with those functions of two real variables with continuous first derivatives which solve the Cauchy–Riemann equations, a set of two partial differential equations. Every holomorphic function can be separated into its real and imaginary parts , and each of these is a harmonic function on (each satisfies Laplace's equation ), with the harmonic conjugate of . Conversely, every harmonic function on a simply connected domain is the real part of a holomorphic function: If is the harmonic conjugate of , unique up to a constant, then is holomorphic. Cauchy's integral theorem implies that the contour integral of every holomorphic function along a loop vanishes: Here is a rectifiable path in a simply connected complex domain whose start point is equal to its end point, and is a holomorphic function. Cauchy's integral formula states that every function holomorphic inside a disk is completely determined by its values on the disk's boundary. Furthermore: Suppose is a complex domain, is a holomorphic function and the closed disk is completely contained in . Let be the circle forming the boundary of . Then for every in the interior of : where the contour integral is taken counter-clockwise. The derivative can be written as a contour integral using Cauchy's differentiation formula: for any simple loop positively winding once around , and for infinitesimal positive loops around . In regions where the first derivative is not zero, holomorphic functions are conformal: they preserve angles and the shape (but not size) of small figures. Every holomorphic function is analytic. That is, a holomorphic function has derivatives of every order at each point in its domain, and it coincides with its own Taylor series at in a neighbourhood of . In fact, coincides with its Taylor series at in any disk centred at that point and lying within the domain of the function. From an algebraic point of view, the set of holomorphic functions on an open set is a commutative ring and a complex vector space. Additionally, the set of holomorphic functions in an open set is an integral domain if and only if the open set is connected. In fact, it is a locally convex topological vector space, with the seminorms being the suprema on compact subsets. From a geometric perspective, a function is holomorphic at if and only if its exterior derivative in a neighbourhood of is equal to for some continuous function . 
Conversely, every harmonic function $u(x, y)$ on a simply connected domain is the real part of a holomorphic function: if $v$ is the harmonic conjugate of $u$, unique up to a constant, then $f = u + iv$ is holomorphic. Cauchy's integral theorem implies that the contour integral of every holomorphic function along a loop vanishes:
$$\oint_\gamma f(z)\,\mathrm{d}z = 0.$$
Here $\gamma$ is a rectifiable path in a simply connected complex domain $U \subset \mathbb{C}$ whose start point is equal to its end point, and $f \colon U \to \mathbb{C}$ is a holomorphic function. Cauchy's integral formula states that every function holomorphic inside a disk is completely determined by its values on the disk's boundary. Furthermore: suppose $U \subset \mathbb{C}$ is a complex domain, $f \colon U \to \mathbb{C}$ is a holomorphic function and the closed disk $D = \{\, z : |z - z_0| \le r \,\}$ is completely contained in $U$. Let $\gamma$ be the circle forming the boundary of $D$. Then for every $a$ in the interior of $D$:
$$f(a) = \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{z - a}\,\mathrm{d}z,$$
where the contour integral is taken counter-clockwise. The derivative $f'(a)$ can be written as a contour integral using Cauchy's differentiation formula:
$$f'(a) = \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{(z - a)^2}\,\mathrm{d}z,$$
for any simple loop $\gamma$ positively winding once around $a$, and for infinitesimal positive loops around $a$. In regions where the first derivative is not zero, holomorphic functions are conformal: they preserve angles and the shape (but not size) of small figures. Every holomorphic function is analytic. That is, a holomorphic function $f$ has derivatives of every order at each point $a$ in its domain, and it coincides with its own Taylor series at $a$ in a neighbourhood of $a$. In fact, $f$ coincides with its Taylor series at $a$ in any disk centred at that point and lying within the domain of the function. From an algebraic point of view, the set of holomorphic functions on an open set is a commutative ring and a complex vector space. Additionally, the set of holomorphic functions in an open set $U$ is an integral domain if and only if the open set $U$ is connected. In fact, it is a locally convex topological vector space, with the seminorms being the suprema on compact subsets. From a geometric perspective, a function $f$ is holomorphic at $z_0$ if and only if its exterior derivative $\mathrm{d}f$ in a neighbourhood $U$ of $z_0$ is equal to $f'(z)\,\mathrm{d}z$ for some continuous function $f'$. It follows from
$$0 = \mathrm{d}^2 f = \mathrm{d}(f'\,\mathrm{d}z) = \mathrm{d}f' \wedge \mathrm{d}z$$
that $\mathrm{d}f'$ is also proportional to $\mathrm{d}z$, implying that the derivative $f'$ is itself holomorphic and thus that $f$ is infinitely differentiable. Similarly, $\mathrm{d}(f\,\mathrm{d}z) = f'\,\mathrm{d}z \wedge \mathrm{d}z = 0$ implies that any function $f$ that is holomorphic on the simply connected region $U$ is also integrable on $U$. (For a path $\gamma$ from $z_0$ to $z$ lying entirely in $U$, define $F_\gamma(z) = F_0 + \int_\gamma f\,\mathrm{d}z$; in light of the Jordan curve theorem and the generalized Stokes' theorem, $F_\gamma(z)$ is independent of the particular choice of path $\gamma$, and thus $F(z)$ is a well-defined function on $U$ having $\mathrm{d}F = f\,\mathrm{d}z$ and $F' = f$.)
Examples
All polynomial functions in $z$ with complex coefficients are entire functions (holomorphic in the whole complex plane $\mathbb{C}$), and so are the exponential function $\exp z$ and the trigonometric functions $\cos z$ and $\sin z$ (cf. Euler's formula). The principal branch of the complex logarithm function $\log z$ is holomorphic on the domain $\mathbb{C} \smallsetminus \{\, z \in \mathbb{R} : z \le 0 \,\}$. The square root function can be defined as $\sqrt{z} = e^{\frac{1}{2}\log z}$ and is therefore holomorphic wherever the logarithm $\log z$ is. The reciprocal function $1/z$ is holomorphic on $\mathbb{C} \smallsetminus \{0\}$. (The reciprocal function, and any other rational function, is meromorphic on $\mathbb{C}$.) As a consequence of the Cauchy–Riemann equations, any real-valued holomorphic function must be constant. Therefore, the absolute value $|z|$, the argument $\arg z$, the real part $\operatorname{Re} z$ and the imaginary part $\operatorname{Im} z$ are not holomorphic. Another typical example of a continuous function which is not holomorphic is the complex conjugate $\bar{z}$. (The complex conjugate is antiholomorphic.)
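Cauchy's integral formula above is straightforward to check numerically. The sketch below is an illustration added here, not part of the original article; the function, contour, and number of sample points are arbitrary choices. It approximates the contour integral of $f(z)/(z - a)$ over the unit circle with an equally spaced sum and compares the result with the directly evaluated value $f(a)$:

```python
import cmath

def cauchy_integral(f, center, radius, a, n=20000):
    """Approximate (1/(2*pi*i)) * integral of f(z)/(z - a) dz over the
    counter-clockwise circle |z - center| = radius, using n sample points."""
    total = 0
    for k in range(n):
        t = 2 * cmath.pi * k / n
        z = center + radius * cmath.exp(1j * t)                    # point on the contour
        dz = 1j * radius * cmath.exp(1j * t) * (2 * cmath.pi / n)  # z'(t) dt
        total += f(z) / (z - a) * dz
    return total / (2j * cmath.pi)

f = lambda z: cmath.exp(z) * z ** 2      # an entire function (arbitrary example)
a = 0.3 + 0.2j                           # a point inside the unit circle
print(cauchy_integral(f, 0, 1.0, a))     # approximately f(a)
print(f(a))
```

Because the integrand is smooth and periodic in the parametrisation, even this naive equally spaced sum converges very quickly to $f(a)$.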
Several variables
The definition of a holomorphic function generalizes to several complex variables in a straightforward way. Let $D$ denote a polydisk or, more generally, an open subset of $\mathbb{C}^n$, and let $f \colon D \to \mathbb{C}$. The function $f$ is analytic at a point $p$ in $D$ if there exists an open neighbourhood of $p$ in which $f$ is equal to a convergent power series in $n$ complex variables. Define $f$ to be holomorphic if it is analytic at each point in its domain. Osgood's lemma shows (using the multivariate Cauchy integral formula) that, for a continuous function $f$, this is equivalent to $f$ being holomorphic in each variable separately (meaning that if any $n - 1$ coordinates are fixed, then the restriction of $f$ is a holomorphic function of the remaining coordinate). The much deeper Hartogs' theorem proves that the continuity hypothesis is unnecessary: $f$ is holomorphic if and only if it is holomorphic in each variable separately. More generally, a function of several complex variables that is square integrable over every compact subset of its domain is analytic if and only if it satisfies the Cauchy–Riemann equations in the sense of distributions. Functions of several complex
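To make "holomorphic in each variable separately" concrete, one can fix all but one coordinate and check the Cauchy–Riemann equations in the remaining variable with a computer algebra system. The sketch below is an illustration added for this edition; the example function $f(z, w) = zw + e^z$ is an arbitrary choice, and SymPy is used only as a convenient symbolic checker:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v', real=True)
z = x + sp.I * y              # first complex variable, z = x + iy
w = u + sp.I * v              # second complex variable, held fixed here

f = z * w + sp.exp(z)         # example function of two complex variables

# Separate f into real and imaginary parts as functions of (x, y).
re_f, im_f = sp.expand_complex(f).as_real_imag()

# Cauchy-Riemann equations in the first variable:
#   d(Re f)/dx = d(Im f)/dy   and   d(Re f)/dy = -d(Im f)/dx
cr1 = sp.simplify(sp.diff(re_f, x) - sp.diff(im_f, y))
cr2 = sp.simplify(sp.diff(re_f, y) + sp.diff(im_f, x))
print(cr1, cr2)               # expected output: 0 0
```

Swapping the roles of $z$ and $w$ verifies holomorphy in the second variable as well; by Hartogs' theorem, as stated above, this separate holomorphy already implies joint holomorphy.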
in small cells (some of which were small enough to impede lying down), throwing prisoners out of helicopters to their death or into the sea with concrete on their feet, and burying people alive. The FLN also committed many atrocities, both against French pieds-noirs and against fellow Algerians whom they deemed to be supporting the French. These crimes included the killing of unarmed men, women and children; the rape, disembowelment or decapitation of women; and the murder of children by slitting their throats or dashing their heads against walls. Between 350,000 and 1 million Algerians are estimated to have died during the war, and more than 2 million, out of a total Muslim population of 9 or 10 million, were made into refugees or forcibly relocated into government-controlled camps. Much of the countryside and agriculture was devastated, along with the modern economy, which had been dominated by urban European settlers (the pieds-noirs). French sources estimated that at least 70,000 Muslim civilians were killed, or abducted and presumed killed, by the FLN during the Algerian War. Nearly one million people of mostly French, Spanish and Italian descent left the country at independence, owing to the privileges they lost as settlers and their unwillingness to live on an equal footing with indigenous Algerians. With them left most Algerians of Jewish descent and those Muslim Algerians who had supported a French Algeria (harkis). Between 30,000 and 150,000 pro-French Muslims were also killed in Algeria by the FLN in post-war reprisals.
Independent Algeria
Ben Bella presidency (1962–65)
The Algerian independence referendum was held in French Algeria on 1 July 1962, passing with 99.72% of the vote. As a result, France declared Algeria independent on 3 July. On 8 September 1963, the first Algerian constitution was adopted by nationwide referendum under close supervision by the National Liberation Front (FLN). Later that month, Ahmed Ben Bella was formally elected the first president of Algeria for a five-year term after receiving support from the FLN and the military, led by Colonel Houari Boumédiène. However, the war for independence and its aftermath had severely disrupted Algeria's society and economy. In addition to the destruction of much of Algeria's infrastructure, an exodus of the upper-class French and European colons from Algeria deprived the country of most of its managers, civil servants, engineers, teachers, physicians, and skilled workers. The homeless and displaced numbered in the hundreds of thousands, many suffering from illness, and some 70 percent of the workforce was unemployed. The months immediately following independence witnessed the pell-mell rush of Algerians and government officials to claim the property and jobs left behind by the European colons. For example, in the March Decrees of 1963, President Ben Bella declared all agricultural, industrial, and commercial properties previously owned and operated by Europeans vacant, thereby legalizing confiscation by the state. The military played an important role in Ben Bella's administration. Since the president recognized the role that the military had played in bringing him to power, he appointed senior military officers to ministerial and other important positions within the new state, including naming Colonel Boumédiène as defence minister. These military officials played a core role in implementing the country's security and foreign policy.
Under the new constitution, Ben Bella's presidency combined the functions of chief of state and head of government with those of supreme commander of the armed forces. He formed his government without needing legislative approval and was responsible for the definition and direction of its policies. There was no effective institutional check on the president's powers. As a result, opposition leader Hocine Aït-Ahmed quit the National Assembly in 1963 to protest the increasingly dictatorial tendencies of the regime and formed a clandestine resistance movement, the Socialist Forces Front (Front des Forces Socialistes—FFS), dedicated to overthrowing the Ben Bella regime by force. Late summer 1963 saw sporadic incidents attributed to the FFS, but more serious fighting broke out a year later, and the army moved quickly and in force to crush a rebellion. Minister of Defense Boumédiène had no qualms about sending the army to put down regional uprisings because he felt they posed a threat to the state. However, President Ben Bella attempted to co-opt allies from among these regional leaders in order to undermine the ability of military commanders to influence foreign and security policy. Tensions consequently built between Boumédiène and Ben Bella, and in 1965 the military removed Ben Bella in a coup d'état, replacing him with Boumédiène as head of state. The 1965 coup and the Boumédienne military regime On 19 June 1965, Houari Boumédiène deposed Ahmed Ben Bella in a military coup d'état that was both swift and bloodless. Ben Bella "disappeared", and would not be seen again until he was released from house arrest in 1980 by Boumédiène's successor, Colonel Chadli Bendjedid. Boumédiène immediately dissolved the National Assembly and suspended the 1963 constitution. Political power resided in the Nation Council of the Algerian Revolution (Conseil National de la Révolution Algérienne—CNRA), a predominantly military body intended to foster cooperation among various factions in the army and the party. Houari Boumédiène's position as head of government and of state was initially insecure, partly because of his lack of a significant power base outside of the armed forces. He relied strongly on a network of former associates known as the Oujda group, named after Boumédiène's posting as National Liberation Army (Armée de Libération Nationale—ALN) leader in the Moroccan border town of Oujda during the war years, but he could not fully dominate his fractious regime. This situation may have accounted for his deference to collegial rule. Over Boumédiène's 11-year reign as Chairman of the CNRA, the council introduced two formal mechanisms: the People's Municipal Assembly (Assemblée Populaires Communales) and the People's Provincial Assembly (Assemblée Populaires de Wilaya) for popular participation in politics. Under Boumédiène's rule, leftist and socialist concepts were merged with Islam. Boumédiène also used Islam to opportunistically consolidate his power. On one hand, he made token concessions and cosmetic changes to the government to appear more Islamic, such as putting Islamist Ahmed Taleb Ibrahimi in charge of national education in 1965 and adopting policies criminalizing gambling, establishing Friday as the national holiday, and dropping plans to introduce birth control to paint an Islamic image of the new government. But on the other hand, Boumédiène's government also progressively repressed Islamic groups, such as by ordering the dissolution of Al Qiyam. 
Following attempted coups—most notably that of chief-of-staff Col. Tahar Zbiri in December 1967—and a failed assassination attempt on 25 April 1968, Boumédiène consolidated power and forced military and political factions to submit. He took a systematic, authoritarian approach to state building, arguing that Algeria needed stability and an economic base before building any political institutions. Eleven years after Boumédiène took power, after much public debate, a long-promised new constitution was promulgated in November 1976. The constitution restored the National Assembly and gave it legislative, consent, and oversight functions. Boumédiène was later elected president with 95 percent of the votes cast.
Bendjedid rule (1978–92), the 1992 coup d'état and the rise of the civil war
Boumédiène's death on 27 December 1978 set off a struggle within the FLN to choose a successor. A deadlock between two candidates was broken when Colonel Chadli Bendjedid, a moderate who had collaborated with Boumédiène in deposing Ahmed Ben Bella, was sworn in on February 9, 1979. He was re-elected in 1984 and 1988. After the violent 1988 October Riots, a new constitution was adopted in 1989 that eradicated the Algerian one-party state by allowing the formation of political associations in addition to the FLN. It also removed the armed forces, which had run the government since the days of Boumédiène, from a role in the operation of the government. Among the scores of parties that sprang up under the new constitution, the militant Islamic Salvation Front (Front Islamique du Salut—FIS) was the most successful, winning a majority of votes in the June 1990 municipal elections, as well as the first stage of the December national legislative elections. The surprising first-round success of the fundamentalist FIS party in the December 1991 balloting caused the army to discuss options to intervene in the election. Officers feared that an Islamist government would interfere with their positions and core interests in economic, national security, and foreign policy, since the FIS had promised a fundamental overhaul of the social, political, and economic structure to achieve a radical Islamist agenda. Senior military figures, such as Defence Minister Khaled Nezzar, Chief of the General Staff Abdelmalek Guenaizia, and other leaders of the navy, Gendarmerie, and security services, all agreed that the FIS should be stopped from gaining power at the ballot box. They also agreed that Bendjedid would need to be removed from office because of his determination to uphold the country's new constitution by continuing with the second round of ballots. On 11 January 1992, Bendjedid announced his resignation on national television, saying it was necessary to "protect the unity of the people and the security of the country". Later that same day, the High Council of State (Haut Comité d'Etat—HCE), which was composed of five people (including Khaled Nezzar, Tedjini Haddam, Ali Kafi, Mohamed Boudiaf and Ali Haroun), was appointed to carry out the duties of the president. The new government, led by Sid Ahmed Ghozali, banned all political activity at mosques and began stopping people from attending prayers at popular mosques. The FIS was legally dissolved by Interior Minister Larbi Belkheir on 9 February for attempting "insurrections against the state". A state of emergency was also declared, and extraordinary powers, such as curtailing the right to associate, were granted to the regime.
Between January and March, a growing number of FIS militants were arrested by the military, including Abdelkader Hachani and his successors, Othman Aissani and Rabah Kebir. Following the announcement to dissolve the FIS and implement a state of emergency on 9 February, the Algerian security forces used their new emergency powers to conduct large scale arrests of FIS members and housed them in 5 "detention centers" in the Sahara. Between 5,000 (official number) and 30,000 (FIS number) people were detained. This crackdown led to a fundamental Islamic insurgency, resulting in the continuous and brutal 10 year-long Algerian Civil War. During the civil war, the secular state apparatus nonetheless allowed elections featuring pro-government and moderate religious-based parties. The civil war lasted from 1991 to 2002. Civil War and Bouteflika (1992–2019) After Chadli Bendjedid resigned from the presidency in the military coup of 1992, a series of figureheads were selected by the military to assume the presidency, as officers were reluctant to assume public political power even though they had manifested control over the government. Additionally, the military's senior leaders felt a need to give a civilian face to the new political regime they had hastily constructed in the aftermath of Benjedid's ousting and the termination of elections, preferring a friendlier non-military face to front the regime. The first such head of state was Mohamed Boudiaf, who was appointed president of the High Council of State (HCE) in February 1992 after a 27-year exile in Morocco. However, Boudiaf quickly came to odds with the military when attempts by Boudiaf to appoint his own staff or form a political party were viewed with suspicion by officers. Boudiaf also launched political initiatives, such as a rigorous anti-corruption campaign in April 1992 and the sacking of Khaled Nezzar from his post as Defence Minister, which were seen by the military as an attempt to remove their influence in the government. The former of these initiatives was especially hazardous to the many senior military officials who had benefited massively and illegally from the political system for years. In the end, Boudiaf was assassinated in June 1992 by one of his bodyguards with Islamist sympathies. Ali Kafi briefly assumed the HCE presidency after Boudiaf's death, before Liamine Zéroual was appointed as a long-term replacement in 1994. However, Zéroual only remained in office for four years before he announced his retirement, as he quickly became embroiled in a clan warfare within the upper classes of the military and fell out with groups of the more senior generals. After this Abdelaziz Bouteflika, Boumédiène's foreign minister, succeeded as the president. As the Algerian civil war wound to a close, presidential elections were held again in April 1999. Although seven candidates qualified for election, all but Abdelaziz Bouteflika, who had the support of the military as well as the National Liberation Front (FLN), withdrew on the eve of the election amid charges of electoral fraud and interference from the military. Bouteflika went on to win with 70 percent of the cast votes. Despite the purportedly democratic elections, the civilian government immediately after the 1999 elections only acted as a sort of 'hijab' over the true government, mostly running day-to-day businesses, while the military still largely ran the country behind the scenes. 
For example, ministerial mandates to individuals were only granted with the military's approval, and different factions of the military invested in various political parties and the press, using them as pawns to gain influence. However, the military's influence over politics decreased gradually, leaving Bouteflika with more authority on deciding policy. One reason for this was that the senior commanders who had dominated the political scene during the 1960s and 1970s started to retire. Bouteflika's former experience as Boumédiène's foreign minister earned him connections that rejuvenated Algeria's international reputation, which had been tarnished in the early 1990s due to the civil war. On the domestic front, Bouteflika's policy of "national reconciliation" to bring a close to civilian violence earned him a popular mandate that helped him to win further presidential terms in 2004, 2009 and 2014. In 2010, journalists gathered to demonstrate for press freedom and against Bouteflika's self-appointed role as editor-in-chief of Algeria's state television station. In February 2011, the government rescinded the state of emergency that had been in place since 1992 but still banned all protest gatherings and demonstrations. However, in April 2011, over 2,000 protesters defied the official ban and took to the streets of Algiers, clashing with police forces. These protests can be seen as a part of the Arab Spring, with protesters noting that they were inspired by the recent Egyptian revolution, and that Algeria was a police state that was "corrupt to the bone". In 2019, after 20 years in office, Bouteflika announced in February that he would seek a fifth term of office. This sparked widespread discontent around Algeria and protests in Algiers. Despite later attempts at saying he would resign after his term finished in late April, Bouteflika resigned on 2 April, after the chief of the army, Ahmed Gaid Salah, made a declaration that he was "unfit for office". Despite Gaid Salah being loyal to Bouteflika, many in the military identified with civilians, as nearly 70 percent of the army are civilian conscripts who are required to serve for 18 months. Also, since demonstrators demanded a change to the whole governmental system, many army officers aligned themselves with demonstrators in the hopes of surviving an anticipated revolution and retaining their positions. After Bouteflika (2019-) After the resignation of Abdelaziz Bouteflika on 9 April 2019, the President of the Council of the Nation Abdelkader Bensalah became acting president of Algeria. Following the presidential election on 12 December 2019, Abdelmadjid Tebboune was elected president after taking 58% of the votes, beating the candidates from both main parties, the National Liberation Front and the Democratic National Rally. On the eve of the first anniversary of the Hirak Movement, which led to the resignation of former president Bouteflika, President Abdelmadjid Tebboune announced in a statement to the Algerian national media that 22 February would be declared the Algerian "National Day of Fraternity and Cohesion between the People and Its Army for Democracy." In the same statement, Tebboune spoke in favor of the Hirak Movement, saying that "the blessed Hirak has preserved the country from a total collapse", and that he had "made a personal commitment to carry out all of the [movement's] demands." 
On 21 and 22 February 2020, masses of demonstrators (with turnout comparable to well-established Algerian holidays like the Algerian Day of Independence) gathered to honor the anniversary of the Hirak Movement and the newly established national day. In an effort to contain the COVID-19 pandemic, Tebboune announced on 17 March 2020 that "marches and rallies, whatever their motives" would be prohibited. But after protesters and journalists
Granada in 1492. Christian Spain imposed its influence on the Maghrib coast by constructing fortified outposts and collecting tribute. But Spain never sought to extend its North African conquests much beyond a few modest enclaves. Privateering was an age-old practice in the Mediterranean, and North African rulers engaged in it increasingly in the late 16th and early 17th centuries because it was so lucrative. Until the 17th century the Barbary pirates used galleys, but a Dutch renegade of the name of Zymen Danseker taught them the advantage of using sailing ships. Algeria became the privateering city-state par excellence, and two privateer brothers were instrumental in extending Ottoman influence in Algeria. At about the time Spain was establishing its presidios in the Maghrib, the Muslim privateer brothers Aruj and Khair ad Din—the latter known to Europeans as Barbarossa, or Red Beard—were operating successfully off Tunisia. In 1516 Aruj moved his base of operations to Algiers but was killed in 1518. Khair ad Din succeeded him as military commander of Algiers, and the Ottoman sultan gave him the title of beglerbey (provincial governor). Spanish enclaves The Spanish expansionist policy in North Africa began with the Catholic Monarchs and the regent Cisneros, once the Reconquista in the Iberian Peninsula was finished. That way, several towns and outposts in the Algerian coast were conquered and occupied: Mers El Kébir (1505), Oran (1509), Algiers (1510) and Bugia (1510). The Spanish conquest of Oran was won with much bloodshed: 4,000 Algerians were massacred, and up to 8,000 were taken prisoner. For about 200 years, Oran's inhabitants were virtually held captive in their fortress walls, ravaged by famine and plague; Spanish soldiers, too, were irregularly fed and paid. The Spaniards left Algiers in 1529, Bujia in 1554, Mers El Kébir and Oran in 1708. The Spanish returned in 1732 when the armada of the Duke of Montemar was victorious in the Battle of Aïn-el-Turk and retook Oran and Mers El Kébir; the Spanish massacred many Muslim soldiers. In 1751, a Spanish adventurer, named John Gascon, obtained permission, and vessels and fireworks, to go against Algiers, and set fire, at night, to the Algerian fleet. The plan, however, miscarried. In 1775, Charles III of Spain sent a large force to attack Algiers, under the command of Alejandro O'Reilly (who had led Spanish forces in crushing French rebellion in Louisiana), resulting in a disastrous defeat. The Algerians suffered 5,000 casualties. The Spanish navy bombarded Algiers in 1784; over 20,000 cannonballs were fired, much of the city and its fortifications were destroyed and most of the Algerian fleet was sunk. Oran and Mers El Kébir were held until 1792, when they were sold by the king Charles IV to the Bey of Algiers. Ottoman era Under Khair ad Din's regency, Algiers became the center of Ottoman authority in the Maghrib. For 300 years, Algeria was a Vassal state of the Ottoman Empire under a regency that had Algiers as its capital (see Dey). Subsequently, with the institution of a regular Ottoman administration, governors with the title of pasha ruled. Turkish was the official language. In 1671 a new leader took power, adopting the title of dey. In 1710 the dey persuaded the sultan to recognize him and his successors as regent, replacing the pasha in that role. Although Algiers remained a part of the Ottoman Empire, the Ottoman government ceased to have effective influence there. 
European maritime powers paid the tribute demanded by the rulers of the privateering states of North Africa (Algiers, Tunis, Tripoli, and Morocco) to prevent attacks on their shipping. The Napoleonic Wars of the early 19th century diverted the attention of the maritime powers from suppressing piracy. But when peace was restored to Europe in 1815, Algiers found itself at war with Spain, the Netherlands, Prussia, Denmark, Russia, and Naples. Algeria and surrounding areas, collectively known as the Barbary States, were responsible for piracy in the Mediterranean Sea, as well as the enslaving of Christians, actions which brought them into the First and Second Barbary Wars with the United States of America. French rule 19th century colonialism North African boundaries have shifted during various stages of the conquests. The borders of modern Algeria were expanded by the French, whose colonization began in 1830 (the French invasion began on July 5). To benefit French colonists (many of whom were not in fact of French origin but Italian, Maltese, or Spanish), nearly all of whom lived in urban areas, northern Algeria was eventually organized into overseas departments of France, with representatives in the French National Assembly. France controlled the entire country, but the traditional Muslim population in the rural areas remained separated from the modern economic infrastructure of the European community. As a result of what the French considered an insult to the French consul in Algiers by the Dey in 1827, France blockaded Algiers for three years. In 1830, France invaded and occupied the coastal areas of Algeria, citing a diplomatic incident as casus belli. Hussein Dey went into exile. French colonization then gradually penetrated southwards, and came to have a profound impact on the area and its populations. The European conquest, initially accepted in the Algiers region, was soon met by a rebellion, led by Abdel Kadir, which took the French troops roughly a decade to put down in the so-called "pacification campaign", in which the French used chemical weapons, mass executions of civilians and prisoners, concentration camps and many other atrocities. By 1848 nearly all of northern Algeria was under French control, and the new government of the French Second Republic declared the occupied lands an integral part of France. Three "civil territories"—Algiers, Oran, and Constantine—were organized as French départements (local administrative units) under a civilian government. In addition to enduring the affront of being ruled by a foreign, non-Muslim power, many Algerians lost their lands to the new government or to colonists. Traditional leaders were eliminated, coopted, or made irrelevant, and the traditional educational system was largely dismantled; social structures were stressed to the breaking point. From 1856, native Muslims and Jews were viewed as French subjects, not citizens. However, in 1865, Napoleon III allowed them to apply for full French citizenship, a measure that few took, since it involved renouncing the right to be governed by sharia law in personal matters and was considered a kind of apostasy; in 1870, the Crémieux Decree made French citizenship automatic for Jewish natives, a move that angered many Muslims and resulted in Jews being seen by anti-colonial Algerians as accomplices of the colonial power. 
Nonetheless, this period saw progress in health, some infrastructure, and the overall expansion of the economy of Algeria, as well as the formation of new social classes, which, after exposure to ideas of equality and political liberty, would help propel the country to independence. During the colonization, France sought to eradicate the local culture by destroying centuries-old palaces and important buildings. It is estimated that around half of Algiers, a city founded in the 10th century, was destroyed. Many discriminatory laws were imposed on the Algerians and their culture. Rise of Algerian nationalism and French resistance A new generation of Islamic leadership emerged in Algeria at the time of World War I and grew to maturity during the 1920s and 1930s. Various groups were formed in opposition to French rule, most notably the National Liberation Front (FLN) and the National Algerian Movement. Colons (colonists), or, more popularly, pieds noirs (literally, "black feet"), dominated the government and controlled the bulk of Algeria's wealth. Throughout the colonial era, they continued to block or delay all attempts to implement even the most modest reforms. But from 1933 to 1936, mounting social, political, and economic crises in Algeria induced the indigenous population to engage in numerous acts of political protest. The government responded with more restrictive laws governing public order and security. Algerian Muslims rallied to the French side at the start of World War II as they had done in World War I. But the colons were generally sympathetic to the collaborationist Vichy regime established following France's defeat by Nazi Germany. After the fall of the Vichy regime in Algeria (November 11, 1942) as a result of Operation Torch, the Free French commander in chief in North Africa slowly rescinded repressive Vichy laws, despite opposition by colon extremists. In March 1943, Muslim leader Ferhat Abbas presented the French administration with the Manifesto of the Algerian People, signed by 56 Algerian nationalist and international leaders. The manifesto demanded an Algerian constitution that would guarantee immediate and effective political participation and legal equality for Muslims. Instead, the French administration in 1944 instituted a reform package, based on the 1936 Viollette Plan, that granted full French citizenship only to certain categories of "meritorious" Algerian Muslims, who numbered about 60,000. In April 1945, the French arrested the Algerian nationalist leader Messali Hadj. On May 1 the followers of his Parti du Peuple Algérien (PPA) participated in demonstrations which were violently put down by the police. Several Algerians were killed. The tensions between the Muslim and colon communities exploded on May 8, 1945, V-E Day, causing the Sétif and Guelma massacre. When a Muslim march was met with violence, marchers rampaged. The army and police responded by conducting a prolonged and systematic ratissage (literally, raking over) of suspected centers of dissidence. According to official French figures, 1,500 Muslims died as a result of these countermeasures. Other estimates vary from 6,000 to as high as 45,000 killed. Many nationalists drew the conclusion that independence could not be won by peaceful means, and so started organizing for violent rebellion. In August 1947, the French National Assembly approved the government-proposed Organic Statute of Algeria. 
This law called for the creation of an Algerian Assembly with one house representing Europeans and "meritorious" Muslims and the other representing the remaining 8 million or more Muslims. Muslim and colon deputies alike abstained or voted against the statute but for diametrically opposed reasons: the Muslims because it fell short of their expectations and the colons because it went too far. Algerian War of Independence (1954–1962) The Algerian War of Independence (1954–1962), brutal and long, was the most recent major turning point in the country's history. Although often fratricidal, it ultimately united Algerians and seared the value of independence and the philosophy of anticolonialism into the national consciousness. In the early morning hours of November 1, 1954, the National Liberation Front (Front de Libération Nationale—FLN) launched attacks throughout Algeria in the opening salvo of a war of independence. An important watershed in this war was the massacre of pieds-noirs civilians by the FLN near the town of Philippeville in August 1955, which prompted Jacques Soustelle to call for more repressive measures against the rebels. The French authorities claimed that 1,273 "guerrillas" died in what Soustelle admitted were "severe" reprisals. The FLN subsequently claimed, giving names and addresses, that 12,000 Muslims had been killed. After Philippeville, all-out war began in Algeria. The FLN fought largely using guerrilla tactics, whilst the French counter-insurgency tactics often included severe reprisals and repression. Eventually, protracted negotiations led to a cease-fire signed by France and the FLN on March 18, 1962, at Evian, France. The Evian accords also provided for continuing economic, financial, technical, and cultural relations, along with interim administrative arrangements until a referendum on self-determination could be held. The Evian accords guaranteed the religious and property rights of French settlers, but the perception that they would not be respected led to the exodus of one million pieds-noirs and harkis. The abusive tactics of the French Army remain a controversial subject in France to this day. Deliberate illegal methods were used, such as beatings, mutilations, hanging by the feet or hands, torture by electroshock, waterboarding, sleep deprivation and sexual assaults, among others. French war crimes against Algerian civilians were also committed, including indiscriminate shootings of civilians, bombings of villages suspected of helping the ALN, rape, disembowelment of pregnant women, imprisonment without food in small cells (some of which were too small to allow lying down), throwing prisoners out of helicopters to their death or into the sea with concrete on their feet, and burying people alive. The FLN also committed many atrocities, both against French pieds-noirs and against fellow Algerians whom they deemed to be supporting the French. These crimes included the killing of unarmed men, women and children, rape, the disembowelment or decapitation of women, and the murder of children by slitting their throats or banging their heads against walls. Between 350,000 and 1 million Algerians are estimated to have died during the war, and more than 2 million, out of a total Muslim population of 9 or 10 million, were made into refugees or forcibly relocated into government-controlled camps. Much of the countryside and agriculture was devastated, along with the modern economy, which had been dominated by urban European settlers (the pieds-noirs). 
French sources estimated that at least 70,000 Muslim civilians were killed, or abducted and presumed killed, by the FLN during the Algerian War. Nearly one million people of mostly French, Spanish and Italian descent left the country at independence, owing to the privileges they lost as settlers and their unwillingness to live on an equal footing with indigenous Algerians; with them left most Algerians of Jewish descent and those Muslim Algerians who had supported a French Algeria (harkis). Between 30,000 and 150,000 pro-French Muslims were also killed in Algeria by the FLN in post-war reprisals. Independent Algeria Ben Bella presidency (1962–65) The Algerian independence referendum was held in French Algeria on 1 July 1962, passing with 99.72% of the vote. As a result, France declared Algeria independent on 3 July. On 8 September 1963, the first Algerian constitution was adopted by nationwide referendum under close supervision by the National Liberation Front (FLN). Later that month, Ahmed Ben Bella was formally elected the first president of Algeria for a five-year term after receiving support from the FLN and the military, led by Colonel Houari Boumédiène. However, the war
of emergency". This gave the government widespread powers under the "Law and Order Maintenance Act," including the right to detain persons without charge which it used quite widely. In 1983 to 1984 the government declared a curfew in areas of Matabeleland and sent in the army in an attempt to suppress members of the Ndebele tribe. The pacification campaign, known as the Gukuruhundi, or strong wind, resulted in at least 20,000 civilian deaths perpetrated by an elite, North Korean-trained brigade, known in Zimbabwe as the Gukurahundi. ZANU-PF increased its majority in the 1985 elections, winning 67 of the 100 seats. The majority gave Mugabe the opportunity to start making changes to the constitution, including those with regard to land restoration. Fighting did not cease until Mugabe and Nkomo reached an agreement in December 1987 whereby ZAPU became part of ZANU-PF and the government changed the constitution to make Mugabe the country's first executive president and Nkomo one of two vice-presidents. 1990s Elections in March 1990 resulted in another overwhelming victory for Mugabe and his party, which won 117 of the 120 election seats. Election observers estimated voter turnout at only 54% and found the campaign neither free nor fair, though balloting met international standards. Unsatisfied with a de facto one-party state, Mugabe called on the ZANU-PF Central Committee to support the creation of a de jure one-party state in September 1990 and lost. The government began further amending the constitution. The judiciary and human rights advocates fiercely criticised the first amendments enacted in April 1991 because they restored corporal and capital punishment and denied recourse to the courts in cases of compulsory purchase of land by the government. The general health of the civilian population also began to significantly flounder and by 1997 25% of the population of Zimbabwe had been infected by HIV, the AIDS virus. During the 1990s students, trade unionists, and workers often demonstrated to express their discontent with the government. Students protested in 1990 against proposals for an increase in government control of universities and again in 1991 and 1992 when they clashed with police. Trade unionists and workers also criticised the government during this time. In 1992 police prevented trade unionists from holding anti-government demonstrations. In 1994 widespread industrial unrest weakened the economy. In 1996 civil servants, nurses, and junior doctors went on strike over salary issues. On 9 December 1997 a national strike paralysed the country. Mugabe was panicked by demonstrations by Zanla ex-combatants, war veterans, who had been the heart of incursions 20 years earlier in the Bush War. He agreed to pay them large gratuities and pensions, which proved to be a wholly unproductive and unbudgeted financial commitment. The discontent with the government spawned draconian government crackdowns which in turn started to destroy both the fabric of the state and of society. This in turn brought with it further discontent within the population. Thus a vicious downward spiral commenced. Although many whites had left Zimbabwe after independence, mainly for neighbouring South Africa, those who remained continued to wield disproportionate control of some sectors of the economy, especially agriculture. In the late-1990s whites accounted for less than 1% of the population but owned 70% of arable land. Mugabe raised this issue of land ownership by white farmers. 
In a calculated move, he began forcible land redistribution, which brought the government into headlong conflict with the International Monetary Fund. Amid a severe drought in the region, the police and military were instructed not to stop the invasion of white-owned farms by the so-called 'war veterans' and youth militia. This led to a mass migration of White Zimbabweans out of Zimbabwe. At present almost no arable land is in the possession of white farmers. The economy during the 1980s and 1990s The economy was run along corporatist lines, with strict governmental controls on all aspects of the economy. Controls were placed on wages and prices, and massive increases in government spending resulted in significant budget deficits. This experiment met with very mixed results: Zimbabwe fell further behind the first world, and unemployment rose. Some market reforms were attempted in the 1990s. A 40 per cent devaluation of the Zimbabwean dollar was allowed to occur, and price and wage controls were removed. These policies also failed at that time. Growth, employment, wages, and social service spending contracted sharply, inflation did not improve, the deficit remained well above target, and many industrial firms, notably in textiles and footwear, closed in response to increased competition and high real interest rates. The incidence of poverty in the country increased during this time. 1999 to 2000 However, Zimbabwe began experiencing a period of considerable political and economic upheaval in 1999. Opposition to President Mugabe and the ZANU-PF government grew considerably after the mid-1990s, in part due to worsening economic and human rights conditions brought about by the seizure of farmland owned by white farmers and the economic sanctions imposed by Western countries in response. The Movement for Democratic Change (MDC) was established in September 1999 as an opposition party founded by trade unionist Morgan Tsvangirai. The MDC's first opportunity to test opposition to the Mugabe government came in February 2000, when a referendum was held on a draft constitution proposed by the government. Among its elements, the new constitution would have permitted President Mugabe to seek two additional terms in office, granted government officials immunity from prosecution, and authorised government seizure of white-owned land. The referendum was handily defeated. Shortly thereafter, the government, through a loosely organised group of war veterans (some of whom, judging from their age, were too young to have fought in the chimurenga), sanctioned an aggressive land redistribution program often characterised by forced expulsion of white farmers and violence against both farmers and farm employees. Parliamentary elections held in June 2000 were marred by localised violence and by claims of electoral irregularities and government intimidation of opposition supporters. Nonetheless, the MDC succeeded in capturing 57 of 120 seats in the National Assembly. 2002 Presidential elections were held in March 2002. In the months leading up to the poll, ZANU-PF, with the support of the army, security services, and especially the so-called 'war veterans' (very few of whom had actually fought in the Second Chimurenga against the Smith regime in the 1970s), set about wholesale intimidation and suppression of the MDC-led opposition. Despite strong international criticism, these measures, together with organised subversion of the electoral process, ensured a Mugabe victory. 
The government's behaviour drew strong criticism from the EU and the US, which imposed limited sanctions against the leading members of the Mugabe regime. Since the 2002 election, Zimbabwe has suffered further economic difficulty and growing political chaos. 2003–2005 Divisions within the opposition MDC had begun to fester early in the decade, after Morgan Tsvangirai (the president of the MDC) was lured into a government sting operation that videotaped him talking of Mr. Mugabe's removal from power. He was subsequently arrested and put on trial on treason charges. This crippled his control of party affairs and raised questions about his competence. It also catalysed a major split within the party. In 2004 he was acquitted, but not until after suffering serious abuse and mistreatment in prison. The opposing faction was led by Welshman Ncube, who was the general secretary of the party. In mid-2004, vigilantes loyal to Mr. Tsvangirai began attacking members who were mostly loyal to Ncube, climaxing in a September raid on the party's Harare headquarters in which the security director was nearly thrown to his death. An internal party inquiry later established that aides to Tsvangirai had tolerated, if not endorsed, the violence. Divisive as the violence was, it was a debate over the rule of law that set off the party's final break-up in November 2005. These divisions severely weakened the opposition. In addition, the government employed its own operatives both to spy on each side and to undermine each side via acts of espionage. The Zimbabwean parliamentary elections of March 2005, in which ZANU-PF won a two-thirds majority, were again criticised by international observers as flawed. Mugabe's political operatives were thus able to weaken the opposition internally, and the security apparatus of the state was able to destabilise it externally by using violence in anti-Mugabe strongholds to prevent citizens from voting. Some voters were 'turned away' from polling stations despite having proper identification, further guaranteeing that the government could control the results. Additionally, Mugabe had started to appoint judges sympathetic to the government, making any judicial appeal futile. Mugabe was also able to appoint 30 of the members of parliament. As Senate elections approached, further opposition splits occurred. Ncube's supporters argued that the MDC should field a slate of candidates; Tsvangirai's argued for a boycott. When party leaders voted on the issue, Ncube's side narrowly won, but Mr. Tsvangirai declared that as president of the party he was not bound by the majority's decision. Again the opposition was weakened. As a result, the elections for a new Senate in November 2005 were largely boycotted by the opposition. Mugabe's party won 24 of the 31 constituencies where elections were held, amid low voter turnout. Again, evidence surfaced of voter intimidation and fraud. In May 2005 the government began Operation Murambatsvina. It was officially billed to rid urban areas of illegal structures, illegal business enterprises, and criminal activities. In practice its purpose was to punish political opponents. The UN estimated that 700,000 people were left without jobs or homes as a result. Families and traders, especially at the beginning of the operation, were often given no notice before police destroyed their homes and businesses. 
Others were able to salvage some possessions and building materials but often had nowhere to go, despite the government's statement that people should be returning to their rural homes. Thousands of families were left unprotected in the open in the middle of Zimbabwe's winter. The government interfered with non-governmental organisation (NGO) efforts to provide emergency assistance to the displaced in many instances. Some families were removed to transit camps, where they had no shelter or cooking facilities and minimal food, supplies, and sanitary facilities. The operation continued into July 2005, when the government began a program to provide housing for the newly displaced. Human Rights Watch said the evictions had disrupted treatment for people with HIV/AIDS in a country where 3,000 die from the disease each week and about 1.3 million children have been orphaned. The operation was "the latest manifestation of a massive human rights problem that has been going on for years", said Amnesty International. As of September 2006, housing construction fell far short of demand, and there were reports that beneficiaries were mostly civil servants and ruling party loyalists, not those displaced. The government campaign of forced evictions continued in 2006, albeit on a lesser scale. In September 2005 Mugabe signed constitutional amendments that reinstituted a national senate (abolished in 1987) and that nationalised all land. This converted all ownership rights into leases. The amendments also ended the right of landowners to challenge government expropriation of land in the courts and marked the end of any hope of returning any land that had hitherto been seized by armed land invasions. Elections for the senate in November resulted in a victory for the government. The MDC split over whether to field candidates and partially boycotted the vote. In addition to low turnout there was widespread government intimidation. The split in the MDC hardened into factions, each of which claimed control of the party. The early months of 2006 were marked by food shortages and mass hunger. The sheer extremity of the situation was revealed by the fact that in the courts, state witnesses said they were too weak from hunger to testify. 2006 to 2007 In August 2006 runaway inflation forced the government to replace its existing currency with a revalued one. In December 2006, ZANU-PF proposed the "harmonisation" of the parliamentary and presidential election schedules in 2010; the move was seen by the opposition as an excuse to extend Mugabe's term as president until 2010. Morgan Tsvangirai was badly beaten on 12 March 2007 after being arrested and held at Machipisa Police Station in the Highfield suburb of Harare. The event garnered an international outcry and was considered particularly brutal and extreme, even considering the reputation of Mugabe's government. Kolawole Olaniyan, Director of Amnesty International's Africa Programme, said "We are very concerned by reports of continuing brutal attacks on opposition activists in Zimbabwe and call on the government to stop all acts of violence and intimidation against opposition activists". The economy shrank by 50% from 2000 to 2007. In September 2007 the inflation rate was put at almost 8,000%, the world's highest. There were frequent power and water outages. Harare's drinking water became unreliable in 2006, and as a consequence dysentery and cholera swept the city in December 2006 and January 2007. Unemployment in formal jobs ran at a record 80%. 
There was widespread hunger, manipulated by the government so that opposition strongholds suffered the most. Availability of bread was severely constrained after a poor wheat harvest and the closure of all bakeries. The country, which used to be one of Africa's richest, became one of its poorest. Many observers now view the country as a 'failed state'. The settlement of the Second Congo War brought home much of Zimbabwe's substantial military commitment, although some troops remained to secure the mining assets under their control. The government lacks the resources or machinery to deal with the ravages of the HIV/AIDS pandemic, which affects 25% of the population. With all this, and the forced and violent removal of white farmers in a brutal land redistribution program, Mugabe has earned widespread scorn in the international arena. The regime has managed to cling to power by creating wealthy enclaves for government ministers and senior party members. For example, Borrowdale Brook, a suburb of Harare, is an oasis of wealth and privilege: it features mansions, manicured lawns, shops with fully stocked shelves containing an abundance of fruit and vegetables, big cars and a golf club, and it is the home of President Mugabe's out-of-town retreat. Zimbabwe's bakeries shut down in October 2007 and supermarkets warned that they would have no bread for the foreseeable future due to the collapse in wheat production after the seizure of white-owned farms. The ministry of agriculture has also blamed power shortages for the wheat shortfall, saying that electricity cuts have affected irrigation and halved crop yields per acre. The power shortages arose because Zimbabwe relies on Mozambique for some of its electricity and, due to an unpaid bill of $35 million, Mozambique had reduced the amount of electrical power it supplied. On 4 December 2007, the United States imposed travel sanctions against 38 people with ties to President Mugabe because they "played a central role in the regime's escalated human rights abuses." On 8 December 2007, Mugabe attended a meeting of EU and African leaders in Lisbon, prompting UK Prime Minister Gordon Brown to decline to attend. While German chancellor Angela Merkel criticised Mugabe in her public comments, the leaders of other African countries offered him statements of support. Deterioration of the educational system The educational system in Zimbabwe, which was once regarded as among the best in Africa, went into crisis in 2007 because of the country's economic meltdown. One foreign reporter witnessed hundreds of children at Hatcliffe Extension Primary School in Epworth, west of Harare, writing in the dust on the floor because they had no exercise books or pencils. The high school exam system unravelled in 2007. Examiners refused to mark examination papers when they were offered just Z$79 a paper, enough to buy three small candies. Corruption crept into the system and may explain why in January 2007 thousands of pupils received no marks for subjects they had entered, while others were deemed "excellent" in subjects they had not sat. More recently, however, the education system has recovered and is still considered the best in Southern Africa. 2008 2008 elections Zimbabwe held a presidential election along with a parliamentary election on
29 March 2008. The three major candidates were incumbent President Robert Mugabe of the Zimbabwe African National Union – Patriotic Front (ZANU-PF), Morgan Tsvangirai of the Movement for Democratic Change – Tsvangirai (MDC-T), and Simba Makoni, an independent. As no candidate received an outright majority in the first round, a second round was held on 27 June 2008 between Tsvangirai (with 47.9% of the first round vote) and Mugabe (43.2%). Tsvangirai withdrew from the second round a week before it was scheduled to take place, citing violence against his party's supporters. The second round went ahead, despite widespread criticism, and led to victory for Mugabe. Because of Zimbabwe's dire economic situation the election was expected to provide President Mugabe with his toughest electoral challenge to date. Mugabe's opponents were critical of the handling of the electoral process, and the government was accused of planning to rig the election; Human Rights Watch said that the election was likely to be "deeply flawed". After the first round, but before the counting was completed, Jose Marcos Barrica, the head of the Southern African Development Community observer mission, described the election as "a peaceful and credible expression of the will of the people of Zimbabwe." No official results were announced for more than a month after the first round. The failure to release results was strongly criticised by the MDC, which unsuccessfully sought an order from the High Court to force their release. An independent projection placed Tsvangirai in the lead, but without the majority needed to avoid a second round. 
The MDC declared that Tsvangirai had won a narrow majority in the first round and initially refused to participate in any second round. ZANU-PF said that Mugabe would participate in a second round; the party alleged that some electoral officials, in collusion with the MDC, had fraudulently reduced Mugabe's score, and as a result a recount was conducted. After the recount and the verification of the results, the Zimbabwe Electoral Commission (ZEC) announced on 2 May that Tsvangirai had won 47.9% and Mugabe 43.2%, thereby necessitating a run-off, which was to be held on 27 June 2008. Despite his continuing claims to have won a first-round majority, Tsvangirai agreed to participate in the second round. The period following the first round was marked by serious political violence, largely caused by ZANU-PF. ZANU-PF blamed MDC supporters for perpetrating this violence, but Western governments and prominent Western organisations blamed ZANU-PF, which appears to be the more credible account. On 22 June 2008, Tsvangirai announced that he was withdrawing from the run-off, describing it as a "violent sham" and saying that his supporters risked being killed if they voted for him. The second round nevertheless went ahead as planned with Mugabe as the only actively participating candidate, although Tsvangirai's name remained on the ballot. Mugabe won the second round by an overwhelming margin and was sworn in for another term as president on 29 June. International reaction to the second round varied. The United States and states of the European Union called for increased sanctions. On 11 July, the United Nations Security Council voted to impose sanctions on Zimbabwe, but Russia and China vetoed the resolution. The African Union called for a "government of national unity." Preliminary talks to set up conditions for official negotiations began between leading negotiators from both parties on 10 July, and on 22 July the three party leaders met for the first time in Harare to express their support for a negotiated settlement of disputes arising out of the presidential and parliamentary elections. Negotiations between the parties officially began on 25 July and proceeded with very few details released from the negotiation teams in Pretoria, as media coverage was barred from the premises where the negotiations took place. The talks were mediated by South African President Thabo Mbeki. On 15 September 2008, the leaders of the 14-member Southern African Development Community witnessed the signing of the power-sharing agreement, brokered by South African leader Thabo Mbeki. With a symbolic handshake and warm smiles at the Rainbow Towers hotel in Harare, Mugabe and Tsvangirai signed the deal to end the violent political crisis. Under the agreement, Robert Mugabe would remain president, Morgan Tsvangirai would become prime minister, ZANU-PF and the MDC would share control of the police, Mugabe's ZANU-PF would command the army, and Arthur Mutambara would become deputy prime minister. Marange diamond fields massacre In November 2008 the Air Force of Zimbabwe was sent in after some police officers began refusing orders to shoot the illegal miners at the Marange diamond fields. Up to 150 of the estimated 30,000 illegal miners were shot from helicopter gunships. In 2008 some Zimbabwean lawyers and opposition politicians from Mutare claimed that air force commander Perence Shiri was the prime mover behind the military assaults on illegal diggers in the diamond mines in the east of Zimbabwe. 
Estimates of the death toll by mid-December range from 83, reported by the Mutare City Council based on a request for burial ground, to 140, estimated by the (then) opposition Movement for Democratic Change – Tsvangirai party. 2009 to present 2009–2017 In January 2009, Morgan Tsvangirai announced that he would do as the leaders across Africa had insisted and join a coalition government as prime minister with his nemesis, President Robert Mugabe. On 11 February 2009 Tsvangirai was sworn in as the Prime Minister of Zimbabwe. By 2009 inflation had peaked at 500 billion per cent per year under the Mugabe government and the Zimbabwean currency was worthless. The opposition shared power with the Mugabe regime between 2009 and 2013; Zimbabwe switched to using the US dollar as its currency, and the economy improved, reaching a growth rate of 10% per year. In 2013 the Mugabe government won an election which The Economist described as "rigged," doubled the size of the civil service and embarked on "...misrule and dazzling corruption." However, the United Nations, African Union and SADC endorsed the elections as free and fair. By 2016 the economy had collapsed, nationwide protests took place throughout the country, and the finance minister admitted "Right now we literally have nothing." Bond notes were introduced in an attempt to fight the biting cash crisis and liquidity crunch, and cash became scarce on the market in 2017. On Wednesday 15 November 2017 the military placed President Mugabe under house arrest and removed him from power. The military stated that the president was safe. The military placed tanks around government buildings in Harare and blocked the main road to the airport. Public opinion in the capital favoured the dictator's removal, although many were uncertain about his being replaced by another dictatorship. The Times reported that Emmerson Mnangagwa had helped to orchestrate the coup. He had recently been sacked by Mr Mugabe so that the path could be smoothed for Grace Mugabe to replace her husband. A Zimbabwean army officer, Major General Sibusiso Moyo, went on television to say the military was targeting "criminals" around President Mugabe but not actively removing the president from power. However, the head of the African Union described the takeover as a coup. Ugandan writer Charles Onyango-Obbo stated on Twitter, "If it looks like a coup, walks like a coup and quacks like a coup, then it's a coup". Naunihal Singh, an assistant professor at the U.S. Naval War College and author of a book on military coups, described the situation in Zimbabwe as a coup. He tweeted that "'The President is safe' is a classic coup catch-phrase" of such an event. Robert Mugabe resigned on 21 November 2017. Second Vice-President Phelekezela Mphoko became the Acting President. Emmerson Mnangagwa was sworn in as president on 24 November 2017. 2018–2019 General elections were held on 30 July 2018 to elect the president and members of both houses of parliament. The ruling party, ZANU-PF, won the majority of seats in parliament, and incumbent President Emmerson Mnangagwa was declared the winner after receiving 50.8% of the vote. The opposition accused the government of rigging the vote. In subsequent riots by MDC supporters, the army opened fire and killed three people, while three others died of their injuries the following day. In January 2019, following a 130% increase in the price of fuel, thousands of Zimbabweans protested and the government responded with a coordinated crackdown that resulted in hundreds of arrests and multiple deaths. 
Economic statistics 2021 HARARE, June 10, 2021 – Gross
separate Constitution and incorporated it directly into Russia. To counter the rise of revolutionary and anarchist movements, he sent thousands of dissidents into exile in Siberia and was proposing additional parliamentary reforms when he was assassinated in 1881. In the late 1870s Russia and the Ottoman Empire again clashed in the Balkans. The Russo-Turkish War was popular among the Russian people, who supported the independence of their fellow Orthodox Slavs, the Serbs and the Bulgarians. Russia's victory in this war allowed a number of Balkan states to gain independence: Romania, Serbia, and Montenegro. In addition, Bulgaria also became de facto independent after 500 years of Turkish rule. However, the war increased tension with Austria-Hungary, which also had ambitions in the region. The Tsar was disappointed by the results of the Congress of Berlin in 1878, but abided by the agreement. During this period Russia expanded its empire into Central Asia, conquering the khanates of Kokand, Bukhara, and Khiva, as well as the Trans-Caspian region. Russia's advance in Asia led to British fears that the Russians planned aggression against British India. Before 1815 London worried that Napoleon would combine with Russia to do that in one mighty campaign; after 1815 it feared Russia alone would do it step by step. Rudyard Kipling called it "the Great Game", and the term caught on. However, historians report that the Russians never had any intention of moving against India. Russian society in the second half of the 19th century In the 1860s, a movement known as Nihilism developed in Russia. The term was originally coined by Ivan Turgenev in his 1862 novel Fathers and Sons; Nihilists favoured the destruction of human institutions and laws, based on the assumption that they are artificial and corrupt. At its core, Russian nihilism was characterized by the belief that the world lacks comprehensible meaning, objective truth, or value. For some time, many Russian liberals had been dissatisfied by what they regarded as the empty discussions of the intelligentsia. The Nihilists questioned all old values and shocked the Russian establishment. They became involved in the cause of reform and grew into major political forces. Their path was facilitated by the previous actions of the Decembrists, who revolted in 1825, and by the financial and political hardship caused by the Crimean War, which caused many Russians to lose faith in political institutions. Russian nihilists created the manifesto "Catechism of a Revolutionary". One leader of the Russian nihilists, Sergei Nechaev, was the basis for Dostoevsky's novel Demons. After the Nihilists failed to convert the aristocracy and landed gentry to the cause of reform, they turned to the peasants. Their campaign became known as the Narodnik ("Populist") movement. It was based on the belief that the common people had the wisdom and the peaceful capacity to lead the nation. As the Narodnik movement gained momentum, the government moved to extirpate it. In response to the growing reaction of the government, a radical branch of the Narodniks advocated and practiced terrorism. One after another, prominent officials were shot or killed by bombs. This represented the ascendancy of anarchism in Russia as a powerful revolutionary force. Finally, after several attempts, Alexander II was assassinated by anarchists in 1881, on the very day he had approved a proposal to call a representative assembly to consider new reforms, in addition to the abolition of serfdom, designed to ameliorate revolutionary demands. 
The end of the 19th century and the beginning of the 20th century are known as the Silver Age of Russian culture. The Silver Age was dominated by the artistic movements of Russian Symbolism, Acmeism, and Russian Futurism, and many poetic schools flourished, including the Mystical Anarchism tendency within the Symbolist movement. The Russian avant-garde was a large, influential wave of modern art that flourished in the Russian Empire and the Soviet Union, approximately from 1890 to 1930—although some have placed its beginning as early as 1850 and its end as late as 1960. The term covers many separate art movements of the era in painting, literature, music and architecture. Autocracy and reaction under Alexander III Unlike his father, the new tsar Alexander III (1881–1894) was throughout his reign a staunch reactionary who revived the maxim of "Orthodoxy, Autocracy, and National Character". A committed Slavophile, Alexander III believed that Russia could be saved from chaos only by shutting itself off from the subversive influences of Western Europe. In his reign Russia concluded an alliance with republican France to contain the growing power of Germany, completed the conquest of Central Asia, and exacted important territorial and commercial concessions from China. The tsar's most influential adviser was Konstantin Pobedonostsev, tutor to Alexander III and his son Nicholas, and procurator of the Holy Synod from 1880 to 1895. He taught his royal pupils to fear freedom of speech and of the press and to hate democracy, constitutions, and the parliamentary system. Under Pobedonostsev, revolutionaries were hunted down and a policy of Russification was carried out throughout the empire. Nicholas II and the new revolutionary movement Alexander was succeeded by his son Nicholas II (1894–1918). The Industrial Revolution, which began to exert a significant influence in Russia, was meanwhile creating forces that would finally overthrow the tsar. Politically, these opposition forces organized into three competing parties: the liberal elements among the industrial capitalists and nobility, who wanted peaceful social reform and a constitutional monarchy, founded the Constitutional Democratic party or Kadets in 1905. Followers of the Narodnik tradition established the Socialist-Revolutionary Party or Esers in 1901, advocating the distribution of land among the peasants who worked it. A third radical group founded the Russian Social Democratic Labour Party or RSDLP in 1898; this party was the primary exponent of Marxism in Russia. Gathering their support from the radical intellectuals and the urban working class, they advocated complete social, economic and political revolution. In 1903, the RSDLP split into two wings: the radical Bolsheviks, led by Vladimir Lenin, and the relatively moderate Mensheviks, led by Yuli Martov. The Mensheviks believed that Russian socialism would grow gradually and peacefully and that the tsar's regime should be succeeded by a democratic republic in which the socialists would cooperate with the liberal bourgeois parties. The Bolsheviks advocated the formation of a small elite of professional revolutionaries, subject to strong party discipline, to act as the vanguard of the proletariat in order to seize power by force. At the beginning of the 20th century, Russia continued its expansion in the Far East; Chinese Manchuria lay within the zone of Russian interests. Russia took an active part in the intervention of the great powers in China to suppress the Boxer Rebellion. 
During this war, Russia occupied Manchuria, which caused a clash of interests with Japan. In 1904 the Russo-Japanese War began, and it ended extremely unfavourably for Russia. Revolution of 1905 The disastrous performance of the Russian armed forces in the Russo-Japanese War was a major blow to the Russian state and increased the potential for unrest. In January 1905, an incident known as "Bloody Sunday" occurred when Father Gapon led an enormous crowd to the Winter Palace in Saint Petersburg to present a petition to the tsar. When the procession reached the palace, Cossacks opened fire on the crowd, killing hundreds. The Russian masses were so aroused over the massacre that a general strike was declared, demanding a democratic republic. This marked the beginning of the Russian Revolution of 1905. Soviets (councils of workers) appeared in most cities to direct revolutionary activity. In October 1905, Nicholas reluctantly issued the October Manifesto, which conceded the creation of a national Duma (legislature) to be called without delay. The right to vote was extended, and no law was to go into force without confirmation by the Duma. The moderate groups were satisfied, but the socialists rejected the concessions as insufficient and tried to organize new strikes. By the end of 1905, there was disunity among the reformers, and the tsar's position was strengthened for the time being. World War I On 28 June 1914, Gavrilo Princip assassinated Archduke Franz Ferdinand of Austria-Hungary. In response, on 23 July, Austria-Hungary issued an ultimatum to Serbia, which it considered a Russian client-state. Russia had no treaty obligation to Serbia, and in the long term Russia was militarily gaining on Germany and Austria-Hungary, and so had an incentive to wait. Most Russian leaders wanted to avoid war, but in that crisis they had the support of France, and they believed that supporting Serbia was important for Russia's credibility and for its goal of a leadership role in the Balkans. Tsar Nicholas II mobilised Russian forces on 30 July 1914 to defend Serbia from Austria-Hungary. Christopher Clark states: "The Russian general mobilisation [of 30 July] was one of the most momentous decisions of the July crisis. This was the first of the general mobilisations. It came at the moment when the German government had not yet even declared the State of Impending War". Germany responded with its own mobilisation and a declaration of war on 1 August 1914. At the opening of hostilities, the Russians took the offensive against both Germany and Austria-Hungary. The very large but poorly equipped Russian army fought tenaciously and desperately at times, despite its lack of organization and very weak logistics. Casualties were enormous. In the 1914 campaign, Russian forces defeated Austro-Hungarian forces in the Battle of Galicia. The success of the Russian army forced the German army to withdraw troops from the western front to the Russian front. However, the shell famine led to the defeat of the Russian forces in Poland by the Central Powers in the 1915 campaign, which led to a major retreat of the Russian army. In 1916, the Russians again dealt a powerful blow to the Austrians during the Brusilov Offensive. By 1915, morale was bad and getting worse. Many recruits were sent to the front unarmed and told to pick up whatever weapons they could from the battlefield. Nevertheless, the Russian army fought on, and tied down large numbers of Germans and Austrians. 
When the homefront showed an occasional surge of patriotism, the tsar and his entourage failed to exploit it for military benefit. The Russian army neglected to rally the ethnic and religious minorities that were hostile to Austria, such as the Poles. The tsar refused to cooperate with the national legislature, the Duma, and listened less to experts than to his wife, who was in thrall to her chief advisor, the holy man Grigori Rasputin. More than two million refugees fled. Repeated military failures and bureaucratic ineptitude soon turned large segments of the population against the government. The German and Ottoman fleets prevented Russia from importing urgently needed supplies through the Baltic and Black seas. By the middle of 1915 the impact of the war was demoralizing. Food and fuel were in short supply, casualties continued to mount, and inflation was rising. Strikes increased among low-paid factory workers, and the peasants, who wanted land reforms, were restless. Meanwhile, elite distrust of the regime was deepened by reports that Rasputin was gaining influence; his assassination in late 1916 ended the scandal but did not restore the autocracy's lost prestige. Russian Civil War (1917–1922) Russian Revolution In late February 1917 (3 March, New Style), a strike broke out at a factory in the capital, Petrograd (the new name for Saint Petersburg). On 23 February (8 March) 1917, thousands of female textile workers walked out of their factories protesting the lack of food and calling on other workers to join them. Within days, nearly all the workers in the city were idle, and street fighting broke out. The tsar ordered the Duma to disband, ordered strikers to return to work, and ordered troops to shoot at demonstrators in the streets. His orders triggered the February Revolution, especially when soldiers openly sided with the strikers. The tsar and the aristocracy fell on 2 March, as Nicholas II abdicated. To fill the vacuum of authority, the Duma declared a Provisional Government, headed by Prince Lvov, which was collectively known as the Russian Republic. Meanwhile, the socialists in Petrograd organized elections among workers and soldiers to form a soviet (council) of workers' and soldiers' deputies, as an organ of popular power that could pressure the "bourgeois" Provisional Government. In July, following a series of crises that undermined their authority with the public, the head of the Provisional Government resigned and was succeeded by Alexander Kerensky, who was more progressive than his predecessor but not radical enough for the Bolsheviks or many Russians discontented with the deepening economic crisis and the continuation of the war. While Kerensky's government marked time, the socialist-led soviet in Petrograd joined with soviets that formed throughout the country to create a national movement. The German government provided over 40 million gold marks to subsidize Bolshevik publications and activities subversive of the tsarist government, especially focusing on disgruntled soldiers and workers. In April 1917 Germany provided a special sealed train to carry Vladimir Lenin back to Russia from his exile in Switzerland. After many behind-the-scenes maneuvers, the soviets seized control of the government in November 1917 and drove Kerensky and his moderate provisional government into exile, in the events that would become known as the October Revolution. 
When the national Constituent Assembly (elected in December 1917) refused to become a rubber stamp of the Bolsheviks, it was dissolved by Lenin's troops and all vestiges of democracy were removed. With the handicap of the moderate opposition removed, Lenin was able to free his regime from the war problem by the harsh Treaty of Brest-Litovsk (1918) with Germany. Russia lost much of her western borderlands. However, when Germany was defeated the Soviet government repudiated the Treaty. Russian Civil War The Bolshevik grip on power was by no means secure, and a lengthy struggle broke out between the new regime and its opponents, which included the Socialist Revolutionaries, the anti-Bolshevik White movement, and large numbers of peasants. At the same time the Allied powers sent several expeditionary armies to support the anti-Communist forces in an attempt to force Russia to rejoin the world war. The Bolsheviks fought against both these forces and national independence movements in the former Russian Empire. By 1921, they had defeated their internal enemies and brought most of the newly independent states under their control, with the exception of Finland, the Baltic States, the Moldavian Democratic Republic (which joined Romania), and Poland (with whom they had fought the Polish–Soviet War). Finland also annexed the Pechenga region of the Russian Kola Peninsula; Soviet Russia and the allied Soviet republics ceded parts of their territory to Estonia (Petseri County and Estonian Ingria), Latvia (Pytalovo), and Turkey (Kars). Poland incorporated the contested territories of Western Belarus and Western Ukraine, formerly parts of the Russian Empire (except Galicia) east of the Curzon Line. Both sides regularly committed brutal atrocities against civilians. During the White Terror of the civil war era, for example, Petlyura's and Denikin's forces massacred 100,000 to 150,000 Jews in Ukraine and southern Russia. Hundreds of thousands of Jews were left homeless and tens of thousands became victims of serious illness. Estimates for the total number of people killed during the Red Terror carried out by the Bolsheviks vary widely. One source asserts that the total number of victims of repression and pacification campaigns could be 1.3 million, whereas others give estimates ranging from 10,000 in the initial period of repression, to 50,000–140,000, to an estimate of 28,000 executions per year from December 1917 to February 1922. The most reliable estimations for the total number of killings put the number at about 100,000, whereas others suggest a figure of 200,000. The Russian economy was devastated by the war, with factories and bridges destroyed, cattle and raw materials pillaged, mines flooded and machines damaged. The droughts of 1920 and 1921, as well as the 1921 famine, worsened the disaster still further. Disease had reached pandemic proportions, with 3,000,000 dying of typhus alone in 1920. Millions more also died of widespread starvation. By 1922 there were at least 7,000,000 street children in Russia as a result of nearly ten years of devastation from the Great War and the civil war. Another one to two million people, known as the White émigrés, fled Russia; many were evacuated from Crimea in 1920, some through the Far East, others west into the newly independent Baltic countries. These émigrés included a large percentage of the educated and skilled population of Russia. 
Soviet Union (1922–1991) Creation of the Soviet Union The history of Russia between 1922 and 1991 is essentially the history of the Union of Soviet Socialist Republics, or Soviet Union. This ideologically based union, established in December 1922 by the leaders of the Russian Communist Party, was roughly coterminous with Russia before the Treaty of Brest-Litovsk. At that time, the new nation included four constituent republics: the Russian SFSR, the Ukrainian SSR, the Belarusian SSR, and the Transcaucasian SFSR. The constitution, adopted in 1924, established a federal system of government based on a hierarchy of soviets set up in villages, factories, and cities in larger regions. This pyramid of soviets in each constituent republic culminated in the All-Union Congress of Soviets. However, while it appeared that the congress exercised sovereign power, this body was actually governed by the Communist Party, which in turn was controlled by the Politburo from Moscow, the capital of the Soviet Union, just as it had been under the tsars before Peter the Great. War Communism and the New Economic Policy The period from the consolidation of the Bolshevik Revolution in 1917 until 1921 is known as the period of war communism. Land, all industry, and small businesses were nationalized, and the money economy was restricted. Strong opposition soon developed. The peasants wanted cash payments for their products and resented having to surrender their surplus grain to the government as a part of its civil war policies. Confronted with peasant opposition, Lenin began a strategic retreat from war communism known as the New Economic Policy (NEP). The peasants were freed from wholesale levies of grain and allowed to sell their surplus produce in the open market. Commerce was stimulated by permitting private retail trading. The state continued to be responsible for banking, transportation, heavy industry, and public utilities. Although the left opposition among the Communists criticized the rich peasants, or kulaks, who benefited from the NEP, the program proved highly beneficial and the economy revived. The NEP would later come under increasing opposition from within the party following Lenin's death in early 1924. Changes to Russian society As the Russian Empire during this period included not only the region of Russia but also the present-day territories of Ukraine, Belarus, Poland, Lithuania, Estonia, Latvia, Moldavia and the Caucasian and Central Asian countries, it is possible to examine the process of firm formation in all those regions. One of the main determinants of firm creation in a given region of the Russian Empire may have been urban demand for goods and the supply of industrial and organizational skill. While the Russian economy was being transformed, the social life of the people underwent equally drastic changes. The Family Code of 1918 granted women equal status with men, and permitted a couple to take either the husband's or the wife's name. Divorce no longer required a court procedure, and to make women completely free of the responsibilities of childbearing, abortion was made legal as early as 1920. As a side effect, the emancipation of women expanded the labor market. Girls were encouraged to secure an education and pursue a career in the factory or the office. Communal nurseries were set up for the care of small children, and efforts were made to shift the center of people's social life from the home to educational and recreational groups, the soviet clubs. 
The Soviet government pursued a policy of eliminating illiteracy, known as Likbez. With industrialization, massive urbanization began in the USSR. In the field of nationalities policy in the 1920s, the policy of Korenizatsiya (indigenization) was carried out. However, from the mid-1930s, the Stalinist government returned to the tsarist policy of Russification of the borderlands. In particular, the written languages of many of the peoples of the USSR were converted to the Cyrillic alphabet (Cyrillization). Industrialization and collectivization The years from 1929 to 1939 comprised a tumultuous decade in Soviet history—a period of massive industrialization and internal struggles as Joseph Stalin established near total control over Soviet society, wielding virtually unrestrained power. Following Lenin's death, Stalin wrestled with rival factions in the Politburo, especially Leon Trotsky's, to gain control of the Soviet Union. By 1928, with the Trotskyists either exiled or rendered powerless, Stalin was ready to put a radical programme of industrialisation into action. In 1929, Stalin proposed the first five-year plan. Abolishing the NEP, it was the first of a number of plans aimed at swift accumulation of capital resources through the buildup of heavy industry, the collectivization of agriculture, and the restricted manufacture of consumer goods. For the first time in history a government controlled all economic activity. The rapid growth of production capacity and of the volume of heavy-industry output (which roughly quadrupled) was of great importance for ensuring economic independence from Western countries and strengthening the country's defense capability. At this time, the Soviet Union made the transition from an agrarian country to an industrial one. As a part of the plan, the government took control of agriculture through the state and collective farms (kolkhozes). By a decree of February 1930, about one million individual peasants (kulaks) were forced off their land. Many peasants strongly opposed regimentation by the state, often slaughtering their herds when faced with the loss of their land. In some regions they revolted, and countless peasants deemed "kulaks" by the authorities were executed. The combination of bad weather, deficiencies of the hastily established collective farms, and massive confiscation of grain precipitated a serious famine, and several million peasants died of starvation, mostly in Ukraine, Kazakhstan and parts of southwestern Russia. The deteriorating conditions in the countryside drove millions of desperate peasants to the rapidly growing cities, fueling industrialization, and vastly increasing Russia's urban population in the space of just a few years. The plans produced remarkable results in areas aside from agriculture. Russia, in many measures the poorest nation in Europe at the time of the Bolshevik Revolution, now industrialized at a phenomenal rate, far surpassing Germany's pace of industrialization in the 19th century and Japan's earlier in the 20th century. Stalinist repression The NKVD rounded up tens of thousands of Soviet citizens to face arrest, deportation, or execution. Of the six original members of the 1920 Politburo who survived Lenin, all were purged by Stalin. Old Bolsheviks who had been loyal comrades of Lenin, high officers in the Red Army, and directors of industry were liquidated in the Great Purges. Purges in other Soviet republics also helped centralize control in the USSR. During the Moscow trials, Stalin destroyed the party opposition consisting of the Old Bolsheviks. 
The NKVD under the leadership of Stalin's commissar Nikolai Yezhov carried out a series of massive repressive operations against the kulaks and various national minorities in the USSR. During the Great Purges of 1937–38, about 700,000 people were executed. Harsh penalties were introduced, and many citizens were prosecuted for fictitious crimes of sabotage and espionage. The labor provided by convicts working in the labor camps of the Gulag system became an important component of the industrialization effort, especially in Siberia. An estimated 18 million people passed through the Gulag system, and perhaps another 15 million had experience of some other form of forced labor. After the partition of Poland in 1939, the NKVD executed 20,000 captured Polish officers in the Katyn massacre. In the late 1930s and the first half of the 1940s, the Stalinist government carried out massive deportations of various nationalities. A number of ethnic groups were deported from their areas of settlement to Central Asia. Soviet Union on the international stage The Soviet Union viewed the 1933 accession to power in Germany of Hitler's fervently anti-Communist government with great alarm from the outset, especially since Hitler proclaimed the Drang nach Osten as one of the major objectives in his vision of the German strategy of Lebensraum. The Soviets supported the Republicans of Spain who struggled against fascist German and Italian troops in the Spanish Civil War. In 1938–1939, immediately prior to WWII, the Soviet Union successfully fought against Imperial Japan in the Soviet–Japanese border conflicts in the Russian Far East, which led to Soviet–Japanese neutrality and the tense border peace that lasted until August 1945. In 1938, Germany annexed Austria and, together with major Western European powers, signed the Munich Agreement, following which Germany, Hungary and Poland divided parts of Czechoslovakia between themselves. German plans for further eastward expansion, as well as the lack of resolve from Western powers to oppose it, became more apparent. Despite the Soviet Union strongly opposing the Munich deal and repeatedly reaffirming its readiness to militarily back commitments given earlier to Czechoslovakia, the Western Betrayal led to the end of Czechoslovakia and further increased fears in the Soviet Union of a coming German attack. This led the Soviet Union to rush the modernization of its military industry and to carry out its own diplomatic maneuvers. In 1939, the Soviet Union signed the Molotov–Ribbentrop Pact: a non-aggression pact with Nazi Germany dividing Eastern Europe into two separate spheres of influence. Following the pact, the USSR normalized relations with Nazi Germany and resumed Soviet–German trade. World War II On 17 September 1939, sixteen days after the start of World War II and with the victorious Germans having advanced deep into Polish territory, the Red Army invaded eastern Poland, stating as justification the "need to protect Ukrainians and Belarusians" there, after the "cessation of existence" of the Polish state. As a result, the Belarusian and Ukrainian Soviet republics' western borders were moved westward, and the new Soviet western border was drawn close to the original Curzon Line. In the meantime negotiations with Finland over a Soviet-proposed land swap that would redraw the Soviet-Finnish border further away from Leningrad failed, and in December 1939 the USSR invaded Finland, beginning a campaign known as the Winter War (1939–40). 
The war exacted a heavy death toll on the Red Army but forced Finland to sign the Moscow Peace Treaty and cede the Karelian Isthmus and Ladoga Karelia. In summer 1940 the USSR issued an ultimatum to Romania forcing it to cede the territories of Bessarabia and Northern Bukovina. At the same time, the Soviet Union also occupied the three formerly independent Baltic states (Estonia, Latvia and Lithuania). The peace with Germany was tense, as both sides were preparing for military conflict, and it ended abruptly when the Axis forces led by Germany swept across the Soviet border on 22 June 1941. By the autumn the German army had seized Ukraine, laid siege to Leningrad, and threatened to capture the capital, Moscow, itself. Although the Red Army drove the German forces back from Moscow in a successful counterattack in December 1941, the Germans retained the strategic initiative for approximately another year and mounted a deep offensive toward the south-east, reaching the Volga and the Caucasus. However, two major German defeats at Stalingrad and Kursk proved decisive and reversed the course of the entire world war, as the Germans never regained the strength to sustain their offensive operations and the Soviet Union recaptured the initiative for the rest of the conflict. By the end of 1943, the Red Army had broken through the German siege of Leningrad and liberated much of Ukraine, much of Western Russia and moved into Belarus. During the 1944 campaign, the Red Army defeated German forces in a series of offensive campaigns known as Stalin's ten blows. By the end of 1944, the front had moved beyond the 1939 Soviet frontiers into eastern Europe. Soviet forces drove into eastern Germany, capturing Berlin in May 1945. The war with Germany thus ended triumphantly for the Soviet Union. As agreed at the Yalta Conference, three months after Victory Day in Europe the USSR launched the Soviet invasion of Manchuria, defeating the Japanese troops there in the last Soviet battle of World War II. Although the Soviet Union was victorious in World War II, the war resulted in around 26–27 million Soviet deaths (estimates vary) and devastated the Soviet economy. Some 1,710 towns and 70,000 settlements were destroyed. The occupied territories suffered from the ravages of German occupation and deportations of slave labor by Germany. Thirteen million Soviet citizens became victims of the repressive policies of Germany and its allies in occupied territories, where people died because of mass murders, famine, absence of elementary medical aid and slave labor. The Nazi genocide of the Jews, carried out by German Einsatzgruppen along with local collaborators, resulted in almost complete annihilation of the Jewish population over the entire territory temporarily occupied by Germany and its allies. During the occupation, the Leningrad region lost around a quarter of its population, Soviet Belarus lost from a quarter to a third of its population, and 3.6 million Soviet prisoners of war (of 5.5 million) died in German camps. Cold War Collaboration among the major Allies had won the war and was supposed to serve as the basis for postwar reconstruction and security. The USSR became one of the founders of the UN and a permanent member of the UN Security Council. However, the conflict between Soviet and U.S. national interests, known as the Cold War, came to dominate the international stage in the postwar period. The Cold War emerged from a conflict between Stalin and U.S. 
President Harry Truman over the future of Eastern Europe during the Potsdam Conference in the summer of 1945. Russia had suffered three devastating Western onslaughts in the previous 150 years, during the Napoleonic Wars, the First World War, and the Second World War, and Stalin's goal was to establish a buffer zone of states between Germany and the Soviet Union. Truman charged that Stalin had betrayed the Yalta agreement. With Eastern Europe under Red Army occupation, Stalin was also biding his time, as his own atomic bomb project was steadily and secretly progressing. In April 1949 the United States sponsored the North Atlantic Treaty Organization (NATO), a mutual defense pact in which most Western nations pledged to treat an armed attack against one nation as an assault on all. The Soviet Union established an Eastern counterpart to NATO in 1955, dubbed the Warsaw Pact. The division of Europe into Western and Soviet blocs later took on a more global character, especially after 1949, when the U.S. nuclear monopoly ended with the testing of a Soviet bomb and the Communist takeover in China. The foremost objectives of Soviet foreign policy were the maintenance and enhancement of national security and the maintenance of hegemony over Eastern Europe. The Soviet Union maintained its dominance over the Warsaw Pact through crushing the Hungarian Revolution of 1956, suppressing the Prague Spring in Czechoslovakia in 1968, and supporting the suppression of the Solidarity movement in Poland in the early 1980s. The Soviet Union opposed the United States in a number of proxy conflicts all over the world, including the Korean War and the Vietnam War. As the Soviet Union continued to maintain tight control over its sphere of influence in Eastern Europe, the Cold War gave way to Détente and a more complicated pattern of international relations in the 1970s, in which the world was no longer clearly split into two opposed blocs. The nuclear arms race continued; the number of nuclear weapons in the hands of the USSR and the United States reached a menacing scale, giving them the ability to destroy the planet many times over. Less powerful countries had more room to assert their independence, and the two superpowers were partially able to recognize their common interest in trying to check the further spread and proliferation of nuclear weapons in treaties such as SALT I, SALT II, and the Anti-Ballistic Missile Treaty. U.S.–Soviet relations deteriorated following the beginning of the nine-year Soviet–Afghan War in 1979 and the 1980 election of Ronald Reagan, a staunch anti-communist, but improved as the communist bloc started to unravel in the late 1980s. With the collapse of the Soviet Union in 1991, Russia lost the superpower status that it had won in the Second World War. De-Stalinization and the era of stagnation Nikita Khrushchev solidified his position in a speech before the Twentieth Congress of the Communist Party in 1956, detailing Stalin's atrocities. In 1964, Khrushchev was removed from power by the Communist Party's Central Committee, which charged him with a host of errors that included Soviet setbacks such as the Cuban Missile Crisis. After a period of collective leadership led by Leonid Brezhnev, Alexei Kosygin and Nikolai Podgorny, the veteran bureaucrat Brezhnev took Khrushchev's place as Soviet leader. Brezhnev emphasized heavy industry, instituted the Soviet economic reform of 1965, and also attempted to ease relations with the United States. 
In the 1960s the USSR became a leading producer and exporter of petroleum and natural gas. Soviet science and industry peaked in the Khrushchev and Brezhnev years. The world's first nuclear power plant was established in 1954 in Obninsk, and the Baikal–Amur Mainline was built. In addition, in 1980 Moscow hosted the Summer Olympic Games. While all modernized economies were rapidly moving to computerization after 1965, the USSR fell further and further behind. Moscow's decision to copy the IBM 360 of 1965 proved a decisive mistake, for it locked scientists into an antiquated system they were unable to improve. They had enormous difficulties in manufacturing the necessary chips reliably and in quantity, in writing workable and efficient programs, in coordinating entirely separate operations, and in providing support to computer users. One of the greatest strengths of the Soviet economy was its vast supplies of oil and gas; world oil prices quadrupled in 1973–74, and rose again in 1979–1981, making the energy sector the chief driver of the Soviet economy and a means of covering up multiple weaknesses. At one point, Soviet Premier Alexei Kosygin told the head of oil and gas production, "things are bad with bread. Give me 3 million tons [of oil] over the plan." Former prime minister Yegor Gaidar, an economist, reflected on this dependence on oil revenues when writing three decades later, in 2007. Soviet space program The Soviet space program, founded by Sergey Korolev, was especially successful. On 4 October 1957, the Soviet Union launched the first satellite, Sputnik. On 12 April 1961, Yuri Gagarin became the first human to travel into space in the Soviet spaceship Vostok 1. Other achievements of the Soviet space program included the first photo of the far side of the Moon; the exploration of Venus; the first spacewalk, by Alexei Leonov; and the first spaceflight by a woman, Valentina Tereshkova. In 1970 and 1973, the world's first planetary rovers, Lunokhod 1 and Lunokhod 2, were sent to the Moon and operated there successfully. The Soviet Union also produced the world's first space station, Salyut, which in 1986 was replaced by Mir, the first consistently inhabited long-term space station, which served from 1986 to 2001. Perestroika and breakup of the Union Two developments dominated the decade that followed: the increasingly apparent crumbling of the Soviet Union's economic and political structures, and the patchwork attempts at reforms to reverse that process. After the rapid succession of former KGB Chief Yuri Andropov and Konstantin Chernenko, transitional figures with deep roots in Brezhnevite tradition, Mikhail Gorbachev implemented perestroika in an attempt to modernize Soviet communism, and made significant changes in the party leadership. However, Gorbachev's social reforms led to unintended consequences. His policy of glasnost facilitated public access to information after decades of government repression, and social problems received wider public attention, undermining the Communist Party's authority. Glasnost allowed ethnic and nationalist disaffection to reach the surface, and many constituent republics, especially the Baltic republics, the Georgian SSR and the Moldavian SSR, sought greater autonomy, which Moscow was unwilling to provide. In the revolutions of 1989 the USSR lost its allies in Eastern Europe. Gorbachev's attempts at economic reform were not sufficient, and the Soviet government left intact most of the fundamental elements of the communist economy. 
Suffering from low prices for petroleum and natural gas, the ongoing war in Afghanistan, outdated industry, and pervasive corruption, the Soviet planned economy proved to be ineffective, and by 1990 the Soviet government had lost control over economic conditions. Due to price controls, there were shortages of almost all products, reaching their peak at the end of 1991, when people had to stand in long lines and were lucky to buy even the essentials. Control over the constituent republics was also relaxed, and they began to assert their national sovereignty against Moscow. The tension between the Soviet Union and Russian SFSR authorities came to be personified in the bitter power struggle between Gorbachev and Boris Yeltsin. Squeezed out of Union politics by Gorbachev in 1987, Yeltsin, who represented himself as a committed democrat, posed significant opposition to Gorbachev's authority. In a remarkable reversal of fortunes, he gained election as chairman of the Russian republic's new Supreme Soviet in May 1990. The following month, he secured legislation giving Russian laws priority over Soviet laws and withholding two-thirds of the budget. In the first Russian presidential election, in 1991, Yeltsin became president of the Russian SFSR. At last, Gorbachev attempted to restructure the Soviet Union into a less centralized state. However, on 19 August 1991, a coup against Gorbachev, plotted by senior Soviet officials, was attempted. The coup faced wide popular opposition and collapsed in three days, but disintegration of the Union became imminent. The Russian government took over most of the Soviet Union's government institutions on its territory. Because of the dominant position of Russians in the Soviet Union, most people gave little thought to any distinction between Russia and the Soviet Union before the late 1980s. In the Soviet Union, only the Russian SFSR lacked even the paltry instruments of statehood that the other republics possessed, such as its own republic-level Communist Party branch, trade union councils, Academy of Sciences, and the like. The Communist Party of the Soviet Union was banned in Russia in 1991–1992, although no lustration has ever taken place, and many of its members became top Russian officials. However, as the Soviet government was still opposed to market reforms, the economic situation continued to deteriorate. By December 1991, the shortages had resulted in the introduction of food rationing in Moscow and Saint Petersburg for the first time since World War II. Russia received humanitarian food aid from abroad. After the Belavezha Accords, the Supreme Soviet of Russia withdrew Russia from the Soviet Union on 12 December. The Soviet Union officially ended on 25 December 1991, and the Russian Federation (formerly the Russian Soviet Federative Socialist Republic) took power on 26 December. The Russian government lifted price controls in January 1992. Prices rose dramatically, but shortages disappeared. Russian Federation (1991–present) Liberal reforms of the 1990s Although Yeltsin came to power on a wave of optimism, he never recovered his popularity after endorsing Yegor Gaidar's "shock therapy" of ending Soviet-era price controls, drastic cuts in state spending, and an open foreign trade regime in early 1992 (see Russian economic reform in the 1990s). The reforms immediately devastated the living standards of much of the population. 
In the 1990s Russia suffered an economic downturn that was, in some ways, more severe than the one the United States or Germany had undergone six decades earlier in the Great Depression. Hyperinflation hit the ruble, due to monetary overhang from the days of the planned economy. Meanwhile, the profusion of small parties and their aversion to coherent alliances left the legislature chaotic. During 1993, Yeltsin's rift with the parliamentary leadership led to the September–October 1993 constitutional crisis. The crisis climaxed on 3 October, when Yeltsin chose a radical solution to settle his dispute with parliament: he called up tanks to shell the Russian White House, blasting out his opponents. As Yeltsin was taking the unconstitutional step of dissolving the legislature, Russia came close to a serious civil conflict. Yeltsin was then free to impose the current Russian constitution with strong presidential powers, which was approved by referendum in December 1993. The cohesion of the Russian Federation was also threatened when the republic of Chechnya attempted to break away, leading to the First and Second Chechen Wars. Economic reforms also consolidated a semi-criminal oligarchy with roots in the old Soviet system. Advised by Western governments, the World Bank, and the International Monetary Fund, Russia embarked on the largest and fastest privatization that the world had ever seen in order to reform the fully nationalized Soviet economy. By mid-decade, retail, trade, services, and small industry were in private hands. Most big enterprises were acquired by their old managers, engendering a new rich elite (the Russian tycoons) in league with criminal mafias or Western investors. Corporate raiders such as Andrei Volgin engaged in hostile takeovers of corrupt corporations by the mid-1990s. By the mid-1990s Russia had a system of multiparty electoral politics. But it was harder to establish a representative government because of two structural problems—the struggle between president and parliament and the anarchic party system. Meanwhile, the central government had lost control of the localities, bureaucracy, and economic fiefdoms, and tax revenues had collapsed. Still in a deep depression, Russia's economy was hit further by the financial crash of 1998. After the crisis, Yeltsin was at the end of his political career. Just hours before the first day of 2000, Yeltsin made a surprise announcement of his resignation, leaving the government in the hands of the little-known Prime Minister Vladimir Putin, a former KGB official and head of the FSB, the KGB's post-Soviet successor agency. The era of Putin In 2000, the new acting president defeated his opponents in the presidential election on 26 March and won in a landslide four years later. The Second Chechen War ended in a Russian victory; at the same time, after the September 11 terrorist attacks, there was a rapprochement between Russia and the United States. Putin has created a system of guided democracy in Russia by subjugating parliament, suppressing independent media and placing major oil and gas companies under state control. International observers were alarmed by moves in late 2004 to further tighten the presidency's control over parliament, civil society, and regional officeholders. In 2008, Dmitri Medvedev, a former Gazprom chairman and Putin's chief of staff, was elected the new President of Russia. In 2012, Putin and Medvedev switched places and Putin became president again, prompting massive protests in Moscow in 2011–2012. 
Russia's long-term problems include a shrinking workforce, rampant corruption, and underinvestment in infrastructure. Nevertheless, reversion to a socialist command economy seemed almost impossible. The economic problems are aggravated by massive capital outflows, as well as extremely difficult conditions for doing business, due to pressure from the security forces (the siloviki) and government agencies. Due to high oil prices, from 2000 to 2008, Russia's GDP at PPP doubled. Although high oil prices and a relatively cheap ruble initially drove this growth, since 2003 consumer demand and, more recently, investment have played a significant role. Russia is well ahead of most other resource-rich countries in its economic development, with a long tradition of education, science, and industry. The economic recovery of the 2000s allowed Russia to obtain the right to host the 2014 Winter Olympic Games in Sochi. In 2014, following a controversial referendum, in which separation was favored by a large majority of voters, the Russian leadership announced the accession of Crimea to the Russian Federation. Following Russia's annexation of Crimea and alleged Russian interference in the war in eastern Ukraine, Western sanctions were imposed on Russia. Since 2015, Russia has been conducting a military intervention in Syria in support of the Bashar al-Assad regime, against ISIS and the Syrian opposition. In 2018, the FIFA World Cup was held in Russia. Vladimir Putin was re-elected for a fourth presidential term. In 2022, Russia launched an invasion of Ukraine. The invasion was widely condemned by the global community, with new sanctions being imposed on Russia. Historiography Historians See also Dissolution of the Soviet Union Family tree of the Russian monarchs General Secretary of the Communist Party of the Soviet Union History of Central Asia History of Siberia History of the administrative division of Russia History of the Caucasus History of the Jews in Russia History of the Soviet Union List of heads of government of Russia List of Mongol and Tatar raids against Rus' List of presidents of Russia List of Russian explorers List of Russian historians List of Russian rulers List of wars involving Russia Military history of the Russian Empire Military history of the Soviet Union Politics of Russia Russia Russian Armed Forces Russian colonization of the Americas Russian Empire Russian Medical Fund Soviet Union Timeline of Moscow Timeline of Russian history Timeline of Russian innovation Timeline of Saint Petersburg References Further reading Surveys Auty, Robert, and Dimitri Obolensky, eds. Companion to Russian Studies: vol 1: An Introduction to Russian History (1981) 403 pages; surveys by scholars. Bartlett, Roger P. A history of Russia (2005) online Brown, Archie et al. eds. The Cambridge Encyclopedia of Russia and the Former Soviet Union (2nd ed. 1994) 664 pages online Bushkovitch, Paul. A Concise History of Russia (2011) excerpt and text search Connolly, Richard. The Russian Economy: A Very Short Introduction (Oxford University Press, 2020). Online review Figes, Orlando. Natasha's Dance: A Cultural History of Russia (2002). excerpt Florinsky, Michael T. ed. McGraw-Hill Encyclopedia of Russia and the Soviet Union (1961). Freeze, Gregory L., ed. Russia: A History. 2nd ed. (Oxford UP, 2002). Harcave, Sidney, ed. Readings in Russian history (1962) excerpts from scholars. online Hosking, Geoffrey A. Russia and the Russians: a history (2011) online Jelavich, Barbara. St. 
Petersburg and Moscow: tsarist and Soviet foreign policy, 1814–1974 (1974). Kort, Michael. A brief history of Russia (2008) online McKenzie, David & Michael W. Curran. A History of Russia, the Soviet Union, and Beyond. 6th ed. Belmont, CA: Wadsworth Publishing, 2001. Millar, James, ed. Encyclopedia of Russian History (4 vol. 2003). online Pares, Bernard. A History of Russia (1926), by a leading historian. Online Paxton, John. Encyclopedia of Russian History (1993) online Paxton, John. Companion to Russian history (1983) online Perrie, Maureen, et al. The Cambridge History of Russia. (3 vol. Cambridge University Press, 2006). excerpt and text search Riasanovsky, Nicholas V., and Mark D. Steinberg. A History of Russia (9th ed. 2018); 1993 edition online Service, Robert. A History of Modern Russia: From Tsarism to the Twenty-First Century (Harvard UP, 3rd ed., 2009) excerpt Stone, David. A Military History of Russia: From Ivan the Terrible to the War in Chechnya excerpts Ziegler, Charles E. The History of Russia (Greenwood Press, 1999) Russian Empire Baykov, Alexander. “The Economic Development of Russia.” Economic History Review 7#2 1954, pp. 137–149. online Billington, James H. The icon and the axe; an interpretive history of Russian culture (1966) online Christian, David. A History of Russia, Central Asia and Mongolia. Vol. 1: Inner Eurasia from Prehistory to the Mongol Empire. Malden, MA: Blackwell Publishers, 1998. De Madariaga, Isabel. Russia in the Age of Catherine the Great (2002), comprehensive topical survey Fuller, William C. Strategy and Power in Russia 1600–1914 (1998) excerpts Hughes, Lindsey. Russia in the Age of Peter the Great (Yale UP, 1998), comprehensive topical survey. online Kahan, Arcadius. The Plow, the Hammer, and the Knout: An Economic History of Eighteenth-Century Russia (1985) Kahan, Arcadius. Russian Economic History: The Nineteenth Century (1989) Gatrell, Peter. "Review: Russian Economic History: The Legacy of Arcadius Kahan" Slavic Review 50#1 (1991), pp. 176–178 online Lincoln, W. Bruce. The Romanovs: Autocrats of All the Russias (1983) online, sweeping narrative history Lincoln, W. Bruce. The great reforms: autocracy, bureaucracy, and the politics of change in Imperial Russia (1990) online Manning, Roberta. The Crisis of the Old Order in Russia: Gentry and Government. Princeton University Press, 1982. Markevich, Andrei, and Ekaterina Zhuravskaya. 2018. “Economic Effects of the Abolition of Serfdom: Evidence from the Russian Empire.” American Economic Review 108.4–5: 1074–1117. Mironov, Boris N., and Ben Eklof. The Social History of Imperial Russia, 1700–1917 (2 vol Westview Press, 2000) Moss, Walter G. A History of Russia. Vol. 1: To 1917. 2d ed. Anthem Press, 2002. Oliva, Lawrence Jay. ed. Russia in the era of Peter the Great (1969), excerpts from primary and secondary sources online Pipes, Richard. Russia under the Old Regime (2nd ed. 1997) Seton-Watson, Hugh. The Russian Empire 1801–1917 (Oxford History of Modern Europe) (1988) excerpt and text search Treasure, Geoffrey. The Making of Modern Europe, 1648–1780 (3rd ed. 2003). pp. 550–600. Soviet era Chamberlin, William Henry. The Russian Revolution 1917–1921 (2 vol 1935) online free Cohen, Stephen F. Rethinking the Soviet Experience: Politics and History since 1917. (Oxford University Press, 1985) Davies, R. W. Soviet economic development from Lenin to Khrushchev (1998) excerpt Davies, R.W., Mark Harrison and S.G. Wheatcroft. 
The Economic transformation of the Soviet Union, 1913–1945 (1994) Figes, Orlando. A People's Tragedy: A History of the Russian Revolution (1997) online Fitzpatrick, Sheila. The Russian Revolution. (Oxford University Press, 1982), 208 pages. Gregory, Paul R. and Robert C. Stuart, Russian and Soviet Economic Performance and Structure (7th ed. 2001) Hosking, Geoffrey. The First Socialist Society: A History of the Soviet Union from Within (2nd ed. Harvard UP 1992) 570 pages Kennan, George F. Russia and the West under Lenin and Stalin (1961) online Kort, Michael. The Soviet Colossus: History and Aftermath (7th ed. 2010) 502 pages Kotkin, Stephen. Stalin: Paradoxes of Power, 1878–1928 (2014); vol 2 (2017) Library of Congress. Russia: a country study edited by Glenn E. Curtis. (Federal Research Division, Library of Congress, 1996). online Lincoln, W. Bruce. Passage Through Armageddon: The Russians in War and Revolution, 1914–1918 (1986) Lewin, Moshe. Russian Peasants and Soviet Power. (Northwestern University Press, 1968) McCauley, Martin. The Rise and Fall of the Soviet Union (2007), 522 pages. Moss, Walter G. A History of Russia. Vol. 2: Since 1855. 2d ed. Anthem Press, 2005. Nove, Alec. An Economic History of the USSR, 1917–1991. 3rd ed. London: Penguin Books, 1993. Ofer, Gur. "Soviet Economic Growth: 1928–1985," Journal of Economic Literature (1987) 25#4: 1767–1833. online Pipes, Richard. A concise history of the Russian Revolution (1995) online Regelson, Lev. Tragedy of Russian Church. 1917–1953. http://www.regels.org/Russian-Church.htm Remington, Thomas. Building Socialism in Bolshevik Russia. Pittsburgh: University of Pittsburgh Press, 1984. Service, Robert. A History of Twentieth-Century Russia. 2nd ed. Cambridge, MA: Harvard University Press, 1999. Service, Robert. Stalin: A Biography (2004), along with Tucker and Kotkin, a standard biography Steinberg, Mark D. The Russian Revolution, 1905–1921 (Oxford Histories, 2017). Tucker, Robert C. Stalin as Revolutionary, 1879–1929 (1973); Stalin in Power: The Revolution from Above, 1929–1941 (1990); along with the Kotkin and Service books, a standard biography; online at ACLS e-books Post-Soviet era Asmus, Ronald. A Little War that Shook the World: Georgia, Russia, and the Future of the West. NYU (2010). Cohen, Stephen. Failed Crusade: America and the Tragedy of Post-Communist Russia. New York: W.W. Norton, 2000, 320 pages. Gregory, Paul R. and Robert C. Stuart, Russian and Soviet Economic Performance and Structure, Addison-Wesley, Seventh Edition, 2001. Medvedev, Roy. Post-Soviet Russia: A Journey Through the Yeltsin Era, Columbia University Press, 2002, 394 pages. Moss, Walter G. A History of Russia. Vol. 2: Since 1855. 2d ed. Anthem Press, 2005. Chapter 22. Smorodinskaya, Tatiana, and Karen Evans-Romaine, eds. Encyclopedia of Contemporary Russian Culture (2014) excerpt; 800 pp covering art, literature, music, film, media, crime, politics, business, and economics. Stent, Angela. The Limits of Partnership: U.S.-Russian Relations in the Twenty-First Century (2014) Atlases, geography Blinnikov, Mikhail S. A geography of Russia and its neighbors (Guilford Press, 2011) Barnes, Ian. Restless Empire: A Historical Atlas of Russia (2015), copies of historic maps Catchpole, Brian. A Map History of Russia (Heinemann Educational Publishers, 1974), new topical maps. Channon, John, and Robert Hudson. The Penguin historical atlas of Russia (Viking, 1995), new topical maps. Chew, Allen F. 
An atlas of Russian history: eleven centuries of changing borders (Yale UP, 1970), new topical maps. Gilbert, Martin. Routledge Atlas of Russian History (4th ed. 2007) excerpt and text search online Henry, Laura A. Red to green: environmental activism in post-Soviet Russia (2010) Kaiser, Robert J. The Geography of Nationalism in Russia and the USSR (1994). Medvedev, Andrei. Economic Geography of the Russian Federation (2000) Parker, William Henry. An historical geography of Russia (University of London Press, 1968) Shaw, Denis J.B. Russia
the radical intellectuals and the urban working class, they advocated complete social, economic and political revolution. In 1903, the RSDLP split into two wings: the radical Bolsheviks, led by Vladimir Lenin, and the relatively moderate Mensheviks, led by Yuli Martov. The Mensheviks believed that Russian socialism would grow gradually and peacefully and that the tsar's regime should be succeeded by a democratic republic in which the socialists would cooperate with the liberal bourgeois parties. The Bolsheviks advocated the formation of a small elite of professional revolutionaries, subject to strong party discipline, to act as the vanguard of the proletariat in order to seize power by force. At the beginning of the 20th century, Russia continued its expansion in the Far East; Chinese Manchuria was in the zone of Russian interests. Russia took an active part in the intervention of the great powers in China to suppress the Boxer rebellion. During this war, Russia occupied Manchuria, which caused a clash of interests with Japan. In 1904, the Russo-Japanese War began, which ended extremely unfavourably for Russia. Revolution of 1905 The disastrous performance of the Russian armed forces in the Russo-Japanese War was a major blow to the Russian State and increased the potential for unrest. In January 1905, an incident known as "Bloody Sunday" occurred when Father Gapon led an enormous crowd to the Winter Palace in Saint Petersburg to present a petition to the tsar. When the procession reached the palace, Cossacks opened fire on the crowd, killing hundreds. The Russian masses were so aroused over the massacre that a general strike was declared demanding a democratic republic. This marked the beginning of the Russian Revolution of 1905. Soviets (councils of workers) appeared in most cities to direct revolutionary activity. In October 1905, Nicholas reluctantly issued the October Manifesto, which conceded the creation of a national Duma (legislature) to be called without delay. The right to vote was extended, and no law was to go into force without confirmation by the Duma. The moderate groups were satisfied; but the socialists rejected the concessions as insufficient and tried to organize new strikes. By the end of 1905, there was disunity among the reformers, and the tsar's position was strengthened for the time being. World War I On 28 June 1914, Gavrilo Princip assassinated Archduke Franz Ferdinand of Austro-Hungary. In response, on 23 July, Austro-Hungary issued an ultimatum to Serbia, which it considered a Russian client-state. Russia had no treaty obligation to Serbia, and in long-term perspective, Russia was militarily gaining on Germany and Austro-Hungary, and so had an incentive to wait. Most Russian leaders wanted to avoid war. But in that crisis they had the support of France, and believed that supporting Serbia was important for Russia's credibility and for its goal of a leadership role in the Balkans. Tsar Nicholas II mobilised Russian forces on 30 July 1914 to defend Serbia from Austria-Hungary. Christopher Clark states: "The Russian general mobilisation [of 30 July] was one of the most momentous decisions of the July crisis. This was the first of the general mobilisations. It came at the moment when the German government had not yet even declared the State of Impending War". Germany responded with its own mobilisation and declaration of War on 1 August 1914. At the opening of hostilities, the Russians took the offensive against both Germany and Austria-Hungary. 
The very large but poorly equipped Russian army fought tenaciously and desperately at times, despite its lack of organization and very weak logistics. Casualties were enormous. In the 1914 campaign, Russian forces defeated Austro-Hungarian forces in the Battle of Galicia. The success of the Russian army forced the German army to withdraw troops from the western front to the Russian front. However, the shell famine led to the defeat of the Russian forces in Poland by the central powers in the 1915 campaign, which led to a major retreat of the Russian army. In 1916, the Russians again dealt a powerful blow to the Austrians during the Brusilov offensive. By 1915, morale was bad and getting worse. Many recruits were sent to the front unarmed, and told to pick up whatever weapons they could from the battlefield. Nevertheless, the Russian army fought on, and tied down large numbers of Germans and Austrians. When the homefront showed an occasional surge of patriotism, the tsar and his entourage failed to exploit it for military benefit. The Russian army neglected to rally the ethnic and religious minorities that were hostile to Austria, such as Poles. The tsar refused to cooperate with the national legislature, the Duma, and listened less to experts than to his wife, who was in thrall to her chief advisor, the holy man Grigori Rasputin. More than two million refugees fled. Repeated military failures and bureaucratic ineptitude soon turned large segments of the population against the government. The German and Ottoman fleets prevented Russia from importing urgently needed supplies through the Baltic and Black seas. By the middle of 1915 the impact of the war was demoralizing. Food and fuel were in short supply, casualties kept occurring, and inflation was mounting. Strikes increased among low-paid factory workers, and the peasants, who wanted land reforms, were restless. Meanwhile, elite distrust of the regime was deepened by reports that Rasputin was gaining influence; his assassination in late 1916 ended the scandal but did not restore the autocracy's lost prestige. Russian Civil War (1917–1922) Russian Revolution In late February (3 March 1917), a strike occurred in a factory in the capital Petrograd (the new name for Saint Petersburg). On 23 February (8 March) 1917, thousands of female textile workers walked out of their factories protesting the lack of food and calling on other workers to join them. Within days, nearly all the workers in the city were idle, and street fighting broke out. The tsar ordered the Duma to disband, ordered strikers to return to work, and ordered troops to shoot at demonstrators in the streets. His orders triggered the February Revolution, especially when soldiers openly sided with the strikers. The tsar and the aristocracy fell on 2 March, as Nicholas II abdicated. To fill the vacuum of authority, the Duma declared a Provisional Government, headed by Prince Lvov, which was collectively known as the Russian Republic. Meanwhile, the socialists in Petrograd organized elections among workers and soldiers to form a soviet (council) of workers' and soldiers' deputies, as an organ of popular power that could pressure the "bourgeois" Provisional Government. 
In July, following a series of crises that undermined their authority with the public, the head of the Provisional Government resigned and was succeeded by Alexander Kerensky, who was more progressive than his predecessor but not radical enough for the Bolsheviks or many Russians discontented with the deepening economic crisis and the continuation of the war. While Kerensky's government marked time, the socialist-led soviet in Petrograd joined with soviets that formed throughout the country to create a national movement. The German government provided over 40 million gold marks to subsidize Bolshevik publications and activities subversive of the tsarist government, especially focusing on disgruntled soldiers and workers. In April 1917 Germany provided a special sealed train to carry Vladimir Lenin back to Russia from his exile in Switzerland. After many behind-the-scenes maneuvers, the soviets seized control of the government in November 1917 and drove Kerensky and his moderate provisional government into exile, in the events that would become known as the October Revolution. When the national Constituent Assembly (elected in December 1917) refused to become a rubber stamp of the Bolsheviks, it was dissolved by Lenin's troops and all vestiges of democracy were removed. With the handicap of the moderate opposition removed, Lenin was able to free his regime from the war problem by the harsh Treaty of Brest-Litovsk (1918) with Germany. Russia lost much of her western borderlands. However, when Germany was defeated the Soviet government repudiated the Treaty. Russian Civil War The Bolshevik grip on power was by no means secure, and a lengthy struggle broke out between the new regime and its opponents, which included the Socialist Revolutionaries, the anti-Bolshevik White movement, and large numbers of peasants. At the same time the Allied powers sent several expeditionary armies to support the anti-Communist forces in an attempt to force Russia to rejoin the world war. The Bolsheviks fought against both these forces and national independence movements in the former Russian Empire. By 1921, they had defeated their internal enemies and brought most of the newly independent states under their control, with the exception of Finland, the Baltic States, the Moldavian Democratic Republic (which joined Romania), and Poland (with whom they had fought the Polish–Soviet War). Finland also annexed the region Pechenga of the Russian Kola peninsula; Soviet Russia and allied Soviet republics conceded the parts of its territory to Estonia (Petseri County and Estonian Ingria), Latvia (Pytalovo), and Turkey (Kars). Poland incorporated the contested territories of Western Belarus and Western Ukraine, the former parts of the Russian Empire (except Galicia) east to Curzon Line. Both sides regularly committed brutal atrocities against civilians. During the civil war era White Terror (Russia) for example, Petlyura and Denikin's forces massacred 100,000 to 150,000 Jews in Ukraine and southern Russia. Hundreds of thousands of Jews were left homeless and tens of thousands became victims of serious illness. Estimates for the total number of people killed during the Red Terror carried out by the Bolsheviks vary widely. One source asserts that the total number of victims of repression and pacification campaigns could be 1.3 million, whereas others give estimates ranging from 10,000 in the initial period of repression to 50,000 to 140,000 and an estimate of 28,000 executions per year from December 1917 to February 1922. 
The most reliable estimations for the total number of killings put the number at about 100,000, whereas others suggest a figure of 200,000. The Russian economy was devastated by the war, with factories and bridges destroyed, cattle and raw materials pillaged, mines flooded and machines damaged. The droughts of 1920 and 1921, as well as the 1921 famine, worsened the disaster still further. Disease had reached pandemic proportions, with 3,000,000 dying of typhus alone in 1920. Millions more also died of widespread starvation. By 1922 there were at least 7,000,000 street children in Russia as a result of nearly ten years of devastation from the Great War and the civil war. Another one to two million people, known as the White émigrés, fled Russia, many were evacuated from Crimea in the 1920, some through the Far East, others west into the newly independent Baltic countries. These émigrés included a large percentage of the educated and skilled population of Russia. Soviet Union (1922–1991) Creation of the Soviet Union The history of Russia between 1922 and 1991 is essentially the history of the Union of Soviet Socialist Republics, or Soviet Union. This ideologically based union, established in December 1922 by the leaders of the Russian Communist Party, was roughly coterminous with Russia before the Treaty of Brest-Litovsk. At that time, the new nation included four constituent republics: the Russian SFSR, the Ukrainian SSR, the Belarusian SSR, and the Transcaucasian SFSR. The constitution, adopted in 1924, established a federal system of government based on a succession of soviets set up in villages, factories, and cities in larger regions. This pyramid of soviets in each constituent republic culminated in the All-Union Congress of Soviets. However, while it appeared that the congress exercised sovereign power, this body was actually governed by the Communist Party, which in turn was controlled by the Politburo from Moscow, the capital of the Soviet Union, just as it had been under the tsars before Peter the Great. War Communism and the New Economic Policy The period from the consolidation of the Bolshevik Revolution in 1917 until 1921 is known as the period of war communism. Land, all industry, and small businesses were nationalized, and the money economy was restricted. Strong opposition soon developed. The peasants wanted cash payments for their products and resented having to surrender their surplus grain to the government as a part of its civil war policies. Confronted with peasant opposition, Lenin began a strategic retreat from war communism known as the New Economic Policy (NEP). The peasants were freed from wholesale levies of grain and allowed to sell their surplus produce in the open market. Commerce was stimulated by permitting private retail trading. The state continued to be responsible for banking, transportation, heavy industry, and public utilities. Although the left opposition among the Communists criticized the rich peasants, or kulaks, who benefited from the NEP, the program proved highly beneficial and the economy revived. The NEP would later come under increasing opposition from within the party following Lenin's death in early 1924. Changes to Russian society As the Russian Empire included during this period not only the region of Russia, but also today's territories of Ukraine, Belarus, Poland, Lithuania, Estonia, Latvia, Moldavia and the Caucasian and Central Asian countries, it is possible to examine the firm formation process in all those regions. 
One of the main determinants of firm creation in a given region of the Russian Empire may have been urban demand for goods and the supply of industrial and organizational skill. While the Russian economy was being transformed, the social life of the people underwent equally drastic changes. The Family Code of 1918 granted women equal status with men, and permitted a couple to take either the husband's or the wife's name. Divorce no longer required a court procedure, and to make women completely free of the responsibilities of childbearing, abortion was made legal as early as 1920. As a side effect, the emancipation of women enlarged the labor market. Girls were encouraged to secure an education and pursue a career in the factory or the office. Communal nurseries were set up for the care of small children, and efforts were made to shift the center of people's social life from the home to educational and recreational groups, the soviet clubs. The Soviet government pursued a policy of eliminating illiteracy, known as likbez. With industrialization, massive urbanization began in the USSR. In the field of nationalities policy, korenizatsiya was carried out in the 1920s. However, from the mid-1930s, the Stalinist government returned to the tsarist policy of Russification of the borderlands. In particular, the written languages of many of the nations of the USSR were converted to the Cyrillic alphabet (Cyrillization). Industrialization and collectivization The years from 1929 to 1939 comprised a tumultuous decade in Soviet history—a period of massive industrialization and internal struggles as Joseph Stalin established near total control over Soviet society, wielding virtually unrestrained power. Following Lenin's death Stalin wrestled to gain control of the Soviet Union with rival factions in the Politburo, especially Leon Trotsky's. By 1928, with the Trotskyists either exiled or rendered powerless, Stalin was ready to put a radical programme of industrialisation into action. In 1929, Stalin proposed the first five-year plan. Abolishing the NEP, it was the first of a number of plans aimed at swift accumulation of capital resources through the buildup of heavy industry, the collectivization of agriculture, and the restricted manufacture of consumer goods. For the first time in history a government controlled all economic activity. The rapid growth of production capacity and of the output of heavy industry (which roughly quadrupled) was of great importance for ensuring economic independence from Western countries and strengthening the country's defense capability. At this time, the Soviet Union made the transition from an agrarian country to an industrial one. As a part of the plan, the government took control of agriculture through the state and collective farms (kolkhozes). By a decree of February 1930, about one million individual peasants (kulaks) were forced off their land. Many peasants strongly opposed regimentation by the state, often slaughtering their herds when faced with the loss of their land. In some regions they revolted, and countless peasants deemed "kulaks" by the authorities were executed. The combination of bad weather, deficiencies of the hastily established collective farms, and massive confiscation of grain precipitated a serious famine, and several million peasants died of starvation, mostly in Ukraine, Kazakhstan and parts of southwestern Russia.
The deteriorating conditions in the countryside drove millions of desperate peasants to the rapidly growing cities, fueling industrialization and vastly increasing Russia's urban population in the space of just a few years. Outside agriculture, the plans achieved remarkable results. Russia, by many measures the poorest nation in Europe at the time of the Bolshevik Revolution, now industrialized at a phenomenal rate, far surpassing Germany's pace of industrialization in the 19th century and Japan's earlier in the 20th century. Stalinist repression The NKVD rounded up tens of thousands of Soviet citizens to face arrest, deportation, or execution. Of the six original members of the 1920 Politburo who survived Lenin, all were purged by Stalin. Old Bolsheviks who had been loyal comrades of Lenin, high officers in the Red Army, and directors of industry were liquidated in the Great Purges. Purges in other Soviet republics also helped centralize control in the USSR. Stalin destroyed the opposition in the party consisting of the old Bolsheviks during the Moscow trials. The NKVD under the leadership of Stalin's commissar Nikolai Yezhov carried out a series of massive repressive operations against the kulaks and various national minorities in the USSR. During the Great Purges of 1937–38, about 700,000 people were executed. New penalties were introduced, and many citizens were prosecuted for fictitious crimes of sabotage and espionage. The labor provided by convicts working in the labor camps of the Gulag system became an important component of the industrialization effort, especially in Siberia. An estimated 18 million people passed through the Gulag system, and perhaps another 15 million had experience of some other form of forced labor. After the partition of Poland in 1939, the NKVD executed 20,000 captured Polish officers in the Katyn massacre. In the late 1930s and the first half of the 1940s, the Stalinist government carried out massive deportations of various nationalities, and a number of ethnic groups were deported from their home areas to Central Asia. Soviet Union on the international stage The Soviet Union viewed the 1933 accession to power in Germany of Hitler's fervently anti-Communist government with great alarm from the outset, especially since Hitler proclaimed the Drang nach Osten as one of the major objectives in his vision of the German strategy of Lebensraum. The Soviets supported the republicans of Spain who struggled against fascist German and Italian troops in the Spanish Civil War. In 1938–1939, immediately prior to WWII, the Soviet Union successfully fought against Imperial Japan in the Soviet–Japanese border conflicts in the Russian Far East, which led to Soviet–Japanese neutrality and the tense border peace that lasted until August 1945. In 1938, Germany annexed Austria and, together with major Western European powers, signed the Munich Agreement, following which Germany, Hungary and Poland divided parts of Czechoslovakia between themselves. German plans for further eastward expansion, as well as the Western powers' lack of resolve to oppose it, became more apparent. Despite the Soviet Union strongly opposing the Munich deal and repeatedly reaffirming its readiness to militarily back commitments given earlier to Czechoslovakia, the Western Betrayal led to the end of Czechoslovakia and further increased fears in the Soviet Union of a coming German attack.
This led the Soviet Union to rush the modernization of its military industry and to carry out its own diplomatic maneuvers. In 1939, the Soviet Union signed the Molotov–Ribbentrop Pact: a non-aggression pact with Nazi Germany dividing Eastern Europe into two separate spheres of influence. Following the pact, the USSR normalized relations with Nazi Germany and resumed Soviet–German trade. World War II On 17 September 1939, sixteen days after the start of World War II and with the victorious Germans having advanced deep into Polish territory, the Red Army invaded eastern Poland, stating as justification the "need to protect Ukrainians and Belarusians" there, after the "cessation of existence" of the Polish state. As a result, the Belarusian and Ukrainian Soviet republics' western borders were moved westward, and the new Soviet western border was drawn close to the original Curzon line. In the meantime, negotiations with Finland over a Soviet-proposed land swap that would redraw the Soviet-Finnish border further away from Leningrad failed, and in December 1939 the USSR invaded Finland, beginning a campaign known as the Winter War (1939–40). The war took a heavy toll on the Red Army but forced Finland to sign the Moscow Peace Treaty and cede the Karelian Isthmus and Ladoga Karelia. In summer 1940 the USSR issued an ultimatum to Romania forcing it to cede the territories of Bessarabia and Northern Bukovina. At the same time, the Soviet Union also occupied the three formerly independent Baltic states (Estonia, Latvia and Lithuania). The peace with Germany was tense, as both sides were preparing for military conflict, and it ended abruptly when the Axis forces led by Germany swept across the Soviet border on 22 June 1941. By the autumn the German army had seized Ukraine, laid siege to Leningrad, and threatened to capture the capital, Moscow, itself. Although in December 1941 the Red Army drove the German forces back from Moscow in a successful counterattack, the Germans retained the strategic initiative for approximately another year and mounted a deep offensive toward the south-east, reaching the Volga and the Caucasus. However, two major German defeats at Stalingrad and Kursk proved decisive and reversed the course of the entire war, as the Germans never regained the strength to sustain their offensive operations and the Soviet Union recaptured the initiative for the rest of the conflict. By the end of 1943, the Red Army had broken through the German siege of Leningrad, liberated much of Ukraine and much of western Russia, and moved into Belarus. During the 1944 campaign, the Red Army defeated German forces in a series of offensive campaigns known as Stalin's ten blows. By the end of 1944, the front had moved beyond the 1939 Soviet frontiers into eastern Europe. Soviet forces drove into eastern Germany, capturing Berlin in May 1945. The war with Germany thus ended triumphantly for the Soviet Union. As agreed at the Yalta Conference, three months after Victory Day in Europe the USSR launched the Soviet invasion of Manchuria, defeating the Japanese troops there in the last Soviet campaign of World War II. Although the Soviet Union was victorious in World War II, the war resulted in around 26–27 million Soviet deaths (estimates vary) and devastated the Soviet economy. Some 1,710 towns and 70,000 settlements were destroyed.
The occupied territories suffered from the ravages of German occupation and from deportations of civilians to Germany as slave labor. Thirteen million Soviet citizens became victims of the repressive policies of Germany and its allies in occupied territories, where people died because of mass murders, famine, the absence of basic medical care, and slave labor. The Nazi genocide of the Jews, carried out by German Einsatzgruppen along with local collaborators, resulted in the almost complete annihilation of the Jewish population over the entire territory temporarily occupied by Germany and its allies. During the occupation, the Leningrad region lost around a quarter of its population, Soviet Belarus lost from a quarter to a third of its population, and 3.6 million Soviet prisoners of war (of 5.5 million) died in German camps. Cold War Collaboration among the major Allies had won the war and was supposed to serve as the basis for postwar reconstruction and security. The USSR became one of the founders of the UN and a permanent member of the UN Security Council. However, the conflict between Soviet and U.S. national interests, known as the Cold War, came to dominate the international stage in the postwar period. The Cold War emerged from a conflict between Stalin and U.S. President Harry Truman over the future of Eastern Europe during the Potsdam Conference in the summer of 1945. Russia had suffered three devastating Western onslaughts in the previous 150 years, during the Napoleonic Wars, the First World War, and the Second World War, and Stalin's goal was to establish a buffer zone of states between Germany and the Soviet Union. Truman charged that Stalin had betrayed the Yalta agreement. With Eastern Europe under Red Army occupation, Stalin was also biding his time, as his own atomic bomb project was steadily and secretly progressing. In April 1949 the United States sponsored the North Atlantic Treaty Organization (NATO), a mutual defense pact in which most Western nations pledged to treat an armed attack against one nation as an assault on all. The Soviet Union established an Eastern counterpart to NATO in 1955, dubbed the Warsaw Pact. The division of Europe into Western and Soviet blocs later took on a more global character, especially after 1949, when the U.S. nuclear monopoly ended with the testing of a Soviet bomb and the Communist takeover in China. The foremost objectives of Soviet foreign policy were the maintenance and enhancement of national security and the maintenance of hegemony over Eastern Europe. The Soviet Union maintained its dominance over the Warsaw Pact by crushing the Hungarian Revolution of 1956, suppressing the Prague Spring in Czechoslovakia in 1968, and supporting the suppression of the Solidarity movement in Poland in the early 1980s. The Soviet Union opposed the United States in a number of proxy conflicts all over the world, including the Korean War and the Vietnam War. As the Soviet Union continued to maintain tight control over its sphere of influence in Eastern Europe, the Cold War gave way to Détente and a more complicated pattern of international relations in the 1970s, in which the world was no longer clearly split into two opposed blocs. The nuclear arms race continued; the number of nuclear weapons in the hands of the USSR and the United States reached a menacing scale, giving the two powers the ability to destroy the planet many times over.
Less powerful countries had more room to assert their independence, and the two superpowers were partially able to recognize their common interest in trying to check the further spread and proliferation of nuclear weapons in treaties such as SALT I, SALT II, and the Anti-Ballistic Missile Treaty. U.S.–Soviet relations deteriorated following the beginning of the nine-year Soviet–Afghan War in 1979 and the 1980 election of Ronald Reagan, a staunch anti-communist, but improved as the communist bloc started to unravel in the late 1980s. With the collapse of the Soviet Union in 1991, Russia lost the superpower status that it had won in the Second World War. De-Stalinization and the era of stagnation Nikita Khrushchev solidified his position in a speech before the Twentieth Congress of the Communist Party in 1956 detailing Stalin's atrocities. In 1964, Khrushchev was removed from office by the Communist Party's Central Committee, which charged him with a host of errors that included Soviet setbacks such as the Cuban Missile Crisis. After a period of collective leadership led by Leonid Brezhnev, Alexei Kosygin and Nikolai Podgorny, Brezhnev, a veteran bureaucrat, took Khrushchev's place as Soviet leader. Brezhnev emphasized heavy industry, instituted the Soviet economic reform of 1965, and also attempted to ease relations with the United States. In the 1960s the USSR became a leading producer and exporter of petroleum and natural gas. Soviet science and industry peaked in the Khrushchev and Brezhnev years. The world's first nuclear power plant was established in 1954 in Obninsk, and the Baikal Amur Mainline was built. In addition, in 1980 Moscow hosted the Summer Olympic Games. While all modernized economies were rapidly moving to computerization after 1965, the USSR fell further and further behind. Moscow's decision to copy the IBM 360 of 1965 proved a decisive mistake, for it locked scientists into an antiquated system they were unable to improve. They had enormous difficulties in manufacturing the necessary chips reliably and in quantity, in programming workable and efficient software, in coordinating entirely separate operations, and in providing support to computer users. One of the greatest strengths of the Soviet economy was its vast supplies of oil and gas; world oil prices quadrupled in 1973–74 and rose again in 1979–1981, making the energy sector the chief driver of the Soviet economy and a means of covering up multiple weaknesses. At one point, Soviet Premier Alexei Kosygin told the head of oil and gas production, "things are bad with bread. Give me 3 million tons [of oil] over the plan." Former prime minister Yegor Gaidar, an economist looking back three decades later, wrote in 2007 about this dependence on energy exports. Soviet space program The Soviet space program, founded by Sergey Korolev, was especially successful. On 4 October 1957, the Soviet Union launched the first satellite, Sputnik. On 12 April 1961, Yuri Gagarin became the first human to travel into space, in the Soviet spaceship Vostok 1. Other achievements of the Soviet space program included the first photograph of the far side of the Moon, the exploration of Venus, the first spacewalk, by Alexei Leonov, and the first spaceflight by a woman, Valentina Tereshkova. In 1970 and 1973, the world's first planetary rovers, Lunokhod 1 and Lunokhod 2, were sent to the Moon and operated there successfully. More recently, the Soviet Union launched the world's first space station, Salyut, which in 1986 was succeeded by Mir, the first consistently inhabited long-term space station, serving from 1986 to 2001.
Perestroika and breakup of the Union Two developments dominated the decade that followed: the increasingly apparent crumbling of the Soviet Union's economic and political structures, and the patchwork attempts at reforms to reverse that process. After the rapid succession of former KGB Chief Yuri Andropov and Konstantin Chernenko, transitional figures with deep roots in Brezhnevite tradition, Mikhail Gorbachev implemented perestroika in an attempt to modernize Soviet communism, and made significant changes in the party leadership. However, Gorbachev's social reforms led to unintended consequences. His policy of glasnost facilitated public access to information after decades of government repression, and social problems received wider public attention, undermining the Communist Party's authority. Glasnost allowed ethnic and nationalist disaffection to reach the surface, and many constituent republics, especially the Baltic republics, Georgian SSR and Moldavian SSR, sought greater autonomy, which Moscow was unwilling to provide. In the revolutions of 1989 the USSR lost its allies in Eastern Europe. Gorbachev's attempts at economic reform were not sufficient, and the Soviet government left intact most of the fundamental elements of communist economy. Suffering from low pricing of petroleum and natural gas, the ongoing war in Afghanistan, and outdated industry and pervasive corruption, the Soviet planned economy proved to be ineffective, and by 1990 the Soviet government had lost control over economic conditions. Due to price control, there were shortages of almost all products, reaching their peak in the end of 1991, when people had to stand in long lines and were lucky to buy even the essentials. Control over the constituent republics was also relaxed, and they began to assert their national sovereignty over Moscow. The tension between Soviet Union and Russian SFSR authorities came to be personified in the bitter power struggle between Gorbachev and Boris Yeltsin. Squeezed out of Union politics by Gorbachev in 1987, Yeltsin, who represented himself as a committed democrat, presented a significant opposition to Gorbachev's authority. In a remarkable reversal of fortunes, he gained election as chairman of the Russian republic's new Supreme Soviet in May 1990. The following month, he secured legislation giving Russian laws priority over Soviet laws and withholding two-thirds of the budget. In the first Russian presidential election in 1991 Yeltsin became president of the Russian SFSR. At last Gorbachev attempted to restructure the Soviet Union into a less centralized state. However, on 19 August 1991, a coup against Gorbachev, conspired by senior Soviet officials, was attempted. The coup faced wide popular opposition and collapsed in three days, but disintegration of the Union became imminent. The Russian government took over most of the Soviet Union government institutions on its territory. Because of the dominant position of Russians in the Soviet Union, most gave little thought to any distinction between Russia and the Soviet Union before the late 1980s. In the Soviet Union, only Russian SFSR lacked even the paltry instruments of statehood that the other republics possessed, such as its own republic-level Communist Party branch, trade union councils, Academy of Sciences, and the like. The Communist Party of the Soviet Union was banned in Russia in 1991–1992, although no lustration has ever taken place, and many of its members became top Russian officials. 
However, as the Soviet government was still opposed to market reforms, the economic situation continued to deteriorate. By December 1991, the shortages had resulted in the introduction of food rationing in Moscow and Saint Petersburg for the first time since World War II. Russia received humanitarian food aid from abroad. After the Belavezha Accords, the Supreme Soviet of Russia withdrew Russia from the Soviet Union on 12 December. The Soviet Union officially ended on 25 December 1991, and the Russian Federation (formerly the Russian Soviet Federative Socialist Republic) took power on 26 December. The Russian government lifted price control in January 1992. Prices rose dramatically, but shortages disappeared. Russian Federation (1991–present) Liberal reforms of the 1990s Although Yeltsin came to power on a wave of optimism, he never recovered his popularity after endorsing Yegor Gaidar's "shock therapy" of ending Soviet-era price controls, drastic cuts in state spending, and an open foreign trade regime in early 1992 (see Russian economic reform in the 1990s). The reforms immediately devastated the living standards of much of the population. In the 1990s Russia suffered an economic downturn that was, in some ways, more severe than those the United States or Germany had undergone six decades earlier in the Great Depression. Hyperinflation hit the ruble, due to monetary overhang from the days of the planned economy. Meanwhile, the profusion of small parties and their aversion to coherent alliances left the legislature chaotic. During 1993, Yeltsin's rift with the parliamentary leadership led to the September–October 1993 constitutional crisis. The crisis climaxed on 3 October, when Yeltsin chose a radical solution to settle his dispute with parliament: he called up tanks to shell the Russian White House, blasting out his opponents. As Yeltsin was taking the unconstitutional step of dissolving the legislature, Russia came close to a serious civil conflict. Yeltsin was then free to impose the current Russian constitution with strong presidential powers, which was approved by referendum in December 1993. The cohesion of the Russian Federation was also threatened when the republic of Chechnya attempted to break away, leading to the First and Second Chechen Wars. Economic reforms also consolidated a semi-criminal oligarchy with roots in the old Soviet system. Advised by Western governments, the World Bank, and the International Monetary Fund, Russia embarked on the largest and fastest privatization that the world had ever seen in order to reform the fully nationalized Soviet economy. By mid-decade, retail, trade, services, and small industry were in private hands. Most big enterprises were acquired by their old managers, engendering a new class of rich businessmen (the Russian tycoons) in league with criminal mafias or Western investors. Corporate raiders such as Andrei Volgin engaged in hostile takeovers of corrupt corporations by the mid-1990s. By the mid-1990s Russia had a system of multiparty electoral politics. But it was harder to establish a representative government because of two structural problems—the struggle between president and parliament and the anarchic party system. Meanwhile, the central government had lost control of the localities, bureaucracy, and economic fiefdoms, and tax revenues had collapsed. Still in a deep depression, Russia's economy was hit further by the financial crash of 1998. After the crisis, Yeltsin was at the end of his political career.
Just hours before the first day of 2000, Yeltsin made a surprise announcement of his resignation, leaving the government in the hands of the little-known Prime Minister Vladimir Putin, a former KGB official and head of the FSB, the KGB's post-Soviet successor agency. The era of Putin In 2000, the new acting president defeated his opponents in the presidential election on 26 March and won in a landslide four years later. The Second Chechen War ended in a Russian victory; at the same time, after the September 11 terrorist attacks, there was a rapprochement between Russia and the United States. Putin created a system of guided democracy in Russia by subjugating parliament, suppressing independent media and placing major oil and gas companies under state control. International observers were alarmed by moves in late 2004 to further tighten the presidency's control over parliament, civil society, and regional officeholders. In 2008, Dmitri Medvedev, a former Gazprom chairman and Putin's chief of staff, was elected President of Russia. In 2012, Putin and Medvedev switched places: Putin became president again, a move that prompted massive protests in Moscow in 2011–12. Russia's long-term problems include a shrinking workforce, rampant corruption, and underinvestment in infrastructure. Nevertheless, reversion to a socialist command economy seemed almost impossible. The economic problems are aggravated by massive capital outflows, as well as extremely difficult conditions for doing business, due to pressure from the security forces (the siloviki) and government agencies. Thanks to high oil prices, Russia's GDP at PPP doubled between 2000 and 2008. Although high oil prices and a relatively cheap ruble initially drove this growth, since 2003 consumer demand and, more recently, investment have played a significant role. Russia is well ahead of most other resource-rich countries in its economic development, with a long tradition of education, science, and industry. The economic recovery of the 2000s allowed Russia to obtain the right to host the 2014 Winter Olympic Games in Sochi. In 2014, following a controversial referendum in which separation was favored by a large majority of voters, the Russian leadership announced the accession of Crimea into the Russian Federation. Following Russia's annexation of Crimea and alleged Russian interference in the war in eastern Ukraine, Western sanctions were imposed on Russia. Since 2015, Russia has been conducting a military intervention in Syria in support of the Bashar al-Assad regime, against ISIS and the Syrian opposition. In 2018, the FIFA World Cup was held in Russia, and Vladimir Putin was re-elected for a fourth presidential term. In 2022, Russia launched an invasion of Ukraine. The invasion was widely condemned by the global community, with new sanctions being imposed on Russia.
Historiography Historians See also Dissolution of the Soviet Union Family tree of the Russian monarchs General Secretary of the Communist Party of the Soviet Union History of Central Asia History of Siberia History of the administrative division of Russia History of the Caucasus History of the Jews in Russia History of the Soviet Union List of heads of government of Russia List of Mongol and Tatar raids against Rus' List of presidents of Russia List of Russian explorers List of Russian historians List of Russian rulers List of wars involving Russia Military history of the Russian Empire Military history of the Soviet Union Politics of Russia Russia Russian Armed Forces Russian colonization of the Americas Russian Empire Russian Medical Fund Soviet Union Timeline of Moscow Timeline of Russian history Timeline of Russian innovation Timeline of Saint Petersburg References
appoint bishops (investiture). The end of lay investiture threatened to undercut the power of the Empire and the ambitions of noblemen. Bishoprics being merely lifetime appointments, a king could better control their powers and revenues than those of hereditary noblemen. Even better, he could leave the post vacant and collect the revenues, theoretically in trust for the new bishop, or give a bishopric to pay a helpful noble. The Church wanted to end lay investiture to end this and other abuses, to reform the episcopate and provide better pastoral care. Pope Gregory VII issued the Dictatus Papae, which declared that the pope alone could appoint bishops. Henry IV's rejection of the decree led to his excommunication and a ducal revolt. Eventually Henry received absolution after dramatic public penance, though the Great Saxon Revolt and the conflict of investiture continued. A similar controversy occurred in England between King Henry I and St. Anselm, Archbishop of Canterbury, over investiture and episcopal vacancy. The English dispute was resolved by the Concordat of London, 1107, where the king renounced his claim to invest bishops but continued to require an oath of fealty. This was a partial model for the Concordat of Worms (Pactum Calixtinum), which resolved the Imperial investiture controversy with a compromise that allowed secular authorities some measure of control but granted the selection of bishops to their cathedral canons. As a symbol of the compromise, both ecclesiastical and lay authorities invested bishops with, respectively, the staff and the ring. Crusades Generally, the Crusades refer to the campaigns in the Holy Land sponsored by the papacy against Muslim forces. There were other crusades against Islamic forces in southern Spain, southern Italy, and Sicily. The Papacy also sponsored numerous Crusades to subjugate and convert the pagan peoples of north-eastern Europe, against its political enemies in Western Europe, and against heretical or schismatic religious minorities within Christendom. The Holy Land had been part of the Roman Empire, and thus of the Byzantine Empire, until the Islamic conquests of the 7th and 8th centuries. Thereafter, Christians had generally been permitted to visit the sacred places in the Holy Land until 1071, when the Seljuk Turks cut off Christian pilgrimages and assailed the Byzantines, defeating them at the Battle of Manzikert. Emperor Alexius I asked for aid from Pope Urban II against Islamic aggression. He probably expected money from the pope for the hiring of mercenaries. Instead, Urban II called upon the knights of Christendom in a speech made at the Council of Clermont on 27 November 1095, combining the idea of pilgrimage to the Holy Land with that of waging a holy war against infidels. The First Crusade captured Antioch in 1098 and then Jerusalem in 1099. The Second Crusade was called in 1145, after Edessa was taken by Islamic forces. Jerusalem was held until 1187 and the Third Crusade, famous for the battles between Richard the Lionheart and Saladin. The Fourth Crusade, begun by Innocent III in 1202, intended to retake the Holy Land but was soon subverted by the Venetians. When the crusaders arrived in Constantinople, they sacked the city and other parts of Asia Minor and established the Latin Empire of Constantinople in Greece and Asia Minor. Further numbered crusades to the Holy Land followed, but the fall of Acre in 1291 essentially ended the Western presence in the Holy Land.
Jerusalem was held by the crusaders for nearly a century, while other strongholds in the Near East remained in Christian possession much longer. The crusades in the Holy Land ultimately failed to establish permanent Christian kingdoms. Islamic expansion into Europe remained a threat for centuries, culminating in the campaigns of Suleiman the Magnificent in the 16th century. Crusades in Iberia (the Reconquista), southern Italy, and Sicily eventually lead to the demise of Islamic power in Europe. The Albigensian Crusade targeted the heretical Cathars of southern France; in combination with the Inquisition set up in its aftermath, it succeeded in exterminating them. The Wendish Crusade succeeded in subjugating and forcibly converting the pagan Slavs of modern eastern Germany. The Livonian Crusade, carried out by the Teutonic Knights and other orders of warrior-monks, similarly conquered and forcibly converted the pagan Balts of Livonia and Old Prussia. However, the pagan Grand Duchy of Lithuania successfully resisted the Knights and converted only voluntarily in the 14th century. Medieval Inquisition The Medieval Inquisition was a series of inquisitions (Roman Catholic Church bodies charged with suppressing heresy) from around 1184, including the Episcopal Inquisition (1184–1230s) and later the Papal Inquisition (1230s). It was in response to movements within Europe considered apostate or heretical to Western Catholicism, in particular the Cathars and the Waldensians in southern France and northern Italy. These were the first inquisition movements of many that would follow. The inquisitions in combination with the Albigensian Crusade were fairly successful in ending heresy. Spread of Christianity Early evangelization in Scandinavia was begun by Ansgar, Archbishop of Bremen, "Apostle of the North". Ansgar, a native of Amiens, was sent with a group of monks to Jutland in around 820 at the time of the pro-Christian King Harald Klak. The mission was only partially successful, and Ansgar returned two years later to Germany, after Harald had been driven out of his kingdom. In 829, Ansgar went to Birka on Lake Mälaren, Sweden, with his aide friar Witmar, and a small congregation was formed in 831 which included the king's steward Hergeir. Conversion was slow, however, and most Scandinavian lands were only completely Christianised at the time of rulers such as Saint Canute IV of Denmark and Olaf I of Norway in the years following AD 1000. The Christianisation of the Slavs was initiated by one of Byzantium's most learned churchmen – the patriarch Photios I of Constantinople. The Byzantine Emperor Michael III chose Cyril and Methodius in response to a request from King Rastislav of Moravia, who wanted missionaries that could minister to the Moravians in their own language. The two brothers spoke the local Slavonic vernacular and translated the Bible and many of the prayer books. As the translations prepared by them were copied by speakers of other dialects, the hybrid literary language Old Church Slavonic was created, which later evolved into Church Slavonic and is the common liturgical language still used by the Russian Orthodox Church and other Slavic Orthodox Christians. Methodius went on to convert the Serbs. Bulgaria was a pagan country since its establishment in 681 until 864 when Boris I converted to Christianity. 
The reasons for that decision were complex; the most important factors were that Bulgaria was situated between two powerful Christian empires, Byzantium and East Francia; Christian doctrine particularly favoured the position of the monarch as God's representative on Earth, while Boris also saw it as a way to overcome the differences between Bulgars and Slavs. Bulgaria was officially recognised as a patriarchate by Constantinople in 927, Serbia in 1346, and Russia in 1589. All of these nations had been converted long before these dates. Late Middle Ages and the early Renaissance (1300–1520) Avignon Papacy and the Western Schism The Avignon Papacy, sometimes referred to as the Babylonian Captivity, was a period from 1309 to 1378 during which seven popes resided in Avignon, in modern-day France. In 1309, Pope Clement V moved to Avignon in southern France. Confusion and political animosity waxed, as the prestige and influence of Rome waned without a resident pontiff. Troubles reached their peak in 1378 when Gregory XI died while visiting Rome. A papal conclave met in Rome and elected Urban VI, an Italian. Urban soon alienated the French cardinals, and they held a second conclave electing Robert of Geneva to succeed Gregory XI, beginning the Western Schism. Criticism of Church corruption John Wycliffe, an English scholar and alleged heretic best known for denouncing the corruptions of the Church, was a precursor of the Protestant Reformation. He emphasized the supremacy of the Bible and called for a direct relationship between God and the human person, without interference by priests and bishops. His followers played a role in the English Reformation. Jan Hus, a Czech theologian in Prague, was influenced by Wycliffe and spoke out against the corruptions he saw in the Church. He was a forerunner of the Protestant Reformation, and his legacy has become a powerful symbol of Czech culture in Bohemia. Renaissance and the Church The Renaissance was a period of great cultural change and achievement, marked in Italy by a classical orientation and an increase of wealth through mercantile trade. The city of Rome, the papacy, and the papal states were all affected by the Renaissance. On the one hand, it was a time of great artistic patronage and architectural magnificence, where the Church commissioned such artists as Michelangelo, Brunelleschi, Bramante, Raphael, Fra Angelico, Donatello, and Leonardo da Vinci. On the other hand, wealthy Italian families often secured episcopal offices, including the papacy, for their own members, some of whom were known for immorality, such as Alexander VI and Sixtus IV. In addition to being the head of the Church, the pope became one of Italy's most important secular rulers, and pontiffs such as Julius II often waged campaigns to protect and expand their temporal domains. Furthermore, the popes, in a spirit of refined competition with other Italian lords, spent lavishly both on private luxuries but also on public works, repairing or building churches, bridges, and a magnificent system of aqueducts in Rome that still function today. Fall of Constantinople In 1453, Constantinople fell to the Ottoman Empire. Eastern Christians fleeing Constantinople, and the Greek manuscripts they carried with them, is one of the factors that prompted the literary renaissance in the West at about this time. The Ottoman government followed Islamic law when dealing with the conquered Christian population. Christians were officially tolerated as people of the Book. 
As such, the Church's canonical and hierarchical organisation were not significantly disrupted, and its administration continued to function. One of the first things that Mehmet the Conqueror did was to allow the Church to elect a new patriarch, Gennadius Scholarius. However, these rights and privileges, including freedom of worship and religious organisation, were often established in principle but seldom corresponded to reality. Christians were viewed as second-class citizens, and the legal protections they depended upon were subject to the whims of the sultan and the sublime porte. The Hagia Sophia and the Parthenon, which had been Christian churches for nearly a millennium, were converted into mosques. Violent persecutions of Christians were common and reached their climax in the Armenian, Assyrian, and Greek genocides. Early modern period (c. 1500–c. 1750) Reformation In the early 16th century, attempts were made by the theologians Martin Luther and Huldrych Zwingli, along with many others, to reform the Church. They considered the root of corruptions to be doctrinal, rather than simply a matter of moral weakness or lack of ecclesiastical discipline, and thus advocated for God's autonomy in redemption, and against voluntaristic notions that salvation could be earned by people. The Reformation is usually considered to have started with the publication of the Ninety-five Theses by Luther in 1517, although there was no schism until the 1521 Diet of Worms. The edicts of the Diet condemned Luther and officially banned citizens of the Holy Roman Empire from defending or propagating his ideas. The word Protestant is derived from the Latin protestatio, meaning declaration, which refers to the letter of protestation by Lutheran princes against the decision of the Diet of Speyer in 1529, which reaffirmed the edict of the Diet of Worms ordering the seizure of all property owned by persons guilty of advocating Lutheranism. The term "Protestant" was not originally used by Reformation era leaders; instead, they called themselves "evangelical", emphasising the "return to the true gospel (Greek: euangelion)." Early protest was against corruptions such as simony, the holding of multiple church offices by one person at the same time, episcopal vacancies, and the sale of indulgences. The Protestant position also included sola scriptura, sola fide, the priesthood of all believers, Law and Gospel, and the two kingdoms doctrine. The three most important traditions to emerge directly from the Reformation were the Lutheran, Reformed, and Anglican traditions, though the latter group identifies as both "Reformed" and "Catholic", and some subgroups reject the classification as "Protestant". Unlike other reform movements, the English Reformation began by royal influence. Henry VIII considered himself a thoroughly Catholic king, and in 1521 he defended the papacy against Luther in a book he commissioned entitled, The Defence of the Seven Sacraments, for which Pope Leo X awarded him the title Fidei Defensor (Defender of the Faith). However, the king came into conflict with the papacy when he wished to annul his marriage with Catherine of Aragon, for which he needed papal sanction. Catherine, among many other noble relations, was the aunt of Emperor Charles V, the papacy's most significant secular supporter. 
The ensuing dispute eventually led to a break from Rome and the declaration of the King of England as head of the English Church, which saw itself as a Protestant Church navigating a middle way between Lutheranism and Reformed Christianity, but leaning more towards the latter. Consequently, England experienced periods of reform and also Counter-Reformation. Monarchs such as Edward VI, Lady Jane Grey, Mary I and Elizabeth I, and Archbishops of Canterbury such as Thomas Cranmer and William Laud, pushed the Church of England in different directions over the course of only a few generations. What emerged was the Elizabethan Religious Settlement and a state church that considered itself both "Reformed" and "Catholic" but not "Roman", alongside other, unofficial and more radical movements such as the Puritans. In terms of politics, the English Reformation included heresy trials, the exiling of Roman Catholic populations to Spain and other Roman Catholic lands, and censorship and prohibition of books. Radical Reformation The Radical Reformation represented a response to corruption both in the Catholic Church and in the expanding Magisterial Protestant movement led by Martin Luther and many others. Beginning in Germany and Switzerland in the 16th century, the Radical Reformation gave birth to many radical Protestant groups throughout Europe. The term covers radical reformers like Thomas Müntzer and Andreas Karlstadt, the Zwickau prophets, and Anabaptist Christians, most notably the Amish, Mennonites, Hutterites, the Bruderhof Communities, and Schwarzenau Brethren. Counter-Reformation The Counter-Reformation was the response of the Catholic Church to the Protestant Reformation. In terms of meetings and documents, it consisted of the Confutatio Augustana, the Council of Trent, the Roman Catechism, and the Defensio Tridentinæ fidei. In terms of politics, the Counter-Reformation included heresy trials, the exiling of Protestant populations from Catholic lands, the seizure of children from their Protestant parents for institutionalized Catholic upbringing, a series of wars, the Index Librorum Prohibitorum (the list of prohibited books), and the Spanish Inquisition. Although Protestant Christians were excommunicated in an attempt to reduce their influence within the Catholic Church, at the same time they were persecuted during the Counter-Reformation, prompting some to live as crypto-Protestants (also termed Nicodemites), against the advice of John Calvin, who urged them to live their faith openly. Crypto-Protestants were documented as late as the 19th century in Latin America. The Council of Trent (1545–1563), initiated by Pope Paul III, addressed issues of certain ecclesiastical corruptions such as simony, absenteeism, nepotism, the holding of multiple church offices by one person, and other abuses. It also reasserted traditional practices and doctrines of the Church, such as the episcopal structure, clerical celibacy, the seven Sacraments, transubstantiation (the belief that during mass the consecrated bread and wine truly become the body and blood of Christ), the veneration of relics, icons, and saints (especially the Blessed Virgin Mary), the necessity of both faith and good works for salvation, the existence of purgatory and the issuance (but not the sale) of indulgences. In other words, all Protestant doctrinal objections and changes were uncompromisingly rejected. The Council also fostered an interest in education for parish priests to improve pastoral care.
Milan's Archbishop Saint Charles Borromeo set an example by visiting the remotest parishes and instilling high standards. Catholic Reformation Simultaneously with the Counter-Reformation, the Catholic Reformation consisted of improvements in art and culture, anti-corruption measures, the founding of the Jesuits, the establishment of seminaries, a reassertion of traditional doctrines and the emergence of new religious orders aimed at both moral reform and new missionary activity. Also part of this was the development of new yet orthodox forms of spirituality, such as that of the Spanish mystics and the French school of spirituality. The papacy of St. Pius V was known not only for its focus on halting heresy and worldly abuses within the Church, but also for its focus on improving popular piety in a determined effort to stem the appeal of Protestantism.
the Corinthian church in his epistle to Corinthians as bishops and presbyters interchangeably. The New Testament writers also use the terms overseer and elders interchangeably and as synonyms. Variant Christianities The Ante-Nicene period saw the rise of a great number of Christian sects, cults and movements with strong unifying characteristics lacking in the apostolic period. They had different interpretations of Scripture, particularly the divinity of Jesus and the nature of the Trinity. Many variations in this time defy neat categorizations, as various forms of Christianity interacted in a complex fashion to form the dynamic character of Christianity in this era. The Post-Apostolic period was diverse both in terms of beliefs and practices. In addition to the broad spectrum of general branches of Christianity, there was constant change and diversity that variably resulted in both internecine conflicts and syncretic adoption. Development of the biblical canon The Pauline epistles were circulating in collected form by the end of the 1st century. By the early 3rd century, there existed a set of Christian writings similar to the current New Testament, though there were still disputes over the canonicity of Hebrews, James, I Peter, I and II John, and Revelation. By the 4th century, there existed unanimity in the West concerning the New Testament canon, and by the 5th century the East, with a few exceptions, had come to accept the Book of Revelation and thus had come into harmony on the matter of the canon. Early orthodox writings As Christianity spread, it acquired certain members from well-educated circles of the Hellenistic world; they sometimes became bishops. They produced two sorts of works, theological and apologetic, the latter being works aimed at defending the faith by using reason to refute arguments against the veracity of Christianity. These authors are known as the Church Fathers, and study of them is called patristics. Notable early fathers include Ignatius of Antioch, Polycarp, Justin Martyr, Irenaeus, Tertullian, Clement of Alexandria, and Origen. Early art Christian art emerged relatively late and the first known Christian images emerge from about 200 AD, although there is some literary evidence that small domestic images were used earlier. The oldest known Christian paintings are from the Roman catacombs, dated to about 200, and the oldest Christian sculptures are from sarcophagi, dating to the beginning of the 3rd century. The early rejection of images, and the necessity to hide Christian practice from persecution, left behind few written records regarding early Christianity and its evolution. Persecutions and legalisation There was no empire-wide persecution of Christians until the reign of Decius in the third century. The last and most severe persecution organised by the imperial authorities was the Diocletianic Persecution, 303–311. The Edict of Serdica was issued in 311 by the Roman Emperor Galerius, officially ending the persecution in the East. With the passage in 313 AD of the Edict of Milan, in which the Roman Emperors Constantine the Great and Licinius legalised the Christian religion, persecution of Christians by the Roman state ceased. Armenia became the first country to establish Christianity as its state religion when, in an event traditionally dated to 301 AD, St. Gregory the Illuminator convinced Tiridates III, the king of Armenia, to convert to Christianity. 
Late antiquity (325–476) Influence of Constantine How much Christianity Constantine adopted at this point is difficult to discern, but his accession was a turning point for the Christian Church. He supported the Church financially, built various basilicas, granted privileges (e.g., exemption from certain taxes) to clergy, promoted Christians to some high offices, and returned confiscated property. Constantine played an active role in the leadership of the Church. In 316, he acted as a judge in a North African dispute concerning the Donatist controversy. More significantly, in 325 he summoned the Council of Nicaea, the first ecumenical council. He thus established a precedent for the emperor as responsible to God for the spiritual health of his subjects, and thus with a duty to maintain orthodoxy. He was to enforce doctrine, root out heresy, and uphold ecclesiastical unity. Constantine's son's successor, his nephew Julian, under the influence of his adviser Mardonius, renounced Christianity and embraced a Neo-platonic and mystical form of paganism, shocking the Christian establishment. He began reopening pagan temples, modifying them to resemble Christian traditions such as the episcopal structure and public charity (previously unknown in Roman paganism). Julian's short reign ended when he died in battle with the Persians. Arianism and the first ecumenical councils A popular doctrine in the 4th century was Arianism, which taught that Christ is distinct from and subordinate to God the Father. Although this doctrine was condemned as heresy and eventually eliminated by the Roman Church, it remained popular underground for some time. In the late 4th century, Ulfilas, a Roman bishop and an Arian, was appointed as the first bishop to the Goths, the Germanic peoples in much of Europe at the borders of and within the Empire. Ulfilas spread Arian Christianity among the Goths, firmly establishing the faith among many of the Germanic tribes, thus helping to keep them culturally distinct. During this age, the first ecumenical councils were convened. They were mostly concerned with Christological disputes. The First Council of Nicaea (325) and the First Council of Constantinople (381) resulted in condemnation of Arian teachings as heresy and produced the Nicene Creed. Christianity as Roman state religion On 27 February 380, with the Edict of Thessalonica put forth under Theodosius I, Gratian, and Valentinian II, the Roman Empire officially adopted Trinitarian Christianity as its state religion. Prior to this date, Constantius II and Valens had personally favoured Arian or Semi-Arian forms of Christianity, but Valens' successor Theodosius I supported the Trinitarian doctrine as expounded in the Nicene Creed. After its establishment, the Church adopted the same organisational boundaries as the Empire: geographical provinces, called dioceses, corresponding to imperial government territorial divisions. The bishops, who were located in major urban centres as in pre-legalisation tradition, thus oversaw each diocese. The bishop's location was his "seat", or "see". Among the sees, five came to hold special eminence: Rome, Constantinople, Jerusalem, Antioch, and Alexandria. The prestige of most of these sees depended in part on their apostolic founders, from whom the bishops were therefore the spiritual successors. Though the bishop of Rome was still held to be the First among equals, Constantinople was second in precedence as the new capital of the empire. 
Theodosius I decreed that others not believing in the preserved "faithful tradition", such as the Trinity, were to be considered to be practitioners of illegal heresy, and in 385, this resulted in the first case of the state, not Church, infliction of capital punishment on a heretic, namely Priscillian. Church of the East and the Sasanian Empire During the early 5th century, the School of Edessa had taught a Christological perspective stating that Christ's divine and human nature were distinct persons. A particular consequence of this perspective was that Mary could not be properly called the mother of God but could only be considered the mother of Christ. The most widely known proponent of this viewpoint was the Patriarch of Constantinople Nestorius. Since referring to Mary as the mother of God had become popular in many parts of the Church this became a divisive issue. The Roman Emperor Theodosius II called for the Council of Ephesus (431), with the intention of settling the issue. The council ultimately rejected Nestorius' view. Many churches who followed the Nestorian viewpoint broke away from the Roman Church, causing a major schism. The Nestorian churches were persecuted, and many followers fled to the Sasanian Empire where they were accepted. The Sasanian (Persian) Empire had many Christian converts early in its history tied closely to the Syriac branch of Christianity. The Empire was officially Zoroastrian and maintained a strict adherence to this faith in part to distinguish itself from the religion of the Roman Empire (originally the pagan Roman religion and then Christianity). Christianity became tolerated in the Sasanian Empire, and as the Roman Empire increasingly exiled heretics during the 4th and 6th centuries, the Sasanian Christian community grew rapidly. By the end of the 5th century, the Persian Church was firmly established and had become independent of the Roman Church. This church evolved into what is today known as the Church of the East. In 451, the Council of Chalcedon was held to further clarify the Christological issues surrounding Nestorianism. The council ultimately stated that Christ's divine and human nature were separate but both part of a single entity, a viewpoint rejected by many churches who called themselves miaphysites. The resulting schism created a communion of churches, including the Armenian, Syrian, and Egyptian churches. Though efforts were made at reconciliation in the next few centuries, the schism remained permanent, resulting in what is today known as Oriental Orthodoxy. Monasticism Monasticism is a form of asceticism whereby one renounces worldly pursuits and goes off alone as a hermit or joins a tightly organized community. It began early in the Church as a family of similar traditions, modelled upon Scriptural examples and ideals, and with roots in certain strands of Judaism. John the Baptist is seen as an archetypical monk, and monasticism was inspired by the organisation of the Apostolic community as recorded in Acts 2:42–47. Eremitic monks, or hermits, live in solitude, whereas cenobitics live in communities, generally in a monastery, under a rule (or code of practice) and are governed by an abbot. Originally, all Christian monks were hermits, following the example of Anthony the Great. However, the need for some form of organised spiritual guidance lead Pachomius in 318 to organise his many followers in what was to become the first monastery. 
Soon, similar institutions were established throughout the Egyptian desert as well as the rest of the eastern half of the Roman Empire. Women were especially attracted to the movement. Central figures in the development of monasticism were Basil the Great in the East and, in the West, Benedict, who created the famous Rule of Saint Benedict, which would become the most common rule throughout the Middle Ages and the starting point for other monastic rules. Early Middle Ages (476–799) The transition into the Middle Ages was a gradual and localised process. Rural areas rose as power centres whilst urban areas declined. Although a greater number of Christians remained in the East (Greek areas), important developments were underway in the West (Latin areas) and each took on distinctive shapes. The bishops of Rome, the popes, were forced to adapt to drastically changing circumstances. Maintaining only nominal allegiance to the emperor, they were forced to negotiate balances with the "barbarian rulers" of the former Roman provinces. In the East, the Church maintained its structure and character and evolved more slowly. Western missionary expansion The stepwise loss of Western Roman Empire dominance, replaced with foederati and Germanic kingdoms, coincided with early missionary efforts into areas not controlled by the collapsing empire. As early as in the 5th century, missionary activities from Roman Britain into the Celtic areas (Scotland, Ireland and Wales) produced competing early traditions of Celtic Christianity, that was later reintegrated under the Church in Rome. Prominent missionaries were Saints Patrick, Columba and Columbanus. The Anglo-Saxon tribes that invaded southern Britain some time after the Roman abandonment were initially pagan but were converted to Christianity by Augustine of Canterbury on the mission of Pope Gregory the Great. Soon becoming a missionary centre, missionaries such as Wilfrid, Willibrord, Lullus and Boniface converted their Saxon relatives in Germania. The largely Christian Gallo-Roman inhabitants of Gaul (modern France) were overrun by the Franks in the early 5th century. The native inhabitants were persecuted until the Frankish King Clovis I converted from paganism to Roman Catholicism in 496. Clovis insisted that his fellow nobles follow suit, strengthening his newly established kingdom by uniting the faith of the rulers with that of the ruled. After the rise of the Frankish Kingdom and the stabilizing political conditions, the Western part of the Church increased the missionary activities, supported by the Merovingian kingdom as a means to pacify troublesome neighbour peoples. After the foundation of a church in Utrecht by Willibrord, backlashes occurred when the pagan Frisian King Radbod destroyed many Christian centres between 716 and 719. In 717, the English missionary Boniface was sent to aid Willibrord, re-establishing churches in Frisia and continuing missions in Germany. During the late 8th century, Charlemagne used mass killings to subjugate the pagan Saxons and compel them to accept Christianity Byzantine Iconoclasm Following a series of heavy military reverses against the Muslims, Iconoclasm emerged in the early 8th century. In the 720s, the Byzantine Emperor Leo III the Isaurian banned the pictorial representation of Christ, saints, and biblical scenes. In the West, Pope Gregory III held two synods at Rome and condemned Leo's actions. The Byzantine Iconoclast Council, held at Hieria in 754, ruled that holy portraits were heretical. 
The movement destroyed much of the Christian church's early artistic history. The iconoclastic movement was later defined as heretical in 787 under the Second Council of Nicaea (the seventh ecumenical council) but had a brief resurgence between 815 and 842. High Middle Ages (800–1299) Carolingian Renaissance The Carolingian Renaissance was a period of intellectual and cultural revival of literature, arts, and scriptural studies during the late 8th and 9th centuries, mostly during the reigns of Charlemagne and Louis the Pious, Frankish rulers. To address the problems of illiteracy among clergy and court scribes, Charlemagne founded schools and attracted the most learned men from all of Europe to his court. Growing tensions between East and West Tensions in Christian unity started to become evident in the 4th century. Two basic problems were involved: the nature of the primacy of the bishop of Rome and the theological implications of adding a clause to the Nicene Creed, known as the filioque clause. These doctrinal issues were first openly discussed in Photius's patriarchate. The Eastern churches viewed Rome's understanding of the nature of episcopal power as being in direct opposition to the Church's essentially conciliar structure and thus saw the two ecclesiologies as mutually antithetical. Another issue developed into a major irritant to Eastern Christendom, the gradual introduction into the Nicene Creed in the West of the Filioque clause – meaning "and the Son" – as in "the Holy Spirit ... proceeds from the Father and the Son", where the original Creed, sanctioned by the councils and still used today by the Eastern Orthodox, simply states "the Holy Spirit, ... proceeds from the Father." The Eastern Church argued that the phrase had been added unilaterally and therefore illegitimately, since the East had never been consulted. In addition to this ecclesiological issue, the Eastern Church also considered the Filioque clause unacceptable on dogmatic grounds. Photian schism In the 9th century, a controversy arose between Eastern (Byzantine, Greek Orthodox) and Western (Latin, Roman Catholic) Christianity that was precipitated by the opposition of the Roman Pope John VII to the appointment by the Byzantine Emperor Michael III of Photios I to the position of patriarch of Constantinople. Photios was refused an apology by the pope for previous points of dispute between the East and West. Photios refused to accept the supremacy of the pope in Eastern matters or accept the Filioque clause. The Latin delegation at the council of his consecration pressed him to accept the clause in order to secure their support. The controversy also involved Eastern and Western ecclesiastical jurisdictional rights in the Bulgarian church. Photios did provide concession on the issue of jurisdictional rights concerning Bulgaria, and the papal legates made do with his return of Bulgaria to Rome. This concession, however, was purely nominal, as Bulgaria's return to the Byzantine rite in 870 had already secured for it an autocephalous church. Without the consent of Boris I of Bulgaria, the papacy was unable to enforce any of its claims. East–West Schism (1054) The East–West Schism, or Great Schism, separated the Church into Western (Latin) and Eastern (Greek) branches, i.e., Western Catholicism and Eastern Orthodoxy. It was the first major division since certain groups in the East rejected the decrees of the Council of Chalcedon (see Oriental Orthodoxy) and was far more significant. 
Though normally dated to 1054, the East–West Schism was actually the result of an extended period of estrangement between Latin and Greek Christendom over the nature of papal primacy and certain doctrinal matters like the Filioque, but intensified from cultural and linguistic differences. Monastic reform From the 6th century onward, most of the monasteries in the West were of the Benedictine Order. Owing to the stricter adherence to a reformed Benedictine rule, the abbey of Cluny became the acknowledged leader of western monasticism from the later 10th century. Cluny created a large, federated order in which the administrators of subsidiary houses served as deputies of the abbot of Cluny and answered to him. The Cluniac spirit was a revitalising influence on the Norman church, at its height from the second half of the 10th century through the early 12th century. The next wave of monastic reform came with the Cistercian Movement. The first Cistercian abbey was founded in 1098, at Cîteaux Abbey. The keynote of Cistercian life was a return to a literal observance of the Benedictine rule, rejecting the developments of the Benedictines. The most striking feature in the reform was the return to manual labour, and especially to field-work. Inspired by Bernard of Clairvaux, the primary builder of the Cistercians, they became the main force of technological diffusion in medieval Europe. By the end of the 12th century, the Cistercian houses numbered 500, and at its height in the 15th century the order claimed to have close to 750 houses. Most of these were built in wilderness areas, and played a major part in bringing such isolated parts of Europe into economic cultivation. A third level of monastic reform was provided by the establishment of the Mendicant orders. Commonly known as friars, mendicants live under a monastic rule with traditional vows of poverty, chastity, and obedience but they emphasise preaching, missionary activity, and education, in a secluded monastery. Beginning in the 12th century, the Franciscan order was instituted by the followers of Francis of Assisi, and thereafter the Dominican order was begun by St. Dominic. Investiture Controversy The Investiture Controversy, or Lay Investiture Controversy, was the most significant conflict between secular and religious powers in medieval Europe. It began as a dispute in the 11th century between the Holy Roman Emperor Henry IV and Pope Gregory VII concerning who would appoint bishops (investiture). The end of lay investiture threatened to undercut the power of the Empire and the ambitions of noblemen. Bishoprics being merely lifetime appointments, a king could better control their powers and revenues than those of hereditary noblemen. Even better, he could leave the post vacant and collect the revenues, theoretically in trust for the new bishop, or give a bishopric to pay a helpful noble. The Church wanted to end lay investiture to end this and other abuses, to reform the episcopate and provide better pastoral care. Pope Gregory VII issued the Dictatus Papae, which declared that the pope alone could appoint bishops. Henry IV's rejection of the decree led to his excommunication and a ducal revolt. Eventually Henry received absolution after dramatic public penance, though the Great Saxon Revolt and conflict of investiture continued. A similar controversy occurred in England between King Henry I and St. Anselm, Archbishop of Canterbury, over investiture and episcopal vacancy. 
The English dispute was resolved by the Concordat of London, 1107, where the king renounced his claim to invest bishops but continued to require an oath of fealty. This was a partial model for the Concordat of Worms (Pactum Calixtinum), which resolved the Imperial investiture controversy with a compromise that allowed secular authorities some measure of control but granted the selection of bishops to their cathedral canons. As a symbol of the compromise, both ecclesiastical and lay authorities invested bishops with respectively, the staff and the ring. Crusades Generally, the Crusades refer to the campaigns in the Holy Land sponsored by the papacy against Muslim forces. There were other crusades against Islamic forces in southern Spain, southern Italy, and Sicily. The Papacy also sponsored numerous Crusades to subjugate and convert the pagan peoples of north-eastern Europe, against its political enemies in Western Europe, and against heretical or schismatic religious minorities within Christendom. The Holy Land had been part of the Roman Empire, and thus Byzantine Empire, until the Islamic conquests of the 7th and 8th centuries. Thereafter, Christians had generally been permitted to visit the sacred places in the Holy Land until 1071, when the Seljuk Turks closed Christian pilgrimages and assailed the Byzantines, defeating them at the Battle of Manzikert. Emperor Alexius I asked for aid from Pope Urban II against Islamic aggression. He probably expected
kilohertz (10³ Hz, kHz), megahertz (10⁶ Hz, MHz), gigahertz (10⁹ Hz, GHz), terahertz (10¹² Hz, THz). Some of the unit's most common uses are in the description of sine waves and musical tones, particularly those used in radio- and audio-related applications. It is also used to describe the clock speeds at which computers and other electronics are driven. The units are sometimes also used as a representation of the energy of a photon, via the Planck relation E = hν, where E is the photon's energy, ν is its frequency, and the proportionality constant h is Planck's constant. Definition The hertz is defined as one cycle per second. The International Committee for Weights and Measures defined the second as "the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom" and then adds: "It follows that the hyperfine splitting in the ground state of the caesium 133 atom is exactly 9 192 631 770 hertz, ν(hfs Cs) = 9 192 631 770 Hz." The dimension of the unit hertz is 1/time (1/T). Expressed in base SI units, the unit is 1/second (1/s). Problems can arise because the unit of angular measure (radian) is sometimes omitted in SI. In English, "hertz" is also used as the plural form. As an SI unit, Hz can be prefixed; commonly used multiples are kHz (kilohertz, 10³ Hz), MHz (megahertz, 10⁶ Hz), GHz (gigahertz, 10⁹ Hz) and THz (terahertz, 10¹² Hz). One hertz simply means "one cycle per second" (typically that which is being counted is a complete cycle); 100 Hz means "one hundred cycles per second", and so on. The unit may be applied to any periodic event—for example, a clock might be said to tick at 1 Hz, or a human heart might be said to beat at 1.2 Hz. The occurrence rate of aperiodic or stochastic events is expressed in reciprocal second or inverse second (1/s or s⁻¹) in general or, in the specific case of radioactive decay, in becquerels. Whereas 1 Hz is one cycle per second, 1 Bq is one aperiodic radionuclide event per second. Even though angular velocity, angular frequency and the unit hertz all have the dimension 1/T, angular velocity and angular frequency are not expressed in hertz, but rather in an appropriate angular unit such as the radian per second.
A disc rotating at 60 revolutions per minute (rpm), for example, can be said to be rotating at either 2π rad/s or 1 Hz, where the former measures the angular velocity and the latter reflects the number of complete revolutions per second. The conversion between a frequency f measured in hertz and an angular velocity ω measured in radians per second is ω = 2πf and f = ω/2π. History The hertz is named after the German physicist Heinrich Hertz (1857–1894), who made important scientific contributions to the study of electromagnetism. The name was established by the International Electrotechnical Commission (IEC) in 1935. It was adopted by the General Conference on Weights and Measures (CGPM) (Conférence générale des poids et mesures) in 1960, replacing the previous name for the unit, "cycles per second" (cps), along with its related multiples, primarily "kilocycles per second" (kc/s) and "megacycles per second" (Mc/s), and occasionally "kilomegacycles per second" (kMc/s). The term "cycles per second" was largely replaced by "hertz" by the 1970s. Sometimes the adjectival form "per second" was omitted, so that "megacycles" (Mc) was used as an abbreviation of "megacycles per second" (that is, megahertz (MHz)). Applications Vibration Sound is a traveling longitudinal wave which is an oscillation of pressure. Humans perceive the frequency of sound waves as pitch. Each musical note corresponds to a particular frequency which can be measured in hertz. An infant's ear is able to perceive frequencies ranging from about 20 Hz to 20,000 Hz; the average adult human can hear sounds between about 20 Hz and 16,000 Hz. The range of ultrasound, infrasound and other physical vibrations such as molecular and atomic vibrations extends from a few femtohertz into the terahertz range and beyond. Electromagnetic radiation Electromagnetic radiation is often described by its frequency—the number of oscillations of the perpendicular electric and magnetic fields per second—expressed in hertz. Radio frequency radiation is usually measured in kilohertz (kHz), megahertz (MHz), or gigahertz (GHz). Light is electromagnetic radiation that is even higher in frequency, and has frequencies in the range of tens (infrared) to thousands (ultraviolet) of terahertz. Electromagnetic radiation with frequencies in the low terahertz range (intermediate between those of the highest normally usable radio frequencies and long-wave infrared light) is often called terahertz radiation. Even higher frequencies exist, such as that of gamma rays, which can be measured in exahertz (EHz). (For historical reasons, the frequencies of light and higher frequency electromagnetic radiation are more commonly specified in terms of their wavelengths or photon energies: for a more detailed treatment of this and the above frequency ranges, see electromagnetic spectrum.) Computers In computers, most central processing units (CPU) are labeled in terms of their clock rate expressed in megahertz (MHz) or gigahertz (GHz). This specification refers to the frequency of the CPU's master clock signal. This signal is a square wave, which is an electrical voltage that switches between low and high logic values at regular intervals. As the hertz has become the primary unit of measurement accepted by the general populace to determine the performance of a CPU, many experts have criticized this approach, which they claim is an easily manipulable benchmark. Some processors use multiple clock periods to perform a single operation, while others can perform multiple operations in a single cycle.
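The period, angular-frequency, and photon-energy relations discussed above (T = 1/f, ω = 2πf, f = ω/2π, and the Planck relation E = hν) lend themselves to a short worked example. The Python sketch below is illustrative only; the function names and the sample values (a 440 Hz tone, roughly 500 THz visible light, and a 3.2 GHz clock) are assumptions made for the example rather than figures given in the text.

```python
from math import pi

PLANCK_CONSTANT_J_S = 6.62607015e-34  # Planck's constant h in joule-seconds (exact by SI definition)

def period_seconds(f_hz: float) -> float:
    """Period T of one cycle, in seconds: T = 1/f."""
    return 1.0 / f_hz

def angular_velocity_rad_s(f_hz: float) -> float:
    """Angular velocity (angular frequency) in radians per second: omega = 2*pi*f."""
    return 2.0 * pi * f_hz

def frequency_from_angular_velocity(omega_rad_s: float) -> float:
    """Frequency in hertz from an angular velocity in rad/s: f = omega / (2*pi)."""
    return omega_rad_s / (2.0 * pi)

def photon_energy_joules(f_hz: float) -> float:
    """Planck relation E = h*nu: photon energy in joules for a frequency in hertz."""
    return PLANCK_CONSTANT_J_S * f_hz

if __name__ == "__main__":
    tone = 440.0                                  # an illustrative 440 Hz audio tone
    print(period_seconds(tone))                   # ~0.00227 s per cycle
    print(angular_velocity_rad_s(tone))           # ~2764.6 rad/s
    print(frequency_from_angular_velocity(2 * pi))# 1.0 Hz, i.e. 2*pi rad/s is one cycle per second
    print(photon_energy_joules(5.0e14))           # ~3.3e-19 J for ~500 THz (visible light)
    print(period_seconds(3.2e9))                  # ~3.1e-10 s clock period for a 3.2 GHz clock signal
```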
For personal computers, CPU clock speeds have ranged from approximately 1 MHz in the late 1970s (Atari, Commodore, Apple computers) to up to several gigahertz in IBM Power microprocessors. Various computer buses, such as the front-side bus connecting the CPU and northbridge, also operate at various frequencies in the megahertz range. Higher frequencies than the International System of Units provides prefixes for are believed to occur naturally in the frequencies of the quantum-mechanical vibrations of high-energy, or, equivalently, massive particles, although these are not directly observable and must be inferred from their interactions with other phenomena. By convention, these are typically not expressed in hertz, but
heroic couplet is a traditional form for English poetry, commonly used in epic and narrative poetry, and consisting of a rhyming pair of lines in iambic pentameter. Use of the heroic couplet was pioneered by Geoffrey Chaucer in the Legend of Good Women and the Canterbury Tales, and generally considered to have been perfected by John Dryden and Alexander Pope in the Restoration Age and early 18th century respectively. Example A frequently-cited example illustrating the use of heroic couplets is this passage from Cooper's Hill by John Denham, part of his description of the Thames: "O could I flow like thee, and make thy stream / My great example, as it is my theme! / Though deep, yet clear, though gentle, yet not dull, / Strong without rage, without o'erflowing full." History The term "heroic couplet" is sometimes reserved for couplets that are largely closed and self-contained, as opposed to the enjambed couplets of poets like John Donne. The heroic couplet is often identified with the English Baroque works of John Dryden and Alexander Pope, who used the form for their translations of the epics of Virgil and Homer, respectively. Major poems in the closed couplet, apart from the works of Dryden and Pope, are Samuel Johnson's The Vanity of Human Wishes, Oliver Goldsmith's The Deserted Village, and John Keats's Lamia. The form was immensely popular in the 18th century. The looser type of couplet, with occasional enjambment, was one of the standard verse forms in medieval narrative poetry, largely because of the influence of the Canterbury Tales. Variations English heroic couplets, especially in Dryden and
she found too unimportant to ask (alternatively, which she found too young to demand an oath from). The gods amused themselves by trying weapons on Baldr and seeing them fail to do any harm. Loki, the mischief-maker, upon finding out about Baldr's one weakness, made a spear from mistletoe, and helped Höðr shoot it at Baldr. In reaction to this, Odin and the giantess Rindr gave birth to Váli, who grew to adulthood within a day and slew Höðr. The Danish historian Saxo Grammaticus recorded an alternative version of this myth in his Gesta Danorum. In this version, the mortal hero Høtherus and the demi-god Balderus compete for the hand of Nanna. Ultimately, Høtherus slays Balderus. Name Rawlinson and Bosworth Professor of Anglo-Saxon Andy Orchard argues that the name Hǫðr means 'warrior' and is comparable with the Old English heaðu-deór ('brave, stout in war'). Like the Old Norse noun hǫð ('war, slaughter'), it stems from Proto-Germanic *haþuz ('battle'; compare with Old English heaðo-, Old High German hadu-, Old Saxon hathu-). Yet these etymological exercises do not correspond completely with the contexts and meanings of the word as it is used in Norse literature, according to the Old Norse Dictionary of the University of Copenhagen and Málið, a digital resource operated by The Árni Magnússon Institute for Icelandic Studies that facilitates searching for information on the Icelandic language and learning about language usage. Both resources refer to Íslensk orðsifjabók, the Icelandic etymological dictionary, which says that, in addition to referring to the god, Höðr is the name of a legendary king of Hadeland in Norway, and may also denote 'eagle'. Icelandic etymologists also relate Hǫðr to Hauðr, denoting 'heath' or 'meadow', as well as to Hoð, Höð, meaning 'battle'; Hodd means 'treasure house' or 'hiding place'. The Prose Edda In the Gylfaginning part of Snorri Sturluson's Prose Edda, Höðr is introduced in an ominous way. Höðr is not mentioned again until the prelude to Baldr's death is described. All things except the mistletoe (believed to be harmless) have sworn an oath not to harm Baldr, so the Æsir throw missiles at him for sport. The Gylfaginning does not say what happens to Höðr after this. In fact it specifically states that Baldr cannot be avenged, at least not immediately. It does seem, however, that Höðr ends up in Hel one way or another, for the last mention of him in Gylfaginning is in the description of the post-Ragnarök world. Snorri's source of this knowledge is clearly Völuspá as quoted below. In the Skáldskaparmál section of the Prose Edda several kennings for Höðr are related. None of those kennings, however, are actually found in surviving skaldic poetry. Neither are Snorri's kennings for Váli, which are also of interest in this context. It is clear from this that Snorri was familiar with the role of Váli as Höðr's slayer, even though he does not relate that myth in the Gylfaginning prose. Some scholars have speculated that he found it distasteful, since Höðr is essentially innocent in his version of the story. The Poetic Edda Höðr is referred to several times in the Poetic Edda, always in the context of Baldr's death. The following strophes are from Völuspá. This account seems to fit well with the information in the Prose Edda, but here the role of Baldr's avenging brother is emphasized. Baldr and
clever ruse and forces him to yield his artifacts. Hearing about Hotherus's artifacts, Gelderus, king of Saxony, equips a fleet to attack him. Gevarus warns Hotherus of this and tells him where to meet Gelderus in battle. When the battle is joined, Hotherus and his men save their missiles while defending themselves against those of the enemy with a testudo formation. With his missiles exhausted, Gelderus is forced to sue for peace. He is treated mercifully by Hotherus and becomes his ally. Hotherus then gains another ally with his eloquent oratory by helping King Helgo of Hålogaland win a bride. Meanwhile, Balderus enters the country of king Gevarus armed and sues for Nanna. Gevarus tells him to learn Nanna's own mind. Balderus addresses her with cajoling words but is refused. Nanna tells him that because of the great difference in their nature and stature, since he is a demigod, they are not suitable for marriage. As news of Balderus's efforts reaches Hotherus, he and his allies resolve to attack Balderus. A great naval battle ensues where the gods fight on the side of Balderus. Thoro in particular shatters all opposition with his mighty club. When the battle seems lost, Hotherus manages to hew Thoro's club off at the haft and the gods are forced to retreat. Gelderus perishes in the battle and Hotherus arranges a funeral pyre of vessels for him. After this battle Hotherus finally marries Nanna. Balderus is not completely defeated and shortly afterwards returns to defeat Hotherus in the field. But Balderus's victory is without fruit for he is still without Nanna. Lovesick, he is harassed by phantoms in Nanna's likeness and his health deteriorates so that he cannot walk but has himself drawn around in a cart. After a while Hotherus and Balderus have their third battle and again Hotherus is forced to retreat. Weary of life because of his misfortunes, he plans to retire and wanders into the wilderness. In a cave he comes upon the same maidens he had met at the start of his career. Now they tell him that he can defeat Balderus if he gets a taste of some extraordinary food which had been devised to increase the strength of Balderus. Encouraged by this, Hotherus returns from exile and once again meets Balderus in the field. After a day of inconclusive fighting, he goes out during the night to spy on the enemy. He finds where Balderus's magical food is prepared and plays the lyre for the maidens preparing it. While they don't want to give him the food, they bestow on him a belt and a girdle which secure victory. Heading back to his camp, Hotherus meets Balderus and plunges his sword into his side. After three days, Balderus dies from his wound. Many years later, Bous, the son of Othinus and Rinda, avenges his brother by killing Hotherus in a duel. Chronicon Lethrense and Annales Lundenses There are also two lesser-known DanishLatin chronicles, the Chronicon Lethrense and the Annales Lundenses, of which the latter is included in the former. These two sources provide a second euhemerized account of Höðr's slaying of Balder. It relates that Hother was the king of the Saxons, son of Hothbrod, the daughter of Hadding. Hother first slew Othen's (i.e., Odin's) son Balder in battle and then chased Othen and Thor. Finally, Othen's son Both killed Hother. Hother, Balder, Othen, and Thor were incorrectly considered to be gods. 
Rydberg's theories According to the Swedish mythologist and romantic poet Viktor Rydberg, the story of Baldr's death was taken from Húsdrápa, a poem composed by Ulfr Uggason around 990 AD at a feast thrown by the Icelandic Chief Óláfr Höskuldsson to celebrate the finished construction of his new home, Hjarðarholt, the walls of which were filled with symbolic representations of the Baldr myth among others. Rydberg suggested that Höðr was depicted with eyes closed and Loki guiding his aim to indicate that Loki was the true cause of Baldr's death and Höðr was only his "blind tool." Rydberg theorized that the author of the Gylfaginning then mistook the description of the symbolic artwork in the Húsdrápa as the actual tale of Baldr's death. Notes References Sources Bellows, Henry Adams (trans.) (1936). The Poetic Edda. Princeton: Princeton University Press. Available online Brodeur, Arthur Gilchrist (transl.) (1916). The Prose Edda by Snorri Sturluson. New York: The American-Scandinavian Foundation. Available online in parallel text Dronke, Ursula (ed. and trans.) (1997) The Poetic Edda: Mythological Poems. Oxford: Oxford University Press. . Eysteinn Björnsson (2001). Lexicon of Kennings : The Domain of Battle. Published online: https://web.archive.org/web/20090328200122/http://www3.hi.is/~eybjorn/ugm/kennings/kennings.html Eysteinn Björnsson (ed.). Snorra-Edda: Formáli & Gylfaginning : Textar fjögurra meginhandrita. 2005. Published online: https://web.archive.org/web/20080611212105/http://www.hi.is/~eybjorn/gg/ Eysteinn Björnsson (ed.). Völuspá. Published online: https://web.archive.org/web/20090413124631/http://www3.hi.is/~eybjorn/ugm/vsp3.html Guðni Jónsson (ed.) (1949). Eddukvæði : Sæmundar Edda. Reykjavík: Íslendingasagnaútgáfan. Available online Thorpe, Benjamin (transl.) (1866). Edda Sæmundar Hinns Froða : The Edda Of Sæmund The Learned''. (2 vols.) London: Trübner & Co. Available online at Google Books External links MyNDIR (My Norse Digital Image Repository) Illustrations of Höðr from
rise of the Saffarids in Sistān under Ya'qub-i Laith in 861, who, in 862, started launching raids on Herat before besieging and capturing it on 16 August 867, and again in 872. The Saffarids succeeded in expelling the Taherids from Khorasan in 873. The Sāmānid dynasty was established in Transoxiana by three brothers, Nuh, Yahyā, and Ahmad. Ahmad Sāmāni opened the way for the Samanid dynasty to the conquest of Khorāsān, including Herāt, which they were to rule for one century. The centralized Samanid administration served as a model for later dynasties. The Samanid power was destroyed in 999 by the Qarakhanids, who were advancing on Transoxiana from the northeast, and by the Ghaznavids, former Samanid retainers, attacking from the southeast. Sultan Maḥmud of Ghazni officially took control of Khorāsān in 998. Herat was one of the six Ghaznavid mints in the region. In 1040, Herat was captured by the Seljuk Empire. During this change of power in Herat, there was supposedly a power vacuum which was filled by Abdullah Awn, who established a city-state and made an alliance with Mahmud of Ghazni. Yet, in 1175, it was captured by the Ghurids of Ghor and then came under the Khawarazm Empire in 1214. According to the account of Mustawfi, Herat flourished especially under the Ghurid dynasty in the 12th century. Mustawfi reported that there were "359 colleges in Herat, 12,000 shops all fully occupied, 6,000 bath-houses; besides caravanserais and mills, also a darwish convent and a fire temple". There were about 444,000 houses occupied by a settled population. The men were described as "warlike and carry arms", and they were Sunni Muslims. The great mosque of Herāt was built by Ghiyasuddin Ghori in 1201. In this period Herāt became an important center for the production of metal goods, especially in bronze, often decorated with elaborate inlays in precious metals. Herat was invaded and destroyed by Genghis Khan's Mongol army in 1221. The city was destroyed a second time and remained in ruins from 1222 to about 1236. In 1244 a local prince Shams al-Din Kart was named ruler of Herāt by the Mongol governor of Khorāsān and in 1255 he was confirmed in his rule by the founder of the Il-Khan dynasty Hulagu. Shamsuddin Kart founded a new dynasty and his successors, especially Fakhruddin Kart and Ghiyasuddin Kart, built many mosques and other buildings. The members of this dynasty were great patrons of literature and the arts. By this time Herāt became known as the pearl of Khorasan. Timur took Herat in 1380 and he brought the Kartid dynasty to an end a few years later. The city reached its greatest glory under the Timurid princes, especially Sultan Husayn Bayqara who ruled Herat from 1469 until May 4, 1506. His chief minister, the poet and author in Persian and Turkish, Mir Ali-Shir Nava'i was a great builder and patron of the arts. Under the Timurids, Herat assumed the role of the main capital of an empire that extended in the West as far as central Persia. As the capital of the Timurid empire, it boasted many fine religious buildings and was famous for its sumptuous court life and musical performance and its tradition of miniature paintings. On the whole, the period was one of relative stability, prosperity, and development of economy and cultural activities. It began with the nomination of Shahrokh, the youngest son of Timur, as governor of Herat in 1397. 
The reign of Shahrokh in Herat was marked by intense royal patronage, building activities, and the promotion of manufacturing and trade, especially through the restoration and enlargement of the Herat's bāzār. The present Musallah Complex, and many buildings such as the madrasa of Gawhar Shad, Ali Shir mahāl, many gardens, and others, date from this time. The village of Gazar Gah, over two km northeast of Herat, contained a shrine that was enlarged and embellished under the Timurids. The tomb of the poet and mystic Khwājah Abdullāh Ansārī (d. 1088), was first rebuilt by Shahrokh about 1425, and other famous men were buried in the shrine area. Herat was shortly captured by Kara Koyunlu between 1458 and 1459. In 1507 Herat was occupied by the Uzbeks but after much fighting the city was taken by Shah Isma'il, the founder of the Safavid dynasty, in 1510 and the Shamlu Qizilbash assumed the governorship of the area. Under the Safavids, Herat was again relegated to the position of a provincial capital, albeit one of particular importance. At the death of Shah Isma'il the Uzbeks again took Herat and held it until Shah Tahmasp retook it in 1528. The Persian king, Abbas was born in Herat, and in Safavid texts, Herat is referred to as a'zam-i bilād-i īrān, meaning "the greatest of the cities of Iran". In the 16th century, all future Safavid rulers, from Tahmasp I to Abbas I, were governors of Herat in their youth. Modern history By the early 18th century Herat was governed by the Abdali Afghans. After Nader Shah's death in 1747, Ahmad Shah Durrani took possession of the city and became part of the Durrani Empire. In 1793, Herat became independent for several years when Afghanistan underwent a civil war between different sons of Timur Shah. The Iranians had multiple wars with Herat between 1801 and 1837 (1804, 1807, 1811, 1814, 1817, 1818, 1821, 1822, 1825, 1833). The Iranians besieged the city in 1837, but the British helped the Heratis in repelling them. In 1856, they invaded again, and briefly managed to take the city on October 25; it led directly to the Anglo-Persian War. In 1857 hostilities between the Iranians and the British ended after the Treaty of Paris was signed, and the Persian troops withdrew from Herat in September 1857. Afghanistan conquered Herat on May 26, 1863, under Dost Muhammad Khan, two weeks before his death. The famous Musalla of Gawhar Shah of Herat, a large Islamic religious complex consisting of five minarets, several mausoleums along with mosques and madrasas was dynamited during the Panjdeh incident to prevent their usage by the advancing Russian forces. Some emergency preservation work was carried out at the site in 2001 which included building protective walls around the Gawhar Shad Mausoleum and Sultan Husain Madrasa, repairing the remaining minaret of Gawhar Shad's Madrasa, and replanting the mausoleum garden. In the 1960s, engineers from the United States built Herat Airport, which was used by the Soviet forces during the Democratic Republic of Afghanistan in the 1980s. Even before the Soviet invasion at the end of 1979, there was a substantial presence of Soviet advisors in the city with their families. Between March 10 and March 20, 1979, the Afghan Army in Herāt under the control of commander Ismail Khan mutinied. Thousands of protesters took to the streets against the Khalq communist regime's oppression led by Nur Mohammad Taraki. 
The new rebels led by Khan managed to oust the communists and take control of the city for 3 days, with some protesters murdering any Soviet advisers. This shocked the government, who blamed the new administration of Iran following the Iranian Revolution for influencing the uprising. Reprisals by the government followed, and between 3,000 and 24,000 people (according to different sources) were killed, in what is called the 1979 Herat uprising, or in Persian as the Qiam-e Herat. The city itself was recaptured with tanks and airborne forces, but at the cost of thousands of civilians killed. This massacre was the first of its kind since the Third Anglo-Afghan War in 1919, and was the bloodiest event preceding the Soviet–Afghan War. Herat received damage during the Soviet–Afghan War in the 1980s, especially its western side. The province as a whole was one of the worst-hit. In April 1983, a series of Soviet bombings damaged half of the city and killed around 3,000 civilians, described as "extremely heavy, brutal and prolonged". Ismail Khan was the leading mujahideen commander in Herāt fighting against the Soviet-backed government. After the communist government's collapse in 1992, Khan joined the new government and he became governor of Herat Province. The city was relatively safe and it was recovering and rebuilding from the damage caused in the Soviet–Afghan War. However, on September 5, 1995, the city was captured by the Taliban without much resistance, forcing Khan to flee. Herat became the first Persian-speaking city to be captured by the Taliban. The Taliban's strict enforcement of laws confining women at home and closing girls' schools alienated Heratis who are traditionally more liberal and educated, like the Kabulis, than other urban populations in the country. Two days of anti-Taliban protests occurred in December 1996 which was violently dispersed and led to the imposition of a curfew. In May 1999, a rebellion in Herat was crushed by the Taliban, who blamed Iran for causing it. After the U.S. invasion of Afghanistan, on November 12, 2001, it was captured from the Taliban by forces loyal to the Northern Alliance and Ismail Khan returned to power (see Battle of Herat). The state of the city was reportedly much better than that of Kabul. In 2004, Mirwais Sadiq, Aviation Minister of Afghanistan and the son of Ismail Khan, was ambushed and killed in Herāt by a local rival group. More than 200 people were arrested under suspicion of involvement. In 2005, the International Security Assistance Force (ISAF) began establishing bases in and around the city. Its main mission was to train the Afghan National Security Forces (ANSF) and help with the rebuilding process of the country. Regional Command West, led by Italy, assisted the Afghan National Army (ANA) 207th Corps. Herat was one of the first seven areas that transitioned security responsibility from NATO to Afghanistan. In July 2011, the Afghan security forces assumed security responsibility from NATO. Due to their close relations, Iran began investing in the development of Herat's power, economy and education sectors. In the meantime, the United States built a consulate in Herat to help further strengthen its relations with Afghanistan. In addition to the usual services, the consulate works with the local officials on development projects and with security issues in the region. On 12 August 2021, the city was captured by the Taliban during the 2021 Taliban offensive. 
Geography Climate Herat has a cold semi-arid climate (Köppen climate classification BSk). Precipitation is very low, and mostly falls in winter. Although Herāt lies somewhat lower than Kandahar, the summer climate is more temperate, and the climate throughout the year is far from disagreeable, although winter temperatures are comparably lower. From May to September, the wind blows from the northwest with great force. The winter is tolerably mild; snow melts as it falls, and even on the mountains does not lie long. Three years out of four it does not freeze hard enough for the people to store ice. The eastern reaches of the Hari River, including the rapids, are frozen hard in the winter, and people travel on it as on a road. Places of interest Foreign consulates India, Iran and Pakistan operate their consulates here for trade, military and political links. Neighborhoods Shahr-e Naw (Downtown) Welayat (Office of the governor) Qol-Ordue (Army's HQ) Farqa (Army's HQ) Darwaze Khosh Chaharsu Pul-e Rangine Sufi-abad New-abad Pul-e malaan Thakhte Safar Howz-e-Karbas Baramaan Darwaze-ye Qandahar Darwaze-ye Iraq Darwaze Az Kordestan Parks Park-e Taraki Park-e Millat Khane-ye Jihad Park Monuments Herat Citadel (Qala Ikhtyaruddin or Arg) Musallah Complex Musalla Minarets of Herat Of the more than a dozen minarets that once stood in Herāt, many have been toppled by war and neglect over the past century. Recently, however, everyday traffic threatens many of the remaining unique towers by shaking the very foundations they stand on. Cars and trucks that drive on a road encircling the ancient city shake the ground every time they pass these historic structures. UNESCO personnel and Afghan authorities have been working to stabilize the Fifth Minaret. Museums Herat Museum, located inside the Herat Citadel Jihad Museum Mausoleums and tombs Gawhar Shad Mausoleum Mausoleum of Khwajah Abdullah Ansari Tomb of Jami Tomb of khaje Qaltan Mausoleum of Mirwais Sadiq Jewish cemetery – there once existed an ancient Jewish community in the city. Its remnants are a cemetery and a ruined shrine. Mosques Jumu'ah Mosque (Friday Mosque of Herat) Gazargah Sharif Khalghe Sharif Shah Zahdahe Hotels Serena Hotel (coming soon) Diamond Hotel Marcopolo Hotel Stadiums Herat Stadium Universities Herat University Demography The population of Herat numbered approximately 592,902 in 2021. The city houses a multi-ethnic society and speakers of the Persian language are in the majority. There is no current data on the precise ethnic composition of the city's population, but according to a 2003 map found in the National Geographic Magazine, Tajiks form the majority of the city, comprising around 85% of the population. The remaining population comprises Pashtuns (10%), Hazaras (2%), Uzbeks (2%) and Turkmens (1%). Persian is the native language of Herat and the local dialect – known by natives as Herātī – belongs to the Khorasani cluster within Persian. It is akin to the Persian dialects of eastern Iran. The second language that is understood by many is Pashto, which is the native language of the
The town was rebuilt and the citadel was constructed. Afghanistan became part of the Seleucid Empire. However, most sources suggest that Herat was predominantly Zoroastrian. It became part of the Parthian Empire in 167 BC. In the Sasanian period (226-652), 𐭧𐭥𐭩𐭥 Harēv is listed in an inscription on the Ka'ba-i Zartosht at Naqsh-e Rustam; and Hariy is mentioned in the Pahlavi catalogue of the provincial capitals of the empire. In around 430, the town is also listed as having a Christian community, with a Nestorian bishop. In the last two centuries of Sasanian rule, Aria (Herat) had great strategic importance in the endless wars between the Sasanians, the Chionites and the Hephthalites, who had been settled in the northern section of Afghanistan since the late 4th century. Islamization At the time of the Arab invasion in the middle of the 7th century, the Sasanian central power seemed already largely nominal in the province, in contrast with the role of the Hephthalite tribal lords, who were settled in the Herat region and in the neighboring districts, mainly in pastoral Bādghis and in Qohestān. It must be underlined, however, that Herat remained one of the three Sasanian mint centers in the east, the other two being Balkh and Marv. The Hephthalites from Herat and some unidentified Turks opposed the Arab forces in a battle of Qohestān in 651-52 AD, trying to block their advance on Nishāpur, but they were defeated. When the Arab armies appeared in Khorāsān in the 650s AD, Herāt was counted among the twelve capital towns of the Sasanian Empire. The Arab army under the general command of Ahnaf ibn Qais in its conquest of Khorāsān in 652 seems to have avoided Herāt, but it can be assumed that the city eventually submitted to the Arabs, since shortly afterward an Arab governor is mentioned there. A treaty was drawn up in which the regions of Bādghis and Bushanj were included. As did many other places in Khorāsān, Herāt rebelled and had to be re-conquered several times. Another power that was active in the area in the 650s was Tang dynasty China, which had embarked on a campaign that culminated in the Conquest of the Western Turks. By 659–661, the Tang claimed a tenuous suzerainty over Herat, the westernmost point of Chinese power in its long history. This hold, however, would be ephemeral, with local Turkish tribes rising in rebellion in 665 and driving out the Tang. In 702 AD Yazid ibn al-Muhallab defeated certain Arab rebels, followers of Ibn al-Ash'ath, and forced them out of Herat. The city was the scene of conflicts between different groups of Muslims and Arab tribes in the disorders leading to the establishment of the Abbasid Caliphate. Herat was also a center of the followers of Ustadh Sis. In 870 AD, Yaqub ibn Layth Saffari, a local ruler of the Saffarid dynasty, conquered Herat and the rest of the nearby regions in the name of Islam. “Pearl of Khorasan” The region of Herāt was under the rule of King Nuh III, the seventh of the Samanid line—at the time of Sebük Tigin and his older son, Mahmud of Ghazni. The governor of Herāt was a noble by the name of Faik, who was appointed by Nuh III. It is said that Faik was a powerful but insubordinate governor, and had been punished by Nuh III. Faik made overtures to Bogra Khan and Ughar Khan of Khorasan. Bogra Khan answered Faik's call, came to Herāt, and became its ruler. The Samanids fled, betrayed at the hands of Faik, to whom the defense of Herāt had been entrusted by Nuh III. In 994, Nuh III invited Alptegin to come to his aid.
Alptegin, along with Mahmud of Ghazni, defeated Faik and annexed Herāt, Nishapur and Tous. Herat was a great trading center strategically located on trade routes from Mediterranean to India or to China. The city was noted for its textiles during the Abbasid Caliphate, according to many references by geographers. Herāt also had many learned sons such as Ansārī. The city is described by Estakhri and Ibn Hawqal in the 10th century as a prosperous town surrounded by strong walls with plenty of water sources, extensive suburbs, an inner citadel, a congregational mosque, and four gates, each gate opening to a thriving market place. The government building was outside the city at a distance of about a mile in a place called Khorāsānābād. A church was still visible in the countryside northeast of the town on the road to Balkh, and farther away on a hilltop stood a flourishing fire temple, called Sereshk, or Arshak according to Mustawfi.
Sliesthorp and Sliaswich (cf. -thorp vs. -wich), and the town of Schleswig still exists 3 km north of Hedeby. However, Æthelweard claimed in his Latin translation of the Anglo-Saxon Chronicle that the Saxons used Slesuuic and the Danes Haithaby to refer to the same town. History Origins Hedeby is first mentioned in the Frankish chronicles of Einhard (804) who was in the service of Charlemagne, but was probably founded around 770. In 808 the Danish king Godfred (Lat. Godofredus) destroyed a competing Slav trade centre named Reric, and it is recorded in the Frankish chronicles that he moved the merchants from there to Hedeby. This may have provided the initial impetus for the town to develop. The same sources record that Godfred strengthened the Danevirke, an earthen wall that stretched across the south of the Jutland peninsula. The Danevirke joined the defensive walls of Hedeby to form an east–west barrier across the peninsula, from the marshes in the west to the Schlei inlet leading into the Baltic in the east. The town itself was surrounded on its three landward sides (north, west, and south) by earthworks. At the end of the 9th century the northern and southern parts of the town were abandoned for the central section. Later a 9-metre (29-ft) high semi-circular wall was erected to guard the western approaches to the town. On the eastern side, the town was bordered by the innermost part of the Schlei inlet and the bay of Haddebyer Noor. Timeline Rise Hedeby became a principal marketplace because of its geographical location on the major trade routes between the Frankish Empire and Scandinavia (north-south), and between the Baltic and the North Sea (east-west). Between 800 and 1000 the growing economic power of the Vikings led to its dramatic expansion as a major trading centre. Along with Birka and Schleswig, Hedeby's prominence as a major international trading hub served as a foundation of the Hanseatic League that would emerge by the 12th century. The following indicate the importance achieved by the town: The town was described by visitors from England (Wulfstan - 9th century) and the Mediterranean (Al-Tartushi - 10th century). Hedeby became the seat of a bishop (948) and belonged to the Archbishopric of Hamburg and Bremen. The town minted its own coins (from 825). Adam of Bremen (11th century) reports that ships were sent from this portus maritimus to Slavic lands, to Sweden, Samland (Semlant) and even Greece. A Swedish dynasty founded by Olof the Brash is said to have ruled Hedeby during the last decades of the 9th century and the first part of the 10th century. This was told to Adam of Bremen by the Danish king Sweyn Estridsson, and it is supported by three runestones found in Denmark. Two of them were raised by the mother of Olof's grandson Sigtrygg Gnupasson. The third runestone, discovered in 1796, is from Hedeby, the Stone of Eric (). It is inscribed with Norwegian-Swedish runes. It is, however, possible that Danes also occasionally wrote with this version of the younger futhark. Lifestyle Life was
short and crowded in Hedeby. The small houses were clustered tightly together in a grid, with the east–west streets leading down to jetties in the harbour. People rarely lived beyond 30 or 40, and archaeological research shows that their later years were often painful due to crippling diseases such as tuberculosis. Al-Tartushi, a late 10th-century traveller from al-Andalus, provides one of the most colourful and often quoted descriptions of life in Hedeby. Al-Tartushi was from Cordoba in Spain, which had a significantly more wealthy and comfortable lifestyle than Hedeby. While Hedeby may have been significant by Scandinavian standards, Al-Tartushi was unimpressed: "Slesvig (Hedeby) is a very large town at the extreme end of the world ocean... The inhabitants worship Sirius, except for a minority of Christians who have a church of their own there.... He who slaughters a sacrificial animal puts up poles at the door to his courtyard and impales the animal on them, be it a piece of cattle, a ram, billy goat or a pig so that his neighbours will be aware that he is making a sacrifice in honour of his god. The town is poor in goods and riches. People eat
With the Taliban's capture of Kabul in 1996, all the Hazara groups united with the new Northern Alliance against the common new enemy. However, it was too late and despite the fierce resistance Hazarajat fell to the Taliban by 1998. The Taliban had Hazarajat isolated from the rest of the world going as far as not allowing the United Nations to deliver food to the provinces of Bamyan, Ghor, Maidan Wardak, and Daykundi. Hazaras have also played a significant role in the creation of Pakistan. One such Hazara was Qazi Muhammad Essa of the Sheikh Ali tribe, who had been close friends with Muhammad Ali Jinnah, having had met each other for the first time while they were studying in London. He had been the first from his native province of Balochistan to obtain a Bar-at-Law degree and had helped set up the All-India Muslim League in Balochistan. Though Hazara played a role in the anti-Soviet movement, other Hazara participated in the new communist government, which actively courted Afghan minorities. Sultan Ali Kishtmand, a Hazara, served as prime minister of Afghanistan from 1981 to 1990 (with one brief interruption in 1988). The Ismaili Hazara of Baghlan Province likewise supported the communists, and their pir (religious leader) Jaffar Naderi led a pro-Communist militia in the region. During the years that followed, Hazara suffered severe oppression, and many ethnic massacres, genocides, and pogroms were carried out by the predominantly ethnic Pashtun Taliban and are documented by such groups as the Human Rights Watch. Following the September 11, 2001 attacks in the United States, American and Coalition forces invaded Afghanistan. Many Hazaras have become leaders in today's newly emerging Afghanistan. Hazara have also pursued higher education, enrolled in the army, and many have top government positions. For example, Mohammad Mohaqiq, a Hazara from the Hizb-i-Wahdat party, ran in the 2004 presidential election in Afghanistan, and Karim Khalili became the Vice President of Afghanistan. Some ministers and governors are Hazara, including Sima Samar, Habiba Sarabi, Sarwar Danish, Sayed Hussein Anwari, Abdul Haq Shafaq, Sayed Anwar Rahmati, Qurban Ali Oruzgani. The mayor of Nili in Daykundi Province is Azra Jafari, who became the first female mayor in Afghanistan. Some other notable Hazara include Sultan Ali Keshtmand, Abdul Wahed Sarābi, Ghulam Ali Wahdat, Akram Yari, Sayed Mustafa Kazemi, Muhammad Arif Shah Jahan, Ghulam Husain Naseri, Abbas Noyan, Abbas Ibrahim Zada, Ramazan Bashardost, Ahmad Shah Ramazan, Ahmad Behzad, Nasrullah Sadiqi Zada Nili, Fahim Hashimy, Maryam Monsef and more. Although Afghanistan has been historically one of the poorest countries in the world, the Hazarajat region has been kept less developed by past governments. Since ousting the Taliban in late 2001, billions of dollars have poured into Afghanistan for reconstruction and several large-scale reconstruction projects took place in Afghanistan from August 2012. For example, there have been more than 5000 kilometres of road pavement completed across Afghanistan, of which little was done in central Afghanistan (Hazarajat). On the other hand, the Band-e Amir in Bamyan Province became the first national park of Afghanistan. A road from Kabul to Bamyan was also built, along with new police stations, government institutions, hospitals, and schools in Bamyan Province, Daykundi Province, and others. The first ski resort of Afghanistan was also established in Bamyan Province. 
Discrimination A long-standing source of tension is that Kuchis (Pashtun nomads who have historically migrated from region to region depending on the season) are allowed to use Hazarajat pastures during the summer season. It is believed that allowing the Kuchis to use some of the grazing lands in Hazarajat began during the rule of Abdur Rahman Khan. Living in mountainous Hazarajat, where little farmland exists, Hazara people rely on these pasture lands for their livelihood during the long and harsh winters. In 2007 some Kuchi nomads entered parts of Hazarajat to graze their livestock, and when the local Hazara resisted, a clash took place in which several people on both sides were killed with assault rifles. Such clashes have continued to occur, even after the central government, including President Hamid Karzai, was forced to intervene. In late July 2012, a Hazara police commander in Uruzgan province reportedly rounded up and killed nine Pashtun civilians in revenge for the death of two local Hazara. The matter is being investigated by the Afghan government. The drive by President Hamid Karzai, after the Peace Jirga, to strike a deal with Taliban leaders caused deep unease in Afghanistan's minority communities, who fought the Taliban the longest and suffered the most during their rule. The leaders of the Tajik, Uzbek and Hazara communities vowed to resist any return of the Taliban to power, referring to the large-scale massacres of Hazara civilians during the Taliban period. Following the Fall of Kabul to the Taliban in 2021, which ended the war in Afghanistan, concerns were raised as to whether the Taliban would reimpose the persecution of Hazaras as in the 1990s. An academic at Melbourne's La Trobe University said that "The Hazaras are very fearful that the Taliban will likely be reinstating the policies of the 1990s", in spite of Taliban reassurances that they would not return to the practices of that period. Genetics Genetically, the Hazara are a mixture of western Eurasian and eastern Eurasian components. Genetic research suggests that the Hazaras of Afghanistan cluster closely with the Uzbek population of the country, while both groups are at a notable distance from Afghanistan's Tajik and Pashtun populations. There is evidence of both paternal and maternal relations to Turkic peoples and Mongols amongst some Hazaras. East Eurasian male and female ancestry is supported by studies in genetic genealogy as well. East Asian maternal haplogroups (mtDNA) make up about 35%, suggesting that the male descendants of Turkic and Mongolic peoples were accompanied by women of East Asian ancestry, though the Hazaras as a whole have mostly west Eurasian mtDNA. Non-East Asian maternal lineages account for about 65% of Hazara mtDNA, most of which is West Eurasian and some South Asian. The most frequent paternal haplogroups found amongst the Pakistani Hazara were haplogroup C-M217 at 40% (10/25) and haplogroup R1b at 32% (8/25). One study of paternal DNA haplogroups in Afghanistan shows that the Y-DNA haplogroups R1a and C-M217 are the most common, followed by J2-M172 and L-M20. Some Hazaras also carry the haplogroups R1a1a-M17, E1b1b1-M35, L-M20 and H-M69, which are common among Tajiks and Pashtuns as well as Indian populations. In one study, a small minority had the haplogroup B-M60, normally found in East Africa, and in one mtDNA study of Hazara, mtDNA haplogroup L (which is of African origin) was detected at a frequency of 7.5%. A recent study shows that the Uyghurs are closely related to the Hazaras.
The study also suggests a small but notable East Asian ancestry in other populations of Pakistan and India. Demographics Some sources claim that Hazaras are about 20 to 30 percent of the total population of Afghanistan. They were by far the largest ethnic group in the past; during the 1888–1893 uprisings of the Hazaras, over 60% of them were massacred, with others displaced. Geographic distribution The vast majority of Hazaras live in Hazarajat, and many others live in the cities, in neighboring countries, or abroad. Diaspora Alessandro Monsutti argues, in his recent anthropological book, that migration is the traditional way of life of the Hazara people, referring to the seasonal and historical migrations which have never ceased and do not seem to be dictated only by emergencies such as war. Due to the decades of war in Afghanistan and the sectarian violence in Pakistan, many Hazaras left their communities and have settled in Australia, New Zealand, Canada, the United States, the United Kingdom and particularly northern European countries such as Sweden and Denmark. Some travel to these countries as exchange students, while others arrive through human smuggling, which sometimes costs them their lives. Since 2001, about 1,000 people have died at sea while trying to reach Australia by boat from Indonesia. Many of these were Hazaras, including women and small children who could not swim. A notable case was the Tampa affair, in which a shipload of refugees, mostly Hazara, was rescued by the Norwegian freighter MV Tampa and subsequently sent to Nauru. New Zealand agreed to take some of the refugees, and all but one of those were granted a stay. Hazara in Pakistan During the period of British colonial rule on the Indian subcontinent in the 19th century, Hazaras worked during the winter months in coal mines, road
construction, and in other working-class jobs in some cities of what is now Pakistan. The earliest record of Hazara in the areas of Pakistan is found in Broadfoot's Sappers company from 1835 in Quetta. This company had also participated in the First Anglo-Afghan War. Some Hazara also worked in the agriculture farms in Sindh and the construction of the Sukkur barrage.
Haider Ali Karmal Jaghori was a prominent political thinker of the Hazara people in Pakistan who wrote about their political history. His work Hazaraha wa Hazarajat Bastan Dar Aiyna-i-Tarikh was published in Quetta in 1992, and another work by Aziz Tughyan Hazara, Tarikh Milli Hazara, was published in Quetta in 1984. Most Pakistani Hazaras today live in the city of Quetta, in Balochistan, Pakistan. Localities in Quetta with prominent Hazara populations include Hazara Town and Mehr Abad; Hazara tribes such as the Sardar are exclusively Pakistani. The literacy level among the Hazara community in Pakistan is relatively high compared to the Hazaras of Afghanistan, and they have integrated well into the social dynamics of the local society. Saira Batool, a Hazara woman, was one of the first female pilots in the Pakistan Air Force. Other notable Hazaras include Qazi Mohammad Esa; Muhammad Musa Khan, who served as Commander in Chief of the Pakistani Army from 1958 to 1968; Air Marshal Sharbat Ali Changezi; Hussain Ali Yousafi, the slain chairman of the Hazara Democratic Party; Syed Nasir Ali Shah, MNA from Quetta; and his father Haji Sayed Hussain Hazara, who was a senator and member of Majlis-e-Shura during the Zia-ul-Haq era. Despite all of this, Hazaras are often targeted by militant groups such as the Lashkar-e-Jhangvi and others. "Activists say at least 800-1,000 Hazaras have been killed since 1999 and the pace is quickening. More than one hundred have been murdered in and around Quetta since January, according to Human Rights Watch." The political representation of the community is served by the Hazara Democratic Party, a secular liberal democratic party, headed by Abdul Khaliq Hazara. Hazara in Iran Hazaras in Iran are also referred to as Khawaris or Barbaris. Over the years, as a result of political unrest in Afghanistan, some Hazaras have migrated to Iran. The local Hazara population has been estimated at 500,000 people, of whom at least one-third have spent more than half their life in Iran. Culture Outside of Hazarajat, the Hazara have adopted the cultures of the cities where they dwell, resembling the customs and traditions of the Afghan Tajiks and Pashtuns. Traditionally, the Hazara are highland farmers, and although sedentary, those in the Hazarajat have retained many of their own customs and traditions, some of which are more closely related to those of Central Asia than to those of the Afghan Tajiks. The Hazara live in houses rather than tents; Aimaq Hazaras and Aimaqs live in tents rather than houses. Music Many Hazara musicians are widely hailed as being skilled in playing the dambura, a native, regional lute instrument similarly found in other Central Asian nations such as Kazakhstan, Uzbekistan and Tajikistan. Popular Hazara dambura players include Sarwar Sarkhosh, Dawood Sarkhosh, Safdar Tawakoli and Sayed Anwar Azad, among others. Cuisine Hazara cuisine is strongly influenced by Central Asian, South Asian and Persian cuisines. However, there are special foods, cooking methods and cooking styles that are specific to them. They have a hospitable dining etiquette, and in their culture it is customary to prepare special food for guests. Language Hazara people living in Hazarajat (Hazaristan) speak the Hazaragi dialect of Persian, which is infused with many Turkic and a few Mongolic loanwords. The primary difference between standard Persian and Hazaragi is the accent.
Despite these differences, Hazaragi is mutually intelligible with Dari, one of the two official languages of Afghanistan. Religion Hazaras predominantly practice Islam, mostly the Shi'a of the Twelver sect, with significant Sunni, some Isma'ili, and non-denominational Muslim minorities.
place entirely on the honour system. As the system does not depend on the legal enforceability of claims, it can operate even in the absence of a legal and juridical environment. Trust and extensive use of connections are the components that distinguish it from other remittance systems. Hawaladar networks are often based on membership in the same family, village, clan or ethnic group, and cheating is punished by effective excommunication and "loss of honour" that leads to severe economic hardship. Informal records are produced of individual transactions, and a running tally of the amount owed by one broker to another is kept. Settlements of debts between hawala brokers can take a variety of forms (such as goods, services, properties, transfers of employees, etc.), and need not take the form of direct cash transactions. In addition to commissions, hawala brokers often earn their profits through bypassing official exchange rates. Generally, the funds enter the system in the source country's currency and leave the system in the recipient country's currency. As settlements often take place without any foreign exchange transactions, they can be made at other than official exchange rates. Hawala is attractive to customers because it provides a fast and convenient transfer of funds, usually with a far lower commission than that charged by banks. Its advantages are most pronounced when the receiving country applies unprofitable exchange rate regulations or when the banking system in the receiving country is less complex (e.g., due to differences in legal environment in places such as Afghanistan, Yemen, Somalia). Moreover, in some parts of the world it is the only option for legitimate fund transfers, and has even been used by aid organizations in areas where it is the best-functioning institution. Regional variants Dubai has been prominent for decades as a welcoming hub for hawala transactions worldwide. South Asia Hundis The hundi is a financial instrument that developed on the Indian sub-continent for use in trade and credit transactions. Hundis are used as a form of remittance instrument to transfer money from place to place, as a form of credit instrument or IOU to borrow money and as a bill of exchange in trade transactions. The Reserve Bank of India describes the Hundi as "an unconditional order in writing made by a person directing another to pay a certain sum of money to a person named in the order." Horn of Africa According to the CIA, with the dissolution of Somalia's formal banking system, many informal money transfer operators arose to fill the void. It estimates that such hawaladars, xawilaad or xawala brokers are now responsible for the transfer of up to $1.6 billion per year in remittances to the country, most coming from working Somalis outside Somalia. Such funds have in turn had a stimulating effect on local business activity. West Africa The 2012 Tuareg rebellion left Northern Mali without an official money transfer service for months. The coping mechanisms that appeared were patterned on the hawala system. See also Economy related Global ranking of remittance by nations Remittances to India Hundi Informal value transfer system FATF Related contemporary issues Jizya Zakat Riba FATF blacklist Terrorism financing References Further reading , exploring
hand, Islamic law and the later common law "had no difficulty in accepting agency as one of its institutions in the field of contracts and of obligations in general". How hawala works In the most basic variant of the hawala system, money is transferred via a network of hawala brokers, or hawaladars. It is the transfer of money without actually moving it; indeed, a definition often used for the hawala system is "money transfer without money movement". According to author Sam Vaknin, while there are large hawaladar operators with networks of middlemen in cities across many countries, most hawaladars are small businesses who work at hawala as a sideline or moonlighting operation. A typical transaction works as follows: (1) a customer (A) approaches a hawala broker (X) in one city and gives a sum of money that is to be transferred to a recipient (B) in another, usually foreign, city. Along with the money, he usually specifies something like a password that will lead to the money being paid out. (2b) The hawala broker X calls another hawala broker (M) in the recipient's city and informs M about the agreed password, or gives other disposition of the funds. Then the intended recipient (B), who has also been informed by A about the password (2a), approaches M and tells him the agreed password (3a). If the password is correct, then M releases the transferred sum to B (3b), usually minus a small commission. X now owes M the money that M had paid out to B; thus M has to trust X's promise to settle the debt at a later date. The unique feature of the system is that no promissory instruments are exchanged between the hawala brokers; the transaction takes place entirely on the honour system.
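To make the mechanics above concrete, the following is a minimal, hypothetical sketch of that flow: broker X accepts cash and a password from the sender, broker M pays the recipient who presents the password, and X's resulting debt to M is recorded on a running tally to be settled later. The broker names, the password, the commission rate and the amounts are illustrative assumptions, not details from the source.

```python
# A minimal, hypothetical model of the hawala flow described above.
# Broker names, the password, the commission, and the amounts are all
# illustrative assumptions, not details taken from the source.
from collections import defaultdict

class Broker:
    def __init__(self, name):
        self.name = name
        self.pending = {}                  # password -> amount awaiting pickup
        self.owed_to = defaultdict(float)  # running tally of debts to other brokers

    def accept_transfer(self, amount, password, payout_broker, commission=0.02):
        """Sender's broker takes the cash and notifies the payout broker (steps 1-2b)."""
        net = amount * (1 - commission)
        payout_broker.pending[password] = net
        self.owed_to[payout_broker.name] += net  # no cash moves yet; only a debt is recorded

    def pay_out(self, password):
        """Recipient presents the password and collects the money (steps 3a-3b)."""
        return self.pending.pop(password, None)  # None if the password is unknown

x = Broker("X")   # broker in the sender's city
m = Broker("M")   # broker in the recipient's city

x.accept_transfer(1000.0, password="saffron-77", payout_broker=m)
print(m.pay_out("saffron-77"))   # recipient collects 980.0
print(x.owed_to["M"])            # X owes M 980.0, to be settled later
```

As the text notes, the settlement of that tally, and any additional profit a broker takes from the exchange-rate spread, happens outside this sketch, in cash, goods, services or other arrangements.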
solutions of white vinegar, chlorine bleach, or hydrogen peroxide, and rinsing completely. Another view is that clay pebbles are best not re-used even when they are cleaned, due to root growth that may enter the medium; breaking open a clay pebble after a crop can reveal this growth. Growstones Growstones, made from glass waste, have both more air and water retention space than perlite and peat. This aggregate holds more water than parboiled rice hulls. Growstones by volume consist of 0.5 to 5% calcium carbonate – for a standard 5.1 kg bag of Growstones that corresponds to 25.8 to 258 grams of calcium carbonate. The remainder is soda-lime glass. Coconut Coir Independent of hydroponic demand, coconut coir is a natural byproduct of coconut processing. The outer husk of a coconut consists of fibers which are commonly used to make a myriad of items ranging from floor mats to brushes. After the long fibers are used for those applications, the dust and short fibers are combined to create coir. Coconuts absorb high levels of nutrients throughout their life cycle, so the coir must undergo a maturation process before it becomes a viable growth medium. This process removes salt, tannins and phenolic compounds through substantial water washing. Contaminated water is a byproduct of this process, as three hundred to six hundred liters of water are needed per cubic meter of coir. Additionally, this maturation can take up to six months, and one study concluded that the working conditions during the maturation process are dangerous and would be illegal in North America and Europe. Despite the attention it requires and the health and environmental concerns of its processing, coconut coir has impressive material properties. When exposed to water, the brown, dry, chunky and fibrous material expands to nearly three to four times its original size. This characteristic, combined with coconut coir's water retention capacity and resistance to pests and diseases, makes it an effective growth medium. Used as an alternative to rock wool, coconut coir, also known as coir peat, offers optimized growing conditions. Rice husks Parboiled rice husks (PBH) are an agricultural byproduct that would otherwise have little use. They decay over time, allow drainage, and retain even less water than growstones. One study showed that rice husks did not interfere with the effects of plant growth regulators. Perlite Perlite is a volcanic rock that has been superheated into very lightweight expanded glass pebbles. It is used loose or in plastic sleeves immersed in the water. It is also used in potting soil mixes to decrease soil density. Perlite has similar properties and uses to vermiculite but, in general, holds more air and less water and is buoyant. Vermiculite Like perlite, vermiculite is a mineral that has been superheated until it has expanded into light pebbles. Vermiculite holds more water than perlite and has a natural "wicking" property that can draw water and nutrients in a passive hydroponic system. If too much water and not enough air surrounds the plant's roots, it is possible to gradually lower the medium's water-retention capability by mixing in increasing quantities of perlite. Pumice Like perlite, pumice is a lightweight, mined volcanic rock that finds application in hydroponics. Sand Sand is cheap and easily available. However, it is heavy, does not hold water very well, and must be sterilized between uses. Because sand is in such high demand, global sand shortages are an emerging concern as supplies are depleted.
Gravel The same type that is used in aquariums, though any small gravel can be used, provided it is washed first. Indeed, plants growing in a typical traditional gravel filter bed, with water circulated using electric powerhead pumps, are in effect being grown using gravel hydroponics, also termed "nutriculture". Gravel is inexpensive, easy to keep clean, drains well and will not become waterlogged. However, it is also heavy, and, if the system does not provide continuous water, the plant roots may dry out. Wood fiber Wood fibre, produced from steam friction of wood, is a very efficient organic substrate for hydroponics. It has the advantage that it keeps its structure for a very long time. Wood wool (i.e. wood slivers) have been used since the earliest days of the hydroponics research. However, more recent research suggests that wood fibre may have detrimental effects on "plant growth regulators". Sheep wool Wool from shearing sheep is a little-used yet promising renewable growing medium. In a study comparing wool with peat slabs, coconut fibre slabs, perlite and rockwool slabs to grow cucumber plants, sheep wool had a greater air capacity of 70%, which decreased with use to a comparable 43%, and water capacity that increased from 23% to 44% with use. Using sheep wool resulted in the greatest yield out of the tested substrates, while application of a biostimulator consisting of humic acid, lactic acid and Bacillus subtilis improved yields in all substrates. Brick shards Brick shards have similar properties to gravel. They have the added disadvantages of possibly altering the pH and requiring extra cleaning before reuse. Polystyrene packing peanuts Polystyrene packing peanuts are inexpensive, readily available, and have excellent drainage. However, they can be too lightweight for some uses. They are used mainly in closed-tube systems. Note that non-biodegradable polystyrene peanuts must be used; biodegradable packing peanuts will decompose into a sludge. Plants may absorb styrene and pass it to their consumers; this is a possible health risk. Nutrient solutions Inorganic hydroponic solutions The formulation of hydroponic solutions is an application of plant nutrition, with nutrient deficiency symptoms mirroring those found in traditional soil based agriculture. However, the underlying chemistry of hydroponic solutions can differ from soil chemistry in many significant ways. Important differences include: Unlike soil, hydroponic nutrient solutions do not have cation-exchange capacity (CEC) from clay particles or organic matter. The absence of CEC and soil pores means the pH, oxygen saturation, and nutrient concentrations can change much more rapidly in hydroponic setups than is possible in soil. Selective absorption of nutrients by plants often imbalances the amount of counterions in solution. This imbalance can rapidly affect solution pH and the ability of plants to absorb nutrients of similar ionic charge (see article membrane potential). For instance, nitrate anions are often consumed rapidly by plants to form proteins, leaving an excess of cations in solution. This cation imbalance can lead to deficiency symptoms in other cation based nutrients (e.g. Mg2+) even when an ideal quantity of those nutrients are dissolved in the solution. Depending on the pH or on the presence of water contaminants, nutrients such as iron can precipitate from the solution and become unavailable to plants. Routine adjustments to pH, buffering the solution, or the use of chelating agents is often necessary. 
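Because hydroponic solutions lack the buffering of soil, the counterion imbalance described above can be made visible with a simple charge-balance calculation. The sketch below is only illustrative: the ion list, the rounded molar masses and the example concentrations are assumptions chosen for the demonstration, not a recommended recipe.

```python
# Illustrative charge balance of a simplified nutrient solution (hypothetical values).
# Each ion contributes (mg/L / molar mass) * charge = milliequivalents per litre.

IONS = {            # ion: (approximate molar mass in g/mol, charge)
    "NO3-":   (62.0, -1),
    "H2PO4-": (97.0, -1),
    "SO4-2":  (96.1, -2),
    "K+":     (39.1, +1),
    "Ca+2":   (40.1, +2),
    "Mg+2":   (24.3, +2),
}

def net_charge_meq(conc_mg_per_l):
    """Net charge of the solution in meq/L; a value near zero means it is balanced."""
    total = 0.0
    for ion, mg_l in conc_mg_per_l.items():
        molar_mass, charge = IONS[ion]
        total += (mg_l / molar_mass) * charge
    return total

fresh = {"NO3-": 700, "H2PO4-": 155, "SO4-2": 192, "K+": 200, "Ca+2": 160, "Mg+2": 48}
print(round(net_charge_meq(fresh), 2))     # roughly 0.2: freshly mixed, nearly balanced

depleted = dict(fresh, **{"NO3-": 100})    # plants have consumed most of the nitrate
print(round(net_charge_meq(depleted), 2))  # roughly 9.8: a large excess of cations remains
```

In a real reservoir electroneutrality is preserved because the plant exchanges H+ or OH-/HCO3- for the ions it absorbs, which is why heavy nitrate uptake tends to push the solution pH upward and why the routine pH adjustments mentioned above are needed.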
Unlike soil types, which can vary greatly in their composition, hydroponic solutions are often standardized and require routine maintenance for plant cultivation. Hydroponic solutions are periodically pH adjusted to near neutral (pH ≈ 6.0) and are aerated with oxygen. Also, water levels must be refilled to account for transpiration losses and nutrient solutions require re-fortification to correct the nutrient imbalances that occur as plants grow and deplete nutrient reserves. Sometimes the regular measurement of nitrate ions is used as a parameter to estimate the remaining proportions and concentrations of other nutrient ions in a solution. As in conventional agriculture, nutrients should be adjusted to satisfy Liebig's law of the minimum for each specific plant variety. Nevertheless, generally acceptable concentrations for nutrient solutions exist, with minimum and maximum concentration ranges for most plants being somewhat similar. Most nutrient solutions are mixed to have concentrations between 1,000 and 2,500 ppm. Acceptable concentrations for the individual nutrient ions, which comprise that total ppm figure, are summarized in the following table. For essential nutrients, concentrations below these ranges often lead to nutrient deficiencies while exceeding these ranges can lead to nutrient toxicity. Optimum nutrition concentrations for plant varieties are found empirically by experience or by plant tissue tests. Organic hydroponic solutions Organic fertilizers can be used to supplement or entirely replace the inorganic compounds used in conventional hydroponic solutions. However, using organic fertilizers introduces a number of challenges that are not easily resolved. Examples include: organic fertilizers are highly variable in their nutritional compositions in terms of minerals and different chemical species. Even similar materials can differ significantly based on their source (e.g. the quality of manure varies based on an animal's diet). organic fertilizers are often sourced from animal byproducts, making disease transmission a serious concern for plants grown for human consumption or animal forage. organic fertilizers are often particulate and can clog substrates or other growing equipment. Sieving or milling the organic materials to fine dusts is often necessary. some organic materials (i.e. particularly manures and offal) can further degrade to emit foul odors under anaerobic conditions. many organic molecules (i.e. sugars) demand additional oxygen during aerobic degradation, which is essential for cellular respiration in the plant roots. organic compounds are not necessary for normal plant nutrition. Nevertheless, if precautions are taken, organic fertilizers can be used successfully in hydroponics. Organically sourced macronutrients Examples of suitable materials, with their average nutritional contents tabulated in terms of percent dried mass, are listed in the following table. Organically sourced micronutrients Micronutrients can be sourced from organic fertilizers as well. For example, composted pine bark is high in manganese and is sometimes used to fulfill that mineral requirement in hydroponic solutions. To satisfy requirements for National Organic Programs, pulverized, unrefined minerals (e.g. Gypsum, Calcite, and glauconite) can also be added to satisfy a plant's nutritional needs. Additives Compounds can be added in both organic and conventional hydroponic systems to improve nutrition acquisition and uptake by the plant. 
Chelating agents and humic acid have been shown to increase nutrient uptake. Additionally, plant growth promoting rhizobacteria (PGPR), which are regularly utilized in field and greenhouse agriculture, have been shown to benefit hydroponic plant growth development and nutrient acquisition. Some PGPR are known to increase nitrogen fixation. While nitrogen is generally abundant in hydroponic systems with properly maintained fertilizer regimens, Azospirillum and Azotobacter genera can help maintain mobilized forms of nitrogen in systems with higher microbial growth in the rhizosphere. Traditional fertilizer methods often lead to high accumulated concentrations of nitrate within plant tissue at harvest. Rhodopseudo-monas palustris has been shown to increase nitrogen use efficiency, increase yield, and decrease nitrate concentration by 88% at harvest compared to traditional hydroponic fertilizer methods in leafy greens. Many Bacillus spp., Pseudomonas spp. and Streptomyces spp. convert forms of phosphorus in the soil that are unavailable to the plant into soluble anions by decreasing soil pH, releasing phosphorus bound in chelated form that is available in a wider pH range, and mineralizing organic phosphorus. Some studies have found that Bacillus inoculants allow hydroponic leaf lettuce to overcome high salt stress that would otherwise reduce growth. This can be especially beneficial in regions with high electrical conductivity or salt content in their water source. This could potentially avoid costly reverse osmosis filtration systems while maintaining high crop yield. Tools Common equipment Managing nutrient concentrations, oxygen saturation, and pH values within acceptable ranges is essential for successful hydroponic horticulture. Common tools used to manage hydroponic solutions include: Electrical conductivity meters, a tool which estimates nutrient ppm by measuring how well a solution transmits an electric current. pH meter, a tool that uses an electric current to determine the concentration of hydrogen ions in solution. Oxygen electrode, an electrochemical sensor for determining the oxygen concentration in solution. Litmus paper, disposable pH indicator strips that determine hydrogen ion concentrations by color changing chemical reaction. Graduated cylinders or measuring spoons to measure out premixed, commercial hydroponic solutions. Equipment Chemical equipment can also be used to perform accurate chemical analyses of nutrient solutions. Examples include: Balances for accurately measuring materials. Laboratory glassware, such as burettes and pipettes, for performing titrations. Colorimeters for solution tests which apply the Beer–Lambert law. Spectrophotometer to measure the concentrations of the lead parameter nitrate and other nutrients, such as phosphate, sulfate or iron. Using chemical equipment for hydroponic solutions can be beneficial to growers of any background because nutrient solutions are often reusable. Because nutrient solutions are virtually never completely depleted, and should never be due to the unacceptably low osmotic pressure that would result, re-fortification of old solutions with new nutrients can save growers money and can control point source pollution, a common source for the eutrophication of nearby lakes and streams. Software Although pre-mixed concentrated nutrient solutions
Gericke created a sensation by growing tomato vines high in his back yard in mineral nutrient solutions rather than soil. He then introduced the term hydroponics, water culture, in 1937, proposed to him by W. A. Setchell, a phycologist with an extensive education in the classics. Hydroponics is derived from neologism υδρωπονικά (derived from Greek ύδωρ=water and πονέω=cultivate), constructed in analogy to γεωπονικά (derived from Greek γαία=earth and πονέω=cultivate), geoponica, that which concerns agriculture, replacing, γεω-, earth, with ὑδρο-, water. Gericke, however, underestimated that the time was not yet ripe for the general technical application and commercial use of hydroponics for producing crops because the system he employed was at that time too sensitive and required too much monitoring to be used in commercial applications. Reports of Gericke's work and his claims that hydroponics would revolutionize plant agriculture prompted a huge number of requests for further information. Gericke had been denied use of the university's greenhouses for his experiments due to the administration's skepticism, and when the university tried to compel him to release his preliminary nutrient recipes developed at home, he requested greenhouse space and time to improve them using appropriate research facilities. While he was eventually provided greenhouse space, the university assigned Hoagland and Arnon to re-evaluate Gericke's claims and show his formula held no benefit over soil grown plant yields, a view held by Hoagland. In 1940, Gericke, whose work is considered to be the basis for all forms of hydroponic growing, published the book, Complete Guide to Soilless Gardening, after leaving his academic position in 1937 in a climate that was politically unfavorable. Therein, for the first time, he published his basic formula involving the macro- and micronutrient salts for hydroponically-grown plants. As a result of research of Gericke's claims by order of the Director of the California Agricultural Experiment Station of the University of California, Claude B. Hutchison, Dennis Robert Hoagland and Daniel Israel Arnon wrote a classic 1938 agricultural bulletin, The Water Culture Method for Growing Plants Without Soil, which made the claim that hydroponic crop yields were no better than crop yields obtained with good-quality soils. Ultimately, crop yields would be limited by factors other than mineral nutrients, especially light. However, this study did not adequately appreciate that hydroponics has other key benefits including the fact that the roots of the plant have constant access to oxygen and that the plants have access to as much or as little water as they need. This is important as one of the most common errors when cultivating plants is overwatering and underwatering; and hydroponics prevents this from occurring as large amounts of water, which may drown root systems in soil, can be made available to the plant in hydroponics, and any water not used, is drained away, recirculated, or actively aerated, thus, eliminating anoxic conditions in the root area. In soil, a grower needs to be very experienced to know exactly with how much water to feed the plant. Too much and the plant will be unable to access oxygen because the air in the soil pores is displaced; too little and the plant will lose the ability to absorb nutrients, which are typically moved into the roots while dissolved, leading to nutrient deficiency symptoms such as chlorosis. 
Hoagland's views and helpful support by the University prompted these two researchers to develop several new formulas for mineral nutrient solutions, universally known as Hoagland solution. One of the earliest successes of hydroponics occurred on Wake Island, a rocky atoll in the Pacific Ocean used as a refueling stop for Pan American Airlines. Hydroponics was used there in the 1930s to grow vegetables for the passengers. Hydroponics was a necessity on Wake Island because there was no soil, and it was prohibitively expensive to airlift in fresh vegetables. From 1943 to 1946, Daniel I. Arnon served as a major in the United States Army and used his prior expertise with plant nutrition to feed troops stationed on barren Ponape Island in the western Pacific by growing crops in gravel and nutrient-rich water because there was no arable land available. In the 1960s, Allen Cooper of England developed the nutrient film technique. The Land Pavilion at Walt Disney World's EPCOT Center opened in 1982 and prominently features a variety of hydroponic techniques. In recent decades, NASA has done extensive hydroponic research for its Controlled Ecological Life Support System (CELSS). Hydroponics research mimicking a Martian environment uses LED lighting to grow in a different color spectrum with much less heat. Ray Wheeler, a plant physiologist at Kennedy Space Center's Space Life Science Lab, believes that hydroponics will create advances within space travel, as a bioregenerative life support system. In 2007, Eurofresh Farms in Willcox, Arizona, sold more than 200 million pounds of hydroponically grown tomatoes. Eurofresh has under glass and represents about a third of the commercial hydroponic greenhouse area in the U.S. Eurofresh tomatoes were pesticide-free, grown in rockwool with top irrigation. Eurofresh declared bankruptcy, and the greenhouses were acquired by NatureSweet Ltd. in 2013. As of 2017, Canada had hundreds of acres of large-scale commercial hydroponic greenhouses, producing tomatoes, peppers and cucumbers. Due to technological advancements within the industry and numerous economic factors, the global hydroponics market is forecast to grow from US$226.45 million in 2016 to US$724.87 million by 2023. Techniques There are two main variations for each medium: sub-irrigation and top irrigation. For all techniques, most hydroponic reservoirs are now built of plastic, but other materials have been used, including concrete, glass, metal, vegetable solids, and wood. The containers should exclude light to prevent algae and fungal growth in the nutrient solution. Static solution culture In static solution culture, plants are grown in containers of nutrient solution, such as glass Mason jars (typically, in-home applications), pots, buckets, tubs, or tanks. The solution is usually gently aerated but may be un-aerated. If un-aerated, the solution level is kept low enough that enough roots are above the solution so they get adequate oxygen. A hole is cut (or drilled) in the top of the reservoir for each plant; if it a jar or tub, it may be its lid, but otherwise, cardboard, foil, paper, wood or metal may be put on top. A single reservoir can be dedicated to a single plant, or to various plants. Reservoir size can be increased as plant size increases. A home-made system can be constructed from food containers or glass canning jars with aeration provided by an aquarium pump, aquarium airline tubing and aquarium valves. 
Clear containers are covered with aluminium foil, butcher paper, black plastic, or other material to exclude light, thus helping to eliminate the formation of algae. The nutrient solution is changed either on a schedule, such as once per week, or when the concentration drops below a certain level as determined with an electrical conductivity meter. Whenever the solution is depleted below a certain level, either water or fresh nutrient solution is added. A Mariotte's bottle, or a float valve, can be used to automatically maintain the solution level. In raft solution culture, plants are placed in a sheet of buoyant plastic that is floated on the surface of the nutrient solution. That way, the solution level never drops below the roots. Continuous-flow solution culture In continuous-flow solution culture, the nutrient solution constantly flows past the roots. It is much easier to automate than the static solution culture because sampling and adjustments to the temperature, pH, and nutrient concentrations can be made in a large storage tank that has potential to serve thousands of plants. A popular variation is the nutrient film technique or NFT, whereby a very shallow stream of water containing all the dissolved nutrients required for plant growth is recirculated in a thin layer past a bare root mat of plants in a watertight channel, with an upper surface exposed to air. As a consequence, an abundant supply of oxygen is provided to the roots of the plants. A properly designed NFT system is based on using the right channel slope, the right flow rate, and the right channel length. The main advantage of the NFT system over other forms of hydroponics is that the plant roots are exposed to adequate supplies of water, oxygen, and nutrients. In all other forms of production, there is a conflict between the supply of these requirements, since excessive or deficient amounts of one results in an imbalance of one or both of the others. NFT, because of its design, provides a system where all three requirements for healthy plant growth can be met at the same time, provided that the simple concept of NFT is always remembered and practised. The result of these advantages is that higher yields of high-quality produce are obtained over an extended period of cropping. A downside of NFT is that it has very little buffering against interruptions in the flow (e.g., power outages). But, overall, it is probably one of the more productive techniques. The same design characteristics apply to all conventional NFT systems. While slopes along channels of 1:100 have been recommended, in practice it is difficult to build a base for channels that is sufficiently true to enable nutrient films to flow without ponding in locally depressed areas. As a consequence, it is recommended that slopes of 1:30 to 1:40 are used. This allows for minor irregularities in the surface, but, even with these slopes, ponding and water logging may occur. The slope may be provided by the floor, benches or racks may hold the channels and provide the required slope. Both methods are used and depend on local requirements, often determined by the site and crop requirements. As a general guide, flow rates for each gully should be one liter per minute. At planting, rates may be half this and the upper limit of 2 L/min appears about the maximum. Flow rates beyond these extremes are often associated with nutritional problems. Depressed growth rates of many crops have been observed when channels exceed 12 meters in length. 
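The NFT rules of thumb above (a slope of roughly 1:30 to 1:40, about one litre per minute per gully with roughly 2 L/min as an upper limit, and depressed growth observed in channels beyond about 12 m) lend themselves to a simple design check. The function below merely encodes those guidelines from the text; it is an illustrative sketch, not engineering advice.

```python
# Encodes the NFT rules of thumb from the text as a simple design check.
# Thresholds mirror the guidelines above; this is an illustrative sketch only.

def check_nft_gully(slope_denominator: float, flow_l_per_min: float, length_m: float):
    """slope_denominator is the N in a 1:N slope, e.g. 35 for a 1:35 slope."""
    warnings = []
    if not 30 <= slope_denominator <= 40:
        warnings.append("slope outside the recommended 1:30 to 1:40 range")
    if not 0.5 <= flow_l_per_min <= 2.0:
        warnings.append("flow outside the ~1 L/min guideline (2 L/min is about the maximum)")
    if length_m > 12:
        warnings.append("channel longer than ~12 m; depressed growth has been observed")
    return warnings or ["within the usual guidelines"]

print(check_nft_gully(slope_denominator=35, flow_l_per_min=1.0, length_m=10))
print(check_nft_gully(slope_denominator=100, flow_l_per_min=2.5, length_m=20))
```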
On rapidly growing crops, tests have indicated that, while oxygen levels remain adequate, nitrogen may be depleted over the length of the gully. As a consequence, channel length should not exceed 10–15 meters. In situations where this is not possible, the reductions in growth can be eliminated by placing another nutrient feed halfway along the gully and halving the flow rates through each outlet. Aeroponics Aeroponics is a system wherein roots are continuously or discontinuously kept in an environment saturated with fine drops (a mist or aerosol) of nutrient solution. The method requires no substrate and entails growing plants with their roots suspended in a deep air or growth chamber with the roots periodically wetted with a fine mist of atomized nutrients. Excellent aeration is the main advantage of aeroponics. Aeroponic techniques have proven to be commercially successful for propagation, seed germination, seed potato production, tomato production, leaf crops, and micro-greens. Since inventor Richard Stoner commercialized aeroponic technology in 1983, aeroponics has been implemented as an alternative to water-intensive hydroponic systems worldwide. A limitation of hydroponics is that water can hold only a limited amount of dissolved air, no matter whether aerators are utilized or not. Another distinct advantage of aeroponics over hydroponics is that any species of plant can be grown in a true aeroponic system because the microenvironment of an aeroponic system can be finely controlled. A further limitation of hydroponics is that certain species of plants can only survive for so long in water before they become waterlogged. The advantage of aeroponics is that suspended aeroponic plants receive 100% of the available oxygen and carbon dioxide to the root zone, stems, and leaves, thus accelerating biomass growth and reducing rooting times. NASA research has shown that aeroponically grown plants have an 80% increase in dry weight biomass (essential minerals) compared to hydroponically grown plants. Aeroponics used 65% less water than hydroponics. NASA also concluded that aeroponically grown plants require ¼ the nutrient input compared to hydroponics. Unlike hydroponically grown plants, aeroponically grown plants will not suffer transplant shock when transplanted to soil, and the method offers growers the ability to reduce the spread of disease and pathogens. Aeroponics is also widely used in laboratory studies of plant physiology and plant pathology. Aeroponic techniques have been given special attention by NASA since a mist is easier to handle than a liquid in a zero-gravity environment. Fogponics Fogponics is a derivation of aeroponics wherein the nutrient solution is aerosolized by a diaphragm vibrating at ultrasonic frequencies. Solution droplets produced by this method tend to be 5–10 µm in diameter, smaller than those produced by forcing a nutrient solution through pressurized nozzles, as in aeroponics. The smaller size of the droplets allows them to diffuse through the air more easily, and deliver nutrients to the roots without limiting their access to oxygen. Passive sub-irrigation Passive sub-irrigation,
or scholar in the Renaissance
Humanist, typeface classes under the Vox-ATypI classification, which may refer to: Humanist sans-serif typefaces; Humanist or old-style serif typefaces
Humanist (electronic seminar), an email discussion list on humanities computing, described as “an international online seminar on humanities computing and the digital humanities”
The Humanist (journal), a magazine published by the American Humanist Association
Humanist (journal), a magazine published by the Norwegian Humanist Association
A scholar or academic in the Humanities
Humanism (philosophy of education)
Humanistic (album), the 2001 debut album by Abandoned Pools
Humanist minuscule, a style of handwriting invented in 15th century Italy
Humanist Movement, international
compose a work in which the music carried the entire drama. The story of Dido and Aeneas derives from the original source in Virgil's epic the Aeneid. Soon after Purcell's marriage, in 1682, on the death of Edward Lowe, he was appointed organist of the Chapel Royal, an office which he was able to hold simultaneously with his position at Westminster Abbey. His eldest son was born in this same year, but he was short-lived. His first printed composition, Twelve Sonatas, was published in 1683. For some years after this, he was busy in the production of sacred music, odes addressed to the king and royal family, and other similar works. In 1685, he wrote two of his finest anthems, I was glad and My heart is inditing, for the coronation of King James II. In 1690 he composed a setting of the birthday ode for Queen Mary, Arise, my muse and four years later wrote one of his most elaborate, important and magnificent works – a setting for another birthday ode for the Queen, written by Nahum Tate, entitled Come Ye Sons of Art. In 1687, he resumed his connection with the theatre by furnishing the music for John Dryden's tragedy Tyrannick Love. In this year, Purcell also composed a march and passepied called Quick-step, which became so popular that Lord Wharton adapted the latter to the fatal verses of Lillibullero; and in or before January 1688, Purcell composed his anthem Blessed are they that fear the Lord by the express command of the King. A few months later, he wrote the music for D'Urfey's play, The Fool's Preferment. In 1690, he composed the music for Betterton's adaptation of Fletcher and Massinger's Prophetess (afterwards called Dioclesian) and Dryden's Amphitryon. In 1691, he wrote the music for what is sometimes considered his dramatic masterpiece, King Arthur, or The British Worthy . In 1692, he composed The Fairy-Queen (an adaptation of Shakespeare's A Midsummer Night's Dream), the score of which (his longest for theatre) was rediscovered in 1901 and published by the Purcell Society. The Indian Queen followed in 1695, in which year he also wrote songs for Dryden and Davenant's version of Shakespeare's The Tempest (recently, this has been disputed by music scholars), probably including "Full fathom five" and "Come unto these yellow sands". The Indian Queen was adapted from a tragedy by Dryden and Sir Robert Howard. In these semi-operas (another term for which at the time was "dramatic opera"), the main characters of the plays do not sing but speak their lines: the action moves in dialogue rather than recitative. The related songs are sung "for" them by singers, who have minor dramatic roles. Purcell's Te Deum and Jubilate Deo were written for Saint Cecilia's Day, 1694, the first English Te Deum ever composed with orchestral accompaniment. This work was annually performed at St Paul's Cathedral until 1712, after which it was performed alternately with Handel's Utrecht Te Deum and Jubilate until 1743, when both works were replaced by Handel's Dettingen Te Deum. He composed an anthem and two elegies for Queen Mary II's funeral, his Funeral Sentences and Music for the Funeral of Queen Mary. Besides the operas and semi-operas already mentioned, Purcell wrote the music and songs for Thomas d'Urfey's The Comical History of Don Quixote, Bonduca, The Indian Queen and others, a vast quantity of sacred music, and numerous odes, cantatas, and other miscellaneous pieces. 
The quantity of his instrumental chamber music is minimal after his early career, and his keyboard music consists of an even more minimal number of harpsichord suites and organ pieces. In 1693, Purcell composed music for two comedies: The Old Bachelor, and The Double Dealer. Purcell also composed for five other plays within the same year. In July 1695, Purcell composed an ode for the Duke of Gloucester for his sixth birthday. The ode is titled Who can from joy refrain? Purcell's four-part sonatas were issued in 1697. In the final six years of his life, Purcell wrote music for forty-two plays. Death Purcell died in 1695 at his home in Marsham Street, at the height of his career. He is believed to have been 35 or 36 years old at the time. The cause of his death is unclear: one theory is that he caught a chill after returning home late from the theatre one night to find that his wife had locked him out. Another is that he succumbed to tuberculosis. The beginning of Purcell's will reads: Purcell is buried adjacent to the organ in Westminster Abbey. The music that he had earlier composed for Queen Mary's funeral was performed during his funeral as well. Purcell was universally mourned as "a very great master of music." Following his death, the officials at Westminster honoured him by unanimously voting that he be buried with no expense spared in the north aisle of the Abbey. His epitaph reads: "Here lyes Henry Purcell Esq., who left this life and is gone to that Blessed Place where only His harmony can be exceeded." Purcell fathered six children by his wife Frances, four of whom died in infancy. His wife, as well as his son Edward (1689–1740) and daughter Frances, survived him. His wife Frances died in 1706, having published a number of her husband's works, including the now-famous collection called Orpheus Britannicus, in two volumes, printed in 1698 and 1702, respectively. Edward was appointed organist of St Clement's, Eastcheap, London, in 1711 and was succeeded by his son Edward Henry Purcell (died 1765). Both men were buried in St Clement's near the organ gallery. Legacy Notable compositions Purcell worked in many genres, both in works closely linked to the court, such as symphony song, to the Chapel Royal, such as the symphony anthem, and the theatre. Among Purcell's most notable works are his opera Dido and Aeneas (1688), his semi-operas Dioclesian (1690), King Arthur (1691), The Fairy-Queen (1692) and Timon of Athens (1695), as well as the compositions Hail! Bright Cecilia (1692), Come Ye Sons of Art (1694) and Funeral Sentences and Music for the Funeral of Queen Mary (1695). Influence and reputation After his death, Purcell was honoured by many of his contemporaries, including his old friend John Blow, who wrote An Ode, on the Death of Mr. Henry Purcell (Mark how the lark and linnet sing) with text by his old collaborator, John Dryden. William Croft's 1724 setting for the Burial Service was written in the style of "the great Master". Croft preserved Purcell's setting of "Thou knowest Lord" (Z 58) in his service, for reasons "obvious to any artist"; it has been sung at every British state funeral ever since. 
More recently, the English poet Gerard Manley Hopkins wrote a famous sonnet entitled simply "Henry Purcell", with a headnote reading: "The poet wishes well to the divine genius of Purcell and praises him that, whereas other musicians have given utterance to the moods of man's mind, he has, beyond that, uttered in notes the very make and species of man as created both in him and in all men generally." Purcell also had a strong influence on the composers of the English musical renaissance of the early 20th century, most notably Benjamin Britten, who arranged many of Purcell's vocal works for voice(s) and piano in Britten's Purcell Realizations, including from Dido and Aeneas, and whose The Young Person's Guide to the Orchestra is based on a theme from Purcell's Abdelazar. Stylistically, the aria "I know a bank" from Britten's opera A Midsummer Night's Dream is clearly inspired by Purcell's aria "Sweeter than Roses", which Purcell originally wrote as part of incidental music to Richard Norton's Pausanias, the Betrayer of His Country. Purcell is honoured together with Johann Sebastian Bach and George Frideric Handel with a feast day on the liturgical calendar of the Episcopal Church (USA) on 28 July. In a 1940 interview Ignaz Friedman stated that he considered Purcell as great as Bach and Beethoven. In Victoria Street, Westminster, England, there is a bronze monument to Purcell, sculpted by Glynn Williams and unveiled in 1995 to mark the three hundredth anniversary of his death. Purcell's works have been catalogued by Franklin Zimmerman, who gave them a number preceded by Z. A Purcell Club was founded in London in 1836 for promoting the performance of his music but was dissolved in 1863. In 1876 a Purcell Society was founded, which published new editions of his works. A modern-day Purcell Club has been
created, and provides guided tours and concerts in support of Westminster Abbey. Today there is a Henry Purcell Society of Boston, which performs his music in live concert and is currently streaming concerts online in response to the pandemic. There is a Purcell Society in London, which collects and studies Purcell manuscripts and musical scores, concentrating on producing revised versions of the scores of all his music. So strong was his reputation that a popular wedding processional was incorrectly attributed to Purcell for many years. The so-called Purcell's Trumpet Voluntary was in fact written around 1700 by a British composer named Jeremiah Clarke as the Prince of Denmark's March. In popular culture Music for the Funeral of Queen Mary was reworked by Wendy Carlos for the title music of the 1971 Stanley Kubrick film A Clockwork Orange. The 1973 Rolling Stone review of Jethro Tull's A Passion Play compared the musical style of the album with that of Purcell. In 2009 Pete Townshend of The Who, an English rock band that established itself in the 1960s, identified Purcell's harmonies, particularly the use of suspension and resolution that Townshend had learned from producer Kit Lambert, as an influence on the band's music (in songs such as "Won't Get Fooled Again" (1971), "I Can See for Miles" (1967) and the very Purcellian intro to "Pinball Wizard"). Purcell's music was widely featured as background music in the Academy Award-winning 1979 film Kramer vs. Kramer, with a soundtrack on CBS Masterworks Records. In the 21st century, the soundtrack of the 2005 film version of Pride and Prejudice features a dance titled "A Postcard to Henry Purcell". This is a version by composer Dario Marianelli of Purcell's Abdelazar theme. In the German-language 2004 movie Downfall, the music of Dido's Lament is used repeatedly as Nazi Germany collapses. The 2012 film Moonrise Kingdom contains Benjamin Britten's version of the Rondeau in Purcell's Abdelazar, created for his 1946 The Young Person's Guide to the Orchestra. In 2013, the Pet Shop Boys released their single "Love Is a
surface (one that has an original contact angle greater than 90°) becomes more hydrophobic when microstructured – its new contact angle becomes greater than the original. However, a hydrophilic surface (one that has an original contact angle less than 90°) becomes more hydrophilic when microstructured – its new contact angle becomes less than the original. Cassie and Baxter found that if the liquid is suspended on the tops of microstructures, θ will change to θCB*: where φ is the area fraction of the solid that touches the liquid. Liquid in the Cassie–Baxter state is more mobile than in the Wenzel state. We can predict whether the Wenzel or Cassie–Baxter state should exist by calculating the new contact angle with both equations. By a minimization of free energy argument, the relation that predicted the smaller new contact angle is the state most likely to exist. Stated in mathematical terms, for the Cassie–Baxter state to exist, the following inequality must be true. A recent alternative criterion for the Cassie–Baxter state asserts that the Cassie–Baxter state exists when the following 2 criteria are met:1) Contact line forces overcome body forces of unsupported droplet weight and 2) The microstructures are tall enough to prevent the liquid that bridges microstructures from touching the base of the microstructures. A new criterion for the switch between Wenzel and Cassie-Baxter states has been developed recently based on surface roughness and surface energy. The criterion focuses on the air-trapping capability under liquid droplets on rough surfaces, which could tell whether Wenzel's model or Cassie-Baxter's model should be used for certain combination of surface roughness and energy. Contact angle is a measure of static hydrophobicity, and contact angle hysteresis and slide angle are dynamic measures. Contact angle hysteresis is a phenomenon that characterizes surface heterogeneity. When a pipette injects a liquid onto a solid, the liquid will form some contact angle. As the pipette injects more liquid, the droplet will increase in volume, the contact angle will increase, but its three-phase boundary will remain stationary until it suddenly advances outward. The contact angle the droplet had immediately before advancing outward is termed the advancing contact angle. The receding contact angle is now measured by pumping the liquid back out of the droplet. The droplet will decrease in volume, the contact angle will decrease, but its three-phase boundary will remain stationary until it suddenly recedes inward. The contact angle the droplet had immediately before receding inward is termed the receding contact angle. The difference between advancing and receding contact angles is termed contact angle hysteresis and can be used to characterize surface heterogeneity, roughness, and mobility. Surfaces that are not homogeneous will have domains that impede motion of the contact line. The slide angle is another dynamic measure of hydrophobicity and is measured by depositing a droplet on a surface and tilting the surface until the droplet begins to slide. In general, liquids in the Cassie–Baxter state exhibit lower slide angles and contact angle hysteresis than those in the Wenzel state. Research and development Dettre and Johnson discovered in 1964 that the superhydrophobic lotus effect phenomenon was related to rough hydrophobic surfaces, and they developed a theoretical model based on experiments with glass beads coated with paraffin or TFE telomer. 
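For reference, the Wenzel and Cassie–Baxter relations referred to above are commonly written in the following standard textbook forms, where θ is the contact angle on the corresponding smooth surface, r is the ratio of the actual to the projected solid surface area (so that roughness amplifies a surface's natural wetting tendency), and φ is the area fraction of solid that touches the liquid; the inequality is the usual statement of when the Cassie–Baxter state is the energetically favoured one. These are the standard forms rather than this article's exact notation.

```latex
% Standard forms of the wetting relations discussed above (reference sketch).
\begin{align*}
  \cos\theta^{*}_{W}  &= r\,\cos\theta                   && \text{(Wenzel)} \\
  \cos\theta^{*}_{CB} &= \varphi\,(\cos\theta + 1) - 1   && \text{(Cassie--Baxter)} \\
  \cos\theta          &< \frac{\varphi - 1}{r - \varphi} && \text{(Cassie--Baxter state favoured)}
\end{align*}
```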
The self-cleaning property of superhydrophobic micro-nanostructured surfaces was reported in 1977. Perfluoroalkyl, perfluoropolyether, and RF plasma-formed superhydrophobic materials were developed, used for electrowetting, and commercialized for bio-medical applications between 1986 and 1995. Other technologies and applications have emerged since the mid-1990s. A durable superhydrophobic hierarchical composition, applied in one or two steps, was disclosed in 2002, comprising nano-sized particles ≤ 100 nanometers overlaying a surface having micrometer-sized features or particles ≤ 100 micrometers. The larger particles were observed to protect the smaller particles from mechanical abrasion. In recent research, superhydrophobicity has been reported by allowing
alkylketene dimer (AKD) to solidify into a nanostructured fractal surface. Many papers have since presented fabrication methods for producing superhydrophobic surfaces including particle deposition, sol-gel techniques, plasma treatments, vapor deposition, and casting techniques. Current opportunity for research impact lies mainly in fundamental research and practical manufacturing. Debates have recently emerged concerning the applicability of the Wenzel and Cassie–Baxter models. In an experiment designed to challenge the surface energy perspective of the Wenzel and Cassie–Baxter model and promote a contact line perspective, water drops
II, was already supplying the Army with a military-specific version of its WL line, called the WLA. The A in this case stood for "Army". Upon the outbreak of war, the company, along with most other manufacturing enterprises, shifted to war work. More than 90,000 military motorcycles, mostly WLAs and WLCs (the Canadian version), were produced, many to be provided to allies. Harley-Davidson received two Army-Navy "E" Awards, one in 1943 and the other in 1945, which were awarded for Excellence in Production. Shipments to the Soviet Union under the Lend-Lease program numbered at least 30,000. The WLAs produced during all four years of war production generally have 1942 serial numbers. Production of the WLA stopped at the end of World War II, but was resumed from 1950 to 1952 for use in the Korean War. The U.S. Army also asked Harley-Davidson to produce a new motorcycle with many of the features of BMW's side-valve and shaft-driven R71. Harley-Davidson largely copied the BMW engine and drive train and produced the shaft-driven 750 cc 1942 Harley-Davidson XA. It shared no dimensions, parts, or design concepts (except side valves) with any prior Harley-Davidson engine. Due to the superior cooling of the flat-twin engine with the cylinders across the frame, Harley's XA cylinder heads ran 100 °F (56 °C) cooler than its V-twins. The XA never entered full production: by that time the motorcycle had been eclipsed by the Jeep as the Army's general-purpose vehicle, and the WLA—already in production—was sufficient for its limited police, escort, and courier roles. Only 1,000 were made, and the XA remains the only shaft-driven Harley-Davidson ever made. Small: Hummer, Sportcycle and Aermacchi As part of war reparations, Harley-Davidson acquired the design of a small German motorcycle, the DKW RT 125, which they adapted, manufactured, and sold from 1948 to 1966. Various models were made, including the Hummer from 1955 to 1959, but they are all colloquially referred to as "Hummers" at present. BSA in the United Kingdom took the same design as the foundation of their BSA Bantam. In 1960, Harley-Davidson consolidated the Model 165 and Hummer lines into the Super-10, introduced the Topper scooter, and bought fifty percent of Aermacchi's motorcycle division. Importation of Aermacchi's 250 cc horizontal single began the following year. The bike bore Harley-Davidson badges and was marketed as the Harley-Davidson Sprint. The engine of the Sprint was increased to 350 cc in 1969 and would remain that size until 1974, when the four-stroke Sprint was discontinued. After the Pacer and Scat models were discontinued at the end of 1965, the Bobcat became the last of Harley-Davidson's American-made two-stroke motorcycles. The Bobcat was manufactured only in the 1966 model year. Harley-Davidson replaced their American-made lightweight two-stroke motorcycles with the Italian Aermacchi-built two-stroke powered M-65, M-65S, and Rapido. The M-65 had a semi-step-through frame and tank. The M-65S was an M-65 with a larger tank that eliminated the step-through feature. The Rapido was a larger bike with a 125 cc engine. The Aermacchi-built Harley-Davidsons became entirely two-stroke powered when the 250 cc two-stroke SS-250 replaced the four-stroke 350 cc Sprint in 1974. Harley-Davidson purchased full control of Aermacchi's motorcycle production in 1974 and continued making two-stroke motorcycles there until 1978, when they sold the facility to Cagiva, owned by the Castiglioni family.
Tarnished reputation In 1952, following their application to the U.S. Tariff Commission for a 40 percent tax on imported motorcycles, Harley-Davidson was charged with restrictive practices. In 1969, American Machine and Foundry (AMF) bought the company, streamlined production, and slashed the workforce. These tactics resulted in a labor strike, and the cost-cutting produced lower-quality bikes. The bikes were expensive and inferior in performance, handling, and quality to Japanese motorcycles. Sales and quality declined, and the company almost went bankrupt. The "Harley-Davidson" name was mocked as "Hardly Ableson", "Hardly Driveable", and "Hogly Ferguson", and the nickname "Hog" became pejorative. The early '70s saw the introduction in North America of what the motoring press called the Universal Japanese Motorcycle, which revolutionized the industry and made motorcycling in America more accessible during the 1970s and 1980s. In 1977, following the successful manufacture of the Liberty Edition to commemorate America's bicentennial in 1976, Harley-Davidson produced what has become one of its most controversial models, the Harley-Davidson Confederate Edition. The bike was essentially a stock Harley-Davidson with Confederate-specific paint and details. Restructuring and revival In 1981, AMF sold the company to a group of 13 investors led by Vaughn Beals and Willie G. Davidson for $80 million. The new management team improved product quality, introduced new technologies, and adopted just-in-time inventory management. These operational and product improvements were matched with a strategy of seeking tariff protection for large-displacement motorcycles in the face of intense competition with Japanese manufacturers. These protections were granted by the Reagan administration in 1983, giving Harley-Davidson time to implement its new strategies. Revising stagnated product designs was a crucial centerpiece of Harley-Davidson's turnaround strategy. Rather than trying to mimic popular Japanese designs, the new management deliberately exploited the "retro" appeal of Harley motorcycles, building machines that adopted the look and feel of their earlier bikes and the subsequent customizations of owners of that era. Many components, such as brakes, forks, shocks, carburetors, electrics and wheels, were outsourced from foreign manufacturers; quality increased, technical improvements were made, and buyers slowly returned. Harley-Davidson bought the "Sub Shock" cantilever-swingarm rear suspension design from Missouri engineer Bill Davis and developed it into its Softail series of motorcycles, introduced in 1984 with the FXST Softail. In response to possible motorcycle market loss due to the aging of baby-boomers, Harley-Davidson bought luxury motorhome manufacturer Holiday Rambler in 1986. In 1996, the company sold Holiday Rambler to the Monaco Coach Corporation. The "Sturgis" model, boasting a dual belt-drive, was introduced initially in 1980 and was made for three years. This bike was then brought back as a commemorative model in 1991. Fat Boy, Dyna, and Harley-Davidson museum By 1990, with the introduction of the "Fat Boy", Harley-Davidson once again became the sales leader in the heavyweight (over 750 cc) market. At the time of the Fat Boy model introduction, a false etymology spread that "Fat Boy" was a combination of the names of the atomic bombs Fat Man and Little Boy.
This has been debunked, as the name "Fat Boy" actually comes from the observation that the motorcycle is somewhat wider than other bikes when viewed head-on. 1993 and 1994 saw the replacement of FXR models with the Dyna (FXD), which became the sole rubber mount FX Big Twin frame in 1994. The FXR was revived briefly from 1999 to 2000 for special limited editions (FXR2, FXR3 & FXR4). Harley-Davidson celebrated their 100th anniversary on September 1, 2003 with a large event and concert featuring performances from Elton John, The Doobie Brothers, Kid Rock, and Tim McGraw. Construction started on the $75 million, 130,000 square-foot (12,000 m2) Harley-Davidson Museum in the Menomonee Valley of Milwaukee, Wisconsin on June 1, 2006. It opened in 2008 and houses the company's vast collection of historic motorcycles and corporate archives, along with a restaurant, café and meeting space. Overseas operations Established in 1918, the oldest continuously operating Harley-Davidson dealership outside of the United States is in Australia. Sales in Japan started in 1912 then in 1929, Harley-Davidsons were produced in Japan under license to the company Rikuo (Rikuo Internal Combustion Company) under the name of Harley-Davidson and using the company's tooling, and later under the name Rikuo. Production continued until 1958. In 1998 the first Harley-Davidson factory outside the US opened in Manaus, Brazil, taking advantage of the free economic zone there. The location was positioned to sell motorcycles in the southern hemisphere market. In August 2009, Harley-Davidson launched Harley-Davidson India and started selling motorcycles there in 2010. The company established the subsidiary in Gurgaon, near Delhi, in 2011 and created an Indian dealer network. On September 24, 2020, Harley Davidson announced that it would discontinue its sales and manufacturing operations in India due to weak demand and sales. The move involves $75 million in restructuring costs, 70 layoffs and the closure of its Bawal plant in northern India. Buell Motorcycle Company Harley-Davidson's association with sportbike manufacturer Buell Motorcycle Company began in 1987 when they supplied Buell with fifty surplus XR1000 engines. Buell continued to buy engines from Harley-Davidson until 1993, when Harley-Davidson bought 49 percent of the Buell Motorcycle Company. Harley-Davidson increased its share in Buell to ninety-eight percent in 1998, and to complete ownership in 2003. In an attempt to attract newcomers to motorcycling in general and to Harley-Davidson in particular, Buell developed a low-cost, low-maintenance motorcycle. The resulting single-cylinder Buell Blast was introduced in 2000, and was made through 2009, which, according to Buell, was to be the final year of production. The Buell Blast was the training vehicle for the Harley-Davidson Rider's Edge New Rider Course from 2000 until May 2014, when the company re-branded the training academy and started using the Harley-Davidson Street 500 motorcycles. In those 14 years, more than 350,000 participants in the course learned to ride on the Buell Blast. On October 15, 2009, Harley-Davidson Inc. issued an official statement that it would be discontinuing the Buell line and ceasing production immediately. The stated reason was to focus on the Harley-Davidson brand. The company refused to consider selling Buell. Founder Erik Buell subsequently established Erik Buell Racing and continued to manufacture and develop the company's 1125RR racing motorcycle. 
Claims of stock price manipulation During its period of peak demand, during the late 1990s and early first decade of the 21st century, Harley-Davidson embarked on a program of expanding the number of dealerships throughout the country. At the same time, its current dealers typically had waiting lists that extended up to a year for some of the most popular models. Harley-Davidson, like the auto manufacturers, records a sale not when a consumer buys their product, but rather when it is delivered to a dealer. Therefore, it is possible for the manufacturer to inflate sales numbers by requiring dealers to accept more inventory than desired in a practice called channel stuffing. When demand softened following the unique 2003 model year, this news led to a dramatic decline in the stock price. In April 2004 alone, the price of HOG shares dropped from more than $60 to less than $40. Immediately prior to this decline, retiring CEO Jeffrey Bleustein profited $42 million on the exercise of employee stock options. Harley-Davidson was named as a defendant in numerous class action suits filed by investors who claimed they were intentionally defrauded by Harley-Davidson's management and directors. By January 2007, the price of Harley-Davidson shares reached $70. Problems with Police Touring models Starting around 2000, several police departments started reporting problems with high-speed instability on the Harley-Davidson Touring motorcycles. A Raleigh, North Carolina police officer, Charles Paul, was killed when his 2002 police touring motorcycle crashed after reportedly experiencing a high-speed wobble. The California Highway Patrol conducted testing of the Police Touring motorcycles in 2006. The CHP test riders reported experiencing wobble or weave instability while operating the motorcycles on the test track. 2007 strike On February 2, 2007, upon the expiration of their union contract, about 2,700 employees at Harley-Davidson Inc.'s largest manufacturing plant in York, Pennsylvania, went on strike after failing to agree on wages and health benefits. During the pendency of the strike, the company refused to pay for any portion of the striking employees' health care. The day before the strike, after the union voted against the proposed contract and to authorize the strike, the company shut down all production at the plant. The York facility employs more than 3,200 workers, both union and non-union. Harley-Davidson announced on February 16, 2007, that it had reached a labor agreement with union workers at its largest manufacturing plant, a breakthrough in the two-week-old strike. The strike disrupted Harley-Davidson's national production and was felt in Wisconsin, where 440 employees were laid off, and many Harley suppliers also laid off workers because of the strike. MV Agusta Group On July 11, 2008, Harley-Davidson announced they had signed a definitive agreement to acquire the MV Agusta Group for US$109 million (€70M). MV Agusta Group contains two lines of motorcycles: the high-performance MV Agusta brand and the lightweight Cagiva brand. The acquisition was completed on August 8. On October 15, 2009, Harley-Davidson announced that it would divest its interest in MV Agusta. Harley-Davidson Inc. sold Italian motorcycle maker MV Agusta to Claudio Castiglioni – a member of the family that had purchased Aermacchi from H-D in 1978 – for a reported 3 euros, ending the transaction in the first week of August 2010. 
Castiglioni was MV Agusta's former owner, and had been MV Agusta's chairman since Harley-Davidson bought it in 2008. As part of the deal, Harley-Davidson put $26M into MV Agusta's accounts, essentially giving Castiglioni $26M to take the brand. Financial crisis According to Interbrand, the value of the Harley-Davidson brand fell by 43 percent to $4.34 billion in 2009. The fall in value is believed to be connected to the 66 percent drop in the company profits in two-quarters of the previous year. On April 29, 2010, Harley-Davidson stated that they must cut $54 million in manufacturing costs from its production facilities in Wisconsin, and that they would explore alternative U.S. sites to accomplish this. The announcement came in the wake of a massive company-wide restructuring, which began in early 2009 and involved the closing of two factories, one distribution center, and the planned elimination of nearly 25 percent of its total workforce (around 3,500 employees). The company announced on September 14, 2010, that it would remain in Wisconsin. Motorcycle engines The classic Harley-Davidson engines are V-twin engines, with a 45° angle between the cylinders. The crankshaft has a single pin, and both pistons are connected to this pin through their connecting rods. This 45° angle is covered under several United States patents and is an engineering tradeoff that allows a large, high-torque engine in a relatively small space. It causes the cylinders to fire at uneven intervals and produces the choppy "potato-potato" sound so strongly linked to the Harley-Davidson brand. To simplify the engine and reduce costs, the V-twin ignition was designed to operate with a single set of points and no distributor. This is known as a dual fire ignition system, causing both spark plugs to fire regardless of which cylinder was on its compression stroke, with the other spark plug firing on its cylinder's exhaust stroke, effectively "wasting a spark". The exhaust note is basically a throaty growling sound with some popping. The 45° design of the engine thus creates a plug firing sequencing as such: The first cylinder fires, the second (rear) cylinder fires 315° later, then there is a 405° gap until the first cylinder fires again, giving the engine its unique sound. Harley-Davidson has used various ignition systems throughout its history – be it the early points and condenser system, (Big Twin and Sportsters up to 1978), magneto ignition system used on some 1958 to 1969 Sportsters, early electronic with centrifugal mechanical advance weights, (all models from mid-1978 until 1979), or the late electronic with a transistorized ignition control module, more familiarly known as the black box or the brain, (all models 1980 to present). Starting in 1995, the company introduced Electronic Fuel Injection (EFI) as an option for the 30th anniversary edition Electra Glide. EFI became standard on all Harley-Davidson motorcycles, including Sportsters, upon the introduction of the 2007 product line. In 1991, Harley-Davidson began to participate in the Sound Quality Working Group, founded by Orfield Labs, Bruel and Kjaer, TEAC, Yamaha, Sennheiser, SMS and Cortex. This was the nation's first group to share research on psychological acoustics. Later that year, Harley-Davidson participated in a series of sound quality studies at Orfield Labs, based on recordings taken at the Talladega Superspeedway, with the objective to lower the sound level for EU standards while analytically capturing the "Harley Sound". 
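As a back-of-the-envelope illustration of the 315° and 405° firing intervals described above, the short sketch below derives them from the 45° cylinder angle of a four-stroke V-twin whose connecting rods share a single crankpin. It is illustrative arithmetic only, not Harley-Davidson engineering documentation, and the function name is hypothetical.

```python
# Illustrative derivation of the uneven firing intervals of a 45-degree,
# single-crankpin, four-stroke V-twin, as described above. A sketch of the
# arithmetic only; not taken from any Harley-Davidson documentation.

def vtwin_firing_intervals(v_angle_deg: float) -> tuple:
    """Return the two successive firing intervals, in crank degrees.

    With a shared crankpin, the rear cylinder reaches top dead centre
    v_angle_deg of crank rotation after the front cylinder, so (with the
    phasing quoted in the text) it fires v_angle_deg short of a full
    revolution after the front cylinder; the rest of the 720-degree
    four-stroke cycle forms the second, longer interval.
    """
    first = 360.0 - v_angle_deg    # front cylinder fires, rear fires this much later
    second = 720.0 - first         # gap until the front cylinder fires again
    return first, second

if __name__ == "__main__":
    print(vtwin_firing_intervals(45.0))  # (315.0, 405.0), matching the figures above
```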
This research resulted in the bikes that were introduced in compliance with EU standards for 1998. On February 1, 1994, the company filed a sound trademark application for the distinctive sound of the Harley-Davidson motorcycle engine: "The mark consists of the exhaust sound of applicant's motorcycles, produced by V-twin, common crankpin motorcycle engines when the goods are in use". Nine of Harley-Davidson's competitors filed comments opposing the application, arguing that cruiser-style motorcycles of various brands use a single-crankpin V-twin engine which produce a similar sound. These objections were followed by litigation. In June 2000, the company dropped efforts to federally register its trademark. Big V-twins F-head, also known as JD, pocket valve and IOE (intake over exhaust), 1914–1929 (1,000 cc), and 1922–1929 (1,200 cc) Flathead, 1930–1949 (1,200 cc) and 1935–1941 (1,300 cc). Knucklehead, 1936–1947 61 cubic inch (1,000 cc), and 1941–1947 74 cubic inch (1,200 cc) Panhead, 1948–1965 61 cubic inch (1,000 cc), and 1948–1965, 74 cubic inch (1,200 cc) Shovelhead, 1966–1984, 74 cubic inch (1,200 cc) and 80 cubic inch (1,338 cc) since late 1978 Evolution (a.k.a. "Evo" and "Blockhead"), 1984–1999, 80 cubic inch (1,340 cc) Twin Cam (a.k.a. "Fathead" as named by American Iron Magazine) 1999–2017, in the following versions: Twin Cam 88, 1999–2006, 88 cubic inch (1,450 cc) Twin Cam 88B, counterbalanced version of the Twin Cam 88, 2000–2006, 88 cubic inch (1,450 cc) Twin Cam 95, since 2000, 95 cubic inch (1,550 cc) (engines for early C.V.O. models) Twin Cam 96, since 2007. Twin Cam 103, 2003–2006, 2009, 103 cubic inch (1,690 cc) (engines for C.V.O. models), Standard on 2011 Touring models: Ultra Limited, Road King Classic and Road Glide Ultra and optional on the Road Glide Custom and Street Glide. Standard on most 2012 models excluding Sportsters and 2 Dynas (Street Bob and Super Glide Custom). Standard on all 2014 dyna models. Twin Cam 110, 2007–2017, 110 cubic inch (1,800 cc) (engines for C.V.O. models, 2016 Soft Tail Slim S; FatBoy S, Low Rider S, and Pro-Street Breakout) Milwaukee-Eight Standard : Standard on touring model year 2017+ and Softail models 2018+. Twin-cooled : Optional on some touring and trike model year 2017+. Twin-cooled : Optional on touring and trike model year 2017+, standard on 2017 CVO models. Twin-cooled : Standard on 2018 CVO models Small V-twins D Model, 1929–1931, 750 cc R Model, 1932–1936, 750 cc W Model, 1937–1952, 750 cc, solo (2 wheel, frame only) G (Servi-Car) Model, 1932–1973, 750 cc K Model, 1952–1953, 750 cc KH Model, 1954–1956, 900 cc Ironhead, 1957–1971, 883 cc; 1972–1985, 1,000 cc Evolution, since 1986, 883 cc, 1,100 cc and 1,200 cc Revolution engine The Revolution engine is based on the VR-1000 Superbike race program, developed by Harley-Davidson's Powertrain Engineering with Porsche helping to make the engine suitable for street use. It is a liquid cooled, dual overhead cam, internally counterbalanced 60 degree V-twin engine with a displacement of 69 cubic inch (1,130 cc), producing at 8,250 rpm at the crank, with a redline of 9,000 rpm. It was introduced for the new VRSC (V-Rod) line in 2001 for the 2002 model year, starting with the single VRSCA (V-Twin Racing Street Custom) model. The Revolution marks Harley's first collaboration with Porsche since the V4 Nova project, which, like the V-Rod, was a radical departure from Harley's traditional lineup until it was cancelled by AMF in 1981 in favor of the Evolution engine. 
A 1,250 cc Screamin' Eagle version of the Revolution engine was made available for 2005 and 2006, and was present thereafter in a single production model from 2005 to 2007. In 2008, the 1,250 cc Revolution Engine became standard for the entire VRSC line. Harley-Davidson claims at the crank for the 2008 VRSCAW model. The VRXSE Destroyer dragbike is equipped with a stroker (75 mm crank) Screamin' Eagle 79 cubic inch (1,300 cc) Revolution Engine, producing , and more than . 750 cc and 500 cc versions of the Revolution engine are used in Harley-Davidson's Street line of light cruisers. These motors, named the Revolution X, use a single overhead cam, screw and locknut valve adjustment, a single internal counterbalancer, and vertically split crankcases; all of these changes making it different from the original Revolution design. Düsseldorf-Test An extreme endurance test of the Revolution engine was performed in a dynamometer installation at the Harley-Davidson factory in Milwaukee, simulating the German Autobahn (highways without general speed limit) between the Porsche research and development center in Weissach, near Stuttgart to Düsseldorf. An undisclosed number of samples of engines failed, until an engine successfully passed the 500-hour nonstop run. This was the benchmark for the engineers to approve the start of production for the Revolution engine, which was documented in the Discovery channel special Harley-Davidson: Birth of the V-Rod, October 14, 2001. Single-cylinder engines IOE singles The first Harley-Davidson motorcycles were powered by single-cylinder IOE engines with the inlet valve operated by engine vacuum, based on the DeDion-Bouton pattern. Singles of this type continued to be made until 1913, when a pushrod and rocker system was used to operate the overhead inlet valve on the single, a similar system having been used on their V-twins since 1911. Single-cylinder motorcycle engines were discontinued in 1918. Flathead and OHV singles Single-cylinder engines were reintroduced in 1925 as 1926 models. These singles were available either as flathead engines or as overhead valve engines until 1930, after which they were only available as flatheads. The flathead single-cylinder motorcycles were designated Model A for engines with magneto systems only and Model B for engines with battery and coil systems, while overhead valve versions were designated Model AA and Model BA respectively, and a magneto-only racing version was designated Model S. This line of single-cylinder motorcycles ended production in 1934. Two-stroke singles Model families Modern Harley-branded motorcycles fall into one of seven model families: Touring, Softail, Dyna, Sportster, Vrod, Street and LiveWire. These model families are distinguished by the frame, engine, suspension, and other characteristics. Touring Touring models use Big-Twin engines and large-diameter telescopic forks. All Touring designations begin with the letters FL, e.g., FLHR (Road King) and FLTR (Road Glide). The touring family, also known as "dressers" or "baggers", includes Road King, Road Glide, Street Glide and Electra Glide models offered in various trims. The Road Kings have a "retro cruiser" appearance and are equipped with a large clear windshield. Road Kings are reminiscent of big-twin models from the 1940s and 1950s. Electra Glides can be identified by their full front fairings. Most Electra Glides sport a fork-mounted fairing referred to as the "Batwing" due to its unmistakable shape. 
The Road Glide and Road Glide Ultra Classic have a frame-mounted fairing, referred to as the "Sharknose". The Sharknose includes a unique, dual front headlight. Touring models are distinguishable by their large saddlebags, rear coil-over air suspension and are the only models to offer full fairings with radios and CBs. All touring models use the same frame, first introduced with a Shovelhead motor in 1980, and carried forward with only modest upgrades until 2009, when it was extensively redesigned. The frame is distinguished by the location of the steering head in front of the forks and was the first H-D frame to rubber mount the drivetrain to isolate the rider from the vibration of the big V-twin. The frame was modified for the 1993 model year when the oil tank went under the transmission and the battery was moved inboard from under the right saddlebag to under the seat. In 1997, the frame was again modified to allow for a larger battery under the seat and to lower seat height. In 2007, Harley-Davidson introduced the Twin Cam 96 engine, as well the six-speed transmission to give the rider better speeds on the highway. In 2006, Harley introduced the FLHX Street Glide, a bike designed by Willie G. Davidson to be his personal ride, to its touring line. In 2008, Harley added anti-lock braking systems and cruise control as a factory installed option on all touring models (standard on CVO and Anniversary models). Also new for 2008 is the fuel tank for all touring models. 2008 also brought throttle-by-wire to all touring models. For the 2009 model year, Harley-Davidson redesigned the entire touring range with several changes, including a new frame, new swingarm, a completely revised engine-mounting system, front wheels for all but the FLHRC Road King Classic, and a 2–1–2 exhaust. The changes result in greater load carrying capacity, better handling, a smoother engine, longer range and less exhaust heat transmitted to the rider and passenger. Also released for the 2009 model year is the FLHTCUTG Tri-Glide Ultra Classic, the first three-wheeled Harley since the Servi-Car was discontinued in 1973. The model features a unique frame and a 103-cubic-inch (1,690 cc) engine exclusive to the trike. In 2014, Harley-Davidson released a redesign for specific touring bikes and called it
"Project Rushmore". Changes include a new 103CI High Output engine, one-handed easy-open saddlebags and compartments, a new Boom! Box Infotainment system with either 4.3-inch (10 cm) or 6.5-inch (16.5 cm) screens featuring touchscreen functionality [6.5-inch (16.5 cm) models only], Bluetooth (media and phone with approved compatible devices), available GPS and SiriusXM, Text-to-Speech functionality (with approved compatible devices) and USB connectivity with charging. Other features include ABS with Reflex linked brakes, improved styling, Halogen or LED lighting and upgraded passenger comfort. Softail These big-twin motorcycles capitalize on Harley's strong value on tradition. With the rear-wheel suspension hidden under the transmission, they are visually similar to the "hardtail" choppers popular in the 1960s and 1970s, as well as from their own earlier history. In keeping with that tradition, Harley offers Softail models with "Heritage" styling that incorporate design cues from throughout their history and used to offer "Springer" front ends on these Softail models from the factory. Designation Softail models utilize the big-twin engine (F) and the Softail chassis (ST). Softail models that use 21-inch (530 mm) Front Wheels have designations that begin with FX, e.g., FXSTB (Night Train), FXSTD (Deuce), and FXSTS (Springer). Softail models that use 16-inch (410 mm) Front Wheels have designations beginning with FL, e.g., FLSTF (Fat Boy), FLSTC (Heritage Softail Classic), FLSTN (Softail Deluxe) and FLS (Softail Slim). Softail models that use Springer forks with a wheel have designations that begin with FXSTS, e.g., FXSTS (Springer Softail) and FXSTSB (Bad Boy). Softail models that use Springer forks with a wheel have designations that begin with FLSTS, e.g., FLSTSC (Springer Classic) and FLSTSB (Cross Bones). Dyna Dyna-frame motorcycles were developed in the 1980s and early 1990s and debuted in the 1991 model year with the FXDB Sturgis offered in limited edition quantities. In 1992 the line continued with the limited edition FXDB Daytona and a production model FXD Super Glide. The new DYNA frame featured big-twin engines and traditional styling. They can be distinguished from the Softail by the traditional coil-over suspension that connects the swingarm to the frame, and from the Sportster by their larger engines. On these models, the transmission also houses the engine's oil reservoir.
Prior to 2006, Dyna models typically featured a narrow, XL-style 39mm front fork and front wheel, as well as footpegs which the manufacturer included the letter "X" in the model designation to indicate. This lineup traditionally included the Super Glide (FXD), Super Glide Custom (FXDC), Street Bob (FXDB), and Low Rider (FXDL). One exception was the Wide Glide (FXDWG), which featured thicker 41mm forks and a narrow front wheel, but positioned the forks on wider triple-trees that give a beefier appearance. In 2008, the Dyna Fat Bob (FXDF) was introduced to the Dyna lineup, featuring aggressive styling like a new 2–1–2 exhaust, twin headlamps, a 180 mm rear tire, and, for the first time in the Dyna lineup, a 130 mm front tire. For the 2012 model year, the Dyna Switchback (FLD) became the first Dyna to break the tradition of having an FX model designation with floorboards, detachable painted hard saddlebags, touring windshield, headlight nacelle and a wide front tire with full fender. The new front end resembled the big-twin FL models from 1968 to 1971. The Dyna family used the 88-cubic-inch (1,440 cc) twin cam from 1999 to 2006. In 2007, the displacement was increased to 96 cubic inches (1,570 cc) as the factory increased the stroke to . For the 2012 model year, the manufacturer began to offer Dyna models with the 103-cubic-inch (1,690 cc) upgrade. All Dyna models use a rubber-mounted engine to isolate engine vibration. Harley discontinued the Dyna platform in 2017 for the 2018 model year, having been replaced by a completely-redesigned Softail chassis; some of the existing models previously released by the company under the Dyna nameplate have since been carried over to the new Softail line. Designation Dyna models utilize the big-twin engine (F), footpegs noted as (X) with the exception of the 2012 FLD Switchback, a Dyna model which used floorboards as featured on the Touring (L) models, and the Dyna chassis (D). Therefore, except for the FLD from 2012 to 2016, all Dyna models have designations that begin with FXD, e.g., FXDWG (Dyna Wide Glide) and FXDL (Dyna Low Rider). Sportster Introduced in 1957, the Sportster family were conceived as racing motorcycles, and were popular on dirt and flat-track race courses through the 1960s and 1970s. Smaller and lighter than the other Harley models, contemporary Sportsters make use of 883 cc or 1,200 cc Evolution engines and, though often modified, remain similar in appearance to their racing ancestors. Up until the 2003 model year, the engine on the Sportster was rigidly mounted to the frame. The 2004 Sportster received a new frame accommodating a rubber-mounted engine. This made the bike heavier and reduced the available lean angle, while it reduced the amount of vibration transmitted to the frame and the rider, providing a smoother ride for rider and passenger. In the 2007 model year, Harley-Davidson celebrated the 50th anniversary of the Sportster and produced a limited edition called the XL50, of which only 2000 were made for sale worldwide. Each motorcycle was individually numbered and came in one of two colors, Mirage Pearl Orange or Vivid Black. Also in 2007, electronic fuel injection was introduced to the Sportster family, and the Nightster model was introduced in mid-year. In 2009, Harley-Davidson added the Iron 883 to the Sportster line, as part of the Dark Custom series. In the 2008 model year, Harley-Davidson released the XR1200 Sportster in Europe, Africa, and the Middle East. 
The XR1200 had an Evolution engine tuned to produce , four-piston dual front disc brakes, and an aluminum swing arm. Motorcyclist featured the XR1200 on the cover of its July 2008 issue and was generally positive about it in their "First Ride" story, in which Harley-Davidson was repeatedly asked to sell it in the United States. One possible reason for the delayed availability in the United States was the fact that Harley-Davidson had to obtain the "XR1200" naming rights from Storz Performance, a Harley customizing shop in Ventura, Calif. The XR1200 was released in the United States in 2009 in a special color scheme including Mirage Orange highlighting its dirt-tracker heritage. The first 750 XR1200 models in 2009 were pre-ordered and came with a number 1 tag for the front of the bike, autographed by Kenny Coolbeth and Scott Parker and a thank you/welcome letter from the company, signed by Bill Davidson. The XR1200 was discontinued in model year 2013. In 2021, Harley-Davidson launched the Sportster S model, with a 121hp engine and 228 Kg ready-to-ride weight. The Sportster S was one of the first Harleys to come with cornering-ABS and lean-sensitive traction control. The Sportster S is also the first model under the Sportster nameplate since 1957 to receive a completely new engine. Designation Except for the street-going XR1000 of the 1980s and the XR1200, most Sportsters made for street use have the prefix XL in their model designation. For the Sportster Evolution engines used since the mid-1980s, there have been two engine sizes. Motorcycles with the smaller engine are designated XL883, while those with the larger engine were initially designated XL1100. When the size of the larger engine was increased from 1,100 cc to 1,200 cc, the designation was changed accordingly from XL1100 to XL1200. Subsequent letters in the designation refer to model variations within the Sportster range, e.g. the XL883C refers to an 883 cc Sportster Custom, while the XL1200S designates the now-discontinued 1200 Sportster Sport. VRSC Introduced in 2001 and produced until 2017, the VRSC muscle bike family bears little resemblance to Harley's more traditional lineup. Competing against Japanese and American muscle bikes in the upcoming muscle bike/power cruiser segment, the "V-Rod" makes use of the revolution engine that, for the first time in Harley history, incorporates overhead cams and liquid cooling. The V-Rod is visually distinctive, easily identified by the 60-degree V-Twin engine, the radiator and the hydroformed frame members that support the round-topped air cleaner cover. The VRSC platform was also used for factory drag-racing motorcycles. In 2008, Harley added the anti-lock braking system as a factory-installed option on all VRSC models. Harley also increased the displacement of the stock engine from , which had only previously been available from Screamin' Eagle, and added a slipper clutch as standard equipment. VRSC models include: VRSCA: V-Rod (2002–2006), VRSCAW: V-Rod (2007–2010), VRSCB: V-Rod (2004–2005), VRSCD: Night Rod (2006–2008), VRSCDX: Night Rod Special (2007–2014), VRSCSE: Screamin' Eagle CVO V-Rod (2005), VRSCSE2: Screamin' Eagle CVO V-Rod (2006), VRSCR: Street Rod (2006–2007), VRSCX: Screamin' Eagle Tribute V-Rod (2007), VRSCF: V-Rod Muscle (2009–2014). VRSC models utilize the Revolution engine (VR), and the street versions are designated Street Custom (SC). 
After the VRSC prefix common to all street Revolution bikes, the next letter denotes the model, either A (base V-Rod: discontinued), AW (base V-Rod + W for Wide with a 240 mm rear tire), B (discontinued), D (Night Rod: discontinued), R (Street Rod: discontinued), SE and SEII(CVO Special Edition), or X (Special edition). Further differentiation within models are made with an additional letter, e.g., VRSCDX denotes the Night Rod Special. VRXSE The VRXSE V-Rod Destroyer is Harley-Davidson's production drag racing motorcycle, constructed to run the quarter mile in less than ten seconds. It is based on the same revolution engine that powers the VRSC line, but the VRXSE uses the Screamin' Eagle 1,300 cc "stroked" incarnation, featuring a 75 mm crankshaft, 105 mm Pistons, and 58 mm throttle bodies. The V-Rod Destroyer is not a street-legal motorcycle. As such, it uses "X" instead of "SC" to denote a non-street bike. "SE" denotes a CVO Special Edition. Street The Street, Harley-Davidson's newest platform and their first all new platform in thirteen years, was designed to appeal to younger riders looking for a lighter bike at a cheaper price. The Street 750 model was launched in India at the 2014 Indian Auto Expo, Delhi-NCR on February 5, 2014. The Street 750 weighs 218 kg and has a ground clearance of 144 mm giving it the lowest weight and the highest ground clearance of Harley-Davidson motorcycles currently available. The Street 750 uses an all-new, liquid-cooled, 60° V-twin engine called the Revolution X. In the Street 750, the engine displaces and produces 65 Nm at 4,000 rpm. A six speed transmission is used. The Street 750 and the smaller-displacement Street 500 have been available since late 2014. Street series motorcycles for the North American market will be built in Harley-Davidson's Kansas City, Missouri plant, while those for other markets around the world will be built completely in their plant in Bawal, India. LiveWire Harley-Davidson's LiveWire, released in 2019, is their first electric vehicle. The high-voltage battery provides a minimum city range of 98 miles (158 km). The LiveWire targets a different type of customer than their classic V-twin powered motorcycles. In March 2020, a Harley-Davidson LiveWire was used to break the 24-hour distance record for an electric motorcycle. The bike traveled a reported 1,723 km (1,079 miles) in 23 hours and 48 minutes. The LiveWire offers a Level 1 slow recharge, which uses a regular wall outlet to refill an empty battery overnight, or a quick Level 3 DC Fast Charge. The Fast Charge fills the battery most of the way in about 40 minutes. Swiss rider Michel von Tell used the Level 3 charging to make the 24-hour ride. In December 2021, the news was published that LiveWire would be spun-off from parent Harley Davidson, set to go public in the first half of 2022 as a special purpose acquisition company (SPAC) valued at $1.77 billion. Custom Vehicle Operations Custom Vehicle Operations (CVO) is a team within Harley-Davidson that produces limited-edition customizations of Harley's stock models. Every year since 1999, the team has selected two to five of the company's base models and added higher-displacement engines, performance upgrades, special-edition paint jobs, more chromed or accented components, audio system upgrades, and electronic accessories to create high-dollar, premium-quality customizations for the factory custom market. 
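Several engine figures are quoted in the model descriptions above: 96 and 103 cubic inches for the big twins, and a 1,300 cc displacement with a 105 mm piston and 75 mm crankshaft for the VRXSE Destroyer. As a rough cross-check rather than anything from the source, the short Python sketch below relates those numbers, assuming the 105 mm piston figure is the bore and the 75 mm crankshaft figure is the stroke; the small mismatches are ordinary rounding in the quoted figures.

```python
# A minimal sketch (not from the source) relating the displacement figures
# quoted in this section. Assumes the VRXSE's 105 mm "Pistons" figure is the
# bore and its 75 mm "crankshaft" figure is the stroke.
import math

def v_twin_displacement_cc(bore_mm: float, stroke_mm: float) -> float:
    """Swept volume of a two-cylinder engine, in cubic centimetres."""
    cylinder_cc = math.pi / 4 * (bore_mm / 10) ** 2 * (stroke_mm / 10)
    return 2 * cylinder_cc

def cubic_inches_to_cc(cubic_inches: float) -> float:
    """One cubic inch is about 16.387 cc."""
    return cubic_inches * 16.387

print(round(v_twin_displacement_cc(105, 75)))  # ~1299 cc, quoted as 1,300 cc (VRXSE)
print(round(cubic_inches_to_cc(96)))           # ~1573 cc, quoted as 1,570 cc (Twin Cam 96)
print(round(cubic_inches_to_cc(103)))          # ~1688 cc, quoted as 1,690 cc
```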
The models most commonly upgraded in such a fashion are the Ultra Classic Electra Glide, which has been selected for CVO treatment every year from 2006 to the present, and the Road King, which was selected in 2002, 2003, 2007, and 2008. The Dyna, Softail, and VRSC families have also been selected for CVO customization. Environmental record The Environmental Protection Agency conducted emissions-certification and representative emissions test in Ann Arbor, Michigan, in 2005. Subsequently, Harley-Davidson produced an "environmental warranty". The warranty ensures each owner that the vehicle is designed and built free of any defects in materials and workmanship that would cause the vehicle to not meet EPA standards. In 2005, the EPA and the Pennsylvania Department of Environmental Protection (PADEP) confirmed Harley-Davidson to be the first corporation to voluntarily enroll in the One Clean-Up Program. This program is designed for the clean-up of the affected soil and groundwater at the former York Naval Ordnance Plant. The program is backed by the state and local government along with participating organizations and corporations. Paul Gotthold, Director of Operations for the EPA, congratulated the motor company: Harley-Davidson also purchased most of Castalloy, a South Australian producer of cast motorcycle wheels and hubs. The South Australian government has set forth "protection to the purchaser (Harley-Davidson) against environmental risks". In August 2016 Harley-Davidson settled with the EPA for $12 million, without admitting wrongdoing, over the sale of after-market "super tuners". Super tuners were devices, marketed for competition, which enabled increased performance of Harley-Davidson products. However, the devices also modified the emission control systems, producing increased hydrocarbon and nitrogen oxide. Harley-Davidson is required to buy back and destroy any super tuners which do not meet Clean Air Act requirements and spend $3 million on air pollution mitigation. Brand culture According to a recent Harley-Davidson study, in 1987 half of all Harley riders were under age 35. However, by 2006, only 15 percent of Harley buyers were under 35, and as of 2005, the median age had risen to 46.7. In 2008, Harley-Davidson stopped disclosing the average age of riders; at this point it was 48 years old. In 1987, the median household income of a Harley-Davidson rider was $38,000. By 1997, the median household income for those riders had more than doubled, to $83,000. Many Harley-Davidson Clubs exist nowadays around the world; the oldest one, founded in 1928, is in Prague. Harley-Davidson attracts a loyal brand community, with licensing of the Harley-Davidson logo accounting for almost 5 percent of the company's net revenue ($41 million in 2004). Harley-Davidson supplies many American police forces with their motorcycle fleets. From its founding, Harley-Davidson had worked to brand its motorcycles as respectable and refined products, with ads that showed what motorcycling writer Fred Rau called "refined-looking ladies with parasols, and men in conservative suits as the target market". The 1906 Harley-Davidson's effective, and polite, muffler was emphasized in advertisements with the nickname "The Silent Gray Fellow". 
That began to shift in the 1960s, partially in response to the clean-cut motorcyclist portrayed in Honda's "You meet the nicest people on a Honda" campaign, when Harley-Davidson sought to draw a contrast with Honda by underscoring the more working-class, macho, and even a little anti-social attitude associated with motorcycling's dark side. With the 1971 FX Super Glide, the company embraced, rather than distanced, itself from chopper style, and the counterculture custom Harley scene. Their marketing cultivated the "bad boy" image of biker and motorcycle clubs, and to a point, even outlaw or one-percenter motorcycle
the adoption of certain standard British (i.e., non-Irish) features. The result is a configuration of features that is still unique; in other words, this accent is not simply a wholesale shift towards British English. Most speakers born in the 1980s or later are showing fewer features of this late-twentieth-century mainstream supraregional form and more characteristics aligning to a rapidly spreading new Dublin accent (see more above, under "Non-local Dublin English"). Ireland's supraregional dialect pronounces: as quite open . along a possible spectrum , with innovative [ɑɪ] particularly more common before voiced consonants, notably including . as starting fronter and often more raised than other dialects: . may be , with a backer vowel than in other Irish accents, though still relatively fronted. as . as , almost always separate from , keeping words like war and wore, or horse and hoarse, pronounced distinctly. as . as a diphthong, approaching , as in the mainstream United States, or , as in mainstream England. as higher, fronter, and often rounder . Overview of pronunciation and phonology The following charts list the vowels typical of each Irish English dialect as well as the several distinctive consonants of Irish English. Phonological characteristics of overall Irish English are given as well as categorisations into five major divisions of Hiberno-English: northern Ireland (or Ulster); West & South-West Ireland; local Dublin; new Dublin; and supraregional (southern) Ireland. Features of mainstream non-local Dublin English fall on a range between "local Dublin" and "new Dublin". Pure vowels (monophthongs) The defining monophthongs of Irish English: The following pure vowel sounds are defining characteristics of Irish English: is typically centralised in the mouth and often somewhat more rounded than other standard English varieties, such as Received Pronunciation in England or General American in the United States. There is a partial trap-bath split in most Irish English varieties (cf. Variation in Australian English). There is inconsistency regarding the lot–cloth split and the cot–caught merger; certain Irish English dialects have these phenomena while others do not. The cot–caught merger by definition rules out the presence of the lot–cloth split. Any and many are pronounced to rhyme with nanny, Danny, etc. by very many speakers, i.e. with each of these words pronounced with . All pure vowels of various Hiberno-English dialects: [table comparing each English diaphoneme across Ulster, West & South-West Ireland, local Dublin, new Dublin and supraregional Ireland; example words include add/land/trap, bath/calm/dance, lot/top/wasp, loss/off, all/bought/saw, dress/met/bread, about/syrup/arena, hit/skim/tip, beam/chic/fleet, bus/flood, book/put/should and food/glue/new]. Footnotes: In southside Dublin's once-briefly fashionable "Dublin 4" (or "Dortspeak") accent, the " and broad " set becomes rounded as [ɒː]. In South-West Ireland, before or is raised to .
Due to the local Dublin accent's phenomenon of "vowel breaking", may be realised in this accent as in a closed syllable, and, in the same environment, may be realised as . The vowel is rather open in Ulster accents, uniquely among Irish accents.Other notes:In some highly conservative Irish English varieties, words spelled with ea and pronounced with in RP are pronounced with , for example meat, beat, and leaf. In words like took where the spelling "oo" usually represents , conservative speakers may use . This is most common in local Dublin and the speech of north-east Leinster. Gliding vowels (diphthongs)The defining diphthongs of Hiberno-English:The following gliding vowel (diphthong) sounds are defining characteristics of Irish English: The first element of the diphthong , as in ow or doubt, may move forward in the mouth in the east (namely, Dublin) and supraregionally; however, it may actually move backwards throughout the entire rest of the country. In the north alone, the second element is particularly moved forward, as in Scotland. The first element of the diphthong , as in boy or choice, is slightly or significantly lowered in all geographic regions except the north. The diphthong , as in rain or bay, is most commonly monophthongised to . Furthermore, this often lowers to in words such as gave and came (sounding like "gev" and "kem").All diphthongs of various Hiberno-English dialects:Footnotes: Due to the local Dublin accent's phenomenon of "vowel breaking", may be realised in that accent as in a closed syllable, and, in the same environment, may be realised as . R-coloured vowelsThe defining r-coloured vowels of Hiberno-English:The following r-coloured vowel features are defining characteristics of Hiberno-English: Rhoticity: Every major accent of Hiberno-English pronounces the letter "r" whenever it follows a vowel sound, though this is weaker in the local Dublin accent due to its earlier history of non-rhoticity. Rhoticity is a feature that Hiberno-English shares with Canadian English and General American but not with Received Pronunciation. The distinction between and is almost always preserved, so that, for example, horse and hoarse are not merged in most Irish accents.All r-coloured vowels of various Hiberno-English dialects:Footnotes: In older varieties of the conservative accents, like local Dublin, the "r" sound before a vowel may be pronounced as a tapped , rather than as the typical approximant . Every major accent of Irish English is rhotic (pronounces "r" after a vowel sound). The local Dublin accent is the only one that during an earlier time was non-rhotic, though it usually very lightly rhotic today, with a few minor exceptions. The rhotic consonant in this and most other Irish accents is an approximant . The "r" sound of the mainstream non-local Dublin accent is more precisely a velarised approximant , while the "r" sound of the more recently emerging non-local Dublin (or "new Dublin") accent is more precisely a retroflex approximant . In southside Dublin's once-briefly fashionable "Dublin 4" (or "Dortspeak") accent, is realised as . In non-local Dublin's more recently emerging (or "new Dublin") accent, and may both be realised more rounded as . The mergers have not occurred in local Dublin, West/South-West, and other very conservative and traditional Irish English varieties ranging from the south to the north. 
Whereas the vowels corresponding to historical , and have merged to in most dialects of English, the local Dublin and West/South-West accents retain a two-way distinction: versus . The distribution of these two in these accents does not always align to what their spelling suggests: is used when after a labial consonant (e.g. fern), when spelled as "ur" or "or" (e.g. word), or when spelled as "ir" after an alveolar stop (e.g. dirt); is used in all other situations. However, there are apparent exceptions to these rules; John C. Wells describes prefer and per as falling under the class, despite the vowel in question following a labial. The distribution of versus is listed below in some other example words:certain chirp circle earn earth girl germ heard or herd Hertz irk tern bird dirt first hurts murder nurse turn third or turd urn work world Non-local Dublin, younger, and supraregional Irish accents do feature the full mergers to , as in American English. In rare few local Dublin varieties that are non-rhotic, is either lowered to or backed and raised to . The distinction between and is widely preserved in Ireland, so that, for example, horse and hoarse are not merged in most Irish English dialects; however, they are usually merged in Belfast and new Dublin. In local Dublin, due to the phenomenon of "vowel breaking" may in fact be realised as . ConsonantsThe defining consonants of Hiberno-English:The consonants of Hiberno-English mostly align to the typical English consonant sounds. However, a few Irish English consonants have distinctive, varying qualities. The following consonant features are defining characteristics of Hiberno-English: H-fulness: Unlike most English varieties of England and Wales, which drop the word-initial sound in words like house or happy, Hiberno-English always retains word-initial . Furthermore, Hiberno-English also allows where it is permitted in Irish but excluded in other dialects of English, such as before an unstressed vowel (e.g. Haughey ) and at the end of a word (e.g. McGrath ). The phonemes dental fricatives (as in the) and (as in thin) are pronounced uniquely as stops in most Hiberno-English—either dental or alveolar—. is pronounced as or , depending on specific dialect; and is pronounced as or . In some middle- or upper-class accents, they are realized as the dental stops and as such do not merge with the alveolar stops ; thus, for example, tin () is not a homophone of thin . In older, rural, or working-class accents, such pairs are indeed merged. The phoneme , when appearing at the end of a word or between vowel sounds, is pronounced uniquely in most Hiberno-English; the most common pronunciation is as a "slit fricative". The phoneme is almost always of a "light" or "clear" quality (i.e. not velarised), unlike Received Pronunciation, which uses both a clear and a dark "L" sound, or General American, which pronounces all "L" sounds as dark. Rhoticity: The pronunciation of historical is nearly universal in Irish accents of English. Like with General American (but not Received Pronunciation), this means that the letter "r", if appearing after a vowel sound, is always pronounced (in words such as here, cart, or surf).Unique consonants in various Hiberno-English dialects:Footnotes:In traditional, conservative Ulster English, and are palatalised before a low front vowel. 
Local Dublin also undergoes cluster simplification, so that stop consonant sounds occurring after fricatives or sonorants may be left unpronounced, resulting, for example, in "poun(d)" and "las(t)". Rhoticity: Every major accent of Irish English is strongly rhotic (pronounces "r" after a vowel sound), though to a weaker degree with the local Dublin accent. The accents of local Dublin and some smaller eastern towns like Drogheda were historically non-rhotic and now only very lightly rhotic or variably rhotic, with the rhotic consonant being an alveolar approximant, . In extremely traditional and conservative accents (exemplified, for instance, in the speech of older speakers throughout the country, even in South-West Ireland, such as Mícheál Ó Muircheartaigh and Jackie Healy-Rae), the rhotic consonant, before a vowel sound, can also be an alveolar tap, . The rhotic consonant for the northern Ireland and new Dublin accents is a retroflex approximant, . Dublin's retroflex approximant has no precedent outside of northern Ireland and is a genuine innovation of the 1990s and 2000s. A guttural/uvular is found in north-east Leinster. Otherwise, the rhotic consonant of virtually all other Irish accents is the postalveolar approximant, . The symbol [θ̠] is used here to represent the voiceless alveolar non-sibilant fricative, sometimes known as a "slit fricative", whose articulation is described as being apico-alveolar. Overall, and are being increasingly merged in supraregional Irish English, for example, making wine and whine homophones, as in most varieties of English around the world. Other phonological characteristics of Irish English include that consonant clusters ending in before are distinctive:Wells, 1982, p. 435. is dropped after coronal sonorants and fricatives, e.g. new sounds like noo, and sue like soo. becomes , e.g. dew/due, duke and duty sound like "jew", "jook" and "jooty". becomes , e.g. tube is "choob", tune is "choon" The following show neither dropping nor coalescence: (as in cute), (as in mute), and (as in huge; though the can be dropped in the South-West of Ireland). The naming of the letter H as "haytch" is standard. Due to Gaelic influence, an epenthetic schwa is sometimes inserted, perhaps as a feature of older and less careful speakers, e.g. film and form . Vocabulary Loan words from Irish A number of Irish-language loan words are used in Hiberno-English, particularly in an official state capacity. For example, the head of government is the Taoiseach, the deputy head is the Tánaiste, the parliament is the Oireachtas and its lower house is Dáil Éireann. Less formally, people also use loan words in day-to-day speech, although this has been on the wane in recent decades and among the young. Derived words from Irish Another group of Hiberno-English words are those derived from the Irish language. Some are words in English that have entered into general use, while others are unique to Ireland. These words and phrases are often Anglicised versions of words in Irish or direct translations into English. In the latter case, they often give meaning to a word or phrase that is generally not found in wider English use. Derived words from Old and Middle English Another class of vocabulary found in Hiberno-English are words and phrases common in Old and Middle English, but which have since become obscure or obsolete in the modern English language generally. Hiberno-English has also developed particular meanings for words that are still in common use in English generally. 
Other words In addition to the three groups above, there are also additional words and phrases whose origin is disputed or unknown. While this group may not be unique to Ireland, their usage is not widespread, and could be seen as characteristic of the language in Ireland. Grammar and syntax The syntax of the Irish language is quite different from that of English. Various aspects of Irish syntax have influenced Hiberno-English, though many of these idiosyncrasies are disappearing in suburban areas and among the younger population. The other major influence on Hiberno-English that sets it apart from modern English in general is the retention of words and phrases from Old- and Middle-English. From Irish Reduplication Reduplication is an alleged trait of Hiberno-English strongly associated with Stage Irish and Hollywood films. the Irish ar bith corresponds to English "at all", so the stronger ar chor ar bith gives rise to the form "at all at all". "I've no time at all at all." ar eagla go … (lit. "on fear that …") means "in case …". The variant ar eagla na heagla, (lit. "on fear of fear") implies the circumstances are more unlikely. The corresponding Hiberno-English phrases are "to be sure" and the very rarely used "to be sure to be sure". In this context, these are not, as might be thought, disjuncts meaning "certainly"; they could better be translated "in case" and "just in case". Nowadays normally spoken with conscious levity. "I brought some cash in case I saw a bargain, and my credit card to be sure to be sure." Yes and no Irish has no words that directly translate as "yes" or "no", and instead repeats the verb used in the question, negated if necessary, to answer. Hiberno-English uses "yes" and "no" less frequently than other English dialects as speakers can repeat the verb, positively or negatively, instead of (or in redundant addition to) using "yes" or "no". "Are you coming home soon?" – "I am." "Is your mobile charged?" – "It isn't." This is not limited only to the verb to be: it is also used with to have when used as an auxiliary; and, with other verbs, the verb to do is used. This is most commonly used for intensification, especially in Ulster English. "This is strong stuff, so it is." "We won the game, so we did." Recent past construction Irish indicates recency of an action by adding "after" to the present continuous (a verb ending in "-ing"), a construction known as the "hot news perfect" or "after perfect". The idiom for "I had done X when I did Y" is "I was after doing X when I did Y", modelled on the Irish usage of the compound prepositions , , and : / / . "Why did you hit him?" – "He was after giving me cheek." (he had [just beforehand] been cheeky to me). A similar construction is seen where exclamation is used in describing a recent event: "I'm after hitting him with the car!" "She's after losing five stone in five weeks!" When describing less astonishing or significant events, a structure resembling the German perfect can be seen: "I have the car fixed." "I have my breakfast eaten." This correlates with an analysis of "H1 Irish" proposed by Adger & Mitrovic, in a deliberate parallel to the status of German as a V2 language. Recent past construction has been directly adopted into Newfoundland English, where it is common in both formal and casual register. In rural areas of the Avalon peninsula, where Newfoundland Irish was spoken until the early 20th century, it is the grammatical standard for describing whether or not an action has occurred. 
Reflection for emphasis The reflexive version of pronouns is often used for emphasis or to refer indirectly to a particular person, etc., according to context. Herself, for example, might refer to the speaker's boss or to the woman of the house. Use of herself or himself in this way often indicates that the speaker attributes some degree of arrogance or selfishness to the person in question. Note also the indirectness of this construction relative to, for example, She's coming now. This reflexive pronoun can also be used to describe a partner – "I was with himself last night." or "How's herself doing?" "'Tis herself that's coming now." Is í féin atá ag teacht anois. "Was it all of ye or just yourself?" An sibhse ar fad nó tusa féin a bhí i gceist? Prepositional pronouns There are some language forms that stem from the fact that there is no verb to have in Irish. Instead, possession is indicated in Irish by using the preposition at (in Irish, ag). To be more precise, Irish uses a prepositional pronoun that combines ag "at" and mé "me" to create agam.
In English, the verb "to have" is used, along with a "with me" or "on me" that derives from Tá … agam. This gives rise to the frequent "Do you have the book?" – "I have it with me." "Have you change for the bus on you?" "He will not shut up if he has drink taken." Somebody who can speak a language "has" a language, in which Hiberno-English has borrowed the grammatical form used in Irish. "She does not have Irish." Níl Gaeilge aici. literally "There is no Irish at her". When describing something, many Hiberno-English speakers use the term "in it" where "there" would usually be used. This is due to the Irish word ann (pronounced "oun" or "on") fulfilling both meanings. "Is it yourself that is in it?" An tú féin atá ann? "Is there any milk in it?" An bhfuil bainne ann? Another idiom is this thing or that thing described as "this man here" or "that man there", which also features in Newfoundland English in Canada. "This man here." An fear seo. (cf. the related anseo = here) "That man there." An fear sin. (cf. the related ansin = there) Conditionals have a greater presence in Hiberno-English due to the tendency to replace the simple present tense with the conditional (would) and the simple past tense with the conditional perfect (would have). "John asked me would I buy a loaf of bread." (John asked me to buy a loaf of bread.) "How do you know him? We would have been in school together." (We were in school together.)Bring and take: Irish use of these words differs from that of British English because it follows the Irish grammar for beir and tóg. English usage is determined by direction; a person determines Irish usage. So, in English, one takes "from here to there", and brings it "to here from there". In Irish, a person takes only when accepting a transfer of possession of the object from someone elseand a person brings at all other times, irrespective of direction (to or from). Don't forget to bring your umbrella with you when you leave. (To a child) Hold my hand: I don't want someone to take you. To be The Irish equivalent of the verb "to be" has two present tenses, one (the present tense proper or "aimsir láithreach") for cases which are generally true or are true at the time of speaking and the other (the habitual present or "aimsir ghnáthláithreach") for repeated actions. Thus, "you are [now, or generally]" is tá tú, but "you are [repeatedly]" is bíonn tú. Both forms are used with the verbal noun (equivalent to the English present participle) to create compound tenses. This is similar to the distinction between ser and estar in Spanish or the use of the 'habitual be' in African-American Vernacular English. The corresponding usage in English is frequently found in rural areas, especially Mayo/Sligo in the west of Ireland and Wexford in the south-east, Inner-City Dublin and Cork city along with border areas of the North and Republic. In this form, the verb "to be" in English is similar to its use in Irish, with a "does be/do be" (or "bees", although less frequently) construction to indicate the continuous, or habitual, present: "He does be working every day." Bíonn sé ag obair gach lá. "They do be talking on their mobiles a lot." Bíonn siad ag caint go minic ar a bhfóin póca. "He does be doing a lot of work at school." Bíonn sé ag déanamh go leor oibre ar scoil. "It's him I do be thinking of." Is air a bhíonn mé ag smaoineamh. This construction also surfaces in African American Vernacular English, as the famous habitual be. 
From Old and Middle English In old-fashioned usage, "it is" can be freely abbreviated ’tis, even as a standalone sentence. This also allows the double contraction ’tisn’t, for "it is not". Irish has separate forms for the second person singular (tú) and the second person plural (sibh). Mirroring Irish, and almost every other Indo-European language, the plural you is also distinguished from the singular in Hiberno-English, normally by use of the otherwise archaic English word ye ; the word yous (sometimes written as youse) also occurs, but primarily only in Dublin and across Ulster. In addition, in some areas in Leinster, north Connacht and parts of Ulster, the hybrid word ye-s, pronounced "yiz", may be used. The pronunciation differs with that of the northwestern being and the Leinster pronunciation being . "Did ye all go to see it?" Ar imigh sibh go léir chun é a fheicint? "None of youse have a clue!" Níl ciall/leid ar bith agaibh! "Are ye not finished yet?" Nach bhfuil sibh críochnaithe fós? "Yis are after destroying it!" Tá sibh tar éis é a scriosadh! The word ye, yis or yous, otherwise archaic, is still used in place of "you" for the second-person plural. Ye'r, Yisser or Yousser are the possessive forms, e.g. "Where are yous going?" The verb mitch is very common in Ireland, indicating being truant from school. This word appears in Shakespeare (though he wrote in Early Modern English rather than Middle English), but is seldom heard these days in British English, although pockets of usage persist in some areas (notably South Wales, Devon, and Cornwall). In parts of Connacht and Ulster the mitch is often replaced by the verb scheme, while in Dublin it is often replaced by "on the hop/bounce". Another usage familiar from Shakespeare is the inclusion of the second person pronoun after the imperative form of a verb, as in "Wife, go you to her ere you go to bed" (Romeo and Juliet, Act III, Scene IV). This is still common in Ulster: "Get youse your homework done or you're no goin' out!" In Munster, you will still hear children being told, "Up to bed, let ye" . For influence from Scotland, see Ulster Scots and Ulster English. Other grammatical influencesNow is often used at the end of sentences or phrases as a semantically empty word, completing an utterance without contributing any apparent meaning. Examples include "Bye now" (= "Goodbye"), "There you go now" (when giving someone something), "Ah now!" (expressing dismay), "Hold on now" (= "wait a minute"), "Now then" as a mild attention-getter, etc. This usage is universal among English dialects, but occurs more frequently in Hiberno-English. It is also used in the manner of the Italian 'prego' or German 'bitte', for example, a barman might say "Now, Sir." when delivering drinks.So is often used for emphasis ("I can speak Irish, so I can"), or it may be tacked onto the end of a sentence to indicate agreement, where "then" would often be used in Standard English ("Bye so", "Let's go so", "That's fine so", "We'll do that so"). The word is also used to contradict a negative statement ("You're not pushing hard enough" – "I am so!"). (This contradiction of a negative is also seen in American English, though not as often as "I am too", or "Yes, I am".) The practice of indicating emphasis with so and including reduplicating the sentence's subject pronoun
analysis. If the group is neither abelian nor compact, no general satisfactory theory is currently known ("satisfactory" means at least as strong as the Plancherel theorem). However, many specific cases have been analyzed, for example SLn. In this case, representations in infinite dimensions play a crucial role. Other branches Study of the eigenvalues and eigenvectors of the Laplacian on domains, manifolds, and (to a lesser extent) graphs is also considered a branch of harmonic analysis. See e.g., hearing the shape of a drum. Harmonic analysis on Euclidean spaces deals with properties of the Fourier transform on Rn that have no analog on general groups. For example, the fact that the Fourier transform is rotation-invariant. Decomposing the Fourier transform into its radial and spherical components leads to topics such as Bessel functions and spherical harmonics. Harmonic analysis on tube domains is concerned with generalizing properties of Hardy spaces to higher dimensions. Applied harmonic analysis Many applications of harmonic analysis in science and engineering begin with the idea or hypothesis that a phenomenon or signal is composed of a sum of individual oscillatory components. Ocean tides and vibrating strings are common and simple examples. The theoretical approach is often to try to describe the system by a differential equation or system of equations to predict the essential features, including the amplitude, frequency, and phases of the oscillatory components. The specific equations depend on the field, but theories generally try to select equations that represent major principles that are applicable. The experimental approach is usually to acquire data that accurately quantifies the phenomenon. For example, in a study of tides, the experimentalist would acquire samples of water depth as a function of time at closely enough spaced intervals to see each oscillation and over a long enough duration that multiple oscillatory periods are likely included. In a study on vibrating strings, it is common for the experimentalist to acquire a sound waveform sampled at a rate at least twice that of the highest frequency expected and for a duration many times the period of the lowest frequency expected. For example, the top signal at the right is a sound waveform of a bass guitar playing an open string corresponding to an A note with a fundamental frequency of 55 Hz. The waveform appears oscillatory, but it is more complex than a simple sine wave, indicating the presence of additional waves. The different wave components contributing to the sound can be revealed by applying a mathematical analysis technique known as the Fourier transform, the result of which is shown in the lower figure. Note that there is a prominent peak at 55 Hz, but that there are other peaks at 110 Hz, 165 Hz, and at other frequencies corresponding to integer multiples of 55 Hz. In this case, 55 Hz is identified as the fundamental frequency of the string vibration, and the integer multiples are known as harmonics. See also Convergence of Fourier series Fourier analysis for computing periodicity in evenly spaced data Harmonic
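The bass-string example above can be made concrete with a short sketch that is not part of the source: a synthetic two-second signal, built from a 55 Hz fundamental and weaker, arbitrarily chosen components at 110 Hz and 165 Hz, stands in for the recorded waveform, and the discrete Fourier transform (here via NumPy) recovers the peaks the text describes. The 4,000 Hz sampling rate follows the rule of thumb quoted above, well over twice the highest frequency of interest.

```python
# A minimal sketch (not from the source) of the bass-string analysis described
# above, using a synthetic waveform in place of a recorded bass guitar note.
import numpy as np

fs = 4000.0                      # sampling rate in Hz, well above twice the highest partial
t = np.arange(0, 2.0, 1.0 / fs)  # two seconds of samples

# Fundamental at 55 Hz plus weaker harmonics; the amplitudes are illustrative only.
signal = (1.0 * np.sin(2 * np.pi * 55 * t)
          + 0.6 * np.sin(2 * np.pi * 110 * t)
          + 0.4 * np.sin(2 * np.pi * 165 * t))

spectrum = np.abs(np.fft.rfft(signal))          # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)  # matching frequency axis in Hz

# The three strongest bins are the fundamental and its harmonics.
peaks = np.sort(freqs[np.argsort(spectrum)[-3:]])
print(peaks)                                    # -> [ 55. 110. 165.]
```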
RBI for each runner that scores, including himself. Likewise, the pitcher is recorded as having given up a hit and a run, with additional runs charged for each runner that scores other than the batter. Home runs are among the most popular aspects of baseball and, as a result, prolific home run hitters are usually the most popular among fans and consequently the highest paid by teams—hence the old saying, "Home run hitters drive Cadillacs, and singles hitters drive Fords" (coined, circa 1948, by veteran pitcher Fritz Ostermueller, by way of mentoring his young teammate, Ralph Kiner). Nicknames for a home run include "homer", "round tripper", "four-bagger", "big fly", "dinger", "long ball", "jack", "shot"/"moon shot", "bomb", and "blast", while a player hitting a home run may be said to have "gone deep" or "gone yard". Types of home runs Out of the park In modern times a home run is most often scored when the ball is hit over the outfield wall between the foul poles (in fair territory) before it touches the ground (in flight), and without being caught or deflected back onto the field by a fielder. A batted ball is also a home run if it touches either a foul pole or its attached screen before touching the ground, as the foul poles are by definition in fair territory. Additionally, many major-league ballparks have ground rules stating that a batted ball in flight that strikes a specified location or fixed object is a home run; this usually applies to objects that are beyond the outfield wall but are located such that it may be difficult for an umpire to judge. In professional baseball, a batted ball that goes over the outfield wall after touching the ground (i.e. a ball that bounces over the outfield wall) becomes an automatic double. This is colloquially referred to as a "ground rule double" even though it is uniform across all of Major League Baseball, per MLB rules 5.05(a)(6) through 5.05(a)(9). A fielder is allowed to reach over the wall to attempt to catch the ball as long as his feet are on or over the field during the attempt, and if the fielder successfully catches the ball while it is in flight the batter is out, even if the ball had already passed the vertical plane of the wall. However, since the fielder is not part of the field, a ball that bounces off a fielder (including his glove) and over the wall without touching the ground is still a home run. A fielder may not deliberately throw his glove, cap, or any other equipment or apparel to stop or deflect a fair ball, and an umpire may award a home run to the batter if a fielder does so on a ball that, in the umpire's judgment, would have otherwise been a home run (this is rare in modern professional baseball). A home run accomplished in any of the above manners is an automatic home run. The ball is dead, even if it rebounds back onto the field (e.g., from striking a foul pole), and the batter and any preceding runners cannot be put out at any time while running the bases. However, if one or more runners fail to touch a base or one runner passes another before reaching home plate, that runner or runners can be called out on appeal, though in the case of not touching a base a runner can go back and touch it if doing so won't cause them to be passed by another preceding runner and they have not yet touched the next base (or home plate in the case of missing third base). This stipulation is in Approved Ruling (2) of Rule 7.10(b). 
Inside-the-park home run An inside-the-park home run occurs when a batter hits the ball into play and is able to circle the bases before the fielders can put him out. Unlike with an outside-the-park home run, the batter-runner and all preceding runners are liable to be put out by the defensive team at any time while running the bases. This can only happen if the ball does not leave the ballfield. In the early days of baseball, outfields were relatively much more spacious, reducing the likelihood of an over-the-fence home run, while increasing the likelihood of an inside-the-park home run, as a ball getting past an outfielder had more distance that it could roll before a fielder could track it down. Modern outfields are much less spacious and more uniformly designed than in the game's early days. Therefore, inside-the-park home runs are now rare. They usually occur when a fast runner hits the ball deep into the outfield and the ball bounces in an unexpected direction away from the nearest outfielder (e.g., of a divot in the grass or off the outfield wall), the nearest outfielder is injured on the play and cannot get to the ball, or an outfielder misjudges the flight of the ball in a way that he cannot quickly recover from the mistake (e.g., by diving and missing). The speed of the runner is crucial as even triples are relatively rare in most modern ballparks. If any defensive play on an inside-the-park home run is labeled an error by the official scorer, a home run is not scored. Instead, it is scored as a single, double, or triple, and the batter-runner and any applicable preceding runners are said to have taken all additional bases on error. All runs scored on such a play, however, still count. An example of an unexpected bounce occurred during the 2007 Major League Baseball All-Star Game at AT&T Park in San Francisco on July 10, 2007. Ichiro Suzuki of the American League team hit a fly ball that caromed off the right-center field wall in the opposite direction from where National League right fielder Ken Griffey Jr. was expecting it to go. By the time the ball was relayed, Ichiro had already crossed the plate standing up. This was the first inside-the-park home run in All-Star Game history, and led to Suzuki being named the game's Most Valuable Player. Number of runs batted in Home runs are often characterized by the number of runners on base at the time. A home run hit with the bases empty is seldom called a "one-run homer", but rather a solo home run, solo homer, or "solo shot". With one runner on base, two runs are scored (the baserunner and the batter) and thus the home run is often called a two-run homer or two-run shot. Similarly, a home run with two runners on base is a three-run homer or three-run shot. The term "four-run homer" is seldom used; instead, it is nearly always called a "grand slam". Hitting a grand slam is the best possible result for the batter's turn at bat and the worst possible result for the pitcher and his team. Grand slam A grand slam occurs when the bases are "loaded" (that is, there are base runners standing at first, second, and third base) and the batter hits a home run. According to The Dickson Baseball Dictionary, the term originated in the card game of contract bridge. An inside-the-park grand slam is a grand slam that is also an inside-the-park home run, a home run without the ball leaving the field, and it is very rare, due to the relative rarity of loading the bases along with the significant rarity (nowadays) of inside-the-park home runs. 
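The run-scoring arithmetic and terminology above reduce to a simple rule: every runner on base scores, plus the batter, and the batter is credited with one RBI for each of those runs. The sketch below is illustrative only; the function name and structure are invented for the example.

```python
# A minimal sketch (not from the source) of home-run terminology by runners on base.
def home_run_result(runners_on_base: int) -> tuple[int, str]:
    """Return (runs scored, conventional name) for a home run.

    Every runner on base scores, plus the batter himself; the batter is
    credited with one RBI for each of those runs.
    """
    if not 0 <= runners_on_base <= 3:
        raise ValueError("a team can have at most three runners on base")
    runs = runners_on_base + 1
    names = {1: "solo home run", 2: "two-run homer",
             3: "three-run homer", 4: "grand slam"}
    return runs, names[runs]

for on_base in range(4):
    print(on_base, home_run_result(on_base))
# 0 (1, 'solo home run')
# 1 (2, 'two-run homer')
# 2 (3, 'three-run homer')
# 3 (4, 'grand slam')
```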
On July 25, 1956, Roberto Clemente became the only MLB player to have ever scored a walk-off inside-the-park grand slam in a 9–8 Pittsburgh Pirates win over the Chicago Cubs, at Forbes Field. On April 23, 1999, Fernando Tatís made history by hitting two grand slams in one inning, both against Chan Ho Park of the Los Angeles Dodgers. With this feat, Tatís also set a Major League record with 8 RBI in one inning. On July 29, 2003, against the Texas Rangers, Bill Mueller of the Boston Red Sox became the only player in major league history to hit two grand slams in one game from opposite sides of the plate; he hit three home runs in that game, and his two grand slams were in consecutive at-bats. On August 25, 2011, the New York Yankees became the first team to hit three grand slams in one game vs the Oakland A's. The Yankees eventually won the game 22–9, after trailing 7–1. Specific situation home runs These types of home runs are characterized by the specific game situation in which they occur, and can theoretically occur on either an outside-the-park or inside-the-park home run. Walk-off home run A walk-off home run is a home run hit by the home team in the bottom of the ninth inning, any extra inning, or other scheduled final inning, which gives the home team the lead and thereby ends the game. The term is attributed to Hall of Fame relief pitcher Dennis Eckersley, so named because after the run is scored, the losing team has to "walk off" the field. Two World Series have ended via the "walk-off" home run. The first was the 1960 World Series when Bill Mazeroski of the Pittsburgh Pirates hit a ninth inning solo home run in the seventh game of the series off New York Yankees pitcher Ralph Terry to give the Pirates the World Championship. The second time was the 1993 World Series when Joe Carter of the Toronto Blue Jays hit a ninth inning three-run home run off Philadelphia Phillies pitcher Mitch Williams in Game 6 of the series, to help the Toronto Blue Jays capture their second World Series Championship in a row. Such a home run can also be called a "sudden death" or "sudden victory" home run. That usage has lessened as "walk-off home run" has gained favor. Along with Mazeroski's 1960 shot, the most famous walk-off or sudden-death homer would probably be the "Shot Heard 'Round the World" hit by Bobby Thomson to win the 1951 National League pennant for the New York Giants, along with many other game-ending home runs that famously ended some of the most important and suspenseful baseball games. A walk-off home run over the fence is an exception to baseball's one-run rule. Normally if the home team is tied or behind in the ninth or extra innings, the game ends as soon as the home team scores enough run to achieve a lead. If the home team has two outs in the inning, and the game is tied, the game will officially end either the moment the batter successfully reaches first base or the moment the runner touches home plate—whichever happens last. However, this is superseded by the "ground rule", which provides automatic doubles (when a ball-in-play hits the ground first then leaves the playing field) and home runs (when a ball-in-play leaves the playing field without ever touching the ground). In the latter case, all base runners including the batter are allowed to cross the plate. Leadoff home run A leadoff home run is a home run hit by the first batter of a team, the leadoff hitter of the first inning of the game. In MLB, Rickey Henderson holds the career record with 81 lead-off home runs. 
Craig Biggio holds the National League career record with 53, which ranks third overall behind Henderson and Alfonso Soriano, who hit 54. As of 2018, Ian Kinsler held the career record among active players, with 48 leadoff home runs, which also ranked him fourth all-time. In 1996, Brady Anderson set a Major League record by hitting a leadoff home run in four consecutive games. Back-to-back When two consecutive batters each hit a home run, this is described as back-to-back home runs. It is still considered back-to-back even if the two batters hit their home runs off different pitchers. A third batter hitting a home run is commonly referred to as back-to-back-to-back. Four home runs in a row by consecutive batters have occurred only ten times in the history of Major League Baseball. Following convention, this is called back-to-back-to-back-to-back. The most recent occurrence was on August 16, 2020, when the Chicago White Sox hit four in a row against the St. Louis Cardinals. Yoan Moncada, Yasmani Grandal, José Abreu and Eloy Jiménez hit consecutive home runs during the fifth inning off relief pitcher Roel Ramírez, who was making his major league debut. On June 9, 2019, the Washington Nationals hit four in a row against the San Diego Padres in Petco Park as Howie Kendrick, Trea Turner, Adam Eaton and Anthony Rendon homered off pitcher Craig Stammen. Stammen became the fifth pitcher to surrender back-to-back-to-back-to-back home runs, following Paul Foytack on July 31, 1963, Chase Wright on April 22, 2007, Dave Bush on
runs" or "small ball". The home run's place in baseball changed dramatically when the live-ball era began after World War I. First, the materials and manufacturing processes improved significantly, making the now-mass-produced, cork-centered ball somewhat more lively. Batters such as Babe Ruth and Rogers Hornsby took full advantage of rules changes that were instituted during the 1920s, particularly prohibition of the spitball, and the requirement that balls be replaced when worn or dirty. These changes resulted in the baseball being easier to see and hit, and easier to hit out of the park. Meanwhile, as the game's popularity boomed, more outfield seating was built, shrinking the size of the outfield and increasing the chances of a long fly ball resulting in a home run. The teams with the sluggers, typified by the New York Yankees, became the championship teams, and other teams had to change their focus from the "inside game" to the "power game" in order to keep up. Before , Major League Baseball considered a fair ball that bounced over an outfield fence to be a home run. The rule was changed to require the ball to clear the fence on the fly, and balls that reached the seats on a bounce became automatic doubles (often referred to as a ground rule double). The last "bounce" home run in MLB was hit by Al López of the Brooklyn Robins on September 12, 1930, at Ebbets Field. A carryover of the old rule is that if a player deflects a ball over the outfield fence in fair territory without it touching the ground, it is a home run, per MLB rule 5.05(a)(9). Additionally, MLB rule 5.05(a)(5) still stipulates that a ball hit over a fence in fair territory that is less that from home plate "shall entitle the batter to advance to second base only", as some early ballparks had short dimensions. Also until circa 1931, the ball had to go not only over the fence in fair territory, but it had to land in the bleachers in fair territory or still be visibly fair when disappearing from view. The rule stipulated "fair when last seen" by the umpires. Photos from that era in ballparks, such as the Polo Grounds and Yankee Stadium, show ropes strung from the foul poles to the back of the bleachers, or a second "foul pole" at the back of the bleachers, in a straight line with the foul line, as a visual aid for the umpire. Ballparks still use a visual aid much like the ropes; a net or screen attached to the foul poles on the fair side has replaced ropes. As with American football, where a touchdown once required a literal "touch down" of the ball in the end zone but now only requires the "breaking of the [vertical] plane" of the goal line, in baseball the ball need only "break the plane" of the fence in fair territory (unless the ball is caught by a player who is in play, in which case the batter is called out). Babe Ruth's 60th home run in 1927 was somewhat controversial, because it landed barely in fair territory in the stands down the right field line. Ruth lost a number of home runs in his career due to the when-last-seen rule. Bill Jenkinson, in The Year Babe Ruth Hit 104 Home Runs, estimates that Ruth lost at least 50 and as many as 78 in his career due to this rule. Further, the rules once stipulated that an over-the-fence home run in a sudden-victory situation would only count for as many bases as was necessary to "force" the winning run home. 
For example, if a team trailed by two runs with the bases loaded, and the batter hit a fair ball over the fence, it only counted as a triple, because the runner immediately ahead of him had technically already scored the game-winning run. That rule was changed in the 1920s as home runs became increasingly frequent and popular. Babe Ruth's career total of 714 would have been one higher had that rule not been in effect in the early part of his career. Records Major League Baseball keeps running totals of all-time home runs by team, including teams no longer active (prior to 1900), as well as by individual players. Gary Sheffield hit the 250,000th home run in MLB history with a grand slam on September 8, 2008. Sheffield had hit MLB's 249,999th home run against Gio González in his previous at-bat. The all-time, verified professional baseball record for career home runs by one player, excluding the U.S. Negro leagues during the era of segregation, is held by Sadaharu Oh. Oh spent his entire career playing for the Yomiuri Giants in Japan's Nippon Professional Baseball, later managing the Giants, the Fukuoka SoftBank Hawks and the 2006 World Baseball Classic Japanese team. Oh holds the all-time home run world record, having hit 868 home runs in his career. In Major League Baseball, the career record is 762, held by Barry Bonds, who broke Hank Aaron's record on August 7, 2007, when he hit his 756th home run at AT&T Park off pitcher Mike Bacsik. Only eight other major league players have hit as many as 600: Hank Aaron (755), Babe Ruth (714), Alex Rodriguez (696), Albert Pujols (679), Willie Mays (660), Ken Griffey Jr. (630), Jim Thome (612), and Sammy Sosa (609); Pujols holds the record among active MLB players. The single-season record is 73, set by Barry Bonds in 2001. Other notable single-season marks include Babe Ruth's 60 in 1927, Roger Maris's 61 in 1961, Sammy Sosa's 66 in 1998, and Mark McGwire's 70 in 1998. Negro league slugger Josh Gibson's Baseball Hall of Fame plaque says he hit "almost 800" home runs in his career. The Guinness Book of World Records lists Gibson's lifetime home run total at 800. Ken Burns' award-winning series, Baseball, states that his actual total may have been as high as 950. Gibson's true total is not known, in part due to inconsistent record keeping in the Negro leagues. The 1993 edition of the MacMillan Baseball Encyclopedia attempted to compile a set of Negro league records, and subsequent work has expanded on that effort. Those records demonstrate that Gibson and Ruth were of comparable power. The 1993 book had Gibson hitting 146 home runs in the 501 "official" Negro league games its compilers were able to account for in his 17-year career, about one homer every 3.4 games. Babe Ruth, in 22 seasons (several of them in the dead-ball era), hit 714 in 2503 games, or one homer every 3.5 games. The large gap in the game totals reflects the fact that Negro league clubs played far fewer league games and many more "barnstorming" or exhibition games during the course of a season than did the major league clubs of that era.
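The per-game rates quoted above follow directly from the totals given in the text; a minimal Python check (using only the figures stated here, nothing external) reproduces them:

# Games played divided by home runs gives the "one homer every N games" rate.
gibson_games, gibson_hr = 501, 146   # 1993 MacMillan count of "official" Negro league games
ruth_games, ruth_hr = 2503, 714      # Ruth's major league totals

print(round(gibson_games / gibson_hr, 1))  # 3.4 -> about one homer every 3.4 games
print(round(ruth_games / ruth_hr, 1))      # 3.5 -> about one homer every 3.5 games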
Other legendary home run hitters include Jimmie Foxx, Mel Ott, Ted Williams, Mickey Mantle (who on September 10, 1960, mythically hit "the longest home run ever" at an estimated distance of , although this was measured after the ball stopped rolling), Reggie Jackson, Harmon Killebrew, Ernie Banks, Mike Schmidt, Dave Kingman, Sammy Sosa (who hit 60 or more home runs in a season three times), Ken Griffey Jr. and Eddie Mathews. In 1987, Joey Meyer of the minor league Denver Zephyrs hit the longest verifiable home run in professional baseball history. The home run was measured at a distance of and was hit inside Denver's Mile High Stadium. On May 6, 1964, Chicago White Sox outfielder Dave Nicholson hit a home run officially measured at 573 feet that either bounced atop the left-field roof of Comiskey Park or entirely cleared it. Major League Baseball's longest verifiable home run distance is about , by Babe Ruth, to straightaway center field at Tiger Stadium (then called Navin Field and before the double-deck), which landed nearly across the intersection of Trumbull and Cherry. The location of where Hank Aaron's record 755th home run landed has been monumented in Milwaukee. The spot sits outside American Family Field, where the Milwaukee Brewers currently play. Similarly, the point where Aaron's 715th homer landed, upon breaking Ruth's career record in 1974, is marked in the Turner Field parking lot. A red-painted seat in Fenway Park marks the landing place of the 502-ft home run Ted Williams hit in 1946, the longest measured homer in Fenway's history; a red stadium seat mounted on the wall of the Mall of America in Bloomington, Minnesota, marks the landing spot of Harmon Killebrew's record 520-foot shot in old Metropolitan Stadium. May 2019 saw 1,135 MLB home runs, the highest ever number of home runs in a single month in Major League Baseball history. During this month, 44.5% of all runs came during a homer, breaking the previous record of 42.3%. Instant replay Replays "to get the call right" have been used extremely sporadically in the past, but the use of instant replay to determine "boundary calls"—home runs and foul balls—was not officially allowed until 2008. In a game on May 31, 1999, involving the St. Louis Cardinals and Florida Marlins, a hit by Cliff Floyd of the Marlins was initially ruled a double, then a home run, then was changed back to a double when umpire Frank Pulli decided to review video of the play. The Marlins protested that video replay was not allowed, but while the National League office agreed that replay was not to be used in future games, it declined the protest on the grounds it was a judgment call, and the play stood. In November 2007, the general managers of Major League Baseball voted in favor of implementing instant replay reviews on boundary home run calls. The proposal limited the use of instant replay to determining whether a boundary/home run call is: A fair (home run) or foul ball A live ball (ball hit a fence and rebounded onto the field), ground rule double (ball hit a fence before leaving the field), or home run (ball hit some object beyond the fence while in flight) Spectator interference or home run (spectator touched the ball after it broke the plane of the fence). On August 28, 2008, instant replay review became available in MLB for reviewing calls in accordance with the above proposal. It was first utilized on September 3, 2008, in a game between the New York Yankees and the Tampa Bay Rays at Tropicana Field. 
Alex Rodriguez of the Yankees hit what appeared to be a home run, but the ball hit a catwalk behind the foul pole. It was at first called a home run, until Tampa Bay manager Joe Maddon argued the call, and the umpires decided to review the play. After 2 minutes and 15 seconds,
Archaeology The excavators of the site have proposed the following chronology of Harappa's occupation:
Ravi Aspect of the Hakra phase, c. 3300 – 2800 BC.
Kot Dijian (Early Harappan) phase, c. 2800 – 2600 BC.
Harappan Phase, c. 2600 – 1900 BC.
Transitional Phase, c. 1900 – 1800 BC.
Late Harappan Phase, c. 1800 – 1300 BC.
By far the most exquisite and obscure artefacts unearthed to date are the small, square steatite (soapstone) seals engraved with human or animal motifs. A large number of seals have been found at such sites as Mohenjo-Daro and Harappa. Many bear pictographic inscriptions generally thought to be a form of writing or script. Despite the efforts of philologists from all parts of the world, and despite the use of modern cryptographic analysis, the signs remain undeciphered. It is also unknown if they reflect proto-Dravidian or other non-Vedic language(s). The ascribing of Indus Valley Civilisation iconography and epigraphy to historically known cultures is extremely problematic, in part due to the rather tenuous archaeological evidence for such claims, as well as the projection of modern South Asian political concerns onto the archaeological record of the area. This is especially evident in the radically varying interpretations of Harappan material culture as seen by both Pakistan- and India-based scholars. In February 2006 a school teacher in the village of Sembian-Kandiyur in Tamil Nadu discovered a stone celt (tool) with an inscription estimated to be up to 3,500 years old. Indian epigraphist Iravatham Mahadevan postulated that the four signs were in the Indus script and called the find "the greatest archaeological discovery of a century in Tamil Nadu". Based on this evidence he went on to suggest that the language used in the Indus Valley was of Dravidian origin. However, the absence of a Bronze Age in South India, contrasted with the knowledge of bronze-making techniques in the Indus Valley cultures, calls into question the validity of this hypothesis. The late Harappan period extended over areas such as Daimabad in Maharashtra and the Badakshan region of Afghanistan. The area covered by this civilisation would have been very large, spanning a distance of around .
the west have also been discovered and studied. Although the archaeological site at Harappa was damaged in 1857 when engineers constructing the Lahore-Multan railroad used brick from the Harappa ruins for track ballast, an abundance of artefacts has nevertheless been found. Because of falling sea levels, certain regions in the late Harappan period were abandoned. Towards the end, the Harappan civilisation lost features such as writing and hydraulic engineering. As a result, settlement in the Ganges Valley gained prominence and Ganges cities developed. Culture and economy The Indus Valley civilisation was basically an urban culture sustained by surplus agricultural production and commerce, the latter including trade with Elam and Sumer in southern Mesopotamia. Both Mohenjo-Daro and Harappa are generally characterised as having "differentiated living quarters, flat-roofed brick houses, and fortified administrative or religious centers." Although such similarities have given rise to arguments for the existence of a standardised system of urban layout and planning, the similarities are largely due to the presence of a semi-orthogonal type of civic layout, and a comparison of the layouts of Mohenjo-Daro and Harappa shows that they are, in fact, arranged in quite a dissimilar fashion. The weights and measures of the Indus Valley Civilisation, on the other hand, were highly standardised and conform to a set scale of gradations. Distinctive seals were used, among other applications, perhaps for the identification of property and shipment of goods. Although copper and bronze were in use, iron was not yet employed. "Cotton was woven and dyed for clothing; wheat, rice, and a variety of vegetables and fruits were cultivated; and a number of animals, including the humped bull, was domesticated," as well as "fowl for fighting". Wheel-made pottery—some of it adorned with animal and geometric motifs—has been found in profusion at all the major Indus sites. A centralised administration for each city, though not the whole civilisation, has been inferred from the revealed cultural uniformity; however, it remains uncertain whether authority lay with a commercial oligarchy. Harappans had many trade routes along the Indus River that went as far as the Persian Gulf, Mesopotamia, and Egypt. Some of the most valuable things traded were carnelian and lapis lazuli. What is clear is that Harappan society was not entirely peaceful, with the human skeletal remains demonstrating some of the highest rates of injury (15.5%) found in South Asian prehistory. Paleopathological analysis demonstrated that leprosy and tuberculosis were present at Harappa, with the highest prevalence of both disease and trauma present in the skeletons from Area G (an ossuary located south-east of the city walls). Furthermore, rates of craniofacial trauma and infection increased through time, demonstrating that the civilisation collapsed amid illness and injury. The bioarchaeologists who examined the remains have suggested that the combined evidence for differences in mortuary treatment and epidemiology indicates that some individuals and communities at Harappa were excluded from access to basic resources like health and safety. Trade The Harappans traded with ancient Mesopotamia, especially Elam, among other areas. Cotton textiles and agricultural products were the primary trading objects. The Harappan merchants also had procurement colonies in Mesopotamia, which served as trading centres.
has been imitated in English, notably by Alfred Tennyson, Swinburne, and Robert Frost, cf. “For Once Then Something.” Contemporary American poets Annie Finch (“Lucid Waking”) and Patricia Smith (“The Reemergence of the Noose”) have published recent examples. Poets wanting to capture the hendecasyllabic rhythm in English have simply transposed the pattern into its accentual-syllabic equivalent: – ⏑ |– ⏑ |– ⏑ ⏑ |– ⏑ |– ⏑ |, or trochee/trochee/dactyl/trochee/trochee, so that the long/short pattern becomes a stress/unstress pattern. Tennyson, however, maintained the quantitative features of the metre: O you chorus of indolent reviewers, Irresponsible, indolent reviewers, Look, I come to the test, a tiny poem All composed in a metre of Catullus... (“Hendecasyllabics”) In Italian poetry The hendecasyllable () is the principal metre in Italian poetry. Its defining feature is a constant stress on the tenth syllable, so that the number of syllables in the verse may vary, equaling eleven in the usual case where the final word is stressed on the penultimate syllable. The verse also has a stress preceding the caesura, on either the fourth or sixth syllable. The first case is called endecasillabo a minore, or lesser hendecasyllable, and has the first hemistich equivalent to a quinario; the second is called endecasillabo a maiore, or greater hendecasyllable, and has a settenario as the first hemistich. There is a strong tendency for hendecasyllabic lines to end with feminine rhymes (causing the total number of syllables to be eleven, hence the name), but ten-syllable lines ("Ciò che 'n grembo a Benaco star non può") and twelve-syllable lines ("Ergasto mio, perché solingo e tacito") are encountered as well. Lines of ten or twelve syllables are more common in rhymed verse; versi sciolti, which rely more heavily on a pleasant rhythm for effect, tend toward a stricter eleven-syllable format. As a novelty, lines longer than twelve syllables can be created by the use of certain verb forms and affixed enclitic pronouns ("Ottima è l'acqua; ma le piante abbeverinosene."). Additional accents beyond the two mandatory ones provide rhythmic variation and allow the poet to express thematic effects. A line in which accents fall consistently on even-numbered syllables ("Al còr gentìl rempàira sèmpre amóre") is called iambic (giambico) and may be a greater or lesser hendecasyllable. This line is the simplest, commonest and most musical but may become repetitive, especially in longer works. Lesser hendecasyllables often have an accent on the seventh syllable ("fàtta di giòco in figùra d'amóre"). Such a line is called dactylic (dattilico) and its less pronounced rhythm is considered particularly appropriate for representing dialogue. Another kind of greater hendecasyllable has an accent on the third syllable ("Se Mercé fosse amìca a' miei disìri") and is known as anapestic (anapestico). This sort of line has a crescendo effect and gives the poem a sense of speed and fluidity. It is considered improper for the lesser hendecasyllable to use a word accented on its antepenultimate syllable (parola sdrucciola) for its mid-line stress. A line like "Più non sfavìllano quegli òcchi néri", which delays the caesura until after the sixth syllable, is not considered a valid hendecasylable. Most classical Italian poems are composed in hendecasyllables, including the major works of Dante, Francesco Petrarca, Ludovico Ariosto, and Torquato Tasso. 
The rhyme systems used include terza rima, ottava, sonnet and canzone, and some verse forms use a mixture of hendecasyllables and shorter lines. From the early 16th century onward, hendecasyllables are often used without a strict system, with few or no rhymes, both in poetry and in drama. This is known as verso sciolto. An early example is Le Api ("the bees") by Giovanni di Bernardo Rucellai, written around 1517 and published in 1525, which begins: Mentr'era per cantare i vostri doni Con altre rime, o Verginette caste, Vaghe Angelette delle erbose rive, Preso dal sonno, in sul spuntar dell'Alba M'apparve un coro della vostra gente, E dalla lingua, onde s'accoglie il mele, Sciolsono in chiara voce este parole: O spirto amici, che dopo mill'anni, E cinque cento, rinovar ti piace E
raids began on Scottish shores towards the end of the 8th century, and the Hebrides came under Norse control and settlement during the ensuing decades, especially following the success of Harald Fairhair at the Battle of in 872. In the Western Isles Ketill Flatnose may have been the dominant figure of the mid 9th century, by which time he had amassed a substantial island realm and made a variety of alliances with other Norse leaders. These princelings nominally owed allegiance to the Norwegian crown, although in practice the latter's control was fairly limited. Norse control of the Hebrides was formalised in 1098 when Edgar of Scotland formally signed the islands over to Magnus III of Norway. The Scottish acceptance of Magnus III as King of the Isles came after the Norwegian king had conquered Orkney, the Hebrides and the Isle of Man in a swift campaign earlier the same year, directed against the local Norwegian leaders of the various island petty kingdoms. By capturing the islands Magnus imposed a more direct royal control, although at a price. His skald Bjorn Cripplehand recorded that in Lewis "fire played high in the heaven" as "flame spouted from the houses" and that in the Uists "the king dyed his sword red in blood". The Hebrides were now part of the Kingdom of the Isles, whose rulers were themselves vassals of the Kings of Norway. This situation lasted until the partitioning of the Western Isles in 1156, at which time the Outer Hebrides remained under Norwegian control while the Inner Hebrides broke out under Somerled, the Norse-Gael kinsman of the Manx royal house. Following the ill-fated 1263 expedition of Haakon IV of Norway, the Outer Hebrides and the Isle of Man were yielded to the Kingdom of Scotland as a result of the 1266 Treaty of Perth. Although their contribution to the islands can still be found in personal and place names, the archaeological record of the Norse period is very limited. The best known find is the Lewis chessmen, which date from the mid 12th century. Scottish control As the Norse era drew to a close, the Norse-speaking princes were gradually replaced by Gaelic-speaking clan chiefs including the MacLeods of Lewis and Harris, Clan Donald and MacNeil of Barra. This transition did little to relieve the islands of internecine strife although by the early 14th century the MacDonald Lords of the Isles, based on Islay, were in theory these chiefs' feudal superiors and managed to exert some control. The Lords of the Isles ruled the Inner Hebrides as well as part of the Western Highlands as subjects of the King of Scots until John MacDonald, fourth Lord of the Isles, squandered the family's powerful position. A rebellion by his nephew, Alexander of Lochalsh provoked an exasperated James IV to forfeit the family's lands in 1493. In 1598, King James VI authorised some "Gentleman Adventurers" from Fife to civilise the "most barbarous Isle of Lewis". Initially successful, the colonists were driven out by local forces commanded by Murdoch and Neil MacLeod, who based their forces on in . The colonists tried again in 1605 with the same result, but a third attempt in 1607 was more successful and in due course Stornoway became a Burgh of Barony. By this time, Lewis was held by the Mackenzies of Kintail (later the Earls of Seaforth), who pursued a more enlightened approach, investing in fishing in particular. 
The Seaforths' royalist inclinations led to Lewis becoming garrisoned during the Wars of the Three Kingdoms by Cromwell's troops, who destroyed the old castle in Stornoway. Early British era With the implementation of the Treaty of Union in 1707, the Hebrides became part of the new Kingdom of Great Britain, but the clans' loyalties to a distant monarch were not strong. A considerable number of islesmen "came out" in support of the Jacobite Earl of Mar in the 1715 and again in the 1745 rising including Macleod of Dunvegan and MacLea of Lismore. The aftermath of the decisive Battle of Culloden, which effectively ended Jacobite hopes of a Stuart restoration, was widely felt. The British government's strategy was to estrange the clan chiefs from their kinsmen and turn their descendants into English-speaking landlords whose main concern was the revenues their estates brought rather than the welfare of those who lived on them. This may have brought peace to the islands, but in the following century it came at a terrible price. In the wake of the rebellion, the clan system was broken up and islands of the Hebrides became a series of landed estates. The early 19th century was a time of improvement and population growth. Roads and quays were built; the slate industry became a significant employer on Easdale and surrounding islands; and the construction of the Crinan and Caledonian canals and other engineering works such as Clachan Bridge improved transport and access. However, in the mid-19th century, the inhabitants of many parts of the Hebrides were devastated by the Clearances, which destroyed communities throughout the Highlands and Islands as the human populations were evicted and replaced with sheep farms. The position was exacerbated by the failure of the islands' kelp industry that thrived from the 18th century until the end of the Napoleonic Wars in 1815 and large scale emigration became endemic. As , a Gaelic poet from South Uist, wrote for his countrymen who were obliged to leave the Hebrides in the late 18th century, emigration was the only alternative to "sinking into slavery" as the Gaels had been unfairly dispossessed by rapacious landlords. In the 1880s, the "Battle of the Braes" involved a demonstration against unfair land regulation and eviction, stimulating the calling of the Napier Commission. Disturbances continued until the passing of the 1886 Crofters' Act. Language The residents of the Hebrides have spoken a variety of different languages during the long period of human occupation. It is assumed that Pictish must once have predominated in the northern Inner Hebrides and Outer Hebrides. The Scottish Gaelic language arrived from Ireland due to the growing influence of the kingdom of Dál Riata from the 6th century AD onwards, and became the dominant language of the southern Hebrides at that time. For a few centuries, the military might of the meant that Old Norse was prevalent in the Hebrides. North of , the place names that existed prior to the 9th century have been all but obliterated. The Old Norse name for the Hebrides during the Viking occupation was , which means "Southern Isles"; in contrast to the , or "Northern Isles" of Orkney and Shetland. South of , Gaelic place names are more common, and after the 13th century, Gaelic became the main language of the entire Hebridean archipelago. Due to Scots and English being favoured in government and the educational system, the Hebrides have been in a state of diglossia since at least the 17th century. 
The Highland Clearances of the 19th century accelerated the language shift away from Scottish Gaelic, as did increased migration and the continuing lower status of Gaelic speakers. Nevertheless, as late as the end of the 19th century, there were significant populations of monolingual Gaelic speakers, and the Hebrides still contain the highest percentages of Gaelic speakers in Scotland. This is especially true of the Outer Hebrides, where a slim majority speak the language. The Scottish Gaelic college, , is based on Skye and Islay. Ironically, given the status of the Western Isles as the last Gaelic-speaking stronghold in Scotland, the Gaelic language name for the islands – – means "isles of the foreigners"; from the time when they were under Norse colonisation. Modern economy For those who remained, new economic opportunities emerged through the export of cattle, commercial fishing and tourism. Nonetheless, emigration and military service became the choice of many and the archipelago's populations continued to dwindle throughout the late 19th century and for much of the 20th century. Lengthy periods of continuous occupation notwithstanding, many of the smaller islands were abandoned. There were, however, continuing gradual economic improvements, among the most visible of which was the replacement of the traditional thatched blackhouse with accommodation of a more modern design and with the assistance of Highlands and Islands Enterprise many of the islands' populations have begun to increase after decades of decline. The discovery of substantial deposits of North Sea oil in 1965 and the renewables sector have contributed to a degree of economic stability in recent decades. For example, the Arnish yard has had a chequered history but has been a significant employer in both the oil and renewables industries. The widespread immigration of mainlanders, particularly non-Gaelic speakers, has been a subject of controversy. Agriculture practised by crofters remained popular in the 21st century in the Hebrides; crofters own a small property but often share a large common grazing area. Various types of funding are available to crofters to help supplement their incomes, including the "Basic Payment Scheme, the suckler beef support scheme, the upland sheep support scheme and the Less Favoured Area support scheme". One reliable source discussed the Crofting Agricultural Grant Scheme (CAGS) in March 2020:the scheme "pays up to £25,000 per claim in any two-year period, covering 80% of investment costs for those who are under 41 and have had their croft less than five years. Older, more established crofters can get 60% grants". Media and the arts Music Many contemporary Gaelic musicians have roots in the Hebrides, including Julie Fowlis (North Uist), Catherine-Ann MacPhee (Barra), Kathleen MacInnes (South Uist), and Ishbel MacAskill (Lewis). All of these singers have repertoire based on the Hebridean tradition, such as and (waulking songs). This tradition includes many songs composed by little-known or anonymous poets before 1800, such as "", "" and "". Several of Runrig's songs are inspired by the archipelago; Calum and were raised on North Uist and Donnie Munro on Skye. Literature The Gaelic poet spent much of his life in the Hebrides and often referred to them in his poetry, including in and . The best known Gaelic poet of her era, (Mary MacPherson, 1821–98), embodied the spirit of the land agitation of the 1870s and 1880s. 
This, and her powerful evocation of the Hebrides—she was from Skye—has made her among the most enduring Gaelic poets. Allan MacDonald (1859–1905), who spent his adult life on Eriskay and South Uist, composed hymns and verse in honour of the Blessed Virgin, the Christ Child, and the Eucharist. In his secular poetry, MacDonald praised the beauty of Eriskay and its people. In his verse drama, (The Old Wives' Parliament), he lampooned the gossiping of his female parishioners and local marriage customs. In the 20th century, Murdo Macfarlane of Lewis wrote , a well-known poem about the Gaelic revival in the Outer Hebrides. Sorley MacLean, the most respected 20th-century Gaelic writer, was born and raised on Raasay, where he set his best known poem, , about the devastating effect of the Highland Clearances. , raised on South Uist and described by MacLean as "one of the few really significant living poets in Scotland, writing in any language" (West Highland Free Press, October 1992) wrote the Scottish Gaelic-language novel which was voted in the Top Ten of the 100 Best-Ever Books from Scotland. Film The area around the Inaccessible Pinnacle of of Skye provided the setting for the Scottish Gaelic feature film Seachd: The Inaccessible Pinnacle (2006). The script was written by the actor, novelist, and poet Aonghas Phàdraig Chaimbeul, who also starred in the movie. , an hour-long documentary in Scottish Gaelic, was made for BBC Alba documenting the battle to remove tolls from the Skye bridge. Video Games The 2012 exploration adventure game Dear Esther by developer The Chinese Room is set on an unnamed island in the Hebrides. Influence on visitors J.M. Barrie's Marie Rose contains references to Harris inspired by a holiday visit to Castle and he wrote a screenplay for the 1924 film adaptation of Peter Pan whilst on . The Hebrides, also known as Fingal's Cave, is a famous overture composed by Felix Mendelssohn while residing on these islands, while Granville Bantock composed the Hebridean Symphony. Enya's song "Ebudæ" from Shepherd Moons is named after the Hebrides (see below). The 1973 British horror film The Wicker Man is set on the fictional Hebridean island of Summerisle. The 2011 British romantic comedy The Decoy Bride is set on the fictional Hebrides island of Hegg. Natural history In some respects the Hebrides lack biodiversity in comparison to mainland Britain; for example, there are only half as many mammalian species. However, these islands provide breeding grounds for many important seabird species including the world's largest colony of northern gannets. Avian life includes the corncrake, red-throated diver, rock dove, kittiwake, tystie, Atlantic puffin, goldeneye, golden eagle and white-tailed sea eagle. The latter was re-introduced to Rùm in 1975 and has successfully spread to various neighbouring islands, including Mull. There is a small population of red-billed chough concentrated on the islands of Islay and Colonsay. Red deer are common on the hills and the grey seal and common seal are present around the coasts of Scotland. Colonies of seals are found on Oronsay and the Treshnish Isles. The rich freshwater streams contain brown trout, Atlantic salmon and water shrew. Offshore, minke whales, Killer whales, basking sharks, porpoises and dolphins are among the sealife that can be seen. Heather moor containing ling, bell heather, cross-leaved heath, bog myrtle and fescues is abundant and there is a diversity of Arctic and alpine plants including Alpine pearlwort and mossy cyphal. 
Loch Druidibeg on South Uist is a national nature reserve owned and managed by Scottish Natural Heritage. The reserve covers 1,677 hectares across the whole range of local habitats. Over 200 species
BC, the Greek historian Diodorus Siculus wrote that there was an island called Hyperborea (which means "beyond the North Wind"), where a round temple stood from which the moon appeared only a little distance above the earth every 19 years. This may have been a reference to the stone circle at Callanish. A traveller called Demetrius of Tarsus related to Plutarch the tale of an expedition to the west coast of Scotland in or shortly before 83 AD. He stated it was a gloomy journey amongst uninhabited islands, but he had visited one which was the retreat of holy men. He mentioned neither the druids nor the name of the island. The first written records of native life begin in the 6th century AD, when the founding of the kingdom of Dál Riata took place. This encompassed roughly what is now Argyll and Bute and Lochaber in Scotland and County Antrim in Ireland. The figure of Columba looms large in any history of Dál Riata, and his founding of a monastery on Iona ensured that the kingdom would be of great importance in the spread of Christianity in northern Britain. However, Iona was far from unique. Lismore in the territory of the Cenél Loairn, was sufficiently important for the death of its abbots to be recorded with some frequency and many smaller sites, such as on Eigg, Hinba, and Tiree, are known from the annals. North of Dál Riata, the Inner and Outer Hebrides were nominally under Pictish control, although the historical record is sparse. Hunter (2000) states that in relation to King Bridei I of the Picts in the sixth century: "As for Shetland, Orkney, Skye and the Western Isles, their inhabitants, most of whom appear to have been Pictish in culture and speech at this time, are likely to have regarded Bridei as a fairly distant presence."
ship Dreadnought 1553 was a 40-gun ship built in 1553. was a 41-gun ship launched in 1573, rebuilt in 1592 and 1614, then broken up in 1648. was a 52-gun third-rate ship of the line launched in 1654 as the Torrington for the Commonwealth of England Navy, renamed Dreadnought at the Restoration in 1660, and lost in 1690. was a 60-gun fourth-rate ship of the line launched in 1691, rebuilt in 1706 and broken up 1748. was a 60-gun fourth rate launched in 1742 and sold 1784. was a 98-gun second rate launched in 1801, converted to a hospital ship in 1827, and broken up 1857. was a hospital ship, formerly HMS Caledonia. was a battleship launched in 1875 and hulked in 1903, then sold in 1908. was
in 1447, it became feasible to print books and maps for a larger customer basis. Because they had to be handwritten, books were previously rare and very expensive. Schedel was also a notable collector of books, art and old master prints. An album he had bound in 1504, which once contained five engravings by Jacopo de' Barbari, provides important evidence for dating de' Barbari's work. Gallery Editions Hartmann Schedel: Registrum huius operis libri cronicarum cu [cum] figuris et imagibus [imaginibus] ab inicio mudi [mundi]. [Nachdruck der Ausgabe Nürnberg, Koberger, 1493]. Ostfildern: Quantum Books, [2002?]. - CCXCIX, [51] S., Hartmann Schedel: Register des Buchs der Croniken und geschichten mit figuren und pildnussen von anbeginn der welt bis auf dise unnsere Zeit. [Durch Georgium Alten ... in diss Teutsch gebracht]. Reprint [der Ausg.] Nürnberg, Koberger, 1493, 1. Wiederdruck. München: Reprint-Verlag Kölbl, 1991. - [9], CCLXXXVI Bl., IDN: 947020551 Hartmann Schedel: Weltchronik. Nachdruck [der] kolorierten Gesamtausgabe von 1493. Einleitung und Kommentar von Stephan Füssel. Augsburg: Weltbild, 2004. - 680 S., Stephan Füssel (Hg.): Schedel'sche Weltchronik. Taschen Verlag, Köln 2001. Digitalisat der lateinischen Ausgabe (mit brasil-portugiesischer Bedien-Oberfläche) Digitalisat der Bayerischen Staatsbibliothek Digitalisat der Beloit copy (Morse Library, Beloit College, Beloit, WI 53511, United States) Holzschnitte aus einem der Exemplare der Bibliothèque nationale de France
the Hymns of Orpheus. According to Greek mythology, hexameter was invented by Phemonoe, daughter of Apollo and the first Pythia of Delphi. Classical Hexameter In classical hexameter, the six feet follow these rules: A foot can be made up of two long syllables (– –), a spondee; or a long and two short syllables, a dactyl (– υ υ). The first four feet can contain either one of them. The fifth is almost always a dactyl, and last must be a spondee. A short syllable (υ) is a syllable with a short vowel and no consonant at the end. A long syllable (–) is a syllable that either has a long vowel, one or more consonants at the end (or a long consonant), or both. Spaces between words are not counted in syllabification, so for instance "cat" is a long syllable in isolation, but "cat attack" would be syllabified as short-short-long: "ca", "ta", "tack" (υ υ –). Variations of the sequence from line to line, as well as the use of caesura (logical full stops within the line) are essential in avoiding what may otherwise be a monotonous sing-song effect. Application Although the rules seem simple, it is hard to use classical hexameter in English, because English is a stress-timed language that condenses vowels and consonants between stressed syllables, while hexameter relies on the regular timing of the phonetic sounds. Languages having the latter properties (i.e., languages that are
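The foot rules above are simple enough to express in a few lines of code. The following sketch (in Python; the 'D'/'S' symbols and the function name are illustrative choices, not a standard notation) checks whether a six-foot pattern obeys them, treating the fifth foot as a required dactyl even though, as noted above, it is only almost always one:

# Sketch of the classical hexameter rules described above, using 'D' for a
# dactyl (– υ υ) and 'S' for a spondee (– –).
def is_classical_hexameter(feet):
    """Return True if a six-foot pattern follows the rules given above."""
    if len(feet) != 6 or any(f not in ("D", "S") for f in feet):
        return False
    # Feet 1-4 may be either a dactyl or a spondee, so no further check is needed.
    # Foot 5 is treated here as a required dactyl; foot 6 must be a spondee.
    return feet[4] == "D" and feet[5] == "S"

print(is_classical_hexameter(["D", "D", "S", "S", "D", "S"]))  # True
print(is_classical_hexameter(["S", "S", "S", "S", "S", "D"]))  # False: feet 5 and 6 reversed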
History of Poland. See also the list of Polish monarchs and list of prime ministers of Poland. Centuries: 5th 6th 7th 8th 9th 10th 11th 12th 13th 14th 15th 16th 17th 18th 19th 20th 21st See also 5th century 10th century 11th century 12th century 13th century 14th century 15th century 16th century 17th century 18th century 19th century 20th
People's Republic Democratic Republic of Poland 21st century See also Cities in Poland Timeline of Białystok Timeline of Gdańsk Timeline of Kraków Timeline of Łódź Timeline of Lwów (formerly in Poland; now in Ukraine) Timeline of Poznań Timeline of Szczecin Timeline of Warsaw Timeline of Wrocław
on the Ganymede Heights massif on Alexander Island, Antarctica
Himalia (moon), a moon of Jupiter Himalia group Himalia (mythology), a nymph from Cyprus in Greek
Trojan War. The Heracleidae, who thus became practically masters of Peloponnesus, proceeded to distribute its territory among themselves by lot. Argos fell to Temenus, Lacedaemon to Procles and Eurysthenes, the twin sons of Aristodemus; and Messenia to Cresphontes (tradition maintains that Cresphontes cheated in order to obtain Messenia, which had the best land of all.) The fertile district of Elis had been reserved by agreement for Oxylus. The Heracleidae ruled in Lacedaemon until 221 BCE, but disappeared much earlier in the other countries. This conquest of Peloponnesus by the Dorians, commonly called the "Dorian invasion" or the "Return of the Heraclidae", is represented as the recovery by the descendants of Heracles of the rightful inheritance of their hero ancestor and his sons. The Dorians followed the custom of other Greek tribes in claiming as ancestor for their ruling families one of the legendary heroes, but the traditions must not on that account be regarded as entirely mythical. They represent a joint invasion of Peloponnesus by Aetolians and Dorians, the latter having been driven southward from their original northern home under pressure from the Thessalians. It is noticeable that there is no mention of these Heraclidae or their invasion in Homer or Hesiod. Herodotus (vi. 52) speaks of poets who had celebrated their deeds, but these were limited to events immediately succeeding the death of Heracles. List of Heracleidae At Sparta At Sparta, the Heraclids formed two dynasties ruling jointly: the Agiads and the Eurypontids. Other Spartiates also claimed Heraclid descent, such as Lysander. At Corinth At Corinth the Heraclids ruled as the Bacchiadae dynasty before the aristocratic revolution, which brought a Bacchiad aristocracy into power. At Argos A descendent of Heracles, Temenus, was the first king of Argos, who later counted the famous tyrant Pheidon. At Macedonia At Macedonia, the Heraclids formed the Argead Dynasty, whose name comes from Argos, as one of the Heraclids from this city, Perdiccas I, settled in Macedonia, where he founded his kingdom. By the time of Philip II the family had expanded their reign further, to include under the rule of Macedonia all Upper Macedonian states. Their most celebrated members were Philip II of Macedon and his son Alexander the Great, under whose leadership the kingdom of Macedonia gradually gained predominance throughout Greece, defeated the Achaemenid Empire and expanded as far as Egypt and India. The mythical founder of the Argead dynasty is King Caranus. In Euripides' tragedy The Greek tragedians amplified the story, probably drawing inspiration from local legends which glorified the services rendered by Athens to the rulers of Peloponnesus. The Heracleidae feature as the main subjects of Euripides' play, Heracleidae. J. A. Spranger found the political subtext of Heracleidae, never far to seek, so particularly apt in Athens towards the end of the peace of Nicias, in 419 BCE, that he suggested the date as that of the play's first performance. In the tragedy, Iolaus, Heracles' old comrade and nephew, and Heracles' children, Macaria and her brothers and sisters have hidden from Eurystheus in Athens, ruled by King Demophon; as the first scene makes clear, they expect that the blood relationship of the kings with Heracles and their father's past indebtedness to Theseus will finally provide them sanctuary. 
As Eurystheus prepares to attack, an oracle tells Demophon that only the sacrifice of a noble woman to Persephone can guarantee an Athenian victory. Macaria volunteers for the sacrifice and a spring is named the Macarian spring in her honor. References Sources Bibliotheca ii. 8 Diodorus Siculus iv. 57, 58 Pausanias i. 32, 41, ii. 13, 18, iii. I, iv. 3, v. 3 Euripides, Heracleidae Pindar, Pythia, ix. 137 Herodotus ix. 27 Connop Thirlwall, History of Greece, ch. vii George Grote, History of Greece, pt. i. ch. xviii Georg Busolt, Griechische Geschichte, i. ch.
to quit. They withdrew to Thessaly, where Aegimius, the mythical ancestor of the Dorians, whom Heracles had assisted in war against the Lapithae, adopted Hyllus and made over to him a third part of his territory. After the death of Aegimius, his two sons, Pamphylus and Dymas, voluntarily submitted to Hyllus (who was, according to the Dorian tradition in Herodotus V. 72, really an Achaean), who thus became ruler of the Dorians, the three branches of that race being named after these three heroes. Desiring to reconquer his paternal inheritance, Hyllus consulted the Delphic oracle, which told him to wait for "the third fruit", (or "the third crop") and then enter Peloponnesus by "a narrow passage by sea". Accordingly, after three years, Hyllus marched across the isthmus of Corinth to attack Atreus, the successor of Eurystheus, but was slain in single combat by Echemus, king of Tegea. This second attempt was followed by a third under Cleodaeus and a fourth under Aristomachus, both unsuccessful. Dorian invasion At last, Temenus, Cresphontes and Aristodemus, the sons of Aristomachus, complained to the oracle that its instructions had proved fatal to those who had followed them. They received the answer that by the "third fruit" the "third generation" was meant, and that the "narrow passage" was not the isthmus of Corinth, but the straits of Rhium. They accordingly built a fleet at Naupactus, but before they set sail, Aristodemus was struck by lightning (or shot by Apollo) and the fleet destroyed, because one of the Heracleidae had slain an Acarnanian soothsayer. The oracle, being again consulted by Temenus, bade him offer an expiatory sacrifice and banish the murderer for ten years, and look out for a man with three eyes to act as guide. On his way back to Naupactus, Temenus fell in with Oxylus, an Aetolian, who had lost one eye, riding on a horse (thus making up the three eyes) and immediately pressed him into his service. According to another account, a mule on which Oxylus rode had lost an eye. The Heracleidae repaired their ships, sailed from Naupactus to Antirrhium, and thence to Rhium in Peloponnesus. A decisive battle was fought with Tisamenus, son of Orestes, the chief ruler in the peninsula, who was defeated and slain. This conquest was traditionally dated eighty years after the Trojan War.
proteins. This cleavage is mediated by the packaged viral protease and can be inhibited by antiretroviral drugs of the protease inhibitor class. The various structural components then assemble to produce a mature HIV virion. Only mature virions are then able to infect another cell. Spread within the body The classical process of infection of a cell by a virion can be called "cell-free spread" to distinguish it from a more recently recognized process called "cell-to-cell spread". In cell-free spread, virus particles bud from an infected T cell, enter the blood or extracellular fluid and then infect another T cell following a chance encounter. HIV can also disseminate by direct transmission from one cell to another by a process of cell-to-cell spread, for which two pathways have been described. Firstly, an infected T cell can transmit virus directly to a target T cell via a virological synapse. Secondly, an antigen-presenting cell (APC), such as a macrophage or dendritic cell, can transmit HIV to T cells by a process that either involves productive infection (in the case of macrophages) or capture and transfer of virions in trans (in the case of dendritic cells). Whichever pathway is used, infection by cell-to-cell transfer is reported to be much more efficient than cell-free virus spread. A number of factors contribute to this increased efficiency, including polarised virus budding towards the site of cell-to-cell contact, close apposition of cells, which minimizes fluid-phase diffusion of virions, and clustering of HIV entry receptors on the target cell towards the contact zone. Cell-to-cell spread is thought to be particularly important in lymphoid tissues, where CD4+ T cells are densely packed and likely to interact frequently. Intravital imaging studies have supported the concept of the HIV virological synapse in vivo. The many dissemination mechanisms available to HIV contribute to the virus's ongoing replication in spite of anti-retroviral therapies. Genetic variability HIV differs from many viruses in that it has very high genetic variability. This diversity is a result of its fast replication cycle, with the generation of about 10¹⁰ virions every day, coupled with a high mutation rate of approximately 3 × 10⁻⁵ per nucleotide base per cycle of replication and recombinogenic properties of reverse transcriptase. This complex scenario leads to the generation of many variants of HIV in a single infected patient in the course of one day. This variability is compounded when a single cell is simultaneously infected by two or more different strains of HIV. When simultaneous infection occurs, the genome of progeny virions may be composed of RNA strands from two different strains. This hybrid virion then infects a new cell where it undergoes replication. As this happens, the reverse transcriptase, by jumping back and forth between the two different RNA templates, will generate a newly synthesized retroviral DNA sequence that is a recombinant between the two parental genomes. This recombination is most obvious when it occurs between subtypes.
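A rough, order-of-magnitude calculation illustrates what these figures imply; the genome length of roughly 9,700 nucleotides is an added assumption for the example and is not taken from the text above:

# Order-of-magnitude illustration of the variability figures quoted above.
# The genome length (~9,700 nucleotides) is an assumption added for the example.
genome_length = 9_700        # nucleotides, approximate HIV-1 genome size
mutation_rate = 3e-5         # mutations per nucleotide per replication cycle
virions_per_day = 1e10       # new virions produced per day in one infected person

mutations_per_genome = genome_length * mutation_rate
mutant_genomes_per_day = virions_per_day * mutations_per_genome
per_site_hits_per_day = virions_per_day * mutation_rate

print(f"{mutations_per_genome:.2f} expected mutations per newly made genome")    # ~0.29
print(f"{mutant_genomes_per_day:.1e} mutation events per day in one patient")    # ~2.9e9
print(f"each genome position is mutated ~{per_site_hits_per_day:.0e} times/day") # ~3e5

On these assumptions, roughly one new genome in three carries a mutation, and every single position of the genome is mutated hundreds of thousands of times per day, which is why so many variants can arise in one patient in a single day.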
The closely related simian immunodeficiency virus (SIV) has evolved into many strains, classified by the natural host species. SIV strains of the African green monkey (SIVagm) and sooty mangabey (SIVsmm) are thought to have a long evolutionary history with their hosts. These hosts have adapted to the presence of the virus, which is present at high levels in the host's blood, but evokes only a mild immune response, does not cause the development of simian AIDS, and does not undergo the extensive mutation and recombination typical of HIV infection in humans. In contrast, when these strains infect species that have not adapted to SIV ("heterologous" or similar hosts such as rhesus or cynomolgus macaques), the animals develop AIDS and the virus generates genetic diversity similar to what is seen in human HIV infection. Chimpanzee SIV (SIVcpz), the closest genetic relative of HIV-1, is associated with increased mortality and AIDS-like symptoms in its natural host. SIVcpz appears to have been transmitted relatively recently to chimpanzee and human populations, so their hosts have not yet adapted to the virus. This virus has also lost a function of the nef gene that is present in most SIVs. For non-pathogenic SIV variants, nef suppresses T cell activation through the CD3 marker. Nef's function in non-pathogenic forms of SIV is to downregulate expression of inflammatory cytokines, MHC-1, and signals that affect T cell trafficking. In HIV-1 and SIVcpz, nef does not inhibit T-cell activation and it has lost this function. Without this function, T cell depletion is more likely, leading to immunodeficiency. Three groups of HIV-1 have been identified on the basis of differences in the envelope (env) region: M, N, and O. Group M is the most prevalent and is subdivided into eight subtypes (or clades), based on the whole genome, which are geographically distinct. The most prevalent are subtypes B (found mainly in North America and Europe), A and D (found mainly in Africa), and C (found mainly in Africa and Asia); these subtypes form branches in the phylogenetic tree representing the lineage of the M group of HIV-1. Co-infection with distinct subtypes gives rise to circulating recombinant forms (CRFs). In 2000, the last year in which an analysis of global subtype prevalence was made, 47.2% of infections worldwide were of subtype C, 26.7% were of subtype A/CRF02_AG, 12.3% were of subtype B, 5.3% were of subtype D, 3.2% were of CRF_AE, and the remaining 5.3% were composed of other subtypes and CRFs. Most HIV-1 research is focused on subtype B; few laboratories focus on the other subtypes. The existence of a fourth group, "P", has been hypothesised based on a virus isolated in 2009. The strain is apparently derived from gorilla SIV (SIVgor), first isolated from western lowland gorillas in 2006. HIV-2's closest relative is SIVsm, a strain of SIV found in sooty mangabeys. Since HIV-1 is derived from SIVcpz, and HIV-2 from SIVsm, the genetic sequence of HIV-2 is only partially homologous to HIV-1 and more closely resembles that of SIVsm. Diagnosis Many HIV-positive people are unaware that they are infected with the virus. For example, in 2001 less than 1% of the sexually active urban population in Africa had been tested, and this proportion is even lower in rural populations. Furthermore, in 2001 only 0.5% of pregnant women attending urban health facilities were counselled, tested or received their test results. Again, this proportion is even lower in rural health facilities. Since donors may therefore be unaware of their infection, donor blood and blood products used in medicine and medical research are routinely screened for HIV. HIV-1 testing is initially done using an enzyme-linked immunosorbent assay (ELISA) to detect antibodies to HIV-1.
Specimens with a non-reactive result from the initial ELISA are considered HIV-negative, unless new exposure to an infected partner or partner of unknown HIV status has occurred. Specimens with a reactive ELISA result are retested in duplicate. If the result of either duplicate test is reactive, the specimen is reported as repeatedly reactive and undergoes confirmatory testing with a more specific supplemental test (e.g., a polymerase chain reaction (PCR), western blot or, less commonly, an immunofluorescence assay (IFA)). Only specimens that are repeatedly reactive by ELISA and positive by IFA or PCR or reactive by western blot are considered HIV-positive and indicative of HIV infection. Specimens that are repeatedly ELISA-reactive occasionally provide an indeterminate western blot result, which may be either an incomplete antibody response to HIV in an infected person or nonspecific reactions in an uninfected person. Although IFA can be used to confirm infection in these ambiguous cases, this assay is not widely used. In general, a second specimen should be collected more than a month later and retested for persons with indeterminate western blot results. Although much less commonly available, nucleic acid testing (e.g., viral RNA or proviral DNA amplification method) can also help diagnosis in certain situations. In addition, a few tested specimens might provide inconclusive results because of a low-quantity specimen. In these situations, a second specimen is collected and tested for HIV infection. Modern HIV testing is extremely accurate when the window period is taken into consideration. A single screening test is correct more than 99% of the time. The chance of a false-positive result in a standard two-step testing protocol is estimated to be about 1 in 250,000 in a low-risk population. Testing post-exposure is recommended immediately and then at six weeks, three months, and six months. The latest recommendations of the US Centers for Disease Control and Prevention (CDC) show that HIV testing must start with an immunoassay combination test for HIV-1 and HIV-2 antibodies and p24 antigen. A negative result rules out HIV exposure, while a positive one must be followed by an HIV-1/2 antibody differentiation immunoassay to detect which antibodies are present. This gives rise to four possible scenarios: 1. HIV-1 (+) & HIV-2 (−): HIV-1 antibodies detected 2. HIV-1 (−) & HIV-2 (+): HIV-2 antibodies detected 3. HIV-1 (+) & HIV-2 (+): both HIV-1 and HIV-2 antibodies detected 4. HIV-1 (−) or indeterminate & HIV-2 (−): Nucleic acid test must be carried out to detect the acute infection of HIV-1 or its absence. Research HIV/AIDS research includes all medical research that attempts to prevent, treat, or cure HIV/AIDS, as well as fundamental research about the nature of HIV as an infectious agent and AIDS as the disease caused by HIV. Many governments and research institutions participate in HIV/AIDS research. This research includes behavioral health interventions, such as research into sex education, and drug development, such as research into microbicides for sexually transmitted diseases, HIV vaccines, and anti-retroviral drugs. Other medical research areas include the topics of pre-exposure prophylaxis, post-exposure prophylaxis, circumcision and HIV, and accelerated aging effects.
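The screening sequence described in the Diagnosis section above, from the combination immunoassay through the HIV-1/2 differentiation assay and, where needed, a nucleic acid test, can be summarised as a small decision helper. This is only a sketch of the four scenarios listed above; the function name, argument names and result strings are illustrative and not part of any published protocol:

def interpret_hiv_results(combo_reactive, hiv1_ab, hiv2_ab, nat_positive=None):
    """Sketch of the CDC sequence described above: a combination antigen/antibody
    immunoassay, followed (if reactive) by an HIV-1/HIV-2 antibody differentiation
    assay, with a nucleic acid test (NAT) resolving indeterminate results."""
    if not combo_reactive:
        # A negative combination test rules out exposure, subject to the window period.
        return "HIV negative"
    if hiv1_ab and hiv2_ab:
        return "HIV-1 and HIV-2 antibodies detected"
    if hiv1_ab:
        return "HIV-1 antibodies detected"
    if hiv2_ab:
        return "HIV-2 antibodies detected"
    # Scenario 4: differentiation assay negative or indeterminate -> NAT required.
    if nat_positive is None:
        return "indeterminate: nucleic acid test required"
    return "acute HIV-1 infection" if nat_positive else "HIV-1 infection not confirmed"

print(interpret_hiv_results(True, True, False))   # HIV-1 antibodies detected
print(interpret_hiv_results(True, False, False))  # indeterminate: nucleic acid test required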
Treatment and transmission The management of HIV/AIDS normally includes the use of multiple antiretroviral drugs. In many parts of the world, HIV has become a chronic condition in which progression to AIDS is increasingly rare. HIV latency, and the consequent viral reservoir in CD4+ T cells, dendritic cells, as well as macrophages, is the main barrier to eradication of the virus. It is important to note that although HIV is highly virulent, transmission does not occur through sex when an HIV-positive person has a consistently undetectable viral load (<50 copies/ml) due to anti-retroviral treatment. This was first argued by the Swiss Federal Commission for AIDS/HIV in 2008 in the Swiss Statement, though the statement was controversial at the time. However, following multiple studies, it became clear that the chance of passing on HIV through sex is effectively zero where the HIV-positive person has a consistently undetectable viral load; this is known as U=U, "Undetectable=Untransmittable", also phrased as "can't pass it on". The studies demonstrating U=U are: Opposites Attract, PARTNER 1, PARTNER 2 (for male-male couples) and HPTN052 (for heterosexual couples) when "the partner living with HIV had a durably suppressed viral load." In these studies, couples where one partner was HIV positive and one partner was HIV negative were enrolled and regular HIV testing was completed. In total from the four studies, 4097 couples were enrolled over four continents and 151,880 acts of condomless sex were reported; there were zero phylogenetically linked transmissions of HIV where the positive partner had an undetectable viral load. Following this, the U=U consensus statement advocating the use of "zero risk" was signed by hundreds of individuals and organisations, including the US CDC, British HIV Association and The Lancet medical journal. The importance of the final results of the PARTNER 2 study was described by the medical director of the Terrence Higgins Trust as "impossible to overstate", while lead author Alison Rodger declared that the message that "undetectable viral load makes HIV untransmittable ... can help end the HIV pandemic by preventing HIV transmission". The authors summarised their findings in The Lancet as follows: This result is consistent with the conclusion presented by Anthony S. Fauci, the Director of the National Institute of Allergy and Infectious Diseases for the U.S. National Institutes of Health, and his team in a viewpoint published in the Journal of the American Medical Association, that U=U is an effective HIV prevention method when an undetectable viral load is maintained. Genital herpes (HSV-2) reactivation in those infected with the virus is associated with an increase in CCR-5 enriched CD4+ T cells as well as inflammatory dendritic cells in the submucosa of the genital skin. Tropism of HIV for CCR-5 positive cells explains the two- to threefold increase in HIV acquisition among persons with genital herpes. Daily antiviral (e.g. acyclovir) medication does not reduce the sub-clinical post-reactivation inflammation and therefore does not confer reduced risk of HIV acquisition. History Discovery The first news story on "an exotic new disease" appeared May 18, 1981 in the gay newspaper New York Native. AIDS was first clinically observed in 1981 in the United States.
The initial cases were a cluster of injection drug users and gay men with no known cause of impaired immunity who showed symptoms of Pneumocystis pneumonia (PCP or PJP, the latter term recognizing that the causative agent is now called Pneumocystis jirovecii), a rare opportunistic infection that was known to occur in people with very compromised immune systems. Soon thereafter, researchers at the NYU School of Medicine studied gay men developing a previously rare skin cancer called Kaposi's sarcoma (KS). Many more cases of PJP and KS emerged, alerting U.S. Centers for Disease Control and Prevention (CDC) and a CDC task force was formed to monitor the outbreak. The earliest retrospectively described case of AIDS is believed to have been in Norway beginning in 1966. In the beginning, the CDC did not have an official name for the disease, often referring to it by way of the diseases that were associated with it, for example, lymphadenopathy, the disease after which the discoverers of HIV originally named the virus. They also used Kaposi's Sarcoma and Opportunistic Infections, the name by which a task force had been set up in 1981. In the general press, the term GRID, which stood for gay-related immune deficiency, had been coined. The CDC, in search of a name and looking at the infected communities, coined "the 4H disease", as it seemed to single out homosexuals, heroin users, hemophiliacs, and Haitians. However, after determining that AIDS was not isolated to the gay community, it was realized that the term GRID was misleading and AIDS was introduced at a meeting in July 1982. By September 1982 the CDC started using the name AIDS. In 1983, two separate research groups led by American Robert Gallo and French investigators and Luc Montagnier independently declared that a novel retrovirus may have been infecting AIDS patients, and published their findings in the same issue of the journal Science. Gallo claimed that a virus his
hairpin loops of the gRNA monomers. At the same time, certain guanosine residues in the gRNA are made available for binding of the nucleocapsid (NC) protein leading to the subsequent virion assembly. The labile gRNA dimer has been also reported to achieve a more stable conformation following the NC binding, in which both the DIS and the U5:AUG regions of the gRNA participate in extensive base pairing. RNA can also be processed to produce mature messenger RNAs (mRNAs). In most cases, this processing involves RNA splicing to produce mRNAs that are shorter than the full-length genome. Which part of the RNA is removed during RNA splicing determines which of the HIV protein-coding sequences is translated. Mature HIV mRNAs are exported from the nucleus into the cytoplasm, where they are translated to produce HIV proteins, including Rev. As the newly produced Rev protein is produced it moves to the nucleus, where it binds to full-length, unspliced copies of virus RNAs and allows them to leave the nucleus. Some of these full-length RNAs function as mRNAs that are translated to produce the structural proteins Gag and Env. Gag proteins bind to copies of the virus RNA genome to package them into new virus particles. HIV-1 and HIV-2 appear to package their RNA differently. HIV-1 will bind to any appropriate RNA. HIV-2 will preferentially bind to the mRNA that was used to create the Gag protein itself. Recombination Two RNA genomes are encapsidated in each HIV-1 particle (see Structure and genome of HIV). Upon infection and replication catalyzed by reverse transcriptase, recombination between the two genomes can occur. Recombination occurs as the single-strand, positive-sense RNA genomes are reverse transcribed to form DNA. During reverse transcription, the nascent DNA can switch multiple times between the two copies of the viral RNA. This form of recombination is known as copy-choice. Recombination events may occur throughout the genome. Anywhere from two to 20 recombination events per genome may occur at each replication cycle, and these events can rapidly shuffle the genetic information that is transmitted from parental to progeny genomes. Viral recombination produces genetic variation that likely contributes to the evolution of resistance to anti-retroviral therapy. Recombination may also contribute, in principle, to overcoming the immune defenses of the host. Yet, for the adaptive advantages of genetic variation to be realized, the two viral genomes packaged in individual infecting virus particles need to have arisen from separate progenitor parental viruses of differing genetic constitution. It is unknown how often such mixed packaging occurs under natural conditions. Bonhoeffer et al. suggested that template switching by reverse transcriptase acts as a repair process to deal with breaks in the single-stranded RNA genome. In addition, Hu and Temin suggested that recombination is an adaptation for repair of damage in the RNA genomes. Strand switching (copy-choice recombination) by reverse transcriptase could generate an undamaged copy of genomic DNA from two damaged single-stranded RNA genome copies. This view of the adaptive benefit of recombination in HIV could explain why each HIV particle contains two complete genomes, rather than one. Furthermore, the view that recombination is a repair process implies that the benefit of repair can occur at each replication cycle, and that this benefit can be realized whether or not the two genomes differ genetically. 
On the view that recombination in HIV is a repair process, the generation of recombinational variation would be a consequence, but not the cause of, the evolution of template switching. HIV-1 infection causes chronic inflammation and production of reactive oxygen species. Thus, the HIV genome may be vulnerable to oxidative damage, including breaks in the single-stranded RNA. For HIV, as well as for viruses in general, successful infection depends on overcoming host defense strategies that often include production of genome-damaging reactive oxygen species. Thus, Michod et al. suggested that recombination by viruses is an adaptation for repair of genome damage, and that recombinational variation is a byproduct that may provide a separate benefit. Assembly and release The final step of the viral cycle, assembly of new HIV-1 virions, begins at the plasma membrane of the host cell. The Env polyprotein (gp160) goes through the endoplasmic reticulum and is transported to the Golgi apparatus where it is cleaved by furin resulting in the two HIV envelope glycoproteins, gp41 and gp120. These are transported to the plasma membrane of the host cell where gp41 anchors gp120 to the membrane of the infected cell. The Gag (p55) and Gag-Pol (p160) polyproteins also associate with the inner surface of the plasma membrane along with the HIV genomic RNA as the forming virion begins to bud from the host cell. The budded virion is still immature as the gag polyproteins still need to be cleaved into the actual matrix, capsid and nucleocapsid proteins.
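The copy-choice (template-switching) recombination described in the Recombination section above can be illustrated with a toy simulation. This is a sketch only: the per-position switch probability is an arbitrary illustrative value, and real template switching is not uniform along the genome:

import random

def copy_choice(template_a, template_b, switch_prob=0.05):
    """Toy model of copy-choice recombination: reverse transcription starts on one
    RNA template and may switch to the co-packaged template at each position.
    The 5% per-position switch probability is an arbitrary illustrative value."""
    assert len(template_a) == len(template_b)
    current, other = template_a, template_b
    product = []
    for i in range(len(template_a)):
        if random.random() < switch_prob:
            current, other = other, current   # template switch
        product.append(current[i])
    return "".join(product)

# Two co-packaged genomes from different strains (toy sequences):
strain1 = "AAAAAAAAAAAAAAAAAAAA"
strain2 = "GGGGGGGGGGGGGGGGGGGG"
print(copy_choice(strain1, strain2))  # e.g. 'AAAAGGGGGGGGAAAAAAAA', a mosaic of the two parents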
(Nordland) Hol, Ludhiana, a village in India Stations Hollywood station (Florida) (station code: HOL), USA Holmesglen railway station (station code: HOL), Malvern East, Melbourne, Victoria, Australia Holsworthy railway station, Sydney (station code: HOL), NSW, Australia Holton Heath railway station (station code: HOL), England, UK Science and technology HOL (proof assistant), theorem proving systems Head-of-line blocking in computer networking Higher-order logic, a branch of symbolic logic Holonomy group in differential geometry Sports and games Hol (role-playing game) Hol IL, a sports club in Buskerud county Hollingworth Lake Rowing Club (prefix code: HOL) Holdsworth (cycling team) (UCI code HOL) HOL, pre-1992 code for Netherlands at the Olympics Other uses Hands On Learning Australia, a charity Hellas On-Line, a Greek Internet service provider Holiday Airlines (US
questions either suggest the answer ("You saw my client sign the contract, correct?") or challenge (impeach) the witness's testimony. As a rule, leading questions are generally allowed only during cross-examination, but a hostile witness is an exception to this rule. In cross-examination conducted by the opposing party's attorney, a witness is presumed to be hostile and the examining attorney is not required to seek the judge's permission before asking leading questions. Attorneys can influence a hostile witness's responses by using Gestalt psychology to influence the way the witness perceives the situation, and utility theory to understand their likely responses. The attorney will integrate a hostile witness's expected responses into the larger case strategy through pretrial planning and through adapting as necessary during the course of the trial. Jurisdiction Australia In the state of New South Wales, the term 'unfavourable witness' is defined by section 38 of the Evidence Act which permits the prosecution to cross-examine their own witness. For example, if the prosecution calls all material witnesses relevant to a case before the court, and any evidence given is not favourable to, or supports the prosecution case, or a witness has given a prior inconsistent statement, then the prosecution may
barons in Normandy and neighbouring Ponthieu. Robert allied himself with Philip I of France. In late 1090 William Rufus encouraged Conan Pilatus, a powerful burgher in Rouen, to rebel against Robert; Conan was supported by most of Rouen and made appeals to the neighbouring ducal garrisons to switch allegiance as well. Robert issued an appeal for help to his barons, and Henry was the first to arrive in Rouen in November. Violence broke out, leading to savage, confused street fighting as both sides attempted to take control of the city. Robert and Henry left the castle to join the battle, but Robert then retreated, leaving Henry to continue the fighting. The battle turned in favour of the ducal forces and Henry took Conan prisoner. Henry was angry that Conan had turned against his feudal lord. He had him taken to the top of Rouen Castle and then, despite Conan's offers to pay a huge ransom, threw him off the top of the castle to his death. Contemporaries considered Henry to have acted appropriately in making an example of Conan, and Henry became famous for his exploits in the battle. Fall and rise, 1091–99 In the aftermath, Robert forced Henry to leave Rouen, probably because Henry's role in the fighting had been more prominent than his own, and possibly because Henry had asked to be formally reinstated as the count of the Cotentin. In early 1091, William Rufus invaded Normandy with a sufficiently large army to bring Robert to the negotiating table. The two brothers signed a treaty at Rouen, granting William Rufus a range of lands and castles in Normandy. In return, William Rufus promised to support Robert's attempts to regain control of the neighbouring county of Maine, once under Norman control, and help in regaining control over the duchy, including Henry's lands. They nominated each other as heirs to England and Normandy, excluding Henry from any succession while either one of them lived. War now broke out between Henry and his brothers. Henry mobilised a mercenary army in the west of Normandy, but as William Rufus and Robert's forces advanced, his network of baronial support melted away. Henry focused his remaining forces at Mont Saint-Michel, where he was besieged, probably in March 1091. The site was easy to defend, but lacked fresh water. The chronicler William of Malmesbury suggested that when Henry's water ran short, Robert allowed his brother fresh supplies, leading to remonstrations between Robert and William Rufus. The events of the final days of the siege are unclear: the besiegers had begun to argue about the future strategy for the campaign, but Henry then abandoned Mont Saint-Michel, probably as part of a negotiated surrender. He left for Brittany and crossed over into France. Henry's next steps are not well documented; one chronicler, Orderic Vitalis, suggests that he travelled in the French Vexin, along the Normandy border, for over a year with a small band of followers. By the end of the year, Robert and William Rufus had fallen out once again, and the Treaty of Rouen had been abandoned. In 1092, Henry and his followers seized the Normandy town of Domfront. Domfront had previously been controlled by Robert of Bellême, but the inhabitants disliked his rule and invited Henry to take over the town, which he did in a bloodless coup. Over the next two years, Henry re-established his network of supporters across western Normandy, forming what Judith Green terms a "court in waiting". 
By 1094, he was allocating lands and castles to his followers as if he were the Duke of Normandy. William Rufus began to support Henry with money, encouraging his campaign against Robert, and Henry used some of this to construct a substantial castle at Domfront. William Rufus crossed into Normandy to take the war to Robert in 1094, and when progress stalled, called upon Henry for assistance. Henry responded, but travelled to London instead of joining the main campaign further east in Normandy, possibly at the request of the King, who in any event abandoned the campaign and returned to England. Over the next few years, Henry appears to have strengthened his power base in western Normandy, visiting England occasionally to attend at William Rufus's court. In 1095 Pope Urban II called the First Crusade, encouraging knights from across Europe to join. Robert joined the Crusade, borrowing money from William Rufus to do so, and granting the King temporary custody of his part of the Duchy in exchange. The King appeared confident of regaining the remainder of Normandy from Robert, and Henry appeared ever closer to William Rufus. They campaigned together in the Norman Vexin between 1097 and 1098. Early reign, 1100–06 Taking the throne, 1100 On the afternoon of 2 August 1100, King William went hunting in the New Forest, accompanied by a team of huntsmen and a number of the Norman nobility, including Henry. An arrow, possibly shot by the baron Walter Tirel, hit and killed William Rufus. Numerous conspiracy theories have been put forward suggesting that the King was killed deliberately; most modern historians reject these, as hunting was a risky activity, and such accidents were common. Chaos broke out, and Tirel fled the scene for France, either because he had shot the fatal arrow, or because he had been incorrectly accused and feared that he would be made a scapegoat for the King's death. Henry rode to Winchester, where an argument ensued as to who now had the best claim to the throne. William of Breteuil championed the rights of Robert, who was still abroad, returning from the Crusade, and to whom Henry and the barons had given homage in previous years. Henry argued that, unlike Robert, he had been born to a reigning king and queen, thereby giving him a claim under the right of porphyrogeniture. Tempers flared, but Henry, supported by Henry de Beaumont and Robert of Meulan, held sway and persuaded the barons to follow him. He occupied Winchester Castle and seized the royal treasury. Henry was hastily crowned king in Westminster Abbey on 5 August by Maurice, the bishop of London, as Anselm, the archbishop of Canterbury, had been exiled by William Rufus, and Thomas, the archbishop of York, was in the north of England at Ripon. In accordance with English tradition and in a bid to legitimise his rule, Henry issued a coronation charter laying out various commitments. The new king presented himself as having restored order to a trouble-torn country. He announced that he would abandon William Rufus's policies towards the Church, which had been seen as oppressive by the clergy; he promised to prevent royal abuses of the barons' property rights, and assured a return to the gentler customs of Edward the Confessor; he asserted that he would "establish a firm peace" across England and ordered "that this peace shall henceforth be kept". 
In addition to his existing circle of supporters, many of whom were richly rewarded with new lands, Henry quickly co-opted many of the existing administration into his new royal household. William Giffard, William Rufus's chancellor, was made the bishop of Winchester, and the prominent sheriffs Urse d'Abetot, Haimo Dapifer and Robert Fitzhamon continued to play a senior role in government. By contrast, the unpopular Ranulf Flambard, the bishop of Durham and a key member of the previous regime, was imprisoned in the Tower of London and charged with corruption. The late king had left many Church positions unfilled, and Henry set about nominating candidates to these, in an effort to build further support for his new government. The appointments needed to be consecrated, and Henry wrote to Anselm, apologising for having been crowned while the archbishop was still in France and asking him to return at once. Marriage to Matilda, 1100 On 11 November 1100 Henry married Matilda, the daughter of Malcolm III of Scotland, in Westminster Abbey. Henry was now around 31 years old, but late marriages for noblemen were not unusual in the 11th century. The pair had probably first met earlier the previous decade, possibly being introduced through Bishop Osmund of Salisbury. Historian Warren Hollister argues that Henry and Matilda were emotionally close, but their union was also certainly politically motivated. Matilda had originally been named Edith, an Anglo-Saxon name, and was a member of the West Saxon royal family, being the niece of Edgar the Ætheling, the great-granddaughter of Edmund Ironside and a descendant of Alfred the Great. For Henry, marrying Matilda gave his reign increased legitimacy, and for Matilda, an ambitious woman, it was an opportunity for high status and power in England. Matilda had been educated in a sequence of convents, however, and may well have taken the vows to formally become a nun, which formed an obstacle to the marriage progressing. She did not wish to be a nun and appealed to Anselm for permission to marry Henry, and the Archbishop established a council at Lambeth Palace to judge the issue. Despite some dissenting voices, the council concluded that although Matilda had lived in a convent, she had not actually become a nun and was therefore free to marry, a judgement that Anselm then affirmed, allowing the marriage to proceed. Matilda proved an effective queen for Henry, acting as a regent in England on occasion, addressing and presiding over councils, and extensively supporting the arts. The couple soon had two children, Matilda, born in 1102, and William Adelin, born in 1103; it is possible that they also had a second son, Richard, who died young. Following the birth of these children, Matilda preferred to remain based in Westminster while Henry travelled across England and Normandy, either for religious reasons or because she enjoyed being involved in the machinery of royal governance. Henry had a considerable sexual appetite and enjoyed a substantial number of sexual partners, resulting in many illegitimate children, at least nine sons and 13 daughters, many of whom he appears to have recognised and supported. It was normal for unmarried Anglo-Norman noblemen to have sexual relations with prostitutes and local women, and kings were also expected to have mistresses. Some of these relationships occurred before Henry was married, but many others took place after his marriage to Matilda. 
Henry had a wide range of mistresses from a range of backgrounds, and the relationships appear to have been conducted relatively openly. He may have chosen some of his noble mistresses for political purposes, but the evidence to support this theory is limited. Treaty of Alton, 1101–02 By early 1101, Henry's new regime was established and functioning, but many of the Anglo-Norman elite still supported his brother Robert, or would be prepared to switch sides if Robert appeared likely to gain power in England. In February, Flambard escaped from the Tower of London and crossed the Channel to Normandy, where he injected fresh direction and energy to Robert's attempts to mobilise an invasion force. By July, Robert had formed an army and a fleet, ready to move against Henry in England. Raising the stakes in the conflict, Henry seized Flambard's lands and, with the support of Anselm, Flambard was removed from his position as bishop. The King held court in April and June, where the nobility renewed their oaths of allegiance to him, but their support still appeared partial and shaky. With the invasion imminent, Henry mobilised his forces and fleet outside Pevensey, close to Robert's anticipated landing site, training some of them personally in how to counter cavalry charges. Despite English levies and knights owing military service to the Church arriving in considerable numbers, many of his barons did not appear. Anselm intervened with some of the doubters, emphasising the religious importance of their loyalty to Henry. Robert unexpectedly landed further up the coast at Portsmouth on 20 July with a modest force of a few hundred men, but these were quickly joined by many of the barons in England. However, instead of marching into nearby Winchester and seizing Henry's treasury, Robert paused, giving Henry time to march west and intercept the invasion force. The two armies met at Alton, Hampshire, where peace negotiations began, possibly initiated by either Henry or Robert, and probably supported by Flambard. The brothers then agreed to the Treaty of Alton, under which Robert released Henry from his oath of homage and recognised him as king; Henry renounced his claims on western Normandy, except for Domfront, and agreed to pay Robert £2,000 a year for life; if either brother died without a male heir, the other would inherit his lands; the barons whose lands had been seized by either the King or the Duke for supporting his rival would have them returned, and Flambard would be reinstated as bishop; the two brothers would campaign together to defend their territories in Normandy. Robert remained in England for a few months more with Henry before returning to Normandy. Despite the treaty, Henry set about inflicting severe penalties on the barons who had stood against him during the invasion. William de Warenne, the Earl of Surrey, was accused of fresh crimes, which were not covered by the Alton amnesty, and was banished from England. In 1102 Henry then turned against Robert of Bellême and his brothers, the most powerful of the barons, accusing him of 45 different offences. Robert escaped and took up arms against Henry. Henry besieged Robert's castles at Arundel, Tickhill and Shrewsbury, pushing down into the south-west to attack Bridgnorth. His power base in England broken, Robert accepted Henry's offer of banishment and left the country for Normandy. Conquest of Normandy, 1103–06 Henry's network of allies in Normandy became stronger during 1103. 
He arranged the marriages of his illegitimate daughters, Juliane and Matilda, to Eustace of Breteuil and Rotrou III, Count of Perche, respectively, the latter union securing the Norman border. Henry attempted to win over other members of the Norman nobility and gave other English estates and lucrative offers to key Norman lords. Duke Robert continued to fight Robert of Bellême, but the Duke's position worsened, until by 1104, he had to ally himself formally with Bellême to survive. Arguing that the Duke had broken the terms of their treaty, the King crossed over the Channel to Domfront, where he met with senior barons from across Normandy, eager to ally themselves with him. He confronted the Duke and accused him of siding with his enemies, before returning to England. Normandy continued to disintegrate into chaos. In 1105, Henry sent his friend Robert Fitzhamon and a force of knights into the Duchy, apparently to provoke a confrontation with Duke Robert. Fitzhamon was captured, and Henry used this as an excuse to invade, promising to restore peace and order. Henry had the support of most of the neighbouring counts around Normandy's borders, and King Philip of France was persuaded to remain neutral. Henry occupied western Normandy, and advanced east on Bayeux, where Fitzhamon was held. The city refused to surrender, and Henry besieged it, burning it to the ground. Terrified of meeting the same fate, the town of Caen switched sides and surrendered, allowing Henry to advance on Falaise, Calvados, which he took with some casualties. His campaign stalled, and the King instead began peace discussions with Robert. The negotiations were inconclusive and the fighting dragged on until Christmas, when Henry returned to England. Henry invaded again in July 1106, hoping to provoke a decisive battle. After some initial tactical successes, he turned south-west towards the castle of Tinchebray. He besieged the castle and Duke Robert, supported by Robert of Bellême, advanced from Falaise to relieve it. After attempts at negotiation failed, the Battle of Tinchebray took place, probably on 28 September. The battle lasted around an hour, and began with a charge by Duke Robert's cavalry; the infantry and dismounted knights of both sides then joined the battle. Henry's reserves, led by Elias I, Count of Maine, and Alan IV, Duke of Brittany, attacked the enemy's flanks, routing first Bellême's troops and then the bulk of the ducal forces. Duke Robert was taken prisoner, but Bellême escaped. Henry mopped up the remaining resistance in Normandy, and Duke Robert ordered his last garrisons to surrender. Reaching Rouen, Henry reaffirmed the laws and customs of Normandy and took homage from the leading barons and citizens. The lesser prisoners taken at Tinchebray were released, but the Duke and several other leading nobles were imprisoned indefinitely. The Duke's son, William Clito, was only three years old and was released to the care of Helias of Saint-Saens, a Norman baron. Henry reconciled himself with Robert of Bellême, who gave up the ducal lands he had seized and rejoined the royal court. Henry had no way of legally removing the Duchy from his brother, and initially Henry avoided using the title "duke" at all, emphasising that, as the king of England, he was only acting as the guardian of the troubled Duchy. 
Government, family and household Government, law and court Henry inherited the kingdom of England from William Rufus, giving him a claim of suzerainty over Wales and Scotland, and acquired the Duchy of Normandy, a complex entity with troubled borders. The borders between England and Scotland were still uncertain during Henry's reign, with Anglo-Norman influence pushing northwards through Cumbria, but his relationship with King David I of Scotland was generally good, partially due to Henry's marriage to his sister. In Wales, Henry used his power to coerce and charm the indigenous Welsh princes, while Norman Marcher Lords pushed across the valleys of South Wales. Normandy was controlled via various interlocking networks of ducal, ecclesiastical and family contacts, backed by a growing string of important ducal castles along the borders. Alliances and relationships with neighbouring counties along the Norman border were particularly important to maintaining the stability of the Duchy. Henry ruled through the various barons and lords in England and Normandy, whom he manipulated skillfully for political effect. Political friendships, termed amicitia in Latin, were important during the 12th century, and Henry maintained a wide range of these, mediating between his friends in various factions across his realm when necessary, and rewarding those who were loyal to him. He also had a reputation for punishing those barons who stood against him, and he maintained an effective network of informers and spies who reported to him on events. Henry was a harsh, firm ruler, but not excessively so by the standards of the day. Over time, he increased the degree of his control over the barons, removing his enemies and bolstering his friends until the "reconstructed baronage", as historian Warren Hollister describes it, was predominantly loyal and dependent on the King. Henry's itinerant royal court comprised various parts. At the heart was his domestic household, called the domus; a wider grouping was termed the familia regis, and formal gatherings of the court were termed curia. The domus was divided into several parts. The chapel, headed by the chancellor, looked after the royal documents, the chamber dealt with financial affairs and the master-marshal was responsible for travel and accommodation. The familia regis included Henry's mounted household troops, up to several hundred strong, who came from a wider range of social backgrounds, and could be deployed across England and Normandy as required. Initially Henry continued his father's practice of regular crown-wearing ceremonies at his curia, but they became less frequent as the years passed. Henry's court was grand and ostentatious, financing the construction of large new buildings and castles with a range of precious gifts on display, including his private menagerie of exotic animals, which he kept at Woodstock Palace. Despite being a lively community, Henry's court was more tightly controlled than those of previous kings. Strict rules controlled personal behaviour and prohibited members of the court from pillaging neighbouring villages, as had been the norm under William Rufus. Henry was responsible for a substantial expansion of the royal justice system. In England, Henry drew on the existing Anglo-Saxon system of justice, local government and taxes, but strengthened it with additional central governmental institutions. 
Roger of Salisbury began to develop the royal exchequer after 1110, using it to collect and audit revenues from the King's sheriffs in the shires. Itinerant justices began to emerge under Henry, travelling around the country managing eyre courts, and many more laws were formally recorded. Henry gathered increasing revenue from the expansion of royal justice, both from fines and from fees. The first Pipe Roll that is known to have survived dates from 1130, recording royal expenditures. Henry reformed the coinage in 1107, 1108 and 1125, inflicting harsh corporal punishments on English coiners who had been found guilty of debasing the currency. In Normandy, he restored law and order after 1106, operating through a body of Norman justices and an exchequer system similar to that in England. Norman institutions grew in scale and scope under Henry, although less quickly than in England. Many of the officials who ran Henry's system were termed "new men", relatively low-born individuals who rose through the ranks as administrators, managing justice or the royal revenues. Relations with the Church Church and the King Henry's ability to govern was intimately bound up with the Church, which formed the key to the administration of both England and Normandy, and this relationship changed considerably over the course of his reign. William the Conqueror had reformed the English Church with the support of his Archbishop of Canterbury, Lanfranc, who became a close colleague and advisor to the King. Under William Rufus this arrangement had collapsed, the King and Archbishop Anselm had become estranged and Anselm had gone into exile. Henry also believed in Church reform, but on taking power in England he became embroiled in the investiture controversy. The argument concerned who should invest a new bishop with his staff and ring: traditionally, this had been carried out by the King in a symbolic demonstration of royal power, but Pope Urban II had condemned this practice in 1099, arguing that only the papacy could carry out this task, and declaring that the clergy should not give homage to their local temporal rulers. 
Anselm returned to England from exile in 1100, having heard Urban's pronouncement, and informed Henry that he would be complying with the Pope's wishes. Henry was in a difficult position. On one hand, the symbolism and homage were important to him; on the other hand, he needed Anselm's support in his struggle with his brother Duke Robert. Anselm stuck firmly to the letter of the papal decree, despite Henry's attempts to persuade him to give way in return for a vague assurance of a future royal compromise. Matters escalated, with Anselm going back into exile and Henry confiscating the revenues of his estates. Anselm threatened excommunication, and in July 1105 the two men finally negotiated a solution. A distinction was drawn between the secular and ecclesiastical powers of the prelates, under which Henry gave up his right to invest his clergy, but retained the custom of requiring them to come and do homage for the temporalities, the landed properties they held in England. Despite this argument, the pair worked closely together, combining to deal with Duke Robert's invasion of 1101, for example, and holding major reforming councils in 1102 and 1108. A long-running dispute between the Archbishops of Canterbury and York flared up under Anselm's successor, Ralph d'Escures. Canterbury, traditionally the senior of the two establishments, had long argued that the Archbishop of York should formally promise to obey its Archbishop, but York argued that the two episcopates were independent within the English Church and that no such promise was necessary. Henry supported the primacy of Canterbury, to ensure that England remained under a single ecclesiastical administration. 
What is considered to be the first hentai manga magazine published in Japan was responsible for creating a new genre known as ero-gekiga, in which the gekiga style was taken and its sexual and violent content intensified. Other well-known ero-gekiga magazines were Erogenica (1975) and Alice (1977). The circulation of ero-gekiga magazines peaked in 1978, and it is believed that somewhere between eighty and one hundred different ero-gekiga magazines were being published annually. The 1980s saw the decline of ero-gekiga in favor of the rising popularity of lolicon and bishōjo magazines, which grew from otaku fan culture. It has been theorized that the decline of ero-gekiga was due to the baby boomer readership starting their own families and migrating to seinen magazines such as Weekly Young Magazine, while, when it came to sexual material, the readership was taken by gravure and pornographic magazines. The distinct shift in the style of Japanese pornographic comics from realistic to cartoon-cute characters is credited to Hideo Azuma, "The Father of Lolicon". In 1979 he penned a work which offered the first depictions of sexual acts between cute, unrealistic Tezuka-style characters, and this would start a pornographic manga movement. The lolicon boom of the 1980s saw the rise of magazines such as the anthologies Lemon People and Petit Apple Pie. As the lolicon boom waned in the mid-1980s, the dominant form of representation for female characters became "baby faced and big chested" women. The shift in popularity from lolicon to bishōjo has been credited to Naoki Yamamoto (who wrote under the pen name Tō Moriyama). Moriyama's manga had a style not seen before at the time, distinct from the ero-gekiga and lolicon styles, and used bishōjo designs as a base to build upon. Moriyama's books sold well upon publication, creating even more fans for the genre, and these new artists would then write for magazines such as Monthly Penguin Club Magazine (1986) and Manga Hot Milk (1986), which became popular with their readership, drawing in new fans. The publication of erotic materials in the United States can be traced back to at least 1990, when IANVS Publications printed its first Anime Shower Special. In March 1994, Antarctic Press released Bondage Fairies, an English translation of Insect Hunter, an "insect rape" manga which became popular in the American market despite apparently having had a poor showing in Japan. During this time, the only American publisher translating and publishing hentai was Fantagraphics, on their adult comic imprint Eros Comix, which was established around 1990. Origin of erotic anime Because there are fewer animated productions, most erotic works have been retroactively tagged as hentai since the coining of the term in English. Hentai is typically defined as consisting of excessive nudity and graphic sexual intercourse, whether or not it is perverse. The term "ecchi" is typically related to fanservice, with no sexual intercourse being depicted. The earliest pornographic anime was Suzumi-bune, created in 1932 by Hakusan Kimura. It was the first part of a two-reeler film, which was half complete before it was seized by the police. The remnants of the film were donated to the National Film Center in the early 21st century by the Tokyo police, who were removing all silver nitrate film in their possession, as it is extremely flammable. The film has never been viewed by the public. Two early works escape being defined as hentai, but contain erotic themes. This is likely due to the obscurity and unfamiliarity of the works, which arrived in the United States and faded from public focus a full 20 years before importation and surging interest coined the Americanized term hentai. The first is the 1969 film One Thousand and One Arabian Nights, which faithfully includes erotic elements of the original story. In 1970, Cleopatra: Queen of Sex was the first animated film to carry an X rating, but it was mislabeled as erotica in the United States. The Lolita Anime series is typically identified as the first erotic anime and original video animation (OVA); it was released in 1984 by Wonder Kids. Containing six episodes, the series focused on underage sex and rape, and included one episode containing BDSM bondage. Several sub-series were released in response, including a second Lolita Anime series released by Nikkatsu. It has not been officially licensed or distributed outside of its original release. The Cream Lemon franchise of works ran from 1984 to 2005, with a number of them entering the American market in various forms. The Brothers Grime series released by Excalibur Films contained Cream Lemon works as early as 1986. However, they were not billed as anime and were introduced during the same period that the first underground distribution of erotic works began. The American release of licensed erotic anime was first attempted in 1991 by Central Park Media, with I Give My All, but it never occurred. 
In December 1992, Devil Hunter Yohko was the first risqué (ecchi) title released by A.D. Vision. While it contains no sexual intercourse, it pushes the limits of the ecchi category with sexual dialogue, nudity and one scene in which the heroine is about to be raped. It was Central Park Media's 1993 release of Urotsukidōji which brought the first hentai film to American viewers. Often cited for inventing the tentacle rape subgenre, it contains extreme depictions of violence and monster sex. As such, it is acknowledged for being the first to depict tentacle sex on screen. When the film premiered in the United States, it was described as being "drenched in graphic scenes of perverse sex and ultra-violence". Following this release, a wealth of pornographic content began to arrive in the United States, with companies such as A.D. Vision, Central Park Media and Media Blasters releasing licensed titles under various labels. A.D. Vision's label SoftCel Pictures released 19 titles in 1995 alone. Another label, Critical Mass, was created in 1996 to release an unedited edition of Violence Jack. When A.D. Vision's hentai label SoftCel Pictures shut down in 2005, most of its titles were acquired by Critical Mass. Following the bankruptcy of Central Park Media in 2009, the licenses for all Anime 18-related products and movies were transferred to Critical Mass. Origin of erotic games The term eroge (erotic game) literally defines any erotic game, but has become synonymous with video games depicting the artistic styles of anime and manga. Eroge originated in the early 1980s, while the computer industry in Japan was struggling to define a computer standard, with makers like NEC, Sharp, and Fujitsu competing against one another. The PC98 series, despite its limited processing power, lack of CD drives and limited graphics, came to dominate the market, with the popularity of eroge games contributing to its success. Because of vague definitions of what constitutes an "erotic game", there are several possible candidates for the first eroge. If the definition applies to adult themes, the first game was Softporn Adventure. Released in America in 1981 for the Apple II, this was a text-based comedic game from On-Line Systems. If the definition requires graphical depictions of Japanese adult themes, the first would be Koei's 1982 release of Night Life, in which sexual intercourse is depicted through simple graphic outlines. Notably, Night Life was not intended to be erotic so much as an instructional guide "to support married life". A series of "undressing" games appeared as early as 1983, such as "Strip Mahjong". The first anime-styled erotic game was Tenshitachi no Gogo, released in 1985 by JAST. In 1988, ASCII released the first erotic role-playing game, Chaos Angel. In 1989, AliceSoft released the turn-based role-playing game Rance and ELF released Dragon Knight. In the late 1980s, eroge began to stagnate under high prices, with the majority of games containing uninteresting plots and mindless sex. ELF's 1992 release of Dōkyūsei came as customer frustration with eroge was mounting and spawned a new genre of games called dating sims. Dōkyūsei was unique because it had no defined plot and required the player to build a relationship with different girls in order to advance the story. Each girl had her own story, but the prospect of consummating a relationship required the girl to grow to love the player; there was no easy sex. 
The term "visual novel" is vague, with Japanese and English definitions classifying the genre as a type of interactive fiction game driven by narration and limited player interaction. While the term is often applied retroactively to many games, it was Leaf that coined it with their "Leaf Visual Novel Series" (LVNS) and the 1996 release of Shizuku and Kizuato. The success of these two dark eroge games would be followed by the third and final installment of the LVNS, the 1997 romantic eroge To Heart. Eroge visual novels took a new emotional turn with Tactics' 1998 release One: Kagayaku Kisetsu e. Key's 1999 release of Kanon proved to be a major success and would go on to have numerous console ports, two manga series and two anime series. Censorship Japanese laws have impacted depictions of works since the Meiji Restoration, but these predate the common definition of hentai material. Since becoming law in 1907, Article 175 of the Criminal Code of Japan forbids the publication of obscene materials. Specifically, depictions of male–female sexual intercourse and pubic hair are considered obscene, but bare genitalia is not. As censorship is required for published works, the most common representations are the blurring dots on pornographic videos and "bars" or "lights" on still images. In 1986, Toshio Maeda sought to get past censorship on depictions of sexual intercourse by creating tentacle sex. This led to a large number of works containing sexual intercourse with monsters, demons, robots, and aliens, whose genitals look different from men's. While Western views attribute hentai to any explicit work, it was the products of this censorship which became not only the first titles legally imported to America and Europe, but the first successful ones. While uncut for its American release, the United Kingdom's release of Urotsukidōji removed many of the violence and tentacle rape scenes. Another technique used to evade regulation was the "sexual intercourse cross-section view", an imaginary view of intercourse resembling an anatomical drawing or an MRI, which would eventually evolve into a prevalent expression in hentai for its erotic appeal. This expression is known in the Western world as the "x-ray view", but has also been known among manga critics as the "bisection view" since the mid-2000s. It was also because of this law that artists began to depict characters with a minimum of anatomical detail and without pubic hair prior to 1991. Part of the ban was lifted when Nagisa Oshima prevailed over the obscenity charges at his trial for his film In the Realm of the Senses. Though not enforced, the lifting of this ban did not apply to anime and manga as they were not deemed artistic exceptions. Alterations of material or censorship and banning of works are common. The US release of La Blue Girl altered the age of the heroine from 16 to 18, removed sex scenes with a dwarf ninja named Nin-nin, and removed the Japanese blurring dots. La Blue Girl was outright rejected by UK censors, who refused to classify it and prohibited its distribution. In 2011, members of the Liberal Democratic Party of Japan sought a ban on the subgenre lolicon but were unsuccessful. The most recent law proposed against it, introduced on May 27, 2013 by the Liberal Democratic Party, the New Komei Party and the Japan Restoration Party, would have made possession of sexual images of individuals under 18 illegal, with a fine of 1 million yen (about US$10,437) and less than a year in jail. 
The Japanese Democratic Party, along with several industry associations involved in anime and manga, protested against the bill, saying "while they appreciate that the bill protects children, it will also restrict freedom of expression". The law was ultimately passed in June 2014 after the regulation of lolicon anime and manga was removed from the bill. This new law went into full effect in 2015, banning real-life child pornography. Demographics According to data from Pornhub in 2017, the most prolific consumers of hentai are men. However, Patrick W. Galbraith and Jessica Bauwens-Sugimoto note that hentai manga attracts "a diverse readership, which of course includes women." Kathryn Hemmann also writes that "self-identified female otaku [...] readily admit to enjoying [hentai] dōjinshi catering to a male erotic gaze". When it comes to mediums of hentai, eroge games in particular combine three favored media (cartoons, pornography and gaming) into an experience. The hentai genre engages a wide audience that expands yearly and desires better quality and storylines, as well as works which push the creative envelope. Nobuhiro Komiya, a manga censor, states that the unusual and extreme depictions in hentai are not about perversion so much as they are an example of a profit-oriented industry. Anime depicting normal sexual situations enjoy less market success than those that break social norms, such as sex at schools or bondage. 
Henry had Parliament repeal Titulus Regius, the statute that had declared Edward IV's children illegitimate, thus legitimising his wife. Amateur historians Bertram Fields and Sir Clements Markham have claimed that he may have been involved in the murder of the Princes in the Tower, as the repeal of Titulus Regius gave the Princes a stronger claim to the throne than his own. Alison Weir points out that the Rennes ceremony, two years earlier, was plausible only if Henry and his supporters were certain that the Princes were already dead. Henry secured his crown principally by dividing and undermining the power of the nobility, especially through the aggressive use of bonds and recognisances to secure loyalty. He also enacted laws against livery and maintenance, the great lords' practice of having large numbers of "retainers" who wore their lord's badge or uniform and formed a potential private army. Henry began taking precautions against rebellion while still in Leicester after Bosworth Field. Edward, Earl of Warwick, the ten-year-old son of Edward IV's brother George, Duke of Clarence, was the senior surviving male of the House of York. Before departing for London, Henry sent Robert Willoughby to Sheriff Hutton in Yorkshire, to arrest Warwick and take him to the Tower of London. Despite such precautions, Henry faced several rebellions over the next twelve years. The first was the 1486 rebellion of the Stafford brothers, abetted by Viscount Lovell, which collapsed without fighting. Next, in 1487, Yorkists led by the Earl of Lincoln rebelled in support of Lambert Simnel, a boy they claimed to be Edward of Warwick (who was actually a prisoner in the Tower). The rebellion began in Ireland, where the historically Yorkist nobility, headed by the powerful Gerald FitzGerald, 8th Earl of Kildare, proclaimed Simnel king and provided troops for his invasion of England. The rebellion was defeated and Lincoln killed at the Battle of Stoke. Henry showed remarkable clemency to the surviving rebels: he pardoned Kildare and the other Irish nobles, and he made the boy, Simnel, a servant in the royal kitchen, where he was in charge of roasting meats on a spit. In 1490, a young Fleming, Perkin Warbeck, appeared and claimed to be Richard of Shrewsbury, the younger of the "Princes in the Tower". Warbeck won the support of Edward IV's sister Margaret, Duchess of Burgundy. He led attempted invasions of Ireland in 1491 and England in 1495, and persuaded James IV of Scotland to invade England in 1496. In 1497 Warbeck landed in Cornwall with a few thousand troops, but was soon captured and executed. When the King's agents searched the property of William Stanley (Chamberlain of the Household, with direct access to Henry VII), they found a bag of coins amounting to around £10,000 and a collar of livery with Yorkist garnishings. Stanley was accused of supporting Warbeck's cause, arrested and later executed. In response to this threat within his own household, the King instituted more rigid security for access to his person. In 1499, Henry had the Earl of Warwick executed. However, he spared Warwick's elder sister Margaret, who survived until 1541, when she was executed by Henry VIII. Economics For most of Henry VII's reign Edward Story was Bishop of Chichester. Story's register still exists and, according to the 19th-century historian W.R.W. Stephens, "affords some illustrations of the avaricious and parsimonious character of the king". It seems that Henry was skillful at extracting money from his subjects on many pretexts, including that of war with France or war with Scotland. 
To strengthen his position, however, he subsidised shipbuilding, so strengthening the navy (he commissioned Europe's first ever – and the world's oldest surviving – dry dock at Portsmouth in 1495) and improving trading opportunities. Henry VII was one of the first European monarchs to recognise the importance of the newly united Spanish kingdom; he concluded the Treaty of Medina del Campo, by which his son Arthur, Prince of Wales, was married to Catherine of Aragon. He also concluded the Treaty of Perpetual Peace with Scotland (the first treaty between England and Scotland for almost two centuries), which betrothed his daughter Margaret Tudor to King James IV of Scotland. By this marriage, Henry VII hoped to break the Auld Alliance between Scotland and France. Though this was not achieved during his reign, the marriage eventually led to the union of the English and Scottish crowns under Margaret's great-grandson, James VI and I, following the death of Henry's granddaughter Elizabeth I. Henry also formed an alliance with Holy Roman Emperor Maximilian I (1493–1519) and persuaded Pope Innocent VIII to issue a papal bull of excommunication against all pretenders to Henry's throne. In 1506, Grand Master of the Knights Hospitaller Emery d'Amboise asked Henry VII to become the protector and patron of the Order, as he had an interest in the crusade. Later on, Henry had exchanged letters with Pope Julius II in 1507, in which he encouraged him to establish peace among Christian realms, and to organize an expedition against the Turks of the Ottoman Empire. Trade agreements Henry VII was much enriched by trading alum, which was used in the wool and cloth trades as a chemical fixative for dyeing fabrics. Since alum was mined in only one area in Europe (Tolfa, Italy), it was a scarce commodity and therefore especially valuable to its land holder, the Pope. With the English economy heavily invested in wool production, Henry VII became involved in the alum trade in 1486. With the assistance of the Italian merchant banker
on 22 August 1485 until his death in 1509. He was the first monarch of the House of Tudor. Henry's mother, Margaret Beaufort, was a descendant of the Lancastrian branch of the House of Plantagenet. Henry's father, Edmund Tudor, 1st Earl of Richmond, a half-brother of Henry VI of England and descendant of the Welsh Tudors of Penmynydd, died three months before his son Henry was born. During Henry's early years, his uncle Henry VI was fighting against Edward IV, a member of the Yorkist Plantagenet branch. After Edward retook the throne in 1471, Henry Tudor spent 14 years in exile in Brittany. He attained the throne when his forces, supported by France, Scotland, and Wales, defeated Edward IV's brother Richard III at the Battle of Bosworth Field, the culmination of the Wars of the Roses. He was the last king of England to win his throne on the field of battle. He cemented his claim by marrying Elizabeth of York, daughter of King Edward. Henry was successful in restoring power and stability to the English monarchy following the civil war. He is credited with a number of administrative, economic and diplomatic initiatives. His supportive policy toward England's wool industry and his standoff with the Low Countries had long-lasting benefit to the English economy. He paid very close attention to detail, and instead of spending lavishly he concentrated on raising new revenues. He introduced several new taxes, which stabilised the government's finances. After his death, a commission found widespread abuses in the tax collection process. Henry reigned for nearly 24 years and was peacefully succeeded by his son, Henry VIII. Ancestry and early life Henry VII was born at Pembroke Castle on 28 January 1457 to Lady Margaret Beaufort, Countess of Richmond. His father, Edmund Tudor, 1st Earl of Richmond, died three months before his birth. Henry's paternal grandfather, Owen Tudor, originally from the Tudors of Penmynydd, Isle of Anglesey in Wales, had been a page in the court of King Henry V. He rose to become one of the "Squires to the Body to the King" after military service at the Battle of Agincourt. Owen is said to have secretly married the widow of Henry V, Catherine of Valois. One of their sons was Edmund, Henry's father. Edmund was created Earl of Richmond in 1452, and "formally declared legitimate by Parliament". Henry's mother, Margaret, provided Henry's main claim to the English throne through the House of Beaufort. She was a great-granddaughter of John of Gaunt, 1st Duke of Lancaster (fourth son of Edward III), and his third wife Katherine Swynford. Katherine was Gaunt's mistress for about 25 years. When they married in 1396 they already had four children, including Henry's great-grandfather John Beaufort. Thus, Henry's claim was somewhat tenuous; it was from a woman, and by illegitimate descent. In theory, the Portuguese and Castilian royal families had a better claim as descendants of Catherine of Lancaster, the daughter of John of Gaunt and his second wife Constance of Castile. Gaunt's nephew Richard II legitimised Gaunt's children by Katherine Swynford by Letters Patent in 1397. In 1407, Henry IV, Gaunt's son by his first wife, issued new Letters Patent confirming the legitimacy of his half-siblings but also declaring them ineligible for the throne. Henry IV's action was of doubtful legality, as the Beauforts were previously legitimised by an Act of Parliament, but it further weakened Henry's claim. 
Nonetheless, by 1483 Henry was the senior male Lancastrian claimant remaining after the deaths in battle, by murder or execution of Henry VI (son of Henry V and Catherine of Valois), his son Edward of Westminster, Prince of Wales, and the other Beaufort line of descent through Lady Margaret's uncle, Edmund Beaufort, 2nd Duke of Somerset. Henry also made some political capital out of his Welsh ancestry in attracting military support and safeguarding his army's passage through Wales on its way to the Battle of Bosworth. He came from an old, established Anglesey family that claimed descent from Cadwaladr, in legend, the last ancient British king, and on occasion Henry displayed the red dragon of Cadwaladr. He took it, as well as the standard of St. George, on his procession through London after the victory at Bosworth. A contemporary writer and Henry's biographer, Bernard André, also made much of Henry's Welsh descent. In 1456, Henry's father Edmund Tudor was captured while fighting for Henry VI in South Wales against the Yorkists. He died shortly afterwards in Carmarthen Castle. His younger brother, Jasper Tudor, the Earl of Pembroke, undertook to protect Edmund's widow Margaret, who was 13 years old when she gave birth to Henry. When Edward IV became King in 1461, Jasper Tudor went into exile abroad. Pembroke Castle, and later the Earldom of Pembroke, were granted to the Yorkist William Herbert, who also assumed the guardianship of Margaret Beaufort and the young Henry. Henry lived in the Herbert household until 1469, when Richard Neville, Earl of Warwick (the "Kingmaker"), went over to the Lancastrians. Herbert was captured fighting for the Yorkists and executed by Warwick. When Warwick restored Henry VI in 1470, Jasper Tudor returned from exile and brought Henry to court. When the Yorkist Edward IV regained the throne in 1471, Henry fled with other Lancastrians to Brittany. He spent most of the next 14 years under the protection of Francis II, Duke of Brittany. In November 1476, Francis fell ill and his principal advisers were more amenable to negotiating with King Edward. Henry was thus handed over to English envoys and escorted to the Breton port of Saint-Malo. While there, he feigned stomach cramps and delayed his departure long enough to miss the tides. An ally of Henry's, Viscount , soon arrived, bringing news that Francis had recovered, and in the confusion Henry was able to flee to a monastery. There he claimed sanctuary until the envoys were forced to depart. Rise to the throne By 1483, Henry's mother was actively promoting him as an alternative to Richard III, despite her being married to Lord Stanley, a Yorkist. At Rennes Cathedral on Christmas Day 1483, Henry pledged to marry Elizabeth of York, the eldest daughter of Edward IV. She was Edward's heir since the presumed death of her brothers, the Princes in the Tower, King Edward V and Richard of Shrewsbury, Duke of York. With money and supplies borrowed from his host, Francis II of Brittany, Henry tried to land in England, but his conspiracy unravelled resulting in the execution of his primary co-conspirator, Henry Stafford, 2nd Duke of Buckingham. Now supported by Francis II's prime minister, Pierre Landais, Richard III attempted to extradite Henry from Brittany, but Henry escaped to France. He was welcomed by the French, who readily supplied him with troops and equipment for a second invasion. 
Henry gained the support of the Woodvilles, in-laws of the late Edward IV, and sailed with a small French and Scottish force, landing at Mill Bay near Dale, Pembrokeshire. He marched toward England accompanied by his uncle Jasper and John de Vere, 13th Earl of Oxford. Wales was historically a Lancastrian stronghold, and Henry owed the support he gathered to his Welsh birth and ancestry, being agnatically descended from Rhys ap Gruffydd. He amassed an army of about 5,000-6,000 soldiers. Henry devised a plan to seize the throne by engaging Richard quickly because Richard had reinforcements in Nottingham and Leicester. Though outnumbered, Henry's Lancastrian forces decisively defeated Richard's Yorkist army at the Battle of Bosworth Field on 22 August 1485. Several of Richard's key allies, such as Henry Percy, 4th Earl of Northumberland, and also Lord Stanley and his brother William, crucially switched sides or left the battlefield. Richard III's death at Bosworth Field effectively ended the Wars of the Roses. Reign To secure his hold on the throne, Henry declared himself king by right of conquest retroactively from 21 August 1485, the day before Bosworth Field. Thus, anyone who had fought for Richard against him would be guilty of treason and Henry could legally confiscate the lands and property of Richard III, while restoring his own. Henry spared Richard's nephew and designated heir, John de la Pole, Earl of Lincoln, and made the Yorkist heiress Margaret Plantagenet Countess of Salisbury suo jure. He took care not to address the baronage or summon Parliament until after his coronation, which took place in Westminster Abbey on 30 October 1485. After his coronation Henry issued an edict that any gentleman who swore fealty to him would, notwithstanding any previous attainder, be secure in his property and person. Henry honoured his pledge of December 1483 to marry Elizabeth of York. They were third cousins, as both were great-great-grandchildren of John of Gaunt. Henry married Elizabeth of York with the hope of uniting the Yorkist and Lancastrian sides of the Plantagenet dynastic disputes, and he was largely successful. However, such a level of paranoia persisted that anyone (John de la Pole, Earl of Lincoln, for example) with blood ties to the Plantagenets was suspected of coveting the throne. Henry had Parliament repeal Titulus Regius, the statute that declared Edward IV's marriage invalid and his children illegitimate, thus legitimising his wife. Amateur historians Bertram Fields and Sir Clements Markham have claimed that he may have been involved in the murder of the Princes in the Tower, as the repeal of Titulus Regius gave the Princes a stronger claim to the throne than his own. Alison Weir points out that the Rennes ceremony, two years earlier, was plausible only if Henry and his supporters were certain that the Princes were already dead. Henry secured his crown principally by dividing and undermining the power of the nobility, especially through the aggressive use of bonds and recognisances to secure loyalty. He also enacted laws against livery and maintenance, the great lords' practice of having large numbers of "retainers" who wore their lord's badge or uniform and formed a potential private army. Henry began taking precautions against rebellion while still in Leicester after Bosworth Field. Edward, Earl of Warwick, the ten-year-old son of Edward IV's brother George, Duke of Clarence, was the senior surviving male of the House of York. 
Before departing for London, Henry sent Robert Willoughby to Sheriff Hutton in Yorkshire, to arrest Warwick and take him to the Tower of London. Despite such precautions, Henry faced several rebellions over the next twelve years. The first was the 1486 rebellion of the Stafford brothers, abetted by Viscount Lovell, which collapsed without fighting. Next, in 1487, Yorkists led by Lincoln rebelled in support of Lambert Simnel, a boy they claimed to be Edward of Warwick (who was actually a prisoner in the Tower). The rebellion began in Ireland, where the historically Yorkist nobility, headed by the powerful Gerald FitzGerald, 8th Earl of Kildare, proclaimed Simnel king and provided troops for his invasion of England. The rebellion was defeated and Lincoln killed at the Battle of Stoke. Henry showed remarkable clemency to the surviving rebels: he pardoned Kildare and the other Irish nobles, and he made the boy, Simnel, a servant in the royal kitchen where he was in charge of roasting meats on a spit. In 1490, a young Fleming, Perkin Warbeck, appeared and claimed to be Richard of Shrewsbury, the younger of the "Princes in the Tower". Warbeck won the support of Edward IV's sister Margaret, Duchess of Burgundy. He led attempted invasions of Ireland in 1491 and England in 1495, and persuaded James IV of Scotland to invade England in 1496. In 1497 Warbeck landed in Cornwall with a few thousand troops, but was soon captured and executed. When the King's agents searched the property of William Stanley (Chamberlain of the Household, with direct access to Henry VII) they found a bag of coins amounting to around £10,000 and a collar of livery with Yorkist garnishings. Stanley was accused of supporting Warbeck's cause, arrested and later executed. In response to this threat within his own household, the King instituted more rigid security for access to his person. In 1499, Henry had the Earl of Warwick executed. However, he spared Warwick's elder sister Margaret, who survived until 1541 when she was executed by Henry VIII. Economics For most of Henry VII's reign Edward Story was Bishop of Chichester. Story's register still exists and, according to the 19th-century historian W.R.W. Stephens, "affords some illustrations of the avaricious and parsimonious character of the king". It seems that Henry was skillful at extracting money from his subjects on many pretexts, including that of war with France or war with Scotland. The money so extracted added to the King's personal fortune rather than being used for the stated purpose. Unlike his predecessors, Henry VII came to the throne without personal experience in estate management or financial administration. Despite this, during his reign he became a fiscally prudent monarch who restored the fortunes of an effectively bankrupt exchequer. Henry VII introduced stability to the financial administration of England by keeping the same financial advisors throughout his reign. For instance, except for the first few months of the reign, the Baron Dynham and the Earl of Surrey were the only Lord High Treasurers throughout his reign. Henry VII improved tax collection in the realm by introducing ruthlessly efficient mechanisms of taxation. 
He was supported in this effort by his chancellor, Archbishop John Morton, whose "Morton's Fork" was a catch-22 method of ensuring that nobles paid increased taxes: those nobles who spent little must have saved much, and thus could afford the increased taxes; in contrast, those nobles who spent much obviously had the means to pay the increased taxes. The capriciousness and lack of due process that indebted many would tarnish his legacy; these practices ended soon after Henry VII's death, when a commission revealed widespread abuses. According to the contemporary historian Polydore Vergil, simple "greed" underscored the means by which royal control was over-asserted in Henry's final years. Following Henry VII's death, Henry VIII executed Richard Empson and Edmund Dudley, his two most hated tax collectors, on trumped-up charges of treason. Henry VII established the pound avoirdupois as a standard of weight; it later became part of the Imperial and customary systems of units. Foreign policy Henry VII's policy was both to maintain peace and to create economic prosperity. Up to a point, he succeeded. The Treaty of Redon was signed in February 1489 between Henry and representatives of Brittany. Under the terms of the accord, Henry sent 6,000 troops to fight (at the expense of Brittany) under the command of Lord Daubeney. The purpose of the agreement was to prevent France from annexing Brittany. According to John M. Currin, the treaty redefined Anglo-Breton relations: Henry started a new policy to recover Guyenne and other lost Plantagenet claims in France, and the treaty marked a shift from neutrality over the French invasion of Brittany to active intervention against it. Henry later concluded a treaty with France at Etaples that brought money into the coffers of England, and ensured the French would not support pretenders to the English throne, such as Perkin Warbeck. However, this treaty came at a price, as Henry mounted a minor invasion of Brittany in November 1492. Henry decided to keep Brittany out of French hands, signed an alliance with Spain to that end, and sent 6,000 troops to France. The confused, fractious nature of Breton politics undermined his efforts, which finally failed after three sizeable expeditions, at a cost of £24,000. However, as France was becoming more concerned with the Italian Wars, the French were happy to agree to the Treaty of Etaples. Henry had pressured the French by laying siege to Boulogne in October 1492. Henry had been under the financial and physical protection of the French throne or its vassals for most of his life before becoming king. 
reform was undertaken and its role eventually devolved to the localities. Wolsey helped fill the gap left by Henry's declining participation in government (particularly in comparison to his father) but did so mostly by imposing himself in the king's place. His use of these courts to pursue personal grievances, and particularly to treat delinquents as mere examples of a whole class worthy of punishment, angered the rich, who were annoyed as well by his enormous wealth and ostentatious living. Following Wolsey's downfall, Henry took full control of his government, although at court numerous complex factions continued to try to ruin and destroy each other. Thomas Cromwell (c. 1485–1540) also came to define Henry's government. Returning to England from the continent in 1514 or 1515, Cromwell soon entered Wolsey's service. He turned to law, also picking up a good knowledge of the Bible, and was admitted to Gray's Inn in 1524. He became Wolsey's "man of all work". Driven in part by his religious beliefs, Cromwell attempted to reform the body politic of the English government through discussion and consent, and through the vehicle of continuity, not outward change. Many saw him as the man they wanted to bring about their shared aims, including Thomas Audley. By 1531, Cromwell and his associates were already responsible for the drafting of much legislation. Cromwell's first office was that of the master of the king's jewels in 1532, from which he began to invigorate the government finances. By that point, Cromwell's power as an efficient administrator, in a Council full of politicians, exceeded what Wolsey had achieved. Cromwell did much work through his many offices to remove the tasks of government from the Royal Household (and ideologically from the personal body of the king) and into a public state. But he did so in a haphazard fashion that left several remnants, not least because he needed to retain Henry's support, his own power, and the possibility of actually achieving the plan he set out. Cromwell made the various income streams Henry VII put in place more formal and assigned largely autonomous bodies for their administration. The role of the King's Council was transferred to a reformed Privy Council, much smaller and more efficient than its predecessor. A difference emerged between the king's financial health and the country's, although Cromwell's fall undermined much of his bureaucracy, which required him to keep order among the many new bodies and prevent profligate spending that strained relations as well as finances. Cromwell's reforms ground to a halt in 1539, the initiative lost, and he failed to secure the passage of an enabling act, the Proclamation by the Crown Act 1539. He was executed on 28 July 1540. Finances Henry inherited a vast fortune and a prosperous economy from his father, who had been frugal. This fortune is estimated at £1,250,000 (the equivalent of £375 million today). By comparison, Henry VIII's reign was a near disaster financially. He augmented the royal treasury by seizing church lands, but his heavy spending and long periods of mismanagement damaged the economy. Henry spent much of his wealth on maintaining his court and household, including many of the building works he undertook on royal palaces. He hung 2,000 tapestries in his palaces; by comparison, James V of Scotland hung just 200. Henry took pride in showing off his collection of weapons, which included exotic archery equipment, 2,250 pieces of land ordnance and 6,500 handguns. 
Tudor monarchs had to fund all government expenses out of their own income. This income came from the Crown lands that Henry owned as well as from customs duties like tonnage and poundage, granted by parliament to the king for life. During Henry's reign the revenues of the Crown remained constant (around £100,000), but were eroded by inflation and rising prices brought about by war. Indeed, war and Henry's dynastic ambitions in Europe exhausted the surplus he had inherited from his father by the mid-1520s. Henry VII had not involved Parliament in his affairs very much, but Henry VIII had to turn to Parliament during his reign for money, in particular for grants of subsidies to fund his wars. The dissolution of the monasteries provided a means to replenish the treasury, and as a result, the Crown took possession of monastic lands worth £120,000 (£36 million) a year. The Crown had profited by a small amount in 1526 when Wolsey put England onto a gold, rather than silver, standard, and had debased the currency slightly. Cromwell debased the currency more significantly, starting in Ireland in 1540. The English pound halved in value against the Flemish pound between 1540 and 1551 as a result. The nominal profit made was significant, helping to bring income and expenditure together, but it had a catastrophic effect on the country's economy. In part, it helped to bring about a period of very high inflation from 1544 onwards. Reformation Henry is generally credited with initiating the English Reformation—the process of transforming England from a Catholic country to a Protestant one—though his progress at the elite and mass levels is disputed, and the precise narrative not widely agreed upon. Certainly, in 1527, Henry, until then an observant and well-informed Catholic, appealed to the Pope for an annulment of his marriage to Catherine. No annulment was immediately forthcoming, since the papacy was now under the control of Charles V, Catherine' s nephew. The traditional narrative gives this refusal as the trigger for Henry's rejection of papal supremacy, which he had previously defended. Yet as E. L. Woodward put it, Henry's determination to annul his marriage with Catherine was the occasion rather than the cause of the English Reformation so that "neither too much nor too little" should be made of the annulment. Historian A. F. Pollard has argued that even if Henry had not needed an annulment, he might have come to reject papal control over the governance of England purely for political reasons. Indeed, Henry needed a son to secure the Tudor Dynasty and avert the risk of civil war over disputed succession. In any case, between 1532 and 1537, Henry instituted a number of statutes that dealt with the relationship between king and pope and hence the structure of the nascent Church of England. These included the Statute in Restraint of Appeals (passed 1533), which extended the charge of praemunire against all who introduced papal bulls into England, potentially exposing them to the death penalty if found guilty. Other acts included the Supplication against the Ordinaries and the Submission of the Clergy, which recognised Royal Supremacy over the church. The Ecclesiastical Appointments Act 1534 required the clergy to elect bishops nominated by the Sovereign. The Act of Supremacy in 1534 declared that the king was "the only Supreme Head on Earth of the Church of England" and the Treasons Act 1534 made it high treason, punishable by death, to refuse the Oath of Supremacy acknowledging the king as such. 
Similarly, following the passage of the Act of Succession 1533, all adults in the kingdom were required to acknowledge the Act's provisions (declaring Henry's marriage to Anne legitimate and his marriage to Catherine illegitimate) by oath; those who refused were subject to imprisonment for life, and any publisher or printer of any literature alleging that the marriage to Anne was invalid subject to the death penalty. Finally, the Peter's Pence Act was passed, and it reiterated that England had "no superior under God, but only your Grace" and that Henry's "imperial crown" had been diminished by "the unreasonable and uncharitable usurpations and exactions" of the Pope. The king had much support from the Church under Cranmer. To Cromwell's annoyance, Henry insisted on parliamentary time to discuss questions of faith, which he achieved through the Duke of Norfolk. This led to the passing of the Act of Six Articles, whereby six major questions were all answered by asserting the religious orthodoxy, thus restraining the reform movement in England. It was followed by the beginnings of a reformed liturgy and of the Book of Common Prayer, which would take until 1549 to complete. But this victory for religious conservatives did not convert into much change in personnel, and Cranmer remained in his position. Overall, the rest of Henry's reign saw a subtle movement away from religious orthodoxy, helped in part by the deaths of prominent figures from before the break with Rome, especially the executions of Thomas More and John Fisher in 1535 for refusing to renounce papal authority. Henry established a new political theology of obedience to the crown that continued for the next decade. It reflected Martin Luther's new interpretation of the fourth commandment ("Honour thy father and mother"), brought to England by William Tyndale. The founding of royal authority on the Ten Commandments was another important shift: reformers within the Church used the Commandments' emphasis on faith and the word of God, while conservatives emphasised the need for dedication to God and doing good. The reformers' efforts lay behind the publication of the Great Bible in 1539 in English. Protestant Reformers still faced persecution, particularly over objections to Henry's annulment. Many fled abroad, including the influential Tyndale, who was eventually executed and his body burned at Henry's behest. When taxes once payable to Rome were transferred to the Crown, Cromwell saw the need to assess the taxable value of the Church's extensive holdings as they stood in 1535. The result was an extensive compendium, the Valor Ecclesiasticus. In September 1535, Cromwell commissioned a more general visitation of religious institutions, to be undertaken by four appointee visitors. The visitation focussed almost exclusively on the country's religious houses, with largely negative conclusions. In addition to reporting back to Cromwell, the visitors made the lives of the monks more difficult by enforcing strict behavioural standards. The result was to encourage self-dissolution. In any case, the evidence Cromwell gathered led swiftly to the beginning of the state-enforced dissolution of the monasteries, with all religious houses worth less than £200 vested by statute in the crown in January 1536. After a short pause, surviving religious houses were transferred one by one to the Crown and new owners, and the dissolution confirmed by a further statute in 1539. By January 1540 no such houses remained; 800 had been dissolved. 
The process had been efficient, with minimal resistance, and brought the crown some £90,000 a year. The extent to which the dissolution of all houses was planned from the start is debated by historians; there is some evidence that major houses were originally intended only to be reformed. Cromwell's actions transferred a fifth of England's landed wealth to new hands. The programme was designed primarily to create a landed gentry beholden to the crown, which would use the lands much more efficiently. Although little opposition to the supremacy could be found in England's religious houses, they had links to the international church and were an obstacle to further religious reform. Response to the reforms was mixed. The religious houses had been the only support of the impoverished, and the reforms alienated much of the populace outside London, helping to provoke the great northern rising of 1536–37, known as the Pilgrimage of Grace. Elsewhere the changes were accepted and welcomed, and those who clung to Catholic rites kept quiet or moved in secrecy. They reemerged during the reign of Henry's daughter Mary (1553–58). Military Apart from permanent garrisons at Berwick, Calais, and Carlisle, England's standing army numbered only a few hundred men. This was increased only slightly by Henry. Henry's invasion force of 1513, some 30,000 men, was composed of billmen and longbowmen, at a time when the other European nations were moving to hand guns and pikemen. But the difference in capability was at this stage not significant, and Henry's forces had new armour and weaponry. They were also supported by battlefield artillery and the war wagon, relatively new innovations, and several large and expensive siege guns. The invasion force of 1544 was similarly well-equipped and organised, although command on the battlefield was laid with the dukes of Suffolk and Norfolk, which in the latter case produced disastrous results at Montreuil. Henry's break with Rome incurred the threat of a large-scale French or Spanish invasion. To guard against this, in 1538 he began to build a chain of expensive, state-of-the-art defences along Britain's southern and eastern coasts, from Kent to Cornwall, largely built of material gained from the demolition of the monasteries. These were known as Henry VIII's Device Forts. He also strengthened existing coastal defence fortresses such as Dover Castle and, at Dover, Moat Bulwark and Archcliffe Fort, which he visited for a few months to supervise. Wolsey had many years before conducted the censuses required for an overhaul of the system of militia, but no reform resulted. In 1538–39, Cromwell overhauled the shire musters, but his work mainly served to demonstrate how inadequate they were in organisation. The building works, including that at Berwick, along with the reform of the militias and musters, were eventually finished under Queen Mary. Henry is traditionally cited as one of the founders of the Royal Navy. Technologically, Henry invested in large cannon for his warships, an idea that had taken hold in other countries, to replace the smaller serpentines in use. He also flirted with designing ships personally. His contribution to larger vessels, if any, is unknown, but it is believed that he influenced the design of rowbarges and similar galleys. Henry was also responsible for the creation of a permanent navy, with the supporting anchorages and dockyards. Tactically, Henry's reign saw the Navy move away from boarding tactics to employ gunnery instead. 
The Tudor navy was enlarged up to 50 ships (the Mary Rose among them), and Henry was responsible for the establishment of the "council for marine causes" to oversee the maintenance and operation of the Navy, becoming the basis for the later Admiralty. Ireland At the beginning of Henry's reign, Ireland was effectively divided into three zones: the Pale, where English rule was unchallenged; Leinster and Munster, the so-called "obedient land" of Anglo-Irish peers; and the Gaelic Connaught and Ulster, with merely nominal English rule. Until 1513, Henry continued the policy of his father, to allow Irish lords to rule in the king's name and accept steep divisions between the communities. However, upon the death of the 8th Earl of Kildare, governor of Ireland, fractious Irish politics combined with a more ambitious Henry to cause trouble. When Thomas Butler, 7th Earl of Ormond died, Henry recognised one successor for Ormond's English, Welsh and Scottish lands, whilst in Ireland another took control. Kildare's successor, the 9th Earl, was replaced as Lord Lieutenant of Ireland by The Earl of Surrey in 1520. Surrey's ambitious aims were costly but ineffective; English rule became trapped between winning the Irish lords over with diplomacy, as favoured by Henry and Wolsey, and a sweeping military occupation as proposed by Surrey. Surrey was recalled in 1521, with Piers Butler – one of the claimants to the Earldom of Ormond – appointed in his place. Butler proved unable to control opposition, including that of Kildare. Kildare was appointed chief governor in 1524, resuming his dispute with Butler, which had before been in a lull. Meanwhile, the Earl of Desmond, an Anglo-Irish peer, had turned his support to Richard de la Pole as pretender to the English throne; when in 1528 Kildare failed to take suitable actions against him, Kildare was once again removed from his post. The Desmond situation was resolved on his death in 1529, which was followed by a period of uncertainty. This was effectively ended with the appointment of Henry FitzRoy, Duke of Richmond and the king's son, as lord lieutenant. Richmond had never before visited Ireland, his appointment a break with past policy. For a time it looked as if peace might be restored with the return of Kildare to Ireland to manage the tribes, but the effect was limited and the Irish parliament soon rendered ineffective. Ireland began to receive the attention of Cromwell, who had supporters of Ormond and Desmond promoted. Kildare, on the other hand, was summoned to London; after some hesitation, he departed for London in 1534, where he would face charges of treason. His son, Thomas, Lord Offaly was more forthright, denouncing the king and leading a "Catholic crusade" against the king, who was by this time mired in marital problems. Offaly had the Archbishop of Dublin murdered and besieged Dublin. Offaly led a mixture of Pale gentry and Irish tribes, although he failed to secure the support of Lord Darcy, a sympathiser, or Charles V. What was effectively a civil war was ended with the intervention of 2,000 English troops – a large army by Irish standards – and the execution of Offaly (his father was already dead) and his uncles. Although the Offaly revolt was followed by a determination to rule Ireland more closely, Henry was wary of drawn-out conflict with the tribes, and a royal commission recommended that the only relationship with the tribes was to be promises of peace, their land protected from English expansion. 
The man chosen to lead this effort was Sir Antony St Leger, as Lord Deputy of Ireland, who would remain in the post past Henry's death. Until the break with Rome, it was widely believed that Ireland was a Papal possession granted as a mere fiefdom to the English king, so in 1541 Henry asserted England's claim to the Kingdom of Ireland, free from Papal overlordship. This change did, however, also allow a policy of peaceful reconciliation and expansion: the Lords of Ireland would grant their lands to the king, and have them returned as fiefdoms. The incentive to comply with Henry's request was an accompanying barony, and thus a right
to Catherine annulled, forgoing at least one less openly defiant line of attack. In going public, all hope of tempting Catherine to retire to a nunnery or otherwise stay quiet was lost. Henry sent his secretary, William Knight, to appeal directly to the Holy See by way of a deceptively worded draft papal bull. Knight was unsuccessful; the Pope could not be misled so easily. Other missions concentrated on arranging an ecclesiastical court to meet in England, with a representative from Clement VII. Although Clement agreed to the creation of such a court, he never had any intention of empowering his legate, Lorenzo Campeggio, to decide in Henry's favour. This bias was perhaps the result of pressure from Emperor Charles V, Catherine's nephew, but it is not clear how far this influenced either Campeggio or the Pope. After less than two months of hearing evidence, Clement called the case back to Rome in July 1529, from which it was clear that it would never re-emerge. With the chance for an annulment lost, Cardinal Wolsey bore the blame. He was charged with praemunire in October 1529, and his fall from grace was "sudden and total". Briefly reconciled with Henry (and officially pardoned) in the first half of 1530, he was charged once more in November 1530, this time for treason, but died while awaiting trial. After a short period in which Henry took government upon his own shoulders, Sir Thomas More took on the role of Lord Chancellor and chief minister. Intelligent and able, but also a devout Catholic and opponent of the annulment, More initially cooperated with the king's new policy, denouncing Wolsey in Parliament. A year later, Catherine was banished from court, and her rooms were given to Anne Boleyn. Anne was an unusually educated and intellectual woman for her time and was keenly absorbed and engaged with the ideas of the Protestant Reformers, but the extent to which she herself was a committed Protestant is much debated. When Archbishop of Canterbury William Warham died, Anne's influence and the need to find a trustworthy supporter of the annulment had Thomas Cranmer appointed to the vacant position. This was approved by the Pope, unaware of the king's nascent plans for the Church. Henry was married to Catherine for 24 years. Their divorce has been described as a "deeply wounding and isolating" experience for Henry. Marriage to Anne Boleyn In the winter of 1532, Henry met with Francis I at Calais and enlisted the support of the French king for his new marriage. Immediately upon returning to Dover in England, Henry, now 41, and Anne went through a secret wedding service. She soon became pregnant, and there was a second wedding service in London on 25 January 1533. On 23 May 1533, Cranmer, sitting in judgment at a special court convened at Dunstable Priory to rule on the validity of the king's marriage to Catherine of Aragon, declared the marriage of Henry and Catherine null and void. Five days later, on 28 May 1533, Cranmer declared the marriage of Henry and Anne to be valid. Catherine was formally stripped of her title as queen, becoming instead "princess dowager" as the widow of Arthur. In her place, Anne was crowned queen consort on 1 June 1533. The queen gave birth to a daughter slightly prematurely on 7 September 1533. The child was christened Elizabeth, in honour of Henry's mother, Elizabeth of York. 
Following the marriage, there was a period of consolidation, taking the form of a series of statutes of the Reformation Parliament aimed at finding solutions to any remaining issues, whilst protecting the new reforms from challenge, convincing the public of their legitimacy, and exposing and dealing with opponents. Although the canon law was dealt with at length by Cranmer and others, these acts were advanced by Thomas Cromwell, Thomas Audley and the Duke of Norfolk and indeed by Henry himself. With this process complete, in May 1532 More resigned as Lord Chancellor, leaving Cromwell as Henry's chief minister. With the Act of Succession 1533, Catherine's daughter, Mary, was declared illegitimate; Henry's marriage to Anne was declared legitimate; and Anne's issue declared to be next in the line of succession. With the Acts of Supremacy in 1534, Parliament also recognised the king's status as head of the church in England and, together with the Act in Restraint of Appeals in 1532, abolished the right of appeal to Rome. It was only then that Pope Clement VII took the step of excommunicating the king and Cranmer, although the excommunication was not made official until some time later. The king and queen were not pleased with married life. The royal couple enjoyed periods of calm and affection, but Anne refused to play the submissive role expected of her. The vivacity and opinionated intellect that had made her so attractive as an illicit lover made her too independent for the largely ceremonial role of a royal wife and it made her many enemies. For his part, Henry disliked Anne's constant irritability and violent temper. After a false pregnancy or miscarriage in 1534, he saw her failure to give him a son as a betrayal. As early as Christmas 1534, Henry was discussing with Cranmer and Cromwell the chances of leaving Anne without having to return to Catherine. Henry is traditionally believed to have had an affair with Madge Shelton in 1535, although historian Antonia Fraser argues that Henry in fact had an affair with her sister Mary Shelton. Opposition to Henry's religious policies was quickly suppressed in England. A number of dissenting monks, including the first Carthusian Martyrs, were executed and many more pilloried. The most prominent resisters included John Fisher, Bishop of Rochester, and Sir Thomas More, both of whom refused to take the oath to the king. Neither Henry nor Cromwell sought at that stage to have the men executed; rather, they hoped that the two might change their minds and save themselves. Fisher openly rejected Henry as the Supreme Head of the Church, but More was careful to avoid openly breaking the Treasons Act of 1534, which (unlike later acts) did not forbid mere silence. Both men were subsequently convicted of high treason, however – More on the evidence of a single conversation with Richard Rich, the Solicitor General, and both were executed in the summer of 1535. These suppressions, as well as the Dissolution of the Lesser Monasteries Act of 1536, in turn contributed to more general resistance to Henry's reforms, most notably in the Pilgrimage of Grace, a large uprising in northern England in October 1536. Some 20,000 to 40,000 rebels were led by Robert Aske, together with parts of the northern nobility. Henry VIII promised the rebels he would pardon them and thanked them for raising the issues. Aske told the rebels they had been successful and they could disperse and go home. 
Henry saw the rebels as traitors and did not feel obliged to keep his promises to them, so when further violence occurred after Henry's offer of a pardon he was quick to break his promise of clemency. The leaders, including Aske, were arrested and executed for treason. In total, about 200 rebels were executed, and the disturbances ended. Execution of Anne Boleyn On 8 January 1536, news reached the king and queen that Catherine of Aragon had died. The following day, Henry dressed all in yellow, with a white feather in his bonnet. Queen Anne was pregnant again, and she was aware of the consequences if she failed to give birth to a son. Later that month, the king was unhorsed in a tournament and was badly injured; it seemed for a time that his life was in danger. When news of this accident reached the queen, she was sent into shock and miscarried a male child at about 15 weeks' gestation, on the day of Catherine's funeral, 29 January 1536. For most observers, this personal loss was the beginning of the end of this royal marriage. Although the Boleyn family still held important positions on the Privy Council, Anne had many enemies, including the Duke of Suffolk. Even her own uncle, the Duke of Norfolk, had come to resent her attitude to her power. The Boleyns preferred France over the Emperor as a potential ally, but the king's favour had swung towards the latter (partly because of Cromwell), damaging the family's influence. Also opposed to Anne were supporters of reconciliation with Princess Mary (among them the former supporters of Catherine), who had reached maturity. A second annulment was now a real possibility, although it is commonly believed that it was Cromwell's anti-Boleyn influence that led opponents to look for a way of having her executed. Anne's downfall came shortly after she had recovered from her final miscarriage. Whether it was primarily the result of allegations of conspiracy, adultery, or witchcraft remains a matter of debate among historians. Early signs of a fall from grace included the king's new mistress, the 28-year-old Jane Seymour, being moved into new quarters, and Anne's brother, George Boleyn, being refused the Order of the Garter, which was instead given to Nicholas Carew. Between 30 April and 2 May, five men, including George Boleyn, were arrested on charges of treasonable adultery and accused of having sexual relationships with the queen. Anne was also arrested, accused of treasonous adultery and incest. Although the evidence against them was unconvincing, the accused were found guilty and condemned to death. The accused men were executed on 17 May 1536. Henry and Anne's marriage was annulled by Archbishop Cranmer at Lambeth on the same day. Cranmer appears to have had difficulty finding grounds for an annulment and probably based it on the prior liaison between Henry and Anne's sister Mary, which in canon law meant that Henry's marriage to Anne was, like his first marriage, within a forbidden degree of affinity and therefore void. At 8 am on 19 May 1536, Anne was executed on Tower Green. Marriage to Jane Seymour; domestic and foreign affairs The day after Anne's execution the 45-year-old Henry became engaged to Seymour, who had been one of the queen's ladies-in-waiting. They were married ten days later at the Palace of Whitehall, Whitehall, London, in the queen's closet, by Stephen Gardiner, Bishop of Winchester. On 12 October 1537, Jane gave birth to a son, Prince Edward, the future Edward VI. 
The birth was difficult, and Queen Jane died on 24 October 1537 from an infection and was buried in Windsor. The euphoria that had accompanied Edward's birth became sorrow, but it was only over time that Henry came to long for his wife. At the time, Henry recovered quickly from the shock. Measures were immediately put in place to find another wife for Henry, which, at the insistence of Cromwell and the Privy Council, were focused on the European continent. With Charles V distracted by the internal politics of his many kingdoms and also external threats, and Henry and Francis on relatively good terms, domestic and not foreign policy issues had been Henry's priority in the first half of the 1530s. In 1536, for example, Henry granted his assent to the Laws in Wales Act 1535, which legally annexed Wales, uniting England and Wales into a single nation. This was followed by the Second Succession Act (the Act of Succession 1536), which declared Henry's children by Jane to be next in the line of succession and declared both Mary and Elizabeth illegitimate, thus excluding them from the throne. The king was also granted the power to further determine the line of succession in his will, should he have no further issue. However, when Charles and Francis made peace in January 1539, Henry became increasingly paranoid, perhaps as a result of receiving a constant list of threats to the kingdom (real or imaginary, minor or serious) supplied by Cromwell in his role as spymaster. Enriched by the dissolution of the monasteries, Henry used some of his financial reserves to build a series of coastal defences and set some aside for use in the event of a Franco-German invasion. Marriage to Anne of Cleves Having considered the matter, Cromwell suggested Anne, the 25-year-old sister of the Duke of Cleves, who was seen as an important ally in case of a Roman Catholic attack on England, for the duke fell between Lutheranism and Catholicism. Hans Holbein the Younger was dispatched to Cleves to paint a portrait of Anne for the king. Despite speculation that Holbein painted her in an overly flattering light, it is more likely that the portrait was accurate; Holbein remained in favour at court. After seeing Holbein's portrait, and urged on by the complimentary description of Anne given by his courtiers, the 49-year-old king agreed to wed Anne. However, it was not long before Henry wished to annul the marriage so he could marry another. Anne did not argue, and confirmed that the marriage had never been consummated. Anne's previous betrothal to the Duke of Lorraine's son Francis provided further grounds for the annulment. The marriage was subsequently dissolved, and Anne received the title of "The King's Sister", two houses, and a generous allowance. It was soon clear that Henry had fallen for the 17-year-old Catherine Howard, the Duke of Norfolk's niece. This worried Cromwell, for Norfolk was his political opponent. Shortly after, the religious reformers (and protégés of Cromwell) Robert Barnes, William Jerome and Thomas Garret were burned as heretics. Cromwell, meanwhile, fell out of favour although it is unclear exactly why, for there is little evidence of differences in domestic or foreign policy. Despite his role, he was never formally accused of being responsible for Henry's failed marriage. Cromwell was now surrounded by enemies at court, with Norfolk also able to draw on his niece Catherine's position. 
Cromwell was charged with treason, selling export licences, granting passports, and drawing up commissions without permission, and may also have been blamed for the failure of the foreign policy that accompanied the attempted marriage to Anne. He was subsequently attainted and beheaded. Marriage to Catherine Howard On 28 July 1540 (the same day Cromwell was executed), Henry married the young Catherine Howard, a first cousin and lady-in-waiting of Anne Boleyn. He was absolutely delighted with his new queen and awarded her the lands of Cromwell and a vast array of jewellery. Soon after the marriage, however, Queen Catherine had an affair with the courtier Thomas Culpeper. She also employed Francis Dereham, who had previously been informally engaged to her and had an affair with her prior to her marriage, as her secretary. The Privy Council was informed of her affair with Dereham whilst Henry was away; Thomas Cranmer was dispatched to investigate, and he brought evidence of Queen Catherine's previous affair with Dereham to the king's notice. Though Henry originally refused to believe the allegations, Dereham confessed. It took another meeting of the council, however, before Henry believed the accusations against Dereham and went into a rage, blaming the council before consoling himself in hunting. When questioned, the queen could have admitted a prior contract to marry Dereham, which would have made her subsequent marriage to Henry invalid, but she instead claimed that Dereham had forced her to enter into an adulterous relationship. Dereham, meanwhile, exposed Catherine's relationship with Culpeper. Culpeper and Dereham were both executed, and Catherine too was beheaded on 13 February 1542. Marriage to Catherine Parr Henry married his last wife, the wealthy widow Catherine Parr, in July 1543. A reformer at heart, she argued with Henry over religion. Henry remained committed to an idiosyncratic mixture of Catholicism and Protestantism; the reactionary mood that had gained ground after Cromwell's fall had neither eliminated his Protestant streak nor been overcome by it. Parr helped reconcile Henry with his daughters, Mary and Elizabeth. In 1543, the Third Succession Act put them back in the line of succession after Edward. The same act allowed Henry to determine further succession to the throne in his will. Shrines destroyed and monasteries dissolved In 1538, the chief minister Thomas Cromwell pursued an extensive campaign against what the government termed "idolatry" practised under the old religion, culminating in September with the dismantling of the shrine of St. Thomas Becket at Canterbury Cathedral. As a consequence, the king was excommunicated by Pope Paul III on 17 December of the same year. In 1540, Henry sanctioned the complete destruction of shrines to saints. In 1542, England's remaining monasteries were all dissolved, and their property transferred to the Crown. Abbots and priors lost their seats in the House of Lords; only archbishops and bishops remained. Consequently, the Lords Spiritual—as members of the clergy with seats in the House of Lords were known—were for the first time outnumbered by the Lords Temporal. Second invasion of France and the "Rough Wooing" of Scotland The 1539 alliance between Francis and Charles had soured, eventually degenerating into renewed war. 
With Catherine of Aragon and Anne Boleyn dead, relations between Charles and Henry improved considerably, and Henry concluded a secret alliance with the Emperor and decided to enter the Italian War in favour of his new ally. An invasion of France was planned for 1543. In preparation for it, Henry moved to eliminate the potential threat of Scotland under the youthful James V. The Scots were defeated at the Battle of Solway Moss on 24 November 1542, and James died on 15 December. Henry now hoped to unite the crowns of England and Scotland by marrying his son Edward to James' successor, Mary. The Scottish Regent Lord Arran agreed to the marriage in the Treaty of Greenwich on 1 July 1543, but it was rejected by the Parliament of Scotland on 11 December. The result was eight years of war between England and Scotland, a campaign later dubbed "the Rough Wooing". Despite several peace treaties, unrest continued in Scotland until Henry's death. Despite the early success with Scotland, Henry hesitated to invade France, annoying Charles. Henry finally went to France in June 1544 with a two-pronged attack. One force under Norfolk ineffectively besieged Montreuil. The other, under Suffolk, laid siege to Boulogne. Henry later took personal command, and Boulogne fell on 18 September 1544. However, Henry had refused Charles' request to march against Paris. Charles' own campaign fizzled, and he made peace with France that same day. Henry was left alone against France, unable to make peace. Francis attempted to invade England in the summer of 1545 but reached only the Isle of Wight before being repulsed in the Battle of the Solent. Financially exhausted, France and England signed the Treaty of Camp on 7 June 1546. Henry secured Boulogne for eight years. The city was then to be returned to France for 2 million crowns (£750,000). Henry needed the money; the 1544 campaign had cost £650,000, and England was once again bankrupt. Physical decline and death Late in life, Henry became obese and had to be moved about with the help of mechanical devices. He was covered with painful, pus-filled boils and possibly suffered from gout. His obesity and other medical problems can be traced to the jousting accident in 1536 in which he suffered a leg wound. The accident reopened and aggravated an injury he had sustained years earlier, to the extent that his doctors found it difficult to treat. The chronic wound festered for the remainder of his life and became ulcerated, preventing him from maintaining the level of physical activity he had previously enjoyed. The jousting accident is also believed to have caused Henry's mood swings, which may have had a dramatic effect on his personality and temperament. The theory that Henry suffered from syphilis has been dismissed by most historians. Historian Susan Maclean Kybett ascribes his demise to scurvy, which is caused by insufficient vitamin C, most often due to a lack of fresh fruit and vegetables in the diet. Alternatively, his wives' pattern of pregnancies and his mental deterioration have led some to suggest that he may have been Kell positive and suffered from McLeod syndrome. According to another study, Henry's history and body morphology may have been the result of traumatic brain injury after his 1536 jousting accident, which in turn led to a neuroendocrine cause of his obesity. 
This analysis identifies growth hormone deficiency (GHD) as the reason not only for his increased adiposity but also for the significant behavioural changes noted in his later years, including his multiple marriages. Henry's obesity hastened his death at the age of 55, on 28 January 1547 in the Palace of Whitehall, on what would have been his father's 90th birthday. The tomb he had planned (with components taken from the tomb intended for Cardinal Wolsey) was only partly constructed and was never completed. (The sarcophagus and its base were later removed and used for Lord Nelson's tomb in the crypt of St. Paul's Cathedral.) Henry was interred in a vault at St George's Chapel, Windsor Castle, next to Jane Seymour. Over 100 years later, King Charles I (ruled 1625–1649) was buried in the same vault. Wives, mistresses, and children English historian and House of Tudor expert David Starkey describes Henry VIII as a husband: What is extraordinary is that Henry was usually a very good husband. And he liked women—that's why he married so many of them! He was
the Drishadwati/Saraswati river. Major canals are the Western Yamuna Canal, Sutlej Yamuna link canal (from the Sutlej river, a tributary of the Indus), and Indira Gandhi Canal. Major dams are Kaushalya Dam in Panchkula district, Hathnikund Barrage and Tajewala Barrage on the Yamuna in Yamunanagar district, Pathrala barrage on the Somb river in Yamunanagar district, the ancient Anagpur Dam near Surajkund in Faridabad district, and Ottu barrage on the Ghaggar-Hakra River in Sirsa district. Major lakes are Dighal Wetland, Basai Wetland, Badkhal Lake in Faridabad, the holy Brahma Sarovar and Sannihit Sarovar in Kurukshetra, Blue Bird Lake in Hisar, Damdama Lake at Sohna in Gurugram district, Hathni Kund in Yamunanagar district, Karna Lake at Karnal, the ancient Surajkund in Faridabad, and Tilyar Lake in Rohtak. The Haryana State Waterbody Management Board is responsible for the rejuvenation of 14,000 johads of Haryana and up to 60 lakes in the National Capital Region falling within the state. The only hot spring of Haryana is the Sohna Sulphur Hot Spring at Sohna in Gurgaon district. The Tosham Hill range has several sacred sulphur ponds of religious significance that are revered for the healing impact of sulphur, such as Pandu Teerth Kund, Surya Kund, Kukkar Kund, Gyarasia Kund or Vyas Kund. Seasonal waterfalls include Tikkar Taal twin lakes at Morni hills, Dhosi Hill in Mahendragarh district and Pali village on the outskirts of Faridabad. Climate Haryana is extremely hot in summer at around and mild in winter. The hottest months are May and June and the coldest December and January. The climate is arid to semi-arid with average rainfall of 354.5 mm. Around 29% of rainfall is received during the months from July to September, and the remaining rainfall is received during the period from December to February. Flora and fauna Forests Forest cover in the state in 2013 was 3.59% (1586 km2) and the tree cover was 2.90% (1282 km2), giving a total forest and tree cover of 6.49%. In 2016–17, 18,412 hectares were brought under tree cover by planting 14.1 million seedlings. Thorny, dry, deciduous forest and thorny shrubs can be found all over the state. During the monsoon, a carpet of grass covers the hills. Mulberry, eucalyptus, pine, kikar, shisham and babul are some of the trees found here. The species of fauna found in the state of Haryana include black buck, nilgai, panther, fox, mongoose, jackal and wild dog. More than 450 species of birds are found here. Wildlife Haryana has two national parks, eight wildlife sanctuaries, two wildlife conservation areas, four animal and bird breeding centers, one deer park and three zoos, all of which are managed by the Haryana Forest Department of the Government of Haryana. Sultanpur National Park is a notable park located in Gurugram district. Environmental and ecological issues The Haryana Environment Protection Council is the advisory committee, and the Department of Environment, Haryana is the department responsible for the administration of the environment. Areas of Haryana surrounding Delhi NCR are the most polluted. During the smog of November 2017, the air quality index of Gurgaon and Faridabad showed that the density of fine particulate matter (PM2.5) averaged around 400, while the monthly average for Haryana was 60. Other sources of pollution are exhaust gases from old vehicles, stone crushers and brick kilns. Haryana has 7.5 million old vehicles, of which 40% are older, more polluting vehicles, and about 500,000 new vehicles are added every year.
Other heavily polluted cities are Bhiwani, Bahadurgarh, Dharuhera, Hisar and Yamunanagar. Administration Divisions The state is divided into 6 revenue divisions, 5 police ranges and 3 police commissionerates (c. January 2017). The six revenue divisions are: Ambala, Rohtak, Gurgaon, Hisar, Karnal and Faridabad. Haryana has 11 municipal corporations (Gurgaon, Faridabad, Ambala, Panchkula, Yamunanagar, Rohtak, Hisar, Panipat, Karnal, Sonepat, and Manesar), 18 municipal councils and 52 municipalities. Within these there are 22 districts, 72 sub-divisions, 93 tehsils, 50 sub-tehsils, 140 blocks, 154 cities and towns, 6,848 villages, 6,222 village panchayats and numerous smaller dhanis. Districts Law and order The Haryana Police force is the law enforcement agency of Haryana. The five police ranges are Ambala, Hissar, Karnal, Rewari and Rohtak. The three police commissionerates are Faridabad, Gurgaon and Panchkula. A cybercrime investigation cell is based in Gurgaon's Sector 51. The highest judicial authority in the state is the Punjab and Haryana High Court, with the next right of appeal to the Supreme Court of India. Haryana uses an e-filing facility. Governance and e-governance The Common Service Centres (CSCs) have been upgraded in all districts to offer hundreds of e-services to citizens, including applications for new water and sanitation connections, electricity bill collection, ration card member registration, results of HBSE, admit cards for board examinations, online admission forms for government colleges, long-route booking of buses, admission forms for Kurukshetra University and HUDA plot status inquiry. Haryana has become the first state to implement Aadhaar-enabled birth registration in all the districts. Thousands of traditional offline state and central government services are also available online 24/7 through the single unified UMANG app and portal as part of the Digital India initiative. Economy Haryana's estimated 2017–18 GSDP of US$95 billion, the 14th largest among Indian states with a 12.96% CAGR over 2012–17, is split into 52% services, 30% industries and 18% agriculture. The services sector is split across 45% real estate and financial and professional services, 26% trade and hospitality, 15% state and central government employees, and 14% transport, logistics and warehousing. In IT services, Gurugram ranks number 1 in India in growth rate and existing technology infrastructure, and number 2 in startup ecosystem, innovation and livability (Nov 2016). The industries sector is split across 69% manufacturing, 28% construction, 2% utilities and 1% mining. In industrial manufacturing, Haryana produces 67% of India's passenger cars, 60% of its motorcycles, 50% of its tractors and 50% of its refrigerators. The services and industrial sectors are boosted by 7 operational SEZs and an additional 23 formally approved SEZs (20 already notified and 3 with in-principle approval) that are mostly spread along the Delhi–Mumbai Industrial Corridor, Amritsar Delhi Kolkata Industrial Corridor and Western Peripheral Expressway. The agriculture sector is split across 93% crops and livestock, 4% commercial forestry and logging, and 2% fisheries. The agriculture sector of Haryana, with less than 1.4% of India's area, contributes 15% of the food grains to the central food security public distribution system, and 7% of total national agricultural exports, including 60% of the total national Basmati rice export. Agriculture Crops Haryana is traditionally an agrarian society of zamindars (owner-cultivator farmers). About 70% of Haryana's residents are engaged in agriculture.
The Green Revolution in Haryana of the 1960s, combined with the completion of Bhakra Dam in 1963 and the Western Yamuna Command Network canal system in the 1970s, resulted in significantly increased food grain production. As a result, Haryana is self-sufficient in food production and the second largest contributor to India's central pool of food grains. In 2015–2016, Haryana produced the following principal crops: 13,352,000 tonnes wheat, 4,145,000 tonnes rice, 7,169,000 tonnes sugarcane, 993,000 tonnes cotton and 855,000 tonnes oilseeds (mustard seed, sunflower, etc.). Fruits, vegetables and spices Vegetable production was: potato 853,806 tonnes, onion 705,795 tonnes, tomato 675,384 tonnes, cauliflower 578,953 tonnes, leafy vegetables 370,646 tonnes, brinjal 331,169 tonnes, gourd 307,793 tonnes, peas 111,081 tonnes and others 269,993 tonnes. Fruit production was: citrus 301,764 tonnes, guava 152,184 tonnes, mango 89,965 tonnes, chikoo 16,022 tonnes, aonla 12,056 tonnes and other fruits 25,848 tonnes. Spices production was: garlic 40,497 tonnes, fenugreek 9,348 tonnes, ginger 4,304 tonnes and others 840 tonnes. Flowers and medicinal plants Cut flower production was: marigold 61,830 tonnes, gladiolus 2,448,620 million, rose 1,861,160 million and other 691,300 million. Medicinal plant production was: aloe vera 1403 tonnes and stevia 13 tonnes. Livestock Haryana is well known for its high-yield Murrah buffalo. Other breeds of cattle native to Haryana are Haryanvi, Mewati, Sahiwal and Nili-Ravi. Research To support its agrarian economy, both the central government (Central Institute for Research on Buffaloes, Central Sheep Breeding Farm, National Research Centre on Equines, Central Institute of Fisheries, National Dairy Research Institute, Regional Centre for Biotechnology, Indian Institute of Wheat and Barley Research and National Bureau of Animal Genetic Resources) and the state government (CCS HAU, LUVAS, Government Livestock Farm, Regional Fodder Station and Northern Region Farm Machinery Training and Testing Institute) have opened several institutes for research and education. Industrial sector Manufacturing Faridabad is one of the biggest industrial cities of Haryana as well as North India. The city is home to large multinational companies like India Yamaha Motor Pvt. Ltd., Havells India Limited, JCB India Limited, Escorts Group, Indian Oil (R&D), and Larsen & Toubro (L&T). Eyewear e-tailer Lenskart and healthcare startup Lybrate have their headquarters in Faridabad. Hissar, an NCR counter-magnet city known as a steel and cotton-spinning hub as well as an upcoming integrated industrial aerocity and aero MRO hub at Hisar Airport, is a fast-developing city and the hometown of Navin Jindal and Subhash Chandra of Zee TV fame. Savitri Jindal, Navin Jindal's mother, has been listed by Forbes as the third richest woman in the world. Panipat has heavy industry, including a refinery operated by the Indian Oil Corporation, a urea manufacturing plant operated by National Fertilizers Limited and a National Thermal Power Corporation power plant. It is known for its woven modhas or round stools. Sonipat: IMT Kundli, Nathupur, Rai and Barhi are industrial areas with several small and medium-sized enterprises, including some large ones such as Atlas Cycles, E.C.E., Birla factory and OSRAM. Gurgaon: IMT Manesar, Dundahera and Sohna are industrial and logistics hubs; the district also hosts the National Security Guards, Indian Institute of Corporate Affairs, National Brain Research Centre and National Bomb Data Centre.
Utilities Haryana State has always given high priority to the expansion of electricity infrastructure, as it is one of the most important inputs for the development of the state. Haryana was the first state in the country to achieve 100% rural electrification in 1970, as well as the first in the country to link all villages with all-weather roads and provide safe drinking water facilities throughout the state. Power stations in the state include:
Renewable and non-polluting sources
Hydroelectricity: Bhakra-Nangal Dam Hydroelectric Power Plant; WYC Hydro Electric Station, 62.4 MW, Yamunanagar
Solar power stations: Faridabad Solar Power Plant, being set up by HPGCL Faridabad (c. 2016)
Nuclear power stations: Gorakhpur Nuclear Power Plant, 2800 MW, Fatehabad (Phase-I, 1400 MW by 2021)
Coal-fired thermal power stations: Deenbandhu Chhotu Ram Thermal Power Station, 600 MW, Yamunanagar; Indira Gandhi Super Thermal Power Project, 1500 MW, Jhajjar; Jhajjar Power Station, 1500 MW; Panipat Thermal Power Station I, 440 MW; Panipat Thermal Power Station II, 920 MW; Rajiv Gandhi Thermal Power Station, 1200 MW, Hisar
Services sector Transport Aviation Roads and Highways Haryana has a total road length of , including 29 national highways, state highways, Major District Roads (MDR) and Other District Roads (ODR) (c. December 2017). A fleet of 3,864 Haryana Roadways buses covers a distance of 1.15 million km per day, and Haryana was the first state in the country to introduce luxury video coaches. The ancient Delhi Multan Road and Grand Trunk Road, South Asia's oldest and longest major roads, pass through Haryana. The GT Road passes through the districts of Sonipat, Panipat, Karnal, Kurukshetra and Ambala in north Haryana before entering Delhi and subsequently the industrial town of Faridabad on its way. The Kundli-Manesar-Palwal Expressway (KMP) will provide a high-speed link between northern Haryana and its southern districts such as Sonipat, Gurgaon, and Faridabad. The Delhi-Agra Expressway (NH-2) that passes through Faridabad is being widened to six lanes from the current four lanes. It will further boost Faridabad's connectivity with Delhi. Railway The rail network in Haryana is covered by five rail divisions under three rail zones. The Diamond Quadrilateral high-speed rail network, Eastern Dedicated Freight Corridor (72 km) and Western Dedicated Freight Corridor (177 km) pass through Haryana. The Bikaner railway division of the North Western Railway zone manages the rail network in western and southern Haryana, covering the Bhatinda-Dabwali-Hanumangarh line, Rewari-Bhiwani-Hisar-Bathinda line, Hisar-Sadulpur line and Rewari-Loharu-Sadulpur line. The Jaipur railway division of the North Western Railway zone manages the rail network in south-west Haryana, covering the Rewari-Reengas-Jaipur line, Delhi-Alwar-Jaipur line and Loharu-Sikar line. The Delhi railway division of the Northern Railway zone manages the rail network in north, east and central Haryana, covering the Delhi-Panipat-Ambala line, Delhi-Rohtak-Tohana line, Rewari–Rohtak line, Jind-Sonepat line and Delhi-Rewari line. The Agra railway division of the North Central Railway zone manages another very small part of the network in south-east Haryana, covering the Palwal-Mathura line only. The Ambala railway division of the Northern Railway zone manages a small part of the rail network in north-east Haryana, covering the Ambala-Yamunanagar line, Ambala-Kurukshetra line and the UNESCO World Heritage Kalka–Shimla Railway. Metro The Delhi Metro connects the national capital Delhi with the NCR cities of Faridabad, Gurgaon and Bahadurgarh.
Faridabad has the longest metro network in the NCR region, consisting of 11 stations with a track length of 17 km. Sky Way The Haryana and Delhi governments have constructed the international-standard Delhi Faridabad Skyway, the first of its kind in North India, to connect Delhi and Faridabad. Communication and media Haryana has a statewide network of telecommunication facilities. The Haryana Government has its own statewide area network, by which all government offices of the 22 districts and 126 blocks across the state are connected with each other, making it the first SWAN of the country. Bharat Sanchar Nigam Limited and most of the leading private sector players (such as Reliance Infocom, Tata Teleservices, Bharti Telecom, Idea, Vodafone Essar, Aircel, Uninor and Videocon) have operations in the state. The two biggest cities of Haryana, Faridabad and Gurgaon, which are part of the National Capital Region, come under the local Delhi Mobile Telecommunication System. The rest of the cities of Haryana come under the Haryana Telecommunication System. Electronic media channels include MTV, 9XM, Star Group, SET Max, News Time, NDTV 24x7 and Zee Group. The radio stations include All India Radio and other FM stations. Panipat, Hisar, Ambala and Rohtak are the cities in which the leading newspapers of Haryana are printed and circulated throughout the state, among which Dainik Bhaskar, Dainik Jagran, Punjab Kesari, The Tribune, Aaj Samaj, Hari Bhoomi and Amar Ujala are prominent. Healthcare The total fertility rate of Haryana is 2.3. The infant mortality rate is 41 (SRS 2013) and the maternal mortality ratio is 146 (SRS 2010–2012). The state of Haryana has various medical colleges, including Pandit Bhagwat Dayal Sharma Post Graduate Institute of Medical Sciences Rohtak, Bhagat Phool Singh Medical College in Sonipat district and ESIC Medical College, Faridabad, along with notable private medical institutes like Medanta, Max Hospital and Fortis Healthcare. Education Literacy The literacy rate in Haryana has seen an upward trend and is 76.64 per cent as per the 2011 population census. Male literacy stands at 85.38 per cent, while female literacy is at 66.67 per cent. In 2001, the literacy rate in Haryana stood at 67.91 per cent, of which male and female literacy were 78.49 per cent and 55.73
technology and automobile hubs of India. Haryana ranks 11th among Indian states in the human development index. The economy of Haryana is the 13th largest in India, with a gross state domestic product (GSDP) of , and has the country's 5th-highest GSDP per capita of . Haryana has the highest unemployment rate among Indian states. The state is rich in history, monuments, heritage, flora and fauna and tourism, with a well-developed economy, national highways and state roads. It is bordered by Himachal Pradesh to the north-east, by the river Yamuna along its eastern border with Uttar Pradesh, by Rajasthan to the west and south, while the Ghaggar-Hakra River flows along its northern border with Punjab. Since Haryana surrounds the country's capital New Delhi on three sides (north, west and south), a large area of the state is included in the economically important National Capital Region of India for the purposes of planning and development. Etymology Anthropologists have advanced the view that Haryana was known by this name because, in the post-Mahabharata period, it was home to the Abhiras, who developed special skills in the art of agriculture. According to Pran Nath Chopra, Haryana got its name from Abhirayana-Ahirayana-Hirayana-Haryana. History Ancient period The villages of Rakhigarhi in Hisar district and Bhirrana in Fatehabad district are home to the largest and one of the world's oldest ancient Indus Valley Civilization sites, dated at over 9,000 years old. Evidence of paved roads, a drainage system, a large-scale rainwater collection storage system, terracotta brick and statue production, and skilled metal working (in both bronze and precious metals) has been uncovered. According to archaeologists, Rakhigarhi may be the origin of the Harappan civilisation, which arose in the Ghaggar basin in Haryana and gradually moved to the Indus Valley. During the Vedic era, Haryana was the site of the Kuru Kingdom, one of India's great Mahajanapadas. The south of Haryana is the claimed location of the Vedic Brahmavarta region. Medieval period Ancient bronze and stone idols of Jain Tirthankara were found in archaeological expeditions in Badli, Bhiwani (Ranila, Charkhi Dadri and Badhra), Dadri, Gurgaon (Ferozepur Jhirka), Hansi, Hisar, Kasan, Nahad, Narnaul, Pehowa, Rewari, Rohad, Rohtak (Asthal Bohar) and Sonepat in Haryana. The Pushyabhuti dynasty ruled parts of northern India in the 7th century with its capital at Thanesar. Harsha was a prominent king of the dynasty. The Tomara dynasty ruled the south Haryana region in the 10th century. Anangpal Tomar was a prominent king among the Tomaras. After the sack of Bhatner fort during the Timurid conquests of India in 1398, Timur attacked and sacked the cities of Sirsa, Fatehabad, Sunam, Kaithal and Panipat. When he reached the town of Sarsuti (Sirsa), the residents, who were mostly non-Muslims, fled and were chased by a detachment of Timur's troops, with thousands of them being killed and looted by the troops. From there he travelled to Fatehabad, whose residents fled, and a large number of those remaining in the town were massacred. The Ahirs resisted him at Ahruni but were defeated, with thousands being killed and many taken prisoner while the town was burnt to ashes. From there he travelled to Tohana, whose Jat inhabitants were said to be robbers according to Sharaf ad-Din Ali Yazdi. They tried to resist but were defeated and fled. Timur's army pursued and killed 200 Jats, while taking many more as prisoners.
He then sent a detachment to chase the fleeing Jats and killed 2,000 of them, while their wives and children were enslaved and their property plundered. Timur proceeded to Kaithal, whose residents were massacred and plundered, and destroyed all villages along the way. On the next day, he came to Assandh, whose residents were "fire-worshippers" according to Yazdi and had fled to Delhi. Next, he travelled to and subdued Tughlaqpur fort and Salwan before reaching Panipat, whose residents had already fled. He then marched on to Loni fort. Hemu claimed royal status after defeating Akbar's Mughal forces on 7 October 1556 in the Battle of Delhi and assumed the ancient title of Vikramaditya. The area that is now Haryana has been ruled by some of the major empires of India. Panipat is known for three seminal battles in the history of India. In the First Battle of Panipat (1526), Babur defeated the Lodis. In the Second Battle of Panipat (1556), Akbar defeated the local Haryanvi Hindu Emperor of Delhi, who belonged to Rewari. Hem Chandra Vikramaditya had earlier won 22 battles across India from Punjab to Bengal, defeating Mughals and Afghans. Hemu had defeated Akbar's forces twice, at Agra and in the Battle of Delhi in 1556, to become the last Hindu Emperor of India, with a formal coronation at Purana Quila in Delhi on 7 October 1556. In the Third Battle of Panipat (1761), the Afghan king Ahmad Shah Abdali defeated the Marathas. Formation Haryana came into existence as a state on 1 November 1966 under the Punjab Reorganisation Act (1966). The Indian government set up the Shah Commission under the chairmanship of Justice JC Shah on 23 April 1966 to divide the existing state of Punjab and determine the boundaries of the new state of Haryana after consideration of the languages spoken by the people. The commission delivered its report on 31 May 1966, whereby the then-districts of Hisar, Mahendragarh, Gurgaon, Rohtak and Karnal were to be a part of the new state of Haryana. Further, the tehsils of Jind and Narwana in the Sangrur district – along with Naraingarh, Ambala and Jagadhri – were to be included. The commission recommended that the tehsil of Kharar, which includes Chandigarh, the state capital of Punjab, should be a part of Haryana. However, Kharar was given to Punjab. The city of Chandigarh was made a union territory, serving as the capital of both Punjab and Haryana. Bhagwat Dayal Sharma became the first Chief Minister of Haryana. Demographics Religion According to the 2011 census, of Haryana's total population of 25,350,000, Hindus (87.46%) constitute the majority of the state's population, with Muslims (7.03%) (mainly Meos) and Sikhs (4.91%) being the largest minorities. Muslims are mainly found in the Nuh district. Haryana has the second largest Sikh population in India after Punjab, and they mostly live in the districts adjoining Punjab, such as Sirsa, Jind, Fatehabad, Kaithal, Kurukshetra, Ambala and Panchkula. Languages The official language of Haryana is Hindi. Several regional languages or dialects, often subsumed under Hindi, are spoken in the state. Predominant among them is Haryanvi (also known as Bangru), whose territory encompasses the central and eastern portions of Haryana. Hindustani is spoken in the northeast, Bagri in the west, and Ahirwati, Mewati and Braj Bhasha in the south. There are also significant numbers of speakers of Urdu and Punjabi, the latter of which was recognised in 2010 as a second official language of Haryana for government and administrative purposes.
After the state's formation, Telugu was made the state's "second language" – to be taught in schools – but it was not the "second official language" for official communication. Due to a lack of students, the language ultimately stopped being taught. Tamil was made the second language in 1969 by Bansi Lal to show the state's differences with Punjab, although there were no Tamil speakers in Haryana at the time. In 2010, due to the lack of Tamil speakers, the language was removed from its status. There are also some speakers of several major regional languages of neighbouring states or other parts of the subcontinent, like Bengali, Bhojpuri, Marwari, Mewari, Nepali and Saraiki, as well as smaller communities of speakers of languages that are dispersed across larger regions, like Bauria, Bazigar, Gujari, Gade Lohar, Oadki, and Sansi. Culture Music Haryana has its own unique traditional folk music, folk dances, saang (folk theatre), cinema, belief systems such as Jathera (ancestral worship), and arts such as Phulkari and Shisha embroidery. Folk dances The folk music and dances of Haryana are based on satisfying the cultural needs of the primarily agrarian and martial Haryanvi tribes. The main types of Haryanvi musical folk theatre are Saang, Rasa lila and Ragini. The Saang and Ragini forms of theatre were popularised by Lakhmi Chand. Haryanvi folk dances and music have fast, energetic movements. Three popular categories of dance are: festive-seasonal, devotional, and ceremonial-recreational. The festive-seasonal dances and songs are Gogaji/Gugga, Holi, Phaag, Sawan and Teej. The devotional dances and songs are Chaupaiya, Holi, Manjira, Ras Leela and Raginis. The ceremonial-recreational dances and songs are of the following types: legendary bravery (Kissa and Ragini of male warriors and female Satis), love and romance (Been and its variant Nāginī dance, and Ragini), and ceremonial (Dhamal Dance, Ghoomar, Jhoomar (male), Khoria, Loor, and Ragini). Folk music and songs Haryanvi folk music is based on day-to-day themes, and the injection of earthy humour enlivens the feel of the songs. Haryanvi music takes two main forms, "classical folk music" and "desi folk music" (country music of Haryana), and is sung in the form of ballads about love, valour and bravery, harvest, happiness and the pangs of parting lovers. Classical Haryanvi folk music Classical Haryanvi folk music is based on Indian classical music. Hindustani classical ragas, learnt in the gharana parampara of the guru–shishya tradition, are used to sing songs of heroic bravery (such as Alha-Khand (1163–1202 CE) about the bravery of Alha and Udal, and of Jaimal and Patta of Maharana Udai Singh II), Brahmas worship and festive seasonal songs (such as Teej, Holi and Phaag songs of the Phalgun month near Holi). Bravery songs are sung in high pitch. Desi Haryanvi folk music Desi Haryanvi folk music is a form of Haryanvi music based on Raag Bhairvi, Raag Bhairav, Raag Kafi, Raag Jaijaivanti, Raag Jhinjhoti and Raag Pahadi, used for celebrating community bonhomie through seasonal songs, ballads, ceremonial songs (weddings, etc.) and related religious legendary tales such as Puran Bhagat. Songs celebrating relationships, love and life are sung in medium pitch. Ceremonial and religious songs are sung in low pitch.
Young girls and women usually sing entertaining and fast seasonal, love, relationship and friendship related songs such as Phagan (songs for the eponymous season/month), Katak (songs for the eponymous season/month), Samman (songs for the eponymous season/month), (male-female duet songs), and (songs of sharing heartfelt feelings among female friends). Older women usually sing devotional Mangal Geet (auspicious songs) and ceremonial songs such as Bhajan, Bhat (a wedding gift to the mother of the bride or groom by her brother), Sagai, Ban (a Hindu wedding ritual where pre-wedding festivities start), Kuan-Poojan (a custom performed to welcome the birth of a child by worshipping the well or source of drinking water), Sanjhi and Holi festival. Socially normative-cohesive impact For Haryanvi people, music and dance are a great way of breaking down societal differences, as folk singers are highly esteemed and are sought after and invited to events, ceremonies and special occasions regardless of their caste or status. These inter-caste songs are fluid in nature and never personalised for any specific caste, and they are sung collectively by women from different strata, castes and dialects. These songs transform fluidly in dialect, style, words, etc. This adoptive style can be seen from the adoption of
the party or coalition with a majority in the Legislative Assembly is appointed as the Chief Minister by the governor, and the Council of Ministers are appointed by the governor on the advice of the Chief Minister. The Council of Ministers reports to the Legislative Assembly. The Assembly is unicameral with 68 Members of the Legislative Assembly (MLA). Terms of office run for five years, unless the Assembly is dissolved prior to the completion of the term. Auxiliary authorities known as panchayats, for which local body elections are regularly held, govern local affairs. In the assembly elections held in November 2017, the Bharatiya Janata Party secured an absolute majority, winning 44 of the 68 seats while the Congress won only 21 of the 68 seats. Jai Ram Thakur was sworn in as Himachal Pradesh's Chief Minister for the first time in Shimla on 27 December 2017. Administrative divisions The state of Himachal Pradesh is divided into 12 districts which are grouped into three divisions, Shimla, Kangra and Mandi. The districts are further divided into 73 subdivisions, 78 blocks and 172 Tehsils. Economy The era of planning in Himachal Pradesh started in 1951 along with the rest of India with the implementation of the first five-year plan. The First Plan allocated 52.7 million to Himachal Pradesh. More than 50% of this expenditure was incurred on transport and communication; while the power sector got a share of just 4.6%, though it had steadily increased to 7% by the Third Plan. Expenditure on agriculture and allied activities increased from 14.4% in the First Plan to 32% in the Third Plan, showing a progressive decline afterwards from 24% in the Fourth Plan to less than 10% in the Tenth Plan. Expenditure on energy sector was 24.2% of the total in the Tenth Plan. The total GDP for 2005–06 was estimated at 254 billion as against 230 billion in the year 2004–05, showing an increase of 10.5%. The GDP for fiscal 2015–16 was estimated at 1.110 trillion, which increased to 1.247 trillion in 2016–17, recording growth of 6.8%. The per capita income increased from 130,067 in 2015–16 to 147,277 in 2016–17. The state government's advance estimates for fiscal 2017–18 stated the total GDP and per capita income as 1.359 trillion and 158,462, respectively. As of 2018, Himachal is the 22nd-largest state economy in India with in gross domestic product and has the 13th-highest per capita income () among the states and union territories of India. Himachal Pradesh also ranks as the second-best performing state in the country on human development indicators after Kerala. One of the Indian government's key initiatives to tackle unemployment is the National Rural Employment Guarantee Act (NREGA). The participation of women in the NREGA has been observed to vary across different regions of the nation. As of the year 2009–2010, Himachal Pradesh joined the category of high female participation, recording a 46% share of NREGS (National Rural Employment Guarantee Scheme) work days to women. This was a drastic increase from the 13% that was recorded in 2006–2007. Agriculture Agriculture accounts for 9.4% of the net state domestic product. It is the main source of income and employment in Himachal. About 90% of the population in Himachal depends directly upon agriculture, which provides direct employment to 62% of total workers of state. The main cereals grown include wheat, maize, rice and barley with major cropping systems being maize-wheat, rice-wheat and maize-potato-wheat. 
Pulses, fruits, vegetables and oilseeds are among the other crops grown in the state. The centuries-old traditional Kuhl irrigation system is prevalent in the Kangra valley, though in recent years these Kuhls have come under threat from hydro projects on small streams in the valley. Land husbandry initiatives such as the Mid-Himalayan Watershed Development Project, which includes the Himachal Pradesh Reforestation Project (HPRP), the world's largest clean development mechanism (CDM) undertaking, have improved agricultural yields and productivity, and raised rural household incomes. Apple is the principal cash crop of the state, grown principally in the districts of Shimla, Kinnaur, Kullu, Mandi, Chamba and some parts of Sirmaur and Lahaul-Spiti, with an average annual production of five lakh tonnes and per hectare production of 8 to 10 tonnes. Apple cultivation constitutes 49 per cent of the total area under fruit crops and 85% of total fruit production in the state, with an estimated economy of 3500 crore. Apples from Himachal are exported to other Indian states and even other countries. In 2011–12, the total area under apple cultivation was 104,000 hectares, up from 90,347 hectares in 2000–01. According to the provisional estimates of the Ministry of Agriculture & Farmers Welfare, the annual apple production in Himachal for fiscal 2015–16 stood at 753,000 tonnes, making it India's second-largest apple-producing state after Jammu and Kashmir. The state is also among the leading producers of other fruits such as apricots, cherries, peaches, pears, plums and strawberries in India. Kangra tea is grown in the Kangra valley. Tea plantation began in 1849, and production peaked in the late 19th century, with the tea becoming popular across the globe. Production dipped sharply after the 1905 Kangra earthquake and continues to decline. The tea received geographical indication status in 2005. Industry Energy Hydropower is one of the major sources of income generation for the state. The state has an abundance of hydropower resources because of the presence of various perennial rivers. Many high-capacity hydropower plants have been constructed which produce surplus electricity that is sold to other states, such as Delhi, Punjab and West Bengal. The income generated from exporting the electricity to other states is provided as a subsidy to consumers in the state. The rich hydropower resources of Himachal have resulted in the state becoming almost universally electrified, with around 94.8% of houses receiving electricity as of 2001, as compared to the national average of 55.9%. Himachal's hydro-electric power production is, however, yet to be fully utilised. The identified hydroelectric potential for the state is 27,436 MW in five river basins, while the hydroelectric capacity in 2016 was 10,351 MW. Tourism Tourism in Himachal Pradesh is a major contributor to the state's economy and growth. The Himalayas attract tourists from all over the world. Hill stations like Shimla, Manali, Dharamshala, Dalhousie, Chamba, Khajjiar, Kullu and Kasauli are popular destinations for both domestic and foreign tourists. The state also has many important Hindu pilgrimage sites with prominent temples like Shri Chamunda Devi Mandir, Naina Devi Temple, Bajreshwari Mata Temple, Jwala Ji Temple, Chintpurni, Baijnath Temple, Bhimakali Temple, Bijli Mahadev and Jakhoo Temple.
Manimahesh Lake, situated in the Bharmour region of Chamba district, is the venue of an annual Hindu pilgrimage trek held in the month of August, which attracts lakhs of devotees. The state is also referred to as "Dev Bhoomi" (literally meaning Abode of Gods) due to its mention as such in ancient Hindu texts and the occurrence of a large number of historical temples in the state. Himachal is also known for its adventure tourism activities like ice skating in Shimla, paragliding in Bir Billing and Solang valley, rafting in Kullu, skiing in Manali, boating in Bilaspur and trekking, horse riding and fishing in different parts of the state. Shimla, the state's capital, is home to Asia's only natural ice-skating rink. Spiti Valley in Lahaul and Spiti district, situated at an altitude of over 3,000 metres with picturesque landscapes, is an important destination for adventure seekers. The region also has some of the oldest Buddhist monasteries in the world. Himachal hosted the first Paragliding World Cup in India from 24 to 31 October 2015. The venue for the paragliding world cup was Bir Billing, which is 70 km from the tourist town Macleod Ganj, located in the heart of Himachal in Kangra district. Bir Billing is the centre for aero sports in Himachal and is considered the best site for paragliding. Buddhist monasteries, trekking to tribal villages and mountain biking are other local possibilities. Transport Air Himachal has three domestic airports, in Kangra, Kullu and Shimla districts. The air routes connect the state with Delhi and Chandigarh. Bhuntar Airport is in Kullu district, around from the district headquarters. Gaggal Airport is in Kangra district, around from the district headquarters at Dharamshala, which is around 10 kilometres from Kangra. Shimla Airport is around west of the city. Railways Broad-gauge lines The only broad-gauge railway line in the whole state connects –Una Himachal railway station to in Punjab and runs all the way to Daulatpur, Himachal Pradesh. The track has been electrified since 1999. A tiny portion of the line adjacent to Kandrori (KNDI) station on the Pathankot-Jalandhar section, under the Ferozepur Division of Northern Railway, also crosses into Himachal Pradesh before venturing back into Punjab. Future constructions: –Hamirpur rail project via Dhundla; Bhanupali (Punjab)–Bilaspur, Himachal Pradesh; Chandigarh–Baddi. Narrow-gauge lines Himachal is known for its narrow-gauge railways. One is the Kalka-Shimla Railway, a UNESCO World Heritage Site, and another is the Kangra Valley Railway. The total length of these two tracks is . The Kalka-Shimla Railway passes through many tunnels and bridges, while the Pathankot–Jogindernagar line meanders through a maze of hills and valleys. The total route length of the operational railway network in the state is . Roads Roads are the major mode of transport in the hilly terrain. The state has a road network of , including eight National Highways (NH) that constitute and 19 State Highways with a total length of . Hamirpur district has the highest road density in the country. Some roads are closed during winter and monsoon seasons due to snow and landslides. The state-owned Himachal Road Transport Corporation, with a fleet of over 3,100 buses, operates services connecting important cities and towns with villages within the state and also on various interstate routes. In addition, around 5,000 private buses ply in the state.
Demographics Population Himachal Pradesh has a total population of 6,864,602, including 3,481,873 males and 3,382,729 females, according to the Census of India 2011. The Koli forms the largest caste-cluster, comprising 30% of the total population of Himachal Pradesh. The state has only 0.57 per cent of India's total population, recording a growth of 12.81 per cent. The scheduled castes and scheduled tribes account for 25.19 per cent and 5.71 per cent of the population, respectively. The sex ratio stood at 972 females per 1,000 males, recording a marginal increase from 968 in 2001. The child sex ratio increased from 896 in 2001 to 909 in 2011. The total fertility rate (TFR) per woman in 2015 stood at 1.7, one of the lowest in India. In the census, the state is placed 21st on the population chart, followed by Tripura at 22nd place. Kangra District was top-ranked with a population strength of 1,507,223 (21.98%), Mandi District 999,518 (14.58%), Shimla District 813,384 (11.86%), Solan District 576,670 (8.41%), Sirmaur District 530,164 (7.73%), Una District 521,057 (7.60%), Chamba District 518,844 (7.57%), Hamirpur district 454,293 (6.63%), Kullu District 437,474 (6.38%), Bilaspur district 382,056 (5.57%), Kinnaur District 84,298 (1.23%) and Lahaul Spiti 31,528 (0.46%). The life expectancy at birth in Himachal Pradesh increased significantly from 52.6 years in the period from 1970 to 1975 (above the national average of 49.7 years) to 72.0 years for the period 2011–15 (above the national average of 68.3 years). The infant mortality rate stood at 40 in 2010, and the crude birth rate declined from 37.3 in 1971 to 16.9 in 2010, below the national average of 26.5 in 1998. The crude death rate was 6.9 in 2010. Himachal Pradesh's literacy rate almost doubled between 1981 and 2011. The state is one of the most literate states of India, with a literacy rate of 83.78% as of 2011. Languages Hindi is the official language of Himachal Pradesh and is spoken by the majority of the population as a lingua franca. Sanskrit is the additional official language of the state. Although mostly encountered in academic and symbolic contexts, the government of Himachal Pradesh is encouraging its wider study and use. Most of the population, however, speaks natively one or another of the Western Pahari languages (locally also known as Himachali or just Pahari), a subgroup of the Indo-Aryan languages that includes Bhattiyali, Bilaspuri, Chambeali, Churahi, Gaddi, Hinduri, Kangri, Kullu, Mahasu Pahari, Mandeali, Pahari Kinnauri, Pangwali, and Sirmauri. Additional Indo-Aryan languages spoken include Punjabi (native to 4.4% of the population), Nepali (1.3%), Chinali, Lahul Lohar, and others. In parts of the state there are speakers of Tibeto-Burman languages like Kinnauri (1.2%), Tibetan (0.3%), Lahuli–Spiti languages (0.16%), Pattani (0.12%), Bhoti Kinnauri, Chitkuli Kinnauri, Bunan (or Gahri), Jangshung, Kanashi, Shumcho, Spiti Bhoti, Sunam, Tinani, and Tukpa. Religion Hinduism is the major religion in Himachal Pradesh. More than 95% of the total population adheres to the Hindu faith, mainly following the Shaivism and Shaktism traditions, which are spread evenly throughout the state. Himachal Pradesh has the highest proportion of Hindu population among all the states and union territories in India. Other religions that form a small percentage are Islam, Sikhism and Buddhism.
Muslims are mainly concentrated in Sirmaur, Chamba, Una and Solan districts, where they form 2.53-6.27% of the population. Sikhs mostly live in towns and cities and constitute 1.16% of the state population. The Buddhists, who constitute 1.15%, are mainly natives and tribals from Lahaul and Spiti, where they form a majority of 62%, and Kinnaur, where they form 21.5%. Culture Himachal Pradesh was one of the few states that had remained largely untouched by external customs, largely due to its difficult terrain. With remarkable economic and social advancements, the state has changed rapidly. Himachal Pradesh is a multilingual state like other Indian states. Western Pahari languages, also known as Himachali languages, are widely spoken in the state. Some of the most commonly spoken individual languages are Kangri, Mandeali, Kulvi, Chambeali, Bharmauri and Kinnauri. The main caste groups in Himachal Pradesh are Rajputs, Brahmins, Kanets, Kulindas, Girths, Raos, Rathis, Thakurs, Kolis, Holis, Chamars, Darains, Rehars, Chanals, Lohars, Baris, Dagis, Dhakhis, Turis and Batwals. Himachal is well known for its handicrafts. The carpets, leather works, Kullu shawls, Kangra paintings, Chamba Rumals, stoles, embroidered grass footwear (Pullan chappal), silver jewellery, metal ware, knitted woolen socks, Pattoo, basketry of cane and bamboo (wicker and rattan) and woodwork are among the notable ones. Of late, the demand for these handicrafts has increased within and outside the country. Himachali caps of various colour bands are also well-known local art work, and are often treated as a symbol of the Himachali identity. The colour of the Himachali caps has been an indicator of political loyalties in the hill state for a long period of time, with Congress party leaders like Virbhadra Singh donning caps with a green band and the rival BJP leader Prem Kumar Dhumal wearing a cap with a maroon band. The former has served six terms as the Chief Minister of the state, while the latter is a two-time Chief Minister. Local music and dance also reflect the cultural identity of the state. Through their dance and music, the Himachali people entreat their gods during local festivals and other special occasions. Apart from national fairs and festivals, there are regional fairs and festivals, including the temple fairs in nearly every region, that are of great significance to Himachal Pradesh. The Kullu Dussehra festival is nationally known. The day-to-day cuisine of Himachalis is similar to the rest of northern India, with Punjabi and Tibetan influences. Lentils (Dāl), rice ( or ), vegetables () and chapati (wheat flatbread) form the staple food of the local population. Non-vegetarian food is more widely accepted in Himachal Pradesh than elsewhere in India, partly due to the scarcity of fresh vegetables on the hilly terrain of the state. Himachali specialities include Notable people Virbhadra Singh, JP Nadda, Prem Kumar Dhumal, Jai Ram Thakur, Anurag Thakur, Mohit Chauhan, Kangana Ranaut, Yami Gautam, Randeep Guleria, Vikram Batra, Rubina Dilaik, Preity Zinta, Jay Chaudhry, Anupam Kher and Gaurav Sharma (politician). Education At the time of Independence, Himachal Pradesh had a literacy rate of 8% – one of the lowest in the country. By 2011, the literacy rate surged to 82.8%, making Himachal one of the most-literate states in the country. There are over 10,000 primary schools, 1,000 secondary schools and more than 1,300 high schools in the state.
In meeting the constitutional obligation to make primary education compulsory, Himachal became the first state in India to make elementary education accessible to every child. Himachal Pradesh is an exception to the nationwide gender bias in education levels. The state has a female literacy rate of around 76%. In addition, school enrolment and participation rates for girls are almost universal at the primary level. While higher levels of education do reflect a gender-based disparity, Himachal is still significantly ahead of other states at bridging
apple cultivation constitute 49 per cent of the total area under fruit crops and 85% of total fruit production in the state with an estimated economy of 3500 crore. Apples from Himachal are exported to other Indian states and even other countries. In 2011–12, the total area under apple cultivation was 104,000 hectares, increased from 90,347 hectares in 2000–01. According to the provisional estimates of Ministry of Agriculture & Farmers Welfare, the annual apple production in Himachal for fiscal 2015–16 stood at 753,000 tonnes, making it India's second-largest apple-producing state after Jammu and Kashmir. The state is also among the leading producers of other fruits such as apricots, cherries, peaches, pears, plums and strawberries in India. Kangra tea is grown in the Kangra valley. Tea plantation began in 1849, and production peaked in the late 19th century with the tea becoming popular across the globe. Production dipped sharply after the 1905 Kangra earthquake and continues to decline. The tea received geographical indication status in 2005. Industry Energy Hydropower is one of the major sources of income generation for the state. The state has an abundance of hydropower resources because of the presence of various perennial rivers. Many high-capacity hydropower plants have been constructed which produce surplus electricity that is sold to other states, such as Delhi, Punjab and West Bengal. The income generated from exporting the electricity to other states is being provided as subsidy to the consumers in the state. The rich hydropower resources of Himachal have resulted in the state becoming almost universally electrified with around 94.8% houses receiving electricity as of 2001, as compared to the national average of 55.9%. Himachal's hydro-electric power production is, however, yet to be fully utilised. The identified hydroelectric potential for the state is 27,436 MW in five river basins while the hydroelectric capacity in 2016 was 10,351 MW. Tourism Tourism in Himachal Pradesh is a major contributor to the state's economy and growth. The Himalayas attracts tourists from all over the world. Hill stations like Shimla, Manali, Dharamshala, Dalhousie, Chamba, Khajjiar, Kullu and Kasauli are popular destinations for both domestic and foreign tourists. The state also has many important Hindu pilgrimage sites with prominent temples like Shri Chamunda Devi Mandir, Naina Devi Temple, Bajreshwari Mata Temple, Jwala Ji Temple, Chintpurni, Baijnath Temple, Bhimakali Temple, Bijli Mahadev and Jakhoo Temple. Manimahesh Lake situated in the Bharmour region of Chamba district is the venue of an annual Hindu pilgrimage trek held in the month of August which attracts lakhs of devotees. The state is also referred to as "Dev Bhoomi" (literally meaning Abode of Gods) due to its mention as such in ancient Hindu texts and occurrence of a large number of historical temples in the state. Himachal is also known for its adventure tourism activities like ice skating in Shimla, paragliding in Bir Billing and Solang valley, rafting in Kullu, skiing in Manali, boating in Bilaspur and trekking, horse riding and fishing in different parts of the state. Shimla, the state's capital, is home to Asia's only natural ice-skating rink. Spiti Valley in Lahaul and Spiti District situated at an altitude of over 3000 metres with its picturesque landscapes is an important destination for adventure seekers. 
The region also has some of the oldest Buddhist Monasteries in the world Himachal hosted the first Paragliding World Cup in India from 24 to 31 October in 2015. The venue for the paragliding world cup was Bir Billing, which is 70 km from the tourist town Macleod Ganj, located in the heart of Himachal in Kangra District. Bir Billing is the centre for aero sports in Himachal and considered as best for paragliding. Buddhist monasteries, trekking to tribal villages and mountain biking are other local possibilities. Transport Air Himachal has three Domestic Airports in Kangra, Kullu and Shimla districts. The air routes connect the state with Delhi and Chandigarh. Bhuntar Airport is in Kullu district, around from district headquarters. Gaggal Airport is in Kangra district, around from district headquarters at Dharamshala, which is around 10 kilometres from Kangra Shimla Airport is around west of the city. Railways Broad-gauge lines The only broad-gauge railway line in the whole state connects –Una Himachal railway station to in Punjab and runs all the way to Daulatpur, Himachal Pradesh. It is an electrified track since 1999. While a tiny portion of line adjacent to Kandrori(KNDI) station on either side on Pathankot-Jalandhar Section, under Ferozepur Division of Northern Railway also crosses into Himachal Pradesh, before venturing out to Punjab again. Future constructions: –Hamirpur rail project via Dhundla Bhanupali (Punjab)–Bilaspur, Himachal Pradesh Chandigarh–Baddi Narrow-gauge lines Himachal is known for its narrow-gauge railways. One is the Kalka-Shimla Railway, a UNESCO World Heritage Site, and another is the Kangra Valley Railway. The total length of these two tracks is . The Kalka-Shimla Railway passes through many tunnels and Bridgies, while the Pathankot–Jogindernagar meanders through a maze of hills and valleys. The total route length of the operational railway network in the state is . Roads Roads are the major mode of transport in the hilly terrains. The state has road network of , including eight National Highways (NH) that constitute and 19 State Highways with a total length of . Hamirpur district has the highest road density in the country. Some roads are closed during winter and monsoon seasons due to snow and landslides. The state-owned Himachal Road Transport Corporation with a fleet of over 3,100, operates bus services connecting important cities and towns with villages within the state and also on various interstate routes. In addition, around 5,000 private buses ply in the state. Demographics Population Himachal Pradesh has a total population of 6,864,602 including 3,481,873 males and 3,382,729 females according to the Census of India 2011. The Koli forms the largest caste-cluster, comprising 30% of the total population of Himachal Pradesh. It has only 0.57 per cent of India's total population, recording a growth of 12.81 per cent. The scheduled castes and scheduled tribes account for 25.19 per cent and 5.71 per cent of the population, respectively. The sex ratio stood at 972 females per 1,000 males, recording a marginal increase from 968 in 2001. The child sex ratio increased from 896 in 2001 to 909 in 2011. The total fertility rate (TFR) per woman in 2015 stood at 1.7, one of the lowest in India. In the census, the state is placed 21st on the population chart, followed by Tripura at 22nd place. 
Kangra District was top-ranked with a population strength of 1,507,223 (21.98%), Mandi District 999,518 (14.58%), Shimla District 813,384 (11.86%), Solan District 576,670 (8.41%), Sirmaur District 530,164 (7.73%), Una District 521,057 (7.60%), Chamba District 518,844 (7.57%), Hamirpur district 454,293 (6.63%), Kullu District 437,474 (6.38%), Bilaspur district 382,056 (5.57%), Kinnaur District 84,298 (1.23%) and Lahaul Spiti 31,528 (0.46%). The life expectancy at birth in Himachal Pradesh increased significantly from 52.6 years in the period from 1970 to 1975 (above the national average of 49.7 years) to 72.0 years for the period 2011–15 (above the national average of 68.3 years). The infant mortality rate stood at 40 in 2010, and the crude birth rate has declined from 37.3 in 1971 to 16.9 in 2010, below the national average of 26.5 in 1998. The crude death rate was 6.9 in 2010. Himachal Pradesh's literacy rate has almost doubled between 1981 and 2011 (see table to right). The state is one of the most literate states of India with a literacy rate of 83.78% as of 2011. Languages Hindi is the official language of Himachal Pradesh and is spoken by the majority of the population as a lingua franca. Sanskrit is the additional official language of the state. Although mostly encountered in academic and symbolic contexts, the government of Himachal Pradesh is encouraging its wider study and use. Most of the population, however, speaks natively one or another of the Western Pahari languages (locally also known as Himachali or just Pahari), a subgroup of the Indo-Aryan languages that includes Bhattiyali, Bilaspuri, Chambeali, Churahi, Gaddi, Hinduri, Kangri, Kullu, Mahasu Pahari, Mandeali, Pahari Kinnauri, Pangwali, and Sirmauri. Additional Indo-Aryan languages spoken include Punjabi (native to 4.4% of the population), Nepali (1.3%), Chinali, Lahul Lohar, and others. In parts of the state there are speakers of Tibeto-Burman languages like Kinnauri (1.2%), Tibetan (0.3%), Lahuli–Spiti languages (0.16%), Pattani (0.12%), Bhoti Kinnauri, Chitkuli Kinnauri, Bunan (or Gahri), Jangshung, Kanashi, Shumcho, Spiti Bhoti, Sunam, Tinani, and Tukpa. Religion Hinduism is the major religion in Himachal Pradesh. More than 95% of the total population adheres to the Hindu faith and majorly follows Shaivism and Shaktism traditions, the distribution of which is evenly spread throughout the state. Himachal Pradesh has the highest proportion of Hindu population among all the states and union territories in India. Other religions that form a small percentage are Islam, Sikhism and Buddhism. Muslims are mainly concentrated in Sirmaur, Chamba, Una and Solan districts where they form 2.53-6.27% of the population. Sikhs mostly live in towns and cities and constitute 1.16% of the state population. The Buddhists, who constitute 1.15%, are mainly natives and tribals from Lahaul and Spiti, where they form a majority of 62%, and Kinnaur, where they form 21.5%. Culture Himachal Pradesh was one of the few states that had remained largely untouched by external customs, largely due to its difficult terrain. With remarkable economic and social advancements, the state has changed rapidly. Himachal Pradesh is a multilingual state like other Indian states. Western Pahari languages also known as Himachali languages are widely spoken in the state. Some of the most commonly spoken individual languages are Kangri, Mandeali, Kulvi, Chambeali, Bharmauri and Kinnauri. 
The main caste groups in Himachal Pradesh are Rajputs, Brahmins, Kanets, Kulindas, Girths, Raos, Rathis, Thakurs, Kolis, Holis, Chamars, Darains, Rehars, Chanals, Lohars, Baris, Dagis, Dhakhis, Turis and Batwals.
Himachal is well known for its handicrafts. The carpets, leather works, Kullu shawls, Kangra paintings, Chamba Rumals, stoles, embroidered grass footwear (Pullan chappal), silver jewellery, metal ware, knitted woolen socks, Pattoo, basketry of cane and bamboo (wicker and rattan) and woodwork are among the notable ones. Of late, the demand for these handicrafts has increased within and outside the country. Himachali caps of various colour bands are also a well-known local craft and are often treated as a symbol of Himachali identity. The colour of the Himachali cap has long been an indicator of political loyalties in the hill state, with Congress party leaders such as Virbhadra Singh donning caps with a green band and the rival BJP leader Prem Kumar Dhumal wearing a cap with a maroon band. The former has served six terms as Chief Minister of the state, while the latter is a two-time Chief Minister.
Local music and dance also reflect the cultural identity of the state. Through their dance and music, the Himachali people entreat their gods during local festivals and other special occasions. Apart from national fairs and festivals, there are regional fairs and festivals, including the temple fairs in nearly every region, that are of great significance to Himachal Pradesh. The Kullu Dussehra festival is nationally known. The day-to-day cuisine of Himachalis is similar to that of the rest of northern India, with Punjabi and Tibetan influences. Lentils (dāl), rice, vegetables and chapati (wheat flatbread) form the staple food of the local population. Non-vegetarian food is more widely accepted in Himachal Pradesh than elsewhere in India, partly due to the scarcity of fresh vegetables in the hilly terrain of the state. The region also has a number of Himachali specialities of its own.
Notable people
Virbhadra Singh
JP Nadda
Prem Kumar Dhumal
Jai Ram Thakur
Anurag Thakur
Mohit Chauhan
Kangana Ranaut
Yami Gautam
Randeep Guleria
Vikram Batra
Rubina Dilaik
Preity Zinta
Jay Chaudhry
Anupam Kher
Gaurav Sharma (politician)
Education
At the time of Independence, Himachal Pradesh had a literacy rate of 8 per cent, one of the lowest in the country. By 2011, the literacy rate had surged to 82.8%, making Himachal one of the most literate states in the country. There are over 10,000 primary schools, 1,000 secondary schools and more than 1,300 high schools in the state. In meeting the constitutional obligation to make primary education compulsory, Himachal became the first state in India to make elementary education accessible to every child. Himachal Pradesh is an exception to the nationwide gender bias in education levels. The state has a female literacy rate of around 76%. In addition, school enrolment and participation rates for girls are almost universal at the primary level. While higher levels of education do reflect a gender-based disparity, Himachal is still significantly ahead of other states at bridging the gap. The Hamirpur district in particular stands out for high literacy rates across all metrics of measurement. The state government has played an instrumental role
film
Hélène (drama), an 1891 play by Paul Delair
Helene, English edition of a German novel by Vicki Baum
Hélène (film), a 1936 French drama film based on the novel by Baum
Music
Hélène (opera), a 1904 opera by Camille Saint-Saëns
Polka Hélène in D minor for piano four hands by Borodin
Hélène (album), a 1989 album by Roch Voisine
Hélène (Hélène Rollès album), a 1992 album by Hélène Rollès
Hélène, a 2002 album by Hélène Segara
"Hélène" (song), a 1989 song by Roch Voisine
"Hélène", a song by Julien Clerc
of the four Ark starships in Mass Effect: Andromeda
Hyperion, an airship in the film The Island at the Top of the World
Hyperion, an airship in the novel Skybreaker
Hyperion, a ship in the TV series Skyland
Hyperion, the flagship of Yang Wenli, a Legend of the Galactic Heroes character
Emperor Hyperion, chief of the alien villains' race in the anime series Gekiganger III
Hyperion Hotel, a fictional home base for Angel in the television series Angel
Computing
Hyperion (computer), an early portable computer
Hyperion, a RuneScape emulator by Graham Edgecombe
Hyperion, a hyperspectral imaging spectrometer on the NASA Earth Observing-1 satellite
Hyperion, Disney's rendering system first used for Big Hero 6 (film)
Vehicles
Hyperion, a version of the Rolls-Royce Phantom Drophead Coupé
Hyperion (yacht), a large sloop launched in 1998
coast redwood in Northern California and the world's tallest known living tree
Hyperion proto-supercluster, a supercluster of galaxy groups discovered in 2018
Literature
Hyperion (Hölderlin novel), a 1799 book by Friedrich Hölderlin
Hyperion (poem), an 1819 poem by John Keats
Hyperion (Longfellow novel), an 1839 book by Henry Wadsworth Longfellow
Hyperion (Simmons novel), a 1989 novel by Dan Simmons
Hyperion Cantos, the series of novels that started with Hyperion
Hyperion (magazine), a 1908–1910 German literary journal
Hyperion (comics), the name of several characters in the Marvel Comics universe
Music
Hyperion (Manticora album), 2002
Hyperion (Marilyn Crispell album), 1992
Hyperion (Gesaffelstein album), 2019
Hyperion, a 2016 EP by Krallice
Hyperion, a 2018 album by St. Lucia
Hyperion Records, an independent British classical music label
Businesses and organizations
Hachette Books, a book publishing division known until 2014 as Hyperion Books
Hyperion Books for Children, a book publisher
Hyperion Entertainment, a computer game producer
Hyperion Pictures, a film production company
Hyperion Power Generation, a nuclear power company
Hyperion Records, an independent British classical music label
Hyperion Theatricals, part of Disney Theatrical Group
Oracle Hyperion, a business software company owned by Oracle
Places and facilities
Hyperion, California
Hyperion Theater, a theater at the Disney California Adventure theme park in Anaheim, California
Hyperion sewage treatment plant, Playa del Rey, California
Hyperion Tower (or Mok-dong Hyperion Towers), Seoul, South Korea
Fictional entities and characters
Hyperion, the flagship of Jim Raynor in StarCraft
Hyperion Corporation, an organization
Women
Catholic women played large roles in health and healing in medieval and early modern Europe. A life as a nun was a prestigious role; wealthy families provided dowries for their daughters, and these funded the convents, while the nuns provided free nursing care for the poor. The Catholic elites provided hospital services because of their theology of salvation that good works were the route to heaven. The Protestant reformers rejected the notion that rich men could gain God's grace through good works—and thereby escape purgatory—by providing cash endowments to charitable institutions. They also rejected the Catholic idea that the poor patients earned grace and salvation through their suffering. Protestants generally closed all the convents and most of the hospitals, sending women home to become housewives, often against their will. On the other hand, local officials recognized the public value of hospitals, and some were continued in Protestant lands, but without monks or nuns and in the control of local governments. In London, the crown allowed two hospitals to continue their charitable work, under nonreligious control of city officials. The convents were all shut down, but Harkness finds that women—some of them former nuns—were part of a new system that delivered essential medical services to people outside their family. They were employed by parishes and hospitals, as well as by private families, and provided nursing care as well as some medical, pharmaceutical, and surgical services. Meanwhile, in Catholic lands such as France, rich families continued to fund convents and monasteries, and enrolled their daughters as nuns who provided free health services to the poor. Nursing was a religious role for the nurse, and there was little call for science.
Age of Enlightenment
During the Age of Enlightenment, the 18th century, science was held in high esteem and physicians upgraded their social status by becoming more scientific. The health field was crowded with self-trained barber-surgeons, apothecaries, midwives, drug peddlers, and charlatans. Across Europe medical schools relied primarily on lectures and readings. Final-year students would gain limited clinical experience by trailing the professor through the wards. Laboratory work was uncommon, and dissections were rarely done because of legal restrictions on cadavers. Most schools were small, and only Edinburgh, Scotland, with 11,000 alumni, produced large numbers of graduates.
Britain
In Britain, there were but three small hospitals after 1550. Pelling and Webster estimate that in London in the 1580 to 1600 period, out of a population of nearly 200,000 people, there were about 500 medical practitioners. Nurses and midwives are not included. There were about 50 physicians, 100 licensed surgeons, 100 apothecaries, and 250 additional unlicensed practitioners. In the last category about 25% were women. All across Britain—and indeed all of the world—the vast majority of the people in city, town or countryside depended for medical care on local amateurs with no professional training but with a reputation as wise healers who could diagnose problems and advise sick people what to do—and perhaps set broken bones, pull a tooth, give some traditional herbs or brews or perform a little magic to cure what ailed them. The London Dispensary opened in 1696, the first clinic in the British Empire to dispense medicines to poor sick people. The innovation was slow to catch on, but new dispensaries were opened in the 1770s. In the colonies, small hospitals opened in Philadelphia in 1752, New York in 1771, and Boston (Massachusetts General Hospital) in 1811.
Guy's Hospital, the first great British hospital with a modern foundation, opened in 1721 in London, with funding from businessman Thomas Guy. It had been preceded by St Bartholomew's Hospital and St Thomas's Hospital, both medieval foundations. In 1829 a bequest of £200,000 by William Hunt funded expansion for an additional hundred beds at Guy's. Samuel Sharp (1709–78), a surgeon at Guy's Hospital from 1733 to 1757, was internationally famous; his A Treatise on the Operations of Surgery (1st ed., 1739) was the first British study focused exclusively on operative technique. English physician Thomas Percival (1740–1804) wrote a comprehensive system of medical conduct, Medical Ethics; or, a Code of Institutes and Precepts, Adapted to the Professional Conduct of Physicians and Surgeons (1803), that set the standard for many textbooks.
Spain and Spanish Empire
In the Spanish Empire, the viceregal capital of Mexico City was a site of medical training for physicians and the creation of hospitals. Epidemic disease had decimated indigenous populations starting with the early sixteenth-century Spanish conquest of the Aztec empire, when a black auxiliary in the armed forces of conqueror Hernán Cortés, with an active case of smallpox, set off a virgin land epidemic among indigenous peoples, Spanish allies and enemies alike. Aztec emperor Cuitlahuac died of smallpox. Disease was a significant factor in the Spanish conquest elsewhere as well. Medical education instituted at the Royal and Pontifical University of Mexico chiefly served the needs of urban elites. Male and female curanderos, or lay practitioners, attended to the ills of the popular classes. The Spanish crown began regulating the medical profession just a few years after the conquest, setting up the Royal Tribunal of the Protomedicato, a board for licensing medical personnel, in 1527. Licensing became more systematic after 1646, with physicians, druggists, surgeons, and bleeders requiring a license before they could publicly practice. Crown regulation of medical practice became more general in the Spanish empire. Elites and the popular classes alike called on divine intervention in personal and society-wide health crises, such as the epidemic of 1737. The intervention of the Virgin of Guadalupe was depicted in a scene of dead and dying Indians, with elites on their knees praying for her aid. In the late eighteenth century, the crown began implementing secularizing policies on the Iberian peninsula and its overseas empire to control disease more systematically and scientifically.
Spanish Quest for Medicinal Spices
Botanical medicines also became popular during the 16th, 17th, and 18th centuries. Spanish pharmaceutical books of this period contain medicinal recipes consisting of spices, herbs, and other botanical products. For example, nutmeg oil was documented as a cure for stomach ailments, and cardamom oil was believed to relieve intestinal ailments. With the rise of the global trade market, spices, herbs, and many other goods indigenous to particular territories began to appear in locations across the globe. Herbs and spices were especially popular for their utility in cooking and medicines. As a result of this popularity and the increased demand for spices, some areas in Asia, like China and Indonesia, became hubs for spice cultivation and trade. The Spanish Empire also wanted to benefit from the international spice trade, so they looked towards their American colonies.
The Spanish American colonies became an area where the Spanish searched to discover new spices and indigenous American medicinal recipes. The Florentine Codex, a 16th-century ethnographic research study in Mesoamerica by the Spanish Franciscan friar Bernardino de Sahagún, is a major contribution to the history of Nahua medicine. The Spanish did discover many spices and herbs new to them, some of which were reportedly similar to Asian spices. A Spanish physician by the name of Nicolás Monardes studied many of the American spices coming into Spain. He documented many of the new American spices and their medicinal properties in his survey Historia medicinal de las cosas que se traen de nuestras Indias Occidentales. For example, Monardes describes the "Long Pepper" (Pimienta luenga), found along the coasts of the countries now known as Panama and Colombia, as a pepper that was more flavorful, healthy, and spicy than the Eastern black pepper. The Spanish interest in American spices can first be seen in the commissioning of the Libellus de Medicinalibus Indorum Herbis, a Spanish-American codex describing indigenous American spices and herbs and the ways in which these were used in natural Aztec medicines. The codex was commissioned in 1552 by Francisco de Mendoza, the son of Antonio de Mendoza, the first Viceroy of New Spain. Francisco de Mendoza was interested in studying the properties of these herbs and spices so that he would be able to profit from the trade of these herbs and the medicines that could be produced from them. Francisco de Mendoza recruited the help of Monardes in studying the traditional medicines of the indigenous people living in what were then the Spanish colonies. Monardes researched these medicines and performed experiments to discover the possibilities of spice cultivation and medicine creation in the Spanish colonies. The Spanish transplanted some herbs from Asia, but only a few foreign crops were successfully grown in the Spanish colonies. One notable crop brought from Asia and successfully grown in the Spanish colonies was ginger, which was considered Hispaniola's leading crop at the end of the 16th century. The Spanish Empire did profit from cultivating herbs and spices, but they also introduced pre-Columbian American medicinal knowledge to Europe. Other Europeans were inspired by the actions of Spain and decided to try to establish a botanical transplant system in the colonies they controlled; however, these subsequent attempts were not successful.
19th century: rise of modern medicine
The practice of medicine changed in the face of rapid advances in science, as well as new approaches by physicians. Hospital doctors began much more systematic analysis of patients' symptoms in diagnosis. Among the more powerful new techniques were anaesthesia and the development of both antiseptic and aseptic operating theatres. Effective cures were developed for certain endemic infectious diseases. However, the decline in many of the most lethal diseases was due more to improvements in public health and nutrition than to advances in medicine. Medicine was revolutionized in the 19th century and beyond by advances in chemistry, laboratory techniques, and equipment. Old ideas of infectious disease epidemiology were gradually replaced by advances in bacteriology and virology.
Germ theory and bacteriology
In the 1830s in Italy, Agostino Bassi traced the silkworm disease muscardine to microorganisms.
Meanwhile, in Germany, Theodor Schwann led research on alcoholic fermentation by yeast, proposing that living microorganisms were responsible. Leading chemists, such as Justus von Liebig, seeking solely physicochemical explanations, derided this claim and alleged that Schwann was regressing to vitalism. In 1847 in Vienna, Ignaz Semmelweis (1818–1865) dramatically reduced the death rate of new mothers (due to childbed fever) by requiring physicians to clean their hands before attending childbirth, yet his principles were marginalized and attacked by professional peers. At that time most people still believed that infections were caused by foul odors called miasmas. French scientist Louis Pasteur confirmed Schwann's fermentation experiments in 1857 and afterwards supported the hypothesis that yeast were microorganisms. Moreover, he suggested that such a process might also explain contagious disease. In 1860, Pasteur's report on bacterial fermentation of butyric acid motivated fellow Frenchman Casimir Davaine to identify a similar species as the pathogen of the deadly disease anthrax. Others dismissed it as a mere byproduct of the disease. British surgeon Joseph Lister, however, took these findings seriously and subsequently introduced antisepsis to wound treatment in 1865. German physician Robert Koch, noting fellow German Ferdinand Cohn's report of a spore stage of a certain bacterial species, traced the life cycle of Davaine's bacterium, identified spores, inoculated laboratory animals with them, and reproduced anthrax—a breakthrough for experimental pathology and the germ theory of disease. Pasteur's group added ecological investigations confirming spores' role in the natural setting, while Koch published a landmark treatise in 1878 on the bacterial pathology of wounds. In 1881, Koch reported discovery of the "tubercle bacillus", cementing germ theory and Koch's acclaim. Upon the outbreak of a cholera epidemic in Alexandria, Egypt, two medical missions went to investigate and attend the sick: one was sent by Pasteur and the other was led by Koch. Koch's group returned in 1883, having successfully discovered the cholera pathogen. In Germany, however, Koch's bacteriologists had to vie against Max von Pettenkofer, Germany's leading proponent of miasmatic theory. Pettenkofer conceded bacteria's causal involvement, but maintained that other, environmental factors were required to make them pathogenic, and opposed water treatment as a misdirected effort amid more important ways to improve public health. The massive cholera epidemic in Hamburg in 1892 devastated Pettenkofer's position, and yielded German public health to "Koch's bacteriology". On losing the 1883 rivalry in Alexandria, Pasteur switched research direction, and introduced his third vaccine—rabies vaccine—the first vaccine for humans since Jenner's for smallpox. From across the globe, donations poured in, funding the founding of the Pasteur Institute, the globe's first biomedical institute, which opened in 1888. Along with Koch's bacteriologists, Pasteur's group—which preferred the term microbiology—led medicine into the new era of "scientific medicine" upon bacteriology and germ theory. Adapted from criteria of Jakob Henle, Koch's steps to confirm a species' pathogenicity became famous as "Koch's postulates". Although his proposed tuberculosis treatment, tuberculin, seemingly failed, it soon was used to test for infection with the involved species.
In 1905, Koch was awarded the Nobel Prize in Physiology or Medicine, and remains renowned as the founder of medical microbiology.
Women
Women as healers
Women have served as healers and midwives since ancient times. However, the professionalization of medicine forced them increasingly to the sidelines. As hospitals multiplied in Europe in the early 19th century, they relied on orders of Roman Catholic nun-nurses and German Protestant and Anglican deaconesses. They were trained in traditional methods of physical care that involved little knowledge of medicine. The breakthrough to professionalization based on knowledge of advanced medicine was led by Florence Nightingale in England. She resolved to provide more advanced training than she saw on the Continent. At Kaiserswerth, where the first German nursing schools were founded in 1836 by Theodor Fliedner, she said, "The nursing was nil and the hygiene horrible." Britain's male doctors preferred the old system, but Nightingale won out and her Nightingale Training School opened in 1860 and became a model. The Nightingale solution depended on the patronage of upper-class women, and they proved eager to serve. Royalty became involved. In 1902 the wife of the British king took control of the nursing unit of the British army, became its president, and renamed it after herself as the Queen Alexandra's Royal Army Nursing Corps; when she died the next queen became president. Today its Colonel-in-Chief is Sophie, Countess of Wessex, the daughter-in-law of Queen Elizabeth II. In the United States, upper-middle-class women who already supported hospitals promoted nursing. The new profession proved highly attractive to women of all backgrounds, and schools of nursing opened in the late 19th century. They soon became a function of large hospitals, where they provided a steady stream of low-paid idealistic workers. The International Red Cross began operations in numerous countries in the late 19th century, promoting nursing as an ideal profession for middle-class women. The Nightingale model was widely copied. Linda Richards (1841–1930) studied in London and became the first professionally trained American nurse. She established nursing training programs in the United States and Japan, and created the first system for keeping individual medical records for hospitalized patients. The Russian Orthodox Church sponsored seven orders of nursing sisters in the late 19th century. They ran hospitals, clinics, almshouses, pharmacies, and shelters as well as training schools for nurses. In the Soviet era (1917–1991), with the aristocratic sponsors gone, nursing became a low-prestige occupation based in poorly maintained hospitals.
Women as physicians
It was very difficult for women to become doctors in any field before the 1970s. Elizabeth Blackwell (1821–1910) became the first woman to formally study and practice medicine in the United States. She was a leader in women's medical education. While Blackwell viewed medicine as a means for social and moral reform, her student Mary Putnam Jacobi (1842–1906) focused on curing disease. At a deeper level of disagreement, Blackwell felt that women would succeed in medicine because of their humane female values, but Jacobi believed that women should participate as the equals of men in all medical specialties using identical methods, values and insights. In the Soviet Union, although the majority of medical doctors were women, they were paid less than the mostly male factory workers.
Paris
Paris (France) and Vienna were the two leading medical centers on the Continent in the era 1750–1914. In the 1770s–1850s Paris became a world center of medical research and teaching. The "Paris School" emphasized that teaching and research should be based in large hospitals and promoted the professionalization of the medical profession and an emphasis on sanitation and public health. A major reformer was Jean-Antoine Chaptal (1756–1832), a physician who was Minister of Internal Affairs. He created the Paris Hospital, health councils, and other bodies. Louis Pasteur (1822–1895) was one of the most important founders of medical microbiology. He is remembered for his remarkable breakthroughs in the causes and preventions of diseases. His discoveries reduced mortality from puerperal fever, and he created the first vaccines for rabies and anthrax. His experiments supported the germ theory of disease. He was best known to the general public for inventing a method to treat milk and wine in order to prevent them from causing sickness, a process that came to be called pasteurization. He is regarded as one of the three main founders of microbiology, together with Ferdinand Cohn and Robert Koch. He worked chiefly in Paris and in 1887 founded the Pasteur Institute there to perpetuate his commitment to basic research and its practical applications. As soon as his institute was created, Pasteur brought together scientists with various specialties. The first five departments were directed by Emile Duclaux (general microbiology research) and Charles Chamberland (microbe research applied to hygiene), as well as a biologist, Ilya Ilyich Mechnikov (morphological microbe research) and two physicians, Jacques-Joseph Grancher (rabies) and Emile Roux (technical microbe research). One year after the inauguration of the Institut Pasteur, Roux set up the first course of microbiology ever taught in the world, then entitled Cours de Microbie Technique (Course of microbe research techniques). It became the model for numerous research centers around the world named "Pasteur Institutes."
Vienna
The First Viennese School of Medicine, 1750–1800, was led by the Dutchman Gerard van Swieten (1700–1772), who aimed to put medicine on new scientific foundations—promoting unprejudiced clinical observation, botanical and chemical research, and introducing simple but powerful remedies. When the Vienna General Hospital opened in 1784, it at once became the world's largest hospital and physicians acquired a facility that gradually developed into the most important research centre. Progress ended with the Napoleonic wars and the government shutdown in 1819 of all liberal journals and schools; this caused a general return to traditionalism and eclecticism in medicine. Vienna was the capital of a diverse empire and attracted not just Germans but Czechs, Hungarians, Jews, Poles and others to its world-class medical facilities. After 1820 the Second Viennese School of Medicine emerged with the contributions of physicians such as Carl Freiherr von Rokitansky, Josef Škoda, Ferdinand Ritter von Hebra, and Ignaz Philipp Semmelweis. Basic medical science expanded and specialization advanced. Furthermore, the first dermatology, eye, as well as ear, nose, and throat clinics in the world were founded in Vienna. The textbook of ophthalmologist Georg Joseph Beer (1763–1821), Lehre von den Augenkrankheiten, combined practical research and philosophical speculations, and became the standard reference work for decades.
Berlin
After 1871 Berlin, the capital of the new German Empire, became a leading center for medical research. Robert Koch (1843–1910) was a representative leader. He became famous for isolating Bacillus anthracis (1877), the tuberculosis bacillus (1882) and Vibrio cholerae (1883) and for his development of Koch's postulates. He was awarded the Nobel Prize in Physiology or Medicine in 1905 for his tuberculosis findings. Koch is one of the founders of microbiology, inspiring such major figures as Paul Ehrlich and Gerhard Domagk.
U.S. Civil War
In the American Civil War (1861–65), as was typical of the 19th century, more soldiers died of disease than in battle, and even larger numbers were temporarily incapacitated by wounds, disease and accidents. Conditions were poor in the Confederacy, where doctors and medical supplies were in short supply. The war had a dramatic long-term impact on medicine in the U.S., from surgical technique to hospitals to nursing and to research facilities. Weapon development, particularly the appearance of the Springfield Model 1861, mass-produced and much more accurate than muskets, led generals to underestimate the risks of long-range rifle fire, risks exemplified in the death of John Sedgwick and the disastrous Pickett's Charge. The rifles could shatter bone, forcing amputation, and longer ranges meant casualties were sometimes not quickly found. Evacuation of the wounded from the Second Battle of Bull Run took a week. As in earlier wars, untreated casualties sometimes survived unexpectedly due to maggots debriding the wound, an observation which led to the surgical use of maggots, still a useful method in the absence of effective antibiotics. The hygiene of the training and field camps was poor, especially at the beginning of the war when men who had seldom been far from home were brought together for training with thousands of strangers. First came epidemics of the childhood diseases of chicken pox, mumps, whooping cough, and, especially, measles. Operations in the South meant a dangerous and new disease environment, bringing diarrhea, dysentery, typhoid fever, and malaria. There were no antibiotics, so the surgeons prescribed coffee, whiskey, and quinine. Harsh weather, bad water, inadequate shelter in winter quarters, poor policing of camps, and dirty camp hospitals took their toll. This was a common scenario in wars from time immemorial, and conditions faced by the Confederate army were even worse. The Union responded by building army hospitals in every state. What was different in the Union was the emergence of skilled, well-funded medical organizers who took proactive action, especially in the much enlarged United States Army Medical Department, and the United States Sanitary Commission, a new private agency. Numerous other new agencies also targeted the medical and morale needs of soldiers, including the United States Christian Commission as well as smaller private agencies. The U.S. Army learned many lessons, and in August 1886 it established the Hospital Corps.
Statistical methods
A major breakthrough in epidemiology came with the introduction of statistical maps and graphs. They allowed careful analysis of seasonality issues in disease incidents, and the maps allowed public health officials to identify critical loci for the dissemination of disease. John Snow in London developed the methods. In 1849, he observed that the symptoms of cholera, which had already claimed around 500 lives within a month, were vomiting and diarrhoea.
He concluded that the source of contamination must be through ingestion, rather than inhalation as was previously thought. It was this insight that resulted in the removal of the pump on Broad Street, after which deaths from cholera plummeted. English nurse Florence Nightingale pioneered analysis of large amounts of statistical data, using graphs and tables, regarding the condition of thousands of patients in the Crimean War to evaluate the efficacy of hospital services. Her methods proved convincing and led to reforms in military and civilian hospitals, usually with the full support of the government. By the late 19th and early 20th century, English statisticians led by Francis Galton, Karl Pearson and Ronald Fisher developed mathematical tools such as correlations and hypothesis tests that made possible much more sophisticated analysis of statistical data. During the U.S. Civil War the Sanitary Commission collected enormous amounts of statistical data, and opened up the problems of storing information for fast access and mechanically searching for data patterns. The pioneer was John Shaw Billings (1838–1913). A senior surgeon in the war, Billings built the Library of the Surgeon General's Office (now the National Library of Medicine), the centerpiece of modern medical information systems. Billings figured out how to mechanically analyze medical and demographic data by turning facts into numbers and punching the numbers onto cardboard cards that could be sorted and counted by machine. The applications were developed by his assistant Herman Hollerith; Hollerith invented the punch card and counter-sorter system that dominated statistical data manipulation until the 1970s. Hollerith's company became International Business Machines (IBM) in 1911.
Worldwide dissemination
United States
Johns Hopkins Hospital, founded in 1889, originated several modern medical practices, including residency and rounds.
Japan
European ideas of modern medicine were spread widely through the world by medical missionaries, and the dissemination of textbooks. Japanese elites enthusiastically embraced Western medicine after the Meiji Restoration of the 1860s. However, they had been prepared by their knowledge of Dutch and German medicine, for they had some contact with Europe through the Dutch. Highly influential on Japanese obstetrics was the 1765 edition of Hendrik van Deventer's pioneering work Nieuw Ligt ("A New Light"), especially through Katakura Kakuryo's publication in 1799 of Sanka Hatsumo ("Enlightenment of Obstetrics"). A cadre of Japanese physicians began to interact with Dutch doctors, who introduced smallpox vaccinations. By 1820 Japanese ranpô medical practitioners not only translated Dutch medical texts but also integrated their readings with clinical diagnoses. These men became leaders of the modernization of medicine in their country. They broke from Japanese traditions of closed medical fraternities and adopted the European approach of an open community of collaboration based on expertise in the latest scientific methods. Kitasato Shibasaburō (1853–1931) studied bacteriology in Germany under Robert Koch. In 1891 he founded the Institute of Infectious Diseases in Tokyo, which introduced the study of bacteriology to Japan. He and French researcher Alexandre Yersin went to Hong Kong in 1894, where Kitasato confirmed Yersin's discovery that the bacterium Yersinia pestis is the agent of the plague. In 1897 he isolated and described the organism that caused dysentery.
He became the first dean of medicine at Keio University, and the first president of the Japan Medical Association. Japanese physicians immediately recognized the value of X-rays. They were able to purchase the equipment locally from the Shimadzu Company, which developed, manufactured, marketed, and distributed X-ray machines after 1900. Japan not only adopted German methods of public health in the home islands, but implemented them in its colonies, especially Korea and Taiwan, and after 1931 in Manchuria. A heavy investment in sanitation resulted in a dramatic increase of life expectancy.
Psychiatry
Until the nineteenth century, the care of the insane was largely a communal and family responsibility rather than a medical one. The vast majority of the mentally ill were treated in domestic contexts with only the most unmanageable or burdensome likely to be institutionally confined. This situation was transformed radically from the late eighteenth century as, amid changing cultural conceptions of madness, a new-found optimism in the curability of insanity within the asylum setting emerged. Increasingly, lunacy was perceived less as a physiological condition than as a mental and moral one to which the correct response was persuasion, aimed at inculcating internal restraint, rather than external coercion. This new therapeutic sensibility, referred to as moral treatment, was epitomised in French physician Philippe Pinel's quasi-mythological unchaining of the lunatics of the Bicêtre Hospital in Paris and realised in an institutional setting with the foundation in 1796 of the Quaker-run York Retreat in England. From the early nineteenth century, as lay-led lunacy reform movements gained in influence, ever more state governments in the West extended their authority and responsibility over the mentally ill. Small-scale asylums, conceived as instruments to reshape both the mind and behaviour of the disturbed, proliferated across these regions. By the 1830s, moral treatment, together with the asylum itself, became increasingly medicalised and asylum doctors began to establish a distinct medical identity with the establishment in the 1840s of associations for their members in France, Germany, the United Kingdom and America, together with the founding of medico-psychological journals. Medical optimism in the capacity of the asylum to cure insanity soured by the close of the nineteenth century as the growth of the asylum population far outstripped that of the general population. Processes of long-term institutional segregation, allowing for the psychiatric conceptualisation of the natural course of mental illness, supported the perspective that the insane were a distinct population, subject to mental pathologies stemming from specific medical causes. As degeneration theory grew in influence from the mid-nineteenth century, heredity was seen as the central causal element in chronic mental illness, and, with national asylum systems overcrowded and insanity apparently undergoing an inexorable rise, the focus of psychiatric therapeutics shifted from a concern with treating the individual to maintaining the racial and biological health of national populations. Emil Kraepelin (1856–1926) introduced new medical categories of mental illness, which eventually came into psychiatric usage despite their basis in behavior rather than pathology or underlying cause. Shell shock among frontline soldiers exposed to heavy artillery bombardment was first diagnosed by British Army doctors in 1915.
By 1916, similar symptoms were also noted in soldiers not exposed to explosive shocks, leading to questions as to whether the disorder was physical or psychiatric. In the 1920s, opposition to psychiatry was expressed in a number of surrealist publications. In the 1930s several controversial medical practices were introduced, including inducing seizures (by electroshock, insulin or other drugs) or cutting parts of the brain apart (leucotomy or lobotomy). Both came into widespread use by psychiatry, but there were grave concerns and much opposition on grounds of basic morality, harmful effects, or misuse. In the 1950s new psychiatric drugs, notably the antipsychotic chlorpromazine, were designed in laboratories and slowly came into preferred use. Although often accepted as an advance in some ways, there was some opposition, due to serious adverse effects such as tardive dyskinesia. Patients often opposed psychiatry and refused or stopped taking the drugs when not subject to psychiatric control. There was also increasing opposition to the use of psychiatric hospitals, and attempts to move people back into the community through a collaborative user-led group approach ("therapeutic communities") not controlled by psychiatry. Campaigns against masturbation were conducted in the Victorian era and elsewhere. Lobotomy was used until the 1970s to treat schizophrenia. This was denounced by the anti-psychiatric movement in the 1960s and later.
20th century and beyond
Twentieth-century warfare and medicine
The ABO blood group system was discovered in 1901, and the Rhesus blood group system in 1937, facilitating blood transfusion. During the 19th century, large-scale wars were attended by medics and mobile hospital units which developed advanced techniques for healing massive injuries and controlling infections rampant in battlefield conditions. During the Mexican Revolution (1910–1920), General Pancho Villa organized hospital trains for wounded soldiers. Boxcars marked Servicio Sanitario ("sanitary service") were re-purposed as surgical operating theaters and areas for recuperation, and staffed by up to 40 Mexican and U.S. physicians. Severely wounded soldiers were shuttled back to base hospitals. Canadian physician Norman Bethune, M.D. developed a mobile blood-transfusion service for frontline operations in the Spanish Civil War (1936–1939), but ironically, he himself died of blood poisoning. Thousands of scarred troops created the need for improved prosthetic limbs and expanded techniques in plastic surgery or reconstructive surgery. Those practices were combined to broaden cosmetic surgery and other forms of elective surgery. During the First World War, Alexis Carrel and Henry Dakin developed the Carrel-Dakin method of treating wounds with irrigation using Dakin's solution, a germicide which helped prevent gangrene. The war spurred the usage of Roentgen's X-ray and the electrocardiograph for the monitoring of internal bodily functions. This was followed in the inter-war period by the development of the first anti-bacterial agents such as the sulpha antibiotics.
Public health
Public health measures became particularly important during the 1918 flu pandemic, which killed at least 50 million people around the world. It became an important case study in epidemiology. Bristow shows there was a gendered response of health caregivers to the pandemic in the United States. Male doctors were unable to cure the patients, and they felt like failures.
Women nurses also saw their patients die, but they took pride in their success in fulfilling their professional role of caring for, ministering, comforting, and easing the last hours of their patients, and helping the families of the patients cope as well. From 1917 to 1932, the American Red Cross moved into Europe with a battery of long-term child health projects. It built and operated hospitals and clinics, and organized antituberculosis and antityphus campaigns. A high priority involved child health programs such as clinics, better baby shows, playgrounds, fresh air camps, and courses for women on infant hygiene. Hundreds of U.S. doctors, nurses, and welfare professionals administered these programs, which aimed to reform the health of European youth and to reshape European public health and welfare along American lines.
Second World War
The advances in medicine made a dramatic difference for Allied troops, while the Germans and especially the Japanese and Chinese suffered from a severe lack of newer medicines, techniques and facilities. Harrison finds that the chances of recovery for a badly wounded British infantryman were as much as 25 times better than in the First World War. The reason was that "By 1944 most casualties were receiving treatment within hours of wounding, due to the increased mobility of field hospitals and the extensive use of aeroplanes as ambulances. The care of the sick and wounded had also been revolutionized by new medical technologies, such as active immunization against tetanus, sulphonamide drugs, and penicillin."
Nazi and Japanese medical research
Unethical human subject research, and killing of patients with disabilities, peaked during the Nazi era, with Nazi human experimentation and Aktion T4 during the Holocaust as the most significant examples. Many of the details of these and related events were the focus of the Doctors' Trial. Subsequently, principles of medical ethics, such as the Nuremberg Code, were introduced to prevent a recurrence of such atrocities. After 1937, the Japanese Army established programs of biological warfare in China. In Unit 731, Japanese doctors and research scientists conducted large numbers of vivisections and experiments on human beings, mostly Chinese victims.
Malaria
Starting in World War II, DDT was used as an insecticide to combat insect vectors carrying malaria, which was endemic in most tropical regions of the world. The first goal was to protect soldiers, but it was widely adopted as a public health device. In Liberia, for example, the United States had large military operations during the war and the U.S. Public Health Service began the use of DDT for indoor residual spraying (IRS) and as a larvicide, with the goal of controlling malaria in Monrovia, the Liberian capital. In the early 1950s, the project was expanded to nearby villages. In 1953, the World Health Organization (WHO) launched an antimalaria program in parts of Liberia as a pilot project to determine the feasibility of malaria eradication in tropical Africa. However, these projects encountered a spate of difficulties that foreshadowed the general retreat from malaria eradication efforts across tropical Africa by the mid-1960s.
Post-World War II
The World Health Organization was founded in 1948 as a United Nations agency to improve global health. In most of the world, life expectancy has improved since then, reaching about 67 years, and well above 80 years in some countries.
Eradication of infectious diseases is an international effort, and several new vaccines have been developed during the post-war years, against infections such as measles, mumps, several strains of influenza and human papillomavirus. The long-known vaccine against smallpox finally eradicated the disease in the 1970s, and rinderpest was wiped out in 2011. Eradication of polio is underway. Tissue culture is important for the development of vaccines. Despite the early success of antiviral vaccines and antibacterial drugs, antiviral drugs were not introduced until the 1970s. Through the WHO, the international community has developed a response protocol against epidemics, displayed during the SARS epidemic in 2003, the Influenza A virus subtype H5N1 outbreaks from 2004 onwards, and the Ebola virus epidemic in West Africa. As infectious diseases have become less lethal, and the most common causes of death in developed countries are now tumors and cardiovascular diseases, these conditions have received increased attention in medical research. Tobacco smoking as a cause of lung cancer was first researched in the 1920s, but was not widely supported by publications until the 1950s. Cancer treatment has been developed with radiotherapy, chemotherapy and surgical oncology. Oral rehydration therapy has been extensively used since the 1970s to treat cholera and other diarrhea-inducing infections. The sexual revolution included taboo-breaking research in human sexuality such as the 1948 and 1953 Kinsey reports, the invention of hormonal contraception, and the normalization of abortion and homosexuality in many countries. Family planning has promoted a demographic transition in most of the world. With threatening sexually transmitted infections, not least HIV, use of barrier contraception has become imperative. The struggle against HIV has improved antiretroviral treatments. X-ray imaging was the first kind of medical imaging, and later ultrasonic imaging, CT scanning, MR scanning and other imaging methods became available. Genetics has advanced with the discovery of the DNA molecule, genetic mapping and gene therapy. Stem cell research took off in the 2000s, with stem cell therapy as a promising method. Evidence-based medicine is a modern concept, not introduced to the literature until the 1990s. Prosthetics have improved. In 1958, Arne Larsson in Sweden became the first patient to depend on an artificial cardiac pacemaker. He died in 2001 at age 86, having outlived its inventor, the surgeon, and 26 pacemakers. Lightweight materials as well as neural prosthetics emerged at the end of the 20th century.
Modern surgery
Cardiac surgery was revolutionized in 1948 as open-heart surgery was introduced for the first time since 1925. In 1954 Joseph Murray, J. Hartwell Harrison and others accomplished the first kidney transplantation. Transplantations of other organs, such as heart, liver and pancreas, were also introduced during the later 20th century. The first partial face transplant was performed in 2005, and the first full one in 2010. By the end of the 20th century, microtechnology had been used to create tiny robotic devices to assist microsurgery using micro-video and fiber-optic cameras to view internal tissues during surgery with minimally invasive practices. Laparoscopic surgery was broadly introduced in the 1990s. Natural orifice surgery has followed. Remote surgery is another recent development, with the transatlantic Lindbergh operation in 2001 as a groundbreaking example.
See also
Health care in the United States
History of dental treatments
History of herbalism
History of hospitals
History of medicine in Canada
History of medicine in the United States
History of nursing
History of pathology
History of pharmacy
History of surgery
Timeline of nursing history
Timeline of medicine and medical technology
History of health care (disambiguation)
color. Being extroverted, talkative, easygoing, carefree, and sociable coincides with a sanguine temperament, which is linked to too much blood. Finally, a choleric temperament is related to too much yellow bile, which is actually red in color and has the texture of foam; it is associated with being aggressive, excitable, impulsive, and also extroverted. There are numerous ways to treat a disproportion of the humors. For example, if someone was suspected to have too much blood, then the physician would perform bloodletting as a treatment. Likewise, a person who had too much phlegm would feel better after expectorating, and someone with too much yellow bile would purge. Another factor to be considered in the balance of humors is the quality of air in which one resides, such as the climate and elevation. Also, the standard of food and drink, balance of sleeping and waking, exercise and rest, retention and evacuation are important. Moods such as anger, sadness, joy, and love can affect the balance. During that time, the importance of balance was demonstrated by the fact that women lose blood monthly during menstruation, and have a lesser occurrence of gout, arthritis, and epilepsy than men do. Galen also hypothesized that there are three faculties. The natural faculty affects growth and reproduction and is produced in the liver. The animal or vital faculty controls respiration and emotion, coming from the heart. In the brain, the psychic faculty commands the senses and thought. The structure of bodily functions is related to the humors as well. Greek physicians understood that food was cooked in the stomach; this is where the nutrients are extracted. The best, most potent and pure nutrients from food are reserved for blood, which is produced in the liver and carried through veins to organs. Blood enhanced with pneuma, which means wind or breath, is carried by the arteries. The path that blood takes is as follows: venous blood passes through the vena cava and is moved into the right ventricle of the heart; then, the pulmonary artery takes it to the lungs. The pulmonary vein then mixes air from the lungs with blood to form arterial blood, which has different observable characteristics. After leaving the liver, half of the yellow bile that is produced travels to the blood, while the other half travels to the gallbladder. Similarly, half of the black bile produced gets mixed in with blood, and the other half is used by the spleen.
Women
In 1376 in Sicily, under the laws of Frederick II, which required an examination by a royal commission of physicians, the first licence to practise medicine was granted to a woman, Virdimura, a Jewish woman of Catania; the document is preserved in the Italian state archives in Palermo.
Renaissance to early modern period
16th–18th century
The Renaissance brought an intense focus on scholarship to Christian Europe. A major effort to translate the Arabic and Greek scientific works into Latin emerged. Europeans gradually became experts not only in the ancient writings of the Romans and Greeks, but in the contemporary writings of Islamic scientists. During the later centuries of the Renaissance came an increase in experimental investigation, particularly in the field of dissection and body examination, thus advancing our knowledge of human anatomy. The development of modern neurology began in the 16th century in Italy and France with Niccolò Massa, Jean Fernel, Jacques Dubois and Andreas Vesalius.
Vesalius described in detail the anatomy of the brain and other organs; he had little knowledge of the brain's function, thinking that it resided mainly in the ventricles. Over his lifetime he corrected over 200 of Galen's mistakes. Understanding of medical sciences and diagnosis improved, but with little direct benefit to health care. Few effective drugs existed, beyond opium and quinine. Folklore cures and potentially poisonous metal-based compounds were popular treatments. Independently from Ibn al-Nafis, Michael Servetus rediscovered the pulmonary circulation, but this discovery did not reach the public because it was written down for the first time in the "Manuscript of Paris" in 1546, and later published in the theological work for which he paid with his life in 1553. Later this was perfected by Renaldus Columbus and Andrea Cesalpino. In 1628 the English physician William Harvey made a ground-breaking discovery when he correctly described the circulation of the blood in his Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus. Before this time the most useful manual in medicine used both by students and expert physicians was Dioscorides' De Materia Medica, a pharmacopoeia. Bacteria and protists were first observed with a microscope by Antonie van Leeuwenhoek in 1676, initiating the scientific field of microbiology.
Paracelsus
Paracelsus (1493–1541) was an erratic and abusive innovator who rejected Galen and bookish knowledge, calling for experimental research, with heavy doses of mysticism, alchemy and magic mixed in. He rejected sacred magic (miracles) under Church auspices and looked for cures in nature. He preached, but he also pioneered the use of chemicals and minerals in medicine. His hermetical views were that sickness and health in the body relied on the harmony of man (microcosm) and Nature (macrocosm). He took an approach different from those before him, using this analogy not in the manner of soul-purification but in the manner that humans must have certain balances of minerals in their bodies, and that certain illnesses of the body had chemical remedies that could cure them. Most of his influence came after his death. Paracelsus is a highly controversial figure in the history of medicine, with most experts hailing him as a Father of Modern Medicine for shaking off religious orthodoxy and inspiring many researchers; others say he was a mystic more than a scientist and downplay his importance.
Padua and Bologna
University training of physicians began in the 13th century. The University of Padua was founded about 1220 by walkouts from the University of Bologna, and began teaching medicine in 1222. It played a leading role in the identification and treatment of diseases and ailments, specializing in autopsies and the inner workings of the body. Starting in 1595, Padua's famous anatomical theatre drew artists and scientists studying the human body during public dissections. The intensive study of Galen led to critiques of Galen modeled on his own writing, as in the first book of Vesalius's De humani corporis fabrica. Andreas Vesalius held the chair of Surgery and Anatomy (explicator chirurgiae) and in 1543 published his anatomical discoveries in De Humani Corporis Fabrica. He portrayed the human body as an interdependent system of organ groupings. The book triggered great public interest in dissections and caused many other European cities to establish anatomical theatres. At the University of Bologna the training of physicians began in 1219.
The Italian city attracted students from across Europe. Taddeo Alderotti built a tradition of medical education that established the characteristic features of Italian learned medicine and was copied by medical schools elsewhere. Turisanus (d. 1320) was his student. The curriculum was revised and strengthened in 1560–1590. A representative professor was Julius Caesar Aranzi (Arantius) (1530–1589). He became Professor of Anatomy and Surgery at the University of Bologna in 1556, where he established anatomy as a major branch of medicine for the first time. Aranzi combined anatomy with a description of pathological processes, based largely on his own research, Galen, and the work of his contemporary Italians. Aranzi discovered the 'Nodules of Aranzio' in the semilunar valves of the heart and wrote the first description of the levator palpebrae superioris and the coracobrachialis muscles. His books (in Latin) covered surgical techniques for many conditions, ranging from hydrocephalus, nasal polyp, goitre and tumours to phimosis, ascites, haemorrhoids, anal abscess and fistulae. Women Catholic women played large roles in health and healing in medieval and early modern Europe. A life as a nun was a prestigious role; wealthy families provided dowries for their daughters, and these funded the convents, while the nuns provided free nursing care for the poor. The Catholic elites provided hospital services because of their theology of salvation, which held that good works were the route to heaven. The Protestant reformers rejected the notion that rich men could gain God's grace through good works—and thereby escape purgatory—by providing cash endowments to charitable institutions. They also rejected the Catholic idea that the poor patients earned grace and salvation through their suffering. Protestants generally closed all the convents and most of the hospitals, sending women home to become housewives, often against their will. On the other hand, local officials recognized the public value of hospitals, and some were continued in Protestant lands, but without monks or nuns and under the control of local governments. In London, the crown allowed two hospitals to continue their charitable work, under nonreligious control of city officials. The convents were all shut down but Harkness finds that women—some of them former nuns—were part of a new system that delivered essential medical services to people outside their family. They were employed by parishes and hospitals, as well as by private families, and provided nursing care as well as some medical, pharmaceutical, and surgical services. Meanwhile, in Catholic lands such as France, rich families continued to fund convents and monasteries, and enrolled their daughters as nuns who provided free health services to the poor. Nursing was a religious role for the nurse, and there was little call for science. Age of Enlightenment During the Age of Enlightenment, the 18th century, science was held in high esteem and physicians upgraded their social status by becoming more scientific. The health field was crowded with self-trained barber-surgeons, apothecaries, midwives, drug peddlers, and charlatans. Across Europe medical schools relied primarily on lectures and readings. The final-year student would have limited clinical experience by trailing the professor through the wards. Laboratory work was uncommon, and dissections were rarely done because of legal restrictions on cadavers.
Most schools were small, and only Edinburgh, Scotland, with 11,000 alumni, produced large numbers of graduates. Britain In Britain, there were but three small hospitals after 1550. Pelling and Webster estimate that in London in the 1580 to 1600 period, out of a population of nearly 200,000 people, there were about 500 medical practitioners. Nurses and midwives are not included. There were about 50 physicians, 100 licensed surgeons, 100 apothecaries, and 250 additional unlicensed practitioners. In the last category about 25% were women. All across Britain—and indeed all of the world—the vast majority of the people in city, town or countryside depended for medical care on local amateurs with no professional training but with a reputation as wise healers who could diagnose problems and advise sick people what to do—and perhaps set broken bones, pull a tooth, give some traditional herbs or brews or perform a little magic to cure what ailed them. The London Dispensary opened in 1696, the first clinic in the British Empire to dispense medicines to poor sick people. The innovation was slow to catch on, but new dispensaries were opened in the 1770s. In the colonies, small hospitals opened in Philadelphia in 1752, New York in 1771, and Boston (Massachusetts General Hospital) in 1811. Guy's Hospital, the first great British hospital with a modern foundation, opened in 1721 in London, with funding from businessman Thomas Guy. It had been preceded by St Bartholomew's Hospital and St Thomas's Hospital, both medieval foundations. A bequest of £200,000 by William Hunt in 1829 funded expansion for an additional hundred beds at Guy's. Samuel Sharp (1709–78), a surgeon at Guy's Hospital from 1733 to 1757, was internationally famous; his A Treatise on the Operations of Surgery (1st ed., 1739) was the first British study focused exclusively on operative technique. English physician Thomas Percival (1740–1804) wrote a comprehensive system of medical conduct, Medical Ethics; or, a Code of Institutes and Precepts, Adapted to the Professional Conduct of Physicians and Surgeons (1803), that set the standard for many textbooks. Spain and Spanish Empire In the Spanish Empire, the viceregal capital of Mexico City was a site of medical training for physicians and the creation of hospitals. Epidemic disease had decimated indigenous populations starting with the early sixteenth-century Spanish conquest of the Aztec empire, when a black auxiliary in the armed forces of conqueror Hernán Cortés, with an active case of smallpox, set off a virgin land epidemic among indigenous peoples, Spanish allies and enemies alike. Aztec emperor Cuitlahuac died of smallpox. Disease was a significant factor in the Spanish conquest elsewhere as well. Medical education instituted at the Royal and Pontifical University of Mexico chiefly served the needs of urban elites. Male and female curanderos, or lay practitioners, attended to the ills of the popular classes. The Spanish crown began regulating the medical profession just a few years after the conquest, setting up the Royal Tribunal of the Protomedicato, a board for licensing medical personnel, in 1527. Licensing became more systematic after 1646, with physicians, druggists, surgeons, and bleeders requiring a license before they could publicly practice. Crown regulation of medical practice became more general in the Spanish empire. Elites and the popular classes alike called on divine intervention in personal and society-wide health crises, such as the epidemic of 1737.
The intervention of the Virgin of Guadalupe was depicted in a scene of dead and dying Indians, with elites on their knees praying for her aid. In the late eighteenth century, the crown began implementing secularizing policies on the Iberian peninsula and its overseas empire to control disease more systematically and scientifically. Spanish Quest for Medicinal Spices Botanical medicines also became popular during the 16th, 17th, and 18th centuries. Spanish pharmaceutical books during this time contain medicinal recipes consisting of spices, herbs, and other botanical products. For example, nutmeg oil was documented as a cure for stomach ailments, and cardamom oil was believed to relieve intestinal ailments. During the rise of the global trade market, spices, herbs and many other goods that were indigenous to different territories began to appear in locations across the globe. Herbs and spices were especially popular for their utility in cooking and medicines. As a result of this popularity and increased demand for spices, some areas in Asia, like China and Indonesia, became hubs for spice cultivation and trade. The Spanish Empire also wanted to benefit from the international spice trade, so it looked towards its American colonies. The Spanish American colonies became an area where the Spanish searched to discover new spices and indigenous American medicinal recipes. The Florentine Codex, a 16th-century ethnographic research study in Mesoamerica by the Spanish Franciscan friar Bernardino de Sahagún, is a major contribution to the history of Nahua medicine. The Spanish did discover many spices and herbs new to them, some of which were reportedly similar to Asian spices. A Spanish physician by the name of Nicolás Monardes studied many of the American spices coming into Spain. He documented many of the new American spices and their medicinal properties in his survey Historia medicinal de las cosas que se traen de nuestras Indias Occidentales. For example, Monardes describes the "Long Pepper" (Pimienta luenga), found along the coasts of the countries that are now known as Panama and Colombia, as a pepper that was more flavorful, healthy, and spicy in comparison to the Eastern black pepper. The Spanish interest in American spices can first be seen in the commissioning of the Libellus de Medicinalibus Indorum Herbis, a Spanish-American codex describing indigenous American spices and herbs and the ways that these were used in natural Aztec medicines. The codex was commissioned in the year 1552 by Francisco de Mendoza, the son of Antonio de Mendoza, who was the first Viceroy of New Spain. Francisco de Mendoza was interested in studying the properties of these herbs and spices so that he would be able to profit from the trade of these herbs and the medicines that could be produced from them. Francisco de Mendoza recruited the help of Monardes in studying the traditional medicines of the indigenous people living in what were then the Spanish colonies. Monardes researched these medicines and performed experiments to discover the possibilities of spice cultivation and medicine creation in the Spanish colonies. The Spanish transplanted some herbs from Asia, but only a few foreign crops were successfully grown in the Spanish colonies. One notable crop brought from Asia and successfully grown in the Spanish colonies was ginger, which was considered Hispaniola's number one crop at the end of the 16th century.
The Spanish Empire did profit from cultivating herbs and spices, but it also introduced pre-Columbian American medicinal knowledge to Europe. Other Europeans were inspired by the actions of Spain and decided to try to establish a botanical transplant system in colonies that they controlled; however, these subsequent attempts were not successful. 19th century: rise of modern medicine The practice of medicine changed in the face of rapid advances in science, as well as new approaches by physicians. Hospital doctors began much more systematic analysis of patients' symptoms in diagnosis. Among the more powerful new techniques were anaesthesia, and the development of both antiseptic and aseptic operating theatres. Effective cures were developed for certain endemic infectious diseases. However, the decline in many of the most lethal diseases was due more to improvements in public health and nutrition than to advances in medicine. Medicine was revolutionized in the 19th century and beyond by advances in chemistry, laboratory techniques, and equipment. Old ideas of infectious disease epidemiology were gradually replaced by advances in bacteriology and virology. Germ theory and bacteriology In the 1830s in Italy, Agostino Bassi traced the silkworm disease muscardine to microorganisms. Meanwhile, in Germany, Theodor Schwann led research on alcoholic fermentation by yeast, proposing that living microorganisms were responsible. Leading chemists, such as Justus von Liebig, seeking solely physicochemical explanations, derided this claim and alleged that Schwann was regressing to vitalism. In 1847 in Vienna, Ignaz Semmelweis (1818–1865) dramatically reduced the death rate of new mothers (due to childbed fever) by requiring physicians to clean their hands before attending childbirth, yet his principles were marginalized and attacked by professional peers. At that time most people still believed that infections were caused by foul odors called miasmas. French scientist Louis Pasteur confirmed Schwann's fermentation experiments in 1857 and afterwards supported the hypothesis that yeast were microorganisms. Moreover, he suggested that such a process might also explain contagious disease. In 1860, Pasteur's report on bacterial fermentation of butyric acid motivated fellow Frenchman Casimir Davaine to identify a similar species as the pathogen of the deadly disease anthrax. Others dismissed the organism as a mere byproduct of the disease. British surgeon Joseph Lister, however, took these findings seriously and subsequently introduced antisepsis to wound treatment in 1865. German physician Robert Koch, noting fellow German Ferdinand Cohn's report of a spore stage of a certain bacterial species, traced the life cycle of Davaine's organism, identified spores, inoculated laboratory animals with them, and reproduced anthrax—a breakthrough for experimental pathology and germ theory of disease. Pasteur's group added ecological investigations confirming spores' role in the natural setting, while Koch published a landmark treatise in 1878 on the bacterial pathology of wounds. In 1882, Koch reported discovery of the "tubercle bacillus", cementing germ theory and Koch's acclaim. Upon the outbreak of a cholera epidemic in Alexandria, Egypt, two medical missions went to investigate and attend the sick: one was sent out by Pasteur and the other was led by Koch. Koch's group returned in 1883, having successfully discovered the cholera pathogen.
In Germany, however, Koch's bacteriologists had to vie against Max von Pettenkofer, Germany's leading proponent of miasmatic theory. Pettenkofer conceded bacteria's causal involvement, but maintained that other, environmental factors were required to turn it pathogenic, and opposed water treatment as a misdirected effort amid more important ways to improve public health. The massive cholera epidemic in Hamburg in 1892 devastated Pettenkofer's position, and yielded German public health to "Koch's bacteriology". On losing the 1883 rivalry in Alexandria, Pasteur switched research direction, and introduced his third vaccine—rabies vaccine—the first vaccine for humans since Jenner's for smallpox. From across the globe, donations poured in, funding the founding of the Pasteur Institute, the world's first biomedical institute, which opened in 1888. Along with Koch's bacteriologists, Pasteur's group—which preferred the term microbiology—led medicine into the new era of "scientific medicine" founded upon bacteriology and germ theory. Building on the ideas of Jakob Henle, Koch's steps to confirm a species' pathogenicity became famed as "Koch's postulates". Although his proposed tuberculosis treatment, tuberculin, seemingly failed, it soon was used to test for infection with the involved species. In 1905, Koch was awarded the Nobel Prize in Physiology or Medicine, and remains renowned as the founder of medical microbiology. Women Women as healers Women have served as healers and midwives since ancient times. However, the professionalization of medicine forced them increasingly to the sidelines. As hospitals multiplied, they relied in Europe on orders of Roman Catholic nun-nurses, and German Protestant and Anglican deaconesses in the early 19th century. They were trained in traditional methods of physical care that involved little knowledge of medicine. The breakthrough to professionalization based on knowledge of advanced medicine was led by Florence Nightingale in England. She resolved to provide more advanced training than she saw on the Continent. At Kaiserswerth, where the first German nursing schools were founded in 1836 by Theodor Fliedner, she said, "The nursing was nil and the hygiene horrible." Britain's male doctors preferred the old system, but Nightingale won out and her Nightingale Training School opened in 1860 and became a model. The Nightingale solution depended on the patronage of upper-class women, and they proved eager to serve. Royalty became involved. In 1902 the wife of the British king took control of the nursing unit of the British army, became its president, and renamed it after herself as the Queen Alexandra's Royal Army Nursing Corps; when she died the next queen became president. Today its Colonel in Chief is Sophie, Countess of Wessex, the daughter-in-law of Queen Elizabeth II. In the United States, upper-middle-class women who already supported hospitals promoted nursing. The new profession proved highly attractive to women of all backgrounds, and schools of nursing opened in the late 19th century. They soon became a function of large hospitals, where they provided a steady stream of low-paid idealistic workers. The International Red Cross began operations in numerous countries in the late 19th century, promoting nursing as an ideal profession for middle-class women. The Nightingale model was widely copied. Linda Richards (1841–1930) studied in London and became the first professionally trained American nurse.
She established nursing training programs in the United States and Japan, and created the first system for keeping individual medical records for hospitalized patients. The Russian Orthodox Church sponsored seven orders of nursing sisters in the late 19th century. They ran hospitals, clinics, almshouses, pharmacies, and shelters as well as training schools for nurses. In the Soviet era (1917–1991), with the aristocratic sponsors gone, nursing became a low-prestige occupation based in poorly maintained hospitals. Women as physicians It was very difficult for women to become doctors in any field before the 1970s. Elizabeth Blackwell (1821–1910) became the first woman to formally study and practice medicine in the United States. She was a leader in women's medical education. While Blackwell viewed medicine as a means for social and moral reform, her student Mary Putnam Jacobi (1842–1906) focused on curing disease. At a deeper level of disagreement, Blackwell felt that women would succeed in medicine because of their humane female values, but Jacobi believed that women should participate as the equals of men in all medical specialties using identical methods, values and insights. In the Soviet Union although the majority of medical doctors were women, they were paid less than the mostly male factory workers. Paris Paris (France) and Vienna were the two leading medical centers on the Continent in the era 1750–1914. In the 1770s–1850s Paris became a world center of medical research and teaching. The "Paris School" emphasized that teaching and research should be based in large hospitals and promoted the professionalization of the medical profession and the emphasis on sanitation and public health. A major reformer was Jean-Antoine Chaptal (1756–1832), a physician who was Minister of Internal Affairs. He created the Paris Hospital, health councils, and other bodies. Louis Pasteur (1822–1895) was one of the most important founders of medical microbiology. He is remembered for his remarkable breakthroughs in the causes and preventions of diseases. His discoveries reduced mortality from puerperal fever, and he created the first vaccines for rabies and anthrax. His experiments supported the germ theory of disease. He was best known to the general public for inventing a method to treat milk and wine in order to prevent it from causing sickness, a process that came to be called pasteurization. He is regarded as one of the three main founders of microbiology, together with Ferdinand Cohn and Robert Koch. He worked chiefly in Paris and in 1887 founded the Pasteur Institute there to perpetuate his commitment to basic research and its practical applications. As soon as his institute was created, Pasteur brought together scientists with various specialties. The first five departments were directed by Emile Duclaux (general microbiology research) and Charles Chamberland (microbe research applied to hygiene), as well as a biologist, Ilya Ilyich Mechnikov (morphological microbe research) and two physicians, Jacques-Joseph Grancher (rabies) and Emile Roux (technical microbe research). One year after the inauguration of the Institut Pasteur, Roux set up the first course of microbiology ever taught in the world, then entitled Cours de Microbie Technique (Course of microbe research techniques). It became the model for numerous research centers around the world named "Pasteur Institutes." 
Vienna The First Viennese School of Medicine, 1750–1800, was led by the Dutchman Gerard van Swieten (1700–1772), who aimed to put medicine on new scientific foundations—promoting unprejudiced clinical observation, botanical and chemical research, and introducing simple but powerful remedies. When the Vienna General Hospital opened in 1784, it at once became the world's largest hospital and physicians acquired a facility that gradually developed into the most important research centre. Progress ended with the Napoleonic wars and the government shutdown in 1819 of all liberal journals and schools; this caused a general return to traditionalism and eclecticism in medicine. Vienna was the capital of a diverse empire and attracted not just Germans but Czechs, Hungarians, Jews, Poles and others to its world-class medical facilities. After 1820 the Second Viennese School of Medicine emerged with the contributions of physicians such as Carl Freiherr von Rokitansky, Josef Škoda, Ferdinand Ritter von Hebra, and Ignaz Philipp Semmelweis. Basic medical science expanded and specialization advanced. Furthermore, the first dermatology, eye, as well as ear, nose, and throat clinics in the world were founded in Vienna. The textbook of ophthalmologist Georg Joseph Beer (1763–1821), Lehre von den Augenkrankheiten, combined practical research and philosophical speculations, and became the standard reference work for decades. Berlin After 1871 Berlin, the capital of the new German Empire, became a leading center for medical research. Robert Koch (1843–1910) was a representative leader. He became famous for isolating Bacillus anthracis (1877), the tuberculosis bacillus (1882) and Vibrio cholerae (1883) and for his development of Koch's postulates. He was awarded the Nobel Prize in Physiology or Medicine in 1905 for his tuberculosis findings. Koch is one of the founders of microbiology, inspiring such major figures as Paul Ehrlich and Gerhard Domagk. U.S. Civil War In the American Civil War (1861–65), as was typical of the 19th century, more soldiers died of disease than in battle, and even larger numbers were temporarily incapacitated by wounds, disease and accidents. Conditions were poor in the Confederacy, where doctors and medical supplies were in short supply. The war had a dramatic long-term impact on medicine in the U.S., from surgical technique to hospitals to nursing and to research facilities. Weapon development, particularly the appearance of the Springfield Model 1861, which was mass-produced and much more accurate than muskets, led to generals underestimating the risks of long-range rifle fire; these risks were exemplified in the death of John Sedgwick and the disastrous Pickett's Charge. The rifles could shatter bone, forcing amputation, and longer ranges meant casualties were sometimes not quickly found. Evacuation of the wounded from the Second Battle of Bull Run took a week. As in earlier wars, untreated casualties sometimes survived unexpectedly due to maggots debriding the wound, an observation which led to the surgical use of maggots, still a useful method in the absence of effective antibiotics. The hygiene of the training and field camps was poor, especially at the beginning of the war when men who had seldom been far from home were brought together for training with thousands of strangers. First came epidemics of the childhood diseases of chicken pox, mumps, whooping cough, and, especially, measles.
Operations in the South meant a dangerous and new disease environment, bringing diarrhea, dysentery, typhoid fever, and malaria. There were no antibiotics, so the surgeons prescribed coffee, whiskey, and quinine. Harsh weather, bad water, inadequate shelter in winter quarters, poor policing of camps, and dirty camp hospitals took their toll. This was a common scenario in wars from time immemorial, and conditions faced by the Confederate army were even worse. The Union responded by building army hospitals in every state. What was different in the Union was the emergence of skilled, well-funded medical organizers who took proactive action, especially in the much enlarged United States Army Medical Department, and the United States Sanitary Commission, a new private agency. Numerous other new agencies also targeted the medical and morale needs of soldiers, including the United States Christian Commission as well as smaller private agencies. The U.S. Army learned many lessons and in August 1886, it established the Hospital Corps. Statistical methods A major breakthrough in epidemiology came with the introduction of statistical maps and graphs. They allowed careful analysis of seasonality issues in disease incidents, and the maps allowed public health officials to identify critical loci for the dissemination of disease. John Snow in London developed the methods. In 1849, he observed that the symptoms of cholera, which had already claimed around 500 lives within a month, were vomiting and diarrhoea. He concluded that the source of contamination must be through ingestion, rather than inhalation as was previously thought. It was this insight that resulted in the removal of The Pump On Broad Street, after which deaths from cholera plummeted afterwards. English nurse Florence Nightingale pioneered analysis of large amounts of statistical data, using graphs and tables, regarding the condition of thousands of patients in the Crimean War to evaluate the efficacy of hospital services. Her methods proved convincing and led to reforms in military and civilian hospitals, usually with the full support of the government. By the late 19th and early 20th century English statisticians led by Francis Galton, Karl Pearson and Ronald Fisher developed the mathematical tools such as correlations and hypothesis tests that made possible much more sophisticated analysis of statistical data. During the U.S. Civil War the Sanitary Commission collected enormous amounts of statistical data, and opened up the problems of storing information for fast access and mechanically searching for data patterns. The pioneer was John Shaw Billings (1838–1913). A senior surgeon in the war, Billings built the Library of the Surgeon General's Office (now the National Library of Medicine), the centerpiece of modern medical information systems. Billings figured out how to mechanically analyze medical and demographic data by turning facts into numbers and punching the numbers onto cardboard cards that could be sorted and counted by machine. The applications were developed by his assistant Herman Hollerith; Hollerith invented the punch card and counter-sorter system that dominated statistical data manipulation until the 1970s. Hollerith's company became International Business Machines (IBM) in 1911. Worldwide dissemination United States Johns Hopkins Hospital, founded in 1889, originated several modern medical practices, including residency and rounds. 
Japan European ideas of modern medicine were spread widely through the world by medical missionaries, and the dissemination of textbooks. Japanese elites enthusiastically embraced Western medicine after the Meiji Restoration of the 1860s. However, they had been prepared by their knowledge of Dutch and German medicine, for they had some contact with Europe through the Dutch. Highly influential on Japanese obstetrics was the 1765 edition of Hendrik van Deventer's pioneering work Nieuw Ligt ("A New Light"), especially through Katakura Kakuryo's publication in 1799 of Sanka Hatsumo ("Enlightenment of Obstetrics"). A cadre of Japanese physicians began to interact with Dutch doctors, who introduced smallpox vaccinations. By 1820 Japanese ranpô medical practitioners not only translated Dutch medical texts, they integrated their readings with clinical diagnoses. These men became leaders of the modernization of medicine in their country. They broke from Japanese traditions of closed medical fraternities and adopted the European approach of an open community of collaboration based on expertise in the latest scientific methods. Kitasato Shibasaburō (1853–1931) studied bacteriology in Germany under Robert Koch. In 1891 he founded the Institute of Infectious Diseases in Tokyo, which introduced the study of bacteriology to Japan. He and French researcher Alexandre Yersin went to Hong Kong in 1894, where Kitasato confirmed Yersin's discovery that the bacterium Yersinia pestis is the agent of the plague. In 1897 he isolated and described the organism that caused dysentery. He became the first dean of medicine at Keio University, and the first president of the Japan Medical Association. Japanese physicians immediately recognized the value of X-rays. They were able to purchase the equipment locally from the Shimadzu Company, which developed, manufactured, marketed, and distributed X-ray machines after 1900. Japan not only adopted German methods of public health in the home islands, but implemented them in its colonies, especially Korea and Taiwan, and after 1931 in Manchuria. A heavy investment in sanitation resulted in a dramatic increase in life expectancy.
By the 1830s, moral treatment, together with the asylum itself, became increasingly medicalised and asylum doctors began to establish a distinct medical identity with the establishment in the 1840s of associations for their members in France, Germany, the United Kingdom and America, together with the founding of medico-psychological journals. Medical optimism in the capacity of the asylum to cure insanity soured by the close of the nineteenth century as the growth of the asylum population far outstripped that of the general population. Processes of long-term institutional segregation, allowing for the psychiatric conceptualisation of the natural course of mental illness, supported the perspective that the insane were a distinct population, subject to mental pathologies stemming from specific medical causes. As degeneration theory grew in influence from the mid-nineteenth century, heredity was seen as the central causal element in chronic mental illness, and, with national asylum systems overcrowded and insanity apparently undergoing an inexorable rise, the focus of psychiatric therapeutics shifted from a concern with treating the individual to maintaining the racial and biological health of national populations. Emil Kraepelin (1856–1926) introduced new medical categories of mental illness, which eventually came into psychiatric usage despite their basis in behavior rather than pathology or underlying cause. Shell shock among frontline soldiers exposed to heavy artillery bombardment was first diagnosed by British Army doctors in 1915. By 1916, similar symptoms were also noted in soldiers not exposed to explosive shocks, leading to questions as to whether the disorder was physical or psychiatric. In the 1920s surrealist opposition to psychiatry was expressed in a number of surrealist publications. In the 1930s several controversial medical practices were introduced including inducing seizures (by electroshock, insulin or other drugs) or cutting parts of the brain apart (leucotomy or lobotomy). Both came into widespread use by psychiatry, but there were grave concerns and much opposition on grounds of basic morality, harmful effects, or misuse. In the 1950s new psychiatric drugs, notably the antipsychotic chlorpromazine, were designed in laboratories and slowly came into preferred use. Although often accepted as an advance in some ways, there was some opposition, due to serious adverse effects such as tardive dyskinesia. Patients often opposed psychiatry and refused or stopped taking the drugs when not subject to psychiatric control. There was also increasing opposition to the use of psychiatric hospitals, and attempts to move people back into the community on a collaborative user-led group approach ("therapeutic communities") not controlled by psychiatry. Campaigns against masturbation were done in the Victorian era and elsewhere. Lobotomy was used until the 1970s to treat schizophrenia. This was denounced by the anti-psychiatric movement in the 1960s and later. 20th century and beyond Twentieth-century warfare and medicine The ABO blood group system was discovered in 1901, and the Rhesus blood group system in 1937, facilitating blood transfusion.
the manor of Ham, north of the present-day Devonport Dockyard. The name evidently later came to be used for the estuary's main channel. The ose element possibly derives from Old English meaning 'mud' (as in 'ooze') – the creek consisting of mud-banks at low tide. The Hamoaze flows past Devonport Dockyard, which is one of three major bases of the Royal Navy today. The presence of large numbers of small watercraft is a challenge and hazard to the warships using the naval base and dockyard. Navigation on the waterway is controlled by the Queen's Harbour Master for Plymouth. Settlements on the banks of the Hamoaze are
Saltash, Wilcove, Torpoint and Cremyll in Cornwall, as well as Devonport and Plymouth in Devon. Two regular ferry services crossing the Hamoaze exist: the Torpoint Ferry (a chain ferry that takes vehicles) and the Cremyll Ferry (passengers and cyclists only). The Hamoaze has a street in Torpoint named after it. See
Calenberg, moved his residence to Hanover. The Dukes of Brunswick-Lüneburg were elevated by the Holy Roman Emperor to the rank of Prince-Elector in 1692 and this elevation was confirmed by the Imperial Diet in 1708. Thus the principality was upgraded to the Electorate of Brunswick-Lüneburg, colloquially known as the Electorate of Hanover after Calenberg's capital (see also: House of Hanover). Its Electors later become monarchs of Great Britain (and from 1801 of the United Kingdom of Great Britain and Ireland). The first of these was George I Louis, who acceded to the British throne in 1714. The last British monarch who reigned in Hanover was William IV. Semi-Salic law, which required succession by the male line if possible, forbade the accession of Queen Victoria in Hanover. As a male-line descendant of George I, Queen Victoria was herself a member of the House of Hanover. Her descendants, however, bore her husband's titular name of Saxe-Coburg-Gotha. Three kings of Great Britain, or the United Kingdom, were concurrently also Electoral Princes of Hanover. During the time of the personal union of the crowns of the United Kingdom and Hanover (1714–1837) the monarchs rarely visited the city. In fact during the reigns of the final three joint rulers (1760–1837) there was only one short visit, by George IV in 1821. From 1816 to 1837 Viceroy Adolphus represented the monarch in Hanover. During the Seven Years' War the Battle of Hastenbeck was fought near the city on 26 July 1757. The French army defeated the Hanoverian Army of Observation, leading to the city's occupation as part of the Invasion of Hanover. It was recaptured by Anglo-German forces led by Ferdinand of Brunswick the following year. 19th century After Napoleon imposed the Convention of Artlenburg (Convention of the Elbe) on July 5, 1803, about 35,000 French soldiers occupied Hanover. The Convention also required disbanding the army of Hanover. However, George III did not recognise the Convention of the Elbe. This resulted in a great number of soldiers from Hanover eventually emigrating to Great Britain, where the King's German Legion was formed. It was only troops from Hanover and Brunswick that consistently opposed France throughout the entire Napoleonic wars. The Legion later played an important role in the Peninsular War and the Battle of Waterloo in 1815. In 1814 the electorate became the Kingdom of Hanover. In 1837, the personal union of the United Kingdom and Hanover ended because William IV's heir in the United Kingdom was female (Queen Victoria). Hanover could be inherited only by male heirs. Thus, Hanover passed to William IV's brother, Ernest Augustus, and remained a kingdom until 1866, when it was annexed by Prussia during the Austro-Prussian war. Despite Hanover being expected to defeat Prussia at the Battle of Langensalza, Prussia employed Moltke the Elder's Kesselschlacht order of battle to instead destroy the Hanoverian army. The city of Hanover became the capital of the Prussian Province of Hanover. In 1842 the first horse railway was inaugurated, and from 1893 an electric tram was installed. In 1887 Hanover's Emile Berliner invented the record and the gramophone. Nazi Germany After 1937 the lord mayor and the state commissioners of Hanover were members of the NSDAP (Nazi party). A large Jewish population then existed in Hanover. In October 1938, 484 Hanoverian Jews of Polish origin were expelled to Poland, including the Grynszpan family. 
However, Poland refused to accept them, leaving them stranded at the border with thousands of other Polish-Jewish deportees, fed only intermittently by the Polish Red Cross and Jewish welfare organisations. The Grynszpans' son Herschel Grynszpan was in Paris at the time. When he learned of what was happening, he drove to the German embassy in Paris and shot the German diplomat Eduard Ernst vom Rath, who died shortly afterwards. The Nazis took this act as a pretext to stage a nationwide pogrom known as Kristallnacht (9 November 1938). On that day, the synagogue of Hanover, designed in 1870 by Edwin Oppler in neo-romantic style, was burnt by the Nazis. In September 1941, through the "Action Lauterbacher" plan, a ghettoisation of the remaining Hanoverian Jewish families began. Even before the Wannsee Conference, on 15 December 1941, the first Jews from Hanover were deported to Riga. A total of 2,400 people were deported, and very few survived. During the war seven concentration camps were constructed in Hanover, in which many Jews were confined. Of the approximately 4,800 Jews who had lived in Hannover in 1938, fewer than 100 were still in the city when troops of the United States Army arrived on 10 April 1945 to occupy Hanover at the end of the war. Today, a memorial at the Opera Square is a reminder of the persecution of the Jews in Hanover. After the war a large group of Orthodox Jewish survivors of the nearby Bergen-Belsen concentration camp settled in Hanover. World War II As an important railway and road junction and production centre, Hanover was a major target for strategic bombing during World War II, including the Oil Campaign. Targets included the AFA (Stöcken), the Deurag-Nerag refinery (Misburg), the Continental plants (Vahrenwald and Limmer), the United light metal works (VLW) in Ricklingen and Laatzen (today Hanover fairground), the Hanover/Limmer rubber reclamation plant, the Hanomag factory (Linden) and the tank factory M.N.H. Maschinenfabrik Niedersachsen (Badenstedt). Residential areas were also targeted, and more than 6,000 civilians were killed by the Allied bombing raids. More than 90% of the city centre was destroyed in a total of 88 bombing raids. After the war, the Aegidienkirche was not rebuilt and its ruins were left as a war memorial. The Allied ground advance into Germany reached Hanover in April 1945. The US 84th Infantry Division captured the city on 10 April 1945. Hanover was in the British zone of occupation of Germany and became part of the new state (Land) of Lower Saxony in 1946. Today Hanover is a Vice-President City of Mayors for Peace, an international mayoral organisation mobilising cities and citizens worldwide to abolish and eliminate nuclear weapons by the year 2020. Population development Geography Climate Hanover has an oceanic climate (Köppen: Cfb) independent of the isotherm. Although the city is not on a coastal location, the predominant air masses are still from the ocean, unlike other places further east or south-central Germany. Subdivisions The city of Hanover is divided into 13 boroughs (Stadtbezirke) and 53 quarters (Stadtteile). 
Boroughs Mitte Vahrenwald-List Bothfeld-Vahrenheide Buchholz-Kleefeld Misburg-Anderten Kirchrode-Bemerode-Wülferode Südstadt-Bult Döhren-Wülfel Ricklingen Linden-Limmer Ahlem-Badenstedt-Davenstedt Herrenhausen-Stöcken Nord Quarters A selection of the 53 quarters: Nordstadt Südstadt Oststadt Zoo (for the zoo itself, see Hanover Zoo) Herrenhausen Waldheim Main sights One of Hanover's sights is the Royal Gardens of Herrenhausen. Its Great Garden is an important European baroque garden. The palace itself was largely destroyed by Allied bombing but was reconstructed and reopened in 2013. Among the points of interest is the Grotto. Its interior was designed by the French artist Niki de Saint Phalle. The Great Garden consists of several parts and contains Europe's highest garden fountain. The historic Garden Theatre hosted the musicals of the German rock musician Heinz Rudolf Kunze. Also at Herrenhausen, the Berggarten is a botanical garden with the most varied collection of orchids in Europe. Some points of interest are the Tropical House, the Cactus House, the Canary House and the Orchid House, and free-flying birds and butterflies. Near the entrance to the Berggarten is the historic Library Pavillon. The Mausoleum of the Guelphs is also located in the Berggarten. Like the Great Garden, the Berggarten also consists of several parts, for example the Paradies and the Prairie Garden. The Georgengarten is an English landscape garden. The Leibniz Temple and the Georgen Palace are two points of interest there. The landmark of Hanover is the New Town Hall (Neues Rathaus). Inside the building are four scale models of the city. A diagonal/arch elevator, unique in the world, goes up the large dome at a 17-degree angle to an observation deck. The Hanover Zoo received the Park Scout Award for the fourth year running in 2009/10, placing it among the best zoos in Germany. The zoo consists of several theme areas: Sambesi, Meyers Farm, Gorilla-Mountain, Jungle-Palace, and Mullewapp. Some smaller areas are Australia, the wooded area for wolves, and the so-called swimming area with many seabirds. There is also a tropical house, a jungle house, and a show arena. The new Canadian-themed area, Yukon Bay, opened in 2010. In 2010 the Hanover Zoo had over 1.6 million visitors. There is also the Sea Life Centre Hanover, which is the first tropical aquarium in Germany. Another point of interest is the Old Town. In the centre are the large Marktkirche (Church St. Georgii et Jacobi, preaching venue of the bishop of the Lutheran Landeskirche Hannovers) and the Old Town Hall. Nearby are the Leibniz House, the Nolte House, and the Beguine Tower. The Kreuz-Church-Quarter around the Kreuz Church contains many little lanes. Nearby is the old royal sports hall, now called the Ballhof theatre. On the edge of the Old Town are the Market Hall, the Leine Palace, and the ruin of the Aegidien Church, which is now a monument to the victims of war and violence. Through the Marstall Gate the bank of the river Leine can be reached; the Nanas of Niki de Saint Phalle are located here. They are part of the Mile of Sculptures, which starts from Trammplatz, leads along the river bank, crosses Königsworther Square, and ends at the entrance of the Georgengarten. Near the Old Town is the district of Calenberger Neustadt, where the Catholic Basilica Minor of St. Clemens, the Reformed Church and the Lutheran Neustädter Hof- und Stadtkirche St. Johannis are located.
Some other popular sights are the Waterloo Column, the Laves House, the Wangenheim Palace, the Lower Saxony State Archives, the Hanover Playhouse, the Kröpcke Clock, the Anzeiger Tower Block, the Administration Building of the NORD/LB, the Cupola Hall of the Congress Centre, the Lower Saxony Stock, the Ministry of Finance, the Garten Church, the Luther Church, the Gehry Tower (designed by the American architect Frank O. Gehry), the specially designed Bus Stops, the Opera House, the Central Station, the Maschsee lake and the city forest Eilenriede, which is one of the largest of its kind in Europe. With around 40 parks, forests and gardens, a couple of lakes, two rivers and one canal, Hanover offers a large variety of leisure activities. Since 2007 the historic Leibniz Letters, which can be viewed in the Gottfried Wilhelm Leibniz Library, have been on UNESCO's Memory of the World Register. Outside the city centre is the EXPO-Park, the former site of EXPO 2000. Some points of interest are the Planet M., the former German Pavillon, some nations' vacant pavilions, the Expowale, the EXPO-Plaza and the EXPO-Gardens (Parc Agricole, EXPO-Park South and the Gardens of change). The fairground can be reached by the Exponale, one of the largest pedestrian bridges in Europe. The Hanover fairground is the largest exhibition centre in the world. It provides covered indoor space and open-air space in 27 halls and pavilions. Many of the Exhibition Centre's halls are architectural highlights. Furthermore, it offers the Convention Center with its 35 function rooms, glassed-in areas between halls, grassy park-like recreation zones and its own heliport. Two important sights on the fairground are the Hermes Tower and the EXPO Roof, the largest wooden roof in the world. In the district of Anderten is the European Cheese Centre, the only Cheese Experience Centre in Europe. Another tourist sight in Anderten is the Hindenburg Lock, which was the biggest lock in Europe at the time of its construction in 1928. The Tiergarten (literally the "animals' garden") in the district of Kirchrode is a large forest originally used for deer and other game for the king's table. In the district of Groß-Buchholz the Telemax is located, which is the tallest building in Lower Saxony and the highest television tower in Northern Germany. Some other notable towers are the VW-Tower in the city centre and the old towers of the former medieval defence belt: the Döhrener Tower, the Lister Tower and the Horse Tower. The 36 most important sights of the city centre are connected with a red line, which is painted on the pavement. This so-called Red Thread marks out a walk that starts at the Tourist Information Office and ends on the Ernst-August-Square in front of the central station. There is also a guided sightseeing-bus tour through the city. Society and culture Religious life Hanover is the headquarters for several Protestant organizations, including the World Communion of Reformed Churches, the Evangelical Church in Germany, the Reformed Alliance, the United Evangelical Lutheran Church of Germany, and the Independent Evangelical-Lutheran Church. In 2015, 31.1% of the population were Protestant and 13.4% were Roman Catholic. The majority, 55.5%, were irreligious or belonged to other faiths. Museums and galleries The Historisches Museum Hannover (Historic museum) describes the history of Hanover, from the medieval settlement "Honovere" to the city of today.
The museum focuses on the period from 1714 to 1834, when Hanover had a strong relationship with the British royal house. With more than 4,000 members, the Kestnergesellschaft is the largest art society in Germany. The museum hosts exhibitions from classical modernist art to contemporary art. Emphasis is placed on film, video, contemporary music and architecture, room installations and presentations of contemporary paintings, sculptures and video art. The Kestner-Museum is located in the House of 5,000 windows. The museum is named after August Kestner and exhibits 6,000 years of applied art in four areas: ancient cultures, ancient Egypt, applied art and a valuable collection of historic coins. The KUBUS is a forum for contemporary art. It features mostly exhibitions and projects of artists from Hanover. The Kunstverein Hannover (Art Society Hanover) shows contemporary art and was established in 1832 as one of the first art societies
in Germany. It is located in the Künstlerhaus (House of artists). There are around seven international exhibitions each year. The Landesmuseum Hannover is the largest museum in Hanover. The art gallery shows European art from the 11th to the 20th century, the nature department shows zoology, geology and botany and includes a vivarium with fish, insects, reptiles and amphibians. The primeval department shows the primeval history of Lower Saxony, and the folklore department shows cultures from all over the world. The Sprengel Museum shows the art of the 20th century. It is one of the most notable art museums in Germany. The focus is on classical modernist art, with the collection of Kurt Schwitters, works of German expressionism and French cubism, the cabinet of abstract art, the graphics collection and the department of photography and media. Furthermore, the museum shows the works of the French artist Niki de Saint-Phalle. The Theatre Museum shows an exhibition of the history of the theatre in Hanover from the 17th century up to now: opera, concert, drama and ballet. The museum also hosts several touring exhibitions during the year. The Wilhelm Busch Museum is the German Museum of Caricature and Critical Graphic Arts. The collection of the works of Wilhelm Busch and the extensive collection of cartoons and critical graphics is unique in Germany. Furthermore, the museum hosts several exhibitions of national and international artists during the year. The Münzkabinett der TUI-AG is a cabinet of coins. The Polizeigeschichtliche Sammlung Niedersachsen is the largest police museum in Germany. Textiles from all over the world can be seen in the Museum for textile art. The EXPOseeum is the museum of the world exhibition "EXPO 2000 Hannover". Carpets and objects from the Orient can be seen in the Oriental Carpet Museum. The Museum for the visually impaired is a rarity in Germany; there is only one other of its kind, in Berlin. The Museum of veterinary medicine is unique in Germany. The Museum for Energy History describes the 150-year history of the application of energy. The Heimat-Museum Ahlem shows the history of the district of Ahlem. The Mahn- und Gedenkstätte Ahlem describes the history of the Jewish people in Hanover, and the Stiftung Ahlers Pro Arte / Kestner Pro Arte shows modern art. Modern art is also the main topic of the Kunsthalle Faust, the Nord/LB Art Gallery and the Foro Artistico / Eisfabrik. Some leading art events in Hanover are the Long Night of the Museums and the Zinnober Kunstvolkslauf, which features all the galleries in Hanover.
People who are interested in astronomy should visit the Observatory Geschwister Herschel on the Lindener Mountain or the small planetarium inside the Bismarck School. Theatre, cabaret and musical Around 40 theatres are located in Hanover. The Opera House, the Schauspielhaus (Play House), the Ballhof eins, the Ballhof zwei and the Cumberlandsche Galerie belong to the Lower Saxony State Theatre. The Theater am Aegi is Hanover's principal theatre for musicals, shows and guest performances. The Neues Theater (New Theatre) is the boulevard theatre of Hanover. The Theater für Niedersachsen is another large theatre in Hanover, which also has its own musical company. Some of the most important musical productions are the rock musicals of the German rock musician Heinz Rudolf Kunze, which take place at the Garden-Theatre in the Great Garden. Some important theatre events are the Tanztheater International, the Long Night of the Theatres, the Festival Theaterformen and the International Competition for Choreographers. Hanover's leading cabaret stage is the GOP Variety theatre, which is located in the Georgs Palace. Some other cabaret stages are the Variety Marlene, the Uhu-Theatre, the theatre Die Hinterbühne, the Rampenlicht Variety and the revue stage TAK. The most important cabaret event is the Kleines Fest im Großen Garten (Little Festival in the Great Garden), which is the most successful cabaret festival in Germany. It features artists from around the world. Some other important events are the Calenberger Cabaret Weeks, the Hanover Cabaret Festival and the Wintervariety. Music Classical music Hanover has two symphony orchestras: the Lower Saxon State Orchestra Hanover and the NDR Radiophilharmonie (North German Radio Philharmonic Orchestra). Two notable choirs have their homes in Hanover: the Mädchenchor Hannover (girls' choir) and the Knabenchor Hannover (boys' choir). There are two major international competitions for classical music in Hanover: the Hanover International Violin Competition (held since 1991) and the Classica Nova International Music Competition (held in 1997; the non-profit association Classica Nova exists in Hanover with the aim of continuing the competition). Popular music The rock bands Scorpions and Fury in the Slaughterhouse are originally from Hanover. Acclaimed DJ Mousse T also has his main recording studio in the area. Rick J. Jordan, a member of the band Scooter, was born here in 1968. The 2010 Eurovision Song Contest winner, Lena, is also from Hanover. Sport Hannover 96 (nicknamed Die Roten or 'The Reds') is the top local football team and currently plays in the 2. Bundesliga. Home games are played at the HDI-Arena, which hosted matches in the 1974 and 2006 World Cups and at Euro 1988. Their reserve team Hannover 96 II plays in the fourth league. Their home games were played in the traditional Eilenriedestadion until they moved to the HDI-Arena due to DFL directives. Arminia Hannover is another traditional football club in Hanover that played in the second division (then 2. Liga Nord) for years and now plays in the Niedersachsen-West Liga (Lower Saxony League West). Home matches are played in the Rudolf-Kalweit-Stadium. The Hannover Indians are the local ice hockey team. They play in the third tier. Their home games are played at the traditional Eisstadion am Pferdeturm. The Hannover Scorpions played in Hanover in Germany's top league until 2013, when they sold their license and moved to Langenhagen. Hanover was one of the rugby union capitals in Germany. 
The first German rugby team was founded in Hanover in 1878. Hanover-based teams dominated the German rugby scene for a long time. DRC Hannover plays in the first division, and SV Odin von 1905 as well as SG 78/08 Hannover play in the second division. The first German fencing club was founded in Hanover in 1862. Today there are three additional fencing clubs in Hanover. The Hannover Korbjäger are the city's top basketball team. They play their home games at the IGS Linden. Hanover is a centre for water sports. Thanks to the Maschsee lake, the rivers Ihme and Leine and to the Mittellandkanal channel, Hanover hosts sailing schools, yacht schools, waterski clubs, rowing clubs, canoe clubs and paddle clubs. The water polo team WASPO W98 plays in the first division. The Hannover Regents play in the third Bundesliga (baseball) division. The Hannover Grizzlies, Armina Spartans and Hannover Stampeders are the local American football teams. The Hannover Marathon is the biggest running event in Hanover with more than 11,000 participants and usually around 200.000 spectators. Some other important running events are the Gilde Stadtstaffel (relay), the Sport-Check Nachtlauf (night-running), the Herrenhäuser Team-Challenge, the Hannoversche Firmenlauf (company running) and the Silvesterlauf (sylvester running). Hanover also hosts an important international cycle race: The Nacht von Hannover (night
the first color handheld console ever made, as well as the first with a backlit screen. It also features networking support with up to 17 other players, and advanced hardware that allows the zooming and scaling of sprites. The Lynx can also be turned upside down to accommodate left-handed players. However, all these features came at a very high price point, which drove consumers to seek cheaper alternatives. The Lynx is also very unwieldy, consumes batteries very quickly, and lacked the third-party support enjoyed by its competitors. Due to its high price, short battery life, production shortages, a dearth of compelling games, and Nintendo's aggressive marketing campaign, and despite a redesign in 1991, the Lynx became a commercial failure. Despite this, companies like Telegames helped to keep the system alive long past its commercial relevance, and when new owner Hasbro released the development rights to the public domain, independent developers like Songbird managed to release new commercial games for the system every year until 2004's Winter Games. TurboExpress The TurboExpress is a portable version of the TurboGrafx, released in 1990 for $249.99. Its Japanese equivalent is the PC Engine GT. It is the most advanced handheld of its time and can play all the TurboGrafx-16's games (which are on small, credit-card-sized media called HuCards). It has a 66 mm (2.6 in.) screen, the same as the original Game Boy, but with a much higher resolution, and can display 64 sprites at once, 16 per scanline, in 512 colors, although the hardware can only handle 481 simultaneous colors. It has 8 kilobytes of RAM. The TurboExpress runs the HuC6280 CPU at 1.79 or 7.16 MHz. The optional "TurboVision" TV tuner includes RCA audio/video input, allowing users to use the TurboExpress as a video monitor. The "TurboLink" allowed two-player play. Falcon, a flight simulator, included a "head-to-head" dogfight mode that can only be accessed via TurboLink. However, very few TG-16 games offered co-op play modes especially designed with the TurboExpress in mind. Bitcorp Gamate The Bitcorp Gamate is one of the first handheld game systems created in response to the Nintendo Game Boy. It was released in Asia in 1990 and distributed worldwide by 1991. Like the Sega Game Gear, it was horizontal in orientation and, like the Game Boy, required 4 AA batteries. Unlike many later Game Boy clones, its internal components were professionally assembled (no "glop-top" chips). Unfortunately, the system's fatal flaw is its screen. Even by the standards of the day, the screen is rather difficult to use, suffering from the same ghosting problems that were common complaints about first-generation Game Boys. Likely because of this, sales were quite poor, and Bitcorp closed by 1992. However, new games continued to be published for the Asian market, possibly as late as 1994. The total number of games released for the system remains unknown. Gamate games were designed for stereo sound, but the console is only equipped with a mono speaker. Sega Game Gear The Game Gear is the third color handheld console, after the Lynx and the TurboExpress, and was produced by Sega. Released in Japan in 1990 and in North America and Europe in 1991, it is based on the Master System, which gave Sega the ability to quickly create Game Gear games from its large library of Master System titles. While never reaching the level of success enjoyed by Nintendo, the Game Gear proved to be a fairly durable competitor, lasting longer than any other Game Boy rival. 
While the Game Gear is most frequently seen in black or navy blue, it was also released in a variety of additional colors: red, light blue, yellow, clear, and violet. All of these variations were released in small quantities and frequently only in the Asian market. Following Sega's success with the Game Gear, they began development on a successor during the early 1990s, which was intended to feature a touchscreen interface, many years before the Nintendo DS. However, such a technology was very expensive at the time, and the handheld itself was estimated to have cost around $289 were it to be released. Sega eventually chose to shelve the idea and instead release the Genesis Nomad, a handheld version of the Genesis, as the successor. Watara Supervision The Watara Supervision was released in 1992 in an attempt to compete with the Nintendo Game Boy. The first model was designed very much like a Game Boy, but it is grey in color and has a slightly larger screen. The second model was made with a hinge across the center and can be bent slightly to provide greater comfort for the user. While the system did enjoy a modest degree of success, it never impacted the sales of Nintendo or Sega. The Supervision was redesigned a final time as "The Magnum". Released in limited quantities it was roughly equivalent to the Game Boy Pocket. It was available in three colors: yellow, green and grey. Watara designed many of the games themselves, but did receive some third party support, most notably from Sachen. A TV adapter was available in both PAL and NTSC formats that could transfer the Supervision's black-and-white palette to 4 colors, similar in some regards to the Super Game Boy from Nintendo. Hartung Game Master The Hartung Game Master is an obscure handheld released at an unknown point in the early 1990s. Its graphics fidelity was much lower than most of its contemporaries, displaying just 64x64 pixels. It was available in black, white, and purple, and was frequently rebranded by its distributors, such as Delplay, Videojet and Systema. The exact number of games released is not known, but is likely around 20. The system most frequently turns up in Europe and Australia. Late 1990s By this time, the lack of significant development in Nintendo's product line began allowing more advanced systems such as the Neo Geo Pocket Color and the WonderSwan Color to be developed. Sega Nomad The Nomad was released in October 1995 in North America only. The release was five years into the market span of the Genesis, with an existing library of more than 500 Genesis games. According to former Sega of America research and development head Joe Miller, the Nomad was not intended to be the Game Gear's replacement; he believed that there was little planning from Sega of Japan for the new handheld. Sega was supporting five different consoles: Saturn, Genesis, Game Gear, Pico, and the Master System, as well as the Sega CD and 32X add-ons. In Japan, the Mega Drive had never been successful and the Saturn was more successful than Sony's PlayStation, so Sega Enterprises CEO Hayao Nakayama decided to focus on the Saturn. By 1999, the Nomad was being sold at less than a third of its original price. Game Boy Pocket The Game Boy Pocket is a redesigned version of the original Game Boy having the same features. It was released in 1996. Notably, this variation is smaller and lighter. It comes in seven different colors; red, yellow, green, black, clear, silver, blue, and pink. 
It has space for two AAA batteries, which provide approximately 10 hours of game play. The screen was changed to a true black-and-white display, rather than the "pea soup" monochromatic display of the original Game Boy. Although, like its predecessor, the Game Boy Pocket has no backlight to allow play in a darkened area, it did notably improve visibility and pixel response-time (mostly eliminating ghosting). The first model of the Game Boy Pocket did not have an LED to show battery levels, but the feature was added due to public demand. The Game Boy Pocket was not a new software platform and played the same software as the original Game Boy model. Game.com The Game.com (pronounced in TV commercials as "game com", not "game dot com", and not capitalized in marketing material) is a handheld game console released by Tiger Electronics in September 1997. It featured many new ideas for handheld consoles and was aimed at an older target audience, sporting PDA-style features and functions such as a touch screen and stylus. However, Tiger hoped it would also challenge Nintendo's Game Boy and gain a following among younger gamers too. Unlike other handheld game consoles, the first game.com consoles included two slots for game cartridges, which would not happen again until the Tapwave Zodiac, the DS and DS Lite, and could be connected to a 14.4 kbit/s modem. Later models had only a single cartridge slot. Game Boy Color The Game Boy Color (also referred to as GBC or CGB) is Nintendo's successor to the Game Boy and was released on October 21, 1998, in Japan and in November of the same year in the United States. It features a color screen, and is slightly bigger than the Game Boy Pocket. The processor is twice as fast as a Game Boy's and has twice as much memory. It also had an infrared communications port for wireless linking which did not appear in later versions of the Game Boy, such as the Game Boy Advance. The Game Boy Color was a response to pressure from game developers for a new system, as they felt that the Game Boy, even in its latest incarnation, the Game Boy Pocket, was insufficient. The resulting product was backward compatible, a first for a handheld console system, and leveraged the large library of games and great installed base of the predecessor system. This became a major feature of the Game Boy line, since it allowed each new launch to begin with a significantly larger library than any of its competitors. As of March 31, 2005, the Game Boy and Game Boy Color combined to sell 118.69 million units worldwide. The console is capable of displaying up to 56 different colors simultaneously on screen from its palette of 32,768, and can add basic four-color shading to games that had been developed for the original Game Boy. It can also give the sprites and backgrounds separate colors, for a total of more than four colors. Neo Geo Pocket Color The Neo Geo Pocket Color (or NGPC) was released in 1999 in Japan, and later that year in the United States and Europe. It is a 16-bit color handheld game console designed by SNK, the maker of the Neo Geo home console and arcade machine. It came after SNK's original Neo Geo Pocket monochrome handheld, which debuted in 1998 in Japan. In 2000 following SNK's purchase by Japanese Pachinko manufacturer Aruze, the Neo Geo Pocket Color was dropped from both the US and European markets, purportedly due to commercial failure. The system seemed well on its way to being a success in the U.S. 
It was more successful than any Game Boy competitor since Sega's Game Gear, but was hurt by several factors, such as SNK's infamous lack of communication with third-party developers and anticipation of the Game Boy Advance. The decision to ship U.S. games in cardboard boxes as a cost-cutting move, rather than in the hard plastic cases that Japanese and European releases were shipped in, may have also hurt US sales. Wonderswan Color The WonderSwan Color is a handheld game console designed by Bandai. It was released on December 9, 2000, in Japan. Although the WonderSwan Color was slightly larger and heavier (by 7 mm and 2 g) than the original WonderSwan, the color version featured 512 kB of RAM and a larger color LCD screen. In addition, the WonderSwan Color is compatible with the original WonderSwan library of games. Prior to the WonderSwan's release, Nintendo had a virtual monopoly on the Japanese handheld video game market. After the release of the WonderSwan Color, Bandai took approximately 8% of the market share in Japan, partly due to its low price of 6,800 yen (approximately US$65). Another reason for the WonderSwan's success in Japan was the fact that Bandai managed to get a deal with Square to port over the original Famicom Final Fantasy games with improved graphics and controls. However, with the popularity of the Game Boy Advance and the reconciliation between Square and Nintendo, the WonderSwan Color and its successor, the SwanCrystal, quickly lost their competitive advantage. Early 2000s The 2000s saw a major leap in innovation, particularly in the second half with the release of the DS and PSP. Game Boy Advance In 2001, Nintendo released the Game Boy Advance (GBA or AGB), which added two shoulder buttons, a larger screen, and more computing power than the Game Boy Color. The design was revised two years later when the Game Boy Advance SP (GBA SP), a more compact version, was released. The SP features a "clamshell" design (folding open and closed, like a laptop computer), as well as a frontlit color display and a rechargeable battery. Despite the smaller form factor, the screen remained the same size as that of the original. In 2005, the Game Boy Micro was released. This revision sacrifices screen size and backwards compatibility with previous Game Boys for a dramatic reduction in total size and a brighter backlit screen. A new SP model with a backlit screen was released in some regions around the same time. Along with the Nintendo GameCube, the GBA also introduced the concept of "connectivity": using a handheld system as a console controller. A handful of games use this feature, most notably Animal Crossing, Pac-Man Vs., Final Fantasy Crystal Chronicles, The Legend of Zelda: Four Swords Adventures, The Legend of Zelda: The Wind Waker, Metroid Prime, and Sonic Adventure 2: Battle. As of December 31, 2007, the GBA, GBA SP, and Game Boy Micro combined had sold 80.72 million units worldwide. Game Park 32 The original GP32 was released in 2001 by the South Korean company Game Park a few months after the launch of the Game Boy Advance. It featured a 32-bit, 133 MHz CPU, an MP3 and DivX player, and an e-book reader. SmartMedia cards were used for storage and could hold up to 128 MB of anything downloaded through a USB cable from a PC. The GP32 was redesigned in 2003: a front-lit screen was added and the new version was called the GP32 FLU (Front Light Unit). In summer 2004, another redesign, the GP32 BLU, added a backlit screen. 
This version of the handheld was planned for release outside South Korea; in Europe, and it was released for example in Spain (VirginPlay was the distributor). While not a commercial success on a level with mainstream handhelds (only 30,000 units were sold), it ended up being used mainly as a platform for user-made applications and emulators of other systems, being popular with developers and more technically adept users. N-Gage Nokia released the N-Gage in 2003. It was designed as a combination MP3 player, cellphone, PDA, radio, and gaming device. The system received much criticism alleging defects in its physical design and layout, including its vertically oriented screen and requirement of removing the battery to change game cartridges. The most well known of these was "sidetalking", or the act of placing the phone speaker and receiver on an edge of the device instead of one of the flat sides, causing the user to appear as if they are speaking into a taco. The N-Gage QD was later released to address the design flaws of the original. However, certain features available in the original N-Gage, including MP3 playback, FM radio reception, and USB connectivity were removed. Second generation of N-Gage launched on April 3, 2008 in the form of a service for selected Nokia Smartphones. Cybiko The Cybiko is a Russian hand-held computer introduced in May 2000 by David Yang's company and designed for teenage audiences, featuring its own two-way radio text messaging system. It has over 430 "official" freeware games and applications. Because of the text messaging system, it features a QWERTY keyboard that was used with a stylus. An MP3 player add-on was made for the unit as well as a SmartMedia card reader. The company stopped manufacturing the units after two product versions and only a few years on the market. Cybikos can communicate with each other up to a maximum range of 300 metres (0.19 miles). Several Cybikos can chat with each other in a wireless chatroom. Cybiko Classic: There were two models of the Classic Cybiko. Visually, the only difference was that the original version had a power switch on the side, whilst the updated version used the "escape" key for power management. Internally, the differences between the two models were in the internal memory, and the location of the firmware. Cybiko Xtreme: The Cybiko Xtreme was the second-generation Cybiko handheld. It featured various improvements over the original Cybiko, such as a faster processor, more RAM, more ROM, a new operating system, a new keyboard layout and case design, greater wireless range, a microphone, improved audio output, and smaller size. Tapwave Zodiac In 2003, Tapwave released the Zodiac. It was designed to be a PDA-handheld game console hybrid. It supported photos, movies, music, Internet, and documents. The Zodiac used a special version Palm OS 5, 5.2T, that supported the special gaming buttons and graphics chip. Two versions were available, Zodiac 1 and 2, differing in memory and looks. The Zodiac line ended in July 2005 when Tapwave declared bankruptcy. Mid 2000s Nintendo DS The Nintendo DS was released in November 2004. Among its new features were the incorporation of two screens, a touchscreen, wireless connectivity, and a microphone port. As with the Game Boy Advance SP, the DS features a clamshell design, with the two screens aligned vertically on either side of the hinge. 
The DS's lower screen is touch sensitive, designed to be pressed with a stylus, a user's finger or a special "thumb pad" (a small plastic pad attached to the console's wrist strap, which can be affixed to the thumb to simulate an analog stick). More traditional controls include four face buttons, two shoulder buttons, a D-pad, and "Start" and "Select" buttons. The console also features online capabilities via the Nintendo Wi-Fi Connection and ad-hoc wireless networking for multiplayer games with up to sixteen players. It is backwards-compatible with all Game Boy Advance games, but like the Game Boy Micro, it is not compatible with games designed for the Game Boy or Game Boy Color. In January 2006, Nintendo revealed an updated version of the DS: the Nintendo DS Lite (released on March 2, 2006, in Japan) with an updated, smaller form factor (42% smaller and 21% lighter than the original Nintendo DS), a cleaner design, longer battery life, and brighter, higher-quality displays, with adjustable brightness. It is also able to connect wirelessly with Nintendo's Wii console. On October 2, 2008, Nintendo announced the Nintendo DSi, with larger, 3.25-inch screens and two integrated cameras. It has an SD card storage slot in place of the Game Boy Advance slot, plus internal flash memory for storing downloaded games. It was released on November 1, 2008, in Japan, April 2, 2009 in Australia, April 3, 2009 in Europe, and April 5, 2009 in North America. On October 29, 2009, Nintendo announced a larger version of the DSi, called the DSi XL, which was released on November 21, 2009 in Japan, March 5, 2010 in Europe, March 28, 2010 in North America, and April 15, 2010 in Australia. As of December 31, 2009, the
became the world's first handheld electronic games. The project began when Michael Katz, Mattel's new product category marketing director, told the engineers in the electronics group to design a game the size of a calculator, using LED (light-emitting diode) technology. "Our big success was something that I conceptualized—the first handheld game. I asked the design group to see if they could come up with a game that was electronic that was the same size as a calculator." —Michael Katz, former marketing director, Mattel Toys. The result was the 1976 release of Auto Race, followed by Football in 1977; the two games were so successful that, according to Katz, "these simple electronic handheld games turned into a '$400 million category.'" Mattel would later win the honor of being recognized by the industry for innovation in handheld game device displays. Soon, other manufacturers including Coleco, Parker Brothers, Milton Bradley, Entex, and Bandai began following up with their own tabletop and handheld electronic games. In 1979 the LCD-based Microvision, designed by Smith Engineering and distributed by Milton Bradley, became the first handheld game console and the first to use interchangeable game cartridges. The Microvision game Cosmic Hunter (1981) also introduced the concept of a directional pad on handheld gaming devices; it is operated by using the thumb to manipulate the on-screen character in any of four directions. In 1979, Gunpei Yokoi, traveling on a bullet train, saw a bored businessman playing with an LCD calculator by pressing the buttons. Yokoi then thought of an idea for a watch that doubled as a miniature game machine for killing time. Starting in 1980, Nintendo began to release a series of electronic games designed by Yokoi called the Game & Watch games. Taking advantage of the technology used in the credit-card-sized calculators that had appeared on the market, Yokoi designed the series of LCD-based games to include a digital time display in the corner of the screen. For later, more complicated Game & Watch games, Yokoi invented a cross-shaped directional pad or "D-pad" for control of on-screen characters. Yokoi also included his directional pad on the NES controllers, and the cross-shaped thumb controller soon became standard on game console controllers and has been ubiquitous across the video game industry since. When Yokoi began designing Nintendo's first handheld game console, he came up with a device that married the elements of his Game & Watch devices and the Famicom console, including both items' D-pad controller. The result was the Nintendo Game Boy. In 1982, the Bandai LCD Solarpower was the first solar-powered gaming device. Some of its games, such as the horror-themed game Terror House, featured two LCD panels, one stacked on the other, for an early 3D effect. In 1983, Takara Tomy's Tomytronic 3D simulated 3D by having two LCD panels that were lit by external light through a window on top of the device, making it the first dedicated home video 3D hardware. Beginnings The late 1980s and early 1990s saw the beginnings of the modern-day handheld game console industry, after the demise of the Microvision. As backlit LCD game consoles with color graphics consume a lot of power, they were not battery-friendly like the non-backlit original Game Boy, whose monochrome graphics allowed longer battery life. 
By this point, rechargeable battery technology had not yet matured, and so the more advanced game consoles of the time, such as the Sega Game Gear and Atari Lynx, did not have nearly as much success as the Game Boy. Even though third-party rechargeable batteries were available for the battery-hungry alternatives to the Game Boy, these batteries employed a nickel-cadmium process and had to be completely discharged before being recharged to ensure maximum efficiency. Lead-acid batteries could be used with automobile circuit limiters (cigarette lighter plug devices), but such batteries offered mediocre portability. The later NiMH batteries, which do not share this requirement for maximum efficiency, were not released until the late 1990s, years after the Game Gear, Atari Lynx, and original Game Boy had been discontinued. During the time when technologically superior handhelds had strict technical limitations, batteries had a very low mAh rating, since batteries with high energy density were not yet available. Modern game systems such as the Nintendo DS and PlayStation Portable have rechargeable lithium-ion batteries with proprietary shapes. Other seventh-generation consoles such as the GP2X use standard alkaline batteries. Because the mAh rating of alkaline batteries has increased since the 1990s, the power needed for handhelds like the GP2X may be supplied by relatively few batteries. Game Boy Nintendo released the Game Boy on April 21, 1989 (September 1990 for the UK). The design team headed by Gunpei Yokoi had also been responsible for the Game & Watch system, as well as the Nintendo Entertainment System games Metroid and Kid Icarus. The Game Boy came under scrutiny from Nintendo president Hiroshi Yamauchi, who said that the monochrome screen was too small and the processing power inadequate. The design team had felt that low initial cost and battery economy were more important concerns, and when compared to the Microvision, the Game Boy was a huge leap forward. Yokoi recognized that the Game Boy needed a killer app—at least one game that would define the console and persuade customers to buy it. In June 1988, Minoru Arakawa, then-CEO of Nintendo of America, saw a demonstration of the game Tetris at a trade show. Nintendo purchased the rights for the game and packaged it with the Game Boy system as a launch title. It was almost an immediate hit. By the end of the year more than a million units were sold in the US. As of March 31, 2005, the Game Boy and Game Boy Color had sold a combined total of over 118 million units worldwide. Atari Lynx In 1987, Epyx created the Handy Game, a device that would become the Atari Lynx in 1989. It is the first color handheld console ever made, as well as the first with a backlit screen. It also features networking support with up to 17 other players, and advanced hardware that allows the zooming and scaling of sprites. The Lynx can also be turned upside down to accommodate left-handed players. However, all these features came at a very high price point, which drove consumers to seek cheaper alternatives. The Lynx is also very unwieldy, consumes batteries very quickly, and lacked the third-party support enjoyed by its competitors. Due to its high price, short battery life, production shortages, a dearth of compelling games, and Nintendo's aggressive marketing campaign, and despite a redesign in 1991, the Lynx became a commercial failure. 
Despite this, companies like Telegames helped to keep the system alive long past its commercial relevance, and when new owner Hasbro released the rights to develop for the public domain, independent developers like Songbird have managed to release new commercial games for the system every year until 2004's Winter Games. TurboExpress The TurboExpress is a portable version of the TurboGrafx, released in 1990 for $249.99. Its Japanese equivalent is the PC Engine GT. It is the most advanced handheld of its time and can play all the TurboGrafx-16's games (which are on a small, credit-card sized media called HuCards). It has a 66 mm (2.6 in.) screen, the same as the original Game Boy, but in a much higher resolution, and can display 64 sprites at once, 16 per scanline, in 512 colors. Although the hardware can only handle 481 simultaneous colors. It has 8 kilobytes of RAM. The Turbo runs the HuC6820 CPU at 1.79 or 7.16 MHz. The optional "TurboVision" TV tuner includes RCA audio/video input, allowing users to use TurboExpress as a video monitor. The "TurboLink" allowed two-player play. Falcon, a flight simulator, included a "head-to-head" dogfight mode that can only be accessed via TurboLink. However, very few TG-16 games offered co-op play modes especially designed with the TurboExpress in mind. Bitcorp Gamate The Bitcorp Gamate is the one of the first handheld game systems created in response to the Nintendo Game Boy. It was released in Asia in 1990 and distributed worldwide by 1991. Like the Sega Game Gear, it was horizontal in orientation and like the Game Boy, required 4 AA batteries. Unlike many later Game Boy clones, its internal components were professionally assembled (no "glop-top" chips). Unfortunately the system's fatal flaw is its screen. Even by the standards of the day, its screen is rather difficult to use, suffering from similar ghosting problems that were common complaints with the first generation Game Boys. Likely because of this fact sales were quite poor, and Bitcorp closed by 1992. However, new games continued to be published for the Asian market, possibly as late as 1994. The total number of games released for the system remains unknown. Gamate games were designed for stereo sound, but the console is only equipped with a mono speaker. Sega Game Gear The Game Gear is the third color handheld console, after the Lynx and the TurboExpress; produced by Sega. Released in Japan in 1990 and in North America and Europe in 1991, it is based on the Master System, which gave Sega the ability to quickly create Game Gear games from its large library of games for the Master System. While never reaching the level of success enjoyed by Nintendo, the Game Gear proved to be a fairly durable competitor, lasting longer than any other Game Boy rivals. While the Game Gear is most frequently seen in black or navy blue, it was also released in a variety of additional colors: red, light blue, yellow, clear, and violet. All of these variations were released in small quantities and frequently only in the Asian market. Following Sega's success with the Game Gear, they began development on a successor during the early 1990s, which was intended to feature a touchscreen interface, many years before the Nintendo DS. However, such a technology was very expensive at the time, and the handheld itself was estimated to have cost around $289 were it to be released. Sega eventually chose to shelve the idea and instead release the Genesis Nomad, a handheld version of the Genesis, as the successor. 
Watara Supervision The Watara Supervision was released in 1992 in an attempt to compete with the Nintendo Game Boy. The first model was designed very much like a Game Boy, but it is grey in color and has a slightly larger screen. The second model was made with a hinge across the center and can be bent slightly to provide greater comfort for the user. While the system did enjoy a modest degree of success, it never impacted the sales of Nintendo or Sega. The Supervision was redesigned a final time as "The Magnum". Released in limited quantities it was roughly equivalent to the Game Boy Pocket. It was available in three colors: yellow, green and grey. Watara designed many of the games themselves, but did receive some third party support, most notably from Sachen. A TV adapter was available in both PAL and NTSC formats that could transfer the Supervision's black-and-white palette to 4 colors, similar in some regards to the Super Game Boy from Nintendo. Hartung Game Master The Hartung Game Master is an obscure handheld released at an unknown point in the early 1990s. Its graphics fidelity was much lower than most of its contemporaries, displaying just 64x64 pixels. It was available in black, white, and purple, and was frequently rebranded by its distributors, such as Delplay, Videojet and Systema. The exact number of games released is not known, but is likely around 20. The system most frequently turns up in Europe and Australia. Late 1990s By this time, the lack of significant development in Nintendo's product line began allowing more advanced systems such as the Neo Geo Pocket Color and the WonderSwan Color to be developed. Sega Nomad The Nomad was released in October 1995 in North America only. The release was five years into the market span of the Genesis, with an existing library of more than 500 Genesis games. According to former Sega of America research and development head Joe Miller, the Nomad was not intended to be the Game Gear's replacement; he believed that there was little planning from Sega of Japan for the new handheld. Sega was supporting five different consoles: Saturn, Genesis, Game Gear, Pico, and the Master System, as well as the Sega CD and 32X add-ons. In Japan, the Mega Drive had never been successful and the Saturn was more successful than Sony's PlayStation, so Sega Enterprises CEO Hayao Nakayama decided to focus on the Saturn. By 1999, the Nomad was being sold at less than a third of its original price. Game Boy Pocket The Game Boy Pocket is a redesigned version of the original Game Boy having the same features. It was released in 1996. Notably, this variation is smaller and lighter. It comes in seven different colors; red, yellow, green, black, clear, silver, blue, and pink. It has space for two AAA batteries, which provide approximately 10 hours of game play. The screen was changed to a true black-and-white display, rather than the "pea soup" monochromatic display of the original Game Boy. Although, like its predecessor, the Game Boy Pocket has no backlight to allow play in a darkened area, it did notably improve visibility and pixel response-time (mostly eliminating ghosting). The first model of the Game Boy Pocket did not have an LED to show battery levels, but the feature was added due to public demand. The Game Boy Pocket was not a new software platform and played the same software as the original Game Boy model. 
Game.com The Game.com (pronounced in TV commercials as "game com", not "game dot com", and not capitalized in marketing material) is a handheld game console released by Tiger Electronics in September 1997. It featured many new ideas for handheld consoles and was aimed at an older target audience, sporting PDA-style features and functions such as a touch screen and stylus. However, Tiger hoped it would also challenge Nintendo's Game Boy and gain a following among younger gamers too. Unlike other handheld game consoles, the first game.com consoles included two slots for game cartridges, which would not happen again until the Tapwave Zodiac, the DS and DS Lite, and could be connected to a 14.4 kbit/s modem. Later models had only a single cartridge slot. Game Boy Color The Game Boy Color (also referred to as GBC or CGB) is Nintendo's successor to the Game Boy and was released on October 21, 1998, in Japan and in November of the same year in the United States. It features a color screen, and is slightly bigger than the Game Boy Pocket. The processor is twice as fast as a Game Boy's and has twice as much memory. It also had an infrared communications port for wireless linking which did not appear in later versions of the Game Boy, such as the Game Boy Advance. The Game Boy Color was a response to pressure from game developers for a new system, as they felt that the Game Boy, even in its latest incarnation, the Game Boy Pocket, was insufficient. The resulting product was backward compatible, a first for a handheld console system, and leveraged the large library of games and great installed base of the predecessor system. This became a major feature of the Game Boy
Otto von Bismarck. The latter was so pleased with Abeken's work that officials started to call Abeken "the quill [i.e., the scribe] of Bismarck." Abeken married again in 1866; his second wife was Hedwig von Olfers, daughter of the general director of the royal museums, Privy Councilor von Olfers. He was much employed by Bismarck in the writing of official despatches, and stood high in the favour of King William, whom he often accompanied on his journeys as representative of the foreign office. He was present with the king during the campaigns of 1866 and 1870–71. In 1851 he anonymously published Babylon und Jerusalem, a scathing criticism of the views of the Countess von Hahn-Hahn. During the war against Austria in 1866, as well as during the war against France in 1870–71, Abeken stayed at the Prussian headquarters, and a major part of the dispatches of the time was written by him. His health, unfortunately, was damaged by the exertions of these travels, and he died after an illness of several months. Emperor Wilhelm I described Abeken in a condolence letter to his widow as "one of my most reliable advisors, standing at my side in the most decisive moments; his loss is irreplaceable to me; in him his fatherland has lost one of the most noble and most loyal men and officials." Despite his engagement in politics, Abeken never lost his interest in theology and continued to publish and speak in this field throughout his life. He was interested
German-English missionary bishopric in Jerusalem. In the same year, he was sent by Frederick William IV of Prussia to Egypt and Ethiopia, where he joined an expedition led by professor Karl Richard Lepsius. In 1845 and 1846 he returned via Jerusalem and Rome to Germany. He became Legation Councillor in Berlin, later Council Referee at the Ministry of Foreign Affairs. In 1848 he received an appointment in the Prussian ministry for foreign affairs, and in 1853 was promoted to be privy councillor of legation (Geheimer Legationsrath). Abeken remained in charge for more than twenty years of Prussian politics, assisting Otto Theodor Freiherr von Manteuffel and Chancellor Otto von Bismarck. The latter was so much pleased with Abeken's work that officials started to call Abeken "the quill [i.e., the scribe] of Bismarck." Abeken married again in 1866; his second wife was Hedwig von Olfers, daughter of the general director of the royal museums, Privy Councilor von Olfers. He was much employed by Bismarck in the writing of official despatches, and stood high in the favour of King William, whom he often accompanied on his journeys as representative of the foreign office. He was present with the king during the campaigns of 1866 and 1870–71. In 1851 he published anonymously Babylon und Jerusalem, a scathing criticism of the views of the Countess
as the vast majority of the electorate lived in the town and its vicinity, whereas there was a much lower number of electors in the neighbouring Aberdare Valley. During the 1850s and 1860s, however, the population of Aberdare grew rapidly, and the franchise changes in 1867 gave the vote to large numbers of miners in that valley. Amongst these new electors, Bruce remained unpopular as a result of his actions during the 1857–58 dispute. Initially, it appeared that the Aberdare iron master, Richard Fothergill, would be elected to the second seat alongside Bruce. However, the appearance of a third Liberal candidate, Henry Richard, a nonconformist radical popular in both Merthyr and Aberdare, left Bruce on the defensive and he was ultimately defeated, finishing in third place behind both Richard and Fothergill. Later political career After losing his seat, Bruce was elected for Renfrewshire on 25 January 1869, and was made Home Secretary by William Ewart Gladstone. His tenure of this office was conspicuous for a reform of the licensing laws, and he was responsible for the Licensing Act 1872, which made the magistrates the licensing authority, increased the penalties for misconduct in public houses and shortened the number of hours for the sale of drink. In 1873 Bruce relinquished the home secretaryship, at Gladstone's request, to become Lord President of the Council, and was elevated to the peerage as Baron Aberdare, of Duffryn in the County of Glamorgan, on 23 August that year. Being a Gladstonian Liberal, Aberdare had hoped for a much more radical proposal to keep existing licensees for a further ten years, and to prevent any new applicants. Its unpopularity pricked his nonconformist conscience, since, like Gladstone himself, he had a strong leaning towards temperance. He had already pursued the 'moral improvement' of miners in regulations attempting to further restrict boys from working in the pits. The Trades Union Act 1871 introduced a more liberal regime, giving further rights to unions and protection from malicious prosecutions. The defeat of the Liberal government in the following year terminated Lord Aberdare's official political life, and he subsequently devoted himself to social, educational and economic questions. Education became one of Lord Aberdare's main interests in later life. His interest had been shown by the speech on Welsh education which he had made on 5 May 1862. In 1880, he was appointed to chair the Departmental Committee on Intermediate and Higher Education in Wales and Monmouthshire, whose report ultimately led to the Welsh Intermediate Education Act of 1889. The report also stimulated the campaign for the provision of university education in Wales. In 1883, Lord Aberdare was elected the first president of the University College of South Wales and Monmouthshire. In his inaugural address he declared that the framework of Welsh education would not be complete until there was a University of Wales. The University was eventually founded in 1893 and Aberdare became its first chancellor. In 1876 he was elected a Fellow of the Royal Society; from 1878 to 1891 he was president of the Royal Historical Society, and in 1881 he became president of both the Royal Geographical Society and the Girls' Day School Trust. 
In 1888 he headed the commission that established the Official Table of Drops, listing how far a person of a particular weight should be dropped when hanged for a capital offence (the only method of 'judicial execution' in the United Kingdom at that time), to ensure an instant and painless death by cleanly breaking the neck between the 2nd and 3rd vertebrae, an 'exacting science' eventually brought to perfection by Chief Executioner Albert Pierrepoint. Prisoners' health, clothing and discipline were a particular concern even at the end of his career. In the Lords he spoke at some length to the Home Affairs Committee chaired by Arthur Balfour about the prison rules system. Aberdare had always expressed concern about the intemperance of the working classes; in 1878, urging greater vigilance against the vice of excessive drinking, he took evidence on the drinking habits of miners and railway colliers. The committee tried to establish special legislation based on a link between Sunday opening and absenteeism established in 1868. Aberdare had been interested in the plight of working
England and Wales in 1864, when he was moved to be Vice-President of the Council of Education. 1868 general election At the 1868 general election, Merthyr Tydfil became a two-member constituency with a much-increased electorate as a result of the Second Reform Act of 1867. Since the formation of the constituency, Merthyr Tydfil had dominated representation as the vast majority of the electorate lived in the town and its vicinity, whereas there was a much lower number of electors in the neighbouring Aberdare Valley. During the 1850s and 1860s, however, the population of Aberdare grew rapidly, and the franchise changes in 1867 gave the vote to large numbers of miners in that valley. Amongst these new electors, Bruce remained unpopular as a result of his actions during the 1857–58 dispute. Initially, it appeared that the Aberdare iron master, Richard Fothergill, would be elected to the second seat alongside Bruce. However, the appearance of a third Liberal candidate, Henry Richard, a nonconformist radical popular in both Merthyr and Aberdare, left Bruce on the defensive and he was ultimately defeated, finishing in third place behind both Richard and Fothergill. Later political career After losing his seat, Bruce was elected for Renfrewshire on 25 January 1869, he was made Home Secretary by William Ewart Gladstone. His tenure of this office was conspicuous for a reform of the licensing laws, and he was responsible for the Licensing Act 1872, which made the magistrates the licensing authority, increased the penalties for misconduct in public-houses and shortened the number of hours for the sale of drink. In 1873 Bruce relinquished the home secretaryship, at Gladstone's request, to become Lord President of the Council, and was elevated to the peerage as Baron Aberdare, of Duffryn in the County of Glamorgan, on 23 August that year. Being a Gladstonian Liberal, Aberdare had hoped for a much more radical proposal to keep existing licensee holders for a further ten years, and to prevent any new applicants. Its unpopularity pricked his nonconformist's conscience, when like Gladstone himself he had a strong leaning towards Temperance. He had already pursued 'moral improvement' on miners in the regulations attempting to further ban boys from the pits. The Trades Union Act 1871 was another more liberal regime giving further rights to unions, and protection from malicious prosecutions. The defeat of the Liberal government in the following year terminated Lord Aberdare's official political life, and he subsequently devoted himself to social, educational and economic questions. Education became one of Lord Aberdare's main interests in later life. His interest had been shown by the speech on Welsh education which he had made on 5 May 1862. In 1880, he was appointed to chair the Departmental Committee on Intermediate and Higher Education in Wales and Monmouthshire, whose report ultimately led to the Welsh Intermediate Education Act of 1889. The report also stimulated the campaign for the provision of university education in Wales. In 1883, Lord Aberdare was elected the first president of the University College of South Wales and Monmouthshire. In his inaugural address he declared that the framework of Welsh education would not be complete until there was a University of Wales. The University was eventually founded in 1893 and Aberdare became its first chancellor. In 1876 he was elected a Fellow of the Royal Society; from 1878 to 1891 he was president of the Royal Historical Society. 
and in 1881 he became president of both the Royal Geographical Society and the Girls' Day School Trust. In 1888 he headed the commission that established the Official Table of Drops, listing how far a person of a particular weight
dock landing ship, a ship class in the United States Navy USS Harpers Ferry (LSD-49), a Harpers Ferry class dock landing ship of the United States Navy, commissioned in 1995 Harpers Ferry (nightclub), a music venue and nightclub in Boston Harper's Ferry flintlock pistol See also Harpur's Ferry, A
Civil War that took place around what is now Harpers Ferry, West Virginia Harpers Ferry may also refer to: Harpers Ferry class dock landing ship, a ship class in the United States Navy USS Harpers Ferry (LSD-49), a Harpers Ferry class dock landing ship of the United States Navy, commissioned in 1995
their environment limits the availability of oxygen for respiration. Their cellular machinery is adapted to high salt concentrations by having charged amino acids on their surfaces, allowing the retention of water molecules around these components. They are heterotrophs that normally respire by aerobic means. Most halophiles are unable to survive outside their high-salt native environments. Many halophiles are so fragile that when they are placed in distilled water, they immediately lyse from the change in osmotic conditions. Halophiles use a variety of energy sources and can be aerobic or anaerobic; anaerobic halophiles include phototrophic, fermentative, sulfate-reducing, homoacetogenic, and methanogenic species. The Haloarchaea, and particularly the family Halobacteriaceae, are members of the domain Archaea, and comprise the majority of the prokaryotic population in hypersaline environments. Currently, 15 recognised genera are in the family. The domain Bacteria (mainly Salinibacter ruber) can comprise up to 25% of the prokaryotic community, but is more commonly a much lower percentage of the overall population. At times, the alga Dunaliella salina can also proliferate in this environment. A comparatively wide range of taxa has been isolated from saltern crystalliser ponds, including members of these genera: Haloferax, Halogeometricum, Halococcus, Haloterrigena, Halorubrum, Haloarcula, and Halobacterium. However, the viable counts in these cultivation studies have been small when compared to total counts, and the numerical significance of these isolates has been unclear. Only recently has it become possible to determine the identities and relative abundances of organisms in natural populations, typically using PCR-based strategies that target 16S small subunit ribosomal ribonucleic acid (16S rRNA) genes. While comparatively few studies of this type have been performed, their results suggest that some of the most readily isolated and studied genera may not in fact be significant in the in situ community. This is seen in cases such as the genus Haloarcula, which is estimated to make up less than 0.1% of the in situ community, but commonly appears in isolation studies. Genomic and proteomic signature Comparative genomic and proteomic analysis has shown that distinct molecular signatures exist for the environmental adaptation of halophiles. At the protein level, halophilic species are characterized by low hydrophobicity, an overrepresentation of acidic residues, an underrepresentation of cysteine (Cys), lower propensities for helix formation, and higher propensities for coil structure. The cores of these proteins are less hydrophobic; DHFR, for example, was found to have narrower β-strands. At the DNA level, halophiles exhibit distinct dinucleotide and codon usage. Examples Halobacteriaceae is a family that includes a large proportion of halophilic archaea. The genus Halobacterium under it has a high tolerance for elevated levels of salinity. Some species of halobacteria have acidic proteins that resist the denaturing effects of salts. Halococcus is another genus of the family Halobacteriaceae. Some hypersaline lakes are a habitat for numerous families of halophiles. For example, the Makgadikgadi Pans in Botswana form a vast, seasonal, high-salinity water body that supports halophilic species within the diatom genus Nitzschia in the family Bacillariaceae, as well as species within the genus Lovenula in the family Diaptomidae. 
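To make the proteomic signature described above concrete, the following short Python sketch (purely illustrative; the sequence and the helper name residue_bias are hypothetical and not taken from any cited study) computes the fraction of acidic (Asp/Glu) and basic (Lys/Arg/His) residues in a protein sequence, the kind of crude acidity bias that distinguishes many halophilic proteins from their non-halophilic counterparts.

# Illustrative sketch: estimate the acidic-residue bias typical of halophilic proteins.
# The example sequence below is hypothetical, not a real halophilic protein.
ACIDIC = set("DE")   # aspartate, glutamate
BASIC = set("KRH")   # lysine, arginine, histidine

def residue_bias(seq):
    """Return acidic and basic residue fractions for a one-letter protein sequence."""
    seq = seq.upper()
    n = len(seq)
    acidic = sum(aa in ACIDIC for aa in seq)
    basic = sum(aa in BASIC for aa in seq)
    return {"acidic_fraction": acidic / n,
            "basic_fraction": basic / n,
            "net_acidic_bias": (acidic - basic) / n}

toy_sequence = "MDEEADKLEEGDDAEVEEKAEDDGLDDEE"   # hypothetical, acid-rich example
print(residue_bias(toy_sequence))

A strongly positive net acidic bias is consistent with the salt-adapted protein surfaces described above, though real analyses compare whole proteomes rather than single sequences.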
Owens Lake in California also contains a large population of the halophilic bacterium Halobacterium halobium. Wallemia ichthyophaga is a basidiomycetous fungus, which requires at least 1.5 M sodium chloride for in vitro growth, and it thrives even in media saturated with salt. Obligate requirement for salt is an exception in fungi. Even species that can tolerate salt
that of the ocean, such as the Great Salt Lake in Utah, Owens Lake in California, the Urmia Lake in Iran, the Dead Sea, and in evaporation ponds. They are theorized to be a possible candidate for extremophiles living in the salty subsurface water ocean of Jupiter's Europa and other similar moons. Classification Halophiles are categorized by the extent of their halotolerance: slight, moderate, or extreme. Slight halophiles prefer 0.3 to 0.8 M (1.7 to 4.8%—seawater is 0.6 M or 3.5%), moderate halophiles 0.8 to 3.4 M (4.7 to 20%), and extreme halophiles 3.4 to 5.1 M (20 to 30%) salt content. Halophiles require sodium chloride (salt) for growth, in contrast to halotolerant organisms, which do not require salt but can grow under saline conditions. Lifestyle High salinity represents an extreme environment in which relatively few organisms have been able to adapt and survive. Most halophilic and all halotolerant organisms expend energy to exclude salt from their cytoplasm to avoid protein aggregation ('salting out'). To survive the high salinities, halophiles employ two differing strategies to prevent desiccation through osmotic movement of water out of their cytoplasm. Both strategies work by increasing the internal osmolarity of the cell. The first strategy is employed by some archaea, the majority of halophilic bacteria, yeasts, algae, and fungi; the organism accumulates organic compounds in the cytoplasm—osmoprotectants which are known as compatible solutes. These can be either synthesised or accumulated from the environment. The most common compatible solutes are neutral or zwitterionic, and include amino acids, sugars, polyols, betaines, and ectoines, as well as derivatives of some of these compounds. The second, more radical adaptation involves selectively absorbing potassium (K+) ions into the cytoplasm. This adaptation is restricted to the extremely halophilic archaeal family Halobacteriaceae, the moderately halophilic bacterial order Halanaerobiales, and the extremely halophilic bacterium Salinibacter ruber. The presence of this adaptation in three distinct evolutionary lineages suggests convergent evolution of this strategy, it being unlikely to be an ancient characteristic retained in only scattered groups or passed on through massive lateral gene transfer. The primary reason for this is the entire intracellular machinery (enzymes, structural proteins, etc.) must be adapted to high salt levels, whereas in the compatible solute adaptation, little or no adjustment is required to intracellular macromolecules; in fact, the compatible solutes often act as more general stress protectants, as well as just osmoprotectants. Of particular note are the extreme halophiles or haloarchaea (often known as halobacteria), a group of archaea, which require at least a 2 M salt concentration and are usually found in saturated